plotCoef.enetLTS {enetLTS}          R Documentation

Coefficient plots from the "enetLTS" object

Description

Produces a coefficient plot for the current model fit.

Usage

plotCoef.enetLTS(object, vers = c("reweighted", "raw"), colors = NULL, ...)

Arguments

object

the fitted model object of class "enetLTS" to be plotted.

vers

a character string denoting which model to use for the plots. Possible values are "reweighted" (the default) for plots from the reweighted fit, and "raw" for plots from the raw fit.

colors

an optional list with components named bars, errorbars, background, abline, scores, cutoffs, badouts and modouts, each a character string giving a color.
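
For example, a minimal sketch of such a list (the color values here are arbitrary choices; the component names are those listed above):

myCols <- list(bars = "steelblue", errorbars = "darkred",
               background = "grey95", abline = "grey40",
               scores = "black", cutoffs = "red",
               badouts = "orange", modouts = "purple")
## plotCoef.enetLTS(fit1, colors = myCols)   # fit1 as in the Examples below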

...

additional arguments from the "enetLTS" object if needed.

Value

An object of class "ggplot" (see ggplot).
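
Since the returned object is a regular "ggplot", it can be customized further with ggplot2 functions; a short sketch (assuming a fitted object such as fit1 from the Examples):

library(ggplot2)
p <- plotCoef.enetLTS(fit1)      # store the returned ggplot object
p + theme_minimal()              # e.g. apply a different ggplot2 theme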

Note

Gives a matplot-style display of the coefficients versus their indices.

Author(s)

Fatma Sevinc KURNAZ, Irene HOFFMANN, Peter FILZMOSER
Maintainer: Fatma Sevinc KURNAZ <fatmasevinckurnaz@gmail.com>;<fskurnaz@yildiz.edu.tr>

References

Kurnaz, F.S., Hoffmann, I. and Filzmoser, P. (2017) Robust and sparse estimation methods for high dimensional linear and logistic regression. Chemometrics and Intelligent Laboratory Systems.

See Also

ggplot, enetLTS, coef.enetLTS, predict.enetLTS

Examples

## for gaussian

set.seed(86)
n <- 100; p <- 25                             # number of observations and variables
beta <- rep(0,p); beta[1:6] <- 1              # 6 nonzero coefficients
sigma <- 0.5                                  # controls signal-to-noise ratio
x <- matrix(rnorm(n*p, sigma),nrow=n)
e <- rnorm(n,0,1)                             # error terms
eps <- 0.1                                    # contamination level
m <- ceiling(eps*n)                           # observations to be contaminated
eout <- e; eout[1:m] <- eout[1:m] + 10        # vertical outliers
yout <- c(x %*% beta + sigma * eout)        # response
xout <- x; xout[1:m,] <- xout[1:m,] + 10      # bad leverage points


fit1 <- enetLTS(xout,yout,crit.plot=FALSE)
plotCoef.enetLTS(fit1)
plotCoef.enetLTS(fit1,vers="raw")

## for binomial
eps <- 0.05                                    # 5% contamination, applied only to class 0
m <- ceiling(eps*n)
y <- sample(0:1,n,replace=TRUE)
xout <- x
xout[y==0,][1:m,] <- xout[1:m,] + 10           # bad leverage points in class 0
yout <- y                                      # class labels are kept unchanged



fit2 <- enetLTS(xout,yout,family="binomial")
plotCoef.enetLTS(fit2)
plotCoef.enetLTS(fit2,vers="raw")


## for multinomial

n <- 120; p <- 15
NC <- 3
X <- matrix(rnorm(n * p), n, p)
betas <- matrix(1:NC, ncol=NC, nrow=p, byrow=TRUE)
betas[(p-5):p,] <- 0; betas <- rbind(rep(0,NC), betas)
lv <- cbind(1,X) %*% betas
probs <- exp(lv)/apply(exp(lv),1,sum)
y <- apply(probs,1,function(prob){sample(1:NC, 1, TRUE, prob)})
xout <- X
eps <- 0.05                         # 5% contamination level
m <- ceiling(eps*n)
xout[1:m,] <- xout[1:m,] + 10       # bad leverage points
yout <- y


fit3 <- enetLTS(xout,yout,family="multinomial")
plotCoef.enetLTS(fit3)
plotCoef.enetLTS(fit3,vers="raw")


[Package enetLTS version 1.1.0 Index]