ernet {SALES}    R Documentation
Regularization paths for the sparse asymmetric least squares (SALES) regression (or the sparse expectile regression)
Description
Fits regularization paths for the Lasso or elastic net penalized asymmetric least squares regression at a sequence of regularization parameters.
Usage
ernet(
x,
y,
nlambda = 100L,
method = "er",
lambda.factor = ifelse(nobs < nvars, 0.01, 1e-04),
lambda = NULL,
lambda2 = 0,
pf = rep(1, nvars),
pf2 = rep(1, nvars),
exclude,
dfmax = nvars + 1,
pmax = min(dfmax * 1.2, nvars),
standardize = TRUE,
intercept = TRUE,
eps = 1e-08,
maxit = 1000000L,
tau = 0.5
)
Arguments
x: matrix of predictors, of dimension (nobs * nvars); each row is an observation.

y: response variable.

nlambda: the number of lambda values in the regularization path. Default is 100.

method: a character string specifying the loss function to use; only "er" (the asymmetric squared error loss) is used by ernet.

lambda.factor: the factor for getting the minimal lambda in the lambda sequence, where min(lambda) = lambda.factor * max(lambda) and max(lambda) is the smallest value of lambda for which all coefficients are zero. Default is 0.01 if nobs < nvars and 1e-04 otherwise.

lambda: a user-supplied lambda sequence. Typically this is left unspecified and the program computes its own sequence from nlambda and lambda.factor; supplying lambda overrides that.

lambda2: regularization parameter lambda2 for the quadratic (L2) penalty on the coefficients. Default is 0.

pf: L1 penalty factor of length nvars, allowing a different L1 penalty weight for each coefficient. Default is 1 for all coefficients.

pf2: L2 penalty factor of length nvars, allowing a different L2 penalty weight for each coefficient. Default is 1 for all coefficients.

exclude: indices of variables to be excluded from the model. Default is none. Equivalent to an infinite penalty factor.

dfmax: the maximum number of variables allowed in the model. Useful for very large nvars when a partial path is desired. Default is nvars + 1.

pmax: the maximum number of coefficients allowed ever to be nonzero. For example, once a coefficient enters the model it is counted once, no matter how many times it leaves and re-enters along the path. Default is min(dfmax * 1.2, nvars).

standardize: logical flag for variable standardization prior to fitting the model sequence. The coefficients are always returned on the original scale. Default is TRUE.

intercept: should intercept(s) be fitted (default is TRUE) or set to zero (FALSE)?

eps: convergence threshold for coordinate descent. Each inner coordinate descent loop continues until the maximum change in any coefficient is less than eps. Default is 1e-08.

maxit: maximum number of outer-loop iterations allowed at fixed lambda values. Default is 1e6. If the algorithm does not converge, consider increasing maxit.

tau: the asymmetry parameter tau in the asymmetric squared error loss; the value must be in (0, 1). Default is 0.5.
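To illustrate how several of these arguments combine, the following is a hedged sketch that fits a short path with a user-supplied lambda sequence, per-coefficient penalty factors, and excluded variables; the data and the specific values are purely illustrative.

## Illustrative only: small simulated data set
set.seed(2)
n <- 50; p <- 20
xx <- matrix(rnorm(n * p), n, p)
yy <- rnorm(n)
## user-supplied decreasing lambda sequence and per-coefficient L1 weights
lam <- exp(seq(log(0.5), log(0.005), length.out = 20))
fit <- ernet(xx, yy, tau = 0.75, lambda = lam,
             pf = c(0, rep(1, p - 1)),  # no L1 shrinkage on the first coefficient
             exclude = c(5, 6))         # variables 5 and 6 never enter the model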
Details
Note that the objective function in ernet is

1'\Psi_{\tau}(y - X\beta)/N + \lambda_{1}\Vert\beta\Vert_{1} + 0.5\lambda_{2}\Vert\beta\Vert_{2}^{2},

where \Psi_{\tau} denotes the asymmetric squared error loss and the penalty is a combination of weighted L1 and L2 terms.
For faster computation, if the algorithm is not converging or is running slowly, consider increasing eps, decreasing nlambda, or increasing lambda.factor before increasing maxit.
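For reference, a minimal R sketch of the objective is given below, assuming the usual expectile-regression form Psi_tau(u) = |tau - I(u < 0)| * u^2 for the asymmetric squared error loss; the helper names asym_sq_loss and ernet_objective are illustrative and not part of the package.

## Asymmetric squared error loss (assumed form: |tau - I(u < 0)| * u^2)
asym_sq_loss <- function(u, tau) abs(tau - (u < 0)) * u^2

## Illustrative penalized objective for one (lambda1, lambda2) pair,
## with optional L1/L2 penalty weights mirroring pf and pf2
ernet_objective <- function(beta, x, y, tau, lambda1, lambda2,
                            pf = rep(1, length(beta)),
                            pf2 = rep(1, length(beta))) {
  r <- y - x %*% beta
  mean(asym_sq_loss(r, tau)) +
    lambda1 * sum(pf * abs(beta)) +
    0.5 * lambda2 * sum(pf2 * beta^2)
}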
Value
An object with S3 class ernet
.
call: the call that produced this object.

b0: intercept sequence of length length(lambda).

beta: a nvars x length(lambda) matrix of coefficients.

lambda: the actual sequence of lambda values used.

df: the number of nonzero coefficients for each value of lambda.

dim: dimension of the coefficient matrix.

npasses: total number of iterations summed over all lambda values.

jerr: error flag, for warnings and errors; 0 if no error.
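These components can be read directly off a fitted object; a brief sketch using the m1 fit from the Examples section below:

## Inspect the fitted object m1 (defined in the Examples section)
m1$lambda    # lambda sequence actually used
m1$df        # number of nonzero coefficients at each lambda
dim(m1$beta) # coefficient matrix: nvars rows, length(lambda) columns
m1$npasses   # total iterations across the whole path
m1$jerr      # 0 indicates no error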
Author(s)
Yuwen Gu and Hui Zou
Maintainer: Yuwen Gu <yuwen.gu@uconn.edu>
References
Gu, Y., and Zou, H. (2016).
"High-dimensional generalizations of asymmetric least squares regression and their applications."
The Annals of Statistics, 44(6), 2661–2694.
See Also
plot.ernet, coef.ernet, predict.ernet, print.ernet
Examples
set.seed(1)
n <- 100
p <- 400
x <- matrix(rnorm(n * p), n, p)
y <- rnorm(n)
tau <- 0.90
pf <- abs(rnorm(p))
pf2 <- abs(rnorm(p))
lambda2 <- 1
m1 <- ernet(y = y, x = x, tau = tau, eps = 1e-8, pf = pf,
pf2 = pf2, standardize = FALSE, intercept = FALSE,
lambda2 = lambda2)
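The S3 methods listed under See Also can then be applied to the fit. A hedged follow-up sketch (the newx and s arguments are assumed to follow the usual glmnet-style interface):

plot(m1)                                        # coefficient paths against lambda
cf <- coef(m1, s = m1$lambda[10])               # coefficients at one lambda value
pr <- predict(m1, newx = x, s = m1$lambda[10])  # fitted values at that lambda
print(m1)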