cpernet {SALES}    R Documentation
Regularization paths for the coupled sparse asymmetric least squares (COSALES) regression (or the coupled sparse expectile regression)
Description
Fits regularization paths for coupled sparse asymmetric least squares regression at a sequence of regularization parameters.
Usage
cpernet(
x,
y,
w = 1,
nlambda = 100L,
method = "cper",
lambda.factor = ifelse(2 * nobs < nvars, 0.01, 1e-04),
lambda = NULL,
lambda2 = 0,
pf.mean = rep(1, nvars),
pf2.mean = rep(1, nvars),
pf.scale = rep(1, nvars),
pf2.scale = rep(1, nvars),
exclude,
dfmax = nvars + 1,
pmax = min(dfmax * 1.2, nvars),
standardize = TRUE,
intercept = TRUE,
eps = 1e-08,
maxit = 1000000L,
tau = 0.8
)
Arguments
x: matrix of predictors, of dimension (nobs * nvars); each row is an observation.

y: response variable.

w: weight applied to the asymmetric squared error loss of the mean part. See Details. Default is 1.0.
nlambda: the number of lambda values in the path. Default is 100.

method: a character string specifying the loss function to use; only "cper" is currently supported.

lambda.factor: the factor for getting the minimal lambda in the computed lambda sequence, with min(lambda) = lambda.factor * max(lambda). Default is 0.01 if 2 * nobs < nvars and 1e-04 otherwise. It has no effect when a lambda sequence is supplied.

lambda: a user-supplied lambda sequence, preferably decreasing. If left as NULL (the default), the program computes its own sequence from nlambda and lambda.factor; supplying lambda overrides this (see the sketch after this argument list).

lambda2: regularization parameter lambda2 for the quadratic (L2) penalty on the coefficients. Default is 0.

pf.mean, pf.scale: L1 penalty factors of length nvars applied to the mean and scale coefficients, respectively, allowing different L1 shrinkage for each variable. Default is 1 for all variables.

pf2.mean, pf2.scale: L2 penalty factors of length nvars applied to the mean and scale coefficients, respectively, allowing different L2 shrinkage for each variable. Default is 1 for all variables.
exclude: indices of variables to be excluded from the model. Default is none. Equivalent to an infinite penalty factor.
dfmax: limit the maximum number of variables in the model. Useful for very large nvars when a partial path is desired. Default is nvars + 1.

pmax: limit the maximum number of variables ever to be nonzero along the path. For example, once a variable enters the model it counts toward this limit, no matter how often it leaves and re-enters. Default is min(dfmax * 1.2, nvars).
standardize: logical flag for variable standardization prior to fitting the model sequence. The coefficients are always returned on the original scale. Default is TRUE.

intercept: whether intercept(s) should be fitted (default TRUE) or set to zero (FALSE).
eps: convergence threshold for coordinate descent. Each inner coordinate descent loop continues until the maximum change in any coefficient is less than eps. Default is 1e-08.

maxit: maximum number of outer-loop iterations allowed at fixed lambda values. Default is 1e+06. If the algorithm does not converge, consider increasing maxit.

tau: the asymmetry parameter tau of the asymmetric squared error loss for the scale part; see Details. Default is 0.8.
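As referenced in the lambda entry above, the following sketch supplies a decreasing lambda sequence and excludes two predictors. It assumes x and y as in the Examples below; the particular values are arbitrary illustrations, not recommendations from this page.

lam <- 10^seq(0, -3, length.out = 30)  ## user-supplied, decreasing lambda sequence
fit.user <- cpernet(x, y, lambda = lam, exclude = c(1, 2))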
Details
Note that the objective function in cpernet is

w 1'\Psi(y - X\beta, 0.5)/N + 1'\Psi(y - X\beta - X\theta, \tau)/N + \lambda_1 \Vert\beta\Vert_1 + 0.5 \lambda_2 \Vert\beta\Vert_2^2 + \mu_1 \Vert\theta\Vert_1 + 0.5 \mu_2 \Vert\theta\Vert_2^2,

where \Psi(u, \tau) = |\tau - I(u < 0)| u^2 denotes the asymmetric squared error loss, and the penalty is a combination of L1 and L2 terms for both the mean and scale coefficients.
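As a small illustration of this loss, the sketch below evaluates \Psi(u, \tau) directly in R. The helper name asy_loss is made up for this example and is not part of SALES.

## asymmetric squared error loss Psi(u, tau) = |tau - I(u < 0)| * u^2
## (asy_loss is a hypothetical helper, not exported by SALES)
asy_loss <- function(u, tau) abs(tau - (u < 0)) * u^2
u <- c(-2, -0.5, 0.5, 2)
asy_loss(u, 0.5)  ## with tau = 0.5 this reduces to 0.5 * u^2
asy_loss(u, 0.8)  ## tau > 0.5 penalizes positive residuals more heavily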
For faster computation, if the algorithm is not converging or is running slowly, consider increasing eps, decreasing nlambda, or increasing lambda.factor before increasing maxit.
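For instance, assuming x and y as in the Examples below, a coarser but faster path could be requested as follows; the specific values are arbitrary and trade accuracy for speed.

fit.fast <- cpernet(x, y, nlambda = 50,          ## fewer lambda values
                    lambda.factor = 0.05,        ## stop the path earlier
                    eps = 1e-6)                  ## looser convergence threshold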
Value
An object with S3 class cpernet.
call: the call that produced this object.
b0, t0: intercept sequences for the mean and scale parts, respectively, both of length length(lambda).

beta, theta: coefficient matrices for the mean and scale parts, respectively, each with nvars rows and length(lambda) columns, stored in sparse format (use as.matrix() to convert).

lambda: the actual sequence of lambda values used.

df.beta, df.theta: the number of nonzero mean and scale coefficients, respectively, at each value of lambda.
dim: dimensions of the coefficient matrices.

npasses: total number of iterations summed over all lambda values.

jerr: error flag for warnings and errors; 0 if no error.
Author(s)
Yuwen Gu and Hui Zou
Maintainer: Yuwen Gu <yuwen.gu@uconn.edu>
References
Gu, Y., and Zou, H. (2016).
"High-dimensional generalizations of asymmetric least squares regression and their applications."
The Annals of Statistics, 44(6), 2661–2694.
See Also
plot.cpernet, coef.cpernet, predict.cpernet, print.cpernet
Examples
set.seed(1)
n <- 100
p <- 400
x <- matrix(rnorm(n * p), n, p)
y <- rnorm(n)
tau <- 0.30
pf <- abs(rnorm(p))
pf2 <- abs(rnorm(p))
w <- 2.0
lambda2 <- 1
m2 <- cpernet(y = y, x = x, w = w, tau = tau, eps = 1e-8,
pf.mean = pf, pf.scale = pf2,
standardize = FALSE, lambda2 = lambda2)
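## A possible follow-up on the fitted object m2, using the methods listed under
## See Also. The s and newx arguments are assumed to follow glmnet-style
## conventions, and the specific values are arbitrary; treat this as a sketch
## rather than part of this page's documented interface.
print(m2)                                ## path summary
plot(m2)                                 ## coefficient paths
cf <- coef(m2, s = m2$lambda[10])        ## coefficients at one lambda value
pred <- predict(m2, newx = x[1:5, ], s = m2$lambda[10])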