adaHuber.cv.lasso {adaHuber}    R Documentation

Cross-Validated Regularized Adaptive Huber Regression.

Description

Sparse regularized adaptive Huber regression with a "lasso" penalty. The function implements a local adaptive majorize-minimize (LAMM) algorithm with a gradient-based method. The regularization parameter \lambda is selected by cross-validation, and the robustification parameter \tau is determined by a tuning-free principle.
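
In schematic form, cross-validation fits the model on k - 1 folds for each candidate \lambda, scores it on the held-out fold, and keeps the \lambda with the smallest average out-of-fold error. The sketch below is an illustration only: the package's internal criterion is based on the Huber loss (simplified here to squared error), and X, Y, lambdaSeq and kfolds are assumed to be supplied by the user.

## Schematic k-fold selection of lambda (illustration only).
folds <- sample(rep(1:kfolds, length.out = nrow(X)))
cv.error <- sapply(lambdaSeq, function(lam) {
  mean(sapply(1:kfolds, function(k) {
    hold <- folds == k
    fit  <- adaHuber.lasso(X[!hold, ], Y[!hold], lambda = lam)
    ## fit$coef: (p + 1)-vector including the intercept (as for adaHuber.cv.lasso).
    mean((Y[hold] - cbind(1, X[hold, ]) %*% fit$coef)^2)
  }))
})
lambda.opt <- lambdaSeq[which.min(cv.error)]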

Usage

adaHuber.cv.lasso(
  X,
  Y,
  lambdaSeq = NULL,
  kfolds = 5,
  numLambda = 50,
  phi0 = 0.01,
  gamma = 1.2,
  epsilon = 0.001,
  iteMax = 500
)

Arguments

X

An n by p design matrix. Each row is an observation vector with p covariates.

Y

An n-dimensional response vector.

lambdaSeq

(optional) A sequence of candidate regularization parameters. If unspecified, a reasonable sequence will be generated.
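
For illustration, a user-supplied sequence could be a log-spaced grid decreasing from a data-driven maximum, as sketched below; this is only an example and need not match the sequence generated internally by the package.

## Hypothetical log-spaced grid of 50 candidate lambda values (sketch only).
lambdaMax <- max(abs(crossprod(X, Y - mean(Y)))) / nrow(X)
lambdaSeq <- exp(seq(log(lambdaMax), log(0.01 * lambdaMax), length.out = 50))
fit <- adaHuber.cv.lasso(X, Y, lambdaSeq = lambdaSeq)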

kfolds

(optional) Number of folds for cross-validation. Default is 5.

numLambda

(optional) Number of \lambda values for cross-validation if lambdaSeq is unspecified. Default is 50.

phi0

(optional) The initial quadratic coefficient parameter in the local adaptive majorize-minimize algorithm. Default is 0.01.

gamma

(optional) The adaptive search parameter (greater than 1) in the local adaptive majorize-minimize algorithm. Default is 1.2.
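
The roles of phi0 and gamma can be illustrated with a schematic LAMM / proximal-gradient step for the Huber-lasso objective. This is a sketch only; the helpers below (soft.threshold, huber.loss, lamm.step) are hypothetical and not exported by the package, and intercept handling is omitted.

## Schematic LAMM step (illustration only).
soft.threshold <- function(z, t) sign(z) * pmax(abs(z) - t, 0)
huber.loss <- function(r, tau) ifelse(abs(r) <= tau, r^2 / 2, tau * abs(r) - tau^2 / 2)
lamm.step <- function(X, Y, beta, lambda, tau, phi, gamma) {
  r    <- as.vector(Y - X %*% beta)
  grad <- -as.vector(crossprod(X, ifelse(abs(r) <= tau, r, tau * sign(r)))) / nrow(X)
  repeat {
    beta.new <- as.vector(soft.threshold(beta - grad / phi, lambda / phi))
    ## Accept once the isotropic quadratic with coefficient phi majorizes the
    ## Huber loss; otherwise inflate phi by the factor gamma and retry.
    lhs <- mean(huber.loss(as.vector(Y - X %*% beta.new), tau))
    rhs <- mean(huber.loss(r, tau)) + sum(grad * (beta.new - beta)) +
      0.5 * phi * sum((beta.new - beta)^2)
    if (lhs <= rhs) return(list(beta = beta.new, phi = phi))
    phi <- phi * gamma
  }
}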

epsilon

(optional) A tolerance level for the stopping rule. The iteration will stop when the maximum magnitude of the change of coefficient updates is less than epsilon. Default is 0.001.
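
In code, the stopping rule amounts to a check of the following form, where beta.new and beta.old denote consecutive coefficient iterates (a sketch, not the package's internal code).

## Stop once the largest coefficient change falls below epsilon.
converged <- max(abs(beta.new - beta.old)) < epsilon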

iteMax

(optional) Maximum number of iterations. Default is 500.

Value

A list object containing the following items is returned:

coef

A (p + 1)-dimensional vector of estimated sparse regression coefficients, including the intercept.

lambdaSeq

The sequence of candidate regularization parameters.

lambda

Regularization parameter selected by cross-validation.

tau

The robustification parameter calibrated by the tuning-free principle.

iteration

Number of iterations until convergence.

phi

The quadratic coefficient parameter in the local adaptive majorize-minimize algorithm.

References

Pan, X., Sun, Q. and Zhou, W.-X. (2021). Iteratively reweighted l1-penalized robust regression. Electron. J. Stat., 15, 3287-3348.

Sun, Q., Zhou, W.-X. and Fan, J. (2020). Adaptive Huber regression. J. Amer. Statist. Assoc., 115, 254-265.

Wang, L., Zheng, C., Zhou, W. and Zhou, W.-X. (2021). A new principle for tuning-free Huber regression. Stat. Sinica, 31, 2153-2177.

See Also

See adaHuber.lasso for regularized adaptive Huber regression with a specified lambda.

Examples

n = 100; p = 200; s = 5
beta = c(rep(1.5, s + 1), rep(0, p - s))   # intercept and s nonzero slopes
X = matrix(rnorm(n * p), n, p)
err = rt(n, 2)                             # heavy-tailed t_2 errors
Y = cbind(rep(1, n), X) %*% beta + err

fit.lasso = adaHuber.cv.lasso(X, Y)
beta.lasso = fit.lasso$coef                # (p + 1)-vector including the intercept
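
## The selected tuning parameters and the fitted sparsity can be inspected
## from the returned object (fields documented in the Value section above).
fit.lasso$lambda                 # lambda selected by cross-validation
fit.lasso$tau                    # tau calibrated by the tuning-free principle
sum(beta.lasso[-1] != 0)         # number of nonzero slope estimates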

[Package adaHuber version 1.1]