grpl.control {grplasso}    R Documentation
Options for the Group Lasso Algorithm
Description
Definition of options such as bounds on the Hessian, convergence criteria and output management for the group lasso algorithm.
Usage
grpl.control(save.x = FALSE, save.y = TRUE,
             update.hess = c("lambda", "always"), update.every = 3,
             inner.loops = 10, line.search = TRUE, max.iter = 500,
             tol = 5 * 10^-8, lower = 10^-2, upper = Inf, beta = 0.5,
             sigma = 0.1, trace = 1)
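Only the options that should differ from these defaults need to be supplied. A minimal sketch, assuming only that the grplasso package is installed:

library(grplasso)

## Override a few options, keep the remaining defaults.
ctrl <- grpl.control(update.hess = "always",  # update the Hessian in every iteration
                     max.iter    = 1000,      # allow more sweeps through the groups
                     tol         = 1e-9)      # stricter convergence tolerance
class(ctrl)   # "grpl.control"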
Arguments
save.x: a logical indicating whether the design matrix should be saved.

save.y: a logical indicating whether the response should be saved.

update.hess: should the Hessian be updated in each iteration ("always")? update.hess = "lambda" updates the Hessian once for each component of the penalty parameter "lambda", based on the parameter estimates corresponding to the previous value of the penalty parameter.

update.every: only used if update.hess = "lambda". E.g. set to 3 to update the Hessian only at every third grid point of the penalty parameter.

inner.loops: how many loops should be done (at maximum) when solving only the active set, without considering the remaining predictors. Useful if the number of predictors is large. Set to 0 if no inner loops should be performed.

line.search: should line searches be performed?

max.iter: maximal number of loops through all groups.

tol: convergence tolerance; the smaller the value, the more precise the solution. See the details below.

lower: lower bound for the diagonal approximation of the corresponding block submatrix of the Hessian of the negative log-likelihood function.

upper: upper bound for the diagonal approximation of the corresponding block submatrix of the Hessian of the negative log-likelihood function.
beta: scaling factor (beta < 1) of the Armijo line search.

sigma: parameter (0 < sigma < 1) used in the Armijo line search.

trace: integer controlling how much progress information is printed during fitting; larger values produce more output.
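The options update.hess, update.every and inner.loops matter mainly when grplasso() is fitted over a whole grid of penalty parameters. The following sketch assumes the standard grplasso() interface (arguments x, y, index, lambda, model and control) and that a group index of NA marks an unpenalized column such as the intercept; details may differ between package versions.

library(grplasso)

set.seed(1)
n <- 100
x <- cbind(1, matrix(rnorm(n * 6), n, 6))   # column 1: intercept
index <- c(NA, 1, 1, 2, 2, 3, 3)            # NA = unpenalized intercept
y <- drop(x %*% c(0.5, 1, 1, 0, 0, 2, 2)) + rnorm(n)

## Decreasing grid of penalty parameters; with update.hess = "lambda" and
## update.every = 2 the Hessian bounds are refreshed at every second grid point.
lambda <- exp(seq(log(50), log(1), length = 10))
ctrl   <- grpl.control(update.hess = "lambda", update.every = 2,
                       inner.loops = 20)

fit <- grplasso(x, y = y, index = index, lambda = lambda,
                model = LinReg(), control = ctrl)
coef(fit)   # one column of coefficients per value of lambda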
Details
For the convergence criteria see chapter 8.2.3.2 of Gill et al. (1981).
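The criteria described there combine the relative decrease of the objective function with the relative change of the parameter vector, with tol acting as the tolerance. A rough illustration of such a test (a sketch only, not necessarily the exact form implemented in the package):

## Relative-change stopping rule in the spirit of Gill, Murray and Wright
## (1981), Section 8.2.3.2; 'tol' plays the role of the control parameter tol.
has.converged <- function(f.old, f.new, par.old, par.new, tol) {
  abs(f.old - f.new) < tol * (1 + abs(f.new)) &&
    max(abs(par.old - par.new)) < sqrt(tol) * (1 + max(abs(par.new)))
}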
Value
An object of class grpl.control.
References
Philip E. Gill, Walter Murray and Margaret H. Wright (1981) Practical Optimization, Academic Press.
Dimitri P. Bertsekas (2003) Nonlinear Programming, Athena Scientific.