l2boost {l2boost}    R Documentation
Generic gradient descent boosting method for linear regression.
Description
Efficient implementation of Friedman's boosting algorithm [Friedman (2001)] with L2-loss function and coordinate direction (design matrix columns) basis functions. This includes the elasticNet data augmentation of Ehrlinger and Ishwaran (2012), which adds an L2-penalization (lambda) similar to the elastic net [Zou and Hastie (2005)].
Usage
l2boost(x, ...)
## Default S3 method:
l2boost(x, y, M, nu, lambda, trace, type, qr.tolerance, eps.tolerance, ...)
## S3 method for class 'formula'
l2boost(formula, data, ...)
Arguments
x: design matrix of dimension n x p.

...: other arguments (currently unused).

y: response variable of length n.

M: number of steps to run the boosting algorithm (M > 1).

nu: L1 shrinkage parameter (0 < nu <= 1).

lambda: L2 shrinkage parameter used for elastic net boosting (lambda > 0, or lambda = NULL for no L2 penalty).

trace: show runtime messages (default: FALSE).

type: choice of l2boost algorithm from "discrete", "hybrid", "friedman", "lars". See Details below. (default: "discrete")

qr.tolerance: tolerance limit for use in qr.solve.

eps.tolerance: dynamic step size lower limit (default: .Machine$double.eps).

formula: an object of class formula: a symbolic description of the model to be fitted.

data: an optional data frame, list or environment (or object coercible by as.data.frame to a data frame) containing the variables in the model.
Details
The l2boost function is an efficient implementation of a generic boosting method [Friedman (2001)] for linear regression using an L2-loss function. The basis functions are the column vectors of the design matrix. l2boost scales the design matrix such that the coordinate columns of the design correspond to the gradient directions for each covariate. The boosting coefficients are equivalent to the gradient-correlation of each covariate. Friedman's gradient descent boosting algorithm proceeds at each step along the covariate direction closest (in L2 distance) to the maximal gradient descent direction.
We include a series of algorithms to solve the boosting optimization, selected through the type argument (a minimal sketch of the basic iteration follows this list):

- friedman: The original, bare-bones l2boost [Friedman (2001)]. This method takes a fixed step of length nu at each iteration.

- lars: The l2boost-lars-limit [see Efron et al. (2004)]. This algorithm takes a single step of the optimal length to the critical point required for a new coordinate direction to become favorable. Although optimal in the number of steps required to reach the OLS solution, this method may be computationally expensive for large-p problems, as it requires a matrix inversion to calculate the step length.

- discrete: Optimized Friedman algorithm that reduces the number of evaluations required [Ehrlinger and Ishwaran (2012)]. The algorithm dynamically determines the number of steps of length nu to take along a descent direction, allowing step sizes that are multiples of nu at any evaluation.

- hybrid: Similar to discrete, but only allows combining steps along the first descent direction. hybrid works best if nu is moderate, but not too small. In this case, Friedman's algorithm would take many steps along the first coordinate direction, and then cycle when multiple coordinates have similar gradient directions (by the L2 measure).
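The following is a minimal sketch of the fixed-step ("friedman") iteration, for intuition only; it is not the package's implementation, which adds the scaling, bookkeeping and speed-ups described above. The name l2boost_sketch is hypothetical.

## Minimal sketch of fixed-step L2 boosting (not the package internals)
l2boost_sketch <- function(x, y, M = 100, nu = 0.01) {
  x <- scale(x)      # standardize so columns act as coordinate directions
  y <- y - mean(y)   # center the response
  beta <- numeric(ncol(x))
  for (m in seq_len(M)) {
    rho <- crossprod(x, y - x %*% beta) / nrow(x)  # gradient-correlations
    j <- which.max(abs(rho))          # coordinate closest to the gradient
    beta[j] <- beta[j] + nu * rho[j]  # fixed step of length nu
  }
  beta
}

The discrete and hybrid variants follow the same solution path while grouping repeated steps along a single direction into fewer evaluations.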
l2boost keeps track of all gradient-correlation coefficients (rho) at each iteration, in addition to the maximal descent direction taken by the method. Visualizing these coefficients can be informative of the inner workings of gradient boosting (see the examples in the plot.l2boost method).
The l2boost function uses an arbitrary L1-regularization parameter (nu), and includes the elementary data augmentation of Ehrlinger and Ishwaran (2012) to add an L2-penalization (lambda) similar to the elastic net [Zou and Hastie (2005)]. The L2-regularization reverses repressibility, a condition where one variable acts as a boosting surrogate for other, possibly informative, variables. Along with the decorrelation effect, this elasticBoost regularization circumvents L2Boost deficiencies in correlated settings.
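As a rough illustration of the augmentation idea, the elastic net of Zou and Hastie (2005) can be obtained by running L2Boost on an augmented dataset. The helper below is a hypothetical sketch; the exact scaling used inside l2boost may differ.

## Hypothetical sketch of the Zou-Hastie data augmentation:
## boosting on (x.aug, y.aug) imposes a ridge-type L2 penalty lambda
augment_elastic <- function(x, y, lambda) {
  p <- ncol(x)
  x.aug <- rbind(x, sqrt(lambda) * diag(p)) / sqrt(1 + lambda) # append scaled identity block
  y.aug <- c(y, rep(0, p))                                     # pad response with p zeros
  list(x = x.aug, y = y.aug)
}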
We include a series of S3 functions for working with l2boost objects:

- print (print.l2boost) prints a summary of the l2boost fit.

- coef (coef.l2boost) returns the l2boost model regression coefficients at any point along the solution path.

- fitted (fitted.l2boost) returns the fitted l2boost response estimates (from the training dataset) along the solution path.

- residuals (residuals.l2boost) returns the training set l2boost residuals along the solution path.

- plot (plot.l2boost) for graphing model coefficients of an l2boost object.

- predict (predict.l2boost) for generating l2boost prediction estimates on possibly new test set observations.
A cross-validation method (cv.l2boost) is also included for L2boost and elasticBoost, providing cross-validated error estimates and regularization parameter optimization.
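For example (a brief usage sketch; exact method signatures are documented on each accessor's help page, and defaults are assumed for cv.l2boost):

data(diabetes, package = "l2boost")
fit <- l2boost(diabetes$x, diabetes$y, M = 1000, nu = 0.01)
print(fit)                 # summary of the fit
beta.M <- coef(fit)        # regression coefficients at the final step
y.hat <- fitted(fit)       # training-set response estimates
r <- residuals(fit)        # training-set residuals
plot(fit, type = "coef")   # coefficient trajectories along the path
cv.fit <- cv.l2boost(diabetes$x, diabetes$y, M = 1000, nu = 0.01)  # cross-validated error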
Value
A "l2boost" object is returned, for which print, plot, predict, and coef methods exist.
call: the matched call.

type: choice of l2boost algorithm from "friedman", "discrete", "hybrid", "lars".

nu: the L1 boosting shrinkage parameter value.

lambda: the L2 elasticNet shrinkage parameter value.

x: the training dataset.

x.na: columns of the original design matrix containing NA values; these have been removed from x.

x.attr: scale attributes of the design matrix.

names: column names of the design matrix.

y: training response vector associated with x, centered about the mean value ybar.

ybar: mean value of the training response vector.

mjk: measure of favorability; a matrix of size p by M, where each coordinate j has a measure at each step m.

stepSize: vector of step lengths taken.

l.crit: vector of column indices of the critical descent directions.

L.crit: number of steps along each l.crit direction.

S.crit: the critical step value where a direction change occurs.

path.Fm: estimates of the response at each step m.

Fm: estimate of the response at the final step M.

rhom.path: boosting parameter (gradient-correlation) estimates at each step m.

betam.path: beta parameter estimates at each step m; a list of M vectors of length p.

betam: beta parameter estimate at the final step M.
The notation for the return values is described in Ehrlinger and Ishwaran (2012).
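For example, the components can be inspected directly on a fitted object (a small sketch using the diabetes data from the Examples below):

data(diabetes, package = "l2boost")
fit <- l2boost(diabetes$x, diabetes$y, M = 200, nu = 0.05)
fit$call          # the matched call
fit$betam         # coefficient estimates at the final step M
head(fit$l.crit)  # column indices of the selected descent directions
str(fit$rhom.path, max.level = 1)  # gradient-correlation estimates along the path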
References
Friedman J. (2001). Greedy function approximation: a gradient boosting machine. Ann. Statist., 29:1189-1232.

Ehrlinger J. and Ishwaran H. (2012). Characterizing L2Boosting. Ann. Statist., 40(2):1074-1101.

Zou H. and Hastie T. (2005). Regularization and variable selection via the elastic net. J. R. Statist. Soc. B, 67(2):301-320.

Efron B., Hastie T., Johnstone I. and Tibshirani R. (2004). Least Angle Regression. Ann. Statist., 32:407-499.
See Also
print.l2boost, plot.l2boost, predict.l2boost, coef.l2boost, residuals.l2boost and fitted.l2boost methods of l2boost, and cv.l2boost for K-fold cross-validation of the l2boost method.
Examples
#--------------------------------------------------------------------------
# Example 1: Diabetes data
#
# See Efron B., Hastie T., Johnstone I., and Tibshirani R.
# Least angle regression. Ann. Statist., 32:407-499, 2004.
data(diabetes, package="l2boost")
l2.object <- l2boost(diabetes$x, diabetes$y, M=1000, nu=0.01)
# Plot the boosting rho, and regression beta coefficients as a function of
# boosting steps m
#
# Note: The selected coordinate trajectories are colored in red after selection, and
# blue before. Unselected coordinates are colored grey.
#
par(mfrow=c(2,2))
plot(l2.object)
plot(l2.object, type="coef")
# increased shrinkage and number of iterations.
l2.shrink <- l2boost(diabetes$x, diabetes$y, M=5000, nu=1.e-3)
plot(l2.shrink)
plot(l2.shrink, type="coef")
## Not run:
#--------------------------------------------------------------------------
# Example 2: elasticBoost simulation
# Compare l2boost and elastic net boosting
#
# See Zou H. and Hastie T. Regularization and variable selection via the
# elastic net. J. Royal Statist. Soc. B, 67(2):301-320, 2005
set.seed(1025)
# The default simulation uses 40 covariates with signal concentrated on
# 3 groups of 5 correlated covariates (for 15 signal covariates)
dta <- elasticNetSim(n=100)
# l2boost the simulated data with groups of correlated coordinates
l2.object <- l2boost(dta$x, dta$y, M=10000, nu=1.e-3, lambda=NULL)
par(mfrow=c(2,2))
# plot the l2boost trajectories over all M
plot(l2.object, main="l2Boost nu=1.e-3")
# Then zoom into the first m=500 steps
plot(l2.object, xlim=c(0,500), ylim=c(.25,.5), main="l2Boost nu=1.e-3")
# elasticNet same data with L1 parameter lambda=0.1
en.object <- l2boost(dta$x, dta$y, M=10000, nu=1.e-3, lambda=0.1)
# plot the elasticNet trajectories over all M
#
# Note 2: The elasticBoost selects all coordinates close to the selection boundary,
# whereas l2boost leaves some unselected (in grey)
plot(en.object, main="elasticBoost nu=1.e-3, lambda=.1")
# Then zoom into the first m=500 steps
plot(en.object, xlim=c(0,500), ylim=c(.25,.5),
main="elasticBoost nu=1.e-3, lambda=.1")
## End(Not run)