cv.grpreg.gamma {sparseGAM}    R Documentation

Cross-validation for Group-regularized Gamma Regression

Description

This function implements K-fold cross-validation for group-regularized gamma regression with a known shape parameter \nu and the log link. For a description of group-regularized gamma regression, see the documentation for the grpreg.gamma function.

Our implementation is based on the least squares approximation (LSA) approach of Wang and Leng (2007); consequently, the function does not allow the total number of covariates p to exceed \frac{K-1}{K} \times n, where n is the sample size and K is the number of folds.
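The dimension restriction above can be sketched in base R as a simple pre-check (a minimal illustration; the variable names here are hypothetical, not part of the package):

```r
## Sketch of the LSA dimension restriction: p may not exceed ((K-1)/K) * n
n <- 100              # sample size
p <- 11               # total number of covariates
K <- 10               # number of cross-validation folds
p_ok <- p <= (K - 1) / K * n
p_ok                  # TRUE: this (n, p, K) combination is allowed
```

With K=10 folds, each training split contains only (K-1)/K of the observations, which is why the bound involves the fold count.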

Usage

cv.grpreg.gamma(y, X, groups, gamma.shape=1, penalty=c("gLASSO","gSCAD","gMCP"),
                nfolds=10, weights, taper, nlambda=100, lambda, max.iter=10000, 
                tol=1e-4)

Arguments

y

n \times 1 vector of responses.

X

n \times p design matrix, where the jth column of X corresponds to the jth overall covariate.

groups

p-dimensional vector of group labels. The jth entry in groups should contain either the group number or the name of the factor level that the jth covariate belongs to. groups must be a vector of either integers or factors.

gamma.shape

known shape parameter \nu in Gamma(\mu_i,\nu) distribution for the responses. Default is gamma.shape=1.
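The Gamma(\mu_i, \nu) parametrization assumed here can be checked with base R's rgamma: with known shape \nu and mean \mu, the rate is \nu/\mu, so E[y] = shape/rate = \mu (this matches the response generation in the Examples below; a sketch, not package code):

```r
## Sketch: mean-parametrized gamma draws via stats::rgamma
set.seed(1)
nu <- 2                                  # known shape parameter
mu <- 3                                  # target mean
y  <- rgamma(1e5, shape = nu, rate = nu / mu)
mean(y)                                  # close to mu = 3
```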

penalty

group regularization method to use on the groups of coefficients. The options are "gLASSO", "gSCAD", and "gMCP". To implement cross-validation for gamma regression with the SSGL penalty, use the cv.SSGL function.

nfolds

number of folds K to use in K-fold cross-validation. Default is nfolds=10.

weights

group-specific, nonnegative weights for the penalty. Default is to use the square roots of the group sizes.
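The stated default (square roots of the group sizes) can be reproduced in base R as follows (an illustration only; `default_weights` is a hypothetical name, not a package object):

```r
## Sketch of the default penalty weights: sqrt of each group's size
groups <- c(1,1,1,2,2,2,3,3,4,5,5)            # grouping used in the Examples
default_weights <- sqrt(as.numeric(table(groups)))
default_weights                                # sqrt(3), sqrt(3), sqrt(2), sqrt(1), sqrt(2)
```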

taper

tapering term \gamma in group SCAD and group MCP controlling how rapidly the penalty tapers off. Default is taper=4 for group SCAD and taper=3 for group MCP. Ignored if "gLASSO" is specified as the penalty.

nlambda

number of regularization parameters L. Default is nlambda=100.

lambda

grid of L regularization parameters. The user may specify either a scalar or a vector. If lambda is not provided, the program chooses the grid automatically.

max.iter

maximum number of iterations in the algorithm. Default is max.iter=10000.

tol

convergence threshold for the algorithm. Default is tol=1e-4.

Value

The function returns a list containing the following components:

lambda

L \times 1 vector of regularization parameters lambda used to fit the model. lambda is displayed in descending order.

cve

L \times 1 vector of mean cross-validation error across all K folds. The kth entry in cve corresponds to the kth regularization parameter in lambda.

cvse

L \times 1 vector of standard errors for cross-validation error across all K folds. The kth entry in cvse corresponds to the kth regularization parameter in lambda.

lambda.min

value of lambda that minimizes mean cross-validation error cve.
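The relationship between lambda.min, cve, and lambda described above can be sketched with a made-up cross-validation curve (the list names mirror the returned components; the cve values are illustrative, not real output):

```r
## Sketch: lambda.min is the lambda value at the minimum of the cve curve
fit <- list(lambda = c(1, 0.5, 0.1, 0.05),    # descending, as returned
            cve    = c(4.0, 3.2, 2.9, 3.1))   # made-up CV errors
lambda.min <- fit$lambda[which.min(fit$cve)]
lambda.min                                     # 0.1
```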

References

Breheny, P. and Huang, J. (2015). "Group descent algorithms for nonconvex penalized linear and logistic regression models with grouped predictors." Statistics and Computing, 25:173-187.

Wang, H. and Leng, C. (2007). "Unified LASSO estimation by least squares approximation." Journal of the American Statistical Association, 102:1039-1048.

Examples

## Generate data
set.seed(12345)
X = matrix(runif(100*11), nrow=100)
n = dim(X)[1]
groups = c(1,1,1,2,2,2,3,3,4,5,5)
true.beta = c(-1,1,1,0,0,0,0,0,0,1.5,-1.5)

## Generate responses from gamma regression with known shape parameter 1
eta = X %*% true.beta
shape = 1
y = rgamma(n, rate=shape/exp(eta), shape=shape)

## 10-fold cross-validation for group-regularized gamma regression
## with the group LASSO penalty
gamma.cv = cv.grpreg.gamma(y, X, groups, penalty="gLASSO")

## Plot cross-validation curve
plot(gamma.cv$lambda, gamma.cv$cve, type="l", xlab="lambda", ylab="CVE")
## lambda which minimizes mean CVE
gamma.cv$lambda.min

[Package sparseGAM version 1.0 Index]