grpreg.nb {sparseGAM}    R Documentation
Group-regularized Negative Binomial Regression
Description
This function implements group-regularized negative binomial regression with a known size parameter \alpha and the log link. In negative binomial regression, we assume that y_i \sim NB(\alpha, \mu_i), where

f(y_i | \alpha, \mu_i) = \frac{\Gamma(y_i+\alpha)}{y_i! \Gamma(\alpha)} \left( \frac{\mu_i}{\mu_i+\alpha} \right)^{y_i} \left( \frac{\alpha}{\mu_i+\alpha} \right)^{\alpha}, \quad y_i = 0, 1, 2, \ldots

Then E(y_i) = \mu_i, and we relate \mu_i to a set of p covariates x_i through the log link,

\log(\mu_i) = \beta_0 + x_i^T \beta, \quad i = 1, \ldots, n.

If the covariates in each x_i are grouped according to known groups g = 1, \ldots, G, then this function may estimate some of the G groups of coefficients as all zero, depending on the amount of regularization.
Our implementation of regularized negative binomial regression is based on the least squares approximation (LSA) approach of Wang and Leng (2007); hence, the function does not allow the total number of covariates p to be greater than the sample size n.
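The LSA step can be sketched as follows. This is an illustration of the Wang and Leng (2007) idea only, not the internal code of grpreg.nb; it assumes data y and X as in the Examples below, and fixes the size parameter at \alpha = 1 via the negative.binomial family from MASS:

## Sketch of the least squares approximation (LSA) idea; illustration only
library(MASS)
mle = glm(y ~ X, family = negative.binomial(theta = 1))  # unpenalized NB MLE
beta.hat = coef(mle)            # MLE of (beta_0, beta); requires n > p
Sigma.inv = solve(vcov(mle))    # estimated inverse covariance of beta.hat
## LSA replaces the NB log-likelihood with the quadratic form
## (beta - beta.hat)' Sigma.inv (beta - beta.hat) / 2, reducing the problem
## to group-penalized least squares; this is also why p may not exceed n.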
Usage
grpreg.nb(y, X, X.test, groups, nb.size=1, penalty=c("gLASSO","gSCAD","gMCP"),
weights, taper, nlambda=100, lambda, max.iter=10000, tol=1e-4)
Arguments
y: n-dimensional vector of count responses.

X: n x p design matrix of training data, where the kth column corresponds to the kth covariate.

X.test: n.test x p design matrix of test data, used to compute predicted mean responses.

groups: p-dimensional vector of group labels; the kth entry identifies the group that the kth covariate belongs to.

nb.size: known size parameter \alpha in the NB(\alpha, \mu_i) distribution. Default is nb.size=1.

penalty: group regularization method to use on the groups of coefficients. The options are "gLASSO", "gSCAD", and "gMCP".

weights: group-specific, nonnegative weights for the penalty. Default is to use the square roots of the group sizes.

taper: tapering term in the group SCAD and group MCP penalties. Ignored if penalty="gLASSO".

nlambda: number of regularization parameters \lambda. Default is nlambda=100.

lambda: grid of regularization parameters. If not specified, the function generates its own grid of nlambda values.

max.iter: maximum number of iterations in the algorithm. Default is max.iter=10000.

tol: convergence threshold for the algorithm. Default is tol=1e-4.
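For example, the default weights for the eight groups used in the Examples below are the square roots of the group sizes, which could be computed by hand as:

## Default penalty weights: square roots of the group sizes
groups = factor(c("A","A","A","B","B","B","C","C","D","E","E","F","G","H","H","H"))
sqrt(table(groups))
##        A        B        C        D        E        F        G        H
## 1.732051 1.732051 1.414214 1.000000 1.414214 1.000000 1.000000 1.732051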
Value
The function returns a list containing the following components:
lambda: grid of regularization parameters used to fit the models.

beta0: vector of estimated intercepts \beta_0. The kth entry corresponds to the kth entry in lambda.

beta: matrix of estimated regression coefficients. The kth column corresponds to the kth entry in lambda.

mu.pred: matrix of predicted mean response values \mu = E(Y.test) based on the test data X.test. The kth column corresponds to the kth entry in lambda.

classifications: matrix of classifications of the G groups as nonzero or zero. The kth column corresponds to the kth entry in lambda.

loss: vector of the loss function evaluated at each of the fitted models. The kth entry corresponds to the kth entry in lambda.
References
Breheny, P. and Huang, J. (2015). "Group descent algorithms for nonconvex penalized linear and logistic regression models with grouped predictors." Statistics and Computing, 25:173-187.
Wang, H. and Leng, C. (2007). "Unified LASSO estimation by least squares approximation." Journal of the American Statistical Association, 102:1039-1048.
Examples
## Load the package
library(sparseGAM)

## Generate training data
set.seed(1234)
X = matrix(runif(100*16), nrow=100)
n = dim(X)[1]
groups = c("A","A","A","B","B","B","C","C","D","E","E","F","G","H","H","H")
groups = as.factor(groups)
true.beta = c(-2,2,2,0,0,0,0,0,0,1.5,-1.5,0,0,-2,2,2)
## Generate count responses from negative binomial regression
eta = crossprod(t(X), true.beta)   # linear predictor, i.e. X %*% true.beta
y = rnbinom(n, size=1, mu=exp(eta))
## Generate test data
n.test = 50
X.test = matrix(runif(n.test*16), nrow=n.test)
## Fit negative binomial regression models with the group SCAD penalty
nb.mod = grpreg.nb(y, X, X.test, groups, penalty="gSCAD")
## Tuning parameters used to fit models
nb.mod$lambda
# Predicted n.test-dimensional vectors mu=E(Y.test) based on test data, X.test.
# The kth column of 'mu.pred' corresponds to the kth entry in 'lambda.'
nb.mod$mu.pred
# Classifications of the 8 groups. The kth column of 'classifications'
# corresponds to the kth entry in lambda.
nb.mod$classifications
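The other penalties use the same interface. For instance, a group MCP fit on the same data (a sketch reusing the objects above):

## Fit negative binomial regression models with the group MCP penalty
nb.mcp.mod = grpreg.nb(y, X, X.test, groups, penalty="gMCP")
## Estimated intercepts and regression coefficients; the kth column of
## 'beta' corresponds to the kth entry in 'lambda'
nb.mcp.mod$beta0
nb.mcp.mod$beta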