nb_grpreg {SSGL}    R Documentation

Group-regularized Negative Binomial Regression

Description

This function implements group-regularized negative binomial regression with a known size parameter \alpha and the log link. In negative binomial regression, we assume that y_i \sim NB(\alpha, \mu_i), where

f(y_i | \alpha, \mu_i ) = \frac{\Gamma(y_i+\alpha)}{y_i! \Gamma(\alpha)} (\frac{\mu_i}{\mu_i+\alpha})^{y_i}(\frac{\alpha}{\mu_i +\alpha})^{\alpha}, y_i = 0, 1, 2, ...

Then E(y_i) = \mu_i, and we relate \mu_i to a set of p covariates x_i through the log link,

\log(\mu_i) = \beta_0 + x_i^T \beta, i=1,..., n
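As an illustration only, data from this model can be simulated in base R with rnbinom(), which accepts the (size, mu) parameterization used above; the dimensions and coefficient values below are arbitrary choices for this sketch.

## Illustrative sketch: y_i ~ NB(alpha, mu_i) with log(mu_i) = beta_0 + x_i^T beta
set.seed(1)
n = 200; p = 5
alpha = 1                                       # known size parameter (nb_size)
X = matrix(rnorm(n*p), nrow=n)
beta0_true = 0.5
beta_true = c(1, -1, 0, 0, 0.5)
mu = exp(beta0_true + drop(X %*% beta_true))    # log link
Y = rnbinom(n, size=alpha, mu=mu)               # E(Y_i) = mu_i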

If the covariates in each x_i are grouped according to known groups g=1, ..., G, then this function can estimate some of the G groups of coefficients as all zero, depending on the amount of regularization. Our implementation of group-regularized negative binomial regression is based on the least squares approximation approach of Wang and Leng (2007), and hence the function does not allow the total number of covariates p to exceed the sample size n.
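The following is a rough sketch of the least squares approximation idea, not the package's internal code: the unpenalized negative binomial MLE is computed with the size parameter held fixed, and the log-likelihood is replaced by its quadratic surrogate around that MLE, to which the group penalty is then applied. The sketch continues from the simulated data above and assumes the MASS package for the negative.binomial() family.

## Sketch of the Wang and Leng (2007) LSA surrogate (not the package's internal code)
library(MASS)
fit_mle = glm(Y ~ X, family=negative.binomial(theta=alpha))   # size alpha held fixed
beta_hat = coef(fit_mle)[-1]                # unpenalized slope estimates
info_hat = solve(vcov(fit_mle))[-1, -1]     # estimated information matrix for the slopes
## Quadratic surrogate of the negative log-likelihood (up to constants and scaling)
lsa_obj = function(beta) {
  d = beta - beta_hat
  drop(t(d) %*% info_hat %*% d) / (2*n)
}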

In addition, this function has the option of returning the generalized information criterion (GIC) of Fan and Tang (2013) for each regularization parameter in the grid lambda. The GIC can be used for model selection and serves as a useful alternative to cross-validation.
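For reference, one commonly used form of the Fan and Tang (2013) GIC is sketched below; the exact definition computed internally by nb_grpreg may differ.

## GIC(lambda) in the spirit of Fan and Tang (2013): smaller is better.
## loglik = maximized log-likelihood at lambda, df = number of nonzero coefficients.
gic = function(loglik, df, n, p) {
  a_n = log(log(n)) * log(p)   # model-size penalty recommended for high dimensions
  (-2*loglik + a_n*df) / n
}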

Usage

nb_grpreg(Y, X, groups, X_test, nb_size=1, penalty=c("gLASSO","gSCAD","gMCP"),
          group_weights, taper, n_lambda=100, lambda, 
          max_iter=10000, tol=1e-4, return_GIC=TRUE)

Arguments

Y

n \times 1 vector of nonnegative integer (count) responses for training data.

X

n \times p design matrix for training data, where the jth column corresponds to the jth overall feature.

groups

p-dimensional vector of group labels. The jth entry in groups should contain either the group number or the factor level name that the feature in the jth column of X belongs to. groups must be either a vector of integers or factors.
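For example, with p = 7 features falling into G = 3 groups, either of the following hypothetical specifications is acceptable:

groups = c(1, 1, 2, 2, 2, 3, 3)                       # integer group labels
groups = factor(c("A","A","B","B","B","C","C"))       # or factor group labels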

X_test

n_{test} \times p design matrix for test data to calculate predictions. X_test must have the same number of columns as X, but not necessarily the same number of rows. If no test data is provided, the function automatically sets X_test=X and returns in-sample predictions.

nb_size

known size parameter \alpha in NB(\alpha,\mu_i) distribution for the responses. Default is nb_size=1.

penalty

group regularization method to use on the groups of regression coefficients. The options are "gLASSO", "gSCAD", and "gMCP". To implement negative binomial regression with the SSGL penalty, use the SSGL function.

group_weights

group-specific, nonnegative weights for the penalty. Default is to use the square roots of the group sizes.

taper

tapering term \gamma in group SCAD and group MCP controlling how rapidly the penalty tapers off. Default is taper=4 for group SCAD and taper=3 for group MCP. Ignored if "gLASSO" is specified as the penalty.

n_lambda

number of regularization parameters L. Default is n_lambda=100.

lambda

grid of L regularization parameters. The user may specify either a scalar or a vector. If the user does not provide this, the program chooses the grid automatically.
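For instance, a user-supplied grid could be a decreasing, log-spaced sequence (hypothetical values):

lambda = exp(seq(log(1), log(0.01), length.out=50))   # custom grid of 50 values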

max_iter

maximum number of iterations in the algorithm. Default is max_iter=10000.

tol

convergence threshold for the algorithm. Default is tol=1e-4.

return_GIC

Boolean variable for whether or not to return the GIC. Default is return_GIC=TRUE.

Value

The function returns a list containing the following components:

lambda

L \times 1 vector of regularization parameters lambda used to fit the model. lambda is displayed in descending order.

beta

p \times L matrix of estimated regression coefficients. The kth column in beta corresponds to the kth regularization parameter in lambda.

beta0

L \times 1 vector of estimated intercepts. The kth entry in beta0 corresponds to the kth regularization parameter in lambda.

classifications

G \times L matrix of classifications, where G is the number of groups. An entry of "1" indicates that the group was classified as nonzero, and an entry of "0" indicates that the group was classified as zero. The kth column of classifications corresponds to the kth regularization parameter in lambda.

Y_pred

n_{test} \times L matrix of predicted mean response values \mu_{test} = E(Y_{test}) based on the test data in X_test (or training data X if no argument was specified for X_test). The kth column in Y_pred corresponds to the predictions for the kth regularization parameter in lambda.

GIC

L \times 1 vector of GIC values. The kth entry of GIC corresponds to the kth regularization parameter in lambda. This is not returned if return_GIC=FALSE.

lambda_min

The value in lambda that minimizes GIC. This is not returned if return_GIC=FALSE.

min_index

The index of lambda_min in lambda. This is not returned if return_GIC=FALSE.

References

Breheny, P. and Huang, J. (2015). "Group descent algorithms for nonconvex penalized linear and logistic regression models with grouped predictors." Statistics and Computing, 25:173-187.

Fan, Y. and Tang, C. Y. (2013). "Tuning parameter selection in high dimensional penalized likelihood." Journal of the Royal Statistical Society: Series B (Statistical Methodology), 75:531-552.

Wang, H. and Leng, C. (2007). "Unified LASSO estimation by least squares approximation." Journal of the American Statistical Association, 102:1039-1048.

Yuan, M. and Lin, Y. (2006). "Model selection and estimation in regression with grouped variables." Journal of the Royal Statistical Society: Series B (Statistical Methodology), 68:49-67.

Examples

## Generate data
set.seed(1234)
X = matrix(runif(100*15), nrow=100)
n = dim(X)[1]
groups = c("A","A","A","A","B","B","B","B","C","C","D","D","E","E","E")
groups = as.factor(groups)
beta_true = c(-1.5,1.5,-1.5,1.5,0,0,0,0,0,0,2,-2,0,0,0)

## Generate count responses from negative binomial regression
eta = X %*% beta_true
Y = rnbinom(n, size=1, mu=exp(eta))

## Generate test data
n_test = 50
X_test = matrix(runif(n_test*15), nrow=n_test)
  
## Fit negative binomial regression models with the group MCP penalty
nb_mod = nb_grpreg(Y, X, groups, X_test, penalty="gMCP")
  
## Tuning parameters used to fit models 
nb_mod$lambda
  
# Predicted n_test-dimensional vectors mu=E(Y_test) based on the test data X_test.
# The kth column of 'Y_pred' corresponds to the kth entry in 'lambda'.
nb_mod$Y_pred
  
# Classifications of the 5 groups. The kth column of 'classifications'
# corresponds to the kth entry in 'lambda'.
nb_mod$classifications

## Plot lambda vs. GIC
plot(nb_mod$lambda, nb_mod$GIC, type='l')

## Model selection with the lambda that minimizes GIC
nb_mod$lambda_min
nb_mod$min_index 
nb_mod$classifications[, nb_mod$min_index]
nb_mod$beta[, nb_mod$min_index]
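
## As a further usage sketch (indexing the outputs documented above), the test-set
## predictions can be extracted at the GIC-minimizing lambda
Y_pred_min = nb_mod$Y_pred[, nb_mod$min_index]
head(Y_pred_min)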
