SFGAM {sparseGAM} | R Documentation
Sparse Frequentist Generalized Additive Models
Description
This function implements sparse frequentist generalized additive models (GAMs) with the group LASSO, group SCAD, and group MCP penalties. Let y_i denote the ith response and x_i denote a p-dimensional vector of covariates. GAMs are of the form

g(E(y_i)) = \beta_0 + \sum_{j=1}^{p} f_j(x_{ij}), i = 1, ..., n,

where g is a monotone increasing link function. The identity link is used for Gaussian regression, the logit link for binomial regression, and the log link for Poisson, negative binomial, and gamma regression. The univariate functions f_j are estimated using linear combinations of B-spline basis functions. Under group regularization of the basis coefficients, some of the univariate functions will be estimated as \hat{f}_j(x_j) = 0, depending on the size of the regularization parameter \lambda.
For implementation of sparse Bayesian GAMs with the SSGL penalty, use the SBGAM
function.
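To illustrate the basis representation, the sketch below expands a single covariate into df B-spline basis functions and forms one univariate function as a linear combination of them. It uses the base splines package and made-up coefficients; it is not part of sparseGAM.

## Each f_j is represented as a linear combination of df B-spline basis
## functions: f_j(x_j) = sum_k beta_jk B_k(x_j).
library(splines)
x = runif(100)                 # a single covariate
B = bs(x, df=6)                # 100 x 6 B-spline basis matrix
beta.j = rnorm(6)              # hypothetical basis coefficients
f.j = B %*% beta.j             # f_j evaluated at the observed x
plot(x[order(x)], f.j[order(x)], type="l", xlab="x", ylab="f(x)")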
Usage
SFGAM(y, X, X.test, df=6,
family=c("gaussian","binomial", "poisson", "negativebinomial","gamma"),
nb.size=1, gamma.shape=1, penalty=c("gLASSO","gMCP","gSCAD"), taper,
nlambda=100, lambda, max.iter=10000, tol=1e-4)
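For example, a minimal Gaussian fit needs only the response, the design matrix, and (optionally) test data; the remaining arguments keep the defaults shown above. Here y, X, and X.test are placeholder data objects:

fit = SFGAM(y, X, X.test, family="gaussian", penalty="gLASSO")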
Arguments
y: vector of n responses.

X: n x p design matrix, where the jth column corresponds to the jth covariate.

X.test: design matrix of test data with p columns. If missing, predictions are computed on the training data X.

df: number of B-spline basis functions to use in each basis expansion. Default is df=6.

family: exponential dispersion family. Allows for "gaussian", "binomial", "poisson", "negativebinomial", and "gamma".

nb.size: known size parameter for negative binomial regression. Default is nb.size=1. Ignored if family is not "negativebinomial".

gamma.shape: known shape parameter for gamma regression. Default is gamma.shape=1. Ignored if family is not "gamma".

penalty: group regularization method to use on the groups of basis coefficients. The options are "gLASSO", "gMCP", and "gSCAD".

taper: tapering term in the group SCAD and group MCP penalties that controls how rapidly the penalty tapers off. Default is taper=4 for group SCAD and taper=3 for group MCP. Ignored if penalty="gLASSO".

nlambda: number of regularization parameters in the grid. Default is nlambda=100.

lambda: grid of regularization parameters. If not specified, the program chooses the grid automatically.

max.iter: maximum number of iterations in the algorithm. Default is max.iter=10000.

tol: convergence threshold for the algorithm. Default is tol=1e-4.
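As an illustration of the family-specific arguments, a negative binomial fit must supply nb.size, and the group SCAD taper can be set explicitly. In this sketch, y.counts and X are placeholder data objects:

nb.fit = SFGAM(y.counts, X, family="negativebinomial", nb.size=2,
               penalty="gSCAD", taper=4)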
Value
The function returns a list containing the following components:
lambda: vector of the L regularization parameters used to fit the model, in descending order.

f.pred: List of L matrices of predicted function evaluations, where the kth matrix corresponds to the kth regularization parameter in lambda. The jth column of each matrix is the estimate of the jth function f_j evaluated on the test data in X.test. (When a single regularization parameter is supplied, f.pred is a single matrix, as in the example below.)

mu.pred: matrix of predicted mean response values based on the test data in X.test. The kth column corresponds to the kth regularization parameter in lambda.

classifications: p x L matrix of classifications, where "1" indicates that the covariate was selected and "0" indicates that it was not. The kth column corresponds to the kth regularization parameter in lambda.

beta0: vector of L estimated intercepts. The kth entry corresponds to the kth regularization parameter in lambda.

beta: matrix of estimated basis coefficients. The kth column corresponds to the kth regularization parameter in lambda.

loss: vector of either the residual sum of squares ("gaussian") or the negative log-likelihood (all other families) of the fitted model. The kth entry corresponds to the kth regularization parameter in lambda.
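The classifications component can be used to read off which covariates survive at any point of the regularization grid. A sketch, assuming fit is an object returned by SFGAM over a grid of regularization parameters:

k = 50                                      # inspect the 50th lambda in the grid
selected = which(fit$classifications[, k] == 1)
selected                                    # indices of covariates selected at lambda[k]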
References
Breheny, P. and Huang, J. (2015). "Group descent algorithms for nonconvex penalized linear and logistic regression models with grouped predictors." Statistics and Computing, 25:173-187.
Wang, H. and Leng, C. (2007). "Unified LASSO estimation by least squares approximation." Journal of the American Statistical Association, 102:1039-1048.
Yuan, M. and Lin, Y. (2006). "Model selection and estimation in regression with grouped variables." Journal of the Royal Statistical Society: Series B (Statistical Methodology), 68:49-67.
Examples
## Load package
library(sparseGAM)

## Generate data
set.seed(12345)
X = matrix(runif(100*20), nrow=100)
n = dim(X)[1]
y = 5*sin(2*pi*X[,1])-5*cos(2*pi*X[,2]) + rnorm(n)
## Test data with 50 observations
X.test = matrix(runif(50*20), nrow=50)
## K-fold cross-validation with group MCP penalty
cv.mod = cv.SFGAM(y, X, family="gaussian", penalty="gMCP")
## Plot CVE curve
plot(cv.mod$lambda, cv.mod$cve, type="l", xlab="lambda", ylab="CVE")
## lambda which minimizes cross-validation error
lambda.opt = cv.mod$lambda.min
## Fit a single model with lambda.opt
SFGAM.mod = SFGAM(y, X, X.test, penalty="gMCP", lambda=lambda.opt)
## Classifications
SFGAM.mod$classifications
## Predicted function evaluations on test data
f.pred = SFGAM.mod$f.pred
## Plot estimated first function
x1 = X.test[,1]
f1.hat = f.pred[,1]
## Plot x_1 against f_1(x_1)
plot(x1[order(x1)], f1.hat[order(x1)], xlab=expression(x[1]),
ylab=expression(f[1](x[1])))
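## Additional sketch (not in the original example): the second estimated
## function can be plotted the same way. Columns 3-20 of X are noise
## covariates, so their estimated functions should be identically zero.
x2 = X.test[,2]
f2.hat = f.pred[,2]
plot(x2[order(x2)], f2.hat[order(x2)], xlab=expression(x[2]),
     ylab=expression(f[2](x[2])))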