SBGAM {sparseGAM}    R Documentation
Sparse Bayesian Generalized Additive Models
Description
This function implements sparse Bayesian generalized additive models (GAMs) with the spike-and-slab group lasso (SSGL) penalty. Let y_i denote the ith response and x_i denote a p-dimensional vector of covariates. GAMs are of the form,

g(E(y_i)) = \beta_0 + \sum_{j=1}^{p} f_j(x_{ij}), i = 1, ..., n,

where g is a monotone increasing link function. The identity link function is used for Gaussian regression, the logit link is used for binomial regression, and the log link is used for Poisson, negative binomial, and gamma regression. With the SSGL penalty, some of the univariate functions f_j(x_j) will be estimated as \hat{f}_j(x_j) = 0, depending on the size of the spike hyperparameter \lambda_0 in the SSGL prior. The functions f_j(x_j), j = 1, ..., p, are modeled using B-spline basis expansions.
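The B-spline basis expansion described above can be sketched in base R using splines::bs (which is not part of sparseGAM itself); the covariate x and the coefficient vector theta below are illustrative assumptions, not quantities produced by SBGAM:

```r
library(splines)  # base R package providing bs()

set.seed(1)
n  <- 100
x  <- runif(n)   # one covariate x_j (illustrative)
df <- 6          # number of basis functions, as in the df argument of SBGAM

# Each univariate function f_j(x_j) is represented as B %*% theta_j, where B
# is the n x df B-spline basis matrix evaluated at x. Under the SSGL prior,
# the whole coefficient group theta_j can be shrunk to zero, giving f_j = 0.
B <- bs(x, df = df)

theta <- rnorm(df)    # hypothetical basis coefficients
f_j   <- B %*% theta  # one realization of f_j evaluated at the points in x

dim(B)  # n x df basis matrix
```

Estimating the p groups of basis coefficients jointly, with group-wise shrinkage, is what distinguishes the SSGL approach from fitting each f_j separately.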
There is another implementation of sparse Gaussian GAMs with the SSGL penalty available at https://github.com/jantonelli111/SSGL, which uses natural cubic splines as the basis functions. This package, sparseGAM, uses B-spline basis functions and also implements sparse GAMs with the SSGL penalty for binomial, Poisson, negative binomial, and gamma regression.

For sparse frequentist GAMs with the group LASSO, group SCAD, and group MCP penalties, use the SFGAM function.
Usage
SBGAM(y, X, X.test, df=6,
family=c("gaussian","binomial","poisson","negativebinomial","gamma"),
nb.size=1, gamma.shape=1, nlambda0=20, lambda0, lambda1, a, b,
max.iter=100, tol = 1e-6, print.iter=TRUE)
Arguments
y
    n × 1 vector of responses for the training data.

X
    n × p design matrix of training covariates, where the jth column corresponds to the jth covariate.

X.test
    n.test × p design matrix of test data at which to evaluate the estimated functions and predictions. X.test must have the same number of columns as X.

df
    number of B-spline basis functions to use in each basis expansion. Default is df=6.

family
    exponential dispersion family. Allows for "gaussian", "binomial", "poisson", "negativebinomial", and "gamma". Default is "gaussian".

nb.size
    known size parameter for the negative binomial distribution; used only if family="negativebinomial". Default is nb.size=1.

gamma.shape
    known shape parameter for the gamma distribution; used only if family="gamma". Default is gamma.shape=1.

nlambda0
    number of spike hyperparameters lambda0 to tune. Default is nlambda0=20.

lambda0
    grid of spike hyperparameters. If not specified, the function chooses a grid of nlambda0 values automatically.

lambda1
    slab hyperparameter in the SSGL prior.

a
    shape hyperparameter for the Beta(a,b) prior on the mixing proportion in the SSGL prior.

b
    shape hyperparameter for the Beta(a,b) prior on the mixing proportion in the SSGL prior.

max.iter
    maximum number of iterations in the algorithm. Default is max.iter=100.

tol
    convergence threshold for the algorithm. Default is tol=1e-6.

print.iter
    Boolean variable for whether or not to print the current value of lambda0 as the algorithm runs. Default is print.iter=TRUE.
Value
The function returns a list containing the following components:
lambda0
    L × 1 vector of the spike hyperparameters lambda0 used to fit the model, in descending order.

f.pred
    List of L matrices, where the kth matrix is the n.test × p matrix of estimated function evaluations \hat{f}_j(x_j), j = 1, ..., p, at the test points X.test for the kth spike hyperparameter in lambda0.

mu.pred
    n.test × L matrix of predicted mean response values on the test data, where the kth column corresponds to the kth spike hyperparameter in lambda0.

classifications
    p × L matrix of binary classifications, where an entry of 1 indicates that the corresponding covariate was selected (\hat{f}_j nonzero) and 0 indicates that it was not; the kth column corresponds to the kth spike hyperparameter in lambda0.

beta0
    L × 1 vector of estimated intercepts, one per spike hyperparameter in lambda0.

beta
    matrix of estimated B-spline basis coefficients, with one column per spike hyperparameter in lambda0.

loss
    vector of either the residual sum of squares (family="gaussian") or the negative log-likelihood (all other families) of the fitted model, one entry per spike hyperparameter in lambda0.
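Because loss contains one value per spike hyperparameter on the grid, a simple way to settle on a single fitted model is to minimize it. A minimal sketch with made-up loss values (which.min is plain R, not a sparseGAM helper):

```r
# Illustrative loss values, one per spike hyperparameter lambda0 on the grid
# (in practice, take loss from the list returned by SBGAM)
loss <- c(120.4, 98.7, 95.2, 101.9, 110.3)

# Index of the model with the smallest loss; the corresponding columns of
# mu.pred and classifications, and element of f.pred, give the chosen model
best <- which.min(loss)
best  # 3
```

The same index can then be used to pull out f.pred[[best]] and classifications[, best] for the selected model.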
References
Bai, R. (2021). "Spike-and-slab group lasso for consistent Bayesian estimation and variable selection in non-Gaussian generalized additive models." arXiv preprint arXiv:2007.07021.
Bai, R., Moran, G. E., Antonelli, J. L., Chen, Y., and Boland, M.R. (2021). "Spike-and-slab group lassos for grouped regression and sparse generalized additive models." Journal of the American Statistical Association, in press.
Examples
## Generate data
set.seed(12345)
X = matrix(runif(100*5), nrow=100)
n = dim(X)[1]
y = 3*sin(2*pi*X[,1])-3*cos(2*pi*X[,2]) + rnorm(n)
## Test data with 30 observations
X.test = matrix(runif(30*5), nrow=30)
## Fit sparse Bayesian generalized additive model to data with the SSGL penalty
## and 5 spike hyperparameters
SBGAM.mod = SBGAM(y, X, X.test, family="gaussian", lambda0=seq(from=50,to=10,by=-10))
## The model corresponding to the 1st spike hyperparameter
SBGAM.mod$lambda0[1]
SBGAM.mod$classifications[,1]
## Plot first function f_1(x_1) in the 1st model
x1 = X.test[,1]
## Estimates of all 5 functions evaluated on the test data
f.hat = SBGAM.mod$f.pred[[1]]
## Extract estimates of f_1
f1.hat = f.hat[,1]
## Plot X_1 against f_1(x_1)
plot(x1[order(x1)], f1.hat[order(x1)], xlab=expression(x[1]),
ylab=expression(f[1](x[1])))