l0ara {l0ara}    R Documentation
Fit a generalized linear model with L0 penalty
Description
An adaptive ridge algorithm for feature selection with the L0 penalty.
Usage
l0ara(x, y, family, lam, standardize, maxit, eps)
Arguments
x: Input matrix, of dimension nobs x nvars; each row is an observation vector.
y: Response variable. Quantitative for family = "gaussian"; binary (0/1) for family = "logit"; non-negative counts for family = "poisson".
family: Response type (see above).
lam: A user-supplied lambda value. To select a model by the AIC or BIC criterion, set lam to 2 or log(n), respectively (see Details).
standardize: Logical flag for data normalization. If TRUE, each column of x is standardized before fitting.
maxit: Maximum number of passes over the data. Default value is 10^3.
eps: Convergence threshold. Default value is 1e-04.
Details
The sequence of models indexed by the parameter lambda is fit using the adaptive ridge algorithm. The objective function for generalized linear models (including the families above) is defined to be

-\log L(\beta) + (\lambda/2)\,\|\beta\|_0,

where \|\beta\|_0 is the number of non-zero elements in \beta. To select the "best" model by the AIC or BIC criterion, set lambda to 2 or log(n), respectively. The adaptive ridge algorithm is developed to approximate L0-penalized generalized linear models by sequential optimization and is efficient for high-dimensional data.
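For intuition, the sketch below shows one way an adaptive ridge iteration can approximate the L0 penalty in the Gaussian case: each pass solves a weighted ridge problem, then re-weights each coefficient by w_j = 1/(beta_j^2 + delta^2), so that coefficients shrinking toward zero are penalized ever harder. The weight update follows the adaptive ridge literature and is illustrative only; it is not taken from the package internals.

# Illustrative sketch only (not the package's implementation):
# adaptive ridge for the Gaussian case.  Each pass solves
#   (X'X + lam * diag(w)) beta = X'y
# and then re-weights w_j = 1 / (beta_j^2 + delta^2), mimicking
# the L0 objective -(log likelihood) + (lam/2) * ||beta||_0.
adaptive_ridge_gaussian <- function(x, y, lam, maxit = 1e3, eps = 1e-4) {
  p <- ncol(x)
  beta <- rep(0, p)
  w <- rep(1, p)                # start from plain ridge
  xtx <- crossprod(x)           # X'X
  xty <- drop(crossprod(x, y))  # X'y
  for (i in seq_len(maxit)) {
    beta_new <- drop(solve(xtx + lam * diag(w, p), xty))
    if (max(abs(beta_new - beta)) < eps) {
      beta <- beta_new
      break
    }
    beta <- beta_new
    w <- 1 / (beta^2 + eps^2)   # adaptive re-weighting (delta = eps here)
  }
  beta
}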
Value
An object with S3 class "l0ara" containing:
beta: A vector of coefficients.
df: Number of nonzero coefficients.
iter: Number of iterations.
lambda: The lambda used.
x: Design matrix.
y: Response variable.
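These components can be read directly off the fitted object; a small self-contained illustration (component names taken from the list above):

# access components of the returned "l0ara" object
set.seed(1)
x0 <- matrix(rnorm(100 * 10), 100, 10)
y0 <- x0[, 1] - 2 * x0[, 2] + rnorm(100)
fit <- l0ara(x0, y0, family = "gaussian", lam = log(100))  # BIC-style lam
fit$df         # number of nonzero coefficients
fit$lambda     # the lambda used
head(fit$beta) # estimated coefficients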
Author(s)
Wenchuan Guo <wguo007@ucr.edu>, Shujie Ma <shujie.ma@ucr.edu>, Zhenqiu Liu <Zhenqiu.Liu@cshs.org>
See Also
cv.l0ara, predict.l0ara, coef.l0ara, plot.l0ara methods.
Examples
# Linear regression
# Generate design matrix and response variable
n <- 100
p <- 40
x <- matrix(rnorm(n*p), n, p)
beta <- c(1,0,2,3,rep(0,p-4))
noise <- rnorm(n)
y <- x %*% beta + noise
# fit sparse linear regression using BIC
res.gaussian <- l0ara(x, y, family = "gaussian", lam = log(n))
# print the fitted model
print(res.gaussian)
# predict for new observations
predict(res.gaussian, newx = matrix(rnorm(3 * p), 3, p))
# extract coefficients
coef(res.gaussian)
# Logistic regression
# Generate design matrix and response variable
n <- 100
p <- 40
x <- matrix(rnorm(n*p), n, p)
beta <- c(1,0,2,3,rep(0,p-4))
prob <- exp(x %*% beta) / (1 + exp(x %*% beta))
y <- rbinom(n, 1, prob)
# fit sparse logistic regression
res.logit <- l0ara(x, y, family = "logit", lam = 0.7)
# print the fitted model
print(res.logit)
# predict for new observations
predict(res.logit, newx = matrix(rnorm(3 * p), 3, p))
# extract coefficients
coef(res.logit)
# Poisson regression
# Generate design matrix and response variable
n <- 100
p <- 40
x <- matrix(rnorm(n*p), n, p)
beta <- c(1,0,0.5,0.3,rep(0,p-4))
mu <- exp(x %*% beta)
y <- rpois(n, mu)
# fit sparse Poisson regression using AIC
res.pois <- l0ara(x, y, family = "poisson", lam = 2)
# print the fitted model
print(res.pois)
# predict for new observations
predict(res.pois, newx = matrix(rnorm(3 * p), 3, p))
# extract coefficients
coef(res.pois)
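# Tuning lam by cross-validation
# Rather than fixing lam at 2 (AIC) or log(n) (BIC), lam can be tuned with
# the companion function cv.l0ara (see See Also).  Sketch only: the lam
# argument is assumed to accept a vector of candidates; check ?cv.l0ara.
lam.seq <- exp(seq(log(0.01), log(10), length.out = 20))
cv.fit <- cv.l0ara(x, y, family = "poisson", lam = lam.seq)
plot(cv.fit)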