glamlasso {glamlasso}    R Documentation
Penalization in Large Scale Generalized Linear Array Models
Description
Efficient design matrix free procedure for fitting large scale penalized 2 or 3-dimensional generalized linear array models (GLAM). It is also possible to fit an additional non-tensor structured component, e.g. an intercept, but this can substantially reduce the computational efficiency of the procedure. Currently both the LASSO penalty and the SCAD penalty are implemented, as are the Gaussian model with identity link, the Binomial model with logit link, the Poisson model with log link and the Gamma model with log link. The underlying algorithm combines gradient descent and proximal gradient steps (the gdpg algorithm), see Lund et al., 2017.
Usage
glamlasso(X,
          Y,
          Z = NULL,
          family = "gaussian",
          penalty = "lasso",
          intercept = FALSE,
          weights = NULL,
          betainit = NULL,
          alphainit = NULL,
          nlambda = 100,
          lambdaminratio = 1e-04,
          lambda = NULL,
          penaltyfactor = NULL,
          penaltyfactoralpha = NULL,
          reltolinner = 1e-07,
          reltolouter = 1e-04,
          maxiter = 15000,
          steps = 1,
          maxiterinner = 3000,
          maxiterouter = 25,
          btinnermax = 100,
          btoutermax = 100,
          iwls = "exact",
          nu = 1)
Arguments
X |
A list containing the tensor components (2 or 3) of the tensor design matrix. These are matrices of sizes n_i × p_i. |
Y |
The response values, an array of size n_1 × ⋯ × n_d. |
Z |
The non tensor structured part of the design matrix. A matrix of size n_1 ⋯ n_d × q. Default is NULL. |
family |
A string specifying the model family (essentially the response distribution). Possible values are "gaussian", "binomial", "poisson", "gamma". |
penalty |
A string specifying the penalty. Possible values are "lasso", "scad". |
intercept |
Logical variable indicating if the model includes an intercept. When intercept = TRUE, the first column in the non-tensor design component Z is all ones. Default is FALSE. |
weights |
Observation weights, an array of size n_1 × ⋯ × n_d. |
betainit |
The initial parameter values. Default is NULL in which case all parameters are initialized at zero. |
alphainit |
A q vector containing the initial parameter values for the non-tensor component. Default is NULL in which case all parameters are initialized at zero. |
nlambda |
The number of lambda values used in the regularization path, when lambda is not specified. |
lambdaminratio |
The smallest value for lambda, given as a fraction of lambda_max, the (data derived) smallest value for which all penalized coefficients are zero. |
lambda |
The sequence of penalty parameters for the regularization path. |
penaltyfactor |
An array of size p_1 × ⋯ × p_d. Is multiplied with each element in lambda to allow differential penalization of the tensor coefficients. |
penaltyfactoralpha |
A q vector. Is multiplied with each element in lambda to allow differential penalization of the non-tensor coefficients. |
reltolinner |
The convergence tolerance for the inner loop. |
reltolouter |
The convergence tolerance for the outer loop. |
maxiter |
The maximum number of inner iterations allowed in total for each lambda value, summing over all outer iterations for that lambda. |
steps |
The number of steps used in the multi-step adaptive lasso algorithm for non-convex penalties. Automatically set to 1 when penalty = "lasso". |
maxiterinner |
The maximum number of inner iterations allowed for each outer iteration. |
maxiterouter |
The maximum number of outer iterations allowed for each lambda. |
btinnermax |
Maximum number of backtracking steps allowed in each inner iteration. Default is btinnermax = 100. |
btoutermax |
Maximum number of backtracking steps allowed in each outer iteration. Default is btoutermax = 100. |
iwls |
A string indicating whether to use the exact IWLS weight matrix or a Kronecker structured approximation to it. |
nu |
A number between 0 and 1 that controls the step size in the proximal algorithm (inner loop) by scaling the upper bound on the Lipschitz constant of the gradient (see Lund et al., 2017). nu = 1 gives the conservative step size implied by this bound; values closer to 0 give larger initial step sizes, with backtracking employed if the objective does not decrease. |
Details
Consider a (two component) generalized linear model (GLM)

g(μ) = Xβ + Zα =: η.

Here g is a link function, μ is an n × 1 vector containing the mean of the response variable Y, Z is an n × q matrix and X an n × p matrix with tensor structure

X = X_d ⊗ ⋯ ⊗ X_1,

where X_1, …, X_d are the marginal n_i × p_i design matrices (tensor factors) such that p = p_1 ⋯ p_d and n = n_1 ⋯ n_d. Then β is the p × 1 parameter associated with the tensor component X and α the q × 1 parameter associated with the non-tensor component Z, e.g. the intercept.
Using the generalized linear array model (GLAM) framework the model equation is

g(M) = ρ(X_d, ρ(X_{d-1}, …, ρ(X_1, B))),

where ρ is the so called rotated H-transform and B is the array version of β. See Currie et al., 2006 for more details.
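To make the operation ρ concrete, below is a minimal sketch of a rotated H-transform. The package exports its own RH function (used in the Examples section); RH_sketch and the toy dimensions are purely illustrative assumptions.

## Illustrative sketch of the rotated H-transform rho(X, A) (not the
## package's RH implementation): apply X along the first dimension of the
## array A, then rotate that dimension to the back.
RH_sketch <- function(X, A) {
  d <- dim(A)
  M <- X %*% matrix(A, nrow = d[1])    # H-transform: X acts on dimension 1
  aperm(array(M, c(nrow(X), d[-1])), c(seq_along(d)[-1], 1)) # rotate dims
}

## For d = 2 the marginally computed linear predictor agrees with the one
## based on the explicit Kronecker design matrix (small toy dimensions):
X1 <- matrix(rnorm(12), 4, 3); X2 <- matrix(rnorm(6), 3, 2)
B <- matrix(rnorm(6), 3, 2)
all.equal(c(RH_sketch(X2, RH_sketch(X1, B))), c(kronecker(X2, X1) %*% c(B)))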
The log-likelihood is a function of θ := (β, α) through the linear predictor η, i.e. θ ↦ l(η(θ)). In the usual exponential family framework this can be expressed as

l(η(θ)) = ∑_{i=1}^n a_i (y_i ϑ(η_i(θ)) − b(ϑ(η_i(θ))))/ψ,

where ϑ, the canonical parameter map, is linked to the linear predictor via the identity g(b′(ϑ)) = η, with b the cumulant function. Here a_1, …, a_n are observation weights and ψ is the dispersion parameter.
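As a concrete instance, for the Poisson model with log link the canonical parameter map is the identity, the cumulant function is b(ϑ) = exp(ϑ) and ψ = 1, so the log-likelihood above takes a simple form. A minimal sketch (loglik_poisson is a hypothetical helper, not a package function):

## Sketch: the exponential family log-likelihood specialized to the Poisson
## model with log link, where theta(eta) = eta, b(theta) = exp(theta) and
## psi = 1 (the parameter free y! term is dropped).
loglik_poisson <- function(eta, y, a = rep(1, length(y))) {
  sum(a * (y * eta - exp(eta)))
}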
For d = 2 or d = 3, using only the marginal matrices X_1, …, X_d, the function glamlasso solves the penalized estimation problem

min_{β, α} −l(η(β, α)) + λ J(β, α),

for either the LASSO or SCAD penalty function J, in the GLAM setup, for a sequence of penalty parameters λ > 0. The underlying algorithm is based on an outer gradient descent loop and an inner proximal gradient based loop. We note that if J is not convex, as with the SCAD penalty, we use the multiple step adaptive lasso procedure to loop over the inner proximal algorithm, see Lund et al., 2017 for more details.
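For intuition, such a multi-step adaptive procedure repeatedly solves weighted lasso problems, with weights obtained from the derivative of the non-convex penalty. Below is a sketch of the standard SCAD penalty derivative, with the conventional shape parameter a = 3.7; this is illustrative only and not the package's internal code.

## Sketch of the SCAD penalty derivative that can be used to reweight the
## lasso penalty between the steps of a multi-step adaptive procedure.
scad_deriv <- function(beta, lambda, a = 3.7) {
  t <- abs(beta)
  lambda * ((t <= lambda) + pmax(a * lambda - t, 0) / ((a - 1) * lambda) * (t > lambda))
}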
Note that the package is optimized towards solving the estimation problem for q = 0. For q > 0 the user incurs a potentially substantial computational cost. In particular, it is not advisable to include a very large non-tensor component in the model (large q), and even adding an intercept to the model (q = 1) will reduce the computational efficiency of the procedure.
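To see why the design matrix free approach matters, compare the storage needed for the full tensor design matrix with that of its marginal factors. The dimensions below are hypothetical and chosen only for illustration:

## Storage (in bytes) for the full tensor design matrix versus its marginal
## factors, for hypothetical dimensions n_i = 100, p_i = 10 and d = 3.
n <- c(100, 100, 100); p <- c(10, 10, 10)
c(full = 8 * prod(n) * prod(p),  # dense double matrix: 8e+09 bytes (8 GB)
  marginal = 8 * sum(n * p))     # three marginal matrices: 24000 bytes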
Value
An object with S3 Class 'glamlasso'.
spec |
A string indicating the GLAM dimension (d = 2 or 3) and the model family. |
coef |
A p_1 ⋯ p_d × nlambda matrix containing the estimates of the model coefficients (β) for each lambda value. |
alpha |
A q × nlambda matrix containing the estimates of the non-tensor model coefficients (α) for each lambda value. |
lambda |
A vector containing the sequence of penalty values used in the estimation procedure. |
df |
The number of nonzero coefficients for each value of lambda. |
dimcoef |
A vector giving the dimension of the model coefficient array β. |
dimobs |
A vector giving the dimension of the observation (response) array Y. |
Iter |
A list with 4 items giving iteration counts: the total number of backtracking steps in the inner loop, the number of times backtracking was initiated in the inner loop, the total number of backtracking steps in the outer loop, and a matrix containing the number of inner iterations for each lambda value and each outer iteration. |
Author(s)
Adam Lund
Maintainer: Adam Lund, adam.lund@math.ku.dk
References
Lund, A., M. Vincent, and N. R. Hansen (2017). Penalized estimation in large-scale generalized linear array models. Journal of Computational and Graphical Statistics, 26(3), 709-724. https://doi.org/10.1080/10618600.2017.1279548
Currie, I. D., M. Durban, and P. H. C. Eilers (2006). Generalized linear array models with applications to multidimensional smoothing. Journal of the Royal Statistical Society, Series B, 68, 259-280. https://doi.org/10.1111/j.1467-9868.2006.00543.x
Examples
##size of example
n1 <- 65; n2 <- 26; n3 <- 13; p1 <- 12; p2 <- 6; p3 <- 4
##marginal design matrices (tensor components)
X1 <- matrix(rnorm(n1 * p1), n1, p1)
X2 <- matrix(rnorm(n2 * p2), n2, p2)
X3 <- matrix(rnorm(n3 * p3), n3, p3)
X <- list(X1, X2, X3)
##############gaussian example
Beta <- array(rnorm(p1 * p2 * p3) * rbinom(p1 * p2 * p3, 1, 0.1), c(p1, p2, p3))
Mu <- RH(X3, RH(X2, RH(X1, Beta)))
Y <- array(rnorm(n1 * n2 * n3, Mu), c(n1, n2, n3))
system.time(fit <- glamlasso(X, Y))
modelno <- length(fit$lambda)
plot(c(Beta), type = "h", ylim = range(Beta, fit$coef[, modelno]))
points(c(Beta))
lines(fit$coef[ , modelno], col = "red", type = "h")
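## Illustrative addition (not part of the original example): the estimated
## coefficients for one model can be reshaped back into array form using the
## returned dimensions, and df gives its number of nonzero coefficients.
Betahat <- array(fit$coef[, modelno], fit$dimcoef)
fit$df[modelno]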
###with non tensor design component Z
q <- 5
alpha <- matrix(rnorm(q)) * rbinom(q, 1, 0.5)
Z <- matrix(rnorm(n1 * n2 * n3 * q), n1 * n2 * n3, q)
Y <- array(rnorm(n1 * n2 * n3, Mu + array(Z %*% alpha, c(n1, n2, n3))), c(n1, n2, n3))
system.time(fit <- glamlasso(X, Y, Z))
modelno <- length(fit$lambda)
oldmfrow <- par()$mfrow
par(mfrow = c(1, 2))
plot(c(Beta), type = "l", ylim = range(Beta, fit$coef[, modelno]))
points(c(Beta))
lines(fit$coef[ , modelno], col = "red")
plot(c(alpha), type = "h", ylim = range(alpha, fit$alpha[, modelno]))
points(c(alpha))
lines(fit$alpha[ , modelno], col = "red", type = "h")
par(mfrow = oldmfrow)
################ poisson example
Beta <- array(rnorm(p1 * p2 * p3, 0, 0.1) * rbinom(p1 * p2 * p3, 1, 0.1), c(p1, p2, p3))
Mu <- RH(X3, RH(X2, RH(X1, Beta)))
Y <- array(rpois(n1 * n2 * n3, exp(Mu)), dim = c(n1, n2, n3))
system.time(fit <- glamlasso(X, Y, family = "poisson", nu = 0.1))
modelno <- length(fit$lambda)
plot(c(Beta), type = "h", ylim = range(Beta, fit$coef[, modelno]))
points(c(Beta))
lines(fit$coef[ , modelno], col = "red", type = "h")
###with non tensor design component Z
q <- 5
alpha <- matrix(rnorm(q)) * rbinom(q, 1, 0.5)
Z <- matrix(rnorm(n1 * n2 * n3 * q), n1 * n2 * n3, q)
Y <- array(rpois(n1 * n2 * n3, exp(Mu + array(Z %*% alpha, c(n1, n2, n3)))), dim = c(n1, n2, n3))
system.time(fit <- glamlasso(X, Y, Z, family = "poisson", nu = 0.1))
modelno <- length(fit$lambda)
oldmfrow <- par()$mfrow
par(mfrow = c(1, 2))
plot(c(Beta), type = "l", ylim = range(Beta, fit$coef[, modelno]))
points(c(Beta))
lines(fit$coef[ , modelno], col = "red")
plot(c(alpha), type = "h", ylim = range(alpha, fit$alpha[, modelno]))
points(c(alpha))
lines(fit$alpha[ , modelno], col = "red", type = "h")
par(mfrow = oldmfrow)