dminML.LA.ridgeGLM {squeezy}    R Documentation
Partial derivatives of -log(ML) of ridge penalised GLMs
Description
Returns the partial derivatives, with respect to 'loglambdas', of the Laplace approximation (LA) of the minus log marginal likelihood of ridge-penalised generalised linear models. Note: currently implemented for linear and logistic regression only.
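For orientation, the quantity being differentiated in the linear case can be sketched as follows, assuming the usual ridge parameterisation in which penalty lambda_g corresponds to a Gaussian prior beta_g ~ N(0, (sigmasq/lambda_g) I) on group g (consistent with loglambdas = log(sigmahat/tauMR) in the Examples). For the linear model the LA is exact; for the logistic model the marginal likelihood is intractable and the LA replaces it:

-\log \mathrm{ML}(\lambda) = \frac{n}{2}\log(2\pi) + \frac{1}{2}\log\det\Sigma(\lambda) + \frac{1}{2} Y^\top \Sigma(\lambda)^{-1} Y, \qquad \Sigma(\lambda) = \sigma^2\Big(I_n + \sum_{g=1}^{G} \lambda_g^{-1} X_g X_g^\top\Big).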
Usage
dminML.LA.ridgeGLM(loglambdas, XXblocks, Y, sigmasq = 1,
Xunpen = NULL, intrcpt = TRUE, model, minlam = 0,
opt.sigma = FALSE)
Arguments
loglambdas
    Logarithm of the ridge penalties, as returned by ecpc or squeezy; G x 1 vector.
XXblocks
    List of sample covariance matrices X_g %*% t(X_g) for groups g = 1, ..., G.
Y
    Response data; n-dimensional vector (n: number of samples) for linear and logistic outcomes.
sigmasq
    (linear model only) Noise level (Y ~ N(X*beta, sd = sqrt(sigmasq))).
Xunpen
    Unpenalised variables; n x p_1-dimensional matrix for p_1 unpenalised variables.
intrcpt
    Should an intercept be included? Set to TRUE by default.
model
    Type of model for the response; "linear" or "logistic".
minlam
    Minimum value of lambda that is added to exp(loglambdas); set to 0 by default.
opt.sigma
    (linear model only) TRUE/FALSE: if TRUE, log(sigmasq) is given as the first element of 'loglambdas', for optimisation purposes.
Value
Partial derivatives of the Laplace approximation of the minus log marginal likelihood with respect to the model parameters 'loglambdas':
For opt.sigma=FALSE: G x 1-dimensional vector, one element per log(group ridge penalty).
For opt.sigma=TRUE (linear model only): (G+1) x 1-dimensional vector; the first element is the partial derivative with respect to log(sigmasq), followed by the G partial derivatives with respect to the log(group ridge penalties).
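For the linear model, under the parameterisation sketched in the Description (a sketch, not necessarily the exact expressions coded in the package), the returned elements correspond to

\frac{\partial(-\log \mathrm{ML})}{\partial \log\lambda_g} = -\frac{\sigma^2}{2\lambda_g}\Big[\operatorname{tr}\big(\Sigma^{-1} X_g X_g^\top\big) - Y^\top \Sigma^{-1} X_g X_g^\top \Sigma^{-1} Y\Big], \qquad \frac{\partial(-\log \mathrm{ML})}{\partial \log\sigma^2} = \frac{n}{2} - \frac{1}{2} Y^\top \Sigma^{-1} Y,

the latter being the first returned element when opt.sigma=TRUE, with the penalties lambda_g held fixed.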
Examples
library(squeezy)

# Simulate toy data
n <- 100
p <- 300
X <- matrix(rnorm(n*p), n, p)
Y <- rnorm(n)
groupset <- list(1:(p/2), (p/2+1):p)  # two non-overlapping covariate groups
sigmahat <- 2
alpha <- 0.5
tauMR <- c(0.01, 0.005)
XXblocks <- lapply(groupset, function(x) X[, x] %*% t(X[, x]))

# Compute the partial derivatives of the minus log marginal likelihood
# w.r.t. the log penalties only
dminML.LA.ridgeGLM(loglambdas = log(sigmahat/tauMR),
                   XXblocks, Y, sigmasq = sigmahat,
                   model = "linear", opt.sigma = FALSE)

# Additionally compute the partial derivative w.r.t. the linear regression
# noise parameter sigma^2, supplied as the first element of loglambdas
dminML.LA.ridgeGLM(loglambdas = log(c(sigmahat, sigmahat/tauMR)),
                   XXblocks, Y, sigmasq = sigmahat,
                   model = "linear", opt.sigma = TRUE)