ridgeGLMmultiT {porridge}    R Documentation
Multi-targeted ridge estimation of generalized linear models.
Description
Function that evaluates the multi-targeted ridge estimator of the regression parameter of generalized linear models.
Usage
ridgeGLMmultiT(Y, X, U=matrix(ncol=0, nrow=length(Y)),
lambdas, targetMat, model="linear",
minSuccDiff=10^(-10), maxIter=100)
Arguments
Y: A numeric being the response vector.

X: The design matrix of the penalized covariates. The number of rows should match the number of elements of Y.

U: The design matrix of the unpenalized covariates. The number of rows should match the number of elements of Y.

lambdas: An all-positive numeric, the vector of penalty parameters, one per shrinkage target (i.e., one per column of targetMat).

targetMat: A matrix with the shrinkage targets of the regression parameter as columns.

model: A character indicating which generalized linear model instance is to be fitted, either "linear" or "logistic".

minSuccDiff: A numeric, the minimum distance between two successive estimates to be achieved before the iterations stop.

maxIter: A numeric specifying the maximum number of iterations.
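For instance, with ten penalized covariates and K = 2 shrinkage targets, the lambdas and targetMat arguments could be specified as follows (an illustrative sketch only; the values and the helper variable p are arbitrary):

p         <- 10
targetMat <- cbind(rep(0, p), rep(1, p))   # one shrinkage target per column
lambdas   <- c(1, 3)                       # one penalty parameter per target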
Details
This function finds the maximizer of the following penalized loglikelihood:

\mathcal{L}( \mathbf{Y}, \mathbf{X}; \boldsymbol{\beta}) - \frac{1}{2} \sum_{k=1}^K \lambda_k \| \boldsymbol{\beta} - \boldsymbol{\beta}_{k,0} \|_2^2,

with loglikelihood \mathcal{L}( \mathbf{Y}, \mathbf{X}; \boldsymbol{\beta}), response \mathbf{Y}, design matrix \mathbf{X}, regression parameter \boldsymbol{\beta}, penalty parameters \lambda_1, \ldots, \lambda_K, and the k-th shrinkage target \boldsymbol{\beta}_{k,0}. For more details, see van Wieringen and Binder (2020).
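By completing the square, the multi-target penalty may be rewritten as a single ridge penalty; this reformulation is not taken from the reference but is added here as an aid to intuition:

\frac{1}{2} \sum_{k=1}^K \lambda_k \| \boldsymbol{\beta} - \boldsymbol{\beta}_{k,0} \|_2^2 = \frac{1}{2} \Big( \sum_{k=1}^K \lambda_k \Big) \| \boldsymbol{\beta} - \bar{\boldsymbol{\beta}}_{0} \|_2^2 + c, \qquad \bar{\boldsymbol{\beta}}_{0} = \Big( \sum_{k=1}^K \lambda_k \Big)^{-1} \sum_{k=1}^K \lambda_k \boldsymbol{\beta}_{k,0},

where c does not depend on \boldsymbol{\beta}. The multi-targeted estimator thus coincides with a single-targeted ridge estimator whose penalty parameter is \sum_{k=1}^K \lambda_k and whose target is the \lambda-weighted average of the \boldsymbol{\beta}_{k,0}.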
Value
The multi-targeted ridge estimate of the regression parameter.
Author(s)
W.N. van Wieringen.
References
van Wieringen, W.N., Binder, H. (2020), "Online learning of regression models from a sequence of datasets by penalized estimation", submitted.
Examples
# set the sample size
n <- 50
# set the true parameter
betas <- (c(0:100) - 50) / 20
# generate covariate data
X <- matrix(rnorm(length(betas)*n), nrow=n)
# sample the response
probs <- exp(tcrossprod(betas, X)[1,]) / (1 + exp(tcrossprod(betas, X)[1,]))
Y <- numeric()
for (i in 1:n){
Y <- c(Y, sample(c(0,1), 1, prob=c(1-probs[i], probs[i])))
}
# set the penalty parameters, one per shrinkage target
lambdas <- c(1, 3)
# specify the target matrix: one shrinkage target per column
targetMat <- cbind(betas/2, rnorm(length(betas)))
# estimate the logistic regression parameter
bHat <- ridgeGLMmultiT(Y, X, lambdas=lambdas, targetMat=targetMat,
                       model="logistic")
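# The multi-target penalty equals a single ridge penalty with pooled penalty
# parameter sum(lambdas) and the lambda-weighted average of the targets as its
# target (see Details). A minimal sketch of that effective penalty and target,
# for illustration only (lambdaEff and targetEff are names introduced here):
lambdaEff <- sum(lambdas)
targetEff <- as.numeric(targetMat %*% lambdas) / sum(lambdas)
# assuming a one-column targetMat is accepted, the following should reproduce bHat:
# bHatEff <- ridgeGLMmultiT(Y, X, lambdas=lambdaEff, model="logistic",
#                           targetMat=matrix(targetEff, ncol=1))
# all.equal(bHat, bHatEff)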