optPenaltyPrep.kCVauto {porridge} | R Documentation
Automatic search for optimal penalty parameters (for precision estimation of data with replicates).
Description
Function that performs an automatic search for the optimal penalty parameters of the ridgePrep
call, employing either the Nelder-Mead or quasi-Newton
method to minimize the cross-validated (negative) log-likelihood score.
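The search strategy described above can be illustrated in a few lines of base R. The sketch below is not the porridge implementation; the quadratic stand-in for the cross-validated score is hypothetical. It shows how a Nelder-Mead search over two positive penalty parameters can be set up via a log-reparametrization:

```r
# Conceptual sketch only: Nelder-Mead search for two positive penalties.
# The cvScore function is a hypothetical stand-in for the cross-validated
# (negative) log-likelihood evaluated by optPenaltyPrep.kCVauto.
cvScore <- function(logLambdas) {
  lambdas <- exp(logLambdas)           # back-transform; guarantees positivity
  (lambdas[1] - 2)^2 + (lambdas[2] - 0.5)^2
}
opt        <- optim(par=log(c(1, 1)), fn=cvScore, method="Nelder-Mead")
optLambdas <- exp(opt$par)             # optimal penalties on the original scale
```

Optimizing over the logarithms of the penalties keeps the search unconstrained while the penalties themselves remain strictly positive.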
Usage
optPenaltyPrep.kCVauto(Y, ids, lambdaInit,
                       fold=nrow(Y), CVcrit,
                       splitting="stratified",
                       targetZ=matrix(0, ncol(Y), ncol(Y)),
                       targetE=matrix(0, ncol(Y), ncol(Y)),
                       nInit=100, minSuccDiff=10^(-10))
Arguments
Y | Data matrix with samples (including replicates) as rows and variates as columns. |
ids | A numeric indicating which rows of Y belong to the same individual; replicates share an id. |
lambdaInit | A numeric giving the initial values of the two penalty parameters for the optimization. |
fold | A numeric or integer specifying the number of cross-validation folds; the default, nrow(Y), amounts to leave-one-out cross-validation. |
CVcrit | A character specifying the cross-validation criterion, e.g. "LL" for the loglikelihood. |
splitting | A character specifying how the rows of Y are divided over the folds (default "stratified"). |
targetZ | A semi-positive definite target matrix towards which the precision matrix of the signal is shrunken. |
targetE | A semi-positive definite target matrix towards which the precision matrix of the error is shrunken. |
nInit | A numeric specifying the maximum number of iterations. |
minSuccDiff | A numeric: the minimum successive difference in the loss to be achieved for convergence. |
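For intuition on the splitting argument, the base-R sketch below spreads each individual's replicates over the folds, which is the idea behind stratified splitting. The helper stratifiedFolds is hypothetical and not part of porridge; the package's actual fold assignment may differ:

```r
# Hypothetical helper (not part of porridge): assign rows to cross-validation
# folds such that each individual's replicates are spread over the folds.
stratifiedFolds <- function(ids, fold) {
  foldId <- integer(length(ids))
  for (i in unique(ids)) {
    rows         <- which(ids == i)
    # cycle through the fold labels within an individual, in random order
    foldId[rows] <- sample(rep_len(1:fold, length(rows)))
  }
  foldId
}

# toy ids: three individuals with 3, 2 and 4 replicates
ids    <- c(1, 1, 1, 2, 2, 3, 3, 3, 3)
foldId <- stratifiedFolds(ids, fold=2)
```

With this scheme an individual with at least as many replicates as folds contributes to every fold, so no fold is left without information on that individual.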
Value
The function returns an all-positive numeric of length two: the cross-validated optimal penalty parameters.
Author(s)
W.N. van Wieringen.
References
van Wieringen, W.N., Chen, Y. (2021), "Penalized estimation of the Gaussian graphical model from data with replicates", Statistics in Medicine, 40(19), 4279-4293.
See Also
ridgePrep
Examples
# set parameters
p <- 10
Se <- diag(runif(p))
Sz <- matrix(3, p, p)
diag(Sz) <- 4
# draw data
n <- 100
ids <- numeric()
Y <- numeric()
for (i in 1:n){
    Ki <- sample(2:5, 1)
    Zi <- mvtnorm::rmvnorm(1, sigma=Sz)
    for (k in 1:Ki){
        Y   <- rbind(Y, Zi + mvtnorm::rmvnorm(1, sigma=Se))
        ids <- c(ids, i)
    }
}
# find optimal penalty parameters
### optLambdas <- optPenaltyPrep.kCVauto(Y, ids,
###                                      lambdaInit=c(1,1),
###                                      fold=nrow(Y),
###                                      CVcrit="LL")
# estimate the precision matrices
### Ps <- ridgePrep(Y, ids, optLambdas[1], optLambdas[2])