optPenalty.aLOOCV {rags2ridges}    R Documentation
Select optimal penalty parameter by approximate leave-one-out cross-validation
Description
Function that selects the optimal penalty parameter for the ridgeP call by use of approximate leave-one-out cross-validation. Its output includes (among others) the precision matrix under the optimal value of the penalty parameter.
Usage
optPenalty.aLOOCV(
Y,
lambdaMin,
lambdaMax,
step,
type = "Alt",
cor = FALSE,
target = default.target(covML(Y)),
output = "light",
graph = TRUE,
verbose = TRUE
)
Arguments
Y          Data matrix. Variables assumed to be represented by columns.
lambdaMin  A numeric giving the minimum value for the penalty parameter.
lambdaMax  A numeric giving the maximum value for the penalty parameter.
step       An integer determining the number of steps in moving through the grid [lambdaMin, lambdaMax].
type       A character indicating the type of ridge estimator to be used. Must be one of: "Alt" (default), "ArchI", "ArchII".
cor        A logical indicating if the evaluation of the aLOOCV score should be performed on the correlation scale when TRUE.
target     A target matrix (in precision terms) for the ridge estimators; defaults to default.target(covML(Y)).
output     A character indicating if the output is either heavy or light. Must be one of: "all", "light" (default).
graph      A logical indicating if the aLOOCV values should be visualized.
verbose    A logical indicating if information on progress should be printed on screen.
Details
The function calculates an approximate leave-one-out cross-validated (aLOOCV) negative log-likelihood score (using a regularized ridge estimator for the precision matrix) for each value of the penalty parameter contained in the search grid. The utilized aLOOCV score was proposed by Lian (2011) and Vujacic et al. (2014). The aLOOCV negative log-likelihood score is computationally more efficient than its non-approximate counterpart (see optPenalty.LOOCV). For details on the aLOOCV negative log-likelihood score see Lian (2011) and Vujacic et al. (2014). For scalar matrix targets (see default.target) the complete solution path of the alternative Type I and II ridge estimators (see ridgeP) depends on only a single eigendecomposition and a single matrix inversion, making the determination of the optimal penalty value particularly efficient (see van Wieringen and Peeters, 2016).
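The solution-path shortcut for scalar targets can be sketched in base R. This is an illustrative sketch only, not the package implementation; the spectral formula for the alternative ("Alt") ridge estimator with target T = phi * I is assumed from van Wieringen and Peeters (2016), and ridgePAlt is a hypothetical stand-in for ridgeP.

```r
## Sketch (assumed formula): with a scalar target T = phi * I the alternative
## ridge estimator shares its eigenvectors with the sample covariance S, so a
## single eigendecomposition of S yields the whole solution path in lambda
## without any further decompositions or matrix inversions.
set.seed(333)
n <- 10; p <- 25
X <- matrix(rnorm(n * p), nrow = n, ncol = p)
S <- crossprod(scale(X, scale = FALSE)) / n        # ML covariance estimate

eig <- eigen(S, symmetric = TRUE)                  # computed once, reused below
ridgePAlt <- function(lambda, phi = 0) {
  shifted <- eig$values - lambda * phi
  ## eigenvalues of the regularized precision matrix (assumed spectral form)
  pev <- 1 / (sqrt(lambda + shifted^2 / 4) + shifted / 2)
  eig$vectors %*% (pev * t(eig$vectors))           # spectral reconstruction
}
P1 <- ridgePAlt(1)                                 # precision at lambda = 1
```

Because only the eigenvalue transform depends on lambda, evaluating the estimator across an entire penalty grid costs little more than the initial eigendecomposition.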
The value of the penalty parameter that achieves the lowest aLOOCV negative log-likelihood score is deemed optimal. The penalty parameter must be positive, such that lambdaMin must be a positive scalar. The maximum allowable value of lambdaMax depends on the type of ridge estimator employed. For details on the type of ridge estimator one may use (one of: "Alt", "ArchI", "ArchII") see ridgeP. The output consists of an object of class list (see below). When output = "light" (default) only the optLambda and optPrec elements of the list are returned.
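The grid search over [lambdaMin, lambdaMax] can be outlined as follows. This is an illustrative sketch only: a plain (non-cross-validated) negative log-likelihood and a simple ridge inverse stand in for the package's aLOOCV score and ridgeP estimator.

```r
## Sketch of the penalty grid search (illustration only, not the package code).
set.seed(333)
n <- 10; p <- 25
X <- matrix(rnorm(n * p), nrow = n, ncol = p)
S <- crossprod(scale(X, scale = FALSE)) / n        # ML covariance estimate

negLogLik <- function(P, S) {
  ## negative log-likelihood up to constants: -log det(P) + tr(S P)
  as.numeric(-determinant(P, logarithm = TRUE)$modulus + sum(S * P))
}

lambdas <- seq(0.001, 30, length.out = 400)        # the search grid
scores  <- vapply(lambdas, function(lam) {
  P <- solve(S + lam * diag(p))                    # simple ridge stand-in
  negLogLik(P, S)
}, numeric(1))

optLambda <- lambdas[which.min(scores)]
## Note: without (approximate) cross-validation this score is minimized at the
## smallest penalty on the grid; the aLOOCV score guards against exactly this
## kind of overfitting of the penalty parameter.
```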
Value
An object of class list:
optLambda  A numeric giving the optimal value of the penalty parameter.
optPrec    A matrix representing the regularized precision matrix under the optimal value of the penalty parameter.
lambdas    A numeric vector giving the values of the penalty parameter for which the aLOOCV score was calculated; only returned when output = "all".
aLOOCVs    A numeric vector giving the aLOOCV negative log-likelihood scores; only returned when output = "all".
Note
When cor = TRUE correlation matrices are used in the computation of the approximate (cross-validated) negative log-likelihood score, i.e., the sample covariance matrix is a matrix on the correlation scale. When performing evaluation on the correlation scale the data are assumed to be standardized. If cor = TRUE and one wishes to use the default target specification one may consider using target = default.target(covML(Y, cor = TRUE)). This gives a default target under the assumption of standardized data.
Author(s)
Carel F.W. Peeters <carel.peeters@wur.nl>, Wessel N. van Wieringen
References
Lian, H. (2011). Shrinkage tuning parameter selection in precision matrices estimation. Journal of Statistical Planning and Inference, 141: 2839-2848.
van Wieringen, W.N. & Peeters, C.F.W. (2016). Ridge Estimation of Inverse Covariance Matrices from High-Dimensional Data, Computational Statistics & Data Analysis, vol. 103: 284-303. Also available as arXiv:1403.0904v3 [stat.ME].
Vujacic, I., Abbruzzo, A., and Wit, E.C. (2014). A computationally fast alternative to cross-validation in penalized Gaussian graphical models. arXiv:1309.6216v2 [stat.ME].
See Also
ridgeP, optPenalty.LOOCV, optPenalty.LOOCVauto, default.target, covML
Examples
## Obtain some (high-dimensional) data
p <- 25
n <- 10
set.seed(333)
X <- matrix(rnorm(n * p), nrow = n, ncol = p)
colnames(X) <- letters[1:p]
## Obtain regularized precision under optimal penalty
OPT <- optPenalty.aLOOCV(X, lambdaMin = .001, lambdaMax = 30, step = 400); OPT
OPT$optLambda # Optimal penalty
OPT$optPrec # Regularized precision under optimal penalty
## Another example with standardized data
X <- scale(X, center = TRUE, scale = TRUE)
OPT <- optPenalty.aLOOCV(X, lambdaMin = .001, lambdaMax = 30,
step = 400, cor = TRUE,
target = default.target(covML(X, cor = TRUE))); OPT
OPT$optLambda # Optimal penalty
OPT$optPrec # Regularized precision under optimal penalty