optPenalty.kCV {rags2ridges}		R Documentation
Select optimal penalty parameter by K-fold cross-validation
Description
Function that selects the optimal penalty parameter for the
ridgeP call by way of K-fold cross-validation. Its
output includes (among other things) the precision matrix under the optimal value of the
penalty parameter.
Usage
optPenalty.kCV(
  Y,
  lambdaMin,
  lambdaMax,
  step,
  fold = nrow(Y),
  cor = FALSE,
  target = default.target(covML(Y)),
  type = "Alt",
  output = "light",
  graph = TRUE,
  verbose = TRUE
)
Arguments
| Y | Data matrix. Variables assumed to be represented by columns. | 
| lambdaMin | A numeric giving the minimum value for the penalty parameter. | 
| lambdaMax | A numeric giving the maximum value for the penalty parameter. | 
| step | An integer determining the number of steps in moving through the grid [lambdaMin, lambdaMax]. | 
| fold | A numeric or integer specifying the number of folds to apply in the cross-validation. | 
| cor | A logical indicating if the evaluation of the negative log-likelihood score should be performed on the correlation scale. | 
| target | A target matrix (in precision terms) for the ridge precision estimator. | 
| type | A character indicating the type of ridge estimator to be used. Must be one of: "Alt", "ArchI", "ArchII". | 
| output | A character indicating if the output is either heavy or light. Must be one of: "heavy", "light". | 
| graph | A logical indicating if the grid search for the optimal penalty parameter should be visualized. | 
| verbose | A logical indicating if information on progress should be printed on screen. | 
Details
The function calculates a cross-validated negative log-likelihood score
(using a regularized ridge estimator for the precision matrix) for each
value of the penalty parameter contained in the search grid by way of
K-fold cross-validation. The value of the penalty parameter that
achieves the lowest cross-validated negative log-likelihood score is deemed
optimal. The penalty parameter must be positive such that lambdaMin
must be a positive scalar. The maximum allowable value of lambdaMax
depends on the type of ridge estimator employed. For details on the type of
ridge estimator one may use (one of: "Alt", "ArchI", "ArchII") see
ridgeP. The output consists of an object of class list (see
below). When output = "light" (default) only the optLambda and
optPrec elements of the list are given.
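As a rough illustration of this scoring procedure, the following is a minimal sketch (not the package's internal implementation) that computes, for a single penalty value, a K-fold cross-validated Gaussian negative log-likelihood (up to additive constants) using ridgeP and covML; the helper kcvScore and the choice K = 5 are illustrative assumptions only.

## Sketch: K-fold CV negative log-likelihood for one penalty value
library(rags2ridges)
kcvScore <- function(Y, lambda, K = 5) {
  idx   <- split(sample(seq_len(nrow(Y))), rep(seq_len(K), length.out = nrow(Y)))
  score <- 0
  for (f in idx) {
    Strain <- covML(Y[-f, , drop = FALSE])            # ML covariance of training folds
    P      <- ridgeP(Strain, lambda)                  # ridge precision estimate
    Ytest  <- sweep(Y[f, , drop = FALSE], 2,
                    colMeans(Y[-f, , drop = FALSE]))  # center test fold at training means
    Stest  <- crossprod(Ytest) / length(f)            # test-fold covariance
    ## Gaussian negative log-likelihood contribution (constants dropped)
    score  <- score + length(f) * (sum(Stest * P) - determinant(P)$modulus) / 2
  }
  as.numeric(score)
}

Evaluating such a score over the grid [lambdaMin, lambdaMax] and taking the minimizing penalty value is the idea that optPenalty.kCV automates.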
Value
An object of class list:
| optLambda | A numeric giving the optimal value of the penalty parameter. | 
| optPrec | A matrix representing the precision matrix of the data under the optimal value of the penalty parameter. | 
| lambdas | A numeric vector representing all values of the penalty parameter for which the cross-validated negative log-likelihood was calculated; only given when output = "heavy". | 
| LLs | A numeric vector representing the cross-validated negative log-likelihood scores under the values of the penalty parameter in lambdas; only given when output = "heavy". | 
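For instance (assuming a data matrix X like the one constructed in the Examples below), requesting the heavy output makes it possible to inspect the full cross-validation profile; the plotting code here is illustrative only:

OPT <- optPenalty.kCV(X, lambdaMin = .5, lambdaMax = 30, step = 100,
                      output = "heavy", graph = FALSE)
plot(OPT$lambdas, OPT$LLs, type = "l",
     xlab = "penalty parameter", ylab = "CV negative log-likelihood")
abline(v = OPT$optLambda, lty = 2)  # mark the optimal penalty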
Note
When cor = TRUE correlation matrices are used in the
computation of the (cross-validated) negative log-likelihood score, i.e.,
the K-fold sample covariance matrix is a matrix on the correlation
scale. When performing evaluation on the correlation scale the data are
assumed to be standardized. If cor = TRUE and one wishes to use the
default target specification one may consider using target =
default.target(covML(Y, cor = TRUE)). This gives a default target under the
assumption of standardized data.
Under the default setting of the fold-argument, fold = nrow(Y), one
performs leave-one-out cross-validation.
Author(s)
Carel F.W. Peeters <carel.peeters@wur.nl>, Wessel N. van Wieringen
See Also
ridgeP, optPenalty.kCVauto, optPenalty.aLOOCV, default.target, covML
Examples
## Obtain some (high-dimensional) data
p <- 25
n <- 10
set.seed(333)
X <- matrix(rnorm(n * p), nrow = n, ncol = p)
colnames(X) <- letters[1:p]
## Obtain regularized precision under optimal penalty using K = n
OPT  <- optPenalty.kCV(X, lambdaMin = .5, lambdaMax = 30, step = 100); OPT
OPT$optLambda	# Optimal penalty
OPT$optPrec	  # Regularized precision under optimal penalty
## Another example with standardized data
X <- scale(X, center = TRUE, scale = TRUE)
OPT  <- optPenalty.kCV(X, lambdaMin = .5, lambdaMax = 30, step = 100, cor = TRUE,
                       target = default.target(covML(X, cor = TRUE))); OPT
OPT$optLambda	# Optimal penalty
OPT$optPrec	  # Regularized precision under optimal penalty
## Another example using K = 5
OPT  <- optPenalty.kCV(X, lambdaMin = .5, lambdaMax = 30, step = 100, fold = 5); OPT
OPT$optLambda	# Optimal penalty
OPT$optPrec	  # Regularized precision under optimal penalty