optPenalty.kCVauto {rags2ridges}                                R Documentation
Automatic search for optimal penalty parameter
Description
Function that performs an 'automatic' search for the optimal penalty
parameter for the ridgeP call by applying Brent's method to the
K-fold cross-validated negative log-likelihood score.
Usage
optPenalty.kCVauto(
  Y,
  lambdaMin,
  lambdaMax,
  lambdaInit = (lambdaMin + lambdaMax)/2,
  fold = nrow(Y),
  cor = FALSE,
  target = default.target(covML(Y)),
  type = "Alt"
)
Arguments
Y: Data matrix. Variables assumed to be represented by columns.

lambdaMin: A numeric giving the minimum value for the penalty parameter.

lambdaMax: A numeric giving the maximum value for the penalty parameter.

lambdaInit: A numeric giving the initial (starting) value for the penalty
parameter.

fold: A numeric or integer specifying the number of folds to apply in the
cross-validation.

cor: A logical indicating if the evaluation of the cross-validated negative
log-likelihood score should be performed on the correlation scale.

target: A target matrix (in precision terms) for the ridge precision
estimator.

type: A character indicating the type of ridge estimator to be used. Must be
one of: "Alt", "ArchI", "ArchII".
Details
The function determines the optimal value of the penalty parameter by
applying the Brent algorithm (Brent, 1971) to the K-fold
cross-validated negative log-likelihood score (using a regularized ridge
estimator for the precision matrix). The search for the optimal value is
automatic in the sense that the Brent method requires only a minimum value,
a maximum value, and a starting value for the penalty parameter. The
value at which the K-fold cross-validated negative log-likelihood
score is minimized is deemed optimal. The function employs the Brent
algorithm as implemented in the optim function.
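To make the mechanics concrete, the following sketch mimics (but is not)
the internal search: it assumes a hypothetical helper kcvScore that computes
a K-fold cross-validated negative log-likelihood from ridgeP fits, and hands
it to optim with method = "Brent". Fold assignment and centering details may
differ from the package's own implementation.

## Illustrative sketch only (not the package's internal code)
library(rags2ridges)

kcvScore <- function(lambda, Y, fold, target) {
  ## randomly assign observations to folds
  idx <- sample(rep(seq_len(fold), length.out = nrow(Y)))
  sum(sapply(seq_len(fold), function(k) {
    ## fit the ridge precision estimator on the training folds
    P <- ridgeP(covML(Y[idx != k, , drop = FALSE]), lambda = lambda,
                target = target)
    ## evaluate on the held-out fold
    S <- covML(Y[idx == k, , drop = FALSE])
    ## Gaussian negative log-likelihood, up to constants: -log det(P) + tr(SP)
    as.numeric(-determinant(P, logarithm = TRUE)$modulus + sum(S * P))
  }))
}

set.seed(333)
Y <- matrix(rnorm(10 * 25), nrow = 10, ncol = 25)
## Brent's method needs only a lower bound, an upper bound, and a start value
optim(par = 1, fn = kcvScore, method = "Brent", lower = 0.001, upper = 30,
      Y = Y, fold = 5, target = default.target(covML(Y)))$par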
Value
An object of class list:

optLambda: A numeric giving the optimal value of the penalty parameter.

optPrec: A matrix representing the regularized precision matrix under the
optimal value of the penalty parameter.
Note
When cor = TRUE, correlation matrices are used in the
computation of the (cross-validated) negative log-likelihood score, i.e.,
the K-fold sample covariance matrix is a matrix on the correlation
scale. When performing evaluation on the correlation scale, the data are
assumed to be standardized. If cor = TRUE and one wishes to use the
default target specification, one may consider using target =
default.target(covML(Y, cor = TRUE)). This gives a default target under the
assumption of standardized data (cf. the second example below).
Under the default setting of the fold argument, fold = nrow(Y), one
performs leave-one-out cross-validation.
Author(s)
Wessel N. van Wieringen, Carel F.W. Peeters <carel.peeters@wur.nl>
References
Brent, R.P. (1971). An Algorithm with Guaranteed Convergence for Finding a Zero of a Function. Computer Journal 14: 422-425.
See Also
GGMblockNullPenalty, GGMblockTest, ridgeP, optPenalty.aLOOCV,
optPenalty.kCV, default.target, covML
Examples
## Obtain some (high-dimensional) data
p <- 25
n <- 10
set.seed(333)
X <- matrix(rnorm(n * p), nrow = n, ncol = p)
colnames(X) <- letters[1:p]
## Obtain regularized precision under optimal penalty using K = n
OPT <- optPenalty.kCVauto(X, lambdaMin = .001, lambdaMax = 30); OPT
OPT$optLambda # Optimal penalty
OPT$optPrec   # Regularized precision under optimal penalty
## Another example with standardized data
X <- scale(X, center = TRUE, scale = TRUE)
OPT <- optPenalty.kCVauto(X, lambdaMin = .001, lambdaMax = 30, cor = TRUE,
                          target = default.target(covML(X, cor = TRUE))); OPT
OPT$optLambda # Optimal penalty
OPT$optPrec   # Regularized precision under optimal penalty
## Another example using K = 5
OPT <- optPenalty.kCVauto(X, lambdaMin = .001, lambdaMax = 30, fold = 5); OPT
OPT$optLambda # Optimal penalty
OPT$optPrec   # Regularized precision under optimal penalty
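## A possible post-processing step (illustration only, not part of
## optPenalty.kCVauto): standardize the optimal precision matrix to
## partial correlations, rho_ij = -p_ij / sqrt(p_ii * p_jj), using base R
PC <- -cov2cor(as.matrix(OPT$optPrec))
diag(PC) <- 1
PC[1:5, 1:5]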