tuning {CVEK}	R Documentation

Calculating Tuning Parameters

Description

Calculate the tuning parameter based on the given selection criterion.

Usage

tuning(Y, X, K_mat, mode, lambda)

Arguments

Y

(matrix, n*1) The vector of the response variable.

X

(matrix, n*d_fix) The fixed effect matrix.

K_mat

(list of matrices) A nested list of kernel matrices: for each base kernel function in kern_func_list, one matrix per kernel term specified in the formula.

mode

(character) A character string indicating which tuning parameter selection criterion is to be used.

lambda

(numeric) A numeric vector specifying the candidate tuning parameters to search over. The lower limit of lambda must be above 0.

Details

Seven tuning parameter selection criteria are available:

leave-one-out Cross Validation

\lambda_{n\text{-}CV}=\mathrm{argmin}_{\lambda \in \Lambda}\;\Big\{\log\; y^{\star T}[I-\mathrm{diag}(A_\lambda)-\frac{1}{n}I]^{-1}(I-A_\lambda)^2[I-\mathrm{diag}(A_\lambda)-\frac{1}{n}I]^{-1}y^\star \Big\}

Akaike Information Criteria

\lambda_{AIC}=\mathrm{argmin}_{\lambda \in \Lambda}\Big\{\log\; y^{\star T}(I-A_\lambda)^2y^\star+\frac{2[\mathrm{tr}(A_\lambda)+2]}{n}\Big\}

Akaike Information Criteria (small-sample variant)

\lambda_{AICc}=\mathrm{argmin}_{\lambda \in \Lambda}\Big\{\log\; y^{\star T}(I-A_\lambda)^2y^\star+\frac{2[\mathrm{tr}(A_\lambda)+2]}{n-\mathrm{tr}(A_\lambda)-3}\Big\}

Bayesian Information Criteria

\lambda_{BIC}=\mathrm{argmin}_{\lambda \in \Lambda}\Big\{\log\; y^{\star T}(I-A_\lambda)^2y^\star+\frac{\log(n)[\mathrm{tr}(A_\lambda)+2]}{n}\Big\}

Generalized Cross Validation

\lambda_{GCV}=\mathrm{argmin}_{\lambda \in \Lambda}\Big\{\log\; y^{\star T}(I-A_\lambda)^2y^\star-2\log\Big[1-\frac{\mathrm{tr}(A_\lambda)}{n}-\frac{1}{n}\Big]_+\Big\}

Generalized Cross Validation (small-sample variant)

\lambda_{GCVc}=\mathrm{argmin}_{\lambda \in \Lambda}\Big\{\log\; y^{\star T}(I-A_\lambda)^2y^\star-2\log\Big[1-\frac{\mathrm{tr}(A_\lambda)}{n}-\frac{2}{n}\Big]_+\Big\}

Generalized Maximum Profile Marginal Likelihood

\lambda_{GMPML}=\mathrm{argmin}_{\lambda \in \Lambda}\Big\{\log\; y^{\star T}(I-A_\lambda)y^\star-\frac{1}{n-1}\log \lvert I-A_\lambda \rvert \Big\}
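
As an illustration, a criterion of this form can be evaluated directly over a grid of lambda values. The sketch below does this for GCV with an assumed smoother matrix A_lambda = K(K + lambda I)^{-1} and with y standing in for y^\star; the package's internal construction of A_lambda and y^\star may differ.

```r
## Sketch: evaluating the GCV criterion over a lambda grid for a
## simple kernel ridge smoother (assumed form; not the package internals).
gcv_score <- function(y, K, lambda) {
  n <- length(y)
  A <- K %*% solve(K + lambda * diag(n))           # smoother matrix A_lambda
  rss <- sum(((diag(n) - A) %*% y)^2)              # y*^T (I - A_lambda)^2 y*
  edf <- sum(diag(A))                              # tr(A_lambda)
  log(rss) - 2 * log(max(1 - edf / n - 1 / n, 0))  # [.]_+ truncation
}

set.seed(1)
z <- rnorm(30)
K <- exp(-as.matrix(dist(z))^2)                    # Gaussian kernel matrix
y <- sin(z) + rnorm(30, sd = 0.3)
lambda_grid <- exp(seq(-5, 5, length.out = 50))
scores <- sapply(lambda_grid, gcv_score, y = y, K = K)
lambda_grid[which.min(scores)]                     # the selected lambda
```

When the truncated term [.]_+ reaches zero, log() returns -Inf and the score becomes +Inf, so such lambda values are never selected.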

Value

lambda0

(numeric) The selected tuning parameter.
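
A minimal usage sketch follows. The kernel matrix construction, the nesting of K_mat, and the mode string "AIC" are assumptions made for illustration; consult the package's kernel-generation helpers for the exact form these arguments should take.

```r
## Hypothetical call to tuning(); argument construction is a sketch.
set.seed(1)
n <- 50
X <- cbind(1, rnorm(n))                  # fixed-effect design matrix
z <- rnorm(n)
K <- exp(-as.matrix(dist(z))^2)          # one Gaussian kernel term
Y <- matrix(X %*% c(1, 0.5) + rnorm(n))  # n*1 response
lambda <- exp(seq(-5, 5, length.out = 20))
lambda0 <- tuning(Y, X, K_mat = list(list(K)), mode = "AIC", lambda = lambda)
```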

Author(s)

Wenying Deng

References

Philip S. Boonstra, Bhramar Mukherjee, and Jeremy M. G. Taylor. A Small-Sample Choice of the Tuning Parameter in Ridge Regression. July 2015.

Trevor Hastie, Robert Tibshirani, and Jerome Friedman. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer Series in Statistics. Springer-Verlag, New York, 2nd edition, 2009.

Hirotugu Akaike. Information Theory and an Extension of the Maximum Likelihood Principle. In Selected Papers of Hirotugu Akaike, Springer Series in Statistics, pages 199–213. Springer, New York, NY, 1998.

Clifford M. Hurvich and Chih-Ling Tsai. Regression and time series model selection in small samples. June 1989.

Clifford M. Hurvich, Jeffrey S. Simonoff, and Chih-Ling Tsai. Smoothing parameter selection in nonparametric regression using an improved Akaike information criterion. January 2002.


[Package CVEK version 0.1-2 Index]