kernel.pls.fit {plsdof}		R Documentation
Kernel Partial Least Squares Fit
Description
This function computes the Partial Least Squares fit via a kernel representation of the data. The runtime of the algorithm scales mainly with the number of observations n, because it operates on the n x n kernel matrix XX^T; it is therefore well suited to data with many more variables than observations.
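A minimal sketch (assuming the plsdof package is attached) of the setting this scaling favors, with a hypothetical data size where p far exceeds n:

library(plsdof)

## toy high-dimensional data: p >> n, so the n x n kernel matrix
## is far cheaper to work with than a p x p cross-product matrix
n <- 20
p <- 1000
X <- matrix(rnorm(n * p), ncol = p)
y <- rnorm(n)

## the cost of the fit is driven by n, not p
fit <- kernel.pls.fit(X, y, m = 10)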
Usage
kernel.pls.fit(
X,
y,
m = ncol(X),
compute.jacobian = FALSE,
DoF.max = min(ncol(X) + 1, nrow(X) - 1)
)
Arguments
X: matrix of predictor observations.

y: vector of response observations. The length of y must equal the number of rows of X.

m: maximal number of Partial Least Squares components. Default is m = ncol(X).

compute.jacobian: Should the first derivative of the regression coefficients be computed as well? Default is FALSE.

DoF.max: upper bound on the Degrees of Freedom. Default is min(ncol(X) + 1, nrow(X) - 1).
Details
We first standardize X to zero mean and unit variance.
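A sketch of what this standardization amounts to (illustrative only, not necessarily the package's exact internal code):

## center each column of X and scale it to unit variance
X <- matrix(rnorm(50 * 5), ncol = 5)
Xs <- scale(X, center = TRUE, scale = TRUE)
round(colMeans(Xs), 10)  # zero mean in every column
apply(Xs, 2, sd)         # unit variance in every column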
Value
coefficients: matrix of regression coefficients

intercept: vector of regression intercepts

DoF: Degrees of Freedom

sigmahat: vector of estimated model errors

Yhat: matrix of fitted values

yhat: vector of squared lengths of the fitted values

RSS: vector of residual sums of squares

covariance: array of covariance matrices of the regression coefficients, returned if compute.jacobian = TRUE

TT: matrix of normalized PLS components
Author(s)
Nicole Kraemer, Mikio L. Braun
References
Kraemer, N., Sugiyama, M. (2011). "The Degrees of Freedom of Partial Least Squares Regression". Journal of the American Statistical Association, 106(494). https://www.tandfonline.com/doi/abs/10.1198/jasa.2011.tm10107

Kraemer, N., Braun, M.L. (2007). "Kernelizing PLS, Degrees of Freedom, and Efficient Model Selection". Proceedings of the 24th International Conference on Machine Learning, Omni Press, 441-448.
See Also
linear.pls.fit, pls.cv, pls.model, pls.ic
Examples
n <- 50  # number of observations
p <- 5   # number of variables
X <- matrix(rnorm(n * p), ncol = p)  # predictor matrix
y <- rnorm(n)                        # response vector
pls.object <- kernel.pls.fit(X, y, m = 5, compute.jacobian = TRUE)
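A short follow-up sketch for inspecting the fitted object; the element names match the Value section above.

str(pls.object, max.level = 1)  # overview of all returned components
pls.object$DoF                  # Degrees of Freedom of the fitted models
dim(pls.object$coefficients)    # the regression coefficients form a matrix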