kplsr {rchemo}    R Documentation

KPLSR Models

Description

NIPALS Kernel PLSR algorithm described in Rosipal & Trejo (2001).

The algorithm is slow for n >= 500, since it works on the (n, n) Gram matrix of the training observations.
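For intuition about this cost, the (n, n) Gram matrix can be sketched in base R. This is an illustrative RBF-style kernel written from scratch, not the rchemo implementation of krbf, whose exact parameterization may differ:

```r
## Sketch only: an RBF Gram matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2).
## Memory and compute scale with n^2, which is why large n is slow.
rbf_gram <- function(X, gamma = 1) {
  D2 <- as.matrix(dist(X))^2    # squared Euclidean distances (n, n)
  exp(-gamma * D2)
}
n <- 6 ; p <- 4
X <- matrix(rnorm(n * p), ncol = p)
K <- rbf_gram(X, gamma = .8)
dim(K)    # (n, n)
```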

Usage


kplsr(X, Y, weights = NULL, nlv, kern = "krbf",
     tol = .Machine$double.eps^0.5, maxit = 100, ...)

## S3 method for class 'Kplsr'
transform(object, X, ..., nlv = NULL)  

## S3 method for class 'Kplsr'
coef(object, ..., nlv = NULL)  

## S3 method for class 'Kplsr'
predict(object, X, ..., nlv = NULL)  

Arguments

X

For the main function: Training X-data (n, p). — For the auxiliary functions: New X-data (m, p) to consider.

Y

Training Y-data (n, q).

weights

Weights (n, 1) to apply to the training observations. Internally, weights are "normalized" to sum to 1. Defaults to NULL (weights are set to 1 / n).
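The Examples section below does not demonstrate weights; a minimal sketch of a weighted fit (assuming rchemo is loaded, with arbitrary positive weights chosen here only for illustration) could look like:

```r
## Sketch: weighted training. kplsr normalizes w internally to sum to 1.
n <- 6 ; p <- 4
Xtrain <- matrix(rnorm(n * p), ncol = p)
ytrain <- rnorm(n)
w <- runif(n)    # hypothetical positive weights, one per observation
fm <- kplsr(Xtrain, ytrain, weights = w, nlv = 2, kern = "krbf", gamma = .8)
```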

nlv

The number(s) of LVs to calculate. — For the auxiliary functions: The number(s) of LVs to consider.

kern

Name of the function defining the kernel used to build the Gram matrix. See krbf for the syntax and the other available kernel functions.

tol

Tolerance level for stopping the NIPALS iterations.

maxit

Maximum number of NIPALS iterations.

...

Optional arguments to pass to the kernel function defined in kern (e.g. gamma for krbf).

object

For the auxiliary functions: A fitted model, output of a call to the main function.

Value

For kplsr:

X

Training X-data (n, p).

Kt

Gram matrix.

T

X-scores matrix.

C

The Y-loading weights matrix.

U

Intermediate output.

R

The PLS projection matrix (p, nlv).

ymeans

The centering vector of Y (q, 1).

weights

Vector of observation weights.

kern

The kernel function used.

dots

Optional arguments.
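The components above can be inspected directly on a fitted model. A short sketch, assuming Xtrain and Ytrain as defined in EXAMPLE 1 below:

```r
## Sketch: inspecting the output components of kplsr.
fm <- kplsr(Xtrain, Ytrain, nlv = 2, kern = "krbf", gamma = .8)
names(fm)     # components listed above (X, Kt, T, C, U, R, ymeans, ...)
dim(fm$T)     # X-scores matrix: one row per training observation
fm$weights    # normalized observation weights (sum to 1)
```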

For transform.Kplsr: X-scores matrix for new X-data.

For coef.Kplsr:

int

Intercept values matrix.

beta

Beta coefficient matrix.

For predict.Kplsr:

pred

Predicted values matrix for new X-data.

Note

The second example concerns the fitting of the function sinc(x) described in Rosipal & Trejo (2001), pp. 105-106.

References

Rosipal, R., Trejo, L.J., 2001. Kernel Partial Least Squares Regression in Reproducing Kernel Hilbert Space. Journal of Machine Learning Research 2, 97-123.

Examples


## EXAMPLE 1

n <- 6 ; p <- 4
Xtrain <- matrix(rnorm(n * p), ncol = p)
ytrain <- rnorm(n)
Ytrain <- cbind(y1 = ytrain, y2 = 100 * ytrain)
m <- 3
Xtest <- Xtrain[1:m, , drop = FALSE] 
Ytest <- Ytrain[1:m, , drop = FALSE] ; ytest <- Ytest[1:m, 1]

nlv <- 2
fm <- kplsr(Xtrain, Ytrain, nlv = nlv, kern = "krbf", gamma = .8)
transform(fm, Xtest)
transform(fm, Xtest, nlv = 1)
coef(fm)
coef(fm, nlv = 1)

predict(fm, Xtest)
predict(fm, Xtest, nlv = 0:nlv)$pred

pred <- predict(fm, Xtest)$pred
msep(pred, Ytest)

nlv <- 2
fm <- kplsr(Xtrain, Ytrain, nlv = nlv, kern = "kpol", degree = 2, coef0 = 10)
predict(fm, Xtest, nlv = nlv)

## EXAMPLE 2

x <- seq(-10, 10, by = .2)
x[x == 0] <- 1e-5
n <- length(x)
zy <- sin(abs(x)) / abs(x)
y <- zy + rnorm(n, 0, .2)
plot(x, y, type = "p")
lines(x, zy, lty = 2)
X <- matrix(x, ncol = 1)

nlv <- 2
fm <- kplsr(X, y, nlv = nlv)
pred <- predict(fm, X)$pred
plot(X, y, type = "p")
lines(X, zy, lty = 2)
lines(X, pred, col = "red")


[Package rchemo version 0.1-1 Index]