dkrr {rchemo}		R Documentation

Direct KRR Models

Description

Direct kernel ridge regression (DKRR), following the same approach as DKPLSR (Bennett & Embrechts 2003). The method builds a kernel Gram matrix from the X-data and then runs an RR (ridge regression) algorithm on it. This is not equivalent to the "true" KRR (= LS-SVM) algorithm.
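
A minimal conceptual sketch of this "direct kernel" idea (an illustration under simplifying assumptions, not the internal code of dkrr, which additionally handles centering, observation weights and multivariate Y):

set.seed(1)
n <- 20 ; p <- 3 ; gamma <- .8 ; lambda <- 1e-2
X <- matrix(rnorm(n * p), ncol = p)
y <- rnorm(n)
## RBF Gram matrix: K[i, j] = exp(-gamma * ||x_i - x_j||^2)
K <- exp(-gamma * as.matrix(dist(X))^2)
## "Direct kernel" step: treat the n columns of K as predictors and run a
## plain ridge regression on them (not equivalent to true KRR / LS-SVM)
B <- solve(crossprod(K) + lambda * diag(n), crossprod(K, y))
fitted <- K %*% B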

Usage


dkrr(X, Y, weights = NULL, lb = 1e-2, kern = "krbf", ...)

## S3 method for class 'Dkrr'
coef(object, ..., lb = NULL)  

## S3 method for class 'Dkrr'
predict(object, X, ..., lb = NULL)  

Arguments

X

For the main function: Training X-data (n, p). — For the auxiliary functions: New X-data (m, p) to consider.

Y

Training Y-data (n, q).

weights

Weights (n, 1) to apply to the training observations. Internally, weights are "normalized" to sum to 1. Defaults to NULL (weights are then set to 1 / n).

lb

A value of the regularization parameter lambda. For the predict method, a vector of values can be given (see the examples).

kern

Name of the function defining the kernel used to build the Gram matrix. See krbf for the syntax and the other available kernel functions.

...

Optional arguments to pass to the kernel function defined in kern (e.g. gamma for krbf).

object

For the auxiliary functions: A fitted model, output of a call to the main function.

Value

For dkrr:

X

Matrix with the training X-data (n, p).

fm

List with the outputs of the RR: (V): eigenvector matrix of the correlation matrix (n, n); (TtDY): intermediate output; (sv): singular values of the matrix (1, n); (lb): value of the regularization parameter lambda; (xmeans): the centering vector of X (p, 1); (ymeans): the centering vector of Y (q, 1); (weights): the weights vector of X-variables (p, 1).

K

Kernel Gram matrix (n, n) computed on the training X-data.

kern

Kernel function used to build the Gram matrix.

dots

Optional arguments passed to the kernel function.

For predict.Dkrr:

pred

Matrix (m, q), or list of such matrices when lb is a vector, with the Y predicted values for the new X-data.

K

Kernel Gram matrix (m, n) between the new X-data and the training X-data.

For coef.Dkrr:

int

Matrix (1, q) with the intercepts.

B

Matrix (n, q) with the coefficients.

df

Model complexity (number of degrees of freedom).

Note

The second example concerns the fitting of the function sinc(x) described in Rosipal & Trejo (2001), pp. 105-106.

References

Bennett, K.P., Embrechts, M.J., 2003. An optimization perspective on kernel partial least squares regression, in: Advances in Learning Theory: Methods, Models and Applications, NATO Science Series III: Computer & Systems Sciences. IOS Press, Amsterdam, pp. 227-250.

Rosipal, R., Trejo, L.J., 2001. Kernel Partial Least Squares Regression in Reproducing Kernel Hilbert Space. Journal of Machine Learning Research 2, 97-123.

Examples


## EXAMPLE 1

n <- 6 ; p <- 4
Xtrain <- matrix(rnorm(n * p), ncol = p)
ytrain <- rnorm(n)
Ytrain <- cbind(y1 = ytrain, y2 = 100 * ytrain)
m <- 3
Xtest <- Xtrain[1:m, , drop = FALSE] 
Ytest <- Ytrain[1:m, , drop = FALSE] ; ytest <- Ytest[1:m, 1]

lb <- 2
fm <- dkrr(Xtrain, Ytrain, lb = lb, kern = "krbf", gamma = .8)
coef(fm)
coef(fm, lb = .6)
predict(fm, Xtest)
predict(fm, Xtest, lb = c(0.1, .8))

pred <- predict(fm, Xtest)$pred
msep(pred, Ytest)

lb <- 2
fm <- dkrr(Xtrain, Ytrain, lb = lb, kern = "kpol", degree = 2, coef0 = 10)
predict(fm, Xtest)

## EXAMPLE 2

x <- seq(-10, 10, by = .2)
x[x == 0] <- 1e-5
n <- length(x)
zy <- sin(abs(x)) / abs(x)
y <- zy + rnorm(n, 0, .2)
plot(x, y, type = "p")
lines(x, zy, lty = 2)
X <- matrix(x, ncol = 1)

fm <- dkrr(X, y, lb = .01, gamma = .5)
pred <- predict(fm, X)$pred
plot(X, y, type = "p")
lines(X, zy, lty = 2)
lines(X, pred, col = "red")
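
## ADDITIONAL SKETCH (not part of the original help page): a hedged
## illustration of tuning lb on the sinc data above, using a simple random
## split; the split and the lambda grid are illustrative assumptions.
## (predict() also accepts a vector of lb values directly, see EXAMPLE 1.)
set.seed(1)
itrain <- sample(n, round(.7 * n))
Xtr <- X[itrain, , drop = FALSE] ; ytr <- y[itrain]
Xval <- X[-itrain, , drop = FALSE] ; yval <- y[-itrain]
lbs <- 10^seq(-5, 1, by = .5)
err <- sapply(lbs, function(lambda) {
  fm <- dkrr(Xtr, ytr, lb = lambda, gamma = .5)
  pred <- predict(fm, Xval)$pred
  mean((pred - yval)^2)
})
plot(log10(lbs), err, type = "b",
     xlab = "log10(lambda)", ylab = "Validation MSEP")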


[Package rchemo version 0.1-1 Index]