KLR {calibrateBinary}                R Documentation

Kernel Logistic Regression

Description

The function performs a kernel logistic regression for binary outputs.

Usage

KLR(X, y, xnew, lambda = 0.01, kernel = c("matern", "exponential")[1],
  nu = 1.5, power = 1.95, rho = 0.1)

Arguments

X

a design matrix with dimension n by d.

y

a response vector with length n. The values in the vector are 0 or 1.

xnew

a testing matrix with dimension n_new by d in which each row corresponds to a predictive location.

lambda

a positive value specifying the tuning parameter for KLR. The default is 0.01.

kernel

"matern" or "exponential" which specifies the matern kernel or power exponential kernel. The default is "matern".

nu

a positive value specifying the order of the Matern kernel when kernel == "matern". The default is 1.5.

power

a positive value (between 1.0 and 2.0) specifying the power of the power exponential kernel when kernel == "exponential". The default is 1.95.

rho

a positive value specifying the scale parameter of the Matern and power exponential kernels. The default is 0.1.

Details

This function performs kernel logistic regression, where the kernel can be chosen to be either the Matern kernel or the power exponential kernel via the argument kernel. The arguments power and rho are the tuning parameters of the power exponential kernel, and nu and rho are the tuning parameters of the Matern kernel. The power exponential kernel has the form

K_{ij}=\exp(-\frac{\sum_{k}{|x_{ik}-x_{jk}|^{power}}}{rho}),

and the Matern kernel has the form

K_{ij}=\prod_{k}\frac{1}{\Gamma(nu)2^{nu-1}}(2\sqrt{nu}\frac{|x_{ik}-x_{jk}|}{rho})^{nu} \kappa_{nu}(2\sqrt{nu}\frac{|x_{ik}-x_{jk}|}{rho}),

where \kappa_{nu} denotes the modified Bessel function of the second kind of order nu.

The argument lambda is the regularization (tuning) parameter that controls the smoothness of the fitted function.
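
The two kernel formulas above can also be written out directly in base R. The following is a minimal sketch based on the formulas as stated (it is not the package's internal implementation, and the function names pexp_kernel and matern_kernel are illustrative), forming the kernel matrix between two sets of inputs:

pexp_kernel <- function(X1, X2, power = 1.95, rho = 0.1) {
  X1 <- as.matrix(X1); X2 <- as.matrix(X2)
  K <- matrix(0, nrow(X1), nrow(X2))
  for (i in seq_len(nrow(X1)))
    for (j in seq_len(nrow(X2)))
      # K_ij = exp(-sum_k |x_ik - x_jk|^power / rho)
      K[i, j] <- exp(-sum(abs(X1[i, ] - X2[j, ])^power) / rho)
  K
}

matern_kernel <- function(X1, X2, nu = 1.5, rho = 0.1) {
  X1 <- as.matrix(X1); X2 <- as.matrix(X2)
  K <- matrix(1, nrow(X1), nrow(X2))
  for (i in seq_len(nrow(X1)))
    for (j in seq_len(nrow(X2)))
      for (k in seq_len(ncol(X1))) {
        # product over dimensions of the Matern correlation; besselK is the
        # modified Bessel function of the second kind in base R
        r <- 2 * sqrt(nu) * abs(X1[i, k] - X2[j, k]) / rho
        term <- if (r == 0) 1 else r^nu * besselK(r, nu) / (gamma(nu) * 2^(nu - 1))
        K[i, j] <- K[i, j] * term
      }
  K
}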

Value

Predictive probabilities at the locations given in xnew.
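
For illustration only (this snippet is not part of the package's documented examples and the object names are hypothetical), the returned probabilities can be thresholded to obtain hard 0/1 class labels:

library(calibrateBinary)
set.seed(1)
X <- matrix(runif(20), ncol = 1)                  # synthetic design points
y <- rbinom(20, 1, 0.5)                           # synthetic binary responses
xnew <- matrix(seq(0, 1, length.out = 5), ncol = 1)
phat <- KLR(X, y, xnew)                           # predictive probabilities, one per row of xnew
yhat <- as.numeric(phat > 0.5)                    # optional 0/1 classification by thresholding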

Author(s)

Chih-Li Sung <iamdfchile@gmail.com>

References

Zhu, J. and Hastie, T. (2005). Kernel logistic regression and the import vector machine. Journal of Computational and Graphical Statistics, 14(1), 185-205.

See Also

cv.KLR for performing cross-validation to choose the tuning parameters.

Examples

library(calibrateBinary)

set.seed(1)
np <- 10
xp <- seq(0,1,length.out = np)
eta_fun <- function(x) exp(exp(-0.5*x)*cos(3.5*pi*x)-1) # true probability function
eta_x <- eta_fun(xp)
yp <- rep(0,np)
for(i in 1:np) yp[i] <- rbinom(1,1, eta_x[i])   # binary responses drawn from the true probabilities

x.test <- seq(0,1,0.001)        # predictive locations
etahat <- KLR(xp,yp,x.test)     # KLR fit with the default tuning parameters

plot(xp,yp)
curve(eta_fun, col = "blue", lty = 2, add = TRUE)
lines(x.test, etahat, col = 2)

#####   cross-validation with K=5    #####
##### to determine the parameter rho #####

cv.out <- cv.KLR(xp,yp,K=5)
print(cv.out)

etahat.cv <- KLR(xp,yp,x.test,lambda=cv.out$lambda,rho=cv.out$rho)   # refit with CV-chosen tuning parameters

plot(xp,yp)
curve(eta_fun, col = "blue", lty = 2, add = TRUE)
lines(x.test, etahat, col = 2)
lines(x.test, etahat.cv, col = 3)
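
#####   multi-dimensional inputs   #####
## Illustrative sketch (not from the package's official examples): X is
## documented as an n by d design matrix, so an input with d > 1 can be
## supplied in the same way; the data below are synthetic.

set.seed(2)
X2 <- matrix(runif(40), ncol = 2)        # 20 design points in [0,1]^2
p2 <- eta_fun(X2[,1]) * eta_fun(X2[,2])  # synthetic true probabilities in (0,1]
y2 <- rbinom(20, 1, p2)                  # binary responses
xnew2 <- as.matrix(expand.grid(seq(0, 1, 0.1), seq(0, 1, 0.1)))
etahat2 <- KLR(X2, y2, xnew2)            # predictive probabilities on the grid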

