k1Matern3_2 {kergp}    R Documentation

One-Dimensional Classical Covariance Kernel Functions

Description

One-dimensional classical covariance kernel functions.

Usage

k1FunExp(x1, x2, par)
k1FunGauss(x1, x2, par)
k1FunPowExp(x1, x2, par)
k1FunMatern3_2(x1, x2, par)
k1FunMatern5_2(x1, x2, par)

k1Fun1Cos(x)
k1Fun1Exp(x)
k1Fun1Gauss(x)
k1Fun1PowExp(x, alpha = 1.5)
k1Fun1Matern3_2(x)
k1Fun1Matern5_2(x)

Arguments

x1

First location vector.

x2

Second location vector. Must have the same length as x1.

x

For stationary covariance functions, the vector containing the differences of positions: x = x1 - x2 (see the sketch after the argument list).

alpha

Regularity parameter in (0, 2] for the Power Exponential covariance function.

par

Vector of parameters. The length and the meaning of the elements in this vector depend on the chosen kernel. The first parameter is the range (when there is one) and the last is the variance; the shape parameter of k1FunPowExp is therefore the second of its three parameters.
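
A minimal sketch of how the two calling conventions are expected to relate, assuming that the one-argument forms are the unit-range, unit-variance kernels evaluated at the difference x = x1 - x2 (an illustration only, not part of the formal argument description).

x0 <- 0
x  <- seq(from = 0, to = 3, length.out = 5)

## two-argument form: range and variance are passed through 'par'
k2 <- k1FunMatern3_2(x0, x, par = c(range = 1, var = 1))

## one-argument form: takes the difference directly and has no 'par'
k1 <- k1Fun1Matern3_2(x - x0)

## under the above assumption, the two columns should coincide
cbind(twoArg = as.vector(k2), oneArg = as.vector(k1))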

Details

These kernel functions are described in Roustant et al. (2012), Table 1, p. 8. More details are given in Chapter 4 of Rasmussen and Williams (2006).
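
For reference, the sketch below writes the unit-range, unit-variance forms of that table in plain R. Mapping them one-to-one onto k1FunExp, k1FunGauss, k1FunPowExp, k1FunMatern3_2 and k1FunMatern5_2 is an assumption here: the compiled functions may use their own scaling conventions.

## Textbook forms from Roustant et al. (2012), Table 1, with range = 1 and
## variance = 1, written as functions of the difference x (illustration only;
## the package evaluates its kernels in compiled C code)
cExp       <- function(x) exp(-abs(x))
cGauss     <- function(x) exp(-x^2 / 2)
cPowExp    <- function(x, alpha = 1.5) exp(-abs(x)^alpha)
cMatern3_2 <- function(x) (1 + sqrt(3) * abs(x)) * exp(-sqrt(3) * abs(x))
cMatern5_2 <- function(x) (1 + sqrt(5) * abs(x) + 5 * x^2 / 3) *
    exp(-sqrt(5) * abs(x))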

Value

A matrix with a "gradient" attribute. This matrix has n_1 rows and n_2 columns, where n_1 and n_2 are the lengths of x1 and x2. If x1 and x2 both have length 1, the attribute is a vector of the same length p as par and gives the derivative of the kernel with respect to the parameters, in the same order. If x1 or x2 has length > 1, the attribute is an array with dimension (n_1, n_2, p).
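
A quick illustration of the two cases described above (the expected shapes follow directly from this description):

## both x1 and x2 of length 1: the "gradient" attribute is a vector of
## length p = 2, ordered as the parameters in 'par' (range, then var)
k11 <- k1FunMatern3_2(0, 1, par = c(range = 1, var = 2))
attr(k11, "gradient")

## x2 of length 3: the attribute becomes an array of dimension (1, 3, 2)
K <- k1FunMatern3_2(0, c(0, 0.5, 1), par = c(range = 1, var = 2))
dim(attr(K, "gradient"))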

Note

The kernel functions are coded in C through the .Call interface and are mainly intended for internal use. They are used by the covTS class.

Be aware that very few checks are done (length of objects, order of the parameters, ...).

Author(s)

Olivier Roustant, David Ginsbourger, Yves Deville

References

C.E. Rasmussen and C.K.I. Williams (2006). Gaussian Processes for Machine Learning. The MIT Press. doi:10.7551/mitpress/3206.001.0001

O. Roustant, D. Ginsbourger, Y. Deville (2012). "DiceKriging, DiceOptim: Two R Packages for the Analysis of Computer Experiments by Kriging-Based Metamodeling and Optimization." Journal of Statistical Software, 51(1), 1-55. doi:10.18637/jss.v051.i01

Examples

## show the functions
n <- 300
x0 <- 0
x <- seq(from = 0, to = 3, length.out = n)
kExpVal <- k1FunExp(x0, x, par = c(range = 1, var = 2))
kGaussVal <- k1FunGauss(x0, x, par = c(range = 1, var = 2))
kPowExpVal <- k1FunPowExp(x0, x, par = c(range = 1, shape = 1.5, var = 2))
kMatern3_2Val <- k1FunMatern3_2(x0, x, par = c(range = 1, var = 2))
kMatern5_2Val <- k1FunMatern5_2(x0, x, par = c(range = 1, var = 2))
kerns <- cbind(as.vector(kExpVal), as.vector(kGaussVal), as.vector(kPowExpVal),
               as.vector(kMatern3_2Val), as.vector(kMatern5_2Val))
matplot(x, kerns, type = "l", main = "five 'kergp' 1d-kernels", lwd = 2)

## extract gradient
head(attr(kPowExpVal, "gradient"))
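
## Not in the original example: the one-argument (stationary) forms take the
## difference x = x1 - x2 directly and have no 'par' argument; 'alpha' is the
## shape of the power-exponential kernel. This sketch assumes they evaluate
## elementwise on a vector of differences.
d <- seq(from = -3, to = 3, length.out = 301)
kerns1 <- cbind(as.vector(k1Fun1Cos(d)), as.vector(k1Fun1Exp(d)),
                as.vector(k1Fun1Gauss(d)), as.vector(k1Fun1PowExp(d, alpha = 1.5)),
                as.vector(k1Fun1Matern3_2(d)), as.vector(k1Fun1Matern5_2(d)))
matplot(d, kerns1, type = "l", lwd = 2,
        main = "six one-argument 'kergp' 1d-kernels")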
