quantKrig {quantkriging}    R Documentation
Quantile Kriging
Description
Implements Quantile Kriging from Plumlee and Tuo (2014).
Usage
quantKrig(x, y, quantv, lower, upper, method = "loo",
type = "Gaussian", rs = TRUE, nm = TRUE, known = NULL,
optstart = NULL, control = list())
Arguments
x: Inputs.
y: Univariate response.
quantv: Vector of quantile values to estimate (e.g., c(0.025, 0.975)).
lower: Lower bounds of the hyperparameters; if isotropic, give the lengthscale and then the nugget; if anisotropic, give the k lengthscales and then the nugget (see the sketch following this argument list).
upper: Upper bounds of the hyperparameters; if isotropic, give the lengthscale and then the nugget; if anisotropic, give the k lengthscales and then the nugget.
method: Either maximum likelihood ('mle') or leave-one-out cross validation ('loo') optimization of the hyperparameters.
type: Covariance type, either 'Gaussian', 'Matern3_2', or 'Matern5_2'.
rs: If TRUE, rescales the inputs to [0, 1].
nm: If TRUE, normalizes the output to mean 0 and variance 1.
known: Fixes all hyperparameters to known values.
optstart: Starting values for the optimization.
control: Control list passed to optim.
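As a sketch of how the hyperparameter bounds are laid out in practice: for an anisotropic fit, lower and upper each contain the k lengthscales followed by the nugget, and known can instead pin the hyperparameters to fixed values in the same order. The design below (X2d, Y2d) and the assumption that known takes the full hyperparameter vector are illustrative, not taken from this documentation.

## Sketch only: bounds for an anisotropic 2-d input (two lengthscales, then nugget).
## X2d and Y2d are made-up replicated inputs and responses for illustration.
X2d <- as.matrix(expand.grid(seq(0, 1, length.out = 10),
                             seq(0, 1, length.out = 10)))
X2d <- X2d[rep(seq_len(nrow(X2d)), each = 20), ]   # 20 replicates per input
Y2d <- rnorm(nrow(X2d), sin(5 * X2d[, 1]) + cos(3 * X2d[, 2]), 0.5)

lb <- c(0.001, 0.001, 0.0001)    # lengthscale 1, lengthscale 2, nugget
ub <- c(10, 10, 1)
fit <- quantKrig(X2d, Y2d, quantv = c(0.1, 0.5, 0.9),
                 lower = lb, upper = ub,
                 method = "mle", type = "Matern5_2")

## Fixing the hyperparameters instead (same ordering; it is assumed here
## that known takes the full hyperparameter vector -- see ?quantKrig):
fit_known <- quantKrig(X2d, Y2d, quantv = c(0.1, 0.5, 0.9),
                       lower = lb, upper = ub, known = c(2, 2, 0.01))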
Details
Fits quantile kriging using a Gaussian (squared exponential) or Matern covariance function. This emulator is intended for stochastic simulations and models the distribution of the results (through the quantiles), not just the mean. The hyperparameters can be trained using maximum likelihood estimation or leave-one-out cross validation, as recommended in Plumlee and Tuo (2014). The GP is trained using the Woodbury formula to improve computation speed under replication, as shown in Binois et al. (2018). To get meaningful results, there should be sufficient replication at each input. The quantile at a location x_0 is estimated by

\mu(x_0) + k_n(x_0) K_n^{-1} (y^{(i)} - \mu(x)),

where K_n is the covariance matrix of the design (with nugget effect), k_n(x_0) the vector of covariances between x_0 and the design points, y^{(i)} the ordered sample closest to that quantile at each input, and \mu(x) the mean at each input.
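To make the role of y^{(i)} concrete, the following is a conceptual sketch (not the package's internal code) of how the ordered sample nearest a target quantile can be picked at each unique input; the simulated data and the order-statistic convention used here are illustrative assumptions.

## Conceptual sketch only, not the package internals.
set.seed(1)
xstar <- seq(0, 1, length.out = 10)                           # unique inputs
reps  <- lapply(xstar, function(x) rnorm(50, cos(5 * x), 1))  # 50 replicates each
alpha <- 0.9                                                  # target quantile
## One simple convention for "the ordered sample closest to that quantile":
y_i <- sapply(reps, function(r) sort(r)[ceiling(alpha * length(r))])
## y_i then plays the role of the data in the kriging predictor
## \mu(x_0) + k_n(x_0) K_n^{-1} (y^{(i)} - \mu(x)).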
Value
- quants: The estimated quantile values in matrix form.
- yquants: The actual quantile values from the data in matrix form.
- g: The scaling parameter for the kernel.
- l: The lengthscale parameter(s).
- ll: The log likelihood.
- beta0: Estimated linear trend.
- nu: Estimator of the variance.
- xstar: Matrix of unique input values.
- ystar: Average response at each unique input value.
- Ki: Inverted covariance matrix.
- quantv: Vector of alpha values between 0 and 1 for the estimated quantiles. It is recommended to fit with only a small number of quantiles; additional quantiles can be obtained later using newQuants (see the sketch after this list).
- mult: Number of replicates at each input.
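Because extra quantiles do not require refitting, a rough usage sketch with newQuants follows; it is assumed here that newQuants takes the fitted object and a new quantile vector (check ?newQuants for the exact signature), with Qout as returned by quantKrig in the Examples below.

## Sketch: re-evaluate the fitted emulator at a denser set of quantiles.
## Assumes newQuants(fit, quantv); see ?newQuants.
Qout_dense <- newQuants(Qout, quantv = seq(0.01, 0.99, length.out = 25))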
References
Matthew Plumlee & Rui Tuo (2014) Building Accurate Emulators for Stochastic Simulations via Quantile Kriging, Technometrics, 56:4, 466-473, DOI: 10.1080/00401706.2013.860919
Mickael Binois, Robert B. Gramacy & Mike Ludkovski (2018) Practical Heteroscedastic Gaussian Process Modeling for Large Simulation Experiments, Journal of Computational and Graphical Statistics, 27:4, 808-821, DOI: 10.1080/10618600.2018.1458625
Examples
# Simple example: 20 design points, 100 replicates each, Gaussian noise
X <- seq(0, 1, length.out = 20)
Y <- cos(5 * X) + cos(X)
Xstar <- rep(X, each = 100)
Ystar <- rep(Y, each = 100)
Ystar <- rnorm(length(Ystar), Ystar, 1)

# Hyperparameter bounds: lengthscale, then nugget (isotropic input)
lb <- c(0.0001, 0.0001)
ub <- c(10, 10)

Qout <- quantKrig(Xstar, Ystar, quantv = seq(0.05, 0.95, length.out = 7),
                  lower = lb, upper = ub)
QuantPlot(Qout, Xstar, Ystar)

# Fit for non-normal (skewed chi-squared) errors
Ystar <- rep(Y, each = 100)
e <- rchisq(length(Ystar), 5) / 5 - 1
Ystar <- Ystar + e

Qout <- quantKrig(Xstar, Ystar, quantv = seq(0.05, 0.95, length.out = 7),
                  lower = lb, upper = ub)
QuantPlot(Qout, Xstar, Ystar)
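As a quick check of the fit above, the emulated quantiles (quants) can be compared with the empirical quantiles of the data (yquants); the sketch below assumes the two are matrices aligned as unique inputs by quantile levels, as described under Value.

## Compare emulated quantiles with the empirical quantiles at the design
## points; assumes Qout$quants and Qout$yquants are aligned matrices.
head(round(Qout$quants, 2))
head(round(Qout$yquants, 2))
colMeans(abs(Qout$quants - Qout$yquants))   # rough per-quantile agreement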