predict {deepgp}    R Documentation

Description

Acts on a gp, dgp2, or dgp3 object. Calculates posterior mean and
variance/covariance over specified input locations. Optionally
calculates expected improvement (EI) over candidate inputs. Optionally
utilizes SNOW parallelization.
Usage

## S3 method for class 'gp'
predict(object, x_new, lite = TRUE, EI = FALSE, cores = detectCores() - 1, ...)

## S3 method for class 'dgp2'
predict(
  object,
  x_new,
  lite = TRUE,
  store_latent = FALSE,
  mean_map = TRUE,
  EI = FALSE,
  cores = detectCores() - 1,
  ...
)

## S3 method for class 'dgp3'
predict(
  object,
  x_new,
  lite = TRUE,
  store_latent = FALSE,
  mean_map = TRUE,
  EI = FALSE,
  cores = detectCores() - 1,
  ...
)

## S3 method for class 'gpvec'
predict(
  object,
  x_new,
  m = object$m,
  lite = TRUE,
  cores = detectCores() - 1,
  ...
)

## S3 method for class 'dgp2vec'
predict(
  object,
  x_new,
  m = object$m,
  lite = TRUE,
  store_latent = FALSE,
  mean_map = TRUE,
  cores = detectCores() - 1,
  ...
)

## S3 method for class 'dgp3vec'
predict(
  object,
  x_new,
  m = object$m,
  lite = TRUE,
  store_latent = FALSE,
  mean_map = TRUE,
  cores = detectCores() - 1,
  ...
)
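For orientation, a minimal sketch of dispatch on a two-layer fit
(settings illustrative; x, y, and x_new are assumed to be suitably
dimensioned inputs):

# sketch: predict.dgp2 is dispatched for a two-layer fit
fit <- fit_two_layer(x, y, nmcmc = 2000)
fit <- trim(fit, 1000, 2)                        # burn-in, then thin
fit <- predict(fit, x_new, store_latent = TRUE)  # also keep w_new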
Arguments

object        object from fit_one_layer, fit_two_layer, or
              fit_three_layer, with burn-in already removed

x_new         matrix of predictive input locations

lite          logical indicating whether to calculate only point-wise
              variances (lite = TRUE) or the full covariance
              (lite = FALSE)

EI            logical indicating whether to calculate expected
              improvement (for minimizing the response)

cores         number of cores to utilize in parallel; defaults to
              available cores minus one

...           N/A

store_latent  logical indicating whether to store and return mapped
              values of latent layers (two or three layer models only)

mean_map      logical indicating whether to map hidden layers using the
              conditional mean (mean_map = TRUE) or a random sample from
              the full MVN distribution (two or three layer models only)

m             size of Vecchia conditioning sets (only for fits with
              vecchia = TRUE); defaults to the m of the original fit
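For Vecchia-approximated fits, m defaults to the conditioning-set size
of the original fit; a sketch (assuming fit was created with
vecchia = TRUE):

fit <- fit_two_layer(x, y, nmcmc = 2000, vecchia = TRUE, m = 25)
fit <- trim(fit, 1000, 2)
fit <- predict(fit, x_new)          # uses m = object$m, here 25
fit <- predict(fit, x_new, m = 10)  # or supply a smaller conditioning set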
Details

All iterations in the object are used for prediction, so samples should
be burned-in. Thinning the samples using trim will speed up computation.
Posterior moments are calculated using conditional expectation and
variance. As a default, only point-wise variance is calculated. Full
covariance may be calculated using lite = FALSE.
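For example, a sketch of thinning followed by a full-covariance
prediction (iteration counts illustrative):

fit <- trim(fit, 5000, 5)                 # drop burn-in, keep every fifth
fit <- predict(fit, x_new, lite = FALSE)  # full covariance in fit$Sigma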
Expected improvement is calculated with the goal of minimizing the response. See Chapter 7 of Gramacy (2020) for details.
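A sketch of EI-based candidate selection (the candidate grid is
illustrative):

x_cand <- matrix(runif(100), ncol = 1)               # candidate inputs
fit <- predict(fit, x_cand, EI = TRUE)
x_star <- x_cand[which.max(fit$EI), , drop = FALSE]  # best EI candidate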
SNOW parallelization reduces computation time but requires more memory.
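Setting cores = 1 avoids the SNOW cluster (and its memory overhead)
entirely; for example:

fit <- predict(fit, x_new, cores = 1)                            # serial
fit <- predict(fit, x_new, cores = parallel::detectCores() - 1)  # SNOW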
Value

object of the same class with the following additional elements:

x_new: copy of predictive input locations

mean: predicted posterior mean, indices correspond to x_new locations

s2: predicted point-wise variances, indices correspond to x_new
locations (only returned when lite = TRUE)

s2_smooth: predicted point-wise variances with g removed, indices
correspond to x_new locations (only returned when lite = TRUE)

Sigma: predicted posterior covariance, indices correspond to x_new
locations (only returned when lite = FALSE)

Sigma_smooth: predicted posterior covariance with g removed from the
diagonal (only returned when lite = FALSE)

EI: vector of expected improvement values, indices correspond to x_new
locations (only returned when EI = TRUE)

w_new: list of hidden layer mappings (only returned when
store_latent = TRUE), list index corresponds to iteration and row index
corresponds to x_new location (two or three layer models only)

z_new: list of hidden layer mappings (only returned when
store_latent = TRUE), list index corresponds to iteration and row index
corresponds to x_new location (three layer models only)

Computation time is added to the computation time of the existing object.
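For instance, the stored point-wise moments support approximate 95%
intervals directly (a sketch, assuming a one-dimensional x_new and
lite = TRUE):

upper <- fit$mean + 2 * sqrt(fit$s2)   # ~95% upper bound
lower <- fit$mean - 2 * sqrt(fit$s2)   # ~95% lower bound
plot(x_new, fit$mean, type = "l", ylim = range(lower, upper))
lines(x_new, upper, lty = 2)
lines(x_new, lower, lty = 2)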
References

Sauer, A., R.B. Gramacy, and D. Higdon. 2020. "Active Learning for Deep
Gaussian Process Surrogates." Technometrics, to appear. arXiv:2012.08015.

Sauer, A., A. Cooper, and R.B. Gramacy. 2022. "Vecchia-approximated Deep
Gaussian Processes for Computer Experiments." Pre-print. arXiv:2204.02904.

Gramacy, R.B. 2020. Surrogates: Gaussian Process Modeling, Design, and
Optimization for the Applied Sciences. Chapman Hall/CRC.
Examples

# See fit_one_layer, fit_two_layer, or fit_three_layer for an example
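For quick reference, a condensed sketch of that workflow (illustrative
settings, one-dimensional input):

x <- matrix(seq(0, 1, length = 30), ncol = 1)
y <- sin(8 * pi * x[, 1])
fit <- fit_one_layer(x, y, nmcmc = 2000)  # one-layer GP fit
fit <- trim(fit, 1000, 2)                 # remove burn-in, thin
x_new <- matrix(seq(0, 1, length = 100), ncol = 1)
fit <- predict(fit, x_new)                # posterior mean and s2
plot(fit)                                 # plot method from this package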