gp_optim {gplite}		R Documentation
Optimize hyperparameters of a GP model
Description
This function can be used to optimize the hyperparameters of the model to the maximum marginal likelihood (or maximum marginal posterior if priors are used), using the Nelder-Mead algorithm.
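As a rough sketch of what this means in code (assuming, as the name suggests, that gp_energy returns the negative log marginal posterior, the "energy", of a fitted model), the optimization increases the log marginal posterior:

library(gplite)

# toy data: 20 observations, 2 inputs
set.seed(1)
x <- matrix(rnorm(40), ncol = 2)
y <- sin(x[, 1]) + 0.25 * rnorm(20)

gp <- gp_init(cf_sexp(), lik_gaussian())
gp <- gp_fit(gp, x, y)       # fit with the initial hyperparameter values
e0 <- gp_energy(gp)          # energy = negative log marginal posterior

gp <- gp_optim(gp, x, y)     # maximize the log marginal posterior
e1 <- gp_energy(gp)          # expected: e1 <= e0 after successful optimization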
Usage
gp_optim(
  gp,
  x,
  y,
  tol = 1e-04,
  tol_param = 0.1,
  maxiter = 500,
  restarts = 1,
  verbose = TRUE,
  warnings = TRUE,
  ...
)
Arguments
gp: The gp model object to be fitted.

x: n-by-d matrix of input values (n is the number of observations and d the input dimension). Can also be a vector of length n if the model has only a single input.

y: Vector of n output (target) values.
tol: Relative change in the objective function value (marginal log posterior) below which the optimization is terminated. This is passed to stats::optim as a convergence criterion.
tol_param: After the optimizer (Nelder-Mead) has terminated, the hyperparameter values found are checked for convergence within tolerance tol_param. More precisely, if any of the hyperparameters is perturbed by tol_param or -tol_param, the resulting log posterior must be smaller than its value at the found hyperparameters. If it is not, the optimizer automatically attempts a restart (see argument restarts). Note: tol_param is applied to the logarithms of the parameters (e.g. the log length-scale), not to the native parameter values. A sketch of this check is given after this argument list.
maxiter: Maximum number of iterations.
restarts: Number of restarts allowed during optimization. The Nelder-Mead iteration can sometimes terminate prematurely before a local optimum is found, and this argument specifies how many times the optimization is allowed to restart from where it left off when Nelder-Mead terminated. By setting restarts > 0, one can often find a local optimum without having to call gp_optim several times. Note: there is usually no need to allow more than a few (say 1-3) restarts; if the optimization does not converge with a few restarts, reducing the argument tol is usually the way to achieve convergence. If that does not help either, the optimization problem is likely ill-conditioned in some way.
verbose: If TRUE, then some information about the progress of the optimization is printed to the console.
warnings: Whether to print potential warnings (such as the maximum number of iterations being reached) during the optimization.
...: Further arguments to be passed to gp_fit.
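To make the tol_param check concrete, here is a minimal illustrative sketch (a hypothetical re-implementation for exposition, not the package's internal code); objective stands for a function that returns the log posterior for a vector of log-scale hyperparameters:

converged_within_tol <- function(objective, param, tol_param = 0.1) {
  base <- objective(param)
  for (j in seq_along(param)) {
    for (delta in c(-tol_param, tol_param)) {
      perturbed <- param
      perturbed[j] <- perturbed[j] + delta
      # if any perturbation improves the objective, we have not converged
      # and the optimizer would attempt a restart
      if (objective(perturbed) > base) return(FALSE)
    }
  }
  TRUE
}

# toy check with a quadratic objective peaked at the origin
converged_within_tol(function(p) -sum(p^2), param = c(0.01, -0.02))  # TRUE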
Value
An updated GP model object.
References
Rasmussen, C. E. and Williams, C. K. I. (2006). Gaussian processes for machine learning. MIT Press.
Examples
# Generate some toy data
set.seed(1242)
n <- 50
x <- matrix(rnorm(n * 3), nrow = n)
f <- sin(x[, 1]) + 0.5 * x[, 2]^2 + x[, 3]
y <- f + 0.5 * rnorm(n)
x <- data.frame(x1 = x[, 1], x2 = x[, 2], x3 = x[, 3])
# Basic usage
cf <- cf_sexp()
lik <- lik_gaussian()
gp <- gp_init(cf, lik)
gp <- gp_optim(gp, x, y)
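# An illustrative variation (argument values are examples, not
# recommendations): tighten the tolerance and allow a couple of restarts
gp <- gp_optim(gp, x, y, tol = 1e-05, restarts = 2, verbose = FALSE)

# Predictions with the optimized hyperparameters (see gp_pred for details)
pred <- gp_pred(gp, x, var = TRUE)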