mlr_loop_functions_ego {mlr3mbo} | R Documentation
Sequential Single-Objective Bayesian Optimization
Description
Loop function for sequential single-objective Bayesian Optimization. Normally used inside an OptimizerMbo.
In each iteration after the initial design, the surrogate and acquisition function are updated, and the next candidate is chosen by optimizing the acquisition function.
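Schematically, one post-initial-design iteration can be sketched as follows. This is a simplified sketch of the loop body, not the exact implementation; it assumes instance, surrogate, acq_function, and acq_optimizer are already set up and wired together as described in Usage and Note:

```r
# Simplified sketch of one bayesopt_ego iteration:
surrogate$update()                    # refit the surrogate on the archive
acq_function$update()                 # e.g., refresh the best-so-far value for EI
candidate = acq_optimizer$optimize()  # search for the acquisition function optimum
instance$eval_batch(candidate)        # evaluate the proposed candidate point
```

The loop repeats until the instance's bbotk::Terminator signals termination.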
Usage
bayesopt_ego(
  instance,
  surrogate,
  acq_function,
  acq_optimizer,
  init_design_size = NULL,
  random_interleave_iter = 0L
)
Arguments
instance
(bbotk::OptimInstanceBatchSingleCrit)
The instance to be optimized.

surrogate
(Surrogate)
The surrogate to be used.

acq_function
(AcqFunction)
The acquisition function to be used.

acq_optimizer
(AcqOptimizer)
The acquisition function optimizer to be used.

init_design_size
(NULL | integer(1))
Size of the initial design. If NULL and the archive of the instance is empty, a default based on the dimensionality of the search space is used; if the archive already contains evaluations, these are used as the initial design.

random_interleave_iter
(integer(1))
Every random_interleave_iter iteration (starting after the initial design), a point is sampled uniformly at random and evaluated instead of the candidate obtained by optimizing the acquisition function. For example, if random_interleave_iter = 2, random interleaving is performed in the second, fourth, sixth, ... iteration. Default is 0, i.e., no random interleaving.
Value
invisible(instance)
The original instance is modified in-place and returned invisibly.
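Because the instance is modified in-place, the results can be read off the instance itself after optimization, using the standard bbotk accessors (a brief illustration, continuing from an optimized instance as in Examples):

```r
optimizer$optimize(instance)  # returns the modified instance invisibly
instance$result               # data.table with the best point found
instance$result_y             # best observed objective value
instance$archive$data         # all evaluated points, including the initial design
```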
Note
* The acq_function$surrogate, even if already populated, will always be overwritten by the surrogate.
* The acq_optimizer$acq_function, even if already populated, will always be overwritten by the acq_function.
* The surrogate$archive, even if already populated, will always be overwritten by the bbotk::Archive of the bbotk::OptimInstanceBatchSingleCrit.
References
Jones, Donald R., Schonlau, Matthias, Welch, William J. (1998). "Efficient Global Optimization of Expensive Black-Box Functions." Journal of Global Optimization, 13(4), 455–492.
Snoek, Jasper, Larochelle, Hugo, Adams, Ryan P. (2012). "Practical Bayesian Optimization of Machine Learning Algorithms." In Pereira F, Burges CJC, Bottou L, Weinberger KQ (eds.), Advances in Neural Information Processing Systems, volume 25, 2951–2959.
See Also
Other Loop Function:
loop_function, mlr_loop_functions, mlr_loop_functions_emo, mlr_loop_functions_mpcl, mlr_loop_functions_parego, mlr_loop_functions_smsego
Examples
if (requireNamespace("mlr3learners") &&
    requireNamespace("DiceKriging") &&
    requireNamespace("rgenoud")) {
  library(bbotk)
  library(paradox)
  library(mlr3learners)

  # minimize y = x^2 over x in [-10, 10]
  fun = function(xs) {
    list(y = xs$x ^ 2)
  }
  domain = ps(x = p_dbl(lower = -10, upper = 10))
  codomain = ps(y = p_dbl(tags = "minimize"))
  objective = ObjectiveRFun$new(fun = fun, domain = domain, codomain = codomain)

  instance = OptimInstanceBatchSingleCrit$new(
    objective = objective,
    terminator = trm("evals", n_evals = 5))

  surrogate = default_surrogate(instance)
  acq_function = acqf("ei")
  acq_optimizer = acqo(
    optimizer = opt("random_search", batch_size = 100),
    terminator = trm("evals", n_evals = 100))

  optimizer = opt("mbo",
    loop_function = bayesopt_ego,
    surrogate = surrogate,
    acq_function = acq_function,
    acq_optimizer = acq_optimizer)

  optimizer$optimize(instance)

  # expected improvement per second example
  fun = function(xs) {
    list(y = xs$x ^ 2, time = abs(xs$x))
  }
  domain = ps(x = p_dbl(lower = -10, upper = 10))
  codomain = ps(y = p_dbl(tags = "minimize"), time = p_dbl(tags = "time"))
  objective = ObjectiveRFun$new(fun = fun, domain = domain, codomain = codomain)

  instance = OptimInstanceBatchSingleCrit$new(
    objective = objective,
    terminator = trm("evals", n_evals = 5))

  # one surrogate learner each for the objective and the evaluation time
  surrogate = default_surrogate(instance, n_learner = 2)
  surrogate$cols_y = c("y", "time")

  optimizer = opt("mbo",
    loop_function = bayesopt_ego,
    surrogate = surrogate,
    acq_function = acqf("eips"),
    acq_optimizer = acq_optimizer)

  optimizer$optimize(instance)
}