AcqOptimizer {mlr3mbo} | R Documentation |
Acquisition Function Optimizer
Description
Optimizer for AcqFunctions which performs the acquisition function optimization. Wraps a bbotk::Optimizer and a bbotk::Terminator.
Parameters
n_candidates
integer(1)
Number of candidate points to propose. Note that this does not affect how the acquisition function itself is calculated (e.g., setting n_candidates > 1 will not result in computing the q- or multi-Expected Improvement); rather, the top n_candidates are selected from the bbotk::Archive of the acquisition function bbotk::OptimInstance. Note that setting n_candidates > 1 is usually not a sensible idea but it is still supported for experimental reasons. Default is 1.
logging_level
character(1)
Logging level during the acquisition function optimization. Can be "fatal", "error", "warn", "info", "debug" or "trace". Default is "warn", i.e., only warnings are logged.
warmstart
logical(1)
Should the acquisition function optimization be warm-started by evaluating the best point(s) present in the bbotk::Archive of the actual bbotk::OptimInstance? This is sensible when using a population-based acquisition function optimizer, e.g., local search or mutation. Default is FALSE.
warmstart_size
integer(1) | "all"
Number of best points selected from the bbotk::Archive that are used for warm starting. Can also be "all" to use all available points. Only relevant if warmstart = TRUE. Default is 1.
skip_already_evaluated
logical(1)
It can happen that the candidate resulting from the acquisition function optimization was already evaluated in a previous iteration. Should this candidate proposal be ignored and the next best point be selected as a candidate instead? Default is TRUE.
catch_errors
logical(1)
Should errors during the acquisition function optimization be caught and propagated to the loop_function, which can then handle the failed acquisition function optimization appropriately, e.g., by proposing a randomly sampled point for evaluation? Default is TRUE.
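The parameters above are set on the optimizer's paradox::ParamSet after construction. A minimal sketch (assuming the bbotk and mlr3mbo packages are attached and an acquisition function `acq_function` has already been constructed; the specific values chosen here are purely illustrative):

```r
library(bbotk)
library(mlr3mbo)

# Build an acquisition function optimizer: random search with batches of
# 1000 points, stopped after 1000 evaluations.
acq_optimizer = acqo(
  optimizer = opt("random_search", batch_size = 1000),
  terminator = trm("evals", n_evals = 1000),
  acq_function = acq_function)

# Configure the optimizer via its paradox::ParamSet:
acq_optimizer$param_set$values$logging_level = "info"  # log more verbosely
acq_optimizer$param_set$values$catch_errors = TRUE     # let the loop_function handle failures
acq_optimizer$param_set$values$warmstart = TRUE        # seed with the best archive point(s);
acq_optimizer$param_set$values$warmstart_size = 2      # most useful for population-based optimizers
```

Note that warm starting is mainly beneficial for population-based optimizers (e.g., local search); with plain random search, as used here for illustration, it has little effect.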
Public fields
optimizer
(bbotk::Optimizer).
terminator
(bbotk::Terminator).
acq_function
(AcqFunction).
Active bindings
print_id
(character)
Id used when printing.
param_set
(paradox::ParamSet)
Set of hyperparameters.
Methods
Public methods
Method new()
Creates a new instance of this R6 class.
Usage
AcqOptimizer$new(optimizer, terminator, acq_function = NULL)
Arguments
optimizer
(bbotk::Optimizer).
terminator
(bbotk::Terminator).
acq_function
(NULL | AcqFunction).
Method format()
Helper for print outputs.
Usage
AcqOptimizer$format()
Method print()
Print method.
Usage
AcqOptimizer$print()
Returns
(character()).
Method optimize()
Optimize the acquisition function.
Usage
AcqOptimizer$optimize()
Returns
data.table::data.table() with 1 row per optimum and x as columns.
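A typical follow-up is to evaluate the returned candidate on the actual bbotk::OptimInstance. A sketch, assuming `instance` and `acq_optimizer` have been set up as in the Examples section below; subsetting the candidate to the search space columns mirrors what mlr3mbo's loop functions do:

```r
# Obtain the candidate(s); columns include the search space ids (here "x")
# plus auxiliary columns from the acquisition function optimization.
candidate = acq_optimizer$optimize()

# Evaluate the candidate on the real objective, keeping only the
# search space columns.
instance$eval_batch(candidate[, instance$search_space$ids(), with = FALSE])
```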
Method clone()
The objects of this class are cloneable with this method.
Usage
AcqOptimizer$clone(deep = FALSE)
Arguments
deep
Whether to make a deep clone.
Examples
if (requireNamespace("mlr3learners") &&
    requireNamespace("DiceKriging") &&
    requireNamespace("rgenoud")) {

  library(bbotk)
  library(paradox)
  library(mlr3learners)
  library(data.table)

  fun = function(xs) {
    list(y = xs$x ^ 2)
  }
  domain = ps(x = p_dbl(lower = -10, upper = 10))
  codomain = ps(y = p_dbl(tags = "minimize"))
  objective = ObjectiveRFun$new(fun = fun, domain = domain, codomain = codomain)

  instance = OptimInstanceBatchSingleCrit$new(
    objective = objective,
    terminator = trm("evals", n_evals = 5))

  instance$eval_batch(data.table(x = c(-6, -5, 3, 9)))

  learner = default_gp()

  surrogate = srlrn(learner, archive = instance$archive)

  acq_function = acqf("ei", surrogate = surrogate)

  acq_function$surrogate$update()
  acq_function$update()

  acq_optimizer = acqo(
    optimizer = opt("random_search", batch_size = 1000),
    terminator = trm("evals", n_evals = 1000),
    acq_function = acq_function)

  acq_optimizer$optimize()
}