| mlr_tuners_nloptr {mlr3tuning} | R Documentation |
Hyperparameter Tuning with Non-linear Optimization
Description
Subclass for non-linear optimization (NLopt). Calls nloptr::nloptr from package nloptr.
Details
The termination conditions stopval, maxtime and maxeval of nloptr::nloptr() are deactivated and replaced by the bbotk::Terminator subclasses.
The x and function value tolerance termination conditions (xtol_rel = 10^-4, xtol_abs = rep(0.0, length(x0)), ftol_rel = 0.0 and ftol_abs = 0.0) are still available and implemented with their package defaults.
To deactivate these conditions, set them to -1.
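For example, a tuner that is stopped only by a bbotk::Terminator can be obtained by deactivating the remaining tolerance conditions; a minimal sketch, where the chosen algorithm is only illustrative:

library(mlr3tuning)

# deactivate the tolerance-based stopping criteria so that only the Terminator
# (e.g. trm("evals")) ends the tuning
tuner = tnr("nloptr",
  algorithm = "NLOPT_LN_BOBYQA",
  xtol_rel = -1,
  xtol_abs = -1,
  ftol_rel = -1,
  ftol_abs = -1
)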
Dictionary
This Tuner can be instantiated with the associated sugar function tnr():
tnr("nloptr")
Logging
All Tuners use a logger (as implemented in lgr) from package
bbotk.
Use lgr::get_logger("bbotk") to access and control the logger.
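For example, to reduce logging output during tuning (a sketch using the lgr API):

# retrieve the bbotk logger and only show warnings and errors
logger = lgr::get_logger("bbotk")
logger$set_threshold("warn")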
Optimizer
This Tuner is based on bbotk::OptimizerBatchNLoptr which can be applied on any black box optimization problem. See also the documentation of bbotk.
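The optimizer can also be retrieved directly from bbotk and applied to a bbotk::OptimInstance; a minimal sketch, assuming the bbotk and nloptr packages are installed:

library(bbotk)

# the same NLopt-based optimizer, usable on any black box optimization problem
optimizer = opt("nloptr", algorithm = "NLOPT_LN_BOBYQA")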
Parameters
algorithm: character(1)
eval_g_ineq: function()
xtol_rel: numeric(1)
xtol_abs: numeric(1)
ftol_rel: numeric(1)
ftol_abs: numeric(1)
start_values: character(1)
Create "random" start values or base them on the "center" of the search space? In the latter case, it is the center of the parameters before a trafo is applied.
For the meaning of the control parameters, see nloptr::nloptr() and
nloptr::nloptr.print.options().
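A sketch of setting these control parameters, either at construction or later via the parameter set (the values are illustrative only):

library(mlr3tuning)

# set control parameters at construction ...
tuner = tnr("nloptr", algorithm = "NLOPT_LN_BOBYQA", start_values = "center")

# ... or change them afterwards via the parameter set
tuner$param_set$values$xtol_rel = 1e-3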
Resources
There are several sections about hyperparameter optimization in the mlr3book.
The gallery features a collection of case studies and demos about optimization.
Use the Hyperband optimizer with different budget parameters.
Progress Bars
$optimize() supports progress bars via the package progressr
combined with a Terminator. Simply wrap the function in
progressr::with_progress() to enable them. We recommend using the
progress package as the backend; enable it with progressr::handlers("progress").
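A sketch of a tuning run with progress bars enabled; the setup mirrors the Examples section and assumes the rpart and palmerpenguins packages are installed:

library(mlr3tuning)
library(progressr)

# use the progress package as backend
handlers("progress")

# wrap the tuning call; the Terminator budget (term_evals) drives the progress bar
with_progress({
  instance = tune(
    tuner = tnr("nloptr", algorithm = "NLOPT_LN_BOBYQA"),
    task = tsk("penguins"),
    learner = lrn("classif.rpart", cp = to_tune(1e-04, 1e-1, logscale = TRUE)),
    resampling = rsmp("holdout"),
    measure = msr("classif.ce"),
    term_evals = 10
  )
})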
Super classes
mlr3tuning::Tuner -> mlr3tuning::TunerBatch -> mlr3tuning::TunerBatchFromOptimizerBatch -> TunerBatchNLoptr
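The inheritance chain can be inspected on a constructed tuner; the comment shows the expected class vector:

class(tnr("nloptr", algorithm = "NLOPT_LN_BOBYQA"))
# "TunerBatchNLoptr" "TunerBatchFromOptimizerBatch" "TunerBatch" "Tuner" "R6"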
Methods
Public methods
Method new()
Creates a new instance of this R6 class.
Usage
TunerBatchNLoptr$new()
Method clone()
The objects of this class are cloneable with this method.
Usage
TunerBatchNLoptr$clone(deep = FALSE)
Arguments
deep: Whether to make a deep clone.
Source
Johnson, S G (2020). “The NLopt nonlinear-optimization package.” https://github.com/stevengj/nlopt.
See Also
Other Tuner:
Tuner,
mlr_tuners,
mlr_tuners_cmaes,
mlr_tuners_design_points,
mlr_tuners_gensa,
mlr_tuners_grid_search,
mlr_tuners_internal,
mlr_tuners_irace,
mlr_tuners_random_search
Examples
# Hyperparameter Optimization
library(mlr3tuning)

# load learner and set search space
learner = lrn("classif.rpart",
  cp = to_tune(1e-04, 1e-1, logscale = TRUE)
)
# run hyperparameter tuning on the Palmer Penguins data set
instance = tune(
  tuner = tnr("nloptr", algorithm = "NLOPT_LN_BOBYQA"),
  task = tsk("penguins"),
  learner = learner,
  resampling = rsmp("holdout"),
  measure = msr("classif.ce")
)
# best performing hyperparameter configuration
instance$result
# all evaluated hyperparameter configurations
as.data.table(instance$archive)
# fit final model on complete data set
learner$param_set$values = instance$result_learner_param_vals
learner$train(tsk("penguins"))