tune {mlr3tuning} | R Documentation
Function for Tuning a Learner
Description
Function to tune a mlr3::Learner.
The function internally creates a TuningInstanceBatchSingleCrit or TuningInstanceBatchMultiCrit which describes the tuning problem.
It executes the tuning with the Tuner (tuner) and returns the result with the tuning instance ($result).
The ArchiveBatchTuning and ArchiveAsyncTuning ($archive) store all evaluated hyperparameter configurations and performance scores.
You can find an overview of all tuners on our website.
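Both fields are available on the returned instance. A minimal sketch, assuming mlr3tuning is loaded and a tuning run has finished (see Examples below):

# best hyperparameter configuration and its score
instance$result

# all evaluated configurations and performance scores
as.data.table(instance$archive)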
Usage
tune(
  tuner,
  task,
  learner,
  resampling,
  measures = NULL,
  term_evals = NULL,
  term_time = NULL,
  terminator = NULL,
  search_space = NULL,
  store_benchmark_result = TRUE,
  store_models = FALSE,
  check_values = FALSE,
  callbacks = NULL,
  rush = NULL
)
Arguments
tuner | (Tuner) | Optimization algorithm.
task | (mlr3::Task) | Task to operate on.
learner | (mlr3::Learner) | Learner to tune.
resampling | (mlr3::Resampling) | Resampling used to evaluate the performance of the hyperparameter configurations.
measures | (mlr3::Measure or list of mlr3::Measure) | A single measure creates a TuningInstanceBatchSingleCrit; multiple measures create a TuningInstanceBatchMultiCrit. If NULL, the default measure of the task type is used.
term_evals | (integer(1)) | Number of allowed evaluations. Ignored if terminator is passed.
term_time | (integer(1)) | Maximum allowed time in seconds. Ignored if terminator is passed.
terminator | (bbotk::Terminator) | Stop criterion of the tuning process.
search_space | (paradox::ParamSet) | Hyperparameter search space. If NULL (default), the search space is constructed from the paradox::TuneToken of the learner's parameter set.
store_benchmark_result | (logical(1)) | If TRUE (default), resample results of the evaluated hyperparameter configurations are stored in the archive.
store_models | (logical(1)) | If TRUE, fitted models are stored in the benchmark result.
check_values | (logical(1)) | If TRUE, hyperparameter values are checked before evaluation and performance scores after; if FALSE (default), values are unchecked, reducing overhead.
callbacks | (list of mlr3misc::Callback) | List of callbacks.
rush | (Rush) | If a rush instance is supplied, the tuning runs asynchronously.
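Passing multiple measures switches to multi-criteria tuning. A minimal sketch, assuming mlr3tuning is loaded; the task, measures, and budget are illustrative:

instance = tune(
  tuner = tnr("random_search", batch_size = 2),
  task = tsk("sonar"),
  learner = lrn("classif.rpart", cp = to_tune(1e-04, 1e-1)),
  resampling = rsmp("cv", folds = 3),
  measures = msrs(c("classif.fpr", "classif.tpr")),
  term_evals = 10
)
# Pareto-optimal configurations
instance$result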
Details
The mlr3::Task, mlr3::Learner, mlr3::Resampling, mlr3::Measure and bbotk::Terminator are used to construct a TuningInstanceBatchSingleCrit.
If multiple performance mlr3::Measures are supplied, a TuningInstanceBatchMultiCrit is created.
The parameters term_evals and term_time are shortcuts to create a bbotk::Terminator.
If both parameters are passed, a bbotk::TerminatorCombo is constructed.
For other Terminators, pass one with terminator.
If no termination criterion is needed, set term_evals, term_time and terminator to NULL.
The search space is created from paradox::TuneToken or is supplied by search_space.
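A minimal sketch of the terminator shortcuts; the task and budget values are illustrative:

# stop after 20 evaluations or 60 seconds, whichever comes first;
# passing both shortcuts constructs a bbotk::TerminatorCombo internally
instance = tune(
  tuner = tnr("random_search"),
  task = tsk("pima"),
  learner = lrn("classif.rpart", cp = to_tune(1e-04, 1e-1)),
  resampling = rsmp("holdout"),
  term_evals = 20,
  term_time = 60
)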
Value
TuningInstanceBatchSingleCrit | TuningInstanceBatchMultiCrit
Resources
There are several sections about hyperparameter optimization in the mlr3book.
- Simplify tuning with the tune() function.
- Learn about tuning spaces.
The gallery features a collection of case studies and demos about optimization.
- Optimize an rpart classification tree with only a few lines of code.
- Tune an XGBoost model with early stopping.
- Make use of proven search spaces.
- Learn about hotstarting models.
Default Measures
If no measure is passed, the default measure is used. The default measure depends on the task type.
Task | Default Measure | Package
"classif" | "classif.ce" | mlr3
"regr" | "regr.mse" | mlr3
"surv" | "surv.cindex" | mlr3proba
"dens" | "dens.logloss" | mlr3proba
"classif_st" | "classif.ce" | mlr3spatial
"regr_st" | "regr.mse" | mlr3spatial
"clust" | "clust.dunn" | mlr3cluster
Analysis
For analyzing the tuning results, it is recommended to pass the ArchiveBatchTuning to as.data.table().
The returned data table is joined with the benchmark result, which adds the mlr3::ResampleResult for each hyperparameter evaluation.
The archive provides various getters (e.g. $learners()) to ease access.
All getters extract by position (i) or unique hash (uhash).
For a complete list of all getters see the methods section.
The benchmark result ($benchmark_result) allows scoring the hyperparameter configurations again on a different measure.
Alternatively, measures can be supplied to as.data.table().
The mlr3viz package provides visualizations for tuning results.
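A minimal sketch of these accessors, assuming a finished instance and a measure that fits the task (here "classif.acc" as an illustration):

# archive as a table, scored with an additional measure
as.data.table(instance$archive, measures = msr("classif.acc"))

# learner of the first evaluation
instance$archive$learners(i = 1)

# rescore all resample results on another measure
instance$archive$benchmark_result$score(msr("classif.acc"))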
Examples
# Hyperparameter optimization on the Pima Indians Diabetes data set
task = tsk("pima")
# Load learner and set search space
learner = lrn("classif.rpart",
cp = to_tune(1e-04, 1e-1, logscale = TRUE)
)
# Run tuning
instance = tune(
  tuner = tnr("random_search", batch_size = 2),
  task = task,
  learner = learner,
  resampling = rsmp("holdout"),
  measures = msr("classif.ce"),
  terminator = trm("evals", n_evals = 4)
)
# Set optimal hyperparameter configuration to learner
learner$param_set$values = instance$result_learner_param_vals
# Train the learner on the full data set
learner$train(task)
# Inspect all evaluated configurations
as.data.table(instance$archive)