ResampleResult {mlr3}    R Documentation
Container for Results of resample()
Description
This is the result container object returned by resample().
Note that all stored objects are accessed by reference. Do not modify any object without cloning it first.
ResampleResults can be visualized via mlr3viz's autoplot()
function.
S3 Methods
- as.data.table(rr, reassemble_learners = TRUE, convert_predictions = TRUE, predict_sets = "test")
  ResampleResult -> data.table::data.table()
  Returns a tabular view of the internal data.
- c(...)
  (ResampleResult, ...) -> BenchmarkResult
  Combines multiple objects convertible to BenchmarkResult into a new BenchmarkResult.
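For example, assuming rr, rr1 and rr2 are ResampleResult objects (e.g. created as in the Examples section below), these S3 methods can be used as in the following sketch:

# tabular view of the internal data, one row per resampling iteration
tab = as.data.table(rr)

# combine two ResampleResults into a single BenchmarkResult
bmr = c(rr1, rr2)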
Active bindings
task_type
(character(1))
Task type of objects in the ResampleResult, e.g. "classif" or "regr". This is NA for empty ResampleResults.

uhash
(character(1))
Unique hash for this object.

iters
(integer(1))
Number of resampling iterations stored in the ResampleResult.

task
(Task)
The task resample() operated on.

learner
(Learner)
Learner prototype resample() operated on. For a list of trained learners, see the field $learners.

resampling
(Resampling)
Instantiated Resampling object which stores the splits into training and test sets.

learners
(list of Learner)
List of trained learners, sorted by resampling iteration.

warnings
(data.table::data.table())
A table with all warning messages. Column names are "iteration" and "msg". Note that there can be multiple rows per resampling iteration if multiple warnings have been recorded.

errors
(data.table::data.table())
A table with all error messages. Column names are "iteration" and "msg". Note that there can be multiple rows per resampling iteration if multiple errors have been recorded.
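As an illustrative sketch (assuming rr is the ResampleResult created in the Examples section below), the active bindings can be read directly:

rr$task_type   # e.g. "classif"
rr$iters       # number of resampling iterations
rr$resampling  # instantiated Resampling with the train/test splits
rr$learners    # list of trained learners, one per iteration
rr$warnings    # data.table with columns "iteration" and "msg"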
Methods
Public methods
Method new()
Creates a new instance of this R6 class.
An alternative construction method is provided by as_resample_result().
Usage
ResampleResult$new(data = ResultData$new(), view = NULL)
Arguments
data
(ResultData | data.table())
An object of type ResultData, either extracted from another ResampleResult, another BenchmarkResult, or manually constructed with as_result_data().

view
(character())
Single uhash of the ResultData to operate on. Used internally for optimizations.
Method format()
Helper for print outputs.
Usage
ResampleResult$format(...)
Arguments
...
(ignored).
Method print()
Printer.
Usage
ResampleResult$print(...)
Arguments
...
(ignored).
Method help()
Opens the corresponding help page referenced by field $man.
Usage
ResampleResult$help()
Method prediction()
Combined Prediction of all individual resampling iterations, and all provided predict sets. Note that, by default, most performance measures do not operate on this object directly, but instead on the prediction objects from the resampling iterations separately, and then combine the performance scores with the aggregate function of the respective Measure (macro averaging).
If you calculate the performance on this prediction object directly, this is called micro averaging.
Usage
ResampleResult$prediction(predict_sets = "test")
Arguments
predict_sets
(character())
Subset of {"train", "test"}.
Returns
Prediction.
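For instance, a hedged sketch building on the Examples section: micro averaging over both predict sets requires that the learner was configured to also predict on the training data before resample() was called.

learner = lrn("classif.rpart")
learner$predict_sets = c("train", "test")
rr_both = resample(tsk("penguins"), learner, rsmp("cv", folds = 3))
# micro averaging: combine all iterations into one Prediction, then score
rr_both$prediction(predict_sets = c("train", "test"))$score(msr("classif.acc"))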
Method predictions()
List of prediction objects, sorted by resampling iteration. If multiple predict sets are given, they are combined into a single Prediction object per iteration.
If you evaluate the performance on each of the returned prediction objects and then average the scores, this is called macro averaging. For micro averaging, operate on the combined prediction object as returned by $prediction().
Usage
ResampleResult$predictions(predict_sets = "test")
Arguments
predict_sets
(character())
Subset of {"train", "test"}.
Returns
List of Prediction objects, one per element in predict_sets.
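As a sketch of manual macro averaging with the objects from the Examples section (for measures that aggregate by averaging, this should match $aggregate()):

# score each per-iteration Prediction, then average the scores
scores = sapply(rr$predictions(predict_sets = "test"),
  function(p) p$score(msr("classif.acc")))
mean(scores)  # macro average; compare with rr$aggregate(msr("classif.acc"))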
Method score()
Returns a table with one row for each resampling iteration, including all involved objects:
Task, Learner, Resampling, iteration number (integer(1)), and Prediction.
Additionally, a column with the individual (per resampling iteration) performance is added for each Measure in measures, named with the id of the respective measure.
If measures is NULL, it defaults to the return value of default_measures().
Usage
ResampleResult$score(measures = NULL, ids = TRUE, conditions = FALSE, predict_sets = "test")
Arguments
measures
(list of Measure)
Measures to compute. If NULL, defaults to the return value of default_measures().

ids
(logical(1))
If ids is TRUE, extra columns with the ids of objects ("task_id", "learner_id", "resampling_id") are added to the returned table. These allow subsetting more conveniently.

conditions
(logical(1))
If conditions is TRUE, condition messages ("warnings", "errors") are added as extra list columns of character vectors to the returned table.

predict_sets
(character())
Vector of predict sets ({"train", "test"}) to construct the Prediction objects from. Default is "test".
Returns
data.table::data.table().
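For example (using the objects from the Examples section), a per-iteration score table with id and condition columns:

tab = rr$score(msr("classif.ce"), ids = TRUE, conditions = TRUE)
head(tab)        # one row per resampling iteration
tab$classif.ce   # per-iteration classification error
tab$warnings     # list column with the warning messages of each iteration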
Method aggregate()
Calculates and aggregates performance values for all provided measures, according to the
respective aggregation function in Measure.
If measures is NULL, it defaults to the return value of default_measures().
Usage
ResampleResult$aggregate(measures = NULL)
Arguments
measures
(list of Measure)
Measures to compute. If NULL, defaults to the return value of default_measures().
Returns
Named numeric().
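For example (continuing the Examples section), several measures can be aggregated at once:

# returns a named numeric vector, one element per measure
rr$aggregate(msrs(c("classif.acc", "classif.ce")))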
Method filter()
Subsets the ResampleResult, reducing it to only keep the iterations specified in iters.
Usage
ResampleResult$filter(iters)
Arguments
iters
(integer())
Resampling iterations to keep.
Returns
Returns the object itself, but modified by reference.
You need to explicitly $clone() the object beforehand if you want to keep the object in its previous state.
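Because $filter() modifies the ResampleResult in place, clone it first if the complete result is still needed. A sketch:

# keep only the first two iterations in a copy, leaving rr untouched
rr_small = rr$clone(deep = TRUE)
rr_small$filter(iters = 1:2)
rr_small$iters  # 2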
Method discard()
Shrinks the ResampleResult by discarding parts of the internally stored data. Note that certain operations might stop working, e.g. extracting importance values from learners or calculating measures that require the task's data.
Usage
ResampleResult$discard(backends = FALSE, models = FALSE)
Arguments
backends
(logical(1))
If TRUE, the DataBackend is removed from all stored Tasks.

models
(logical(1))
If TRUE, the stored model is removed from all Learners.
Returns
Returns the object itself, but modified by reference.
You need to explicitly $clone() the object beforehand if you want to keep the object in its previous state.
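A sketch of reducing the memory footprint after all required scores have been computed:

# drop stored models and task backends; operations that need them
# (e.g. extracting variable importance) will no longer work afterwards
rr$discard(backends = TRUE, models = TRUE)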
Method marshal()
Marshals all stored models.
Usage
ResampleResult$marshal(...)
Arguments
...
(any)
Additional arguments passed to marshal_model().
Method unmarshal()
Unmarshals all stored models.
Usage
ResampleResult$unmarshal(...)
Arguments
...
(any)
Additional arguments passed to unmarshal_model().
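Marshaling mainly matters for learners whose models cannot be serialized directly (e.g. before saving to disk or sending to parallel workers); for other learners these calls are effectively no-ops. A minimal sketch:

rr$marshal()      # put stored models into a serializable form
path = tempfile(fileext = ".rds")
saveRDS(rr, path)
rr2 = readRDS(path)
rr2$unmarshal()   # restore the models to their usable form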
Method clone()
The objects of this class are cloneable with this method.
Usage
ResampleResult$clone(deep = FALSE)
Arguments
deep
Whether to make a deep clone.
See Also
- as_benchmark_result() to convert to a BenchmarkResult.
- Chapter in the mlr3book: https://mlr3book.mlr-org.com/chapters/chapter3/evaluation_and_benchmarking.html#sec-resampling
- Package mlr3viz for some generic visualizations.
Other resample:
resample()
Examples
task = tsk("penguins")
learner = lrn("classif.rpart")
resampling = rsmp("cv", folds = 3)
rr = resample(task, learner, resampling)
print(rr)
# combined predictions and predictions for each fold separately
rr$prediction()
rr$predictions()
# folds scored separately, then aggregated (macro)
rr$aggregate(msr("classif.acc"))
# predictions first combined, then scored (micro)
rr$prediction()$score(msr("classif.acc"))
# check for warnings and errors
rr$warnings
rr$errors