summary_control {mlr3summary}   R Documentation
Control for Learner summaries
Description
Various parameters that control aspects of summary.Learner.
Usage
summary_control(
measures = NULL,
complexity_measures = c("sparsity", "interaction_strength"),
importance_measures = NULL,
n_important = 15L,
effect_measures = c("pdp", "ale"),
fairness_measures = NULL,
protected_attribute = NULL,
hide = NULL,
digits = max(3L, getOption("digits") - 3L)
)
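For illustration, a minimal sketch of how a control object is typically created and passed on, assuming a trained mlr3 learner, a matching resample result, and that summary.Learner accepts them in the order shown (the task, learner, and settings below are arbitrary choices, not defaults):

library(mlr3)
library(mlr3summary)

task    <- tsk("mtcars")                        # built-in example regression task
learner <- lrn("regr.rpart")$train(task)        # any trained Learner works
rr      <- resample(task, lrn("regr.rpart"), rsmp("cv", folds = 3))

# Custom control: report only the 5 most important features, print 4 digits
ctrl <- summary_control(n_important = 5L, digits = 4L)

# Assumed calling convention of summary.Learner: learner, resample result, control
summary(learner, rr, control = ctrl)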
Arguments
measures
(mlr3::Measure | list of mlr3::Measure | NULL) Performance measures to evaluate. If NULL (default), task-dependent measures are selected (see Details).
complexity_measures
(character()) Names of the complexity measures to compute; defaults to "sparsity" and "interaction_strength" (see Details).
importance_measures
(character() | NULL) Names of the feature importance methods ("pdp", "pfi.<loss>", "shap"). If NULL (default), task-dependent methods are selected (see Details).
n_important
(numeric(1)) Number of most important features to display.
effect_measures
(character() | NULL) Names of the feature effect methods; defaults to "pdp" and "ale" (see Details).
fairness_measures
(mlr3fairness::MeasureFairness | list of mlr3fairness::MeasureFairness | NULL) Fairness measures to evaluate. If NULL (default), task-dependent measures are selected (see Details).
protected_attribute
(character(1)) Name of the protected attribute used to form the groups for the fairness measures (see Details).
hide
(character()) Names of parts of the summary output to hide.
digits
(numeric(1)) Number of significant digits used for printing.
Details
The following provides some details on the different choices of measures.
Performance
The default measures depend on the type of task. Therefore, NULL is displayed as the default, and the measures are initialized in summary.Learner with the help of mlr3::msr. The following provides an overview of these defaults (a short usage sketch follows the list):
Regression: regr.rmse, regr.rsq, regr.mae, regr.medae
Binary classification with probabilities: classif.auc, classif.fbeta, classif.bbrier, classif.mcc
Binary classification with hard labels: classif.acc, classif.bacc, classif.fbeta, classif.mcc
Multi-class classification with probabilities: classif.mauc_aunp, classif.mbrier
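As a sketch of how these defaults can be overridden, the measures argument accepts mlr3 measure objects directly (the particular measures below are arbitrary choices):

library(mlr3)
library(mlr3summary)

# Regression: replace the defaults (rmse, rsq, mae, medae) with a custom set
ctrl_regr <- summary_control(measures = msrs(c("regr.rmse", "regr.mae")))

# Binary classification with probabilities: e.g. only AUC and Brier score
ctrl_classif <- summary_control(
  measures = list(msr("classif.auc"), msr("classif.bbrier"))
)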
Complexity
Currently, only two complexity_measures are available, both based on Molnar et al. (2020); a short sketch follows the list:
sparsity: The number of used features that have a non-zero effect on the prediction, evaluated via accumulated local effects (ale; Apley and Zhu (2020)). The measure can take values between 0 and the number of features.
interaction_strength: The scaled approximation error between a main-effect model (based on ale) and the prediction function. It can take values between 0 and 1, where 0 means no interactions and 1 means only interactions and no main effects. Interaction strength can only be measured for binary classification and regression models.
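A minimal sketch of restricting the complexity report to a single measure (useful, e.g., for multi-class models, where interaction strength is not available):

library(mlr3summary)

# Report only sparsity; interaction_strength is skipped
ctrl <- summary_control(complexity_measures = "sparsity")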
Importance
The importance_measures are based on the iml and fastshap packages. Multiple measures are available:
pdp: This corresponds to importances based on the standard deviations in partial dependence plots (Friedman (2001)), as proposed by Greenwell et al. (2018).
pfi.<loss>: This corresponds to the permutation feature importance as implemented in iml::FeatureImp. Different loss functions are possible and depend on the task at hand.
shap: This importance corresponds to the mean absolute Shapley values computed with fastshap::explain. Higher values indicate higher importance.
NULL is the default, corresponding to importance calculations based on pdp and pfi. Because the loss function for pfi depends on the task at hand, the importance measures are initialized in summary.Learner: "pdp" and "pfi.ce" are the defaults for classification, "pdp" and "pfi.mse" for regression.
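As a sketch, the measure names can also be spelled out to fix the pfi loss explicitly or to add Shapley values on top of the defaults (computing them at summary time requires the iml and fastshap packages):

library(mlr3summary)

# Regression setting: PDP-based importance, PFI with mse loss, and SHAP values,
# restricted to the 10 most important features
ctrl <- summary_control(
  importance_measures = c("pdp", "pfi.mse", "shap"),
  n_important         = 10L
)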
Effects
The effect_measures are based on iml::FeatureEffects. Currently, partial dependence plots (pdp) and accumulated local effects (ale) are available. Ale has the advantage over pdp that it takes feature correlations into account, but it has a less natural interpretation than pdp. Therefore, both "pdp" and "ale" are the defaults.
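A minimal sketch of requesting only ALE curves, e.g., when features are known to be correlated:

library(mlr3summary)

# Only accumulated local effects; partial dependence curves are dropped
ctrl <- summary_control(effect_measures = "ale")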
Fairness
The default fairness_measures depend on the type of task. Therefore, NULL is displayed as the default, and the measures are initialized in summary.Learner based on mlr3fairness::mlr_measures_fairness. There is currently a mismatch between the naming convention of the measures in mlr3fairness and the underlying measurements displayed. To avoid confusion, the ids of the fairness measures were adapted. The following provides an overview of these defaults and adapted names (a short sketch follows the list):
Binary classification: "fairness.dp" (demographic parity) based on "fairness.cv", "fairness.cuae" (conditional use accuracy equality) based on "fairness.pp", "fairness.eod" (equalized odds) based on "fairness.eod". Smaller values are better.
Multi-class classification: "fairness.acc", the smallest absolute difference in accuracy between groups of the protected_attribute. Smaller values are better.
Regression: "fairness.rmse" and "fairness.mae", the smallest absolute difference (see mlr3fairness::groupdiff_absdiff) in either the root mean squared error (rmse) or the mean absolute error (mae) between groups of the protected_attribute. Smaller values are better.
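A minimal sketch of enabling the fairness summary; the column name "sex" is an assumed protected attribute of the task at hand, not a package default:

library(mlr3)
library(mlr3summary)

# Task-dependent default fairness measures, compared across groups of "sex"
ctrl <- summary_control(protected_attribute = "sex")

# Alternatively, with an explicit mlr3fairness measure:
# library(mlr3fairness)
# ctrl <- summary_control(
#   fairness_measures   = msr("fairness.eod"),
#   protected_attribute = "sex"
# )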
Value
list of class summary_control
References
Molnar, C., Casalicchio, G., Bischl, B. (2020). “Quantifying Model Complexity via Functional Decomposition for Better Post-hoc Interpretability.” In Communications in Computer and Information Science, 193–204. Springer International Publishing.
Greenwell, B. M., Boehmke, B. C., McCarthy, A. J. (2018). “A Simple and Effective Model-Based Variable Importance Measure.” arXiv preprint arXiv:1805.04755, http://arxiv.org/abs/1805.04755.
Apley, D. W., Zhu, J. (2020). “Visualizing the Effects of Predictor Variables in Black Box Supervised Learning Models.” Journal of the Royal Statistical Society Series B: Statistical Methodology, 82(4), 1059–1086.
Friedman, J. H. (2001). “Greedy Function Approximation: A Gradient Boosting Machine.” The Annals of Statistics, 29(5).