metrics {yardstick}    R Documentation
General Function to Estimate Performance
Description
This function estimates one or more common performance estimates depending
on the class of truth (see Value below) and returns them in a three-column
tibble. If you wish to modify the metrics used or how they are used, see
metric_set().
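The metric collection can be customized with metric_set(). A minimal sketch
(assuming the yardstick package and its bundled solubility_test data set are
available):

```r
library(yardstick)

# Combine a chosen group of regression metrics into a single function
reg_metrics <- metric_set(rmse, rsq, mae)

# The resulting function has the same interface as metrics()
reg_metrics(solubility_test, truth = solubility, estimate = prediction)
```

The returned object is itself a metric function, so it can be used anywhere
metrics() would be, including inside grouped dplyr pipelines.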
Usage
metrics(data, ...)
## S3 method for class 'data.frame'
metrics(data, truth, estimate, ..., na_rm = TRUE, options = list())
Arguments
data
    A data.frame containing the columns specified by truth, estimate, and
    ....

...
    A set of unquoted column names or one or more dplyr selector functions
    to choose which variables contain the class probabilities. If truth is
    binary, only one column should be selected. Otherwise, there should be
    as many columns as factor levels of truth.

truth
    The column identifier for the true results (that is numeric or factor).
    This should be an unquoted column name.

estimate
    The column identifier for the predicted results (that is also numeric
    or factor). As with truth, this can be specified different ways but the
    primary method is to use an unquoted variable name.

na_rm
    A logical value indicating whether NA values should be stripped before
    the computation proceeds.

options
    No longer supported as of yardstick 1.0.0. If you pass something here,
    it will be ignored with a warning. Previously, these were options
    passed on to pROC::roc().
Value

A three-column tibble.

- When truth is a factor, there are rows for accuracy() and the Kappa
  statistic (kap()).

- When truth has two levels and 1 column of class probabilities is passed
  to ..., there are rows for the two-class versions of mn_log_loss() and
  roc_auc().

- When truth has more than two levels and a full set of class probabilities
  are passed to ..., there are rows for the multiclass version of
  mn_log_loss() and the Hand-Till generalization of roc_auc().

- When truth is numeric, there are rows for rmse(), rsq(), and mae().
Examples
# Accuracy and kappa
metrics(two_class_example, truth, predicted)
# Add on multinomial log loss and ROC AUC by specifying class prob columns
metrics(two_class_example, truth, predicted, Class1)
# Regression metrics
metrics(solubility_test, truth = solubility, estimate = prediction)
# Multiclass metrics work, but you cannot specify any averaging
# for roc_auc() besides the default, hand_till. Use the specific function
# if you need more customization
library(dplyr)
hpc_cv %>%
  group_by(Resample) %>%
  metrics(obs, pred, VF:L) %>%
  print(n = 40)