roc_aunp {yardstick}    R Documentation
Area under the ROC curve of each class against the rest, using the a priori class distribution
Description
roc_aunp() is a multiclass metric that computes the area under the ROC curve of each class against the rest, using the a priori class distribution. This is equivalent to roc_auc(estimator = "macro_weighted").
Usage
roc_aunp(data, ...)

## S3 method for class 'data.frame'
roc_aunp(data, truth, ..., na_rm = TRUE, case_weights = NULL, options = list())

roc_aunp_vec(
  truth,
  estimate,
  na_rm = TRUE,
  case_weights = NULL,
  options = list(),
  ...
)
Arguments
data
A data.frame containing the columns specified by truth and ....

...
A set of unquoted column names or one or more dplyr selector functions to choose which variables contain the class probabilities. There should be as many columns as factor levels of truth.

truth
The column identifier for the true class results (that is a factor). This should be an unquoted column name although this argument is passed by expression and supports quasiquotation (you can unquote column names). For _vec() functions, a factor vector.

na_rm
A logical value indicating whether NA values should be stripped before the computation proceeds.

case_weights
The optional column identifier for case weights. This should be an unquoted column name that evaluates to a numeric column in data. For _vec() functions, a numeric vector.

options
No longer supported as of yardstick 1.0.0. If you pass something here it will be ignored with a warning. Previously, these were options passed on to pROC::roc(). If you need support for this, use the pROC package directly.

estimate
A matrix with as many columns as factor levels of truth. It is assumed that these are in the same order as the levels of truth.
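As a minimal sketch of how such an estimate matrix can be assembled for roc_aunp_vec() (assuming the hpc_cv data used in the Examples, whose probability columns are named after the factor levels of obs), the columns can be put in level order like this:

library(yardstick)
library(dplyr)
data(hpc_cv)

fold1 <- hpc_cv %>% filter(Resample == "Fold01")

# Build the probability matrix with columns in the same order as
# levels(fold1$obs), i.e. "VF", "F", "M", "L"
prob_matrix <- as.matrix(fold1[, levels(fold1$obs)])

roc_aunp_vec(truth = fold1$obs, estimate = prob_matrix)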
Value
A tibble with columns .metric, .estimator, and .estimate, and 1 row of values.

For grouped data frames, the number of rows returned will be the same as the number of groups.

For roc_aunp_vec(), a single numeric value (or NA).
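As a quick illustration of this return structure (a sketch using the hpc_cv data from the Examples):

library(yardstick)
library(dplyr)
data(hpc_cv)

res <- hpc_cv %>%
  filter(Resample == "Fold01") %>%
  roc_aunp(obs, VF:L)

# A one-row tibble with the columns described above
names(res)
#> [1] ".metric"    ".estimator" ".estimate"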
Relevant Level
There is no common convention on which factor level should automatically be considered the "event" or "positive" result when computing binary classification metrics. In yardstick, the default is to use the first level. To alter this, change the argument event_level to "second" to consider the last level of the factor the level of interest. For multiclass extensions involving one-vs-all comparisons (such as macro averaging), this option is ignored and the "one" level is always the relevant result.
Multiclass
This multiclass method for computing the area under the ROC curve uses the a priori class distribution and is equivalent to roc_auc(estimator = "macro_weighted").
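A minimal sketch of this equivalence, using the hpc_cv data from the Examples; both calls should return the same .estimate:

library(yardstick)
library(dplyr)
data(hpc_cv)

fold1 <- hpc_cv %>% filter(Resample == "Fold01")

# The following two calls are expected to agree
roc_aunp(fold1, obs, VF:L)
roc_auc(fold1, obs, VF:L, estimator = "macro_weighted")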
Author(s)
Julia Silge
References
Ferri, C., Hernández-Orallo, J., & Modroiu, R. (2009). "An experimental comparison of performance measures for classification". Pattern Recognition Letters. 30 (1), pp 27-38.
See Also
roc_aunu() for computing the area under the ROC curve of each class against the rest, using the uniform class distribution.

Other class probability metrics: average_precision(), brier_class(), classification_cost(), gain_capture(), mn_log_loss(), pr_auc(), roc_auc(), roc_aunu()
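As a small comparison sketch (again using hpc_cv from the Examples): roc_aunp() weights each class's one-vs-rest AUC by that class's prevalence, while roc_aunu() weights all classes equally:

library(yardstick)
library(dplyr)
data(hpc_cv)

fold1 <- hpc_cv %>% filter(Resample == "Fold01")

roc_aunp(fold1, obs, VF:L)  # a priori (prevalence-weighted) averaging
roc_aunu(fold1, obs, VF:L)  # uniform (equal-weight) averaging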
Examples
# Multiclass example

# `obs` is a 4 level factor. The first level is `"VF"`, which is the
# "event of interest" by default in yardstick. See the Relevant Level
# section above.
data(hpc_cv)

# You can use the col1:colN tidyselect syntax
library(dplyr)
hpc_cv %>%
  filter(Resample == "Fold01") %>%
  roc_aunp(obs, VF:L)

# Change the first level of `obs` from `"VF"` to `"M"` to alter the
# event of interest. The class probability columns should be supplied
# in the same order as the levels.
hpc_cv %>%
  filter(Resample == "Fold01") %>%
  mutate(obs = relevel(obs, "M")) %>%
  roc_aunp(obs, M, VF:L)

# Groups are respected
hpc_cv %>%
  group_by(Resample) %>%
  roc_aunp(obs, VF:L)

# Vector version
# Supply a matrix of class probabilities
fold1 <- hpc_cv %>%
  filter(Resample == "Fold01")

roc_aunp_vec(
  truth = fold1$obs,
  matrix(
    c(fold1$VF, fold1$F, fold1$M, fold1$L),
    ncol = 4
  )
)