tpr {mlr3measures} | R Documentation
True Positive Rate
Description
Measure to compare true observed labels with predicted labels in binary classification tasks.
Usage
tpr(truth, response, positive, na_value = NaN, ...)
recall(truth, response, positive, na_value = NaN, ...)
sensitivity(truth, response, positive, na_value = NaN, ...)
Arguments
truth: (factor()) True (observed) labels. Must have the same two levels and the same length as response.
response: (factor()) Predicted response labels. Must have the same two levels and the same length as truth.
positive: (character(1)) Name of the positive class.
na_value: (numeric(1)) Value that should be returned if the measure is not defined for the input. Default is NaN.
...: (any) Additional arguments. Currently ignored.
Details
The True Positive Rate is defined as
\frac{\mathrm{TP}}{\mathrm{TP} + \mathrm{FN}}.
Also known as "recall" or "sensitivity".
This measure is undefined if TP + FN = 0.
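The definition can be checked by hand on a small example. The following is a minimal sketch (the data below is illustrative, not from the package): count true positives and false negatives directly and compare with tpr().

library(mlr3measures)

truth    = factor(c("a", "a", "b", "a", "b"), levels = c("a", "b"))
response = factor(c("a", "b", "b", "a", "a"), levels = c("a", "b"))

tp = sum(truth == "a" & response == "a")  # true positives
fn = sum(truth == "a" & response != "a")  # false negatives

tp / (tp + fn)                        # manual computation: 2 / 3
tpr(truth, response, positive = "a")  # same value via the measure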
Value
Performance value as numeric(1).
Meta Information
Type: "binary"
Range: [0, 1]
Minimize: FALSE
Required prediction: response
References
https://en.wikipedia.org/wiki/Template:DiagnosticTesting_Diagram
Goutte C, Gaussier E (2005). “A Probabilistic Interpretation of Precision, Recall and F-Score, with Implication for Evaluation.” In Lecture Notes in Computer Science, 345–359. doi:10.1007/978-3-540-31865-1_25.
See Also
Other Binary Classification Measures: auc(), bbrier(), dor(), fbeta(), fdr(), fn(), fnr(), fomr(), fp(), fpr(), gmean(), gpr(), npv(), ppv(), prauc(), tn(), tnr(), tp()
Examples
set.seed(1)
lvls = c("a", "b")
truth = factor(sample(lvls, 10, replace = TRUE), levels = lvls)
response = factor(sample(lvls, 10, replace = TRUE), levels = lvls)
tpr(truth, response, positive = "a")
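# recall() and sensitivity() are aliases for tpr() and return the same value
recall(truth, response, positive = "a")
sensitivity(truth, response, positive = "a")

# Sketch of the undefined case: if truth contains no positive observations,
# TP + FN = 0 and na_value is returned (NaN by default).
# truth_neg is an illustrative name introduced here.
truth_neg = factor(rep("b", 10), levels = lvls)
tpr(truth_neg, response, positive = "a")
tpr(truth_neg, response, positive = "a", na_value = NA_real_)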