fbeta {mlr3measures}    R Documentation
F-beta Score
Description
Measure to compare true observed labels with predicted labels in binary classification tasks.
Usage
fbeta(truth, response, positive, beta = 1, na_value = NaN, ...)
Arguments
truth
(factor())
True (observed) labels. Must have exactly the same two levels and the same length as response.

response
(factor())
Predicted response labels. Must have exactly the same two levels and the same length as truth.

positive
(character(1))
Name of the positive class.

beta
(numeric(1))
Parameter to give either precision or recall more weight. Default is 1, resulting in balanced weights.

na_value
(numeric(1))
Value that should be returned if the measure is not defined for the input. Default is NaN.

...
(any)
Additional arguments. Currently ignored.
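As a small illustration (added here, not part of the original page) of na_value, consider the undefined case described under Details below, where the positive class is never predicted:

lvls = c("a", "b")
truth = factor(c("a", "b", "b"), levels = lvls)
response = factor(c("b", "b", "b"), levels = lvls)  # "a" is never predicted
fbeta(truth, response, positive = "a", na_value = NA_real_)  # TP + FP = 0, returns NA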
Details
With P as precision() and R as recall(), the F-beta Score is defined as

(1 + \beta^2) \frac{P \cdot R}{(\beta^2 P) + R}.

It measures the effectiveness of retrieval with respect to a user who attaches \beta times as much importance to recall as to precision. For \beta = 1, this measure is called the "F1" score.

This measure is undefined if precision or recall is undefined, i.e. if TP + FP = 0 or TP + FN = 0.
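For exposition, the formula can be traced by hand from the confusion counts. The helper below, fbeta_manual(), is a hypothetical re-implementation sketched from the definition above; it is not part of mlr3measures:

# Hypothetical re-implementation of the F-beta formula, for illustration only
fbeta_manual = function(truth, response, positive, beta = 1, na_value = NaN) {
  tp = sum(truth == positive & response == positive)  # true positives
  fp = sum(truth != positive & response == positive)  # false positives
  fn = sum(truth == positive & response != positive)  # false negatives
  if (tp + fp == 0 || tp + fn == 0) return(na_value)  # undefined case
  P = tp / (tp + fp)  # precision
  R = tp / (tp + fn)  # recall
  (1 + beta^2) * (P * R) / (beta^2 * P + R)
}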
Value
Performance value as numeric(1).
Meta Information
Type: "binary"
Range: [0, 1]
Minimize: FALSE
Required prediction: response
References
van Rijsbergen CJ (1979). Information Retrieval, 2nd edition. Butterworth-Heinemann, Newton, MA, USA. ISBN 0408709294.
Goutte C, Gaussier E (2005). “A Probabilistic Interpretation of Precision, Recall and F-Score, with Implication for Evaluation.” In Lecture Notes in Computer Science, 345–359. doi:10.1007/978-3-540-31865-1_25.
See Also
Other Binary Classification Measures: auc(), bbrier(), dor(), fdr(), fn(), fnr(), fomr(), fp(), fpr(), gmean(), gpr(), npv(), ppv(), prauc(), tn(), tnr(), tp(), tpr()
Examples
set.seed(1)
lvls = c("a", "b")
# Simulate true and predicted labels over the same two levels
truth = factor(sample(lvls, 10, replace = TRUE), levels = lvls)
response = factor(sample(lvls, 10, replace = TRUE), levels = lvls)
# F1 score (beta = 1) with "a" as the positive class
fbeta(truth, response, positive = "a")
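Setting beta to values other than 1 shifts the weighting; for instance, beta = 2 attaches twice as much importance to recall as to precision. The following line continues the example above and is an added illustration, not part of the original page:

# Weight recall twice as heavily as precision
fbeta(truth, response, positive = "a", beta = 2)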