metric_fbetascore {tfaddons}    R Documentation
FBetaScore
Description
Computes F-Beta score.
Usage
metric_fbetascore(
num_classes,
average = NULL,
beta = 1,
threshold = NULL,
name = "fbeta_score",
dtype = tf$float32,
...
)
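A minimal usage sketch, assuming the tfaddons, keras and tensorflow packages are installed; the model architecture below is purely illustrative:

library(keras)
library(tfaddons)

# Illustrative 4-class classifier; the layer sizes are arbitrary.
model <- keras_model_sequential() %>%
  layer_dense(units = 16, activation = "relu", input_shape = 8) %>%
  layer_dense(units = 4, activation = "softmax")

model %>% compile(
  optimizer = "adam",
  loss = "categorical_crossentropy",
  # Macro-averaged F-Beta over the 4 classes, weighting recall more (beta = 2)
  metrics = list(metric_fbetascore(num_classes = 4, average = "macro", beta = 2))
)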
Arguments
num_classes
Number of unique classes in the dataset.
average
Type of averaging to be performed on the data. Acceptable values are NULL, micro, macro and weighted. Default value is NULL.
- NULL: scores for each class are returned.
- micro: true positives, false positives and false negatives are computed globally.
- macro: true positives, false positives and false negatives are computed for each class and their unweighted mean is returned.
- weighted: metrics are computed for each class and their mean, weighted by the number of true instances in each class, is returned.
beta
Determines the weight given to precision and recall in the harmonic mean. Default value is 1.
threshold
Elements of y_pred greater than threshold are converted to 1, and the rest to 0. If threshold is NULL, only the argmax is converted to 1, and the rest to 0 (see the sketch after this argument list).
name
(optional) String name of the metric instance.
dtype
(optional) Data type of the metric result. Defaults to tf$float32.
...
Additional parameters to pass.
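As a plain-R illustration of the threshold behaviour described above (not the package's internal code):

y_pred <- c(0.2, 0.7, 0.1)

# threshold = 0.5: elements greater than the threshold become 1, the rest 0
as.integer(y_pred > 0.5)                              # 0 1 0

# threshold = NULL: only the argmax becomes 1, the rest 0
as.integer(seq_along(y_pred) == which.max(y_pred))    # 0 1 0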
Details
It is the weighted harmonic mean of precision and recall. The output range is [0, 1]. Works for both multi-class and multi-label classification.
F-Beta = (1 + beta^2) * (precision * recall) / ((beta^2 * precision) + recall)
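For example, with precision 0.5, recall 0.8 and beta = 2 (so recall is weighted more heavily), the formula gives:

prec <- 0.5
recall <- 0.8
beta <- 2
(1 + beta^2) * (prec * recall) / ((beta^2 * prec) + recall)
# 0.7142857 -- closer to recall than to precision because beta > 1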
Value
F-Beta Score: float
Raises
ValueError: if 'average' has a value other than NULL, micro, macro or weighted.