metric_recall_at_precision {keras3}    R Documentation
Computes best recall where precision is >= specified value.
Description
For a given score-label distribution, the required precision might not be achievable; in that case, 0.0 is returned as the recall.
This metric creates four local variables, true_positives, true_negatives, false_positives and false_negatives, that are used to compute the recall at the given precision. The threshold for the given precision value is computed and used to evaluate the corresponding recall.
If sample_weight is NULL, weights default to 1. Use sample_weight of 0 to mask values (demonstrated in the standalone usage below).
If class_id is specified, we calculate precision by considering only the entries in the batch for which the prediction for class_id is above the threshold, and computing the fraction of them for which class_id is indeed a correct label (see the sketch after the Arguments list).
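To make the threshold-scanning behavior concrete, here is a minimal sketch of the semantics in plain R. This is an illustration only, not the library implementation (keras3 computes this internally with num_thresholds evenly spaced thresholds and tensor ops); the helper name recall_at_precision_sketch and the threshold grid are assumptions for illustration.

# Sketch: among thresholds where precision meets the target,
# keep the best recall; return 0 if the target is never reached.
recall_at_precision_sketch <- function(y_true, y_pred, target,
                                       thresholds = seq(0, 1, length.out = 200)) {
  best <- 0
  for (t in thresholds) {
    pred_pos <- y_pred > t
    tp <- sum(y_true == 1 & pred_pos)
    fp <- sum(y_true == 0 & pred_pos)
    fn <- sum(y_true == 1 & !pred_pos)
    precision <- if (tp + fp > 0) tp / (tp + fp) else 0
    recall    <- if (tp + fn > 0) tp / (tp + fn) else 0
    if (precision >= target) best <- max(best, recall)
  }
  best
}

recall_at_precision_sketch(c(0, 0, 1, 1), c(0, 0.5, 0.3, 0.9), target = 0.8)
## [1] 0.5

This agrees with the standalone usage further down: the only threshold range achieving precision >= 0.8 predicts a single true positive out of two positives, giving recall 0.5.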
Usage
metric_recall_at_precision(
...,
precision,
num_thresholds = 200L,
class_id = NULL,
name = NULL,
dtype = NULL
)
Arguments
...: For forward/backward compatibility.

precision: A scalar value in range [0, 1].

num_thresholds: (Optional) Defaults to 200. The number of thresholds to use for matching the given precision.

class_id: (Optional) Integer class ID for which we want binary metrics. This must be in the half-open interval [0, num_labels).

name: (Optional) String name of the metric instance.

dtype: (Optional) Data type of the metric result.
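As a hedged illustration of class_id, the metric can be restricted to a single column of the predictions. The two-class one-hot shapes and the class index below are assumptions for illustration, not taken from this page:

# Sketch: track recall-at-precision for class index 1 only
# (0-based, per the half-open interval [0, num_labels) above).
m <- metric_recall_at_precision(precision = 0.8, class_id = 1L)
m$update_state(
  rbind(c(0, 1), c(1, 0)),          # one-hot true labels
  rbind(c(0.2, 0.8), c(0.7, 0.3))   # predicted scores per class
)
m$result()                           # recall for class 1 only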
Value
A Metric instance is returned. The Metric instance can be passed directly to compile(metrics = ), or used as a standalone object. See ?Metric for example usage.
Usage
Standalone usage:
m <- metric_recall_at_precision(precision = 0.8)
m$update_state(c(0, 0, 1, 1), c(0, 0.5, 0.3, 0.9))
m$result()
## tf.Tensor(0.5, shape=(), dtype=float32)
m$reset_state()
m$update_state(c(0, 0, 1, 1), c(0, 0.5, 0.3, 0.9),
               sample_weight = c(1, 0, 0, 1))
m$result()
## tf.Tensor(1.0, shape=(), dtype=float32)
Usage with compile() API:
model |> compile(
  optimizer = 'sgd',
  loss = 'binary_crossentropy',
  metrics = list(metric_recall_at_precision(precision = 0.8))
)
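For context, a minimal self-contained sketch of a model this could attach to; the architecture, layer sizes, and input shape are illustrative assumptions, not part of this page:

library(keras3)

# Hypothetical binary classifier; sizes are illustrative only.
model <- keras_model_sequential(input_shape = 8) |>
  layer_dense(16, activation = "relu") |>
  layer_dense(1, activation = "sigmoid")

model |> compile(
  optimizer = "sgd",
  loss = "binary_crossentropy",
  metrics = list(metric_recall_at_precision(precision = 0.8))
)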
See Also
Other confusion metrics:
metric_auc()
metric_false_negatives()
metric_false_positives()
metric_precision()
metric_precision_at_recall()
metric_recall()
metric_sensitivity_at_specificity()
metric_specificity_at_sensitivity()
metric_true_negatives()
metric_true_positives()
Other metrics:
Metric()
custom_metric()
metric_auc()
metric_binary_accuracy()
metric_binary_crossentropy()
metric_binary_focal_crossentropy()
metric_binary_iou()
metric_categorical_accuracy()
metric_categorical_crossentropy()
metric_categorical_focal_crossentropy()
metric_categorical_hinge()
metric_cosine_similarity()
metric_f1_score()
metric_false_negatives()
metric_false_positives()
metric_fbeta_score()
metric_hinge()
metric_huber()
metric_iou()
metric_kl_divergence()
metric_log_cosh()
metric_log_cosh_error()
metric_mean()
metric_mean_absolute_error()
metric_mean_absolute_percentage_error()
metric_mean_iou()
metric_mean_squared_error()
metric_mean_squared_logarithmic_error()
metric_mean_wrapper()
metric_one_hot_iou()
metric_one_hot_mean_iou()
metric_poisson()
metric_precision()
metric_precision_at_recall()
metric_r2_score()
metric_recall()
metric_root_mean_squared_error()
metric_sensitivity_at_specificity()
metric_sparse_categorical_accuracy()
metric_sparse_categorical_crossentropy()
metric_sparse_top_k_categorical_accuracy()
metric_specificity_at_sensitivity()
metric_squared_hinge()
metric_sum()
metric_top_k_categorical_accuracy()
metric_true_negatives()
metric_true_positives()