metric_mean_iou {keras3}    R Documentation

Computes the mean Intersection-Over-Union metric.

Description

Formula:

iou <- true_positives / (true_positives + false_positives + false_negatives)

Intersection-Over-Union is a common evaluation metric for semantic image segmentation.

To compute IoUs, the predictions are accumulated in a confusion matrix, weighted by sample_weight, and the metric is then calculated from it.

If sample_weight is NULL, weights default to 1. Use sample_weight of 0 to mask values.

Note that this class first computes IoUs for all individual classes, then returns the mean of these values.
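To make this concrete, the mean IoU from the standalone example under Examples can be reproduced by hand with a few lines of base R (a minimal sketch; the variable names are illustrative and not part of the keras3 API):

y_true <- c(0, 0, 1, 1)
y_pred <- c(0, 1, 0, 1)

# Confusion matrix: rows are true classes, columns are predicted classes.
cm <- table(factor(y_true, levels = 0:1),
            factor(y_pred, levels = 0:1))

true_positives <- diag(cm)
# For each class, TP + FP is the column sum and TP + FN is the row sum,
# so the union is sum_row + sum_col - TP.
iou <- true_positives / (rowSums(cm) + colSums(cm) - true_positives)
mean(iou)  # 0.3333333, matching m$result() in the Examples below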

Usage

metric_mean_iou(
  ...,
  num_classes,
  name = NULL,
  dtype = NULL,
  ignore_class = NULL,
  sparse_y_true = TRUE,
  sparse_y_pred = TRUE,
  axis = -1L
)

Arguments

...

For forward/backward compatibility.

num_classes

The number of possible labels the prediction task can have. This value must be provided, since a confusion matrix of dimension [num_classes, num_classes] will be allocated.

name

(Optional) string name of the metric instance.

dtype

(Optional) data type of the metric result.

ignore_class

Optional integer. The ID of a class to be ignored during metric computation. This is useful, for example, in segmentation problems featuring a "void" class (commonly -1 or 255) in segmentation maps. By default (ignore_class = NULL), all classes are considered (see the sketch after this argument list).

sparse_y_true

Whether labels are encoded using integers or dense floating point vectors. If FALSE, the argmax function is used to determine each sample's most likely associated label.

sparse_y_pred

Whether predictions are encoded using integers or dense floating point vectors. If FALSE, the argmax function is used to determine each sample's most likely associated label (see the sketch after this argument list).

axis

(Optional) The dimension containing the logits. Defaults to -1.
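The ignore_class and sparse_y_pred arguments can be combined; the following is a hedged sketch (the label values and probabilities are illustrative only, and -1 is used here as the "void" label):

m <- metric_mean_iou(num_classes = 2, ignore_class = -1L,
                     sparse_y_pred = FALSE)
y_true <- c(0, 0, 1, -1)        # -1 marks a "void" pixel to be ignored
y_pred <- rbind(c(0.9, 0.1),    # probabilities; argmax is taken along `axis`
                c(0.4, 0.6),
                c(0.7, 0.3),
                c(0.1, 0.9))
m$update_state(y_true, y_pred)
m$result()                      # the void entry is excluded from the mean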

Value

A Metric instance is returned. The Metric instance can be passed directly to compile(metrics = ), or used as a standalone object. See ?Metric for example usage.

Examples

Standalone usage:

# cm = [[1, 1],
#        [1, 1]]
# sum_row = [2, 2], sum_col = [2, 2], true_positives = [1, 1]
# iou = true_positives / (sum_row + sum_col - true_positives)
# result = (1 / (2 + 2 - 1) + 1 / (2 + 2 - 1)) / 2 = 0.33
m <- metric_mean_iou(num_classes = 2)
m$update_state(c(0, 0, 1, 1), c(0, 1, 0, 1))
m$result()
## tf.Tensor(0.33333334, shape=(), dtype=float32)

m$reset_state()
m$update_state(c(0, 0, 1, 1), c(0, 1, 0, 1),
               sample_weight = c(0.3, 0.3, 0.3, 0.1))
m$result()
## tf.Tensor(0.2380952, shape=(), dtype=float32)

Usage with compile() API:

model %>% compile(
  optimizer = 'sgd',
  loss = 'mse',
  metrics = list(metric_mean_iou(num_classes=2)))
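For a self-contained variant of the compile() usage, a toy per-pixel classifier might look like the sketch below (the model architecture and shapes are purely illustrative):

library(keras3)

model <- keras_model_sequential(input_shape = c(16, 16, 3)) %>%
  layer_conv_2d(filters = 8, kernel_size = 3, padding = "same",
                activation = "relu") %>%
  layer_conv_2d(filters = 4, kernel_size = 1, activation = "softmax")

model %>% compile(
  optimizer = "sgd",
  loss = "sparse_categorical_crossentropy",
  # model outputs probabilities, so sparse_y_pred = FALSE
  metrics = list(metric_mean_iou(num_classes = 4, sparse_y_pred = FALSE))
)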

See Also

Other iou metrics:
metric_binary_iou()
metric_iou()
metric_one_hot_iou()
metric_one_hot_mean_iou()

Other metrics:
Metric()
custom_metric()
metric_auc()
metric_binary_accuracy()
metric_binary_crossentropy()
metric_binary_focal_crossentropy()
metric_binary_iou()
metric_categorical_accuracy()
metric_categorical_crossentropy()
metric_categorical_focal_crossentropy()
metric_categorical_hinge()
metric_cosine_similarity()
metric_f1_score()
metric_false_negatives()
metric_false_positives()
metric_fbeta_score()
metric_hinge()
metric_huber()
metric_iou()
metric_kl_divergence()
metric_log_cosh()
metric_log_cosh_error()
metric_mean()
metric_mean_absolute_error()
metric_mean_absolute_percentage_error()
metric_mean_squared_error()
metric_mean_squared_logarithmic_error()
metric_mean_wrapper()
metric_one_hot_iou()
metric_one_hot_mean_iou()
metric_poisson()
metric_precision()
metric_precision_at_recall()
metric_r2_score()
metric_recall()
metric_recall_at_precision()
metric_root_mean_squared_error()
metric_sensitivity_at_specificity()
metric_sparse_categorical_accuracy()
metric_sparse_categorical_crossentropy()
metric_sparse_top_k_categorical_accuracy()
metric_specificity_at_sensitivity()
metric_squared_hinge()
metric_sum()
metric_top_k_categorical_accuracy()
metric_true_negatives()
metric_true_positives()


[Package keras3 version 1.1.0 Index]