Ranking-based metrics {mldr}  R Documentation

Multi-label ranking-based evaluation metrics

Description

Functions that compute ranking-based metrics, given a matrix of true labels and a matrix of predicted probabilities.

Usage

average_precision(true_labels, predictions, ...)

one_error(true_labels, predictions)

coverage(true_labels, predictions, ...)

ranking_loss(true_labels, predictions)

macro_auc(true_labels, predictions, undefined_value = 0.5,
  na.rm = FALSE)

micro_auc(true_labels, predictions)

example_auc(true_labels, predictions, undefined_value = 0.5,
  na.rm = FALSE)

Arguments

true_labels

Matrix of true labels, columns corresponding to labels and rows to instances.

predictions

Matrix of probabilities predicted by a classifier.

...

Additional parameters to be passed to the ranking function.

undefined_value

Value to be returned when macro-averaged or example-averaged AUC encounters an undefined (not computable) value for some label or instance, e.g. 0, 0.5 or NA.

na.rm

Logical specifying whether to ignore undefined values when undefined_value is set to NA.

Details

Available metrics in this category

average_precision: Example-based average precision, the mean fraction of relevant labels ranked at or above each relevant label.
coverage: Example-based coverage, the mean number of positions one must go down the label ranking to cover all relevant labels of an instance.
example_auc: Example-averaged AUC (area under the ROC curve), computed per instance and then averaged.
macro_auc: Macro-averaged AUC, computed per label and then averaged.
micro_auc: Micro-averaged AUC, computed over all instance-label pairs at once.
one_error: Example-based one-error, the proportion of instances whose top-ranked label is not relevant.
ranking_loss: Example-based ranking loss, the mean fraction of relevant/irrelevant label pairs that are reversely ordered.
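
For reference, one_error has a particularly simple definition. The following sketch (not the package's internal code; the helper name one_error_by_hand is made up for illustration) computes it directly from the two matrices used in the Examples below:

one_error_by_hand <- function(true_labels, predictions) {
  # index of the top-ranked (highest-probability) label for each instance
  top <- max.col(predictions, ties.method = "first")
  # fraction of instances whose top-ranked label is not actually relevant
  mean(true_labels[cbind(seq_len(nrow(true_labels)), top)] == 0)
}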

Breaking ties in rankings

The additional ties_method parameter for the ranking function is passed to R's own rank. It accepts the following values:

"average"
"first"
"last"
"random"
"max"
"min"

See rank for information on the effect of each value. The default behavior in mldr corresponds to the value "last", since this is how the ranking method in MULAN behaves, in order to facilitate fair comparisons among classifiers across both platforms.
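
As a quick illustration of these policies (plain base R, independent of mldr), consider a vector with two tied probabilities:

p <- c(.5, .5, .6)
rank(p, ties.method = "average")  # 1.5 1.5 3.0 -- tied values share their average rank
rank(p, ties.method = "first")    # 1 2 3 -- the earlier occurrence gets the lower rank
rank(p, ties.method = "last")     # 2 1 3 -- the later occurrence gets the lower rank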

Value

Atomic numeric vector containing the resulting value of the performance metric.

See Also

mldr_evaluate, mldr_to_labels

Other evaluation metrics: Averaged metrics, Basic metrics

Examples

true_labels <- matrix(c(
  1, 1, 1,
  0, 0, 0,
  1, 0, 0,
  1, 1, 1,
  0, 0, 0,
  1, 0, 0
), ncol = 3, byrow = TRUE)
predicted_probs <- matrix(c(
  .6, .5, .9,
  .0, .1, .2,
  .8, .3, .2,
  .7, .9, .1,
  .7, .3, .2,
  .1, .8, .3
), ncol = 3, byrow = TRUE)

# by default, labels with the same probability are assigned ascending rankings
# in the order they are encountered
coverage(true_labels, predicted_probs)
# in the following, labels with the same probability will receive the same,
# averaged ranking
average_precision(true_labels, predicted_probs, ties_method = "average")

# the following will treat all undefined values as 0 (counting them
# for the average)
example_auc(true_labels, predicted_probs, undefined_value = 0)
# the following will ignore undefined values (not counting them for
# the average)
example_auc(true_labels, predicted_probs, undefined_value = NA, na.rm = TRUE)
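
# note that in this sample data rows 1 and 4 have every label relevant and
# rows 2 and 5 have none, so those instances contain no relevant/irrelevant
# label pair and their per-instance AUC cannot be computed; the two calls
# above differ only in how these instances enter the average
rowSums(true_labels)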
