multilabel_evaluate {utiml}    R Documentation

Evaluate multi-label predictions

Description

This method is used to evaluate multi-label predictions. You can either build a confusion matrix object first or use the test dataset and the predictions directly. You can also specify which measures you wish to compute.

Usage

multilabel_evaluate(object, ...)

## S3 method for class 'mldr'
multilabel_evaluate(object, mlresult, measures = c("all"), labels = FALSE, ...)

## S3 method for class 'mlconfmat'
multilabel_evaluate(object, measures = c("all"), labels = FALSE, ...)

Arguments

object

An mldr dataset or an mlconfmat confusion matrix

...

Extra parameters passed to specific measures.

mlresult

The prediction result (optional; required only when an mldr dataset is used).

measures

The names of the measures to be computed. Call multilabel_measures() to see the available measures. You can also use "bipartition", "ranking", "label-based", "example-based", "macro-based", "micro-based" and "label-problem" to include a predefined set of measures. (Default: "all")

labels

Logical value indicating whether the per-label results should also be returned. (Default: FALSE)

Value

If labels is FALSE, returns a vector with the requested multi-label measures; otherwise, returns a list containing both the multi-label measures and the per-label measures.
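
When labels = TRUE the result is a list rather than a single named vector. The minimal sketch below assumes the list components are named multilabel and labels; verify this with names() on the actual result:

library(utiml)
prediction <- predict(br(toyml), toyml)
res <- multilabel_evaluate(toyml, prediction, labels = TRUE)
names(res)        # assumed components: "multilabel" and "labels"
res$multilabel    # overall multi-label measures (named numeric vector)
res$labels        # per-label measures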

Methods (by class)

mldr: evaluates the predictions in mlresult against the mldr test dataset.

mlconfmat: evaluates a previously computed mlconfmat confusion matrix.

References

Madjarov, G., Kocev, D., Gjorgjevikj, D., & Dzeroski, S. (2012). An extensive experimental comparison of methods for multi-label learning. Pattern Recognition, 45(9), 3084-3104.

Zhang, M.-L., & Zhou, Z.-H. (2014). A Review on Multi-Label Learning Algorithms. IEEE Transactions on Knowledge and Data Engineering, 26(8), 1819-1837.

Gibaja, E., & Ventura, S. (2015). A Tutorial on Multilabel Learning. ACM Comput. Surv., 47(3), 52:1-52:38.

See Also

Other evaluation: cv(), multilabel_confusion_matrix(), multilabel_measures()

Examples


prediction <- predict(br(toyml), toyml)

# Compute all measures
multilabel_evaluate(toyml, prediction)
multilabel_evaluate(toyml, prediction, labels=TRUE) # Returns a list with per-label results

# Compute bipartition measures
multilabel_evaluate(toyml, prediction, "bipartition")

# Compute multiple measures
multilabel_evaluate(toyml, prediction, c("accuracy", "F1", "macro-based"))

# Compute the confusion matrix before the measures
cm <- multilabel_confusion_matrix(toyml, prediction)
multilabel_evaluate(cm)
multilabel_evaluate(cm, "example-based")
multilabel_evaluate(cm, c("hamming-loss", "subset-accuracy", "F1"))
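
# Illustrative sketch (not from the original examples): list the supported
# measure names and mix a group name with an individual measure. The specific
# names used below are assumptions; check them against multilabel_measures().
multilabel_measures()
multilabel_evaluate(cm, c("ranking", "micro-F1"))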

