AUC-package {AUC}          R Documentation

Threshold-independent performance measures for probabilistic classifiers.

Description

Summary and plotting functions for threshold-independent performance measures for probabilistic classifiers.

Details

This package includes functions to compute the area under the curve (function auc) of selected measures: the area under the sensitivity curve (AUSEC; function sensitivity), the area under the specificity curve (AUSPC; function specificity), the area under the accuracy curve (AUACC; function accuracy), and the area under the receiver operating characteristic curve (AUROC; function roc). The curves can also be visualized using the function plot. Support for partial areas is provided (see the Examples section below).

Auxiliary code in this package is adapted from the ROCR package. Except for the AUROC, the measures available in this package are not available in the ROCR package, and vice versa. As for the AUROC, we adapted the ROCR code to increase computational speed (so that it can be used more effectively in objective functions). As a result, less functionality is offered (e.g., averaging over cross-validation runs); please use the ROCR package for those purposes.

Author(s)

Michel Ballings and Dirk Van den Poel. Maintainer: Michel Ballings <Michel.Ballings@UGent.be>

References

Ballings, M., Van den Poel, D., Threshold Independent Performance Measures for Probabilistic Classification Algorithms, Forthcoming.

See Also

sensitivity, specificity, accuracy, roc, auc, plot

Examples

data(churn)

# Area under the sensitivity curve (AUSEC)
auc(sensitivity(churn$predictions, churn$labels))
# Area under the specificity curve (AUSPC)
auc(specificity(churn$predictions, churn$labels))
# Area under the accuracy curve (AUACC)
auc(accuracy(churn$predictions, churn$labels))
# Area under the ROC curve (AUROC)
auc(roc(churn$predictions, churn$labels))

# Plot the corresponding curves
plot(sensitivity(churn$predictions, churn$labels))
plot(specificity(churn$predictions, churn$labels))
plot(accuracy(churn$predictions, churn$labels))
plot(roc(churn$predictions, churn$labels))
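
# Partial areas: a minimal sketch, assuming auc() accepts 'min' and 'max'
# arguments (values between 0 and 1) that bound the range over which the
# partial area is computed.
auc(sensitivity(churn$predictions, churn$labels), min = 0.2, max = 0.8)
auc(roc(churn$predictions, churn$labels), min = 0.2, max = 0.8)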

