calculateROCMeasures {mlr}		R Documentation
Calculate receiver operator measures.
Description
Calculate the absolute number of correct/incorrect classifications and the following evaluation measures:
- tpr: True positive rate (Sensitivity, Recall)
- fpr: False positive rate (Fall-out)
- fnr: False negative rate (Miss rate)
- tnr: True negative rate (Specificity)
- ppv: Positive predictive value (Precision)
- for: False omission rate
- lrp: Positive likelihood ratio (LR+)
- fdr: False discovery rate
- npv: Negative predictive value
- acc: Accuracy
- lrm: Negative likelihood ratio (LR-)
- dor: Diagnostic odds ratio
For details on the used measures see measures and also https://en.wikipedia.org/wiki/Receiver_operating_characteristic.
The element for the false omission rate in the resulting object is not called "for" but
"fomr", since "for" should never be used as a variable name in an object.
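For instance, the false omission rate is then accessed via the fomr element of the measures list. A minimal sketch, assuming a Prediction object pred as constructed in the Examples section below:

roc = calculateROCMeasures(pred)
roc$measures$fomr  # false omission rate (listed as "for" above)
roc$measures$tpr   # true positive rate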
Usage
calculateROCMeasures(pred)
## S3 method for class 'ROCMeasures'
print(x, abbreviations = TRUE, digits = 2, ...)
Arguments
pred | (Prediction) Prediction object.
x | (ROCMeasures) Created by calculateROCMeasures.
abbreviations | (logical(1)) If TRUE, a short explanation of each measure abbreviation is printed additionally. Default is TRUE.
digits | (integer(1)) Number of digits the measures are rounded to. Default is 2.
... | (any) Currently not used.
Value
(ROCMeasures).
A list containing two elements: confusion.matrix, the 2 x 2 confusion matrix of absolute frequencies, and measures, a list of the above mentioned measures.
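To illustrate the structure of the returned object, a short sketch continuing the Examples below (pred is assumed to be a Prediction object):

r = calculateROCMeasures(pred)
r$confusion.matrix  # 2 x 2 table of absolute frequencies
r$measures$acc      # accuracy, one of the measures listed above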
Functions
- print(ROCMeasures): Prints the confusion matrix and the calculated measures.
See Also
Other roc:
asROCRPrediction()
Other performance:
ConfusionMatrix,
calculateConfusionMatrix(),
estimateRelativeOverfitting(),
makeCostMeasure(),
makeCustomResampledMeasure(),
makeMeasure(),
measures,
performance(),
setAggregation(),
setMeasurePars()
Examples
lrn = makeLearner("classif.rpart", predict.type = "prob")  # decision tree with probability predictions
fit = train(lrn, sonar.task)                               # train on the example sonar task
pred = predict(fit, task = sonar.task)                     # predict on the same task
calculateROCMeasures(pred)                                 # confusion matrix and ROC measures
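The arguments of the print method documented above can be used to suppress the abbreviation legend or change the rounding, e.g. (a sketch continuing the example):

r = calculateROCMeasures(pred)
print(r, abbreviations = FALSE, digits = 3)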