calculateROCMeasures {mlr}    R Documentation
Calculate receiver operator measures.
Description
Calculate the absolute number of correct/incorrect classifications and the following evaluation measures:
- tpr: True positive rate (Sensitivity, Recall)
- fpr: False positive rate (Fall-out)
- fnr: False negative rate (Miss rate)
- tnr: True negative rate (Specificity)
- ppv: Positive predictive value (Precision)
- for: False omission rate
- lrp: Positive likelihood ratio (LR+)
- fdr: False discovery rate
- npv: Negative predictive value
- acc: Accuracy
- lrm: Negative likelihood ratio (LR-)
- dor: Diagnostic odds ratio
For details on the used measures see measures and also https://en.wikipedia.org/wiki/Receiver_operating_characteristic.
The element for the false omission rate in the resulting object is not called "for" but "fomr", since "for" should never be used as a variable name in an object.
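As an illustration only (not part of the mlr API), the measures above can be written in terms of the four confusion-matrix counts TP, FP, FN and TN; the counts below are made up:

tp <- 30; fp <- 10; fn <- 5; tn <- 55     # hypothetical counts, for illustration only
tpr <- tp / (tp + fn)                     # true positive rate (sensitivity, recall)
fpr <- fp / (fp + tn)                     # false positive rate (fall-out)
fnr <- fn / (fn + tp)                     # false negative rate (miss rate)
tnr <- tn / (tn + fp)                     # true negative rate (specificity)
ppv <- tp / (tp + fp)                     # positive predictive value (precision)
npv <- tn / (tn + fn)                     # negative predictive value
fdr <- fp / (fp + tp)                     # false discovery rate
fomr <- fn / (fn + tn)                    # false omission rate ("for" in the list above)
acc <- (tp + tn) / (tp + fp + fn + tn)    # accuracy
lrp <- tpr / fpr                          # positive likelihood ratio (LR+)
lrm <- fnr / tnr                          # negative likelihood ratio (LR-)
dor <- lrp / lrm                          # diagnostic odds ratio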
Usage
calculateROCMeasures(pred)
## S3 method for class 'ROCMeasures'
print(x, abbreviations = TRUE, digits = 2, ...)
Arguments
pred            (Prediction) Prediction object.
x               (ROCMeasures) Object created by calculateROCMeasures().
abbreviations   (logical(1)) If TRUE, additionally print short explanations of the measure abbreviations. Default is TRUE.
digits          (integer(1)) Number of digits the measures are rounded to in the printed output. Default is 2.
...             (any)
Value
(ROCMeasures). A list containing two elements: confusion.matrix, the 2 x 2 confusion matrix of absolute frequencies, and measures, a list of the above-mentioned measures.
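A minimal sketch of how the two elements might be accessed, assuming pred is a Prediction as in the Examples below; the measure names are assumed to follow the list in the Description, with fomr in place of for:

r = calculateROCMeasures(pred)
r$confusion.matrix          # 2 x 2 table of absolute frequencies
r$measures$tpr              # a single measure, e.g. the true positive rate
r$measures$fomr             # false omission rate, stored under the name "fomr"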
Functions
- print(ROCMeasures): Print method for ROCMeasures objects (see Usage above).
See Also
Other roc: asROCRPrediction()
Other performance: ConfusionMatrix, calculateConfusionMatrix(), estimateRelativeOverfitting(), makeCostMeasure(), makeCustomResampledMeasure(), makeMeasure(), measures, performance(), setAggregation(), setMeasurePars()
Examples
library(mlr)                                                # functions below come from the mlr package
lrn = makeLearner("classif.rpart", predict.type = "prob")   # decision tree with probability predictions
fit = train(lrn, sonar.task)                                # train on the bundled Sonar task
pred = predict(fit, task = sonar.task)                      # predict on the same task
calculateROCMeasures(pred)                                  # confusion matrix plus the measures above
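Building on the example above, the documented print arguments can be used to control the output; the exact formatting produced is up to the print method:

res = calculateROCMeasures(pred)
print(res, abbreviations = FALSE, digits = 3)   # assumed: omit the measure explanations, round to 3 digits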