| measureit.default {ROCit} | R Documentation | 
Performance Metrics of Binary Classifier
Description
This function computes various performance metrics at different cutoff values.
Usage
## Default S3 method:
measureit(
  score,
  class,
  negref = NULL,
  measure = c("ACC", "SENS"),
  step = FALSE,
  ... = NULL
)
Arguments
| score | A numeric vector of diagnostic scores. | 
| class | A vector of the same length as score, containing the class of the observations. | 
| negref | The reference value, i.e., the value of class treated as the negative response. If NULL, the reference is set alphabetically. | 
| measure | The performance metrics to be evaluated. See "Details" for available options. | 
| step | Logical, defaults to FALSE. | 
| ... | Additional arguments (currently unused). | 
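To illustrate the arguments, here is a minimal, hypothetical call; the toy vectors are invented for this sketch and are not from the package documentation:

set.seed(1)
toy_score <- c(rnorm(5, mean = 2), rnorm(5, mean = 0))  # diagnostic scores
toy_class <- rep(c("pos", "neg"), each = 5)             # class labels
m <- measureit(score = toy_score, class = toy_class,
               negref = "neg",                # "neg" taken as the negative class
               measure = c("SENS", "SPEC"))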
Details
Various cutoff-specific performance metrics for a binary classifier are
available. For a given cutoff value, all observations with a score equal to
or greater than the cutoff are predicted as positive. The following metrics
can be requested via the measure argument:
- ACC: Overall accuracy of classification = P(Y = \hat{Y}) = (TP + TN) / (TP + FP + TN + FN)
- MIS: Misclassification rate = 1 - ACC
- SENS: Sensitivity = P(\hat{Y} = 1 | Y = 1) = TP / (TP + FN)
- SPEC: Specificity = P(\hat{Y} = 0 | Y = 0) = TN / (TN + FP)
- PREC: Precision = P(Y = 1 | \hat{Y} = 1) = TP / (TP + FP)
- REC: Recall. Same as sensitivity.
- PPV: Positive predictive value. Same as precision.
- NPV: Negative predictive value = P(Y = 0 | \hat{Y} = 0) = TN / (TN + FN)
- TPR: True positive rate. Same as sensitivity.
- FPR: False positive rate = 1 - specificity.
- TNR: True negative rate. Same as specificity.
- FNR: False negative rate = P(\hat{Y} = 0 | Y = 1) = FN / (FN + TP)
- pDLR: Positive diagnostic likelihood ratio = TPR / FPR
- nDLR: Negative diagnostic likelihood ratio = FNR / TNR
- FSCR: F-score, defined as 2 * (PPV * TPR) / (PPV + TPR)
An exact match is required. Values passed to the measure argument that do not
match the available options are ignored.
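The definitions above can be mirrored in a few lines of base R at a single cutoff. This is a sketch only, not the package's internal code; sc, cl, and cutoff are made-up objects:

sc     <- c(0.9, 0.8, 0.6, 0.4, 0.3, 0.1)  # diagnostic scores
cl     <- c(1, 1, 0, 1, 0, 0)              # true classes (1 = positive)
cutoff <- 0.5
pred <- as.integer(sc >= cutoff)           # score >= cutoff => predicted positive
TP <- sum(pred == 1 & cl == 1); FP <- sum(pred == 1 & cl == 0)
TN <- sum(pred == 0 & cl == 0); FN <- sum(pred == 0 & cl == 1)
c(ACC  = (TP + TN) / (TP + FP + TN + FN),
  SENS = TP / (TP + FN),
  SPEC = TN / (TN + FP))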
Value
An object of class "measureit". By default it contains the following:
| Cutoff | Cutoff at which the metrics are evaluated. | 
| Depth | The portion of observations that fall on or above the cutoff. | 
| TP | Number of true positives, when observations with a score equal to or greater than the cutoff are predicted positive. | 
| FP | Number of false positives, when observations with a score equal to or greater than the cutoff are predicted positive. | 
| TN | Number of true negatives, when observations with a score equal to or greater than the cutoff are predicted positive. | 
| FN | Number of false negatives, when observations with a score equal to or greater than the cutoff are predicted positive. | 
When other metrics are requested via measure, they also appear in the returned
object, in the order listed above.
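Since the returned components are cutoff-wise vectors of equal length, they can be combined for inspection. A minimal sketch, assuming measure was created as in "Examples" with "ACC" among the requested metrics:

perf <- data.frame(Cutoff = measure$Cutoff, ACC = measure$ACC)
perf$Cutoff[which.max(perf$ACC)]   # cutoff giving the highest overall accuracy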
Note
The algorithm is designed for complete cases. If NA(s) are found in either
score or class, the corresponding observations are removed.
Internally, the observations are sorted with respect to score; ties are broken
by sorting with respect to class.
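For illustration only (not the package's internal code), the preprocessing described in this note roughly corresponds to:

ok  <- stats::complete.cases(score, class)  # keep complete cases only
s   <- score[ok]
cl  <- class[ok]
ord <- order(s, cl)   # sort by score, ties broken by class (sort direction assumed)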
Author(s)
Riaz Khan, mdriazahmed.khan@jacks.sdstate.edu
See Also
measureit.rocit, print.measureit
Examples
data("Diabetes")
logistic.model <- glm(factor(dtest) ~ chol + age + bmi,
                      data = Diabetes, family = "binomial")
class <- logistic.model$y              # observed binary response (0/1)
score <- logistic.model$fitted.values  # fitted probabilities as diagnostic score
# -------------------------------------------------------------
measure <- measureit(score = score, class = class,
                     measure = c("ACC", "SENS", "FSCR"))
names(measure)
plot(measure$ACC ~ measure$Cutoff, type = "l")  # accuracy across cutoffs
plot(measure$TP ~ measure$FP, type = "l")       # TP vs FP counts (ROC-like curve)