roc_neat {neatStats}    R Documentation

Difference of Two Areas Under the Curves

Description

Comparison of two areas under the receiver operating characteristic curves (AUCs) and plotting any number of ROC curves.

Usage

roc_neat(
  roc1,
  roc2 = NULL,
  pair = FALSE,
  greater = NULL,
  ci = NULL,
  hush = FALSE,
  plot_rocs = FALSE,
  roc_labels = "",
  cutoff_auto = TRUE,
  cutoff_custom = NULL
)

Arguments

roc1

Receiver operating characteristic (ROC) object, or, for plotting only, a list including any number of such ROC objects.

roc2

Receiver operating characteristic (ROC) object, or, for plotting only, leave it as NULL (default) and provide a list for the first parameter (roc1).

pair

Logical. If TRUE, the test is conducted for paired samples. Otherwise (default) for independent samples.

greater

NULL or string (or number); optionally specifies a one-sided test: either "1" (roc1 AUC expected to be greater than roc2 AUC) or "2" (roc2 AUC expected to be greater than roc1 AUC). If NULL (default), the test is two-sided.

ci

Numeric; confidence level for the returned CI of the raw AUC difference.

hush

Logical. If TRUE, prevents printing any details to console (and plotting).

plot_rocs

Logical. If TRUE, plots and returns ROC curves.

roc_labels

Optional character vector to provide legend label texts (in the order of the provided ROC objects) for the ROC plot.

cutoff_auto

Logical. If TRUE (default), optimal cutoffs on the ROC plots are displayed.

cutoff_custom

Custom cutoff(s) to be indicated on the plot can be given here in a list. The position of each element in this list must exactly correspond to the position, within the list given as roc1, of the ROC object (AUC) for which the given cutoff is intended.

Value

Prints DeLong's test results for the comparison of the two given AUCs in APA style, as well as the corresponding CI for the AUC difference. Furthermore, when assigned, returns a list with stat (D value), p (p value), and, when plot_rocs is TRUE, the ROC plot.
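For instance, assuming roc_a and roc_b are two ROC objects (hypothetical names here; see the Examples for how such objects may be obtained), the returned elements could be accessed as follows:

auc_test = roc_neat(roc_a, roc_b)
auc_test$stat  # D value
auc_test$p     # p value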

Note

The main test statistics are calculated via pROC::roc.test as DeLong's test (for both paired and unpaired). The roc_neat function merely prints it in APA style. The CI is calculated based on the p value, as described by Altman and Bland (2011).

The ROC object may be calculated via t_neat, or directly with pROC::roc.
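A minimal sketch of the latter option, with purely illustrative data (a binary outcome vector and two numeric predictors measured on the same cases):

# build ROC objects directly with pROC, then compare them via roc_neat
outcome = c(0, 0, 0, 0, 0, 1, 1, 1, 1, 1)  # true class labels
pred_a = c(1, 3, 2, 4, 3, 5, 6, 5, 7, 8)   # first predictor
pred_b = c(2, 1, 3, 3, 4, 4, 5, 4, 6, 5)   # second predictor
roc_a = pROC::roc(outcome, pred_a)
roc_b = pROC::roc(outcome, pred_b)
roc_neat(roc_a, roc_b, pair = TRUE)  # paired: both ROCs stem from the same cases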

References

Altman, D. G., & Bland, J. M. (2011). How to obtain the confidence interval from a P value. BMJ, 343, d2090. doi:10.1136/bmj.d2090

DeLong, E. R., DeLong, D. M., & Clarke-Pearson, D. L. (1988). Comparing the areas under two or more correlated receiver operating characteristic curves: a nonparametric approach. Biometrics, 44(3), 837-845. doi:10.2307/2531595

Robin, X., Turck, N., Hainard, A., Tiberti, N., Lisacek, F., Sanchez, J. C., & Muller, M. (2011). pROC: an open-source package for R and S+ to analyze and compare ROC curves. BMC bioinformatics, 12(1), 77. doi:10.1186/1471-2105-12-77

See Also

t_neat

Examples


# calculate first AUC (from v1 and v2)
v1 = c(191, 115, 129, 43, 523, -4, 34, 28, 33, -1, 54)
v2 = c(4, -2, 23, 13, 32, 16, 3, 29, 37, -4, 65)
results1 = t_neat(v1, v2, auc_added = TRUE)

# calculate second AUC (from v3 and v4)
v3 = c(14.1, 58.5, 25.5, 42.2, 13, 4.4, 55.5, 28.5, 25.6, 37.1)
v4 = c(36.2, 45.2, 41, 24.6, 30.5, 28.2, 40.9, 45.1, 31, 16.9)
results2 = t_neat(v3, v4, auc_added = TRUE)

# one-sided comparison of the two AUCs
roc_neat(results1$roc_obj, results2$roc_obj, greater = "1")
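
# the same comparison, two-sided, with a 90% CI for the AUC difference
# (illustrative use of the ci argument described above)
roc_neat(results1$roc_obj, results2$roc_obj, ci = .90)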


# create a list of randomly generated AUCs
set.seed(1)
aucs_list = list()
for (i in 1:4) {
    aucs_list[[i]] = t_neat(rnorm(50, (i-1)),
                            rnorm(50),
                            auc_added = TRUE,
                            hush = TRUE)$roc_obj
}
# depict AUCs (recognized as list)
roc_neat(aucs_list)
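
# the same plot with custom legend labels
# (the label texts below are arbitrary examples)
roc_neat(aucs_list,
         roc_labels = c('AUC 1', 'AUC 2', 'AUC 3', 'AUC 4'))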


# with custom cutoffs depicted
roc_neat(aucs_list,
         cutoff_custom = list(0.2),
         cutoff_auto = FALSE)
roc_neat(aucs_list,
         cutoff_custom = list(.1, c(-.5, 0), NULL, c(.7, 1.6)),
         cutoff_auto = FALSE)
roc_neat(aucs_list,
         cutoff_custom = list(.6, NULL, NULL, 1.1))
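
# comparison of two AUCs with the corresponding ROC curves plotted as well
# (plot_rocs and roc_labels as documented above; label texts are arbitrary)
roc_neat(results1$roc_obj,
         results2$roc_obj,
         plot_rocs = TRUE,
         roc_labels = c('first', 'second'))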



[Package neatStats version 1.13.3 Index]