getAccuracy {mcradds}  R Documentation
Summary Method for MCTab Objects
Description
Provides a concise summary of the content of MCTab objects. Computes sensitivity, specificity, positive and negative predictive values, and positive and negative likelihood ratios for a diagnostic test evaluated against a reference/gold standard. Computes positive/negative percent agreement, overall percent agreement and Cohen's kappa when the new test is evaluated by comparison to a non-reference standard. Computes average positive/negative agreement when neither test is a reference standard, as in paired reader precision studies.
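For reference, the criteria reported against a reference/gold standard are simple functions of the 2x2 cell counts. The sketch below illustrates the standard definitions with assumed counts in plain base R; it is not the package's internal implementation.

# Hypothetical 2x2 table versus a reference/gold standard:
#                 reference +   reference -
# candidate +          tp            fp
# candidate -          fn            tn
tp <- 90; fp <- 10; fn <- 5; tn <- 95

sens <- tp / (tp + fn)        # how often the test is positive when the condition is present
spec <- tn / (tn + fp)        # how often the test is negative when the condition is absent
ppv  <- tp / (tp + fp)        # positive predictive value
npv  <- tn / (tn + fn)        # negative predictive value
plr  <- sens / (1 - spec)     # positive likelihood ratio: TPR / FPR
nlr  <- (1 - sens) / spec     # negative likelihood ratio: FNR / TNR
round(c(sens = sens, spec = spec, ppv = ppv, npv = npv, plr = plr, nlr = nlr), 4)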
Usage
getAccuracy(object, ...)
## S4 method for signature 'MCTab'
getAccuracy(
  object,
  ref = c("r", "nr", "bnr"),
  alpha = 0.05,
  r_ci = c("wilson", "wald", "clopper-pearson"),
  nr_ci = c("wilson", "wald", "clopper-pearson"),
  bnr_ci = "bootstrap",
  bootCI = c("perc", "norm", "basic", "stud", "bca"),
  nrep = 1000,
  rng.seed = NULL,
  digits = 4,
  ...
)
Arguments
object
(MCTab) input created by diagTab(), the 2x2 contingency table of the two tests' results.
...
other arguments to be passed to DescTools::BinomCI.
ref
(character) reference condition: "r" when the comparative method is a reference/gold standard, "nr" when it is a non-reference standard, and "bnr" when both tests are not reference standards (e.g. paired reader precision).
alpha
(numeric) type I error rate; confidence intervals are reported at the (1 - alpha) level. Default is 0.05.
r_ci
(string) method for the confidence intervals of sensitivity, specificity, PPV and NPV when ref = "r"; one of "wilson" (default), "wald" or "clopper-pearson", see DescTools::BinomCI.
nr_ci
(string) method for the confidence intervals of PPA, NPA and OPA when ref = "nr"; one of "wilson" (default), "wald" or "clopper-pearson", see DescTools::BinomCI.
bnr_ci
(string) method for the confidence intervals of APA and ANA when ref = "bnr"; default is "bootstrap".
bootCI
(string) type of bootstrap confidence interval used when bnr_ci = "bootstrap"; one of "perc" (percentile, default), "norm", "basic", "stud" or "bca".
nrep
(integer) number of bootstrap replicates; default is 1000.
rng.seed
(integer) seed for the random number generator used in bootstrap sampling; default is NULL, in which case results may differ between runs.
digits
(integer) number of digits to report for estimates and confidence limits; default is 4.
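To show what alpha, nrep and rng.seed control in the ref = "bnr" case, here is a minimal sketch of a percentile-style bootstrap interval for average positive agreement, using only base R and made-up reader data; it is not the mcradds implementation.

# Percentile ("perc") bootstrap CI for average positive agreement between two
# readers; hypothetical data for illustration only.
set.seed(12306)                               # role of rng.seed: reproducible resampling
reader1 <- sample(c("Positive", "Negative"), 50, replace = TRUE)
reader2 <- ifelse(runif(50) < 0.9, reader1,   # mostly concordant second reader
                  sample(c("Positive", "Negative"), 50, replace = TRUE))

apa <- function(x, y) {
  both_pos <- sum(x == "Positive" & y == "Positive")
  discord  <- sum(x != y)
  2 * both_pos / (2 * both_pos + discord)     # APA = 2a / (2a + b + c)
}

nrep  <- 1000                                 # role of nrep: number of bootstrap replicates
alpha <- 0.05
boots <- replicate(nrep, {
  idx <- sample(seq_along(reader1), replace = TRUE)
  apa(reader1[idx], reader2[idx])
})
quantile(boots, c(alpha / 2, 1 - alpha / 2))  # percentile confidence interval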
Value
A data frame containing the qualitative diagnostic accuracy criteria, with three columns for the point estimate and the lower and upper confidence limits. The criteria are defined as follows; a numerical illustration is given after the list.
sens: Sensitivity refers to how often the test is positive when the condition of interest is present.
spec: Specificity refers to how often the test is negative when the condition of interest is absent.
ppv: Positive predictive value refers to the percentage of subjects with a positive test result who have the target condition.
npv: Negative predictive value refers to the percentage of subjects with a negative test result who do not have the target condition.
plr: Positive likelihood ratio, the true positive rate divided by the false positive rate, i.e. sensitivity / (1 - specificity).
nlr: Negative likelihood ratio, the false negative rate divided by the true negative rate, i.e. (1 - sensitivity) / specificity.
ppa: Positive percent agreement; computed in the same way as sensitivity, but reported as agreement because the candidate method is compared with a comparative method rather than a reference/gold standard.
npa: Negative percent agreement; computed in the same way as specificity, but reported as agreement because the candidate method is compared with a comparative method rather than a reference/gold standard.
opa: Overall percent agreement, the proportion of subjects on which the two methods agree.
kappa: Cohen's kappa coefficient, a chance-corrected measure of agreement.
apa: Average positive agreement, the proportion of all positive calls made by either test that are in agreement; can be regarded as a weighted ppa.
ana: Average negative agreement, the proportion of all negative calls made by either test that are in agreement; can be regarded as a weighted npa.
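The agreement criteria can be illustrated directly from assumed 2x2 cell counts (a sketch of the standard formulas, not the package's internal code):

# Hypothetical 2x2 agreement table, candidate test in rows, comparative in columns:
n_pp <- 80   # both positive
n_pn <- 7    # candidate positive, comparative negative
n_np <- 6    # candidate negative, comparative positive
n_nn <- 107  # both negative
n <- n_pp + n_pn + n_np + n_nn

ppa <- n_pp / (n_pp + n_np)                   # positive percent agreement
npa <- n_nn / (n_pn + n_nn)                   # negative percent agreement
opa <- (n_pp + n_nn) / n                      # overall percent agreement
pe  <- ((n_pp + n_pn) * (n_pp + n_np) +
        (n_np + n_nn) * (n_pn + n_nn)) / n^2  # agreement expected by chance
kappa <- (opa - pe) / (1 - pe)                # Cohen's kappa
apa <- 2 * n_pp / (2 * n_pp + n_pn + n_np)    # average positive agreement
ana <- 2 * n_nn / (2 * n_nn + n_pn + n_np)    # average negative agreement
round(c(ppa = ppa, npa = npa, opa = opa, kappa = kappa, apa = apa, ana = ana), 4)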
Examples
# For qualitative performance
data("qualData")
tb <- qualData %>%
  diagTab(
    formula = ~ CandidateN + ComparativeN,
    levels = c(1, 0)
  )
getAccuracy(tb, ref = "r")
getAccuracy(tb, ref = "nr", nr_ci = "wilson")
# For Between-Reader precision performance
data("PDL1RP")
reader <- PDL1RP$btw_reader
tb2 <- reader %>%
  diagTab(
    formula = Reader ~ Value,
    bysort = "Sample",
    levels = c("Positive", "Negative"),
    rep = TRUE,
    across = "Site"
  )
getAccuracy(tb2, ref = "bnr")
getAccuracy(tb2, ref = "bnr", rng.seed = 12306)