comp_accu_prob {riskyr}    R Documentation
Compute exact accuracy metrics based on probabilities.
Description
comp_accu_prob computes a list of exact accuracy metrics from a sufficient and valid set of 3 essential probabilities (prev, and sens or its complement mirt, and spec or its complement fart).
Usage
comp_accu_prob(
prev = prob$prev,
sens = prob$sens,
mirt = NA,
spec = prob$spec,
fart = NA,
tol = 0.01,
w = 0.5
)
Arguments
prev: The condition's prevalence (i.e., the probability of the condition being TRUE).
sens: The decision's sensitivity (i.e., the probability of a positive decision given that the condition is TRUE); optional when its complement mirt is provided.
mirt: The decision's miss rate (i.e., the probability of a negative decision given that the condition is TRUE); optional when its complement sens is provided.
spec: The decision's specificity (i.e., the probability of a negative decision given that the condition is FALSE); optional when its complement fart is provided.
fart: The decision's false alarm rate (i.e., the probability of a positive decision given that the condition is FALSE); optional when its complement spec is provided.
tol: A numeric tolerance value (default: tol = 0.01).
w: The weighting parameter w (from 0 to 1) used to compute weighted accuracy wacc; the default w = 0.5 yields balanced accuracy bacc.
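For example, since mirt and fart are the complements of sens and spec, the following pair of calls is a hypothetical illustration of passing complements instead of sens and spec (assuming that NA is supplied for each probability that should be derived from its provided complement):

comp_accu_prob(prev = .25, sens = .80, spec = .90)                        # from sens and spec
comp_accu_prob(prev = .25, sens = NA, mirt = .20, spec = NA, fart = .10)  # from their complements mirt and fart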
Details
Currently computed accuracy metrics include (each formula is illustrated by the sketch after this list):

- acc: Overall accuracy, i.e., the proportion (or probability) of correctly classified cases, or dec_cor cases:
  (a) from prob: acc = (prev * sens) + ((1 - prev) * spec)
  (b) from freq: acc = dec_cor/N = (hi + cr)/(hi + mi + fa + cr)
  When the frequencies in freq are not rounded, (b) coincides with (a).
  Values range from 0 (no correct prediction) to 1 (perfect prediction).

- wacc: Weighted accuracy, i.e., a weighted average of the sensitivity sens (aka. hit rate HR, TPR, power, or recall) and the specificity spec (aka. TNR), in which sens is multiplied by a weighting parameter w (ranging from 0 to 1) and spec is multiplied by w's complement (1 - w):
  wacc = (w * sens) + ((1 - w) * spec)
  If w = .50, wacc becomes balanced accuracy bacc.

- mcc: The Matthews correlation coefficient (with values ranging from -1 to +1):
  mcc = ((hi * cr) - (fa * mi)) / sqrt((hi + fa) * (hi + mi) * (cr + fa) * (cr + mi))
  A value of mcc = 0 implies random performance; mcc = 1 implies perfect performance.
  See Wikipedia: Matthews correlation coefficient for additional information.

- f1s: The harmonic mean of the positive predictive value PPV (aka. precision) and the sensitivity sens (aka. hit rate HR, TPR, power, or recall):
  f1s = 2 * (PPV * sens) / (PPV + sens)
  See Wikipedia: F1 score for additional information.
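The following minimal base R sketch (not part of riskyr) recomputes these four metrics directly from 3 essential probabilities; the values prev = 1/3, sens = 2/3, and spec = 3/4 are chosen only for illustration:

prev <- 1/3; sens <- 2/3; spec <- 3/4; w <- 1/2

# Exact (unrounded) frequencies for a population of size N = 1:
hi <- prev * sens               # hits (true positives)
mi <- prev * (1 - sens)         # misses (false negatives)
fa <- (1 - prev) * (1 - spec)   # false alarms (false positives)
cr <- (1 - prev) * spec         # correct rejections (true negatives)

acc  <- (prev * sens) + ((1 - prev) * spec)   # overall accuracy
wacc <- (w * sens) + ((1 - w) * spec)         # weighted (here: balanced) accuracy
mcc  <- ((hi * cr) - (fa * mi)) /
  sqrt((hi + fa) * (hi + mi) * (cr + fa) * (cr + mi))
PPV  <- hi / (hi + fa)                        # positive predictive value (precision)
f1s  <- 2 * (PPV * sens) / (PPV + sens)       # F1 score

c(acc = acc, wacc = wacc, mcc = mcc, f1s = f1s)
# These values should agree with those returned by:
# comp_accu_prob(prev = 1/3, sens = 2/3, spec = 3/4)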
Note that some accuracy metrics can be interpreted as probabilities (e.g., acc) and some as correlations (e.g., mcc). Also, accuracy can be viewed as a probability (e.g., the ratio of, or link between, dec_cor and N) or as a frequency type (containing dec_cor and dec_err).
comp_accu_prob computes exact accuracy metrics from probabilities. When input frequencies were rounded (see the default of round = TRUE in comp_freq and comp_freq_prob), the accuracy metrics computed by comp_accu_freq correspond to these rounded values.
Value
A list accu containing the current accuracy metrics.
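Assuming the returned list contains one named element per metric listed in Details, individual metrics can be extracted as usual, e.g.:

accu <- comp_accu_prob(prev = 1/3, sens = 2/3, spec = 3/4)
accu$acc   # overall accuracy (element name assumed from the metrics listed in Details)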
References
Consult Wikipedia: Confusion matrix for additional information.
See Also
accu for all accuracy metrics; comp_accu_freq computes accuracy metrics from frequencies; num for basic numeric parameters; freq for current frequency information; txt for current text settings; pal for current color settings; popu for a table of the current population.
Other metrics: accu, acc, comp_accu_freq(), comp_acc(), comp_err(), err
Other functions computing probabilities: comp_FDR(), comp_FOR(), comp_NPV(), comp_PPV(), comp_accu_freq(), comp_acc(), comp_comp_pair(), comp_complement(), comp_complete_prob_set(), comp_err(), comp_fart(), comp_mirt(), comp_ppod(), comp_prob_freq(), comp_prob(), comp_sens(), comp_spec()
Examples
comp_accu_prob() # => accuracy metrics for prob of current scenario
comp_accu_prob(prev = .2, sens = .5, spec = .5) # medium accuracy, but cr > hi.
# Extreme cases:
comp_accu_prob(prev = NaN, sens = NaN, spec = NaN) # returns list of NA values
comp_accu_prob(prev = 0, sens = NaN, spec = 1) # returns list of NA values
comp_accu_prob(prev = 0, sens = 0, spec = 1) # perfect acc = 1, but f1s is NaN
comp_accu_prob(prev = .5, sens = .5, spec = .5) # random performance
comp_accu_prob(prev = .5, sens = 1, spec = 1) # perfect accuracy
comp_accu_prob(prev = .5, sens = 0, spec = 0) # zero accuracy, but f1s is NaN
comp_accu_prob(prev = 1, sens = 1, spec = 0) # perfect, but see wacc (0.5) and mcc (0)
# Effects of w:
comp_accu_prob(prev = .5, sens = .6, spec = .4, w = 1/2) # equal weights to sens and spec
comp_accu_prob(prev = .5, sens = .6, spec = .4, w = 2/3) # more weight on sens: wacc up
comp_accu_prob(prev = .5, sens = .6, spec = .4, w = 1/3) # more weight on spec: wacc down
# Contrasting comp_accu_freq and comp_accu_prob:
# (a) comp_accu_freq (based on rounded frequencies):
freq1 <- comp_freq(N = 10, prev = 1/3, sens = 2/3, spec = 3/4) # => rounded frequencies!
accu1 <- comp_accu_freq(freq1$hi, freq1$mi, freq1$fa, freq1$cr) # => accu1 (based on rounded freq).
# accu1
# (b) comp_accu_prob (based on probabilities):
accu2 <- comp_accu_prob(prev = 1/3, sens = 2/3, spec = 3/4) # => exact accu (based on prob).
# accu2
all.equal(accu1, accu2) # => 4 differences!
#
# (c) comp_accu_freq (exact values, i.e., without rounding):
freq3 <- comp_freq(N = 10, prev = 1/3, sens = 2/3, spec = 3/4, round = FALSE)
accu3 <- comp_accu_freq(freq3$hi, freq3$mi, freq3$fa, freq3$cr) # => accu3 (based on EXACT freq).
# accu3
all.equal(accu2, accu3) # => TRUE (qed).