p_superiority {effectsize}    R Documentation

Cohen's Us and Other Common Language Effect Sizes (CLES)

Description

Cohen's U_1, U_2, and U_3, probability of superiority, proportion of overlap, Wilcoxon-Mann-Whitney odds, and Vargha and Delaney's A are CLESs. These are effect sizes that represent differences between two (independent) distributions in probabilistic terms (see Details). Pair with any reported stats::t.test() or stats::wilcox.test().

Usage

p_superiority(
  x,
  y = NULL,
  data = NULL,
  mu = 0,
  paired = FALSE,
  parametric = TRUE,
  ci = 0.95,
  alternative = "two.sided",
  verbose = TRUE,
  ...
)

cohens_u1(
  x,
  y = NULL,
  data = NULL,
  mu = 0,
  parametric = TRUE,
  ci = 0.95,
  alternative = "two.sided",
  iterations = 200,
  verbose = TRUE,
  ...
)

cohens_u2(
  x,
  y = NULL,
  data = NULL,
  mu = 0,
  parametric = TRUE,
  ci = 0.95,
  alternative = "two.sided",
  iterations = 200,
  verbose = TRUE,
  ...
)

cohens_u3(
  x,
  y = NULL,
  data = NULL,
  mu = 0,
  parametric = TRUE,
  ci = 0.95,
  alternative = "two.sided",
  iterations = 200,
  verbose = TRUE,
  ...
)

p_overlap(
  x,
  y = NULL,
  data = NULL,
  mu = 0,
  parametric = TRUE,
  ci = 0.95,
  alternative = "two.sided",
  iterations = 200,
  verbose = TRUE,
  ...
)

vd_a(
  x,
  y = NULL,
  data = NULL,
  mu = 0,
  ci = 0.95,
  alternative = "two.sided",
  verbose = TRUE,
  ...
)

wmw_odds(
  x,
  y = NULL,
  data = NULL,
  mu = 0,
  paired = FALSE,
  ci = 0.95,
  alternative = "two.sided",
  verbose = TRUE,
  ...
)

Arguments

x, y

A numeric vector, or a character name of one in data. Any missing values (NAs) are dropped from the resulting vector. x can also be a formula (see stats::t.test()), in which case y is ignored.

data

An optional data frame containing the variables.

mu

A number indicating the true value of the mean (or the difference in means, if you are performing a two-sample test).

paired

If TRUE, the values of x and y are considered as paired. This produces an effect size that is equivalent to the one-sample effect size on x - y (see the sketch following the argument descriptions below).

parametric

Use parametric estimation (see cohens_d()) or non-parametric estimation (see rank_biserial()). See details.

ci

Confidence Interval (CI) level

alternative

A character string specifying the alternative hypothesis; controls the type of CI returned: "two.sided" (default, two-sided CI), "greater" or "less" (one-sided CI). Partial matching is allowed (e.g., "g", "l", "two"...). See One-Sided CIs in effectsize_CIs.

verbose

Toggle warnings and messages on or off.

...

Arguments passed to or from other methods. When x is a formula, these can be subset and na.action.

iterations

The number of bootstrap replicates for computing confidence intervals. Only applies when ci is not NULL and parametric = FALSE.
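
A rough, illustrative sketch of how paired and iterations behave (the sleep data, the derived vectors, and the choice of 500 replicates are assumptions made for this example, not part of this page):

# paired = TRUE is equivalent to the one-sample effect size on x - y
x <- with(datasets::sleep, extra[group == "1"])
y <- with(datasets::sleep, extra[group == "2"])
p_superiority(x, y, paired = TRUE)
p_superiority(x - y) # same effect size in its one-sample form

# iterations is only used for bootstrapped CIs, i.e. when parametric = FALSE
cohens_u2(mpg ~ am, data = mtcars, parametric = FALSE, iterations = 500)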

Details

These measures of effect size present group differences in probabilistic terms:

Wilcoxon-Mann-Whitney odds are the odds of non-parametric superiority (via probs_to_odds()): the odds that, when sampling one observation from each of the groups at random, the observation from the second group will be larger than the observation from the first group.
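
This relationship can be checked directly; a sketch, assuming the returned data frame stores the estimate in a p_superiority column (small discrepancies may arise from tie handling):

p_sup <- p_superiority(mpg ~ am, data = mtcars, parametric = FALSE)
probs_to_odds(p_sup$p_superiority) # should match the WMW odds below
wmw_odds(mpg ~ am, data = mtcars)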

Whereas U_1, U_2, and Overlap are agnostic to the direction of the difference between the groups, U_3 and the probability of superiority are not.

The parametric versions of these effect sizes assume normality of both populations and homoscedasticity. If these assumptions are not met, the non-parametric versions should be used.
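
For example, the same CLES can be estimated under either set of assumptions (a sketch using mtcars; the two estimates will generally differ somewhat):

p_superiority(mpg ~ am, data = mtcars)                     # parametric (normal-theory)
p_superiority(mpg ~ am, data = mtcars, parametric = FALSE) # non-parametric (rank-based)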

Value

A data frame containing the common language effect sizes (and optionally their CIs).

Confidence (Compatibility) Intervals (CIs)

For parametric CLES, the CIs are transformed CIs for Cohen's d (see d_to_u3()). For non-parametric (parametric = FALSE) CLES, the CI of Pr(superiority) is a transformed CI of the rank-biserial correlation (rb_to_p_superiority()), while for all others, confidence intervals are estimated using the bootstrap method (via the {boot} package).
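
A sketch of the parametric case, assuming cohens_d() returns its estimate in a column named Cohens_d (this is an illustration, not a reproduction of the internal computation):

d <- cohens_d(mpg ~ am, data = mtcars)
d_to_u3(d$Cohens_d) # should agree with the parametric estimate below
cohens_u3(mpg ~ am, data = mtcars)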

CIs and Significance Tests

"Confidence intervals on measures of effect size convey all the information in a hypothesis test, and more." (Steiger, 2004). Confidence (compatibility) intervals and p values are complementary summaries of parameter uncertainty given the observed data. A dichotomous hypothesis test could be performed with either a CI or a p value. The 100 (1 - \alpha)% confidence interval contains all of the parameter values for which p > \alpha for the current data and model. For example, a 95% confidence interval contains all of the values for which p > .05.

Note that a confidence interval including 0 does not indicate that the null (no effect) is true. Rather, it suggests that the observed data, together with the model and its assumptions, do not provide clear evidence against a parameter value of 0 (the same as with any other value in the interval), with the level of this evidence defined by the chosen \alpha level (Rafi & Greenland, 2020; Schweder & Hjort, 2016; Xie & Singh, 2013). To infer no effect, additional judgments about what parameter values are "close enough" to 0 to be negligible are needed ("equivalence testing"; Bauer & Kieser, 1996).

Bootstrapped CIs

Some effect sizes are directionless: they do have a minimum value that would be interpreted as "no effect", but they cannot cross it. For example, a null value of Kendall's W is 0, indicating no difference between groups, but it can never have a negative value. The same goes for U_2 and Overlap: the null value of U_2 is 0.5, but it can never be smaller than 0.5; an Overlap of 1 means "full overlap" (no difference), but it cannot be larger than 1.

When bootstrapping CIs for such effect sizes, the bounds of the CIs will never cross (and often will never cover) the null. Therefore, these CIs should not be used for statistical inference.
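
A sketch of this caveat, assuming the output stores the interval in CI_low and CI_high columns (exact values depend on the bootstrap seed):

set.seed(42)
u2 <- cohens_u2(mpg ~ am, data = mtcars, parametric = FALSE)
u2$CI_low # should not fall below the null value of 0.5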

Plotting with see

The see package contains relevant plotting functions. See the plotting vignette in the see package.

Note

If mu is not 0, the effect size represents the difference between the first sample, shifted by mu, and the second sample.
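
A minimal sketch of this behavior, assuming the shift is applied as x - mu (the vectors reuse the data from the Examples section below):

x <- c(1.83, 0.5, 1.62, 2.48, 1.68, 1.88, 1.55, 3.06, 1.3)
y <- c(0.878, 0.647, 0.598, 2.05, 1.06, 1.29, 1.06, 3.14, 1.29)
p_superiority(x, y, mu = 0.5)
p_superiority(x - 0.5, y) # equivalent: the first sample shifted by mu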

References

See Also

sd_pooled()

Other standardized differences: cohens_d(), mahalanobis_d(), means_ratio(), rank_biserial(), repeated_measures_d()

Other rank-based effect sizes: rank_biserial(), rank_epsilon_squared()

Examples

cohens_u2(mpg ~ am, data = mtcars)

p_superiority(mpg ~ am, data = mtcars, parametric = FALSE)

wmw_odds(mpg ~ am, data = mtcars)

x <- c(1.83, 0.5, 1.62, 2.48, 1.68, 1.88, 1.55, 3.06, 1.3)
y <- c(0.878, 0.647, 0.598, 2.05, 1.06, 1.29, 1.06, 3.14, 1.29)

p_overlap(x, y)
p_overlap(y, x) # direction of effect does not matter

cohens_u3(x, y)
cohens_u3(y, x) # direction of effect does matter

