BSDT {singcar}	R Documentation

Bayesian Standardised Difference Test

Description

A test of the discrepancy between a single case's scores on two tasks, by comparison to the distribution of discrepancies on the same two tasks in a control sample. The tasks can be measured on the same scale with the same underlying distribution or on different scales, by setting unstandardised to TRUE or FALSE (default), respectively. Calculates a standardised effect size of the task discrepancy, a point estimate of the proportion of the control population that would be expected to show a more extreme discrepancy, and the associated credible intervals. The test is based on random number generation, which means that results may vary between runs. This is by design: set.seed() is deliberately not used inside the function, to emphasise the randomness of the test. For more accurate and stable results, increase the number of iterations via iter whenever feasible. Developed by Crawford and Garthwaite (2007).

Usage

BSDT(
  case_a,
  case_b,
  controls_a,
  controls_b,
  sd_a = NULL,
  sd_b = NULL,
  sample_size = NULL,
  r_ab = NULL,
  alternative = c("two.sided", "greater", "less"),
  int_level = 0.95,
  iter = 10000,
  unstandardised = FALSE,
  calibrated = TRUE,
  na.rm = FALSE
)

Arguments

case_a

Case's score on task A.

case_b

Case's score on task B.

controls_a

Controls' scores on task A. Takes either a vector of observations or a single value interpreted as the mean. Note: you can supply a vector for task A while supplying a mean and SD for task B.

controls_b

Controls' scores on task B. Takes either a vector of observations or a single value interpreted as the mean. Note: you can supply a vector for task B while supplying a mean and SD for task A.

sd_a

If a single value is given as input for task A, you must supply the standard deviation of the control sample.

sd_b

If a single value is given as input for task B, you must supply the standard deviation of the control sample.

sample_size

If the controls for task A or B are given as a mean and SD, you must supply the sample size. If controls_a is given as a vector and controls_b as a mean and SD, sample_size must equal the number of observations in controls_a.

r_ab

If the controls for task A or B are given as a mean and SD, you must also supply the correlation between the tasks.

alternative

A character string specifying the alternative hypothesis, must be one of "two.sided" (default), "greater" or "less". You can specify just the initial letter. Since the direction of the expected effect depends on which task is set as A and which is set as B, be very careful if changing this parameter.

int_level

Level of confidence for credible intervals, defaults to 95%.

iter

Number of iterations, defaults to 10000. A greater number gives a better estimate but takes longer to calculate.

unstandardised

Whether to estimate the z-value from unstandardised (TRUE) or standardised (FALSE, the default) task scores. Set to TRUE only if the tasks are measured on the same scale with the same underlying distribution.

calibrated

Whether to use the calibrated prior examined by Berger and Sun (2008) (TRUE, the default) or the standard theory (Jeffreys) prior distribution (FALSE). When the calibrated prior is used, the sample estimate of the covariance matrix is based on a sample size of n - 1. See Crawford et al. (2011) for further information. The calibrated prior is recommended.

na.rm

Remove NAs from controls.

Details

Uses random generation from the inverse Wishart distribution, provided by the CholWishart package (Geoffrey Thompson, 2019).
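The posterior sampling step can be illustrated directly. A minimal sketch of drawing covariance matrices from the inverse Wishart distribution with CholWishart::rInvWishart(), the generator BSDT relies on; the scale matrix and degrees of freedom below are illustrative values, not the ones BSDT computes internally:

```r
library(CholWishart)

# Illustrative 2 x 2 scale matrix: unit task variances, correlation 0.68
Sigma <- matrix(c(1, 0.68, 0.68, 1), nrow = 2)

# Draw 5 covariance matrices from an inverse Wishart distribution with
# 19 degrees of freedom (e.g. n - 1 for a control sample of 20)
draws <- rInvWishart(n = 5, df = 19, Sigma = Sigma)

str(draws)  # a 2 x 2 x 5 array, one covariance matrix per slice
```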

Value

A list with class "htest" containing the following components:

statistic

the mean z-value over iter iterations.

parameter

the degrees of freedom used to specify the posterior distribution.

p.value

the mean p-value over iter iterations.

estimate

the case's scores expressed as z-scores on tasks A and B, the standardised effect size (Z-DCC) of the task difference between case and controls, and a point estimate of the proportion of the control population estimated to show a more extreme task difference.

null.value

the value of the difference under the null hypothesis.

alternative

a character string describing the alternative hypothesis.

method

a character string indicating what type of test was performed.

data.name

a character string giving the name(s) of the data.
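Since the return value has class "htest", the components listed above can be accessed by name, as with other R hypothesis tests. A minimal sketch, assuming singcar is installed (the numeric inputs are the illustrative values from the Examples section):

```r
library(singcar)

# Run the test on summary-statistics input
res <- BSDT(case_a = -3.857, case_b = -1.875, controls_a = 0, controls_b = 0,
            sd_a = 1, sd_b = 1, sample_size = 20, r_ab = 0.68, iter = 100)

res$p.value    # mean p-value over the iterations
res$estimate   # z-scores, Z-DCC and estimated proportion
res$statistic  # mean z-value
```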

References

Berger, J. O., & Sun, D. (2008). Objective Priors for the Bivariate Normal Model. The Annals of Statistics, 36(2), 963-982. JSTOR.

Crawford, J. R., & Garthwaite, P. H. (2007). Comparison of a single case to a control or normative sample in neuropsychology: Development of a Bayesian approach. Cognitive Neuropsychology, 24(4), 343-372. doi:10.1080/02643290701290146

Crawford, J. R., Garthwaite, P. H., & Ryan, K. (2011). Comparing a single case to a control sample: Testing for neuropsychological deficits and dissociations in the presence of covariates. Cortex, 47(10), 1166-1178. doi:10.1016/j.cortex.2011.02.017

Geoffrey Thompson (2019). CholWishart: Cholesky Decomposition of the Wishart Distribution. R package version 1.1.0. https://CRAN.R-project.org/package=CholWishart

Examples

# Summary-statistics input: case scores plus control means, SDs,
# sample size and between-task correlation
BSDT(-3.857, -1.875, controls_a = 0, controls_b = 0, sd_a = 1,
     sd_b = 1, sample_size = 20, r_ab = 0.68, iter = 100)

# Raw-data input: first observation as the case, the rest as controls
BSDT(case_a = size_weight_illusion[1, "V_SWI"],
     case_b = size_weight_illusion[1, "K_SWI"],
     controls_a = size_weight_illusion[-1, "V_SWI"],
     controls_b = size_weight_illusion[-1, "K_SWI"], iter = 100)
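Because the test is based on random number generation, repeated calls with a small iter may give noticeably different results; increasing iter stabilises the estimates, as the Description notes. A sketch illustrating this, assuming singcar is installed (run time grows with iter):

```r
library(singcar)

# Two runs with few iterations: the p-values will typically differ somewhat
p_run1 <- BSDT(-3.857, -1.875, controls_a = 0, controls_b = 0, sd_a = 1,
               sd_b = 1, sample_size = 20, r_ab = 0.68, iter = 100)$p.value
p_run2 <- BSDT(-3.857, -1.875, controls_a = 0, controls_b = 0, sd_a = 1,
               sd_b = 1, sample_size = 20, r_ab = 0.68, iter = 100)$p.value
c(p_run1, p_run2)

# With more iterations, successive runs agree more closely
p_stable <- BSDT(-3.857, -1.875, controls_a = 0, controls_b = 0, sd_a = 1,
                 sd_b = 1, sample_size = 20, r_ab = 0.68, iter = 10000)$p.value
p_stable
```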


[Package singcar version 0.1.5 Index]