BSDT_cov {singcar}    R Documentation

Bayesian Standardised Difference Test with Covariates

Description

Takes two single observations from a case on two variables (A and B) and compares their standardised discrepancy to the discrepancies of the variables in a control sample, while controlling for the effects of covariates, using Bayesian methodology. This test is used when assessing a case conditioned on some other variable, for example, assessing abnormality of discrepancy when controlling for years of education or sex. Under the null hypothesis the case is an observation from the distribution of discrepancies between the tasks of interest coming from observations having the same score as the case on the covariate(s). Returns a significance test, point and interval estimates of the difference between the case and the mean of the controls, as well as point and interval estimates of abnormality, i.e. an estimate of the proportion of controls that would exhibit a more extreme conditioned score. The test was developed by Crawford, Garthwaite and Ryan (2011).

This test is based on random number generation, which means that results may vary between runs. This is by design: set.seed() is deliberately not used inside the function, to emphasise the randomness of the test. To get more accurate and stable results, increase the number of iterations (iter) whenever feasible.
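
A minimal sketch of this run-to-run variability (not part of the package documentation), reusing the data from the Examples section; run_bsdt() is an illustrative helper, not a singcar function:

run_bsdt <- function(n_iter) {    # illustrative helper, not part of singcar
  BSDT_cov(case_tasks = c(size_weight_illusion[1, "V_SWI"],
                          size_weight_illusion[1, "K_SWI"]),
           case_covar = size_weight_illusion[1, "YRS"],
           control_tasks = cbind(size_weight_illusion[-1, "V_SWI"],
                                 size_weight_illusion[-1, "K_SWI"]),
           control_covar = size_weight_illusion[-1, "YRS"],
           iter = n_iter)$p.value
}
c(run_bsdt(1000), run_bsdt(1000))     # typically close but not identical
c(run_bsdt(10000), run_bsdt(10000))   # more iterations, more stable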

Usage

BSDT_cov(
  case_tasks,
  case_covar,
  control_tasks,
  control_covar,
  alternative = c("two.sided", "greater", "less"),
  int_level = 0.95,
  calibrated = TRUE,
  iter = 10000,
  use_sumstats = FALSE,
  cor_mat = NULL,
  sample_size = NULL
)

Arguments

case_tasks

A vector of length 2. The case scores from the two tasks.

case_covar

A vector containing the case scores on all covariates included.

control_tasks

A matrix or dataframe with 2 columns and n rows containing the control scores for the two tasks. Or, if use_sumstats is set to TRUE, a 2x2 matrix or dataframe containing summary statistics, where the first column represents the means for each task and the second column represents the standard deviations.

control_covar

A matrix or dataframe containing the control scores on the covariates included. Or, if use_sumstats is set to TRUE, a matrix or dataframe containing summary statistics, where the first column represents the means for each covariate and the second column represents the standard deviations.

alternative

A character string specifying the alternative hypothesis, must be one of "two.sided" (default), "greater" or "less". You can specify just the initial letter. Since the direction of the expected effect depends on which task is set as A and which is set as B, be very careful if changing this parameter.

int_level

The probability level on the Bayesian credible intervals, defaults to 95%.

calibrated

If set to FALSE, the standard theory (Jeffreys) prior distribution is used; if set to TRUE (the default), the calibrated prior examined by Berger and Sun (2008) is used. When the calibrated prior is used, the sample estimation of the covariance matrix is based on a sample size of n - 1. See Crawford et al. (2011) for further information. The calibrated prior is recommended.

iter

Number of iterations to be performed. A greater number gives more accurate estimates but takes longer to compute. Defaults to 10000.

use_sumstats

If set to TRUE, control_tasks and control_covar are treated as matrices of summary statistics, where the first column represents the means for each variable and the second column represents the standard deviations.

cor_mat

A correlation matrix of all variables included. NOTE: the first two variables should be the tasks of interest. Only needed if use_sumstats is set to TRUE.

sample_size

An integer specifying the sample size of the controls. Only needed if use_sumstats is set to TRUE.
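
The sketch below illustrates the summary-statistics interface with two tasks and one covariate; all means, standard deviations and correlations are invented for illustration only:

BSDT_cov(case_tasks = c(-1.5, 0.2),                  # case scores on tasks A and B
         case_covar = 12,                            # case score on the covariate
         control_tasks = matrix(c(0, 0,              # column 1: task means
                                  1, 1), ncol = 2),  # column 2: task SDs
         control_covar = matrix(c(14, 3), ncol = 2), # covariate mean and SD
         cor_mat = matrix(c(1.0, 0.5, 0.3,           # tasks first, then the covariate
                            0.5, 1.0, 0.2,
                            0.3, 0.2, 1.0), ncol = 3),
         sample_size = 30,
         use_sumstats = TRUE,
         iter = 100)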

Details

Uses random generation of inverse Wishart distributions from the CholWishart package (Geoffrey Thompson, 2019).
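
As a hedged illustration of that building block (not singcar's internal code), assuming CholWishart::rInvWishart() follows the same n/df/Sigma interface as stats::rWishart():

S <- matrix(c(1.0, 0.4,
              0.4, 1.0), ncol = 2)                   # arbitrary 2 x 2 scale matrix
draws <- CholWishart::rInvWishart(n = 5, df = 10, Sigma = S)
dim(draws)                                           # 2 x 2 x 5: five sampled covariance matrices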

Value

A list with class "htest" containing the following components:

statistic: the average z-value over iter number of iterations.
parameter: the degrees of freedom used to specify the posterior distribution.
p.value: the average p-value over iter number of iterations.
estimate: the case scores expressed as z-scores on tasks A and B, the standardised effect size (Z-DCCC) of the task difference between case and controls, and a point estimate of the proportion of the control population estimated to show a more extreme task difference.
null.value: the value of the difference between tasks under the null hypothesis.
interval: a named numerical vector containing the credible level and intervals for both effect size and p-value.
desc: a data frame containing means and standard deviations for the controls as well as the case scores.
cor.mat: a matrix giving the correlations between the tasks of interest and the covariates included.
sample.size: the number of controls.
alternative: a character string describing the alternative hypothesis.
method: a character string indicating what type of test was performed.
data.name: a character string giving the name(s) of the data.
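
A short sketch of accessing these components by name, reusing the call from the Examples section (component names as documented above):

res <- BSDT_cov(case_tasks = c(size_weight_illusion[1, "V_SWI"],
                               size_weight_illusion[1, "K_SWI"]),
                case_covar = size_weight_illusion[1, "YRS"],
                control_tasks = cbind(size_weight_illusion[-1, "V_SWI"],
                                      size_weight_illusion[-1, "K_SWI"]),
                control_covar = size_weight_illusion[-1, "YRS"], iter = 1000)
res$p.value    # average p-value over the iterations
res$estimate   # standardised case scores, Z-DCCC and estimated abnormality
res$interval   # credible level and intervals for effect size and p-value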

References

Berger, J. O., & Sun, D. (2008). Objective Priors for the Bivariate Normal Model. The Annals of Statistics, 36(2), 963-982. JSTOR.

Crawford, J. R., Garthwaite, P. H., & Ryan, K. (2011). Comparing a single case to a control sample: Testing for neuropsychological deficits and dissociations in the presence of covariates. Cortex, 47(10), 1166-1178. doi:10.1016/j.cortex.2011.02.017

Geoffrey Thompson (2019). CholWishart: Cholesky Decomposition of the Wishart Distribution. R package version 1.1.0. https://CRAN.R-project.org/package=CholWishart

Examples

# Case: the first participant's scores on the two tasks (V_SWI and K_SWI).
# Controls: all remaining participants, with YRS as a covariate.
# iter is kept low here only to make the example run quickly.
BSDT_cov(case_tasks = c(size_weight_illusion[1, "V_SWI"],
                        size_weight_illusion[1, "K_SWI"]),
         case_covar = size_weight_illusion[1, "YRS"],
         control_tasks = cbind(size_weight_illusion[-1, "V_SWI"],
                               size_weight_illusion[-1, "K_SWI"]),
         control_covar = size_weight_illusion[-1, "YRS"], iter = 100)


[Package singcar version 0.1.5]