heterogeneity {psychmeta}    R Documentation
Supplemental heterogeneity statistics for meta-analyses
Description
This function computes a variety of supplemental heterogeneity statistics for meta-analyses; these statistics are provided for interested users. It is strongly recommended that heterogeneity in meta-analysis be interpreted using the SD_res, SD_rho, and SD_delta statistics, along with their corresponding credibility intervals, which are reported in the default ma_obj
output (Wiernik et al., 2017).
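The recommended interpretation can be sketched with a credibility interval. This is a minimal illustration using hypothetical values, not psychmeta output; the 80% interval uses the standard normal-distribution formula.

```r
# Sketch of the recommended interpretation: an 80% credibility interval built
# from a mean effect size and its residual SD. Values below are hypothetical,
# not psychmeta output.
mean_rho <- 0.30                                 # assumed mean true-score correlation
sd_rho   <- 0.10                                 # assumed SD_rho (residual SD)
cred_80  <- mean_rho + c(-1, 1) * qnorm(0.90) * sd_rho
round(cred_80, 2)                                # approximately 0.17 to 0.43
```

A wide interval such as this one suggests substantial residual heterogeneity even when the mean effect is sizable.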
Usage
heterogeneity(
  ma_obj,
  es_failsafe = NULL,
  conf_level = attributes(ma_obj)$inputs$conf_level,
  var_res_ci_method = c("profile_var_es", "profile_Q", "normal_logQ"),
  ...
)
Arguments
ma_obj: Meta-analysis object.

es_failsafe: Fail-safe effect-size value for file-drawer analyses.

conf_level: Confidence level defining the width of confidence intervals (default is the conf_level stored in ma_obj, i.e., attributes(ma_obj)$inputs$conf_level).

var_res_ci_method: Method used to estimate the limits of the confidence interval for the residual variance. Options are "profile_var_es" (the default), "profile_Q", and "normal_logQ".

...: Additional arguments.
Value
ma_obj with heterogeneity statistics added. The following statistics are included:
es_type: The effect size metric used.

percent_var_accounted: Percent-variance-accounted-for statistics (by sampling error, by other artifacts, and in total). These statistics are widely reported but not recommended, as they tend to be misinterpreted as suggesting that only a small portion of the observed variance is accounted for by sampling error and other artifacts (Schmidt, 2010; Schmidt & Hunter, 2015, pp. 15, 425). The square roots of these values are more interpretable and appropriate indices of the relations between observed effect sizes and statistical artifacts (see cor(es, perturbations)).

cor(es, perturbations): The correlation between observed effect sizes and statistical artifacts in each sample (with sampling error, with other artifacts, and with artifacts in total), computed as the square root of the corresponding percent_var_accounted value.
rel_es_obs: The reliability of observed effect size differences as indicators of true effect size differences in the sampled studies, computed as 1 minus the proportion of observed variance accounted for by statistical artifacts.
H_squared: The ratio of the observed effect size variance to the predicted (error) variance. Also the square of H.

H: The ratio of the observed effect size standard deviation to the predicted (error) standard deviation.

I_squared: The estimated percent variance not accounted for by sampling error or other artifacts (attributable to moderators and uncorrected artifacts). This statistic is simply rel_es_obs expressed as a percentage rather than a decimal.
Q: Cochran's χ² statistic for testing homogeneity of effect sizes.

tau_squared: τ², the estimated residual (between-studies) variance of effect sizes.

tau: τ, the square root of tau_squared; the estimated residual standard deviation of effect sizes.
Q_r, H_r_squared, H_r, I_r_squared, tau_r_squared, tau_r: Outlier-robust versions of these statistics, computed based on absolute deviations from the weighted mean effect size (see Lin et al., 2017). These values are not accurate when artifact-distribution methods are used for corrections.

Q_m, H_m_squared, H_m, I_m_squared, tau_m_squared, tau_m: Outlier-robust versions of these statistics, computed based on absolute deviations from the weighted median effect size (see Lin et al., 2017). These values are not accurate when artifact-distribution methods are used for corrections.
file_drawer: Fail-safe N and k statistics for file-drawer analyses (see Becker, 2005).
Results are reported using computation methods described by Schmidt and Hunter. For barebones and individual-correction meta-analyses, results are also reported using computation methods described by DerSimonian and Laird, outlier-robust computation methods, and, if weights from metafor are used, heterogeneity results from metafor.
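The relations among several of the statistics above can be illustrated with toy data. This is a sketch using the standard Higgins and Thompson (2002) and DerSimonian–Laird formulas on Fisher-z-transformed correlations, not the psychmeta internals; the correlations and sample sizes are invented for illustration.

```r
# Illustrative sketch (standard formulas, not psychmeta internals): relations
# among Q, H^2, H, I^2, and tau^2 on toy data.
r <- c(0.20, 0.35, 0.10, 0.28, 0.15)   # toy observed correlations
n <- c(100, 150, 80, 120, 200)         # toy sample sizes
z <- atanh(r)                          # Fisher z values; Var(z) is about 1/(n - 3)
w <- n - 3                             # inverse-variance weights
k <- length(z)

z_bar <- sum(w * z) / sum(w)           # weighted mean effect size
Q <- sum(w * (z - z_bar)^2)            # Cochran's Q
H_squared <- Q / (k - 1)               # observed / predicted variance ratio
H <- sqrt(H_squared)
I_squared <- 100 * max(0, (Q - (k - 1)) / Q)        # percent variance beyond sampling error
tau_squared <- max(0, (Q - (k - 1)) /
                     (sum(w) - sum(w^2) / sum(w)))  # DerSimonian-Laird estimator
tau <- sqrt(tau_squared)
```

Note how the statistics lock together: H is the square root of H_squared, and I_squared is simply (H_squared − 1)/H_squared expressed as a percentage (when Q exceeds its degrees of freedom).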
References
Becker, B. J. (2005). Failsafe N or file-drawer number. In H. R. Rothstein, A. J. Sutton, & M. Borenstein (Eds.), Publication bias in meta-analysis: Prevention, assessment and adjustments (pp. 111–125). Wiley. doi:10.1002/0470870168.ch7
Higgins, J. P. T., & Thompson, S. G. (2002). Quantifying heterogeneity in a meta-analysis. Statistics in Medicine, 21(11), 1539–1558. doi:10.1002/sim.1186
Lin, L., Chu, H., & Hodges, J. S. (2017). Alternative measures of between-study heterogeneity in meta-analysis: Reducing the impact of outlying studies. Biometrics, 73(1), 156–166. doi:10.1111/biom.12543
Schmidt, F. (2010). Detecting and correcting the lies that data tell. Perspectives on Psychological Science, 5(3), 233–242. doi:10.1177/1745691610369339
Schmidt, F. L., & Hunter, J. E. (2015). Methods of meta-analysis: Correcting error and bias in research findings (3rd ed., pp. 15, 414, 426, 533–534). Sage. doi:10.4135/9781483398105
Wiernik, B. M., Kostal, J. W., Wilmot, M. P., Dilchert, S., & Ones, D. S. (2017). Empirical benchmarks for interpreting effect size variability in meta-analysis. Industrial and Organizational Psychology, 10(3). doi:10.1017/iop.2017.44
Examples
## Correlations
ma_obj <- ma_r_ic(rxyi = rxyi, n = n, rxx = rxxi, ryy = ryyi, ux = ux,
                  correct_rr_y = FALSE, data = data_r_uvirr)
ma_obj <- ma_r_ad(ma_obj, correct_rr_y = FALSE)
ma_obj <- heterogeneity(ma_obj = ma_obj)
ma_obj$heterogeneity[[1]]$barebones
ma_obj$heterogeneity[[1]]$individual_correction$true_score
ma_obj$heterogeneity[[1]]$artifact_distribution$true_score
## d values
ma_obj <- ma_d_ic(d = d, n1 = n1, n2 = n2, ryy = ryyi,
                  data = data_d_meas_multi)
ma_obj <- ma_d_ad(ma_obj)
ma_obj <- heterogeneity(ma_obj = ma_obj)
ma_obj$heterogeneity[[1]]$barebones
ma_obj$heterogeneity[[1]]$individual_correction$latentGroup_latentY
ma_obj$heterogeneity[[1]]$artifact_distribution$latentGroup_latentY