pubbias_meta {PublicationBias}    R Documentation
Estimate publication bias-corrected meta-analysis
Description
For a chosen ratio of publication probabilities, selection_ratio, estimates a publication bias-corrected pooled point estimate and confidence interval per Mathur and VanderWeele (2020). Model options include fixed-effects (a.k.a. "common-effect"), robust independent, and robust clustered specifications.
Usage
pubbias_meta(
yi,
vi,
sei,
cluster = 1:length(yi),
selection_ratio,
selection_tails = 1,
model_type = "robust",
favor_positive = TRUE,
alpha_select = 0.05,
ci_level = 0.95,
small = TRUE,
return_worst_meta = FALSE
)
corrected_meta(
yi,
vi,
eta,
clustervar = 1:length(yi),
model,
selection.tails = 1,
favor.positive,
alpha.select = 0.05,
CI.level = 0.95,
small = TRUE
)
Arguments
yi: A vector of point estimates to be meta-analyzed.

vi: A vector of estimated variances (i.e., squared standard errors) for the point estimates.

sei: A vector of estimated standard errors for the point estimates. (Only one of vi or sei needs to be specified.)

cluster: Vector of the same length as the number of rows in the data, indicating which cluster each study should be considered part of (defaults to treating studies as independent; i.e., each study is in its own cluster).

selection_ratio: Ratio by which publication bias favors affirmative studies (i.e., studies with p-values less than alpha_select and estimates in the direction indicated by favor_positive).

selection_tails: 1 (for one-tailed selection, recommended for its conservatism) or 2 (for two-tailed selection).

model_type: "fixed" for fixed-effects (a.k.a. "common-effect") or "robust" for robust random-effects.

favor_positive: TRUE if publication bias is assumed to favor significant positive estimates; FALSE if assumed to favor significant negative estimates.

alpha_select: Alpha level at which an estimate's probability of being favored by publication bias is assumed to change (i.e., the threshold at which study investigators, journal editors, etc., consider an estimate to be significant).

ci_level: Confidence interval level (as a proportion) for the corrected point estimate. (The alpha level for inference on the corrected point estimate will be calculated from ci_level.)

small: Should inference allow for a small meta-analysis? We recommend always using TRUE.

return_worst_meta: Should the worst-case meta-analysis of only the nonaffirmative studies be returned?
eta: (deprecated) see selection_ratio

clustervar: (deprecated) see cluster

model: (deprecated) see model_type

selection.tails: (deprecated) see selection_tails

favor.positive: (deprecated) see favor_positive

alpha.select: (deprecated) see alpha_select

CI.level: (deprecated) see ci_level
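For reference, a minimal sketch of the deprecated corrected_meta() interface alongside its pubbias_meta() equivalent, using illustrative toy vectors (not real data):

# toy point estimates and variances (illustrative only)
yi_toy <- c(0.30, 0.10, -0.20, 0.40, 0.05)
vi_toy <- c(0.04, 0.09, 0.05, 0.02, 0.08)

# current interface
meta_new <- pubbias_meta(yi = yi_toy, vi = vi_toy,
                         selection_ratio = 4,
                         model_type = "fixed",
                         favor_positive = TRUE)
summary(meta_new)

# deprecated interface: the old argument names map onto the new ones as
# listed above (eta -> selection_ratio, model -> model_type, ...);
# should carry the same corrected results as meta_new
meta_old <- corrected_meta(yi = yi_toy, vi = vi_toy,
                           eta = 4,
                           model = "fixed",
                           favor.positive = TRUE)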
Details
The selection_ratio represents the number of times more likely affirmative studies (i.e., those with a "statistically significant" and positive estimate) are to be published than nonaffirmative studies (i.e., those with a "nonsignificant" or negative estimate).

If favor_positive is FALSE, such that publication bias is assumed to favor negative rather than positive estimates, the signs of yi will be reversed prior to performing analyses. The corrected estimate will be reported based on the recoded signs rather than the original sign convention.
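A minimal sketch of this sign convention using illustrative toy vectors: setting favor_positive = FALSE should be equivalent to reversing the signs of yi yourself and assuming bias favors positive estimates.

# toy estimates, mostly negative (illustrative only)
yi_toy <- c(-0.50, -0.30, 0.10, -0.40)
vi_toy <- c(0.03, 0.04, 0.05, 0.02)

m_neg <- pubbias_meta(yi = yi_toy, vi = vi_toy,
                      selection_ratio = 3,
                      model_type = "fixed",
                      favor_positive = FALSE)

# manually reverse signs and assume bias favors positive estimates
m_flip <- pubbias_meta(yi = -yi_toy, vi = vi_toy,
                       selection_ratio = 3,
                       model_type = "fixed",
                       favor_positive = TRUE)

m_neg$stats$estimate   # reported on the recoded (sign-reversed) scale
m_flip$stats$estimate  # should match the estimate above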
Value
An object of class metabias::metabias(), a list containing:
- data: A tibble with one row per study and the columns yi, yif, vi, affirm, cluster.
- values: A list with the elements selection_ratio, selection_tails, model_type, favor_positive, alpha_select, ci_level, small, k, k_affirmative, k_nonaffirmative.
- stats: A tibble with the columns model, estimate, se, ci_lower, ci_upper, p_value.
- fit: A list of fitted models, if any.
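A minimal sketch of accessing these components (using the dat object constructed in the Examples below):

meta <- pubbias_meta(yi = dat$yi, vi = dat$vi,
                     selection_ratio = 5,
                     model_type = "fixed",
                     favor_positive = FALSE)

meta$data    # per-study tibble: yi, yif, vi, affirm, cluster
meta$values  # analysis settings plus k, k_affirmative, k_nonaffirmative
meta$stats   # corrected estimate, se, ci_lower, ci_upper, p_value
meta$fit     # fitted model objects, if any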
References
Mathur MB, VanderWeele TJ (2020). “Sensitivity analysis for publication bias in meta-analyses.” Journal of the Royal Statistical Society: Series C (Applied Statistics), 69(5), 1091–1119.
Examples
# calculate effect sizes from example dataset in metafor
require(metafor)
dat <- metafor::escalc(measure = "RR", ai = tpos, bi = tneg, ci = cpos,
                       di = cneg, data = dat.bcg)
# first fit fixed-effects model without any bias correction
# since the point estimate is negative here, we'll assume publication bias
# favors negative log-RRs rather than positive ones
metafor::rma(yi, vi, data = dat, method = "FE")
# warmup
# note that passing selection_ratio = 1 (no publication bias) yields the naive
# point estimate from rma above, which makes sense
meta <- pubbias_meta(yi = dat$yi,
                     vi = dat$vi,
                     selection_ratio = 1,
                     model_type = "fixed",
                     favor_positive = FALSE)
summary(meta)
# assume a known selection ratio of 5
# i.e., affirmative results are 5x more likely to be published than
# nonaffirmative ones
meta <- pubbias_meta(yi = dat$yi,
                     vi = dat$vi,
                     selection_ratio = 5,
                     model_type = "fixed",
                     favor_positive = FALSE)
summary(meta)
# same selection ratio, but now account for heterogeneity and clustering via
# robust specification
meta <- pubbias_meta(yi = dat$yi,
                     vi = dat$vi,
                     cluster = dat$author,
                     selection_ratio = 5,
                     model_type = "robust",
                     favor_positive = FALSE)
summary(meta)
##### Make sensitivity plot as in Mathur & VanderWeele (2020) #####
# range of parameters to try (more dense at the very small ones)
selection_ratios <- c(200, 150, 100, 50, 40, 30, 20, seq(15, 1))
# compute estimate for each value of selection_ratio
estimates <- lapply(selection_ratios, function(e) {
  pubbias_meta(yi = dat$yi, vi = dat$vi, cluster = dat$author,
               selection_ratio = e, model_type = "robust",
               favor_positive = FALSE)$stats
})
estimates <- dplyr::bind_rows(estimates)
estimates$selection_ratio <- selection_ratios
require(ggplot2)
ggplot(estimates, aes(x = selection_ratio, y = estimate)) +
  geom_ribbon(aes(ymin = ci_lower, ymax = ci_upper), fill = "gray") +
  geom_line(lwd = 1.2) +
  labs(x = bquote(eta), y = bquote(hat(mu)[eta])) +
  theme_classic()
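To also obtain the worst-case meta-analysis of only the nonaffirmative studies, set return_worst_meta = TRUE. A minimal sketch (how the worst-case results are labeled in the returned stats is an assumption to check against the model column):

meta_worst <- pubbias_meta(yi = dat$yi,
                           vi = dat$vi,
                           cluster = dat$author,
                           selection_ratio = 5,
                           model_type = "robust",
                           favor_positive = FALSE,
                           return_worst_meta = TRUE)
# inspect the model column to distinguish the corrected estimate from the
# worst-case (nonaffirmative-only) estimate; exact labels not verified here
meta_worst$stats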