tippett {poolr}                                                R Documentation

Tippett's Method

Description

Function to carry out Tippett's method.

Usage

tippett(p, adjust = "none", R, m,
        size = 10000, threshold, side = 2, batchsize, nearpd = TRUE, ...)

Arguments

p

vector of length \(k\) with the (one- or two-sided) p-values to be combined.

adjust

character string to specify an adjustment method to account for dependence. The default is "none", in which case no adjustment is applied. Methods "nyholt", "liji", "gao", or "galwey" are adjustments based on an estimate of the effective number of tests (see meff). Adjustment method "empirical" uses an empirically-derived null distribution based on pseudo replicates. See ‘Details’.

R

a \(k \times k\) symmetric matrix that reflects the dependence structure among the tests. Must be specified if adjust is set to something other than "none". See ‘Details’.

m

optional scalar (between 1 and \(k\)) to manually specify the effective number of tests (instead of estimating it via one of the methods described above).

size

size of the empirically-derived null distribution. Can also be a numeric vector of sizes, in which case a stepwise algorithm is used. This argument (and the following ones) is only relevant when adjust = "empirical".

threshold

numeric vector to specify the significance thresholds for the stepwise algorithm (only relevant when size is a vector).

side

scalar to specify the sidedness of the \(p\)-values that are used to simulate the null distribution (2, by default, for two-sided tests; 1 for one-sided tests).

batchsize

optional scalar to specify the batch size for generating the null distribution. When unspecified (the default), this is done in a single batch.

nearpd

logical indicating whether an R matrix that is not positive definite should be turned into the nearest positive definite matrix (only relevant when adjust = "empirical").

...

other arguments.

Details

Tippett's Method

By default (i.e., when adjust = "none"), the function applies Tippett's method to the \(p\)-values (Tippett, 1931). Letting \(p_1, p_2, \ldots, p_k\) denote the individual (one- or two-sided) \(p\)-values of the \(k\) hypothesis tests to be combined, the combined \(p\)-value is then computed with \[p_c = 1 - (1 - \min(p_1, p_2, \ldots, p_k))^k.\]
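
For illustration, this formula can be evaluated directly (a minimal sketch with made-up \(p\)-values; in practice, simply use tippett()):

p <- c(0.02, 0.30, 0.45, 0.12)  # hypothetical p-values
k <- length(p)
1 - (1 - min(p))^k              # should match tippett(p)$p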

Tippett's method assumes that the \(p\)-values to be combined are independent. If this is not the case, the method can either be conservative (not reject often enough) or liberal (reject too often), depending on the dependence structure among the tests. In this case, one can adjust the method to account for such dependence (to bring the Type I error rate closer to some desired nominal significance level).

Adjustment Based on the Effective Number of Tests

When adjust is set to "nyholt", "liji", "gao" or "galwey", Tippett's method is adjusted based on an estimate of the effective number of tests (see meff for details on these methods for estimating the effective number of tests). In this case, argument R needs to be set to a matrix that reflects the dependence structure among the tests.

There is no general solution for constructing such a matrix, as this depends on the type of test that generated the \(p\)-values and the sidedness of these tests. If the \(p\)-values are obtained from tests whose test statistics can be assumed to follow a multivariate normal distribution and a matrix is available that reflects the correlations among the test statistics, then the mvnconv function can be used to convert this correlation matrix into the correlations among the (one- or two-sided) \(p\)-values, which can then be passed to the R argument. See ‘Examples’.

Once the effective number of tests, \(m\), is estimated based on R using one of the four methods described above, the combined \(p\)-value is then computed with \[p_c = 1 - (1 - \min(p_1, p_2, \ldots, p_k))^m.\]

Alternatively, one can also directly specify the effective number of tests via the m argument (e.g., if some other method not implemented in the poolr package is used to estimate the effective number of tests). Argument R is then irrelevant and doesn't need to be specified.
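
As a sketch of this adjustment (with a made-up effective number of tests; any of the estimation methods described above could supply m instead):

p <- c(0.02, 0.30, 0.45, 0.12)  # hypothetical p-values
m <- 3                          # hypothetical effective number of tests
1 - (1 - min(p))^m              # should match tippett(p, m = 3)$p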

Adjustment Based on an Empirically-Derived Null Distribution

When adjust = "empirical", the combined \(p\)-value is computed based on an empirically-derived null distribution using pseudo replicates (using the empirical function). This is appropriate if the test statistics that generated the \(p\)-values to be combined can be assumed to follow a multivariate normal distribution and a matrix is available that reflects the correlations among the test statistics (which is specified via the R argument). In this case, test statistics are repeatedly simulated from a multivariate normal distribution under the joint null hypothesis, converted into one- or two-sided \(p\)-values (depending on the side argument), and Tippett's method is applied. Repeating this process size times yields a null distribution based on which the combined \(p\)-value can be computed, or more precisely, estimated, since repeated applications of this method will yield (slightly) different results. To obtain a stable estimate of the combined \(p\)-value, size should be set to a large value (the default is 10000, but this can be increased for a more precise estimate). If we consider the combined \(p\)-value an estimate of the ‘true’ combined \(p\)-value that would be obtained for a null distribution of infinite size, we can also construct a 95% (pseudo) confidence interval based on a binomial distribution.
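
The following is a rough sketch of this simulation approach (using rmvnorm() from the mvtnorm package and made-up inputs; the exact computations inside tippett() may differ in their details):

library(mvtnorm)
set.seed(1234)
p <- c(0.02, 0.30, 0.45, 0.12)                 # hypothetical p-values
Rmat <- diag(length(p))                        # hypothetical correlation matrix (independence here)
size <- 10000
z <- rmvnorm(size, sigma = Rmat)               # test statistics simulated under the joint null
psim <- 2 * pnorm(abs(z), lower.tail = FALSE)  # convert to two-sided p-values (side = 2)
stat <- apply(psim, 1, min)                    # Tippett's statistic (minimum p-value) per replicate
mean(stat <= min(p))                           # estimated combined p-value
binom.test(sum(stat <= min(p)), size)$conf.int # rough 95% (pseudo) confidence interval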

If batchsize is unspecified, the null distribution is simulated in a single batch, which requires temporarily storing a matrix with dimensions [size,k]. When size*k is large, allocating the memory for this matrix might not be possible. Instead, one can specify a batchsize value, in which case a matrix with dimensions [batchsize,k] is repeatedly simulated until the desired size of the null distribution has been obtained.
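
As a rough back-of-the-envelope calculation (with made-up numbers), the memory needed to hold a single batch of double-precision values is:

size <- 10^7                    # hypothetical size of the null distribution
k <- 1000                       # hypothetical number of p-values
size * k * 8 / 2^30             # approximate memory (in GiB) for a single batch: about 75 GiB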

One can also specify a vector for the size argument, in which case one must also specify a corresponding vector for the threshold argument. In that case, a stepwise algorithm is used that proceeds as follows. For j = 1, ..., length(size),

  1. estimate the combined \(p\)-value based on size[j]

  2. if the combined \(p\)-value is \(\ge\) threshold[j], stop (and report the combined \(p\)-value); otherwise, go back to 1 with the next value of j.

By setting size to increasing values (e.g., size = c(1000, 10000, 100000)) and threshold to decreasing values (e.g., threshold = c(.10, .01, 0)), one can quickly obtain a fairly accurate estimate of the combined \(p\)-value if it is far from significant (e.g., \(\ge\) .10), but home in on a more accurate estimate for a combined \(p\)-value that is closer to 0. Note that the last value of threshold should be 0 (and is forced to be 0 inside the function), so that the algorithm is guaranteed to terminate (hence, one can also leave out the last value of threshold, so threshold = c(.10, .01) would also work in the example above). One can also specify a single threshold (which is replicated as often as necessary depending on the length of size).
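
The logic of the stepwise algorithm can be sketched as follows (pseudo-code in R; est_pval() is a hypothetical placeholder for one round of the empirical estimation described above):

size      <- c(1000, 10000, 100000)  # increasing sizes of the null distribution
threshold <- c(.10, .01, 0)          # decreasing thresholds (last value forced to 0)
for (j in seq_along(size)) {
   pval <- est_pval(size[j])         # hypothetical helper: estimate based on size[j] pseudo replicates
   if (pval >= threshold[j])
      break                          # estimate is precise enough; report pval
}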

Value

An object of class "poolr". The object is a list containing the following components:

p

combined \(p\)-value.

ci

confidence interval for the combined \(p\)-value (only when adjust = "empirical"; otherwise NULL).

k

number of \(p\)-values that were combined.

m

estimate of the effective number of tests (only when adjust is one of "nyholt", "liji", "gao" or "galwey"; otherwise NULL).

adjust

chosen adjustment method.

statistic

value of the (adjusted) test statistic.

fun

name of calling function.

Note

The method underlying adjust = "empirical" assumes that the test statistics that generated the \(p\)-values to be combined follow a multivariate normal distribution. Hence, the matrix specified via R must be positive definite. If it is not and nearpd = TRUE, it will be turned into one (based on Higham, 2002, and a slightly simplified version of nearPD from the Matrix package).
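
As a rough illustration of the idea (here using nearPD() from the Matrix package directly, whereas poolr uses its own slightly simplified version):

library(Matrix)
Rmat <- matrix(c(  1, .9, -.9,
                  .9,  1,  .9,
                 -.9, .9,   1), nrow = 3)            # hypothetical correlation matrix, not positive definite
if (any(eigen(Rmat, only.values = TRUE)$values <= 0))
   Rmat <- as.matrix(nearPD(Rmat, corr = TRUE)$mat)  # replace by the nearest positive definite matrix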

Author(s)

Ozan Cinar ozancinar86@gmail.com
Wolfgang Viechtbauer wvb@wvbauer.com

References

Cinar, O. & Viechtbauer, W. (2022). The poolr package for combining independent and dependent p values. Journal of Statistical Software, 101(1), 1–42. https://doi.org/10.18637/jss.v101.i01

Higham, N. J. (2002). Computing the nearest correlation matrix: A problem from finance. IMA Journal of Numerical Analysis, 22(3), 329–343.

Tippett, L. H. C. (1931). Methods of Statistics. London: Williams & Norgate.

Examples

# copy p-values and LD correlation matrix into p and r
# (see help(grid2ip) for details on these data)
p <- grid2ip.p
r <- grid2ip.ld

# apply Tippett's method
tippett(p)

# use mvnconv() to convert the LD correlation matrix into a matrix with the
# correlations among the (two-sided) p-values assuming that the test
# statistics follow a multivariate normal distribution with correlation
# matrix r (note: 'side = 2' by default in mvnconv())
mvnconv(r, target = "p", cov2cor = TRUE)[1:5,1:5] # show only rows/columns 1-5

# adjustment based on estimates of the effective number of tests
tippett(p, adjust = "nyholt", R = mvnconv(r, target = "p", cov2cor = TRUE))
tippett(p, adjust = "liji",   R = mvnconv(r, target = "p", cov2cor = TRUE))
tippett(p, adjust = "gao",    R = mvnconv(r, target = "p", cov2cor = TRUE))
tippett(p, adjust = "galwey", R = mvnconv(r, target = "p", cov2cor = TRUE))

# setting argument 'm' manually
tippett(p, m = 12)

# adjustment based on an empirically-derived null distribution (setting the
# seed for reproducibility)
set.seed(1234)
tippett(p, adjust = "empirical", R = r)

# generate the empirical distribution in batches of size 100
tippett(p, adjust = "empirical", R = r, batchsize = 100)

# using the stepwise algorithm
tippett(p, adjust = "empirical", R = r, size = c(1000, 10000, 100000), threshold = c(.10, .01))
