power.glm.random.a0 {BayesPPD}    R Documentation

Power/type I error calculation for generalized linear models with random a0

Description

Power/type I error calculation using normalized power priors for generalized linear models with random a_0

Usage

power.glm.random.a0(
  data.type,
  data.link,
  data.size,
  n = 1,
  treat.assign.prob = 0.5,
  borrow.treat = FALSE,
  historical,
  nullspace.ineq = ">",
  samp.prior.beta,
  samp.prior.var,
  prior.beta.var = rep(10, 50),
  prior.a0.shape1 = rep(1, 10),
  prior.a0.shape2 = rep(1, 10),
  a0.coefficients,
  lower.limits = NULL,
  upper.limits = NULL,
  slice.widths = rep(0.1, 50),
  delta = 0,
  gamma = 0.95,
  nMC = 10000,
  nBI = 250,
  N = 10000
)

Arguments

data.type

Character string specifying the type of response. The options are "Normal", "Bernoulli", "Binomial", "Poisson" and "Exponential".

data.link

Character string specifying the link function. The options are "Logistic", "Probit", "Log", "Identity-Positive", "Identity-Probability" and "Complementary Log-Log". Does not apply if data.type is "Normal".

data.size

Sample size of the simulated datasets.

n

(For binomial data only) vector of integers specifying the number of subjects who have a particular value of the covariate vector. If the data is binary and all covariates are discrete, collapsing Bernoulli data into a binomial structure can make the slice sampler much faster. The sum of n should be equal to data.size. The length of n should be equal to the number of rows of x0.
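A minimal base-R sketch of this collapsing step (the data frame dat and its columns are hypothetical):

# Collapse Bernoulli data with discrete covariates into binomial form.
dat <- data.frame(y  = rbinom(100, 1, 0.3),
                  x1 = rbinom(100, 1, 0.5),
                  x2 = sample(0:2, 100, replace = TRUE))
agg <- aggregate(y ~ x1 + x2, data = dat,
                 FUN = function(v) c(events = sum(v), n = length(v)))
y0 <- agg$y[, "events"]               # events per covariate pattern
n  <- agg$y[, "n"]                    # subjects per covariate pattern; sum(n) equals nrow(dat)
x0 <- as.matrix(agg[, c("x1", "x2")])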

treat.assign.prob

Probability of being assigned to the treatment group. The default value is 0.5. Only applies if borrow.treat=FALSE.

borrow.treat

Logical value indicating whether the historical information is used to inform the treatment effect parameter. The default value is FALSE. If TRUE, the first column of the historical covariate matrix must be the treatment indicator. If FALSE, the historical covariate matrix must NOT have the treatment indicator, since the historical data is assumed to be from the control group only.

historical

List of historical dataset(s). Each historical dataset is stored in a list that contains two named elements: y0 and x0.

  • y0 is a vector of responses.

  • x0 is a matrix of covariates. If borrow.treat is FALSE (the default), x0 should NOT have the treatment indicator. If borrow.treat is TRUE, the first column of x0 must be the treatment indicator.

For binomial data, an additional element n0 is required.

  • n0 is a vector of integers specifying the number of subjects who have a particular value of the covariate vector. The length of n0 should be equal to the number of rows of x0 (see the sketch below).
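A minimal sketch of this structure for two hypothetical binomial historical datasets (all values are illustrative only):

historical <- list(list(y0 = c(12, 8, 5),                        # events per covariate pattern
                        x0 = matrix(c(0, 1, 0,  0, 0, 1), ncol = 2),
                        n0 = c(40, 35, 25)),                     # subjects per covariate pattern
                   list(y0 = c(10, 9, 4),
                        x0 = matrix(c(1, 0, 0,  0, 1, 1), ncol = 2),
                        n0 = c(30, 30, 20)))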

nullspace.ineq

Character string specifying the inequality of the null hypothesis. The options are ">" and "<". If ">" is specified, the null hypothesis is H_0: \beta_1 \ge \delta. If "<" is specified, the null hypothesis is H_0: \beta_1 \le \delta. The default choice is ">".

samp.prior.beta

Matrix of possible values of \beta to sample (with replacement) from. Each row is a possible \beta vector (a realization from the sampling prior for \beta), where the first element is the coefficient for the intercept and the second element is the coefficient for the treatment indicator. The length of the vector should be equal to the total number of parameters. If P is the number of columns of x0 in historical, the total number of parameters is P+2 if borrow.treat=FALSE, and is P+1 if borrow.treat=TRUE.

samp.prior.var

Vector of possible values of \sigma^2 to sample (with replacement) from. Only applies if data.type is "Normal". The vector contains realizations from the sampling prior (e.g. inverse-gamma distribution) for \sigma^2.
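For example, draws from an inverse-gamma sampling prior can be generated by inverting gamma draws (the shape and rate values are illustrative):

samp.prior.var <- 1 / rgamma(1000, shape = 2, rate = 1)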

prior.beta.var

Vector of variances of the independent normal initial priors on \beta with mean zero. The length of the vector should be equal to the length of \beta. The default variance is 10.

prior.a0.shape1

Vector of the first shape parameters of the independent beta priors for a_0. The length of the vector should be equal to the number of historical datasets. The default is a vector of ones.

prior.a0.shape2

Vector of the second shape parameters of the independent beta priors for a_0. The length of the vector should be equal to the number of historical datasets. The default is a vector of ones.

a0.coefficients

Vector of coefficients for a_0 returned by the function normalizing.constant. This is necessary for estimating the normalizing constant for the normalized power prior. Does not apply if data.type is "Normal".

lower.limits

Vector of lower limits for parameters to be used by the slice sampler. If data.type is "Normal", slice sampling is used for a_0, and the length of the vector should be equal to the number of historical datasets. For all other data types, slice sampling is used for \beta and a_0. The first P+1 elements apply to the sampling of \beta and the rest apply to the sampling of a_0. The length of the vector should be equal to the sum of the total number of parameters (i.e. P+1 where P is the number of covariates) and the number of historical datasets. The default is -100 for \beta and 0 for a_0 (may not be appropriate for all situations).

upper.limits

Vector of upper limits for parameters to be used by the slice sampler. If data.type is "Normal", slice sampling is used for a_0, and the length of the vector should be equal to the number of historical datasets. For all other data types, slice sampling is used for \beta and a_0. The first P+1 elements apply to the sampling of \beta and the rest apply to the sampling of a_0. The length of the vector should be equal to the sum of the total number of parameters (i.e. P+1 where P is the number of covariates) and the number of historical datasets. The default is 100 for \beta and 1 for a_0 (may not be appropriate for all situations).

slice.widths

Vector of initial slice widths used by the slice sampler. If data.type is "Normal", slice sampling is used for a_0, and the length of the vector should be equal to the number of historical datasets. For all other data types, slice sampling is used for \beta and a_0. The first P+1 elements apply to the sampling of \beta and the rest apply to the sampling of a_0. The length of the vector should be equal to the sum of the total number of parameters (i.e. P+1 where P is the number of covariates) and the number of historical datasets. The default is 0.1 for all parameters (may not be appropriate for all situations).
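A brief sketch of how these three vectors line up for a non-normal model, assuming P = 4 covariates (so P + 1 regression coefficients including the intercept) and two historical datasets:

P <- 4; K <- 2
lower.limits <- c(rep(-100, P + 1), rep(0, K))   # beta first, then a_0
upper.limits <- c(rep( 100, P + 1), rep(1, K))
slice.widths <- rep(0.1, P + 1 + K)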

delta

Prespecified constant that defines the boundary of the null hypothesis. The default is zero.

gamma

Posterior probability threshold for rejecting the null. The null hypothesis is rejected if the posterior probability is greater than gamma. The default is 0.95.

nMC

Number of iterations (excluding burn-in samples) for the slice sampler or Gibbs sampler. The default is 10,000.

nBI

Number of burn-in samples for the slice sampler or Gibbs sampler. The default is 250.

N

Number of simulated datasets to generate. The default is 10,000.

Details

The user should use the function normalizing.constant to obtain a0.coefficients (does not apply if data.type is "Normal").
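A hedged sketch of that step, with argument names taken from the normalizing.constant help page (the grid of a_0 values is illustrative and should be much denser in practice):

# One column of the grid per historical dataset; each a_0 value lies in (0, 1).
grid <- matrix(runif(20), ncol = 2)
a0.coefficients <- normalizing.constant(grid = grid, historical = historical,
                                        data.type = "Bernoulli", data.link = "Logistic")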

The sampling prior for the treatment parameter can be generated from a normal distribution (see examples). For example, suppose one wants to compute the power for the hypotheses H_0: \beta_1 \ge 0 and H_1: \beta_1 < 0. To approximate the sampling prior for \beta_1, one can simply sample from a normal distribution with negative mean, so that the mass of the prior falls in the alternative space. Conversely, to compute the type I error rate, one can sample from a normal distribution with positive mean, so that the mass of the prior falls in the null space. The sampling prior for the other parameters can be generated by using the glm.fixed.a0 function with current.data set to FALSE. The posterior samples based on only historical data can be used as a discrete approximation to the sampling prior.
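A hedged sketch of that last step (for fixed a_0, each historical dataset is assumed to carry its own a0 element; the value 0.5 and the Bernoulli settings are illustrative, and the name of the element holding the posterior samples of \beta should be checked against the Value section of glm.fixed.a0):

hist.fixed <- lapply(historical, function(h) c(h, list(a0 = 0.5)))
prior.samples <- glm.fixed.a0(data.type = "Bernoulli", data.link = "Logistic",
                              historical = hist.fixed, nMC = 10000, nBI = 250,
                              current.data = FALSE)
# The posterior samples of beta stored in prior.samples, combined with draws for the
# treatment effect (e.g. rnorm, as in the Examples), form samp.prior.beta.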

samp.prior.var is necessary for generating normally distributed data.

If data.type is "Normal", the response y_i is assumed to follow N(x_i'\beta, \tau^{-1}) where x_i is the vector of covariates for subject i. Historical datasets are assumed to have the same precision parameter as the current dataset for computational simplicity. The initial prior for \tau is the Jeffery's prior, \tau^{-1}. Independent normal priors with mean zero and variance prior.beta.var are used for \beta to ensure the propriety of the normalized power prior. Posterior samples for \beta and \tau are obtained through Gibbs sampling. Independent beta(prior.a0.shape1, prior.a0.shape1) priors are used for a_0. Posterior samples for a_0 are obtained through slice sampling.

For all other data types, posterior samples are obtained through slice sampling. The default lower limits are -100 for \beta and 0 for a_0. The default upper limits for the parameters are 100 for \beta and 1 for a_0. The default slice widths for the parameters are 0.1. The defaults may not be appropriate for all situations, and the user can specify the appropriate limits and slice width for each parameter.

If a sampling prior with support in the null space is used, the value returned is a Bayesian type I error rate. If a sampling prior with support in the alternative space is used, the value returned is a Bayesian power.

Because running power.glm.fixed.a0() and power.glm.random.a0() is potentially time-consuming, an approximation method based on asymptotic theory has been implemented for the model with fixed a_0. To determine the sample size needed to attain the desired power, the user can start with the approximation to get a rough estimate of the required sample size by running power.glm.fixed.a0() with approximate=TRUE.
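A hedged sketch of that workflow (hist.fixed and samp.prior.beta are as sketched above; the candidate sample size is illustrative):

approx.result <- power.glm.fixed.a0(data.type = "Bernoulli", data.link = "Logistic",
                                    data.size = 200, historical = hist.fixed,
                                    samp.prior.beta = samp.prior.beta,
                                    approximate = TRUE)
# Inspect the approximate power, adjust data.size, and then rerun power.glm.random.a0
# near the sample size suggested by the approximation.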

Value

The function returns an S3 object with a summary method. Power or type I error is returned, depending on the sampling prior used. The posterior probabilities of the alternative hypothesis are returned. The average posterior mean of \beta and its corresponding bias are returned. The average posterior mean of a_0 is returned. If data.type is "Normal", the average posterior mean of \tau is also returned. The first element of the average posterior means of \beta is the average posterior mean of the intercept. The second element is the average posterior mean of \beta_1, the parameter for the treatment indicator.

References

Chen, Ming-Hui, et al. "Bayesian design of noninferiority trials for medical devices using historical data." Biometrics 67.3 (2011): 1163-1170.

Neal, Radford M. "Slice sampling." The Annals of Statistics 31.3 (2003): 705-767.

See Also

normalizing.constant and glm.random.a0

Examples


data.type <- "Bernoulli"
data.link <- "Logistic"
data.size <- 100

# Simulate two historical datasets
p <- 3
historical <- list(list(y0=rbinom(data.size,size=1,prob=0.2),
                        x0=matrix(rnorm(p*data.size),ncol=p,nrow=data.size)),
                   list(y0=rbinom(data.size, size=1, prob=0.5),
                        x0=matrix(rnorm(p*data.size),ncol=p,nrow=data.size)))

# Generate sampling priors

# The null hypothesis here is H0: beta_1 >= 0. To calculate power,
# we can provide samples of beta_1 such that the mass of beta_1 < 0.
# To calculate type I error, we can provide samples of beta_1 such that
# the mass of beta_1 >= 0.
samp.prior.beta1 <- rnorm(100, mean=-3, sd=1)
# Here, mass is put on the alternative region, so power is calculated.
samp.prior.beta <- cbind(rnorm(100), samp.prior.beta1, matrix(rnorm(100*p), 100, p))
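# To calculate type I error instead, one could put the mass of the sampling prior
# for beta_1 in the null region, e.g. (illustrative, not run):
# samp.prior.beta1 <- rnorm(100, mean=3, sd=1)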

# Please see function "normalizing.constant" for how to obtain a0.coefficients
# Here, suppose a degree-one polynomial regression is chosen by the "normalizing.constant"
# function. The three coefficients correspond to the intercept, a0_1 and a0_2.
a0.coefficients <- c(1, 0.5, -1)

nMC <- 100 # nMC should be larger in practice
nBI <- 50
N <- 3 # N should be larger in practice
result <- power.glm.random.a0(data.type=data.type, data.link=data.link,
                              data.size=data.size, historical=historical,
                              samp.prior.beta=samp.prior.beta, a0.coefficients=a0.coefficients,
                              delta=0, nMC=nMC, nBI=nBI, N=N)
summary(result)

