n.nb.gf {spass} | R Documentation |
Sample Size Calculation for Comparing Two Groups when observing Longitudinal Count Data with marginal Negative Binomial Distribution and underlying Gamma Frailty with Autoregressive Correlation Structure of Order One
Description
n.nb.gf
calculates required sample sizes for testing trend parameters in a Gamma frailty model
Usage
n.nb.gf(
alpha = 0.025,
power = 0.8,
lambda,
size,
rho,
tp,
k = 1,
h,
hgrad,
h0,
trend = c("constant", "exponential", "custom"),
approx = 20
)
Arguments
alpha |
level (type I error) to which the hypothesis is tested. |
power |
power (1 - type II error) to which an alternative should be proven. |
lambda |
the set of trend parameters assumed to be true prior to trial onset |
size |
dispersion parameter (the shape parameter of the gamma mixing distribution). Must be strictly positive and need not be an integer (see rnbinom.gf). |
rho |
correlation coefficient of the autoregressive correlation structure of the underlying Gamma frailty. Must be between 0 and 1 (see rnbinom.gf). |
tp |
number of observed time points (see rnbinom.gf). |
k |
sample size allocation factor between groups: see 'Details'. |
h |
hypothesis to be tested. The function must return a single value when evaluated on lambda. |
hgrad |
gradient of the function h |
h0 |
the value against which h is tested, see 'Details'. |
trend |
the trend assumed to underlie the data. |
approx |
number of iterations in the numerical calculation of the sandwich estimator, see 'Details'. |
Details
The function calculates the required sample sizes for testing trend parameters in longitudinal negative binomial data. The underlying
one-sided null-hypothesis is defined by H_0: h(\eta, \lambda) \geq h_0
vs. the alternative H_A: h(\eta, \lambda) < h_0
. For testing
these hypotheses, the function therefore requires a function h
and a value h0
.
n.nb.gf
returns the required sample sizes for the control and treatment group needed to prove an existing alternative h(\eta, \lambda) < h_0
with a power of power
when testing at level alpha
. For sample sizes n_C
and n_T
of the control and treatment group, respectively, the argument k
is the
sample size allocation factor, i.e. k = n_T/n_C
.
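The allocation factor acts as a simple multiplier between the two group sizes. A minimal sketch with hypothetical numbers (not output of n.nb.gf):

```r
# Hypothetical illustration of the allocation factor k = n_T/n_C:
# with k = 2 and a computed control-group size of 30, the treatment
# group needs twice as many subjects.
k <- 2
n_C <- 30
n_T <- k * n_C          # 60 subjects in the treatment group
n_total <- n_C + n_T    # (1 + k) * n_C = 90 subjects overall
```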
When calculating the expected sandwich estimator required for the sample size, certain terms cannot be computed analytically and have
to be approximated numerically. The value approx
defines how close the approximation is to the true expected sandwich estimator.
High values of approx
provide better approximations but are computationally more expensive.
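One way to gauge whether a chosen approx is large enough is to recompute the sample size with a finer approximation and check that the result has stabilized. A sketch of this check, commented out in the style of the Examples below since it requires the spass package and may run long (the specific parameter values are illustrative assumptions):

```r
##Sketch: sensitivity of the result to 'approx' (requires the spass package).
##The hypothesis function tests the effect parameter lambda[2].
#library(spass)
#h <- function(lambda.eta) lambda.eta[2]
#hgrad <- function(lambda.eta) c(0, 1, 0)
#n.coarse <- n.nb.gf(lambda=c(0, -0.3), size=1, rho=0.5, tp=6, k=1,
#                    h=h, hgrad=hgrad, h0=0.2, trend="constant", approx=10)
#n.fine   <- n.nb.gf(lambda=c(0, -0.3), size=1, rho=0.5, tp=6, k=1,
#                    h=h, hgrad=hgrad, h0=0.2, trend="constant", approx=50)
##If the two results agree, the coarser approximation already suffices here.
```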
Value
n.nb.gf
returns the required sample sizes for the control and treatment groups.
Source
n.nb.gf
uses code contributed by Thomas Asendorf.
See Also
rnbinom.gf
for information on the Gamma frailty model, fit.nb.gf
for calculating
initial parameters required when performing sample size estimation, bssr.nb.gf
for blinded
sample size reestimation within a running trial.
Examples
##The example is commented as it may take longer than 10 seconds to run.
##Please uncomment prior to execution.
##Example for constant rates
#h<-function(lambda.eta){
# lambda.eta[2]
#}
#hgrad<-function(lambda.eta){
# c(0, 1, 0)
#}
##We assume the rate in the control group to be exp(lambda[1]) = exp(0) and an
##effect of lambda[2] = -0.3. The dispersion parameter 'size' is assumed to be 1
##and the correlation coefficient 'rho' 0.5. At the end of the study, we would
##like to test the treatment effect specified in lambda[2], and therefore define
##the function 'h' and the value 'h0' accordingly.
#estimate<-n.nb.gf(lambda=c(0,-0.3), size=1, rho=0.5, tp=6, k=1, h=h, hgrad=hgrad,
# h0=0.2, trend="constant", approx=20)
#summary(estimate)
##Example for exponential trend
#h<-function(lambda.eta){
# lambda.eta[3]
#}
#hgrad<-function(lambda.eta){
# c(0, 0, 1, 0)
#}
#estimate<-n.nb.gf(lambda=c(0, 0, -0.3/6), size=1, rho=0.5, tp=7, k=1, h=h, hgrad=hgrad,
# h0=0, trend="exponential", approx=20)
#summary(estimate)