p_ndfa {pooling}	R Documentation

Normal Discriminant Function Approach for Estimating Odds Ratio with Exposure Measured in Pools and Potentially Subject to Additive Normal Errors

Description

Assumes that exposure, conditional on covariates and outcome, follows a normal-errors linear regression model. Pooled exposure measurements can be treated as precise or as subject to additive normal processing error and/or measurement error. Parameters are estimated by maximum likelihood.

Usage

p_ndfa(g, y, xtilde, c = NULL, constant_or = TRUE,
  errors = "processing", start_nonvar_var = c(0.01, 1),
  lower_nonvar_var = c(-Inf, 1e-04), upper_nonvar_var = c(Inf, Inf),
  jitter_start = 0.01, nlminb_list = list(control = list(trace = 1,
  eval.max = 500, iter.max = 500)), hessian_list = list(method.args =
  list(r = 4)), nlminb_object = NULL)

Arguments

g

Numeric vector of pool sizes, i.e. number of members in each pool.

y

Numeric vector of poolwise Y values (number of cases in each pool).

xtilde

Numeric vector (or list of numeric vectors, if some pools have replicates) with Xtilde values.

c

Numeric matrix with poolwise C values (if any), with one row for each pool. Can be a vector if there is only 1 covariate.

constant_or

Logical value for whether to assume a constant odds ratio for X, which means that sigsq_1 = sigsq_0. If NULL, model is fit with and without this assumption, and a likelihood ratio test is performed to test it.

errors

Character string specifying the errors that X is subject to. Choices are "neither", "processing" for processing error only, "measurement" for measurement error only, and "both".

start_nonvar_var

Numeric vector of length 2 specifying starting values for the non-variance terms and the variance terms, respectively.

lower_nonvar_var

Numeric vector of length 2 specifying lower bounds for the non-variance terms and the variance terms, respectively.

upper_nonvar_var

Numeric vector of length 2 specifying upper bounds for the non-variance terms and the variance terms, respectively.

jitter_start

Numeric value specifying standard deviation for mean-0 normal jitters to add to starting values for a second try at maximizing the log-likelihood, should the initial call to nlminb result in non-convergence. Set to NULL for no second try.
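The retry-with-jitter logic can be sketched in base R as follows. Here negloglik and start are hypothetical stand-ins for the internal objective function and starting vector, not actual package internals:

```r
# Sketch of the jitter-and-retry idea (hypothetical internals).
# nlminb reports convergence = 0 on success.
negloglik <- function(theta) sum((theta - 1)^2)  # toy stand-in objective
start <- c(0.01, 0.01, 1)

fit <- nlminb(start = start, objective = negloglik)
if (fit$convergence != 0) {
  # Add mean-0 normal jitters with SD = jitter_start and try once more
  jitter_start <- 0.01
  start2 <- start + rnorm(length(start), mean = 0, sd = jitter_start)
  fit <- nlminb(start = start2, objective = negloglik)
}
```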

nlminb_list

List of arguments to pass to nlminb for log-likelihood maximization.
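For example, the defaults shown in Usage can be overridden to silence the iteration trace and raise the iteration limits (a sketch; dat is the example data frame loaded in the Examples section):

```r
fit <- p_ndfa(
  g = dat$g, y = dat$numcases, xtilde = dat$xtilde, c = dat$c,
  errors = "both",
  nlminb_list = list(control = list(trace = 0, eval.max = 1000,
                                    iter.max = 1000))
)
```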

hessian_list

List of arguments to pass to hessian for approximating the Hessian matrix. Only used if estimate_var = TRUE.

nlminb_object

Object returned from nlminb in a prior call. Useful for bypassing log-likelihood maximization if you just want to re-estimate the Hessian matrix with different options.

Value

List containing:

  1. Numeric vector of parameter estimates.

  2. Variance-covariance matrix.

  3. Returned nlminb object from maximizing the log-likelihood function.

  4. Akaike information criterion (AIC).

If constant_or = NULL, two such lists are returned (one under a constant odds ratio assumption and one not), along with a likelihood ratio test for H0: sigsq_1 = sigsq_0, which is equivalent to H0: odds ratio is constant.
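As the Examples below illustrate, components of the returned list can be accessed by name (a sketch assuming a fitted object fit):

```r
fit$estimates  # numeric vector of parameter estimates
fit$theta.var  # variance-covariance matrix
fit$lrt        # likelihood ratio test (only when constant_or = NULL)
```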

References

Lyles, R.H., Van Domelen, D.R., Mitchell, E.M. and Schisterman, E.F. (2015) "A discriminant function approach to adjust for processing and measurement error when a biomarker is assayed in pooled samples." Int. J. Environ. Res. Public Health 12(11): 14723–14740.

Schisterman, E.F., Vexler, A., Mumford, S.L. and Perkins, N.J. (2010) "Hybrid pooled-unpooled design for cost-efficient measurement of biomarkers." Stat. Med. 29(5): 597–613.

Examples

# Load data frame with (g, Y, X, Xtilde, C) values for 4,996 pools and list
# of Xtilde values where 25 subjects have replicates. Xtilde values are
# affected by processing error and measurement error. True log-OR = 0.5,
# sigsq = 1, sigsq_p = 0.5, sigsq_m = 0.1.
data(dat_p_ndfa)
dat <- dat_p_ndfa$dat
reps <- dat_p_ndfa$reps

# Unobservable truth estimator - use precise X's
fit.unobservable <- p_ndfa(
  g = dat$g,
  y = dat$numcases,
  xtilde = dat$x,
  c = dat$c,
  errors = "neither"
)
fit.unobservable$estimates

# Naive estimator - use imprecise Xtilde's, but treat as precise
fit.naive <- p_ndfa(
  g = dat$g,
  y = dat$numcases,
  xtilde = dat$xtilde,
  c = dat$c,
  errors = "neither"
)
fit.naive$estimates

# Corrected estimator - use Xtilde's and account for errors (not using
# replicates here)
## Not run: 
fit.noreps <- p_ndfa(
  g = dat$g,
  y = dat$numcases,
  xtilde = dat$xtilde,
  c = dat$c,
  errors = "both"
)
fit.noreps$estimates

# Corrected estimator - use Xtilde's including 25 replicates
fit.reps <- p_ndfa(
  g = dat$g,
  y = dat$numcases,
  xtilde = reps,
  c = dat$c,
  errors = "both"
)
fit.reps$estimates

# Same as previous, but allowing for non-constant odds ratio.
fit.nonconstant <- p_ndfa(
  g = dat$g,
  y = dat$numcases,
  xtilde = reps,
  c = dat$c,
  constant_or = FALSE,
  errors = "both"
)
fit.nonconstant$estimates

# Visualize estimated log-OR vs. X based on previous model fit
p <- plot_ndfa(
  estimates = fit.nonconstant$estimates,
  varcov = fit.nonconstant$theta.var,
  xrange = range(dat$xtilde[dat$g == 1]),
  cvals = mean(dat$c / dat$g)
)
p

# Likelihood ratio test for H0: odds ratio is constant.
test.constantOR <- p_ndfa(
  g = dat$g,
  y = dat$numcases,
  xtilde = reps,
  c = dat$c,
  constant_or = NULL,
  errors = "both"
)
test.constantOR$lrt

## End(Not run)



[Package pooling version 1.1.2 Index]