estimatePopsizeFit {singleRcapture}	R Documentation

Regression fitting in single source capture-recapture models

Description

estimatePopsizeFit does for estimatePopsize what glm.fit does for glm: it performs the actual regression fitting and is called internally by estimatePopsize. Since estimatePopsize does much more than regression fitting, calling estimatePopsizeFit directly is much faster when only the fit is needed.

Usage

estimatePopsizeFit(
  y,
  X,
  family,
  control,
  method,
  priorWeights,
  coefStart,
  etaStart,
  offset,
  ...
)

Arguments

y

vector of dependent variables.

X

the model matrix in its vglm (vector generalised linear model) form.

family

the same as the model argument in estimatePopsize.

control

control parameters created by controlMethod().

method

method of estimation, the same as in estimatePopsize.

priorWeights

vector of prior weights; the same argument as weights in estimatePopsize.

etaStart, coefStart

initial values of the linear predictors (etaStart) or regression parameters (coefStart).

offset

offset, by default passed from estimatePopsize().

...

arguments to pass to other methods.

Details

If the method argument is set to "optim", the stats::optim function is used to fit the regression, with the analytically computed gradient and (minus) log-likelihood supplied as the gr and fn arguments. Unfortunately optim does not allow a Hessian to be specified. More information on how to modify the optim fitting is given in controlMethod().
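The optim-based approach above can be illustrated with a toy example outside of singleRcapture: a scalar Poisson regression where the minus log-likelihood is passed as fn and its analytic gradient as gr. The data and names here are purely illustrative, not package internals.

```r
# Hedged sketch: fit a Poisson regression via optim with analytic gradient,
# mirroring the fn/gr setup described above (illustrative only).
set.seed(1)
x <- cbind(1, runif(100))                      # design matrix with intercept
y <- rpois(100, exp(x %*% c(0.5, 1)))          # simulated counts

# minus log-likelihood up to an additive constant
fn <- function(b) sum(exp(x %*% b)) - sum(y * (x %*% b))
# its analytic gradient
gr <- function(b) drop(t(x) %*% (exp(x %*% b) - y))

fit <- optim(c(0, 0), fn = fn, gr = gr, method = "BFGS")
fit$par  # close to the maximum likelihood estimates
```

The estimates agree with those from glm.fit with a poisson() family, which maximises the same likelihood.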

If the method argument is set to "IRLS", iteratively reweighted least squares is used. The algorithm is well known for generalised linear models; Thomas W. Yee later extended it to vector generalised linear models. In general terms it can roughly be described as follows (this is Yee's description after changing some conventions):

  1. Initialize with:

    • converged <- FALSE

    • iter <- 1

    • \(\boldsymbol{\beta}\) <- start

    • \(\boldsymbol{W}\) <- prior

    • \(\ell\) <- \(\ell(\boldsymbol{\beta})\)

  2. If converged or iter > Maxiter move to step 7.

  3. Store values from previous algorithm step:

    • \(\boldsymbol{W}_{-}\) <- \(\boldsymbol{W}\)

    • \(\ell_{-}\) <- \(\ell\)

    • \(\boldsymbol{\beta}_{-}\) <- \(\boldsymbol{\beta}\)

    and assign values at current step:

    • \(\boldsymbol{\eta}\) <- \(\boldsymbol{X}_{vlm}\boldsymbol{\beta}\)

    • \(Z_{i}\) <- \( \eta_{i}+\frac{\partial\ell_{i}}{\partial\eta_{i}} \mathbb{E}\left(\frac{\partial^{2}\ell_{i}}{ \partial\eta_{i}^{T}\partial\eta_{i}}\right)^{-1}\)

    • \(\boldsymbol{W}_{ij}\) <- \(\mathbb{E}\left(\frac{\partial^{2}\ell}{ \partial\boldsymbol{\eta}_{j}^{T}\partial\boldsymbol{\eta}_{i}}\right)\)

    where \(\ell_{i}\) is the ith component of the log-likelihood function, \(\eta_{i}\) is the vector of linear predictors associated with the ith row, \(\mathbb{E}\left(\frac{\partial^{2}\ell_{i}}{ \partial\eta_{i}^{T}\partial\eta_{i}}\right)\) corresponds to the weights associated with the ith row, and \(\boldsymbol{W}\) is a block matrix made of the diagonal matrices \(\mathbb{E}\left(\frac{\partial^{2}\ell}{ \partial\boldsymbol{\eta}_{j}^{T}\partial\boldsymbol{\eta}_{i}}\right)\).

  4. Regress \(\boldsymbol{Z}\) on \(\boldsymbol{X}_{vlm}\) to obtain \(\boldsymbol{\beta}\) as: \[\boldsymbol{\beta}= \left(\boldsymbol{X}_{vlm}^{T}\boldsymbol{W}\boldsymbol{X}_{vlm}\right)^{-1} \boldsymbol{X}_{vlm}^{T}\boldsymbol{W}\boldsymbol{Z}\]

  5. Assign:

    • converged <- \( \ell(\boldsymbol{\beta})-\ell_{-} < \varepsilon\cdot\ell_{-}\) or \( ||\boldsymbol{\beta}-\boldsymbol{\beta}_{-}||_{\infty} < \varepsilon\)

    • iter <- iter + 1

    where \(\varepsilon\) is the relative tolerance level, 1e-8 by default.

  6. Return to step 2.

  7. Return \(\boldsymbol{\beta}, \boldsymbol{W}\), iter.

In this package we use a different convention for the \(\boldsymbol{X}_{vlm}\) matrix, hence slight differences are present in the algorithm description, but the results are identical.
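The IRLS loop above can be sketched in base R for the simplest special case, an ordinary Poisson GLM with a single linear predictor and log link, where the working response and weight formulas of step 3 reduce to scalars. This is an illustration of the generic algorithm, not singleRcapture's internal implementation.

```r
# Hedged sketch of the IRLS iteration from the Details section,
# specialised to a scalar Poisson/log-link model (illustrative only).
set.seed(1)
x <- cbind(1, runif(100))                      # design matrix
y <- rpois(100, exp(x %*% c(0.5, 1)))          # simulated counts

beta <- c(0, 0)                                # step 1: initialise
for (iter in 1:25) {                           # steps 2 and 6: iterate
  eta <- drop(x %*% beta)
  mu  <- exp(eta)                              # inverse link
  W   <- mu                                    # expected information weights
  z   <- eta + (y - mu) / mu                   # working response (step 3)
  betaOld <- beta
  # step 4: weighted least squares regression of z on x
  beta <- drop(solve(t(x) %*% (W * x), t(x) %*% (W * z)))
  # step 5: convergence check on the coefficient change
  if (max(abs(beta - betaOld)) < 1e-8) break
}
beta  # maximum likelihood estimates
```

At convergence the coefficients match those of glm.fit with a poisson() family, which runs the same iteration internally.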

Value

A list with the regression parameters, the working weights (if the IRLS fitting method was chosen) and the number of iterations taken.

Author(s)

Piotr Chlebicki, Maciej Beresewicz

References

Yee, T. W. (2015). Vector Generalized Linear and Additive Models: With an Implementation in R. New York, USA: Springer. ISBN 978-1-4939-2817-0.

See Also

stats::glm() estimatePopsize() controlMethod() stats::optim()

Examples


summary(farmsubmission)

# construct vglm model matrix
X <- matrix(data = 0, nrow = 2 * NROW(farmsubmission), ncol = 7)
X[1:NROW(farmsubmission), 1:4] <- model.matrix(
  ~ 1 + log_size + log_distance + C_TYPE, 
  farmsubmission
)


X[-(1:NROW(farmsubmission)), 5:7] <- X[1:NROW(farmsubmission), c(1, 3, 4)]

# this attribute tells the function which elements of the design matrix 
# correspond to which linear predictor 
attr(X, "hwm") <- c(4, 3)

# get starting points
start <- glm.fit(
  y = farmsubmission$TOTAL_SUB, 
  x = X[1:NROW(farmsubmission), 1:4], 
  family = poisson()
)$coefficients

res <- estimatePopsizeFit(
  y = farmsubmission$TOTAL_SUB, 
  X = X, 
  method = "IRLS", 
  priorWeights = 1, 
  family = ztoigeom(), 
  control = controlMethod(verbose = 5), 
  coefStart = c(start, 0, 0, 0),
  etaStart = matrix(X %*% c(start, 0, 0, 0), ncol = 2),
  offset = cbind(rep(0, NROW(farmsubmission)), rep(0, NROW(farmsubmission)))
)

# extract results

# regression coefficient vector
res$beta

# check likelihood
ll <- ztoigeom()$makeMinusLogLike(y = farmsubmission$TOTAL_SUB, X = X)

-ll(res$beta)

# number of iterations
res$iter

# working weights
head(res$weights)

# Compare with optim call

res2 <- estimatePopsizeFit(
  y = farmsubmission$TOTAL_SUB, 
  X = X, 
  method = "optim", 
  priorWeights = 1, 
  family = ztoigeom(), 
  coefStart = c(start, 0, 0, 0),
  control = controlMethod(verbose = 1, silent = TRUE),
  offset = cbind(rep(0, NROW(farmsubmission)), rep(0, NROW(farmsubmission)))
)
# extract results

# regression coefficient vector
res2$beta


# check likelihood
-ll(res2$beta)

# number of calls to log lik function
# since optim does not return the number of
# iterations
res2$iter

# optim does not compute working weights
head(res2$weights)


[Package singleRcapture version 0.2.1.2 Index]