calc.marglogL {rpql}		R Documentation
Calculate the marginal log-likelihood for a GLMM fitted using rpql
Description
After fitting and performing joint (fixed and random effects) selection using regularized PQL, one may then (for one reason or another) want to calculate the marginal log-likelihood for the (sub)model, possibly on a test dataset for prediction. This is the main purpose of calc.marglogL.
Usage
calc.marglogL(new.data, fit, B = 1000)
Arguments
new.data
A list containing the elements of the new dataset on which the marginal log-likelihood is to be calculated, formatted in the same manner as the data supplied to rpql; for instance, new.data$X should be the fixed effects model matrix.
fit
An object of class "rpql", i.e. the (sub)model fitted using the main rpql function.
B
A positive integer for the number of random effects samples to generate when performing Monte-Carlo integration. Defaults to 1000.
Details
Regularized PQL performs penalized joint (fixed and random effects) selection for GLMMs, where the penalized quasi-likelihood (PQL; Breslow and Clayton, 1993) is used as the loss function. After fitting, one may then wish to calculate the marginal log-likelihood for the (sub)model, defined as

\ell = \log\left(\int f(\bm{y}; \bm{\beta}, \bm{b}, \phi) f(\bm{b}; \bm{\Sigma}) \, d\bm{b}\right),

where f(\bm{y}; \bm{\beta}, \bm{b}, \phi) denotes the conditional likelihood of the responses \bm{y} given the fixed effects \bm{\beta}, the random effects \bm{b}, and the nuisance parameters \phi if appropriate, and f(\bm{b}; \bm{\Sigma}) is the multivariate normal distribution for the random effects, with covariance matrix \bm{\Sigma}. calc.marglogL calculates the above marginal log-likelihood using Monte-Carlo integration.
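To make the Monte-Carlo integration concrete, the following self-contained sketch estimates the marginal log-likelihood of a toy Poisson GLMM with a single random intercept per cluster. It is illustrative only and does not use rpql; all object names and parameter values (n_clus, beta, sigma2, and so on) are invented for the example.

## Simulate a toy Poisson GLMM with a random intercept per cluster
set.seed(1)
n_clus <- 10; m <- 5
id <- rep(1:n_clus, each = m)
X <- cbind(1, rnorm(n_clus * m))
beta <- c(0.5, 1)
sigma2 <- 0.5
b_true <- rnorm(n_clus, sd = sqrt(sigma2))
y <- rpois(n_clus * m, lambda = exp(X %*% beta + b_true[id]))

## Monte-Carlo estimate of the marginal log-likelihood: for each cluster,
## average the conditional likelihood over B simulated random intercepts
## drawn from N(0, sigma2), then sum the logs across (independent) clusters.
B <- 1000
marg_logL <- sum(sapply(1:n_clus, function(i) {
    rows <- which(id == i)
    b_sim <- rnorm(B, sd = sqrt(sigma2))
    cond_lik <- sapply(b_sim, function(b)
        prod(dpois(y[rows], lambda = exp(X[rows, ] %*% beta + b))))
    log(mean(cond_lik))
    }))
marg_logL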
Admittedly, this function is not really useful for fitting the GLMM per se: it is never called by the main function rpql, and the marginal log-likelihood is (approximately) calculated anyway if hybrid.est = TRUE and the final submodel is refitted using lme4. Where the function comes in handy is if you have a validation or test dataset, and you want to calculate the predicted (log-)likelihood of the test data given the regularized PQL fit.
Value
The marginal log-likelihood of new.data given the GLMM in fit.
Warnings
No check is made to see if the dimensions of the elements in new.data and fit match, e.g. that the number of columns in new.data$X is equal to the number of elements in fit$fixef. Please ensure they are!

Monte-Carlo integration is computationally intensive, especially if \bm{y} is long!
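As a simple safeguard, the fixed effects dimensions can be checked by hand before calling the function; the snippet below assumes new.data and fit are set up as described in the Arguments section.

## Manual dimension check (a sketch; assumes new.data$X and fit$fixef
## exist as described above).
stopifnot(ncol(new.data$X) == length(fit$fixef))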
Author(s)
Francis K.C. Hui <francis.hui@gmail.com>, with contributions from Samuel Mueller <samuel.mueller@sydney.edu.au> and A.H. Welsh <Alan.Welsh@anu.edu.au>
Maintainer: Francis Hui <fhui28@gmail.com>
References
Breslow, N. E., & Clayton, D. G. (1993). Approximate inference in generalized linear mixed models. Journal of the American Statistical Association, 88, 9-25.
See Also
rpql for fitting and performing model selection in GLMMs using regularized PQL. lme4 also approximately calculates the marginal log-likelihood when fitting a GLMM.
Examples
## Not given
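## A hedged sketch of typical usage (not run): it assumes `fit` is an object
## already returned by rpql, and that y_test, X_test and Z_test form a
## held-out dataset formatted in the same way as the training data supplied
## to rpql. The element names of new.data below are illustrative only.
## Not run:
# fit <- rpql(...)   # fitted beforehand; see the rpql help file for examples
# test.data <- list(y = y_test, X = X_test, Z = Z_test)
# calc.marglogL(new.data = test.data, fit = fit, B = 2000)
## End(Not run)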