predict.glmmNPML {npmlreg} | R Documentation
Prediction from objects of class glmmNPML or glmmGQ
Description
The functions alldist and allvc produce objects of class glmmGQ
if Gaussian quadrature (Hinde, 1982; random.distribution="gq") was
applied for computation, and objects of class glmmNPML if parameter
estimation was carried out by nonparametric maximum likelihood
(Aitkin, 1996a; random.distribution="np"). The functions presented
here give predictions from those objects.
Usage
## S3 method for class 'glmmNPML'
predict(object, newdata, type = "link", ...)
## S3 method for class 'glmmGQ'
predict(object, newdata, type = "link", ...)
Arguments
object
    a fitted object of class glmmNPML or glmmGQ.

newdata
    a data frame with covariates from which prediction is desired.
    If omitted, empirical Bayes predictions for the original data
    will be given.

type
    if set to "response", predictions are given on the scale of the
    response variable; the default "link" gives predictions on the
    linear predictor scale.

...
    further arguments which will mostly not have any effect (and are
    included only to ensure compatibility with the generic predict
    function).
Details
The predicted values are obtained by

- empirical Bayes (Aitkin, 1996b), if newdata has not been specified.
  That is, the prediction on the linear predictor scale is given by
  \sum_k \eta_{ik} w_{ik}, where \eta_{ik} are the fitted linear
  predictors, w_{ik} are the weights in the final iteration of the EM
  algorithm (corresponding to the posterior probability that
  observation i comes from component k), and the sum is taken over
  the components k for fixed i.

- the marginal model, if object is of class glmmNPML and newdata has
  been specified. The computation is identical to the above, but with
  w_{ik} replaced by the masses \pi_k of the fitted model.

- the analytical expression for the marginal mean of the responses,
  if object is of class glmmGQ and newdata has been specified. See
  Aitkin et al. (2009), p. 481, for the formula. This method is only
  supported for the logarithmic link function, as otherwise no
  analytical expression for the marginal mean of the responses
  exists.
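The weighted sums in the first two cases have the same shape; only the weights differ. A minimal numerical sketch of that computation for a single observation i in a three-component mixture (all values hypothetical, not taken from any fitted model):

```python
# Hypothetical component-specific linear predictors eta_ik for one
# observation i across k = 1, 2, 3 mixture components.
eta_i = [0.8, 1.5, 2.1]

# Empirical Bayes weights w_ik from the final EM iteration
# (posterior probabilities; they sum to 1 for each observation).
w_i = [0.2, 0.5, 0.3]

# Mixture masses pi_k, used instead of w_ik when newdata is supplied
# to a glmmNPML object (marginal-model prediction).
pi_k = [0.3, 0.4, 0.3]

# Empirical Bayes prediction on the linear predictor scale:
# sum over k of eta_ik * w_ik.
eb_pred = sum(e * w for e, w in zip(eta_i, w_i))

# Marginal-model prediction: the same sum with pi_k in place of w_ik.
marginal_pred = sum(e * p for e, p in zip(eta_i, pi_k))

print(eb_pred)        # 0.8*0.2 + 1.5*0.5 + 2.1*0.3 = 1.54
print(marginal_pred)  # 0.8*0.3 + 1.5*0.4 + 2.1*0.3 = 1.47
```

With type="response", the result would additionally be mapped through the inverse link function.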
It is sufficient to call predict instead of predict.glmmNPML or
predict.glmmGQ, since the generic predict function provided in R
automatically dispatches on the class of the fitted model.
Value
A vector of predicted values.
Note
The results of the generic fitted()
method
correspond to predict(object, type="response")
. Note that, as we are
working with random effects, fitted values are never really ‘fitted’ but rather
‘predicted’.
Author(s)
Jochen Einbeck and John Hinde (2007).
References
Aitkin, M. (1996a). A general maximum likelihood analysis of overdispersion in generalized linear models. Statistics and Computing 6, 251-262.
Aitkin, M. (1996b). Empirical Bayes shrinkage using posterior random effect means from nonparametric maximum likelihood estimation in general random effect models. Statistical Modelling: Proceedings of the 11th IWSM 1996, 87-94.
Aitkin, M., Francis, B. and Hinde, J. (2009). Statistical Modelling in R. Oxford Statistical Science Series, Oxford, UK.
Hinde, J. (1982). Compound Poisson regression models. Lecture Notes in Statistics 14, 109-121.
See Also
alldist, allvc
Examples
# Toxoplasmosis data:
data(rainfall)
rainfall$x <- rainfall$Rain/1000
toxo.0.3x <- alldist(cbind(Cases, Total-Cases) ~ 1, random = ~x,
    data = rainfall, k = 3, family = binomial(link = logit))
toxo.1.3x <- alldist(cbind(Cases, Total-Cases) ~ x, random = ~x,
    data = rainfall, k = 3, family = binomial(link = logit))
predict(toxo.0.3x, type="response", newdata=data.frame(x=2))
# [1] 0.4608
predict(toxo.1.3x, type="response", newdata=data.frame(x=2))
# [1] 0.4608
# gives the same result, as both models are equivalent and only differ
# by a parameter transformation.
# Fabric faults data:
data(fabric)
names(fabric)
# [1] "leng" "y" "x"
faults.g2 <- alldist(y ~ x, family = poisson(link = log), random = ~1,
    data = fabric, k = 2, random.distribution = "gq")
predict(faults.g2, type = "response", newdata = fabric[1:6,])
# [1] 8.715805 10.354556 13.341242 5.856821 11.407828 13.938013
# is not the same as
predict(faults.g2, type="response")[1:6]
# [1] 6.557786 7.046213 17.020242 7.288989 13.992591 9.533823
# since in the first case prediction is done using the analytical
# mean of the marginal distribution, and in the second case using the
# individual posterior probabilities in an empirical Bayes approach.