pvalues {lme4}    R Documentation
Getting p-values for fitted models
Description
One of the most frequently asked questions about
lme4 is "how do I calculate p-values for estimated
parameters?" Previous versions of lme4 provided
the mcmcsamp function, which efficiently generated
a Markov chain Monte Carlo sample from the posterior
distribution of the parameters, assuming flat (scaled
likelihood) priors. Due to difficulty in constructing a
version of mcmcsamp that was reliable even in
cases where the estimated random effect variances were
near zero (e.g.
https://stat.ethz.ch/pipermail/r-sig-mixed-models/2009q4/003115.html),
mcmcsamp has been withdrawn (or more precisely,
not updated to work with lme4 versions >=1.0.0).
Many users, including users of the aovlmer.fnc function from
the languageR package which relies on mcmcsamp, will be
deeply disappointed by this lacuna. Users who need p-values have a
variety of options. In the list below, the methods marked MC
provide explicit model comparisons; CI denotes confidence
intervals; and P denotes parameter-level or sequential tests of
all effects in a model. The starred (*) suggestions provide
finite-size corrections (important when the number of groups is <50);
those marked (+) support GLMMs as well as LMMs.
  * likelihood ratio tests via anova or drop1 (MC,+)

  * profile confidence intervals via profile.merMod and confint.merMod (CI,+)

  * parametric bootstrap confidence intervals and model comparisons via
    bootMer (or PBmodcomp in the pbkrtest package) (MC/CI,*,+)

  * for random effects, simulation tests via the RLRsim package (MC,*)

  * for fixed effects, F tests via the Kenward-Roger approximation using
    KRmodcomp from the pbkrtest package (MC,*)

  * car::Anova and lmerTest::anova provide wrappers for Kenward-Roger-corrected
    tests using pbkrtest; lmerTest::anova also provides t tests via the
    Satterthwaite approximation (P,*)

  * afex::mixed is another wrapper for pbkrtest and anova, providing
    "Type 3" tests of all effects (P,*,+)
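A minimal sketch of several of these approaches, fit to the sleepstudy data
shipped with lme4; the particular models, the small nsim value, and the
requireNamespace() guards are illustrative choices rather than part of this
help page:

    library(lme4)

    ## ML fits for a likelihood ratio test of the fixed effect of Days
    fm1 <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy, REML = FALSE)
    fm0 <- update(fm1, . ~ . - Days)
    anova(fm0, fm1)                          # likelihood ratio test (MC,+)

    ## profile and parametric-bootstrap confidence intervals (CI / MC,*,+)
    confint(fm1, method = "profile")
    confint(fm1, method = "boot", nsim = 200)

    ## Kenward-Roger F test (MC,*); KR is based on REML, so use REML fits
    fm1R <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy)
    fm0R <- update(fm1R, . ~ . - Days)
    if (requireNamespace("pbkrtest", quietly = TRUE))
        pbkrtest::KRmodcomp(fm1R, fm0R)

    ## Satterthwaite t tests (P,*): refitting with lmerTest::lmer makes
    ## summary() and anova() report denominator df and p-values
    if (requireNamespace("lmerTest", quietly = TRUE))
        summary(lmerTest::lmer(Reaction ~ Days + (Days | Subject), sleepstudy))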
arm::sim or bootMer can be used
to compute confidence intervals on predictions.
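A rough sketch of percentile bootstrap intervals on population-level
predictions via bootMer; the helper name predFun, the nsim value, and the use
of the sleepstudy data are illustrative assumptions:

    library(lme4)
    fm <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy)
    predFun <- function(fit) predict(fit, re.form = NA)  # population-level predictions
    bb <- bootMer(fm, predFun, nsim = 200)                # parametric bootstrap replicates
    ci <- t(apply(bb$t, 2, quantile, probs = c(0.025, 0.975)))  # 95% percentile intervals
    head(ci)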
For glmer models, the summary output provides p-values
based on asymptotic Wald tests (P); while this is standard practice
for generalized linear models, these tests make assumptions both about
the shape of the log-likelihood surface and about the accuracy of
a chi-squared approximation to differences in log-likelihoods.
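As an illustration (the cbpp data shipped with lme4 and this particular model
are just an example), the Wald z tests reported by summary() can be set
against a likelihood ratio test of the same term:

    library(lme4)
    gm1 <- glmer(cbind(incidence, size - incidence) ~ period + (1 | herd),
                 data = cbpp, family = binomial)
    summary(gm1)$coefficients            # asymptotic Wald z tests (P)

    gm0 <- update(gm1, . ~ . - period)   # drop the fixed effect of period
    anova(gm0, gm1)                      # likelihood ratio test, for comparison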
When all else fails, don't forget to keep p-values in perspective: https://phdcomics.com/comics/archive.php?comicid=905