model-information-criteria {mcmcsae} — R Documentation
Compute DIC, WAIC and leave-one-out cross-validation model measures
Description

Compute the Deviance Information Criterion (DIC) or Watanabe-Akaike Information Criterion (WAIC) from an object of class mcdraws output by MCMCsim. Method waic.mcdraws computes WAIC using package loo. Method loo.mcdraws also depends on package loo, to compute a Pareto-smoothed importance sampling (PSIS) approximation to leave-one-out cross-validation.
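For reference, these criteria follow the standard definitions of Spiegelhalter et al. (2002), Watanabe (2010) and Gelman et al. (2014); the formulas below are a summary of those references, not package-specific notation:

$$\mathrm{DIC} = \bar{D} + p_D, \qquad p_D = \bar{D} - D(\bar{\theta}),$$

where \(\bar{D}\) is the posterior mean deviance and \(D(\bar{\theta})\) the deviance at the posterior mean. The alternative effective-parameter estimate selected by use.pV is

$$p_V = \tfrac{1}{2}\operatorname{Var}_{\mathrm{post}}(D).$$

For WAIC,

$$\mathrm{WAIC} = -2\,(\mathrm{lppd} - p_{\mathrm{WAIC}}), \qquad p_{\mathrm{WAIC}} = \sum_{i} \operatorname{Var}_{\mathrm{post}}\bigl(\log p(y_i \mid \theta)\bigr),$$

where lppd is the log pointwise predictive density summed over the data units.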
Usage

compute_DIC(x, use.pV = FALSE)

compute_WAIC(
  x,
  diagnostic = FALSE,
  batch.size = NULL,
  show.progress = TRUE,
  cl = NULL,
  n.cores = 1L
)

## S3 method for class 'mcdraws'
waic(x, by.unit = FALSE, ...)

## S3 method for class 'mcdraws'
loo(x, by.unit = FALSE, r_eff = FALSE, n.cores = 1L, ...)
Arguments
x |
an object of class |
use.pV |
whether half the posterior variance of the deviance should be used as an alternative estimate of the effective number of model parameters for DIC. |
diagnostic |
whether vectors of log-pointwise-predictive-densities and pointwise contributions to the WAIC effective number of model parameters should be returned. |
batch.size |
number of data units to process per batch. |
show.progress |
whether to show a progress bar. |
cl |
an existing cluster can be passed for parallel computation. If |
n.cores |
the number of cpu cores to use. Default is one, i.e. no parallel computation. |
by.unit |
if |
... |
Other arguments, passed to |
r_eff |
whether to compute relative effective sample size estimates
for the likelihood of each observation. This takes more time, but should
result in a better PSIS approximation. See |
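As a sketch of how the cl and n.cores arguments might be used for parallel computation (assuming the standard parallel package; the model setup mirrors the Examples section):

```r
library(mcmcsae)
library(parallel)

# fit an example model, as in the Examples section
ex <- mcmcsae_example(n = 100)
sampler <- create_sampler(ex$model, data = ex$dat)
sim <- MCMCsim(sampler, burnin = 100, n.iter = 300, n.chain = 4,
               store.all = TRUE)

# option 1: let compute_WAIC start its own workers
compute_WAIC(sim, n.cores = 2L)

# option 2: pass an existing cluster, e.g. to reuse it across calls
cl <- makeCluster(2L)
compute_WAIC(sim, cl = cl)
stopCluster(cl)
```

Passing an existing cluster avoids repeated start-up costs when several model measures are computed in succession.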
Value

For compute_DIC, a vector with the deviance information criterion and the effective number of model parameters. For compute_WAIC, a vector with the WAIC model selection criterion and the WAIC effective number of model parameters. Method waic returns an object of class c("waic", "loo"); see the documentation for waic in package loo. Method loo returns an object of class psis_loo; see loo.
References
D. Spiegelhalter, N. Best, B. Carlin and A. van der Linde (2002). Bayesian Measures of Model Complexity and Fit. Journal of the Royal Statistical Society B 64 (4), 583-639.
S. Watanabe (2010). Asymptotic equivalence of Bayes cross validation and widely applicable information criterion in singular learning theory. Journal of Machine Learning Research 11, 3571-3594.
A. Gelman, J. Hwang and A. Vehtari (2014). Understanding predictive information criteria for Bayesian models. Statistics and Computing 24, 997-1016.
A. Vehtari, D. Simpson, A. Gelman, Y. Yao and J. Gabry (2015). Pareto smoothed importance sampling. arXiv:1507.02646.
A. Vehtari, A. Gelman and J. Gabry (2017). Practical Bayesian model evaluation using leave-one-out cross-validation and WAIC. Statistics and Computing 27, 1413-1432.
P.-C. Buerkner, J. Gabry and A. Vehtari (2021). Efficient leave-one-out cross-validation for Bayesian non-factorized normal and Student-t models. Computational Statistics 36, 1243-1261.
Examples

# generate example data and fit the model
ex <- mcmcsae_example(n=100)
sampler <- create_sampler(ex$model, data=ex$dat)
sim <- MCMCsim(sampler, burnin=100, n.iter=300, n.chain=4, store.all=TRUE)
compute_DIC(sim)
compute_WAIC(sim)
# WAIC and PSIS-LOO via package loo
if (require(loo)) {
  waic(sim)
  loo(sim, r_eff=TRUE)
}