compare_models {MixSIAR}    R Documentation
Compare the predictive accuracy of 2 or more MixSIAR models
Description
compare_models uses the 'loo' package to compute LOO (leave-one-out cross-validation) or WAIC (widely applicable information criterion) for 2 or more fitted MixSIAR models.
Usage
compare_models(x, loo = TRUE)
Arguments
x     list of two or more rjags model objects (output from run_model)

loo   TRUE/FALSE: if TRUE (default), compute LOO; if FALSE, compute WAIC
Details
LOO and WAIC are "methods for estimating pointwise out-of-sample prediction accuracy from a fitted Bayesian model using the log-likelihood evaluated at the posterior simulations of the parameter values". See Vehtari, Gelman, & Gabry (2017). In brief:
- LOO and WAIC are preferred over AIC or DIC
- LOO is more robust than WAIC
- 'loo' estimates standard errors for the difference in LOO/WAIC between two models (a minimal sketch follows this list)
- We can calculate the relative support for each model using LOO/WAIC weights
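For orientation, here is a minimal sketch of how LOO and WAIC can be computed with the 'loo' package from a matrix of pointwise log-likelihood values (rows = posterior draws, columns = observations). The log_lik matrix and chain layout below are hypothetical placeholders, not MixSIAR's internal objects:

library(loo)

# Hypothetical S x N matrix of pointwise log-likelihoods
# (3000 posterior draws from 3 chains, 20 observations)
set.seed(1)
log_lik <- matrix(rnorm(3000 * 20, mean = -1), nrow = 3000, ncol = 20)
chain_id <- rep(1:3, each = 1000)

# WAIC needs only the log-likelihood matrix
waic_fit <- waic(log_lik)

# PSIS-LOO also uses relative MCMC efficiencies
r_eff <- relative_eff(exp(log_lik), chain_id = chain_id)
loo_fit <- loo(log_lik, r_eff = r_eff)

# Given two fitted loo objects, loo_compare() reports the difference
# in expected predictive accuracy and its standard error:
# loo_compare(loo_fit_1, loo_fit_2)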
Value
Data frame with the following columns:
- Model: names of x (input list)
- LOOic/WAIC: LOO information criterion or WAIC
- se_LOOic/se_WAIC: standard error of LOOic/WAIC
- dLOOic/dWAIC: difference between each model and the model with the lowest LOOic/WAIC. The best model has dLOOic = 0.
- se_dLOOic/se_dWAIC: standard error of the difference between each model and the model with the lowest LOOic/WAIC
- weight: relative support for each model, calculated as Akaike weights (p. 75, Burnham & Anderson 2002). Interpretation: "an estimate of the probability that the model will make the best predictions on new data, conditional on the set of models considered" (McElreath 2015). A worked example follows.
See Also
Vehtari, A, A Gelman, and J Gabry. 2017. Practical Bayesian model evaluation using leave-one-out cross-validation and WAIC. Statistics and Computing.
Pages 75-88 in Burnham, KP and DR Anderson. 2002. Model selection and multimodel inference: a practical information-theoretic approach. Springer Science & Business Media.
Pages 188-201 in McElreath, R. 2016. Statistical rethinking: a Bayesian course with examples in R and Stan. CRC Press.
Examples
## Not run:
# Model 1 = wolf diet by Region + Pack
mix.1 <- load_mix_data(filename=mix.filename,
iso_names=c("d13C","d15N"),
factors=c("Region","Pack"),
fac_random=c(TRUE,TRUE),
fac_nested=c(FALSE,TRUE),
cont_effects=NULL)
source.1 <- load_source_data(filename=source.filename, source_factors="Region",
conc_dep=FALSE, data_type="means", mix.1)
discr.1 <- load_discr_data(filename=discr.filename, mix.1)
# Run Model 1
jags.1 <- run_model(run="test", mix.1, source.1, discr.1, model_filename,
                    alpha.prior = 1, resid_err=TRUE, process_err=TRUE)
# Model 2 = wolf diet by Region (no Pack)
mix.2 <- load_mix_data(filename=mix.filename,
iso_names=c("d13C","d15N"),
factors=c("Region"),
fac_random=c(TRUE),
fac_nested=c(FALSE),
cont_effects=NULL)
source.2 <- load_source_data(filename=source.filename, source_factors="Region",
conc_dep=FALSE, data_type="means", mix.2)
discr.2 <- load_discr_data(filename=discr.filename, mix.2)
# Run Model 2
jags.2 <- run_model(run="test", mix.2, source.2, discr.2, model_filename,
                    alpha.prior = 1, resid_err=TRUE, process_err=TRUE)
# Compare models 1 and 2 using LOO
compare_models(x=list(jags.1, jags.2), loo=TRUE)
# Compare models 1 and 2 using WAIC
compare_models(x=list(jags.1, jags.2), loo=FALSE)
# Get WAIC for model 1
compare_models(x=list(jags.1), loo=FALSE)
# Create named list of rjags objects to get model names in summary
x <- list(jags.1, jags.2)
names(x) <- c("Region + Pack", "Region")
compare_models(x)
## End(Not run)