diagnostics {spBFA} R Documentation
diagnostics
Description
Calculates diagnostic metrics using output from the spBFA
model.
Usage
diagnostics(
object,
diags = c("dic", "dinf", "waic"),
keepDeviance = FALSE,
keepPPD = FALSE,
Verbose = TRUE,
seed = 54
)
Arguments
object
A spBFA model object for which diagnostics are desired.
diags
A vector of character strings indicating the diagnostics to compute. Options include: Deviance Information Criterion ("dic"), d-infinity ("dinf"), and Watanabe-Akaike information criterion ("waic"). At least one option must be included. Note: the probit model cannot compute the DIC or WAIC diagnostics because of the computational difficulty of evaluating the multivariate normal CDF.
keepDeviance
A logical indicating whether the posterior deviance distribution is returned (default = FALSE).
keepPPD
A logical indicating whether the posterior predictive distribution at each observed location is returned (default = FALSE).
Verbose
A logical indicating whether progress should be printed (default = TRUE).
seed
An integer used to set the seed for the random number generator (default = 54).
Details
To assess model fit, DIC, d-infinity, and WAIC are computed. DIC is based on the deviance statistic and penalizes model complexity through an estimate of the effective number of parameters, pD (Spiegelhalter et al. 2002). The d-infinity posterior predictive measure is an alternative to DIC, defined as d-infinity = P + G. The goodness-of-fit term G decreases as fit improves, while the penalty term P inflates as the model becomes over-fit, so small values of both terms, and thus of d-infinity, are desirable (Gelfand and Ghosh 1998). WAIC is invariant to parametrization and is asymptotically equal to Bayesian cross-validation (Watanabe 2010). It is computed as WAIC = -2 * (lppd - p_waic_2), where lppd is the log pointwise predictive density and p_waic_2 is the effective number of parameters estimated with the variance estimator of Vehtari et al. (2016) (p_waic_1 denotes the mean estimator).
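The quantities above can be sketched in base R from matrices of posterior draws. This is a minimal illustration of the formulas, not spBFA's internal implementation; the simulated data and all object names below (mu, ppd, ll) are hypothetical stand-ins for the samples a fitted model would provide.

```r
## Sketch: d-infinity and WAIC from simulated posterior draws
## (S posterior samples x n observations).
set.seed(54)
S <- 500; n <- 40
y  <- rnorm(n)                                    # observed data (simulated)
ym <- matrix(rep(y, each = S), nrow = S, ncol = n) # y replicated across draws
mu <- ym + matrix(rnorm(S * n, sd = 0.3), S, n)   # posterior draws of means
ppd <- mu + matrix(rnorm(S * n), S, n)            # posterior predictive draws
ll  <- dnorm(ym, mean = mu, sd = 1, log = TRUE)   # pointwise log-likelihoods

## d-infinity = P + G (Gelfand and Ghosh 1998)
G <- sum((y - colMeans(ppd))^2)   # goodness-of-fit term
P <- sum(apply(ppd, 2, var))      # penalty term
d_inf <- P + G

## WAIC = -2 * (lppd - p_waic_2) (Watanabe 2010; Vehtari et al. 2016)
lppd     <- sum(log(colMeans(exp(ll))))
p_waic_2 <- sum(apply(ll, 2, var))   # variance-based effective parameters
waic     <- -2 * (lppd - p_waic_2)
```

G shrinks as the predictive means track the data, while P and p_waic_2 grow with posterior predictive spread, mirroring the fit/complexity trade-off described above.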
Value
diagnostics
returns a list containing the requested diagnostics and, optionally, the posterior deviance and/or posterior predictive distribution objects.
Author(s)
Samuel I. Berchuck
References
Gelfand, A. E., & Ghosh, S. K. (1998). Model choice: a minimum posterior predictive loss approach. Biometrika, 1-11.
Spiegelhalter, D. J., Best, N. G., Carlin, B. P., & Van Der Linde, A. (2002). Bayesian measures of model complexity and fit. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 64(4), 583-639.
Vehtari, A., Gelman, A., & Gabry, J. (2016). Practical Bayesian model evaluation using leave-one-out cross-validation and WAIC. Statistics and Computing, 1-20.
Watanabe, S. (2010). Asymptotic equivalence of Bayes cross validation and widely applicable information criterion in singular learning theory. Journal of Machine Learning Research, 11(Dec), 3571-3594.
Examples
###Load pre-computed regression results
data(reg.bfa_sp)
###Compute and print diagnostics
diags <- diagnostics(reg.bfa_sp)
print(unlist(diags))
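A sketch extending the example above, selecting specific diagnostics and keeping the posterior deviance draws; it assumes the spBFA package and its pre-computed reg.bfa_sp object are available, and the arguments follow the Usage section.

```r
## Request only d-infinity and WAIC, keep the posterior deviance
## draws, and suppress progress output (sketch; assumes spBFA is
## installed and reg.bfa_sp is shipped with the package).
library(spBFA)
data(reg.bfa_sp)
diags <- diagnostics(reg.bfa_sp, diags = c("dinf", "waic"),
                     keepDeviance = TRUE, Verbose = FALSE)
str(diags, max.level = 1)  # inspect the returned list components
```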