discrepancyPlot {mcmcOutput}          R Documentation
Graphical comparison of observed vs simulated discrepancies
Description
One way to assess the fit of a model is to calculate the discrepancy between the observed data and the values predicted by the model. For binomial and count data, the discrepancy will not be zero because the data are integers while the predictions are continuous. To assess whether the observed discrepancy is acceptable, we simulate new data according to the model and calculate discrepancies for the simulated data.
Function discrepancyPlot produces a scatter plot of the MCMC chains for observed vs simulated discrepancies, and calculates and displays a p-value: the proportion of simulated discrepancy values that exceed the observed discrepancy.
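For illustration only (this is not the package's internal code), the quantities involved can be computed by hand from matched draws of the observed and simulated discrepancies; the vectors below are hypothetical:

Tobs <- c(12.3, 11.8, 13.1, 12.7)  # hypothetical observed discrepancies, one per draw
Tsim <- c(11.9, 12.5, 14.0, 12.2)  # hypothetical simulated discrepancies
plot(Tobs, Tsim)                   # scatter plot, one point per MCMC draw
abline(0, 1, lty=2)                # reference line where simulated == observed
mean(Tsim > Tobs)                  # Bayesian p-value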
Usage
discrepancyPlot(object, observed, simulated, ...)
Arguments
object
An object of class mcmcOutput.
observed
character; the name of the parameter for the observed discrepancy.
simulated
character; the name of the parameter for the simulated discrepancies.
...
additional graphical parameters passed to the plotting function.
Value
Returns the proportion of simulated discrepancy values that exceed the observed discrepancy, often referred to as a "Bayesian p-value".
Author(s)
Mike Meredith.
Examples
# Get some data
data(mcmcListExample)
( mco <- mcmcOutput(mcmcListExample) )  # outer parentheses print the object
# Tobs and Tsim are the Freeman-Tukey discrepancy measures
discrepancyPlot(mco, observed="Tobs", simulated="Tsim") # defaults
discrepancyPlot(mco, observed="Tobs", simulated="Tsim",
main="Salamanders", col='red')