defaultpost {rjmcmc} | R Documentation
Perform Post-Processing Using Default Bijections
Description
Performs Bayesian multimodel inference, estimating Bayes factors and
posterior model probabilities for N candidate models. Unlike
rjmcmcpost, this function uses a default bijection scheme based
on approximating each posterior by a multivariate normal distribution. The
result is reminiscent of the algorithm of Carlin and Chib (1995) with a
multivariate normal pseudo-prior. Transformation Jacobians are computed using
automatic differentiation, and so do not need to be specified by the user.
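The standardisation idea behind the default bijection can be illustrated as follows. This is only a sketch of the affine step, assuming a multivariate normal approximation built from the sample mean and covariance; it is not the package's internal code.

```r
## Sketch: map a model's posterior draws to a common "universal" scale via
## the sample mean and the Cholesky factor of the sample covariance.
set.seed(1)
draws <- cbind(rnorm(1000, 2, 0.5), rnorm(1000, -1, 2))  # mock posterior sample
mu <- colMeans(draws)
R <- chol(cov(draws))                  # upper-triangular Cholesky factor
## Solve t(R) %*% z = (x - mu) for each draw, giving standardised draws z
z <- t(backsolve(R, t(draws) - mu, transpose = TRUE))
## The Jacobian of this affine map is constant: 1 / prod(diag(R)).
```

Under this map the standardised draws have (sample) mean zero and identity covariance, so parameters from differently scaled models become comparable.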
Usage
defaultpost(posterior, likelihood, param.prior, model.prior,
chainlength = 10000, TM.thin = chainlength/10, progress = TRUE,
save.all = TRUE)
Arguments
posterior
A list of N matrices containing the posterior distributions under each model. Generally this will be obtained from MCMC output. Note that each parameter should be real-valued, so some parameters may need to be transformed, for example by taking logarithms.
likelihood
A list of N functions specifying the log-likelihood functions for the data under each model.
param.prior
A list of N functions specifying the prior distributions for each model-specific parameter vector.
model.prior
A numeric vector of prior model probabilities. This argument need not sum to one, as it is normalised automatically.
chainlength
How many iterations to run the Markov chain for.
TM.thin
How regularly to calculate transition matrices as the chain progresses.
progress
A logical determining whether a progress bar is drawn.
save.all
A logical determining whether to save the value of the universal parameter at each iteration, as well as the corresponding likelihoods, priors and posteriors. If FALSE, these are not retained in the output (see Value).
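The transformation note under posterior can be made concrete. The sketch below uses mock draws and hypothetical names: a strictly positive parameter is supplied on the log scale so the posterior matrix is real-valued, with the likelihood back-transforming internally.

```r
## Sketch (illustrative names, mock draws): log-transforming a positive
## parameter so the 'posterior' matrix is real-valued as required.
set.seed(2)
sigma2.draws <- rchisq(1000, df = 5)             # strictly positive draws
post.mat <- matrix(log(sigma2.draws), ncol = 1)  # real-valued version
## The matching log-likelihood receives log(sigma^2) and exponentiates:
L.example <- function(theta) {
  s2 <- exp(theta[1])                            # back-transform
  sum(dnorm(c(-0.1, 0.3), mean = 0, sd = sqrt(s2), log = TRUE))
}
## Note: a prior written on the log scale must include the Jacobian term,
## i.e. log p(theta) = log p(exp(theta)) + theta for this transformation.
```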
Value
Returns an object of class rj (see rjmethods).
If save.all=TRUE, the output has named elements result, densities, psidraws, progress and meta. If save.all=FALSE, the densities and psidraws elements are omitted.
result contains useful point estimates, progress contains snapshots of these estimates over time, and meta contains information about the function call.
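The relationship between the estimates in result can be sketched directly. This is an illustrative calculation with hypothetical numbers, not package code: posterior model probabilities are proportional to the Bayes factors (relative to a reference model) multiplied by the normalised prior model probabilities.

```r
## Illustrative only: posterior model probabilities from Bayes factors.
## P(M_k | y) is proportional to BF_k1 * P(M_k), with M_1 as reference.
bf <- c(1, 0.5)              # hypothetical Bayes factors vs. model 1
prior <- c(1, 1)             # unnormalised prior model probabilities
prior <- prior / sum(prior)  # normalise, as defaultpost does for model.prior
post <- bf * prior / sum(bf * prior)
post                         # c(2/3, 1/3) in this hypothetical case
```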
References
Carlin, B. P. and Chib, S. (1995) Bayesian Model Choice via Markov Chain Monte Carlo Methods. Journal of the Royal Statistical Society, Series B, 57(3), 473-484.
Barker, R. J. and Link, W. A. (2013) Bayesian multimodel inference by RJMCMC: A Gibbs sampling approach. The American Statistician, 67(3), 150-156.
See Also
rjmcmcpost, rjmethods
Examples
## Comparing two binomial models -- see Barker & Link (2013) for further details.
y <- c(8, 16); sumy <- sum(y)
n <- c(20, 30); sumn <- sum(n)

## Model 1: separate success probabilities; Model 2: a common probability,
## with the unused second parameter retained under its pseudo-prior.
L1 <- function(p){ if(all(p >= 0) && all(p <= 1)) sum(dbinom(y, n, p, log=TRUE)) else -Inf }
L2 <- function(p){ if(p[1] >= 0 && p[1] <= 1) sum(dbinom(y, n, p[1], log=TRUE)) else -Inf }
p.prior1 <- function(p){ sum(dbeta(p, 1, 1, log=TRUE)) }
p.prior2 <- function(p){ dbeta(p[1], 1, 1, log=TRUE) + dbeta(p[2], 17, 15, log=TRUE) }

## Posterior draws: conjugate beta full conditionals under each model.
draw1 <- matrix(rbeta(2000, y + 1, n - y + 1), 1000, 2, byrow=TRUE)
draw2 <- matrix(c(rbeta(1000, sumy + 1, sumn - sumy + 1), rbeta(1000, 17, 15)), 1000, 2)

out <- defaultpost(posterior=list(draw1, draw2), likelihood=list(L1, L2),
                   param.prior=list(p.prior1, p.prior2), model.prior=c(1, 1),
                   chainlength=1000)