rnlp {mombf}    R Documentation
Posterior sampling for regression parameters
Description
Gibbs sampler for linear, generalized linear and survival models under product non-local priors, Zellner's prior and a Normal approximation to the posterior. Both sampling conditional on a model and Bayesian model averaging are implemented (see Details).
If x and y are not specified, samples from non-local priors/posteriors with density proportional to d(theta) N(theta; m, V) are produced, where d(theta) is the non-local penalty term.
Usage
rnlp(y, x, m, V, msfit, outcometype, family, priorCoef, priorGroup,
priorVar, isgroup, niter=10^3, burnin=round(niter/10), thinning=1, pp='norm')
Arguments
y: Vector with observed responses. When y and x are missing, samples are drawn from a density proportional to d(theta) N(theta; m, V) (see Details).
x: Design matrix with all potential predictors.
m: Mean for the Normal kernel.
V: Covariance for the Normal kernel.
msfit: Object of class msfit, e.g. as returned by modelSelection. If supplied, models are sampled according to postProb(msfit) (see Details).
outcometype: Type of outcome. Possible values are "Continuous", "glm" or "Survival".
family: Assumed family for the outcome. Some possible values are "normal", "binomial logit" and "Cox".
priorCoef: Prior distribution for the coefficients. Ignored if msfit is supplied.
priorGroup: Prior on grouped coefficients (e.g. categorical predictors with >2 categories, splines), as passed to modelSelection.
priorVar: Prior on the residual variance. Ignored if msfit is supplied.
isgroup: Logical vector where TRUE indicates that the corresponding column of x belongs to a group of variables (e.g. one of several columns coding a categorical predictor).
niter: Number of MCMC iterations.
burnin: Number of burn-in MCMC iterations. Defaults to round(niter/10).
thinning: MCMC thinning factor, i.e. only one out of each thinning iterations is kept. Defaults to 1 (no thinning).
pp: When msfit is supplied, method used to obtain the posterior model probabilities that determine the sampled models, as in postProb. Defaults to 'norm'.
Details
The algorithm is implemented for product MOM (pMOM), product iMOM (piMOM) and product eMOM (peMOM) priors. It combines an orthogonalization that provides low serial correlation with a latent truncation representation that allows fast sampling.
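For instance, prior objects for these families can be created with the mombf constructors momprior, imomprior and emomprior and passed to rnlp through priorCoef (a minimal sketch; the tau values are illustrative, not recommendations):

#Illustrative non-local prior objects for the priorCoef argument
pc.mom  <- momprior(tau=0.348)   #product MOM (pMOM)
pc.imom <- imomprior(tau=0.133)  #product iMOM (piMOM)
pc.emom <- emomprior(tau=0.119)  #product eMOM (peMOM)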
When y and x are specified, sampling is for the linear regression posterior. When argument msfit is left missing, posterior sampling is for the full model regressing y on all covariates in x.
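For example, full-model posterior samples could be drawn as follows (a minimal sketch; the pMOM prior on the coefficients and the inverse gamma prior on the residual variance, set through igprior, are illustrative choices):

#Sketch: posterior sampling for the full model
x <- matrix(rnorm(100*3), nrow=100, ncol=3)
y <- x %*% c(1,1,0) + rnorm(100)
th <- rnlp(y=y, x=x, priorCoef=momprior(tau=0.348),
           priorVar=igprior(alpha=0.01, lambda=0.01), niter=1000)
colMeans(th)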
When msfit is specified, each model is drawn with probability given by postProb(msfit). In this case, a Bayesian Model Averaging estimate of the regression coefficients can be obtained by applying colMeans to the rnlp output matrix.
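For instance (a minimal sketch, assuming fit1 is an msfit object returned by modelSelection, as in the Examples below):

th <- rnlp(msfit=fit1, niter=1000)
colMeans(th)                                 #BMA point estimate
apply(th, 2, quantile, probs=c(.025,.975))   #BMA 95% posterior intervals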
When y and x are left missing, sampling is from a density proportional to d(theta) N(theta; m, V), where d(theta) is the non-local penalty (e.g. d(theta)=prod(theta^(2r)) for the pMOM prior).
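For instance, samples with density proportional to the pMOM penalty times a bivariate Normal kernel could be obtained as follows (a minimal sketch; m, V and tau are illustrative values):

#Sketch: sample from d(theta) N(theta; m, V) under a pMOM penalty
th <- rnlp(m=rep(0,2), V=diag(2), priorCoef=momprior(tau=0.348), niter=1000)
colMeans(th)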
Value
Matrix with posterior samples
Author(s)
David Rossell
References
D. Rossell and D. Telesca. Non-local priors for high-dimensional estimation, 2014. http://arxiv.org/pdf/1402.5107v2.pdf
See Also
modelSelection
to perform model selection and compute
posterior model probabilities.
For more details on prior specification see msPriorSpec-class
.
Examples
#Simulate data
x <- matrix(rnorm(100*3),nrow=100,ncol=3)
theta <- matrix(c(1,1,0),ncol=1)
y <- x %*% theta + rnorm(100)
#Model selection under the default priors
fit1 <- modelSelection(y=y, x=x, center=FALSE, scale=FALSE)

#Posterior samples for the coefficients (Bayesian model averaging)
th <- rnlp(msfit=fit1, niter=100)

#BMA estimate of the coefficients
colMeans(th)