dl {dlbayes}    R Documentation

Implement the Dirichlet-Laplace shrinkage prior in Bayesian linear regression

Description

This function implements Bayesian linear regression with the Dirichlet-Laplace shrinkage prior proposed in Bhattacharya et al. (2015). The function is fast because posterior samples of the regression coefficients are drawn with the fast sampling method of Bhattacharya et al. (2015), which handles the large-p case efficiently. The local shrinkage parameters psi_j are updated via the slice sampling scheme of Polson et al. (2014), and the parameters phi_j follow inverse Gaussian distributions; these variates are generated by the transformation-with-multiple-roots method of Michael et al. (1976).
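The transformation-with-multiple-roots sampler of Michael et al. (1976) referred to above can be sketched as follows. This is a minimal illustration of the general technique, not the package's internal code; the function name rinvgauss_mr is made up here.

```r
## Draw one inverse Gaussian(mu, lambda) variate via the
## multiple-roots transformation of Michael et al. (1976).
## (Illustrative sketch; the name rinvgauss_mr is hypothetical.)
rinvgauss_mr <- function(mu, lambda) {
  nu <- rnorm(1)   # standard normal draw
  y  <- nu^2       # chi-squared(1) variate
  ## smaller root of the quadratic equation in x
  x  <- mu + mu^2 * y / (2 * lambda) -
        mu / (2 * lambda) * sqrt(4 * mu * lambda * y + mu^2 * y^2)
  ## accept the smaller root with probability mu / (mu + x);
  ## otherwise return the other root, mu^2 / x
  if (runif(1) <= mu / (mu + x)) x else mu^2 / x
}
```

Averaging many such draws recovers the inverse Gaussian mean mu, which is a quick sanity check on the sampler.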

Usage

dl(x, y, burn = 5000, nmc = 5000, thin = 1, hyper = 1/2)

Arguments

x

Input matrix; each row is an observation vector. Dimension n*p.

y

Response variable, an n*1 vector.

burn

Number of burn-in MCMC samples. Default is 5000.

nmc

Number of posterior draws to be saved. Default is 5000.

thin

Thinning parameter of the chain. Default is 1, meaning no thinning.

hyper

The value of the hyperparameter in the prior, which can lie in [1/max(n,p), 1/2]. It controls the local shrinkage scales through psi. Small values shrink most coefficients close to zero, while large values allow only a mild singularity at zero. See the function "dlhyper" for a method of tuning this parameter.

Value

betamatrix

Posterior samples of beta: an (nmc/thin)*p matrix, with one saved draw per row.

Examples

p <- 50
n <- 5
## generate the design matrix x
x <- matrix(rnorm(n * p), nrow = n)
## generate a sparse coefficient vector beta
beta <- c(rep(0, 10), runif(n = 5, min = -1, max = 1),
          rep(0, 10), runif(n = 5, min = -1, max = 1),
          rep(0, p - 30))
## generate the response y
y <- x %*% beta + rnorm(n)
## tune the hyperparameter, then draw posterior samples
hyper <- dlhyper(x, y)
dlresult <- dl(x, y, hyper = hyper)
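The returned betamatrix can then be summarized coefficient by coefficient. The sketch below uses a simulated stand-in matrix of the same (nmc/thin)*p shape so that it runs without the package; with real output, substitute the matrix returned by dl.

```r
## Summarize posterior draws stored row-wise in an (nmc/thin) x p matrix.
## A simulated matrix stands in for the betamatrix returned by dl().
set.seed(1)
nmc <- 5000
p   <- 50
betamatrix <- matrix(rnorm(nmc * p, sd = 0.1), nrow = nmc)  # stand-in draws

## posterior mean of each coefficient
beta_hat <- colMeans(betamatrix)

## 95% credible interval for each coefficient (2 x p matrix)
ci <- apply(betamatrix, 2, quantile, probs = c(0.025, 0.975))
```

The same two lines (colMeans and apply with quantile) give point estimates and interval estimates for any matrix of posterior draws arranged with one draw per row.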



[Package dlbayes version 0.1.0 Index]