exact_horseshoe {Mhorseshoe}    R Documentation
Run the exact MCMC algorithm for the horseshoe prior
Description
The exact MCMC algorithm for the horseshoe prior introduced in Section 2.1 of Johndrow et al. (2020).
Usage
exact_horseshoe(
  y,
  X,
  burn = 1000,
  iter = 5000,
  a = 1/5,
  b = 10,
  s = 0.8,
  tau = 1,
  sigma2 = 1,
  w = 1,
  alpha = 0.05
)
Arguments

y: Response vector, y \in R^{N}.

X: Design matrix, X \in R^{N \times p}.

burn: Number of burn-in samples. The default is 1000.

iter: Number of samples to be drawn from the posterior. The default is 5000.

a: A tuning parameter of the rejection sampler, where the default value is a = 1/5.

b: A tuning parameter of the rejection sampler, where the default value is b = 10.

s: A tuning parameter controlling the Metropolis-Hastings proposal for \tau. The default is 0.8.

tau: Initial value of the global shrinkage parameter \tau. The default is 1.

sigma2: Initial value of the error variance \sigma^2. The default is 1.

w: A hyperparameter of the gamma prior for \sigma^2. The default is 1.

alpha: Sets the 100(1 - alpha)% credible interval; the default of 0.05 yields 95% intervals.
Details

This function implements the exact MCMC algorithm introduced in Section 2.1 of Johndrow et al. (2020). The algorithm uses a blocked Gibbs sampler for (\tau, \beta, \sigma^2), where the global shrinkage parameter \tau is updated by a Metropolis-Hastings step. The local shrinkage parameters \lambda_{j}, j = 1, 2, ..., p, are updated by the rejection sampler.
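One common pattern for the \tau update, consistent with a proposal step size s, is a Gaussian random walk on the log scale. A minimal sketch in R, assuming a generic unnormalized log posterior log_post_tau and proposal standard deviation s (both hypothetical stand-ins, not the package's internal parameterization):

# Sketch: random-walk MH update for tau on the log scale.
# log_post_tau is a hypothetical stand-in for the unnormalized
# log posterior of tau; the package's internal target differs.
mh_update_tau <- function(tau, log_post_tau, s = 0.8) {
  tau_prop <- exp(log(tau) + stats::rnorm(1, mean = 0, sd = s))
  # log(tau_prop) - log(tau) is the Jacobian term from the
  # change of variables for a log-scale random walk.
  log_accept <- log_post_tau(tau_prop) - log_post_tau(tau) +
    log(tau_prop) - log(tau)
  if (log(stats::runif(1)) < log_accept) tau_prop else tau
}

# Toy usage with a half-Cauchy(0, 1) target for tau > 0.
log_post_tau <- function(tau) stats::dcauchy(tau, log = TRUE)
tau <- 1
for (i in 1:100) tau <- mh_update_tau(tau, log_post_tau)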
Value

BetaHat: Posterior mean of \beta.

LeftCI: Lower bound of the 100(1 - alpha)% credible interval for \beta.

RightCI: Upper bound of the 100(1 - alpha)% credible interval for \beta.

Sigma2Hat: Posterior mean of \sigma^2.

TauHat: Posterior mean of \tau.

LambdaHat: Posterior mean of \lambda_{j}, j = 1, 2, ..., p.

BetaSamples: Samples from the posterior of \beta.

LambdaSamples: \lambda samples drawn by the rejection sampler.

TauSamples: \tau samples drawn by the MH algorithm.

Sigma2Samples: Samples from the posterior of \sigma^2.
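Using the result object from the Examples below, the point summaries can be recomputed from the raw samples. A sketch, assuming BetaSamples is an iter x p matrix with one row per draw and that the reported interval is quantile-based (both assumptions, not checked against the package internals):

# Posterior mean of beta recomputed from the samples (should match BetaHat).
beta_hat <- colMeans(result$BetaSamples)
# Quantile-based 95% interval; assumes LeftCI/RightCI are formed this way.
ci <- apply(result$BetaSamples, 2, stats::quantile, probs = c(0.025, 0.975))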
References
Johndrow, J., Orenstein, P., & Bhattacharya, A. (2020). Scalable Approximate MCMC Algorithms for the Horseshoe Prior. Journal of Machine Learning Research, 21, 1-61.
Examples
# Generate simulation data.
set.seed(123)
N <- 50
p <- 100
true_beta <- c(rep(1, 10), rep(0, 90))
X <- matrix(stats::rnorm(N * p), nrow = N, ncol = p) # Design matrix X.
e <- stats::rnorm(N, mean = 0, sd = 2)               # Error term e.
y <- as.vector(X %*% true_beta + e)                  # Response vector y.
# Run exact_horseshoe
result <- exact_horseshoe(y, X, burn = 0, iter = 100)
# Posterior mean of beta.
betahat <- result$BetaHat
# Lower bound of the 95% credible interval.
leftCI <- result$LeftCI
# Upper bound of the 95% credible interval.
rightCI <- result$RightCI
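# A quick check of the fit against the simulated truth, comparing the
# posterior means to true_beta and counting how many intervals cover it
# (a diagnostic added here for illustration, not part of the package output):
mse <- mean((betahat - true_beta)^2)
coverage <- mean(leftCI <= true_beta & true_beta <= rightCI)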