exact_horseshoe {Mhorseshoe}    R Documentation
Run exact MCMC algorithm for horseshoe prior
Description
This function implements the exact MCMC algorithm introduced in Section 2.1 of Johndrow et al. (2020). The algorithm is a horseshoe estimator that updates the global shrinkage parameter \tau with a Metropolis-Hastings step and uses blocked Gibbs sampling for (\tau, \beta, \sigma). The local shrinkage parameters \lambda_{j},\ j = 1,2,...,p are updated by a rejection sampler.
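The \tau update described above can be sketched as a random-walk Metropolis-Hastings step on log \tau. This is an illustrative toy, not the package's internal code: log_post below is a stand-in for the (unnormalized) marginal log-posterior of \tau, and the proposal scale plays the role the tuning argument s plays in this function.

```r
set.seed(1)
# Stand-in target: in the real sampler this would be the marginal
# log-posterior of tau given the data (assumption for illustration).
log_post <- function(log_tau) dnorm(log_tau, mean = 0, sd = 1, log = TRUE)

s <- 0.8        # proposal scale (analogous to the `s` argument)
log_tau <- 0    # start from tau = 1, matching the default initial value
for (t in 1:1000) {
  prop <- log_tau + rnorm(1, mean = 0, sd = s)   # propose on the log scale
  # Accept with probability min(1, posterior ratio)
  if (log(runif(1)) < log_post(prop) - log_post(log_tau)) {
    log_tau <- prop
  }
}
tau <- exp(log_tau)   # tau stays positive by construction
```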
Usage
exact_horseshoe(
X,
y,
burn = 1000,
iter = 5000,
a = 1/5,
b = 10,
s = 0.8,
tau = 1,
sigma2 = 1,
w = 1,
alpha = 0.05
)
Arguments
X: Design matrix, X \in R^{N \times p}.
y: Response vector, y \in R^{N}.
burn: Number of burn-in samples. Default is 1000.
iter: Number of samples to be drawn from the posterior. Default is 5000.
a: Parameter of the rejection sampler; it is recommended to leave it at the default value, a = 1/5.
b: Parameter of the rejection sampler; it is recommended to leave it at the default value, b = 10.
s: Tuning parameter of the Metropolis-Hastings proposal for \tau. Default is 0.8.
tau: Initial value of the global shrinkage parameter \tau. Default is 1.
sigma2: Initial value of the error variance \sigma^2. Default is 1.
w: Parameter of the gamma prior for \sigma^2. Default is 1.
alpha: 100(1 - alpha)% credible intervals are computed. Default is 0.05.
Details
See Mhorseshoe or browseVignettes("Mhorseshoe").
Value
BetaHat: Posterior mean of \beta.
LeftCI: Lower bound of the 100(1 - alpha)% credible interval for \beta.
RightCI: Upper bound of the 100(1 - alpha)% credible interval for \beta.
Sigma2Hat: Posterior mean of \sigma^2.
TauHat: Posterior mean of \tau.
LambdaHat: Posterior mean of \lambda_{j},\ j = 1,2,...,p.
BetaSamples: Samples from the posterior of \beta.
LambdaSamples: \lambda samples drawn by the rejection sampler.
TauSamples: \tau samples drawn by the Metropolis-Hastings algorithm.
Sigma2Samples: Samples from the posterior of \sigma^2.
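The point summaries are simple functionals of the stored samples. The sketch below shows the relationship on dummy draws; that the draws form an iter-by-p matrix like BetaSamples is an assumption made for illustration.

```r
# Illustrative only: dummy posterior draws stand in for result$BetaSamples.
BetaSamples <- matrix(stats::rnorm(200), nrow = 100, ncol = 2)

BetaHat <- colMeans(BetaSamples)                                  # posterior mean
LeftCI  <- apply(BetaSamples, 2, stats::quantile, probs = 0.025)  # lower 95% bound
RightCI <- apply(BetaSamples, 2, stats::quantile, probs = 0.975)  # upper 95% bound
```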
References
Johndrow, J., Orenstein, P., & Bhattacharya, A. (2020). Scalable approximate MCMC algorithms for the horseshoe prior. Journal of Machine Learning Research, 21.
Examples
# Making simulation data.
set.seed(123)
N <- 50
p <- 100
true_beta <- c(rep(1, 10), rep(0, 90))

# Design matrix X.
X <- matrix(stats::rnorm(N * p), nrow = N, ncol = p)

# Response vector y = X %*% true_beta + error.
e <- stats::rnorm(N, mean = 0, sd = 2)
y <- as.vector(X %*% true_beta + e)

# Run the exact MCMC algorithm.
result <- exact_horseshoe(X, y, burn = 0, iter = 100)

# Posterior mean of beta.
betahat <- result$BetaHat

# Lower and upper bounds of the 95% credible interval.
leftCI <- result$LeftCI
rightCI <- result$RightCI
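One common, informal use of the credible intervals is to flag coefficients whose interval excludes zero. This rule is not a package function; it is shown here on dummy intervals so the snippet runs on its own.

```r
# Dummy 95% credible intervals for three coefficients (illustration only;
# in practice these would come from result$LeftCI and result$RightCI).
leftCI  <- c(0.4, -0.2, -1.1)
rightCI <- c(1.6,  0.3, -0.5)

# A coefficient is flagged when its interval lies entirely above or below 0.
selected <- which(leftCI > 0 | rightCI < 0)
```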