dist.Horseshoe {LaplacesDemon}  R Documentation 
Horseshoe Distribution
Description
This is the density function and random generation from the horseshoe distribution.
Usage
dhs(x, lambda, tau, log=FALSE)
rhs(n, lambda, tau)
Arguments
n 
This is the number of draws from the distribution. 
x 
This is a location vector at which to evaluate density. 
lambda 
This vector is a positive-only local scale parameter.

tau 
This scalar is a positive-only global scale parameter.

log 
Logical. If log=TRUE, then the logarithm of the density is returned.
Details
Application: Multivariate Scale Mixture
Density: (see below)
Inventor: Carvalho et al. (2008)
Notation 1:
\theta \sim \mathcal{HS}(\lambda, \tau)
Notation 2:
p(\theta) = \mathcal{HS}(\theta \mid \lambda, \tau)
Parameter 1: local scale
\lambda > 0
Parameter 2: global scale
\tau > 0
Mean:
E(\theta)
Variance:
var(\theta)
Mode:
mode(\theta)
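The marginal density has no closed form, which is why it is listed above as "(see below)". Following Carvalho et al. (2008), the horseshoe is defined hierarchically as a normal scale mixture with a half-Cauchy local scale; this is a sketch of the standard hierarchy (note that in dhs and rhs, lambda and tau are supplied directly rather than drawn):

\theta \mid \lambda, \tau \sim \mathcal{N}(0, \lambda^2 \tau^2), \qquad \lambda \sim \mathcal{C}^{+}(0, 1), \qquad \tau > 0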
The horseshoe distribution (Carvalho et al., 2008) is a heavy-tailed mixture distribution that can be considered a variance mixture, and it is in the family of multivariate scale mixtures of normals.
The horseshoe distribution was proposed as a prior distribution, and recommended as a default choice for shrinkage priors in the presence of sparsity. Horseshoe priors are most appropriate in large-p models where dimension reduction is necessary to avoid overly complex models that predict poorly, and they also perform well in estimating a sparse covariance matrix via Cholesky decomposition (Carvalho et al., 2009).
When the parameters in variable selection are assumed to be sparse, meaning that most elements are zero or nearly zero, a horseshoe prior is a desirable alternative to the Laplace-distributed parameters in the LASSO, or the parameterization in ridge regression. When the true value is far from zero, the horseshoe prior leaves the parameter unshrunk, yet it is accurate in shrinking parameters that are truly zero or near-zero. Parameters near zero are shrunk more than parameters far from zero, so parameters far from zero remain closer to their true values. The horseshoe prior is therefore valuable in discriminating signal from noise.
By replacing the Laplace-distributed parameters in LASSO with horseshoe-distributed parameters and including a global scale, the result is called horseshoe regression.
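Because the horseshoe is a normal scale mixture with half-Cauchy local scales, draws can also be sketched in base R. This is an illustrative sketch, assuming the theta | lambda, tau ~ N(0, (lambda*tau)^2) parameterization described above; rhalfcauchy is the half-Cauchy generator from LaplacesDemon:

```r
library(LaplacesDemon)

## Horseshoe draws as a normal scale mixture (sketch):
n      <- 1000
tau    <- 1                        # global scale, tau > 0
lambda <- rhalfcauchy(n, 1)        # local scales, lambda ~ C+(0,1)
theta  <- rnorm(n, 0, lambda*tau)  # theta | lambda, tau ~ N(0, (lambda*tau)^2)

## For comparison, the package generator with the same scales:
theta2 <- rhs(n, lambda=lambda, tau=tau)
```

Both vectors are draws conditional on the same local and global scales, so their empirical distributions should agree closely: sharply peaked at zero with heavy tails.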
Value
dhs gives the density, and rhs generates random deviates.
References
Carvalho, C.M., Polson, N.G., and Scott, J.G. (2008). "The Horseshoe Estimator for Sparse Signals". Discussion Paper 200831. Duke University Department of Statistical Science.
Carvalho, C.M., Polson, N.G., and Scott, J.G. (2009). "Handling Sparsity via the Horseshoe". Journal of Machine Learning Research, 5, pp. 73–80.
Examples
library(LaplacesDemon)
x <- rnorm(100)
lambda <- rhalfcauchy(100, 5)
tau <- 5
x <- dhs(x, lambda, tau, log=TRUE)
x <- rhs(100, lambda=lambda, tau=tau)
plot(density(x))