Sequential2Means {VsusP} | R Documentation
Variable selection using shrinkage priors :: Sequential2Means
Description
The Sequential2Means function takes as input X, the design matrix; Y, the response vector; and b.i, a vector of tuning parameter values for the Sequential 2-means (S2M) variable selection algorithm. It returns a list S2M holding p, the total number of variables; b.i, the tuning parameter values; H.b.i, the estimated number of signals corresponding to each b.i; and abs.post.median, the medians of the absolute values of the posterior samples.
Usage
Sequential2Means(
  X,
  Y,
  b.i,
  prior = "horseshoe+",
  n.samples = 5000,
  burnin = 2000
)
Arguments
X |
Design matrix of dimension n x p, where n is the number of data points and p is the number of features |
Y |
Response vector of dimension n x 1 |
b.i |
Vector of tuning parameter values for the Sequential 2-means (S2M) variable selection algorithm; its length is chosen by the user. |
prior |
Shrinkage prior distribution over the regression coefficients Beta (string). Available options are ridge regression: prior = "rr" or prior = "ridge"; lasso regression: prior = "lasso"; horseshoe regression: prior = "hs" or prior = "horseshoe"; and horseshoe+ regression: prior = "hs+" or prior = "horseshoe+" |
n.samples |
Number of posterior samples to generate (numeric) |
burnin |
Number of burn-in samples (numeric) |
Value
A list S2M holding Beta, b.i, and H.b.i.
Beta |
N by p matrix consisting of N posterior samples of p variables |
b.i |
The user-specified vector of tuning parameter values |
H.b.i |
The estimated number of signals (numeric) corresponding to each b.i |
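The 2-means idea behind H.b.i can be illustrated in a few lines of base R. This is only a sketch of the clustering step, not VsusP's exact S2M routine, and the helper name count_signals_2means is hypothetical: given one vector of absolute coefficient values, cluster it into two groups and count the members of the group with the larger center.

```r
# Illustrative sketch only -- not VsusP's exact S2M implementation.
# Count "signals" in a vector of absolute coefficient values by
# 2-means clustering and taking the size of the cluster whose
# center is larger. count_signals_2means is a hypothetical helper.
count_signals_2means <- function(abs.beta) {
  km <- stats::kmeans(abs.beta, centers = 2, nstart = 10)
  big <- which.max(km$centers)  # index of the cluster with the larger mean
  sum(km$cluster == big)        # how many coefficients fall in it
}

set.seed(1)
abs.beta <- c(0.01, 0.03, 0.02, 1.9, 2.4)  # three near-zero, two large
count_signals_2means(abs.beta)  # 2
```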
References
Makalic, E. & Schmidt, D. F. (2016). High-Dimensional Bayesian Regularised Regression with the BayesReg Package. arXiv:1611.06649.
Li, H. & Pati, D. (2017). Variable selection using shrinkage priors. Computational Statistics & Data Analysis, 107, 107-119.
Examples
# -----------------------------------------------------------------
# Example 1: Gaussian Model and Horseshoe prior
n <- 10
p <- 5
X <- matrix(rnorm(n * p), n, p)
beta <- exp(rnorm(p))
Y <- as.vector(X %*% beta + rnorm(n, 0, 1))
b.i <- seq(0, 1, 0.05)
# Sequential2Means with the horseshoe+ prior using Gibbs sampling
# recommended n.samples is 5000 and burnin is 2000; smaller values
# are used here to keep the example fast
S2M <- Sequential2Means(X, Y, b.i, "horseshoe+", 110, 100)
Beta <- S2M$Beta
H.b.i <- S2M$H.b.i
# -----------------------------------------------------------------
# Example 2: Gaussian Model and ridge prior
n <- 10
p <- 5
X <- matrix(rnorm(n * p), n, p)
beta <- exp(rnorm(p))
Y <- as.vector(X %*% beta + rnorm(n, 0, 1))
b.i <- seq(0, 1, 0.05)
# Sequential2Means with the ridge prior using Gibbs sampling
# recommended n.samples is 5000 and burnin is 2000; smaller values
# are used here to keep the example fast
S2M <- Sequential2Means(X, Y, b.i, "ridge", 110, 100)
Beta <- S2M$Beta
H.b.i <- S2M$H.b.i
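After either example, the stability of H.b.i across the grid of b.i values can guide the final choice of model size. One common heuristic (a sketch, not a function provided by VsusP) is to take the H.b.i value that persists over the widest range of tuning values, i.e. its mode:

```r
# Illustrative heuristic, shown on a made-up H.b.i vector: pick the
# estimated number of signals as the H.b.i value that occurs most
# often across the grid of b.i values.
H.b.i <- c(5, 3, 2, 2, 2, 2, 1)
tab <- table(H.b.i)
n.signals <- as.integer(names(tab)[which.max(tab)])
n.signals  # 2
```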