MHLS {EAinference}    R Documentation
Metropolis-Hastings lasso sampler under a fixed active set.
Description
Metropolis-Hastings sampler to simulate from the sampling distribution of the lasso estimator given a fixed active set.
Usage
MHLS(X, PE, sig2, lbd, weights = rep(1, ncol(X)), B0, S0,
  A = which(B0 != 0), tau = rep(1, ncol(X)), niter = 2000, burnin = 0,
  PEtype = "coeff", updateS.itv = 1, verbose = FALSE, ...)
Arguments
X
predictor matrix.

PE, sig2, lbd
parameters of the target distribution: PE is a point estimate of beta (PEtype = "coeff") or of E(y) (PEtype = "mu"), sig2 is the estimated error variance, and lbd is the lasso penalty level.

weights
weight vector with length ncol(X). Default is rep(1, ncol(X)).

B0
numeric vector with length ncol(X). Initial value of the lasso coefficients.

S0
numeric vector with length ncol(X). Initial value of the subgradient.

A
numeric vector. Active coefficient index. Every active coefficient index in B0 must be included. Default is which(B0 != 0).

tau
numeric vector with length ncol(X). Standard deviation of the proposal distribution for each coefficient. Default is rep(1, ncol(X)).

niter
integer. The number of iterations. Default is 2000.

burnin
integer. The length of the burn-in period. Default is 0.

PEtype
type of PE; either "coeff" or "mu" (see the short illustration after this list).

updateS.itv
integer. Update the subgradient every updateS.itv iterations. Default is 1.

verbose
logical. If TRUE, print the progress at each step.

...
complementary arguments.
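The pairing of PE with PEtype can be made concrete with a small sketch. The lengths below simply mirror the Examples section; the objects are hypothetical placeholders, not package code.

n <- 10; p <- 5          # nrow(X) and ncol(X), as in the low-dimensional example
PE_coeff <- rep(0, p)    # PEtype = "coeff": point estimate of beta, one entry per column of X
PE_mu    <- rep(0, n)    # PEtype = "mu":    point estimate of E(y), one entry per row of X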
Details
Given an appropriate initial value, MHLS provides Metropolis-Hastings samples under the fixed active set.
Starting from the initial values B0 and S0, MHLS draws samples of beta and of the subgradient subgrad.
At each iteration, given the t-th iteration values of beta and subgrad, a new pair of beta and subgradient is proposed. The proposal is either accepted and used as the (t+1)-th iteration values, or rejected, in which case the t-th iteration values are reused.
See Zhou (2014) for more details.
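The accept-or-reuse update is the standard Metropolis-Hastings step. The following is a rough, self-contained sketch of that step only, using a toy standard-normal target and a random-walk proposal, neither of which is the lasso target that MHLS actually samples from.

set.seed(1)
niter <- 1000
draws <- numeric(niter)
current <- 0                                   # initial state, in the role of (B0, S0)
for (t in seq_len(niter)) {
  proposal <- current + rnorm(1, sd = 0.5)     # propose a new state
  logratio <- dnorm(proposal, log = TRUE) - dnorm(current, log = TRUE)
  if (log(runif(1)) < logratio) current <- proposal  # accept the proposal ...
  draws[t] <- current                          # ... or reuse the t-th value
}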
Value
MHLS returns an object of class "MHLS".
The functions summary.MHLS and plot.MHLS provide a brief summary and generate plots.

beta
lasso samples.

subgrad
subgradient samples.

acceptHistory
numbers of acceptances and proposals.

niter, burnin, PE, type
same as the function arguments.
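A short sketch of inspecting the returned object; it reuses the low-dimensional setup from the Examples section, and the component names follow the list above.

set.seed(123)
n <- 10; p <- 5
X <- matrix(rnorm(n * p), n)
Y <- X %*% rep(1, p) + rnorm(n)
fit <- lassoFit(X = X, Y = Y, lbd = .37, type = "lasso", weights = rep(1, p))
mh <- MHLS(X = X, PE = rep(0, p), sig2 = 1, lbd = 1, weights = rep(1, p),
           B0 = fit$B0, S0 = fit$S0, niter = 50, burnin = 0, PEtype = "coeff")
summary(mh)        # dispatches to summary.MHLS
plot(mh)           # dispatches to plot.MHLS
str(mh$beta)       # lasso samples
mh$acceptHistory   # numbers of acceptances and proposals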
References
Zhou, Q. (2014), "Monte Carlo simulation for Lasso-type problems by estimator augmentation," Journal of the American Statistical Association, 109, 1495-1516.
Examples
#-------------------------
# Low dim
#-------------------------
set.seed(123)
n <- 10
p <- 5
X <- matrix(rnorm(n * p), n)
Y <- X %*% rep(1, p) + rnorm(n)
sigma2 <- 1
lbd <- .37
weights <- rep(1, p)
LassoResult <- lassoFit(X = X, Y = Y, lbd = lbd, type = "lasso", weights = weights)
B0 <- LassoResult$B0
S0 <- LassoResult$S0
MHLS(X = X, PE = rep(0, p), sig2 = 1, lbd = 1,
weights = weights, B0 = B0, S0 = S0, niter = 50, burnin = 0,
PEtype = "coeff")
MHLS(X = X, PE = rep(0, n), sig2 = 1, lbd = 1,
weights = weights, B0 = B0, S0 = S0, niter = 50, burnin = 0,
PEtype = "mu")
#-------------------------
# High dim
#-------------------------
set.seed(123)
n <- 5
p <- 10
X <- matrix(rnorm(n * p), n)
Y <- X %*% rep(1, p) + rnorm(n)
weights <- rep(1, p)
lbd <- .37
LassoResult <- lassoFit(X = X, Y = Y, lbd = lbd, type = "lasso", weights = weights)
B0 <- LassoResult$B0
S0 <- LassoResult$S0
MHLS(X = X, PE = rep(0, p), sig2 = 1, lbd = 1,
weights = weights, B0 = B0, S0 = S0, niter = 50, burnin = 0,
PEtype = "coeff")
MHLS(X = X, PE = rep(0, n), sig2 = 1, lbd = 1,
weights = weights, B0 = B0, S0 = S0, niter = 50, burnin = 0,
PEtype = "mu")