sven {bravo}  R Documentation 
Selection of variables with embedded screening using Bayesian methods (SVEN) in Gaussian linear models (ultrahigh, high or low dimensional).
Description
SVEN is an approach to selecting variables with embedded screening using a Bayesian hierarchical model. It is also a variable selection method in the spirit of the stochastic shotgun search algorithm. However, by embedding a unique model-based screening step and using fast Cholesky updates, SVEN produces a highly scalable algorithm that explores gigantic model spaces and rapidly identifies the regions of high posterior probability. It outputs the log (unnormalized) posterior probabilities of a set of best (highest probability) models. For more details, see Li et al. (2023, https://doi.org/10.1080/10618600.2022.2074428).
Usage
sven(
  X,
  y,
  w = NULL,
  lam = NULL,
  Ntemp = 10,
  Tmax = NULL,
  Miter = 50,
  wam.threshold = 0.5,
  log.eps = 16,
  L = 20,
  verbose = FALSE
)
Arguments
X 
The n x p covariate matrix without intercept. X may also be a list of two matrices with the same number of rows (see Details). 
y 
The response vector of length n. 
w 
The prior inclusion probability of each variable. Default: NULL, whence it is set as sqrt(n)/p. 
lam 
The slab precision parameter. Default: NULL, whence it is set as n/p^2. 
Ntemp 
The number of temperatures. Default: 10. 
Tmax 
The maximum temperature. Default: NULL, whence it is set guided by the asymptotic posterior model selection consistency results of Li et al. (2023) (see Details). 
Miter 
The number of iterations per temperature. Default: 50. 
wam.threshold 
The threshold probability to select the covariates for WAM. A covariate will be included in WAM if its corresponding marginal inclusion probability is greater than the threshold. Default: 0.5. 
log.eps 
The tolerance used to choose the number of top models. See Details. Default: 16. 
L 
The minimum number of neighboring models screened. Default: 20. 
verbose 
If TRUE, the progress of the algorithm (the current temperature level) is printed. Default: FALSE. 
Details
SVEN is developed based on a hierarchical Gaussian linear model with priors placed on the regression coefficients as well as on the model space as follows:
y \mid X, \beta_0, \beta, \gamma, \sigma^2, w, \lambda \sim N(\beta_0 1 + X_\gamma \beta_\gamma, \sigma^2 I_n)
\beta_i \mid \beta_0, \gamma, \sigma^2, w, \lambda \stackrel{indep.}{\sim} N(0, \gamma_i \sigma^2/\lambda),~i=1,\ldots,p,
(\beta_0, \sigma^2) \mid \gamma, w, \lambda \sim p(\beta_0, \sigma^2) \propto 1/\sigma^2
\gamma_i \mid w, \lambda \stackrel{iid}{\sim} Bernoulli(w)
where X_\gamma is the n \times |\gamma| submatrix of X consisting of those columns of X for which \gamma_i = 1 and, similarly, \beta_\gamma is the |\gamma| subvector of \beta corresponding to \gamma, with |\gamma| = \sum_{i=1}^p \gamma_i.
Degenerate spike priors on inactive variables and Gaussian slab priors on active covariates make the posterior probability (up to a normalizing constant) of a model P(\gamma \mid y) available in explicit form (Li et al., 2023).
The variable selection starts from an empty model and updates the model according to the posterior probabilities of its neighboring models for a prespecified number of iterations. In each iteration, the models with small probabilities are screened out in order to quickly identify the regions of high posterior probability. A temperature schedule is used to facilitate exploration of models separated by valleys in the posterior probability function, thus mitigating the posterior multimodality associated with variable selection models; a schematic illustration follows. The default maximum temperature is guided by the asymptotic posterior model selection consistency results in Li et al. (2023).
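To build intuition for the temperature schedule, the following R snippet is an illustration only (not bravo's internal code; logpost and temps are hypothetical): raising unnormalized posterior probabilities to the power 1/t flattens the model landscape at higher temperatures, which makes it easier to move between well-separated modes.

logpost <- c(-2, -15, -3, -40)  # log unnormalized posteriors of four models
temps <- c(1, 2, 5, 10)         # a ladder from t = 1 up to a maximum temperature
for (t in temps) {
  lp <- logpost / t                # temper the log posterior
  pr <- exp(lp - max(lp))          # exponentiate stably
  pr <- pr / sum(pr)               # normalize
  cat("t =", t, ":", round(pr, 3), "\n")
}

At t = 1 nearly all mass sits on the best model; at the highest temperature the distribution is much flatter, so a sampler can cross the valleys between modes.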
SVEN provides the maximum a posteriori (MAP) model as well as the weighted average model (WAM). The WAM is obtained in the following way: (1) keep the best (highest probability) K distinct models \gamma^{(1)}, \ldots, \gamma^{(K)} with
\log P(\gamma^{(1)} \mid y) \ge \cdots \ge \log P(\gamma^{(K)} \mid y),
where K is chosen so that
\log \{ P(\gamma^{(K)} \mid y) / P(\gamma^{(1)} \mid y) \} > -log.eps;
(2) assign the weights
w_i = P(\gamma^{(i)} \mid y) / \sum_{k=1}^K P(\gamma^{(k)} \mid y)
to the model \gamma^{(i)}; (3) define the approximate marginal inclusion probability of the jth variable as
\hat\pi_j = \sum_{k=1}^K w_k I(\gamma^{(k)}_j = 1).
The WAM is then defined as the model containing the variables j with \hat\pi_j > wam.threshold; steps (1)-(3) are sketched in code below. SVEN also provides all of the top K models, stored in a p \times K sparse matrix, along with their corresponding log (unnormalized) posterior probabilities.
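In code, steps (1)-(3) reduce to a stable log-sum-exp over the retained models. A minimal sketch, assuming hypothetical inputs logp (log unnormalized posteriors of the K retained models, sorted decreasingly) and gamma.top (a p x K 0/1 inclusion matrix analogous to model.top):

K <- 3; p <- 5
logp <- c(-10, -11, -13)       # log P(gamma^(k) | y), unnormalized
gamma.top <- matrix(0, p, K)   # p x K inclusion indicators
gamma.top[1:3, 1] <- 1         # top model: variables 1-3
gamma.top[1:2, 2] <- 1         # second model: variables 1-2
gamma.top[c(1, 4), 3] <- 1     # third model: variables 1 and 4
w.k <- exp(logp - max(logp))   # exponentiate stably
w.k <- w.k / sum(w.k)          # model weights w_k sum to 1
pi.hat <- as.numeric(gamma.top %*% w.k)  # approximate marginal inclusion probabilities
which(pi.hat > 0.5)            # the WAM with wam.threshold = 0.5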
When X is a list of two matrices, say, W and Z, the above method is extended to an (ncol(W) + ncol(Z))-dimensional regression. However, the hyperparameters lam and w are chosen separately for the two matrices, the default values being nrow(W)/ncol(W)^2 and nrow(Z)/ncol(Z)^2 for lam, and sqrt(nrow(W))/ncol(W) and sqrt(nrow(Z))/ncol(Z) for w.
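For concreteness, these separate defaults can be written out directly (a sketch with hypothetical matrices W and Z):

W <- matrix(rnorm(50 * 100), 50, 100)  # hypothetical first matrix
Z <- matrix(rnorm(50 * 20), 50, 20)    # hypothetical second matrix
c(nrow(W) / ncol(W)^2, nrow(Z) / ncol(Z)^2)          # default lam for W and Z
c(sqrt(nrow(W)) / ncol(W), sqrt(nrow(Z)) / ncol(Z))  # default w for W and Z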
The marginal inclusion probabilities can be extracted using the function mip.sven.
Value
A list with components
model.map 
A vector of indices corresponding to the selected variables in the MAP model. 
model.wam 
A vector of indices corresponding to the selected variables in the WAM. 
model.top 
A sparse matrix storing the top models. 
beta.map 
The ridge estimator of regression coefficients in the MAP model. 
beta.wam 
The ridge estimator of regression coefficients in the WAM. 
mip.map 
The marginal inclusion probabilities of the variables in the MAP model. 
mip.wam 
The marginal inclusion probabilities of the variables in the WAM. 
pprob.map 
The log (unnormalized) posterior probability corresponding to the MAP model. 
pprob.top 
A vector of the log (unnormalized) posterior probabilities corresponding to the top models. 
stats 
Additional statistics. 
Author(s)
Dongjin Li, Debarshi Chakraborty, and Somak Dutta
Maintainer:
Dongjin Li <liyangxiaobei@gmail.com>
References
Li, D., Dutta, S., and Roy, V. (2023). Model based screening embedded Bayesian variable selection for ultrahigh dimensional settings. Journal of Computational and Graphical Statistics, 32(1), 61-73.
See Also
[mip.sven()] for marginal inclusion probabilities, [predict.sven()] (via [predict()]) for prediction.
Examples
n <- 50; p <- 100; nonzero <- 3
trueidx <- 1:3
truebeta <- c(4, 5, 6)
X <- matrix(rnorm(n * p), n, p) # n x p covariate matrix
y <- 0.5 + X[, trueidx] %*% truebeta + rnorm(n)
res <- sven(X = X, y = y)
res$model.map # the MAP model
Z <- matrix(rnorm(n * p), n, p) # another covariate matrix
y2 <- 0.5 + X[, trueidx] %*% truebeta + Z[, 1:2] %*% c(2, 2) + rnorm(n)
res2 <- sven(X = list(X, Z), y = y2)
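res2$model.map # the MAP model over the combined ncol(X) + ncol(Z) predictors
# Marginal inclusion probabilities can be extracted with mip.sven (see 'See Also');
# assuming it accepts the fitted object:
# mip.sven(res)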