sEM {hdiVAR}    R Documentation

Sparse expectation-maximization algorithm for high-dimensional vector autoregression with measurement error

Description

Alternates between the expectation step (via Kalman filter and smoothing) and the maximization step (via generalized Dantzig selector for the transition matrix) to estimate the transition matrix and the error variances.
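
For intuition, the following is a minimal univariate (p = 1) sketch of this alternation. It is illustrative only, not the package implementation: the E-step runs a Kalman filter with Rauch-Tung-Striebel smoothing, the M-step uses closed-form least-squares updates in place of the generalized Dantzig selector, and all names (em_ar1_sketch and its variables) are hypothetical.

em_ar1_sketch = function(y, a = 0.5, s2_eta = 0.1, s2_eps = 0.1,
                         n_em = 50, thres = 1e-4) {
  Ti = length(y)
  for (it in 1:n_em) {
    # E-step: Kalman filter (predicted and filtered moments)
    m_p = v_p = m_f = v_f = numeric(Ti)
    m = 0; v = s2_eta / max(1 - a^2, 1e-8)  # guarded stationary start
    for (t in 1:Ti) {
      m_p[t] = a * m; v_p[t] = a^2 * v + s2_eta
      k = v_p[t] / (v_p[t] + s2_eps)  # Kalman gain
      m_f[t] = m_p[t] + k * (y[t] - m_p[t]); v_f[t] = (1 - k) * v_p[t]
      m = m_f[t]; v = v_f[t]
    }
    # E-step: RTS smoother (smoothed moments and lag-one covariances)
    m_s = m_f; v_s = v_f; c_s = numeric(Ti)  # c_s[t] = Cov(x_t, x_{t-1} | Y)
    for (t in (Ti - 1):1) {
      j = a * v_f[t] / v_p[t + 1]
      m_s[t] = m_f[t] + j * (m_s[t + 1] - m_p[t + 1])
      v_s[t] = v_f[t] + j^2 * (v_s[t + 1] - v_p[t + 1])
      c_s[t + 1] = j * v_s[t + 1]
    }
    # M-step: closed-form updates (least squares instead of Dantzig selector)
    Exx = v_s + m_s^2                               # E[x_t^2 | Y]
    Exy = c_s[2:Ti] + m_s[2:Ti] * m_s[1:(Ti - 1)]   # E[x_t x_{t-1} | Y]
    a_new = sum(Exy) / sum(Exx[1:(Ti - 1)])
    s2_eta = mean(Exx[2:Ti] - 2 * a_new * Exy + a_new^2 * Exx[1:(Ti - 1)])
    s2_eps = mean((y - m_s)^2 + v_s)
    if (abs(a_new - a) < thres) { a = a_new; break }  # vanishing updates
    a = a_new
  }
  list(a_est = a, sig2_eta_hat = s2_eta, sig2_epsilon_hat = s2_eps)
}

em_ar1_sketch(rnorm(100))  # toy call on white noise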

Usage

sEM(
  Y,
  A_init,
  sig2_eta_init,
  sig2_epsilon_init,
  Ti_train = NULL,
  Ti_gap = NULL,
  tol_seq = NULL,
  ht_seq = 0,
  is_cv = TRUE,
  thres = 0.001,
  count_vanish = 1,
  n_em = NULL,
  is_echo = FALSE,
  is_sparse = TRUE
)

Arguments

Y

observations of the time series, a p by T matrix.

A_init

a p by p matrix; initial value of the transition matrix A estimate.

sig2_eta_init

scalar; initial value of the estimate of propagation error variance \sigma_\eta^2 in the latent signal process.

sig2_epsilon_init

scalar; initial value of the estimate of measurement error variance \sigma_\epsilon^2 in the observation process.

Ti_train

scalar; the number of time points in the training data for cross-validation.

Ti_gap

scalar; the number of time points between the test data and the training data in cross-validation.

tol_seq

vector; grid of tolerance parameters for the Dantzig selector in cross-validation. If is_cv=FALSE, only the first element is used.

ht_seq

vector; grid of hard-thresholding levels for the transition matrix estimate. If is_cv=FALSE, only the first element is used. To avoid hard thresholding, set ht_seq=0.

is_cv

logical; if TRUE, run cross-validation to tune the Dantzig selector tolerance parameter at each sparse EM iteration.

thres

scalar; if the difference between the updates of two consecutive iterations is less than thres, record one hit. The algorithm terminates due to vanishing updates once the accumulated number of hits reaches count_vanish; see the sketch after this list. If thres=NULL, the algorithm is never terminated for vanishing updates, only for reaching the iteration limit.

count_vanish

scalar; once the difference between the updates of two consecutive iterations has been less than thres for count_vanish hits, the algorithm terminates due to vanishing updates.

n_em

scalar; the maximum number of EM iterations allowed, beyond which the algorithm terminates due to too many iterations. If n_em=NULL, the algorithm is never terminated for too many iterations, only for vanishing updates.

is_echo

logical; if TRUE, display the CV-optimal (tol, ht) at each iteration, and information about algorithm termination.

is_sparse

logical; if FALSE, use the standard EM algorithm; the cross-validation arguments are then not needed.
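
For illustration, here is a minimal sketch of how thres, count_vanish, and n_em could interact as stopping rules; the update of A_new is a hypothetical stand-in, not the package's EM step.

p = 3; thres = 0.001; count_vanish = 1; n_em = 100
A_old = diag(0.5, p); n_hit = 0
for (it in 1:n_em) {
  A_new = A_old + 0.5^it * diag(1, p)      # stand-in for one EM update of A
  iter_err = max(abs(A_new - A_old))       # difference between consecutive updates
  if (iter_err < thres) n_hit = n_hit + 1  # record one hit
  A_old = A_new
  if (n_hit >= count_vanish) break         # terminate: vanishing updates
}
if (it == n_em && n_hit < count_vanish) print("terminated: too many iterations")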

Value

a list of parameter estimates.

A_est estimate of the transition matrix A.
sig2_eta_hat estimate of the propagation error variance \sigma_\eta^2.
sig2_epsilon_hat estimate of the measurement error variance \sigma_\epsilon^2.
iter_err difference between the updates of two consecutive iterations.
iter_err_ratio relative difference (over the previous estimate) between the updates of two consecutive iterations.

Author(s)

Xiang Lyu, Jian Kang, Lexin Li

Examples

p = 3; Ti = 20  # dimension and number of time points
A = diag(1, p)  # transition matrix
sig_eta = sig_epsilon = 0.2  # error standard deviations
Y = array(0, dim = c(p, Ti))  # observations, t = 1, ..., Ti
X = array(0, dim = c(p, Ti))  # latent signals, t = 1, ..., Ti
Ti_burnin = 30  # burn-in time before the learning period
for (t in 1:(Ti + Ti_burnin)) {
  if (t == 1) {
    x1 = rnorm(p)
  } else if (t <= Ti_burnin) {  # burn-in
    x1 = A %*% x1 + rnorm(p, mean = 0, sd = sig_eta)
  } else if (t == (Ti_burnin + 1)) {  # first time point used for learning
    X[, t - Ti_burnin] = x1
    Y[, t - Ti_burnin] = X[, t - Ti_burnin] + rnorm(p, mean = 0, sd = sig_epsilon)
  } else {
    X[, t - Ti_burnin] = A %*% X[, t - 1 - Ti_burnin] + rnorm(p, mean = 0, sd = sig_eta)
    Y[, t - Ti_burnin] = X[, t - Ti_burnin] + rnorm(p, mean = 0, sd = sig_epsilon)
  }
}

sEM_fit = sEM(Y, diag(0.5, p), 0.1, 0.1, Ti * 0.5, Ti * 0.2, c(0.01, 0.1))
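
# The returned components can be inspected via the names documented in Value:
max(abs(sEM_fit$A_est - A))  # entrywise error of the transition matrix estimate
sEM_fit$sig2_eta_hat         # estimated propagation error variance
sEM_fit$sig2_epsilon_hat     # estimated measurement error variance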



[Package hdiVAR version 1.0.2 Index]