predict.varlse {bvhar}    R Documentation

Forecasting Multivariate Time Series

Description

Forecasts multivariate time series using the given model.

Usage

## S3 method for class 'varlse'
predict(object, n_ahead, level = 0.05, ...)

## S3 method for class 'vharlse'
predict(object, n_ahead, level = 0.05, ...)

## S3 method for class 'bvarmn'
predict(object, n_ahead, n_iter = 100L, level = 0.05, ...)

## S3 method for class 'bvharmn'
predict(object, n_ahead, n_iter = 100L, level = 0.05, ...)

## S3 method for class 'bvarflat'
predict(object, n_ahead, n_iter = 100L, level = 0.05, ...)

## S3 method for class 'bvarssvs'
predict(object, n_ahead, level = 0.05, ...)

## S3 method for class 'bvharssvs'
predict(object, n_ahead, level = 0.05, ...)

## S3 method for class 'bvarhs'
predict(object, n_ahead, level = 0.05, ...)

## S3 method for class 'bvharhs'
predict(object, n_ahead, level = 0.05, ...)

## S3 method for class 'bvarsv'
predict(object, n_ahead, level = 0.05, ...)

## S3 method for class 'bvharsv'
predict(object, n_ahead, level = 0.05, ...)

## S3 method for class 'predbvhar'
print(x, digits = max(3L, getOption("digits") - 3L), ...)

## S3 method for class 'predbvhar'
knit_print(x, ...)
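For instance, a least-squares VAR can be fitted and forecast as follows. This is a minimal sketch; it assumes the var_lm() fitting function and the etf_vix dataset shipped with bvhar:

library(bvhar)

# Fit a VAR(2) by least squares and forecast 10 steps ahead
fit <- var_lm(etf_vix, p = 2)
pred <- predict(fit, n_ahead = 10, level = 0.05)
pred$forecast     # point forecast matrix
pred$lower        # marginal lower confidence bounds
pred$upper_joint  # Bonferroni-adjusted upper bounds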

Arguments

object

Model object

n_ahead

Number of steps to forecast (forecast horizon)

level

Significance level alpha such that the confidence level is 100(1 - alpha) percent. By default, 0.05.

...

Not used

n_iter

Number of draws of the residual covariance matrix from the inverse-Wishart distribution. By default, 100.

x

predbvhar object

digits

Number of significant digits to print

Value

An object of class predbvhar with the following components:

process

object$process

forecast

forecast matrix

se

standard error matrix

lower

lower confidence interval

upper

upper confidence interval

lower_joint

lower CI adjusted (Bonferroni)

upper_joint

upper CI adjusted (Bonferroni)

y

object$y

n-step ahead forecasting VAR(p)

See p. 35 of Lütkepohl (2007). Consider h-step ahead forecasting (i.e., periods n + 1, ..., n + h).

Let y_{(n)}^T = (y_n^T, ..., y_{n - p + 1}^T, 1). Then one-step ahead (point) forecasting:

\hat{y}_{n + 1}^T = y_{(n)}^T \hat{B}

Recursively, let \hat{y}_{(n + 1)}^T = (\hat{y}_{n + 1}^T, y_n^T, ..., y_{n - p + 2}^T, 1). Then two-step ahead (point) forecasting:

\hat{y}_{n + 2}^T = \hat{y}_{(n + 1)}^T \hat{B}

Similarly, h-step ahead (point) forecasting:

\hat{y}_{n + h}^T = \hat{y}_{(n + h - 1)}^T \hat{B}
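This recursion translates directly into R. The following is a sketch only, with hypothetical inputs coef_mat (the (mp + 1) x m matrix \hat{B}, intercept in the last row) and y_obs (the T x m data matrix):

# Recursive h-step point forecast for a VAR(p)
forecast_var <- function(y_obs, coef_mat, p, h) {
  m <- ncol(y_obs)
  n <- nrow(y_obs)
  # y_(n)^T = (y_n^T, ..., y_{n - p + 1}^T, 1)
  last_obs <- c(t(y_obs[n:(n - p + 1), , drop = FALSE]), 1)
  out <- matrix(NA_real_, h, m)
  for (i in seq_len(h)) {
    out[i, ] <- last_obs %*% coef_mat
    # shift: drop the oldest lag, prepend the new forecast
    last_obs <- c(out[i, ], last_obs[seq_len(m * (p - 1))], 1)
  }
  out
}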

What about the confidence region? The confidence interval at horizon h is

y_{k, t}(h) \pm z_{(\alpha / 2)} \sigma_k(h)

The joint 100(1 - \alpha)% forecast region can be computed by the Bonferroni method:

\{ (y_{k, 1}, \ldots, y_{k, h}) \mid y_{k, n}(i) - z_{(\alpha / 2h)} \sigma_k(i) \le y_{k, i} \le y_{k, n}(i) + z_{(\alpha / 2h)} \sigma_k(i), i = 1, \ldots, h \}

See p. 41 of Lütkepohl (2007).
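In R, the marginal and Bonferroni-adjusted critical values differ only in the tail probability; for example:

alpha <- 0.05
h <- 10
z_marginal <- qnorm(1 - alpha / 2)        # pointwise interval
z_joint    <- qnorm(1 - alpha / (2 * h))  # joint (Bonferroni) region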

Computing the forecast covariance matrix requires the VMA representation:

Y_{t}(h) = c + \sum_{i = h}^{\infty} W_{i} \epsilon_{t + h - i} = c + \sum_{i = 0}^{\infty} W_{h + i} \epsilon_{t - i}

Then

\Sigma_y(h) = MSE [ y_t(h) ] = \sum_{i = 0}^{h - 1} W_i \Sigma_{\epsilon} W_i^T = \Sigma_y(h - 1) + W_{h - 1} \Sigma_{\epsilon} W_{h - 1}^T
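Given the VMA matrices and the error covariance, the recursion accumulates one term per horizon. A sketch with hypothetical inputs w_list (a list holding W_0 = I_m, W_1, ..., W_{h - 1}) and sig_eps (\Sigma_\epsilon):

# MSE matrices Sigma_y(1), ..., Sigma_y(h) via the recursion above
mse_var <- function(w_list, sig_eps, h) {
  m <- nrow(sig_eps)
  out <- vector("list", h)
  acc <- matrix(0, m, m)
  for (i in seq_len(h)) {
    w <- w_list[[i]]  # W_{i - 1}
    acc <- acc + w %*% sig_eps %*% t(w)
    out[[i]] <- acc
  }
  out
}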

n-step ahead forecasting VHAR

Let T_{HAR} be the VHAR linear transformation matrix (see var_design_formulation). Since VHAR is a linearly transformed VAR(22), let y_{(n)}^T = (y_n^T, y_{n - 1}^T, ..., y_{n - 21}^T, 1).

Then one-step ahead (point) forecasting:

\hat{y}_{n + 1}^T = y_{(n)}^T T_{HAR} \hat{\Phi}

Recursively, let \hat{y}_{(n + 1)}^T = (\hat{y}_{n + 1}^T, y_n^T, ..., y_{n - 20}^T, 1). Then two-step ahead (point) forecasting:

\hat{y}_{n + 2}^T = \hat{y}_{(n + 1)}^T T_{HAR} \hat{\Phi}

and h-step ahead (point) forecasting:

\hat{y}_{n + h}^T = \hat{y}_{(n + h - 1)}^T T_{HAR} \hat{\Phi}
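Since the transformed design row stacks the last value, the 5-day average, and the 22-day average of each series, the one-step forecast can be sketched as below (phi_hat is a hypothetical (3m + 1) x m HAR coefficient matrix; y_obs needs at least 22 rows):

# One-step VHAR point forecast via the HAR aggregation
forecast_vhar_one <- function(y_obs, phi_hat) {
  n <- nrow(y_obs)
  daily   <- y_obs[n, ]
  weekly  <- colMeans(y_obs[(n - 4):n, , drop = FALSE])
  monthly <- colMeans(y_obs[(n - 21):n, , drop = FALSE])
  drop(c(daily, weekly, monthly, 1) %*% phi_hat)  # y_(n)^T T_HAR Phi-hat
}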

n-step ahead forecasting BVAR(p) with Minnesota prior

Point forecasts are computed from the posterior mean of the parameters. See Section 3 of Bańbura et al. (2010).

Let \hat{B} be the posterior MN mean and let \hat{V} be the posterior MN precision.

Then the predictive posterior for each step is

y_{n + 1} \mid \Sigma_e, y \sim N( vec(y_{(n)}^T \hat{B}), \Sigma_e \otimes (1 + y_{(n)}^T \hat{V}^{-1} y_{(n)}) )

y_{n + 2} \mid \Sigma_e, y \sim N( vec(\hat{y}_{(n + 1)}^T \hat{B}), \Sigma_e \otimes (1 + \hat{y}_{(n + 1)}^T \hat{V}^{-1} \hat{y}_{(n + 1)}) )

and recursively,

y_{n + h} \mid \Sigma_e, y \sim N( vec(\hat{y}_{(n + h - 1)}^T \hat{B}), \Sigma_e \otimes (1 + \hat{y}_{(n + h - 1)}^T \hat{V}^{-1} \hat{y}_{(n + h - 1)}) )

See bvar_predictive_density for how the predictive distribution is generated.
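One draw from the one-step predictive density can be sketched as follows, assuming hypothetical posterior quantities b_hat, v_hat, iw_scale, and iw_df, and an inverse-Wishart sampler such as MCMCpack::riwish():

# One draw from the one-step predictive density of a Minnesota BVAR;
# design_row is y_(n) as a numeric vector
predictive_draw <- function(design_row, b_hat, v_hat, iw_scale, iw_df) {
  m <- ncol(b_hat)
  # Sigma_e | y ~ IW(iw_scale, iw_df)
  sig_e <- MCMCpack::riwish(iw_df, iw_scale)
  # scalar factor 1 + y_(n)^T V-hat^{-1} y_(n)
  scale_fac <- 1 + drop(crossprod(design_row, solve(v_hat, design_row)))
  mean_vec <- drop(design_row %*% b_hat)
  mean_vec + drop(rnorm(m) %*% chol(scale_fac * sig_e))
}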

n-step ahead forecasting BVHAR

Let \hat\Phi be the posterior MN mean and let \hat\Psi be the posterior MN precision.

Then the predictive posterior for each step is

y_{n + 1} \mid \Sigma_e, y \sim N( vec(y_{(n)}^T \tilde{T}^T \hat\Phi), \Sigma_e \otimes (1 + y_{(n)}^T \tilde{T}^T \hat\Psi^{-1} \tilde{T} y_{(n)}) )

y_{n + 2} \mid \Sigma_e, y \sim N( vec(\hat{y}_{(n + 1)}^T \tilde{T}^T \hat\Phi), \Sigma_e \otimes (1 + \hat{y}_{(n + 1)}^T \tilde{T}^T \hat\Psi^{-1} \tilde{T} \hat{y}_{(n + 1)}) )

and recursively,

y_{n + h} \mid \Sigma_e, y \sim N( vec(\hat{y}_{(n + h - 1)}^T \tilde{T}^T \hat\Phi), \Sigma_e \otimes (1 + \hat{y}_{(n + h - 1)}^T \tilde{T}^T \hat\Psi^{-1} \tilde{T} \hat{y}_{(n + h - 1)}) )

See bvar_predictive_density for how the predictive distribution is generated.
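Compared with the BVAR case, only the design row changes: it is first mapped through \tilde{T}. A sketch of the resulting scale factor, with hypothetical objects tilde_t, design_row, and psi_hat:

har_row <- drop(tilde_t %*% design_row)  # \tilde{T} y_(n)
scale_fac <- 1 + drop(crossprod(har_row, solve(psi_hat, har_row)))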

n-step ahead forecasting VAR(p) with SSVS and Horseshoe

The computation of the point estimate is the same. The predictive intervals, however, are obtained from each Gibbs sampler draw:

y_{n + 1} \mid A, \Sigma_e, y \sim N( vec(y_{(n)}^T A), \Sigma_e )

y_{n + h} \mid A, \Sigma_e, y \sim N( vec(\hat{y}_{(n + h - 1)}^T A), \Sigma_e )
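In other words, for each retained Gibbs draw (A, \Sigma_e) one simulates a forecast and then takes empirical quantiles across draws. A sketch with hypothetical inputs a_draws and sig_draws (lists of posterior draws) and design_row (y_{(n)}):

# Empirical one-step predictive interval from Gibbs draws
predictive_quantiles <- function(a_draws, sig_draws, design_row, level = 0.05) {
  m <- ncol(a_draws[[1]])
  sims <- t(mapply(function(a, sig) {
    mu <- drop(design_row %*% a)      # conditional mean given this draw
    mu + drop(rnorm(m) %*% chol(sig)) # one simulated forecast
  }, a_draws, sig_draws))
  # 2 x m matrix of lower and upper quantiles per variable
  apply(sims, 2, quantile, probs = c(level / 2, 1 - level / 2))
}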

n-step ahead forecasting VHAR with SSVS and Horseshoe

The computation of the point estimate is the same. The predictive intervals, however, are obtained from each Gibbs sampler draw:

y_{n + 1} \mid \Phi, \Sigma_e, y \sim N( vec(y_{(n)}^T \tilde{T}^T \Phi), \Sigma_e )

y_{n + h} \mid \Phi, \Sigma_e, y \sim N( vec(\hat{y}_{(n + h - 1)}^T \tilde{T}^T \Phi), \Sigma_e )

References

Lütkepohl, H. (2007). New Introduction to Multiple Time Series Analysis. Springer Publishing.

Corsi, F. (2008). A Simple Approximate Long-Memory Model of Realized Volatility. Journal of Financial Econometrics, 7(2), 174–196.

Baek, C. and Park, M. (2021). Sparse vector heterogeneous autoregressive modeling for realized volatility. J. Korean Stat. Soc. 50, 495–510.

Bańbura, M., Giannone, D., & Reichlin, L. (2010). Large Bayesian vector auto regressions. Journal of Applied Econometrics, 25(1).

Gelman, A., Carlin, J. B., Stern, H. S., & Rubin, D. B. (2013). Bayesian data analysis. Chapman and Hall/CRC.

Karlsson, S. (2013). Chapter 15 Forecasting with Bayesian Vector Autoregression. Handbook of Economic Forecasting, 2, 791–897.

Litterman, R. B. (1986). Forecasting with Bayesian Vector Autoregressions: Five Years of Experience. Journal of Business & Economic Statistics, 4(1), 25.

Ghosh, S., Khare, K., & Michailidis, G. (2018). High-Dimensional Posterior Consistency in Bayesian Vector Autoregressive Models. Journal of the American Statistical Association, 114(526).

George, E. I., Sun, D., & Ni, S. (2008). Bayesian stochastic search for VAR model restrictions. Journal of Econometrics, 142(1), 553–580.

