forecast_comparison {OOS} | R Documentation
Compare forecast accuracy
Description
A function to compare forecasts. Options include: simple forecast error ratios, the Diebold-Mariano test, and the Clark and West test for nested models.
Usage
forecast_comparison(
Data,
baseline.forecast,
test = "ER",
loss = "MSE",
horizon = NULL
)
Arguments
Data | data.frame: data frame of forecasts, model names, and dates
baseline.forecast | string: column name of baseline (null hypothesis) forecasts
test | string: which test to use; ER = error ratio, DM = Diebold-Mariano, CW = Clark and West
loss | string: error loss function to use when creating the forecast error ratio
horizon | int: horizon of the forecasts being compared in the DM and CW tests
Value
numeric: test result (a forecast error ratio when test = 'ER'; a test statistic for 'DM' and 'CW')
Examples
# simple time series
A = c(1:100) + rnorm(100)
date = seq.Date(from = as.Date('2000-01-01'), by = 'month', length.out = 100)
Data = data.frame(date = date, A)
# run forecast_univariate
forecast.uni =
  forecast_univariate(
    Data = Data,
    forecast.dates = tail(Data$date, 10),
    method = c('naive', 'auto.arima', 'ets'),
    horizon = 1,
    recursive = FALSE,
    freq = 'month')
# merge the observed values onto the forecasts
forecasts =
  dplyr::left_join(
    forecast.uni,
    data.frame(date, observed = A),
    by = 'date'
  )
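# 'forecasts' now pairs each model's forecasts with the realized values
# ('observed') in long format; a quick optional check of the joined data
head(forecasts)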
# run ER (MSE)
er.ratio.mse =
  forecast_comparison(
    forecasts,
    baseline.forecast = 'naive',
    test = 'ER',
    loss = 'MSE')
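# The same joined data frame can be reused for the other documented tests.
# Illustrative sketches: test = 'DM' and test = 'CW' follow the argument
# descriptions above, with horizon = 1 matching the one-step-ahead
# forecasts generated here; the CW test assumes the baseline model is
# nested in the alternative.
# run Diebold-Mariano test
dm.stat =
  forecast_comparison(
    forecasts,
    baseline.forecast = 'naive',
    test = 'DM',
    horizon = 1)
# run Clark and West test (nested models)
cw.stat =
  forecast_comparison(
    forecasts,
    baseline.forecast = 'naive',
    test = 'CW',
    horizon = 1)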
Package OOS version 1.0.0