CompareModels-class {EBMAforecast}                                R Documentation

Function for comparing multiple models based on predictive performance

Description

This function produces statistics to compare the predictive performance of the different component models, as well as of the EBMA model itself, for either the calibration or the test period. For binary models it currently calculates the area under the ROC curve (auc), the Brier score (brier), the percentage of observations predicted correctly (perCorrect), and the proportional reduction in error relative to some baseline model (pre). For models with normally distributed outcomes, compareModels can be used to calculate the root mean squared error (rmse) as well as the mean absolute error (mae).
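
As a point of reference, the sketch below gives conventional definitions of these statistics for a vector of predicted probabilities p, binary outcomes y (coded 0/1), baseline predictions p0, and (for normal models) point predictions pred. It is an illustration of the quantities reported, not the package's internal code, which may differ in detail.

## Illustrative definitions only; not the package's internal implementation.
brier <- function(p, y) mean((p - y)^2)
perCorrect <- function(p, y, threshold = 0.5) mean((p >= threshold) == y)
pre <- function(p, p0, y, threshold = 0.5) {
  ## proportional reduction in classification error relative to the baseline
  errModel <- sum((p >= threshold) != y)
  errBase <- sum((p0 >= threshold) != y)
  (errBase - errModel) / errBase
}
auc <- function(p, y) {
  ## rank-based (Mann-Whitney) estimate of the area under the ROC curve
  n1 <- sum(y == 1); n0 <- sum(y == 0)
  (sum(rank(p)[y == 1]) - n1 * (n1 + 1) / 2) / (n1 * n0)
}
rmse <- function(pred, y) sqrt(mean((pred - y)^2))  # normal models
mae <- function(pred, y) mean(abs(pred - y))        # normal models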

Usage

compareModels(
  .forecastData,
  .period = "calibration",
  .fitStatistics = c("brier", "auc", "perCorrect", "pre"),
  .threshold = 0.5,
  .baseModel = 0,
  ...
)

## S4 method for signature 'ForecastData'
compareModels(.forecastData, .period, .fitStatistics, .threshold, .baseModel)

Arguments

.forecastData

An object of class 'ForecastData'.

.period

Can take value of "calibration" or "test" and indicates the period for which the test statistics should be calculated.

.fitStatistics

A vector naming the statistics that should be calculated. Possible values include "auc", "brier", "perCorrect", and "pre" for logit models, and "mae" and "rmse" for normal models.

.threshold

The threshold above which a predicted probability is counted as a "positive" prediction for binary dependent variables.

.baseModel

Vector containing the predictions used to calculate the proportional reduction of error ("pre"); see the usage sketch following this list.

...

Not implemented
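
A hedged usage sketch of these arguments: the calibrated ensemble object and the baseline prediction vector (this.ensemble, baselinePreds) are illustrative names, not objects supplied by the package.

compareModels(this.ensemble,
              .period = "test",
              .fitStatistics = c("brier", "auc", "pre"),
              .threshold = 0.4,
              .baseModel = baselinePreds)  # baselinePreds: hypothetical baseline predictions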

Value

A data object of the class 'CompareModels' with the following slots:

fitStatistics

The output of the fit statistics for each model.

period

The period, "calibration" or "test", for which the statistics were calculated.

threshold

The threshold above which a predicted probability was counted as a "positive" prediction.

baseModel

Vector containing predictions used to calculate proportional reduction of error ("pre").
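
Because the returned object is S4, the documented slots can be read directly with the '@' operator, in addition to any print or summary methods the package provides; the object name cmp below is illustrative.

cmp <- compareModels(this.ensemble, "calibration")
cmp@fitStatistics  # fit statistics for each model
cmp@period         # "calibration" or "test"
cmp@threshold      # threshold used for "positive" predictions
cmp@baseModel      # baseline predictions used for "pre"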

Author(s)

Michael D. Ward <michael.d.ward@duke.edu>, Jacob M. Montgomery <jacob.montgomery@wustl.edu>, and Florian M. Hollenbach <florian.hollenbach@tamu.edu>

References

Montgomery, Jacob M., Florian M. Hollenbach and Michael D. Ward. (2012). Improving Predictions Using Ensemble Bayesian Model Averaging. Political Analysis. 20: 271-291.

See Also

ensembleBMA

Examples

## Not run:
data(calibrationSample)
data(testSample)

this.ForecastData <- makeForecastData(
  .predCalibration = calibrationSample[, c("LMER", "SAE", "GLM")],
  .outcomeCalibration = calibrationSample[, "Insurgency"],
  .predTest = testSample[, c("LMER", "SAE", "GLM")],
  .outcomeTest = testSample[, "Insurgency"],
  .modelNames = c("LMER", "SAE", "GLM"))

this.ensemble <- calibrateEnsemble(this.ForecastData, model = "logit", tol = 0.001, exp = 3)

compareModels(this.ensemble, "calibration")

compareModels(this.ensemble, "test")

## End(Not run)

