evaluate {TSPred} | R Documentation
Evaluating prediction/modeling quality
Description
evaluate is a generic function for evaluating the quality of time series prediction
or modeling fitness based on a particular metric defined in an evaluating object.
The function invokes particular methods which
depend on the class of the first argument.
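The S3 dispatch described above can be sketched with a minimal, self-contained example. This is an illustration of the pattern only, not the TSPred implementation; the toy object and its func field are assumptions for the sketch:

```r
## Minimal sketch of the S3 dispatch pattern behind evaluate()
## (illustrative only; not the actual TSPred code)
evaluate <- function(obj, ...) UseMethod("evaluate")

evaluate.evaluating <- function(obj, test, pred, ...) {
  # here the toy 'evaluating' object is assumed to carry its metric in obj$func
  res <- obj$func(test, pred)
  list(obj = obj, res = res)
}

# a toy MSE metric wrapped as an 'evaluating'-style object
mse <- structure(list(func = function(test, pred) mean((test - pred)^2)),
                 class = "evaluating")

# the generic dispatches on class(obj) to evaluate.evaluating
evaluate(mse, test = c(1, 2, 3), pred = c(1, 2, 4))
```

Because dispatch depends only on the class of obj, the same call evaluate(obj, ...) can compute a prediction error, a fitness criterion, or any user-defined metric.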
Usage
evaluate(obj, ...)
## S3 method for class 'evaluating'
evaluate(obj, test, pred, ...)
## S3 method for class 'fitness'
evaluate(obj, mdl, test = NULL, pred = NULL, ...)
## S3 method for class 'error'
evaluate(obj, mdl = NULL, test = NULL, pred = NULL, ..., fitness = FALSE)
Arguments
| obj | An object of class evaluating, fitness, or error defining the evaluation metric. | 
| ... | Other parameters passed to the particular evaluate method. | 
| test | A vector or univariate time series containing actual values
for a time series that are to be compared against pred. | 
| pred | A vector or univariate time series containing time series
predictions that are to be compared against the values in test. | 
| mdl | A time series model object for which fitness is to be evaluated. | 
| fitness | Should the function compute the fitness quality? If TRUE, the metric is computed from the fitted values of mdl; otherwise it is computed from test and pred. | 
Value
A list containing obj and the computed metric value(s).
Author(s)
Rebecca Pontes Salles
See Also
Other evaluate: 
evaluate.tspred()
Examples
data(CATS, CATS.cont)
# fit an ARIMA model to the first CATS series
mdl <- forecast::auto.arima(CATS[, 1])
# forecast as many steps ahead as there are continuation values
pred <- forecast::forecast(mdl, h = length(CATS.cont[, 1]))
# prediction error (MSE) of the forecasts against the actual continuation
evaluate(MSE_eval(), test = CATS.cont[, 1], pred = pred$mean)
# fitness quality (MSE of the model's fitted values)
evaluate(MSE_eval(), mdl, fitness = TRUE)
# AIC fitness criterion of the model
evaluate(AIC_eval(), mdl)