TimeSeriesAnalysis[AIC]  Akaike's information criterion
TimeSeriesAnalysis[AICc]  Akaike's information criterion with sample size correction
TimeSeriesAnalysis[BIC]  Bayesian information criterion

Calling Sequence


AIC(model, ts, ll)
AICc(model, ts, ll)
BIC(model, ts, ll)


Description


•

Information criteria are functions used to evaluate goodness of fit for a model representing a time series.

•

The functions take into account both the goodness of fit itself and the number of parameters of the model: a model is better if it fits more closely and if it has fewer parameters.

•

Akaike's information criterion is defined by

AIC = 2 p - 2 ll,

where p is the number of parameters of the model and ll is the log likelihood of obtaining the given time series from the given model.

•

Akaike's information criterion gives very good results if used to evaluate goodness of fit against a large sample size (i.e., a long time series), but for smaller sample sizes a correction is needed. The corrected criterion is given by

AICc = AIC + 2 p (p + 1) / (n - p - 1),

where n is the sample size.

•

Finally, the Bayesian information criterion is given by

BIC = p ln(n) - 2 ll.

•

The number of parameters of the model is always computed by the information criterion procedure, as is the sample size. The log likelihood can also be computed; however, if it is already known (e.g., from an earlier call to the Optimize command), it can be passed in using the loglikelihood option. This avoids recomputing the log likelihood and thereby improves efficiency slightly.
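The following sketch illustrates the loglikelihood option. The data here are hypothetical, and we assume, as the description above suggests, that Optimize makes the attained log likelihood available as its return value.

```maple
with(TimeSeriesAnalysis):
# Hypothetical data; any time series would do.
ts := TimeSeries([7, 23, 21, 19, 13, 46, 42, 30, 31, 26, 19, 9, 16]):
# Specialize returns a list of applicable models; take the first one.
model := Specialize(ExponentialSmoothingModel(), ts)[1]:
ll := Optimize(model, ts):           # assumed to return the attained log likelihood
AIC(model, ts);                      # recomputes the log likelihood internally
AIC(model, ts, loglikelihood = ll);  # reuses the known value
```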



Compatibility


•

The TimeSeriesAnalysis[AIC], TimeSeriesAnalysis[AICc] and TimeSeriesAnalysis[BIC] commands were introduced in Maple 18.



Examples


Consider the following time series.
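The original data set did not survive conversion of this page; a hypothetical stand-in, built with the TimeSeries constructor, might look like:

```maple
with(TimeSeriesAnalysis):
# Hypothetical data standing in for the original series.
ts := TimeSeries([7, 23, 21, 19, 13, 46, 42, 30, 31, 26, 19, 9, 16]);
```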
We create a list of potentially applicable models and optimize them.
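This step might be carried out as follows. The exponential smoothing family is an assumption here; we rely on Specialize returning the list of specializations of the given model family that could apply to the series.

```maple
with(TimeSeriesAnalysis):
ts := TimeSeries([7, 23, 21, 19, 13, 46, 42, 30, 31, 26, 19, 9, 16]):  # hypothetical
# Specialize returns the applicable specializations of the model family.
models := Specialize(ExponentialSmoothingModel(), ts);
# Fit each candidate's parameters by maximizing the log likelihood.
for m in models do Optimize(m, ts) end do;
```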
We compute Akaike's information criterion for each model.
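A self-contained sketch of this step, with hypothetical data and an assumed exponential smoothing family:

```maple
with(TimeSeriesAnalysis):
ts := TimeSeries([7, 23, 21, 19, 13, 46, 42, 30, 31, 26, 19, 9, 16]):  # hypothetical
models := Specialize(ExponentialSmoothingModel(), ts):
for m in models do Optimize(m, ts) end do:
# Lower AIC values indicate a better balance of fit against parameter count.
map(m -> AIC(m, ts), models);
```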
According to this criterion, the model with the lowest value has the best balance between number of parameters and goodness of fit, and the model with the highest value the worst.
Because the sample size is rather small, it might be useful to consider the criterion with sample size correction.
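A self-contained sketch of the corrected criterion, again with hypothetical data:

```maple
with(TimeSeriesAnalysis):
ts := TimeSeries([7, 23, 21, 19, 13, 46, 42, 30, 31, 26, 19, 9, 16]):  # hypothetical
models := Specialize(ExponentialSmoothingModel(), ts):
for m in models do Optimize(m, ts) end do:
# Models with p >= n - 1 parameters make the correction term infinite.
map(m -> AICc(m, ts), models);
```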
This time, a different model does best. Note how some of the models have an infinite value; this is because they have at least as many parameters as there are sample points, so the denominator n - p - 1 of the correction term is not positive.
Alternatively, one can use the Bayesian information criterion; it also corrects for the sample size, but not as strongly as AICc in this case.
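A self-contained sketch of this alternative, with the same hypothetical setup:

```maple
with(TimeSeriesAnalysis):
ts := TimeSeries([7, 23, 21, 19, 13, 46, 42, 30, 31, 26, 19, 9, 16]):  # hypothetical
models := Specialize(ExponentialSmoothingModel(), ts):
for m in models do Optimize(m, ts) end do:
# BIC penalizes each parameter by ln(n) rather than 2.
map(m -> BIC(m, ts), models);
```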
The Bayesian information criterion also favors the same model.

