TimeSeriesAnalysis - Maple Programming Help


TimeSeriesAnalysis

AIC - Akaike's information criterion

AICc - Akaike's information criterion with sample size correction

BIC - Bayesian information criterion

Calling Sequence

Parameters

Description

Examples

Compatibility

Calling Sequence

AIC(model, ts, ll)

AICc(model, ts, ll)

BIC(model, ts, ll)

Parameters

model - Exponential smoothing model

ts - Time series consisting of a single data set

ll - (optional) equation of the form loglikelihood = value to pass in a precomputed log likelihood value

Description

• Information criteria are functions used to evaluate goodness of fit for a model representing a time series.

• The functions take into account both the goodness of fit itself and the number of parameters of the model: a model is considered better if it fits the data more closely and has fewer parameters.

• Akaike's information criterion is defined by

AIC = 2k - 2l

where k is the number of parameters and l is the log likelihood of obtaining the given time series from the given model.
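As a rough illustration (this is a sketch, not the Maple implementation; the function name is hypothetical), the formula translates directly into code:

```python
def aic(k: int, loglikelihood: float) -> float:
    """Akaike's information criterion: AIC = 2k - 2l."""
    # k: number of free parameters of the model;
    # loglikelihood: log likelihood l of the observed series under the model.
    return 2 * k - 2 * loglikelihood

# A model with 2 parameters and log likelihood -5.8 scores about 15.6.
print(aic(2, -5.8))
```

Lower values are better: a larger log likelihood (closer fit) or a smaller parameter count both decrease the criterion.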

• Akaike's information criterion gives very good results if used to evaluate goodness of fit against a large sample size (i.e., a long time series), but for smaller sample sizes a correction is needed. This corrected criterion is obtained as follows:

AICc = 2k - 2l + 2k(k + 1)/(n - k - 1)   if k + 1 < n, and AICc = ∞ otherwise,

where k and l are as before and n is the size of the sample.
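The corrected criterion can be sketched in the same hypothetical style, with the k + 1 < n guard made explicit:

```python
import math

def aicc(k: int, loglikelihood: float, n: int) -> float:
    """AICc = 2k - 2l + 2k(k+1)/(n-k-1) when k + 1 < n, else infinity."""
    if k + 1 < n:
        return 2 * k - 2 * loglikelihood + 2 * k * (k + 1) / (n - k - 1)
    # Too many parameters relative to the sample size: the correction
    # term is undefined, and the criterion is taken to be infinite.
    return math.inf

# With n = 8 sample points, a 7-parameter model gets an infinite score.
print(aicc(7, 0.0, 8))  # inf
```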

• Finally, the Bayesian information criterion is given by

BIC = k log(n) - 2l

where k, l, and n are as above.
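A matching sketch for BIC (again, a hypothetical stand-in, not the Maple routine):

```python
import math

def bic(k: int, loglikelihood: float, n: int) -> float:
    """Bayesian information criterion: BIC = k*log(n) - 2l."""
    # Unlike AIC's fixed penalty 2k, the penalty k*log(n)
    # grows with the sample size n.
    return k * math.log(n) - 2 * loglikelihood
```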

• 

The number of parameters of the model is always computed by the information criterion procedure, as is the sample size. The log likelihood can also be computed, but if the log likelihood is known beforehand (e.g. because of running the Optimize command), then it can be passed in using the loglikelihood option. This prevents recomputing the log likelihood and thereby increases efficiency very slightly.
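Model selection with any of these criteria amounts to picking the candidate with the smallest value. A minimal sketch of that workflow, using made-up (parameter count, log likelihood) pairs standing in for fitted models:

```python
def aic(k, loglikelihood):
    # AIC = 2k - 2l; smaller is better.
    return 2 * k - 2 * loglikelihood

# Hypothetical candidates: name -> (parameter count, log likelihood).
candidates = {
    "ETS(A,A,A)": (6, -0.5),   # close fit, many parameters
    "ETS(A,A,N)": (4, -5.4),
    "ETS(A,N,N)": (2, -5.8),   # loose fit, few parameters
}

# Select the model with the lowest criterion value.
best = min(candidates, key=lambda name: aic(*candidates[name]))
print(best)
```

Here the first candidate wins: its much closer fit outweighs its larger parameter count under AIC's fixed penalty.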

Examples

with(TimeSeriesAnalysis):

Consider the following time series.

ts := TimeSeries([1.8, 3.4, 2.1, 2.9, 2.4, 2.9, 2.5, 3.1], period = 2)

ts := Time series: data set, 8 rows of data: 2008 - 2015

(1)

We create a list of potentially applicable models and optimize them.

models := Specialize(ExponentialSmoothingModel(), ts)

models := [< an ETS(A,A,A) model >, < an ETS(A,A,N) model >, < an ETS(A,Ad,A) model >, < an ETS(A,Ad,N) model >, < an ETS(A,N,A) model >, < an ETS(A,N,N) model >, < an ETS(M,A,A) model >, < an ETS(M,A,M) model >, < an ETS(M,A,N) model >, < an ETS(M,Ad,A) model >, < an ETS(M,Ad,M) model >, < an ETS(M,Ad,N) model >, < an ETS(M,M,M) model >, < an ETS(M,M,N) model >, < an ETS(M,Md,M) model >, < an ETS(M,Md,N) model >, < an ETS(M,N,A) model >, < an ETS(M,N,M) model >, < an ETS(M,N,N) model >]

(2)

map(Optimize, models, ts)

[0.014065563, 5.416379083, 0.031961103, 5.145765873, 0.002827297, 5.804974813, 0.603105141, 1.517769899, 6.694364663, 0.487294335, 1.376178599, 6.769140123, 0.975463079, 6.912940013, 1.134387223, 6.715408243, 1.772582973, 2.135892023, 6.808173573]

(3)

We compute Akaike's information criterion for each model.

for m in models do print(m, AIC(m, ts)) end do

< an ETS(A,A,A) model >, 12.01066840
< an ETS(A,A,N) model >, 18.83275817
< an ETS(A,Ad,A) model >, 13.89411477
< an ETS(A,Ad,N) model >, 20.29153175
< an ETS(A,N,A) model >, 7.994336806
< an ETS(A,N,N) model >, 15.60994963
< an ETS(M,A,A) model >, 13.26051935
< an ETS(M,A,M) model >, 15.27182966
< an ETS(M,A,N) model >, 21.38872933
< an ETS(M,Ad,A) model >, 15.51428223
< an ETS(M,Ad,M) model >, 16.93433454
< an ETS(M,Ad,N) model >, 23.53828025
< an ETS(M,M,M) model >, 13.94926348
< an ETS(M,M,N) model >, 21.82588003
< an ETS(M,Md,M) model >, 16.21600049
< an ETS(M,Md,N) model >, 23.43081649
< an ETS(M,N,A) model >, 11.54039483
< an ETS(M,N,M) model >, 12.29990591
< an ETS(M,N,N) model >, 17.61634715

(4)

The ETS(A,N,A) model has the best balance between number of parameters and goodness of fit, according to this criterion, and ETS(M,Ad,N) the worst.

Because the sample size is rather small, it might be useful to consider the criterion with sample size correction.

for m in models do print(m, AICc(m, ts)) end do

< an ETS(A,A,A) model >, 96.01066840
< an ETS(A,A,N) model >, 32.16609150
< an ETS(A,Ad,A) model >, ∞
< an ETS(A,Ad,N) model >, 50.29153175
< an ETS(A,N,A) model >, 21.32767014
< an ETS(A,N,N) model >, 18.00994963
< an ETS(M,A,A) model >, 97.26051935
< an ETS(M,A,M) model >, 99.27182966
< an ETS(M,A,N) model >, 34.72206266
< an ETS(M,Ad,A) model >, ∞
< an ETS(M,Ad,M) model >, ∞
< an ETS(M,Ad,N) model >, 53.53828025
< an ETS(M,M,M) model >, 97.94926348
< an ETS(M,M,N) model >, 35.15921336
< an ETS(M,Md,M) model >, ∞
< an ETS(M,Md,N) model >, 53.43081649
< an ETS(M,N,A) model >, 24.87372816
< an ETS(M,N,M) model >, 25.63323924
< an ETS(M,N,N) model >, 20.01634715

(5)

This time, the A&comma;N&comma;N model does best. Note how some of the models have a value of ; this is because they have at least as many parameters as there are sample points.

Alternatively, one can use the Bayesian information criterion; it also corrects for the sample size, but not as strongly as AICc in this case.

for m in models do print(m, BIC(m, ts)) end do

< an ETS(A,A,A) model >, 12.48731765
< an ETS(A,A,N) model >, 19.15052434
< an ETS(A,Ad,A) model >, 14.45020556
< an ETS(A,Ad,N) model >, 20.68873946
< an ETS(A,N,A) model >, 8.312102973
< an ETS(A,N,N) model >, 15.76883271
< an ETS(M,A,A) model >, 13.73716860
< an ETS(M,A,M) model >, 15.74847891
< an ETS(M,A,N) model >, 21.70649550
< an ETS(M,Ad,A) model >, 16.07037302
< an ETS(M,Ad,M) model >, 17.49042533
< an ETS(M,Ad,N) model >, 23.93548796
< an ETS(M,M,M) model >, 14.42591273
< an ETS(M,M,N) model >, 22.14364620
< an ETS(M,Md,M) model >, 16.77209128
< an ETS(M,Md,N) model >, 23.82802420
< an ETS(M,N,A) model >, 11.85816099
< an ETS(M,N,M) model >, 12.61767207
< an ETS(M,N,N) model >, 17.77523023

(6)

The Bayesian information criterion also favors the ETS(A,N,A) model.

Compatibility

• The TimeSeriesAnalysis[AIC], TimeSeriesAnalysis[AICc], and TimeSeriesAnalysis[BIC] commands were introduced in Maple 18.

• For more information on Maple 18 changes, see Updates in Maple 18.

See Also

TimeSeriesAnalysis

 

