Stationary and Invertible AR & MA Models
The following was implemented in Maple by Marcus Davidsson (2008), davidsson_marcus@hotmail.com,
and is based upon the work of Brooks, C. (2008), Introductory Econometrics for Finance.
This worksheet is divided into four main chapters:
1) Basic Concepts
2) MA(1) Model
3) MA(2) Model
4) AR(1) Model
1.1) Proposition-1
We first note that if mean(x) is the Arithmetic Average of x, then the Covariance(x, x) is given by:
Cov(x, x) = E[(x - mean(x))*(x - mean(x))] = Var(x)
and the Correlation between x and x is defined as:
Corr(x, x) = Cov(x, x)/(StdDev(x)*StdDev(x)) = Var(x)/Var(x) = 1
1.2) Proposition-2
If mean(y) is the Arithmetic Average of y, then the Covariance(x, y) is given by:
Cov(x, y) = E[(x - mean(x))*(y - mean(y))]
and the Correlation between x and y is defined as:
Corr(x, y) = Cov(x, y)/(StdDev(x)*StdDev(y))
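As a quick illustration of Propositions 1 and 2, the sample covariance and correlation can be computed directly with Maple's Statistics package. This is a minimal sketch; the two simulated vectors are purely illustrative and not part of the original worksheet.
> with(Statistics):
> x := Sample(Normal(0, 1), 200):      # illustrative data for x
> y := Sample(Normal(0, 1), 200):      # illustrative data for y
> Covariance(x, x);                    # Cov(x,x): approximately equal to Variance(x)
> Variance(x);
> Correlation(x, x);                   # Corr(x,x) = 1
> Covariance(x, y);                    # Cov(x,y): close to 0 for independent samples
> Correlation(x, y);                   # Corr(x,y) = Cov(x,y)/(StdDev(x)*StdDev(y))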
1.3) Proposition-3
The AutoCovariance for y(t) and y(t) at time t is therefore given by:
gamma(0) = E[(y(t) - E[y(t)])*(y(t) - E[y(t)])] = Var(y(t))
and the AutoCorrelation between y(t) and y(t) is defined as:
rho(0) = gamma(0)/gamma(0) = 1
1.4) Proposition-4
The AutoCovariance for y(t) and y(t-1) at time t is given by:
gamma(1) = E[(y(t) - E[y(t)])*(y(t-1) - E[y(t-1)])]
The AutoCorrelation for lag L is therefore defined as:
rho(L) = gamma(L)/gamma(0) = AutoCovariance(y(t), y(t-L))/AutoCovariance(y(t), y(t))
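As a sketch of Propositions 3 and 4, the sample AutoCovariance and AutoCorrelation at a chosen lag can be computed from shifted copies of a series. The white-noise series below is purely illustrative, so its lag-one autocorrelation should come out close to zero.
> with(Statistics):
> n := 1000:
> y := Sample(Normal(0, 1), n):                    # illustrative series
> gamma0 := Covariance(y, y):                      # AutoCovariance(y(t), y(t))
> gamma1 := Covariance(y[2 .. n], y[1 .. n-1]):    # AutoCovariance(y(t), y(t-1))
> rho1 := gamma1/gamma0;                           # AutoCorrelation at lag 1, close to 0 here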
1.5) Proposition-5
The first assumption states that the Mean of the error terms is equal to zero: E[u(t)] = 0.
This also makes sense since the error term is completely random (think of a coin toss paying Head (+1) or Tail (-1): the average value is zero).
This means that the Expected Value (Arithmetic Average) of the error term at any lag is approximately equal to zero.
For example, if E[u(t)] = 0 then E[u(t-1)] = 0, E[u(t-2)] = 0 and E[u(t-3)] = 0.
1.6) Proposition-6
We know that the Mean of the error terms is equal to zero: E[u(t)] = 0.
The second assumption therefore states that the variance of the error terms is equal to the Expected Value of the squared error terms (since E[u(t)] = 0):
Var(u(t)) = E[(u(t) - E[u(t)])^2] = E[u(t)^2] = sigma^2
This means that the Expected Value of the squared error term at any lag is approximately equal to the Variance of that random variable: E[u(t-L)^2] = sigma^2.
1.7) Proposition-7
The third assumption states that the Covariance between the error terms is zero (since E[u(t)] = 0, this means Cov(u(t), u(t-s)) = E[u(t)*u(t-s)] = 0 for s <> 0).
This also makes sense: since the error term is completely random, there should not be any dependency between the error terms.
This means that the Expected Value (Arithmetic Average) of the product of any two differently lagged error terms is approximately zero.
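The three assumptions can be checked numerically on a simulated white-noise series. This is a minimal sketch; the sample size and the standard deviation sigma = 2 are arbitrary illustrative choices.
> with(Statistics):
> n := 10000:
> sigma := 2:
> u := Sample(Normal(0, sigma), n):          # white-noise errors with standard deviation sigma
> Mean(u);                                   # Proposition-5: approximately 0
> add(u[t]^2, t = 1 .. n)/n;                 # Proposition-6: approximately sigma^2 = 4
> add(u[t]*u[t-1], t = 2 .. n)/(n - 1);      # Proposition-7: approximately 0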
2) A Moving Average Model of Order One MA(1)
Our First Order Moving Average Model is given by:
y(t) = u(t) + theta*u(t-1)
where theta is a first order serial correlation parameter. Note that the amount of first order serial correlation in y(t) is not equal to theta.
Note that |theta| < 1 for an invertible MA(1) model.
For an invertible MA(q), all roots of the characteristic equation 1 + theta1*z + theta2*z^2 + ... + thetaq*z^q = 0 should lie outside of the unit circle.
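For example, the invertibility condition can be checked in Maple by solving the characteristic equation and inspecting the moduli of the roots. This is a sketch; the MA(2) coefficients below are illustrative choices, not values from the worksheet.
> theta1 := 0.4:  theta2 := 0.2:                        # illustrative MA(2) coefficients
> rts := [solve(1 + theta1*z + theta2*z^2 = 0, z)]:     # roots of the characteristic equation
> map(r -> abs(evalf(r)), rts);                         # both moduli exceed 1, so this MA(2) is invertible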
We know that we can calculate the AutoCorrelation for any lag L as:
rho(L) = gamma(L)/gamma(0) = AutoCovariance(y(t), y(t-L))/AutoCovariance(y(t), y(t))
2.1) AutoCovariance(y(t) y(t))
We can now calculate the denominator in the equation for the AutoCorrelation(L), which is given by gamma(0) = AutoCovariance(y(t), y(t)).
If we take the Expectation on both sides of the equation for the First Order Moving Average Model we get:
E[y(t)] = E[u(t) + theta*u(t-1)]
Which can be written as:
E[y(t)] = E[u(t)] + theta*E[u(t-1)]
We know from Proposition-5 that the Expected Value of the error term at any lag is approximately equal to zero.
This means that E[u(t)] = 0 and E[u(t-1)] = 0.
This gives us:
E[y(t)] = 0
We now note that the AutoCovariance for y(t) and y(t) is given by:
gamma(0) = E[(y(t) - E[y(t)])*(y(t) - E[y(t)])]
Since E[y(t)] = 0, the above equation is reduced to:
gamma(0) = E[y(t)^2]
We now note that y(t)^2 is defined as:
y(t)^2 = (u(t) + theta*u(t-1))^2 = u(t)^2 + 2*theta*u(t)*u(t-1) + theta^2*u(t-1)^2
If we substitute in that expression we get:
gamma(0) = E[u(t)^2] + 2*theta*E[u(t)*u(t-1)] + theta^2*E[u(t-1)^2]
We know from Proposition-6 that the Expected Value of the squared error term at any lag is approximately equal to the Variance of that variable.
This means that E[u(t)^2] = sigma^2 and E[u(t-1)^2] = sigma^2.
We know from Proposition-7 that the Expected Value of the product of differently lagged error terms is approximately zero.
This means that E[u(t)*u(t-1)] = 0.
Note that such terms are called cross products and that their expectation is always equal to zero.
This gives us:
gamma(0) = sigma^2 + theta^2*sigma^2 = sigma^2*(1 + theta^2)
which is our final expression for the AutoCovariance(y(t), y(t)).
2.2) AutoCovariance(y(t) y(t-1))
We can now calculate the numerator in the equation for the AutoCorrelation(L), which is given by gamma(L) = AutoCovariance(y(t), y(t-L)).
We will in this section calculate the AutoCovariance for lag one
The AutoCovariance for y(t) and y(t-1) is given by:
gamma(1) = E[y(t)*y(t-1)]     (AA)
(since E[y(t)] = E[y(t-1)] = 0).
We now note that the equation for our First Order Moving Average Model is given by:
y(t) = u(t) + theta*u(t-1)
This equation can be generalized: the equation for y(t-1) is therefore given by:
y(t-1) = u(t-1) + theta*u(t-2)
This gives us:
y(t)*y(t-1) = (u(t) + theta*u(t-1))*(u(t-1) + theta*u(t-2))
If we plug this expression into the expression for the AutoCovariance for y(t) and y(t-1) given by expression AA we get:
gamma(1) = E[u(t)*u(t-1)] + theta*E[u(t)*u(t-2)] + theta*E[u(t-1)^2] + theta^2*E[u(t-1)*u(t-2)]
We know since previously (Proposition-7) that the Expected Values of the cross products are equal to zero, so we get:
gamma(1) = 0 + 0 + theta*E[u(t-1)^2] + 0
Which can be written as:
gamma(1) = theta*E[u(t-1)^2]
We know since previously (Proposition-6) that the Expected Value of the squared error term at any lag is approximately equal to the Variance of that variable. This gives us:
gamma(1) = theta*sigma^2
2.3) AutoCovariance(y(t) y(t-2))
We will in this section calculate the AutoCovariance for lag two
The AutoCovariance for y(t) and y(t-2) is given by:
gamma(2) = E[y(t)*y(t-2)] = E[(u(t) + theta*u(t-1))*(u(t-2) + theta*u(t-3))]
We know since previously (Proposition-7) that the Expected Values of the cross products are equal to zero, and since every term in this product is a cross product we get:
gamma(2) = 0
2.4) Summary for Our First Order Moving Average Model
We have in the previous sections calculated the AutoCovariances for y(t), y(t-1), y(t-2) and y(t-3), given by:
gamma(0) = sigma^2*(1 + theta^2)
gamma(1) = theta*sigma^2
gamma(2) = 0
gamma(3) = 0 (by the same cross-product argument as for lag two)
This means that:
rho(0) = 1
rho(1) = theta/(1 + theta^2)
rho(2) = 0
rho(3) = 0
For example, if we assume, say, theta = 0.5, then the AutoCorrelation for each lag is given by rho(1) = 0.5/1.25 = 0.4 and rho(2) = rho(3) = 0.
We can show that this indeed is true by simulating an MA(1) process, as in the sketch below.
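A minimal simulation sketch follows. The choices theta = 0.5, standard normal errors and the sample size are illustrative; with these values the theory above predicts rho(1) = 0.4 and rho(2) = 0.
> with(Statistics):
> n := 5000:  theta := 0.5:
> u := Sample(Normal(0, 1), n):                  # white-noise errors
> y := Vector(n):
> y[1] := u[1]:
> for t from 2 to n do
>     y[t] := u[t] + theta*u[t-1]:               # MA(1): y(t) = u(t) + theta*u(t-1)
> end do:
> Correlation(y[2 .. n], y[1 .. n-1]);           # sample rho(1), close to theta/(1+theta^2) = 0.4
> Correlation(y[3 .. n], y[1 .. n-2]);           # sample rho(2), close to 0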
3) A Moving Average Model of Order Two MA(2)
Our Second Order Moving Average Model is given by:
y(t) = u(t) + theta1*u(t-1) + theta2*u(t-2)
where again theta1 is a first order serial correlation parameter and theta2 is a second order serial correlation parameter. Again note that the amounts of first and second order serial correlation in y(t) are not equal to theta1 and theta2 respectively.
3.1) AutoCovariance(y(t) y(t))
If we take the Expectation on both sides of the equation for the Second Order Moving Average Model we get:
E[y(t)] = E[u(t)] + theta1*E[u(t-1)] + theta2*E[u(t-2)]
This means that E[u(t)] = E[u(t-1)] = E[u(t-2)] = 0 and therefore E[y(t)] = 0, or gamma(0) = E[y(t)^2]. Expanding the square, dropping the cross products (Proposition-7) and using Proposition-6 for the squared terms gives:
gamma(0) = sigma^2 + theta1^2*sigma^2 + theta2^2*sigma^2 = sigma^2*(1 + theta1^2 + theta2^2)
3.2) AutoCovariance(y(t) y(t-1))
We now note that the equation for our Second Order Moving Average Model is given by:
y(t) = u(t) + theta1*u(t-1) + theta2*u(t-2)
and the equation for y(t-1) is therefore given by:
y(t-1) = u(t-1) + theta1*u(t-2) + theta2*u(t-3)
The AutoCovariance for y(t) and y(t-1) is gamma(1) = E[y(t)*y(t-1)]. If we multiply out the two expressions, the only terms that are not cross products are theta1*u(t-1)^2 and theta1*theta2*u(t-2)^2.
We know since previously (Proposition-6) that the Expected Value of the squared error term at any lag is approximately equal to the Variance of that variable. This gives us:
gamma(1) = theta1*sigma^2 + theta1*theta2*sigma^2 = sigma^2*(theta1 + theta1*theta2)
3.3) AutoCovariance(y(t) y(t-2))
We will in this section calculate the AutoCovariance for lag two.
The AutoCovariance for y(t) and y(t-2) is given by:
gamma(2) = E[y(t)*y(t-2)] = E[(u(t) + theta1*u(t-1) + theta2*u(t-2))*(u(t-2) + theta1*u(t-3) + theta2*u(t-4))]
The only term in this product that is not a cross product is theta2*u(t-2)^2, so we get:
gamma(2) = theta2*sigma^2
3.4) AutoCovariance(y(t) y(t-3))
We will in this section calculate the AutoCovariance for lag three
The AutoCovariance for y(t) and y(t-3) is given by:
gamma(3) = E[y(t)*y(t-3)] = E[(u(t) + theta1*u(t-1) + theta2*u(t-2))*(u(t-3) + theta1*u(t-4) + theta2*u(t-5))]
Every term in this product is a cross product, so gamma(3) = 0.
3.5) Summary for Our Second Order Moving Average Model
We have in the previous sections calculated the AutoCovariances for y(t), y(t-1), y(t-2) and y(t-3), given by:
gamma(0) = sigma^2*(1 + theta1^2 + theta2^2)
gamma(1) = sigma^2*(theta1 + theta1*theta2)
gamma(2) = theta2*sigma^2
gamma(3) = 0
This means that:
rho(1) = (theta1 + theta1*theta2)/(1 + theta1^2 + theta2^2)
rho(2) = theta2/(1 + theta1^2 + theta2^2)
rho(3) = 0
We can show that this indeed is true by simulating an MA(2) process, as in the sketch below.
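A minimal simulation sketch for the MA(2) case follows. The coefficients theta1 = 0.5 and theta2 = 0.25 are illustrative choices; with them the formulas above imply rho(1) = 0.476, rho(2) = 0.190 and rho(3) = 0.
> with(Statistics):
> n := 5000:  theta1 := 0.5:  theta2 := 0.25:
> u := Sample(Normal(0, 1), n):                  # white-noise errors
> y := Vector(n):
> y[1] := u[1]:
> y[2] := u[2] + theta1*u[1]:
> for t from 3 to n do
>     y[t] := u[t] + theta1*u[t-1] + theta2*u[t-2]:   # MA(2)
> end do:
> Correlation(y[2 .. n], y[1 .. n-1]);   # theory: (theta1 + theta1*theta2)/(1 + theta1^2 + theta2^2) = 0.476
> Correlation(y[3 .. n], y[1 .. n-2]);   # theory: theta2/(1 + theta1^2 + theta2^2) = 0.190
> Correlation(y[4 .. n], y[1 .. n-3]);   # theory: 0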
4) An AutoRegressive Model of Order One AR(1)
Our First Order AutoRegressive Model is given by:
y(t) = phi*y(t-1) + u(t)
Note that |phi| < 1 for a stationary AR(1) model. The stationarity condition depends only on the AR part.
For a stationary AR(p), all roots of the characteristic equation 1 - phi1*z - phi2*z^2 - ... - phip*z^p = 0 should lie outside of the unit circle.
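As a sketch, the stationarity condition can again be checked by solving the characteristic equation; the coefficient phi = 0.9 below is an illustrative choice.
> phi := 0.9:                          # illustrative AR(1) coefficient
> solve(1 - phi*z = 0, z);             # root z = 1/phi = 1.11..., outside the unit circle, so the AR(1) is stationary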
4.1) AutoCovariance(y(t) y(t))
We again note that our Autoregressive equation of order one AR(1) is given by:
y(t) = phi*y(t-1) + u(t)
We now note that we can write the lagged values of the above model as:
y(t-1) = phi*y(t-2) + u(t-1)
y(t-2) = phi*y(t-3) + u(t-2)
y(t-3) = phi*y(t-4) + u(t-3)
and so on.
So if we, for example, start from y(t) and recursively substitute in the previous expressions we get:
y(t) = u(t) + phi*u(t-1) + phi^2*u(t-2) + ... + phi^k*y(t-k)
This is the result of Wold's decomposition theorem, which states that any stationary series can be expressed as the sum of a deterministic part and a stochastic part. We now note that since we assume that our AR(1) equation is stationary, the term
phi^k*y(t-k)
will go to zero if we substitute back far enough, that is, for a large enough k.
For example, since |phi| < 1, phi^k quickly approaches zero as k increases.
This means that we are left with:
y(t) = u(t) + phi*u(t-1) + phi^2*u(t-2) + phi^3*u(t-3) + ...
If we keep substituting back, the expression becomes infinitely long. This means that we can write the above equation as:
y(t) = Sum(phi^i * u(t-i), i = 0 .. infinity)
We now take the expectation on both sides, so we get:
E[y(t)] = E[u(t)] + phi*E[u(t-1)] + phi^2*E[u(t-2)] + ...
This means that E[y(t)] = 0, so the AutoCovariance for y(t) and y(t) reduces to gamma(0) = E[y(t)^2].
If we substitute the moving average expression for y(t) into the equation for gamma(0) we get:
gamma(0) = E[(u(t) + phi*u(t-1) + phi^2*u(t-2) + ... + phi^k*u(t-k))^2]
This means that, after expanding the square, the cross products E[u(t-i)*u(t-j)] with i <> j drop out (Proposition-7) and each squared term contributes E[u(t-i)^2] = sigma^2 (Proposition-6).
This means that:
gamma(0) = sigma^2*(1 + phi^2 + phi^4 + ... + phi^(2k))
We now note that if we multiply both sides by phi^2 we get:
phi^2*gamma(0) = sigma^2*(phi^2 + phi^4 + phi^6 + ... + phi^(2k+2))
We now subtract the second expression from the first, which gives us:
gamma(0) - phi^2*gamma(0) = sigma^2*(1 + phi^2 + ... + phi^(2k)) - sigma^2*(phi^2 + phi^4 + ... + phi^(2k+2))
Which is given by:
gamma(0)*(1 - phi^2) = sigma^2*(1 - phi^(2k+2))
We now note that since we assume stationarity (|phi| < 1), the term phi^(2k+2) becomes negligible if we substitute back far enough.
For example, phi^(2k+2) shrinks geometrically towards zero as k increases.
This means that we can write the above equation as:
gamma(0)*(1 - phi^2) = sigma^2
which means that:
gamma(0) = sigma^2/(1 - phi^2)
4.2) AutoCovariance(y(t) y(t-1))
We now note that, as before, the equation for our First Order Autoregressive model AR(1) can be written as:
y(t) = u(t) + phi*u(t-1) + phi^2*u(t-2) + phi^3*u(t-3) + ...
Again, this is the result of Wold's decomposition theorem, which states that any stationary series can be expressed as the sum of a deterministic part and a stochastic part. We are now interested in the equations for y(t) and y(t-1), which are given by:
y(t) = u(t) + phi*u(t-1) + phi^2*u(t-2) + phi^3*u(t-3) + ...
y(t-1) = u(t-1) + phi*u(t-2) + phi^2*u(t-3) + ...
or as:
y(t) = Sum(phi^i * u(t-i), i = 0 .. infinity)
y(t-1) = Sum(phi^i * u(t-1-i), i = 0 .. infinity)
We again note that since we assume that our AR(1) equation is stationary, the y(t) and y(t-1) expressions converge (the remainder terms phi^k*y(t-k) and phi^k*y(t-1-k) go to zero).
We now take the expectation on both sides: E[y(t)] = 0 and E[y(t-1)] = 0.
We now note that the AutoCovariance for y(t) and y(t-1) is given by:
gamma(1) = E[(y(t) - E[y(t)])*(y(t-1) - E[y(t-1)])]
Since E[y(t)] = 0 and E[y(t-1)] = 0, it means that we can write the autocovariance as:
gamma(1) = E[y(t)*y(t-1)]
So if we substitute in the expressions for y(t) and y(t-1) and multiply we get:
gamma(1) = E[(u(t) + phi*u(t-1) + phi^2*u(t-2) + ...)*(u(t-1) + phi*u(t-2) + phi^2*u(t-3) + ...)]
         = phi*E[u(t-1)^2] + phi^3*E[u(t-2)^2] + phi^5*E[u(t-3)^2] + ...
Again note that all the remaining terms are cross products, whose expectations are always equal to zero (Proposition-7). This gives us:
gamma(1) = phi*sigma^2*(1 + phi^2 + phi^4 + ...) = phi*sigma^2/(1 - phi^2) = phi*gamma(0)
4.3) AutoCovariance(y(t) y(t-2))
We again note that since we assume that our AR(1) equation is stationary, we can substitute the corresponding infinite moving average expressions for y(t) and y(t-2). As before, the cross products drop out and only the terms with matching lags survive, which gives us:
gamma(2) = phi^2*E[u(t-2)^2] + phi^4*E[u(t-3)^2] + ... = phi^2*sigma^2/(1 - phi^2) = phi^2*gamma(0)
4.4) Summary for Our First Order Autoregressive Model
We have in the previous sections calculated the AutoCovariances for y(t), y(t-1), y(t-2), given by:
gamma(0) = sigma^2/(1 - phi^2)
gamma(1) = phi*sigma^2/(1 - phi^2)
gamma(2) = phi^2*sigma^2/(1 - phi^2)
This means that the AutoCorrelation for lag L is rho(L) = gamma(L)/gamma(0) = phi^L.
We can show that this indeed is true by simulating an AR(1) process, as in the sketch below.
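A minimal simulation sketch follows. The choices phi = 0.7 and standard normal errors are illustrative; with these values the theory predicts Var(y) = 1/0.51 = 1.96, rho(1) = 0.7 and rho(2) = 0.49.
> with(Statistics):
> n := 5000:  phi := 0.7:
> u := Sample(Normal(0, 1), n):                  # white-noise errors
> y := Vector(n):
> y[1] := u[1]:
> for t from 2 to n do
>     y[t] := phi*y[t-1] + u[t]:                 # AR(1): y(t) = phi*y(t-1) + u(t)
> end do:
> Variance(y);                                   # close to sigma^2/(1 - phi^2) = 1.96
> Correlation(y[2 .. n], y[1 .. n-1]);           # sample rho(1), close to phi = 0.7
> Correlation(y[3 .. n], y[1 .. n-2]);           # sample rho(2), close to phi^2 = 0.49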
Legal Notice: © Maplesoft, a division of Waterloo Maple Inc. 2009. Maplesoft and Maple are trademarks of Waterloo Maple Inc. Neither Maplesoft nor the authors are responsible for any errors contained within and are not liable for any damages resulting from the use of this material. This application is intended for non-commercial, non-profit use only. Contact the authors for permission if you wish to use this application in for-profit activities.