Introduction to ARMA and GARCH processes
Fulvio Corsi
SNS Pisa
3 March 2010
Stationarity

Strict stationarity:

$(X_1, X_2, \dots, X_n) \stackrel{d}{=} (X_{1+k}, X_{2+k}, \dots, X_{n+k})$ for any integers $n \geq 1$ and $k$

Weak/second-order/covariance stationarity:

$E[X_t] = \mu$
$E[X_t - \mu]^2 = \sigma^2 < +\infty$ (i.e. constant and independent of $t$)
$E[(X_t - \mu)(X_{t+k} - \mu)] = \gamma(|k|)$ (i.e. independent of $t$ for each $k$)

Interpretation:
mean and variance are constant
mean reversion
shocks are transient
covariance between $X_t$ and $X_{t-k}$ tends to 0 as $k \to \infty$
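As a quick numerical check of these definitions (a minimal numpy sketch, not part of the original slides; the parameter values are illustrative), an i.i.d. Gaussian series is strictly stationary, so its sample mean, variance, and autocovariances should match a constant $\mu$, $\sigma^2$, and $\gamma(|k|) = 0$ for $k \neq 0$:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=2.0, size=100_000)   # i.i.d., hence strictly stationary

def sample_autocov(x, k):
    """Sample autocovariance: mean of (x_t - mu)(x_{t+k} - mu)."""
    mu = x.mean()
    if k == 0:
        return np.mean((x - mu) ** 2)
    return np.mean((x[:-k] - mu) * (x[k:] - mu))

print(x.mean())              # ~ mu = 1
print(sample_autocov(x, 0))  # ~ sigma^2 = 4
print(sample_autocov(x, 5))  # ~ 0: i.i.d. noise is uncorrelated at all lags
```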
White noise

Weak (uncorrelated):
$E(\epsilon_t) = 0 \quad \forall t$
$V(\epsilon_t) = \sigma^2 \quad \forall t$
$\rho(\epsilon_t, \epsilon_s) = 0 \quad \forall s \neq t$
where $\rho \equiv \gamma(|t - s|)/\gamma(0)$

Strong (independence):
$\epsilon_t \sim \text{I.I.D.}(0, \sigma^2)$

Gaussian (weak = strong):
$\epsilon_t \sim \text{N.I.D.}(0, \sigma^2)$
Lag operator

The lag operator is defined as:
$L X_t \equiv X_{t-1}$

It is a linear operator:
$L(\beta X_t) = \beta \cdot L X_t = \beta X_{t-1}$
$L(X_t + Y_t) = L X_t + L Y_t = X_{t-1} + Y_{t-1}$

and admits integer powers, for instance:
$L^2 X_t = L(L X_t) = L X_{t-1} = X_{t-2}$
$L^k X_t = X_{t-k}$
$L^{-1} X_t = X_{t+1}$

Some examples:
$\Delta X_t = X_t - X_{t-1} = X_t - L X_t = (1 - L) X_t$
$y_t = (\theta_1 + \theta_2 L) L X_t = (\theta_1 L + \theta_2 L^2) X_t = \theta_1 X_{t-1} + \theta_2 X_{t-2}$

Expressions like
$(\theta_0 + \theta_1 L + \theta_2 L^2 + \dots + \theta_n L^n)$
with possibly $n = \infty$, are called lag polynomials and are denoted $\theta(L)$.
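A lag polynomial is straightforward to implement numerically; here is a minimal sketch (not from the slides; the helper names `lag` and `lag_poly` are ours) that applies $\theta(L)$ to a series, with NaN padding where lagged values are undefined:

```python
import numpy as np

def lag(x, k=1):
    """Apply L^k: (L^k x)_t = x_{t-k}; the first k entries are undefined (NaN)."""
    if k == 0:
        return x.astype(float)
    out = np.full(len(x), np.nan)
    out[k:] = x[:-k]
    return out

def lag_poly(theta, x):
    """Apply theta(L) = theta_0 + theta_1 L + ... + theta_n L^n to x."""
    return sum(t * lag(x, j) for j, t in enumerate(theta))

x = np.arange(1.0, 11.0)
print(lag_poly([1.0, -1.0], x))       # (1 - L) x_t = x_t - x_{t-1}: first differences
print(lag_poly([0.0, 0.5, 0.25], x))  # 0.5 x_{t-1} + 0.25 x_{t-2}
```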
Moving Average (MA) process

The simplest way to construct a stationary process is to use a lag polynomial $\theta(L)$ with $\sum_{j=0}^{\infty} \theta_j^2 < \infty$ to construct a sort of "weighted moving average" of white noises $\epsilon_t$, i.e.

MA(q):
$Y_t = \theta(L)\epsilon_t = \epsilon_t + \theta_1 \epsilon_{t-1} + \theta_2 \epsilon_{t-2} + \dots + \theta_q \epsilon_{t-q}$

Example, MA(1):
$Y_t = \epsilon_t + \theta \epsilon_{t-1} = (1 + \theta L)\epsilon_t$

Since $E[Y_t] = 0$,
$\gamma(0) = E[Y_t Y_t] = E[(\epsilon_t + \theta\epsilon_{t-1})(\epsilon_t + \theta\epsilon_{t-1})] = \sigma^2 (1 + \theta^2)$
$\gamma(1) = E[Y_t Y_{t-1}] = E[(\epsilon_t + \theta\epsilon_{t-1})(\epsilon_{t-1} + \theta\epsilon_{t-2})] = \sigma^2 \theta$
$\gamma(k) = E[Y_t Y_{t-k}] = E[(\epsilon_t + \theta\epsilon_{t-1})(\epsilon_{t-k} + \theta\epsilon_{t-k-1})] = 0 \quad \forall k > 1$

and
$\rho(1) = \gamma(1)/\gamma(0) = \theta/(1 + \theta^2)$
$\rho(k) = \gamma(k)/\gamma(0) = 0 \quad \forall k > 1$

Hence, while a white noise is "0-correlated", an MA(1) is 1-correlated
(i.e. only the first autocorrelation $\rho(1)$ differs from zero).
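To check the MA(1) autocorrelations numerically, here is a short simulation sketch (ours, not from the slides; $\theta = 0.6$ is an arbitrary illustrative value), comparing the sample ACF with $\rho(1) = \theta/(1+\theta^2)$:

```python
import numpy as np

rng = np.random.default_rng(1)
theta, n = 0.6, 200_000

eps = rng.normal(size=n + 1)
y = eps[1:] + theta * eps[:-1]     # MA(1): Y_t = eps_t + theta * eps_{t-1}

def sample_autocorr(x, k):
    x = x - x.mean()
    return np.dot(x[:-k], x[k:]) / np.dot(x, x)

print(sample_autocorr(y, 1), theta / (1 + theta**2))  # both ~ 0.441
print(sample_autocorr(y, 2))                          # ~ 0: the MA(1) is 1-correlated
```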
Properties of MA(q)

In general, for an MA(q) process we have (with $\theta_0 = 1$)
$\gamma(0) = \sigma^2 (1 + \theta_1^2 + \theta_2^2 + \dots + \theta_q^2)$
$\gamma(k) = \sigma^2 \sum_{j=0}^{q-k} \theta_j \theta_{j+k} \quad \forall k \leq q$
$\gamma(k) = 0 \quad \forall k > q$

and
$\rho(k) = \dfrac{\sum_{j=0}^{q-k} \theta_j \theta_{j+k}}{1 + \sum_{j=1}^{q} \theta_j^2} \quad \forall k \leq q$
$\rho(k) = 0 \quad \forall k > q$

Hence, an MA(q) is q-correlated, and it can also be shown that any stationary q-correlated process can be represented as an MA(q).

Wold Theorem: any zero-mean covariance stationary process can be represented in the form MA(∞) + deterministic component (the two being uncorrelated).

But, given a q-correlated process, is the MA(q) process unique? In general no; indeed it can be shown that for a q-correlated process there are $2^q$ possible MA(q) representations with the same autocovariance structure. However, there is only one MA(q) which is invertible.
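The ACF formulas above translate directly into code; a small sketch (ours, with $\theta_0 = 1$ prepended as in the formulas):

```python
import numpy as np

def ma_acf(theta, max_lag):
    """Theoretical ACF of an MA(q): rho(k) = sum_j theta_j*theta_{j+k} / sum_j theta_j^2,
    with theta_0 = 1 and rho(k) = 0 for k > q."""
    th = np.r_[1.0, np.asarray(theta, dtype=float)]
    q = len(th) - 1
    denom = np.dot(th, th)
    rho = np.zeros(max_lag + 1)
    for k in range(max_lag + 1):
        if k <= q:
            rho[k] = np.dot(th[:q - k + 1], th[k:]) / denom
    return rho

print(ma_acf([0.6], 3))       # MA(1): [1, 0.441, 0, 0]
print(ma_acf([0.4, 0.2], 4))  # MA(2): [1, 0.4, 0.167, 0, 0] -- cuts off after lag q
```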
Invertibility conditions for MA

First consider the MA(1) case:
$Y_t = (1 + \theta L)\epsilon_t$

Given the result
$(1 + \theta L)^{-1} = (1 - \theta L + \theta^2 L^2 - \theta^3 L^3 + \theta^4 L^4 + \dots) = \sum_{i=0}^{\infty} (-\theta L)^i$

inverting the $\theta(L)$ lag polynomial, we can write
$(1 - \theta L + \theta^2 L^2 - \theta^3 L^3 + \theta^4 L^4 + \dots) Y_t = \epsilon_t$

which can be considered an AR(∞) process.

If an MA process can be written as an AR(∞) of this type, the MA representation is said to be invertible. For an MA(1) process the invertibility condition is $|\theta| < 1$.

For a general MA(q) process
$Y_t = (1 + \theta_1 L + \theta_2 L^2 + \dots + \theta_q L^q)\epsilon_t$
the invertibility condition is that the roots of the lag polynomial equation
$1 + \theta_1 z + \theta_2 z^2 + \dots + \theta_q z^q = 0$
lie outside the unit circle. Then the MA(q) can be written as an AR(∞) by inverting $\theta(L)$.

Invertibility also has important practical consequences in applications. In fact, given that the $\epsilon_t$ are not observable, they have to be reconstructed from the observed $Y$'s through the AR(∞) representation.
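The root condition is easy to verify numerically; a sketch (ours) using `np.roots` on $1 + \theta_1 z + \dots + \theta_q z^q$:

```python
import numpy as np

def ma_invertible(theta):
    """True if all roots of 1 + theta_1 z + ... + theta_q z^q = 0 lie outside the unit circle."""
    coeffs = np.r_[np.asarray(theta, dtype=float)[::-1], 1.0]  # highest power first for np.roots
    return np.all(np.abs(np.roots(coeffs)) > 1.0)

print(ma_invertible([0.5]))  # True:  root z = -2 is outside the unit circle
print(ma_invertible([2.0]))  # False: root z = -0.5 is inside
```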
Auto-Regressive (AR) process

A general AR process is defined as
$\phi(L) Y_t = \epsilon_t$
It is always invertible but not always stationary.

Example: AR(1)
$(1 - \phi L) Y_t = \epsilon_t$ or $Y_t = \phi Y_{t-1} + \epsilon_t$

By inverting the lag polynomial $(1 - \phi L)$, the AR(1) can be written as
$Y_t = (1 - \phi L)^{-1} \epsilon_t = \sum_{i=0}^{\infty} (\phi L)^i \epsilon_t = \sum_{i=0}^{\infty} \phi^i \epsilon_{t-i} = \text{MA}(\infty)$
hence the stationarity condition is $|\phi| < 1$.

From this representation we can apply the general MA formulas to compute $\gamma(\cdot)$ and $\rho(\cdot)$. In particular,
$\rho(k) = \phi^{|k|} \quad \forall k$
i.e. monotonic exponential decay for $\phi > 0$ and exponentially damped oscillatory decay for $\phi < 0$.

In general, an AR(p) process
$Y_t = \phi_1 Y_{t-1} + \phi_2 Y_{t-2} + \dots + \phi_p Y_{t-p} + \epsilon_t$
is stationary if all the roots of the characteristic equation of the lag polynomial
$1 - \phi_1 z - \phi_2 z^2 - \dots - \phi_p z^p = 0$
are outside the unit circle.
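A simulation sketch (ours; $\phi = 0.8$ is an illustrative value) checking the exponential ACF decay $\rho(k) = \phi^{|k|}$:

```python
import numpy as np

rng = np.random.default_rng(2)
phi, n = 0.8, 200_000

eps = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):          # Y_t = phi * Y_{t-1} + eps_t
    y[t] = phi * y[t - 1] + eps[t]

yc = y - y.mean()
for k in (1, 2, 5):
    rho_k = np.dot(yc[:-k], yc[k:]) / np.dot(yc, yc)
    print(k, rho_k, phi**k)    # sample ACF ~ phi^k
```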
State Space Representation of AR(p)

To gain more intuition on the AR stationarity conditions, write an AR(p) in its state space (companion) form:

$$\underbrace{\begin{pmatrix} Y_t \\ Y_{t-1} \\ \vdots \\ Y_{t-p+1} \end{pmatrix}}_{X_t} = \underbrace{\begin{pmatrix} \phi_1 & \phi_2 & \phi_3 & \dots & \phi_{p-1} & \phi_p \\ 1 & 0 & 0 & \dots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \dots & 1 & 0 \end{pmatrix}}_{F} \underbrace{\begin{pmatrix} Y_{t-1} \\ Y_{t-2} \\ \vdots \\ Y_{t-p} \end{pmatrix}}_{X_{t-1}} + \underbrace{\begin{pmatrix} \epsilon_t \\ 0 \\ \vdots \\ 0 \end{pmatrix}}_{v_t}$$

Hence, the expectation of $X_t$ given the past satisfies
$E[X_t | X_{t-1}] = F X_{t-1}$ and $E[X_{t+j} | X_{t-1}] = F^{j+1} X_{t-1}$,
a linear map in $\mathbb{R}^p$ whose dynamic properties are given by the eigenvalues of the matrix $F$.

The eigenvalues of $F$ are found by solving the characteristic equation
$\lambda^p - \phi_1 \lambda^{p-1} - \phi_2 \lambda^{p-2} - \dots - \phi_{p-1} \lambda - \phi_p = 0.$

Comparing this with the characteristic equation of the lag polynomial $\phi(L)$
$1 - \phi_1 z - \phi_2 z^2 - \dots - \phi_{p-1} z^{p-1} - \phi_p z^p = 0$

we see that the roots of the two equations satisfy
$z_1 = \lambda_1^{-1}, \quad z_2 = \lambda_2^{-1}, \quad \dots, \quad z_p = \lambda_p^{-1}$
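A numerical check (our sketch, with arbitrary AR(2) coefficients) that the eigenvalues of the companion matrix $F$ are the reciprocals of the roots of $\phi(z) = 0$:

```python
import numpy as np

def companion(phis):
    """Companion matrix F of an AR(p) with coefficients phi_1..phi_p."""
    p = len(phis)
    F = np.zeros((p, p))
    F[0, :] = phis
    F[1:, :-1] = np.eye(p - 1)
    return F

phis = [0.5, 0.3]                    # AR(2)
F = companion(phis)
eigvals = np.linalg.eigvals(F)
# roots of 1 - phi_1 z - phi_2 z^2 = 0 (np.roots wants highest power first)
roots = np.roots([-phis[1], -phis[0], 1.0])
print(np.sort(np.abs(eigvals)))      # |lambda_i|
print(np.sort(1 / np.abs(roots)))    # |1/z_i| -- same values
print(np.all(np.abs(eigvals) < 1))   # True: this AR(2) is stationary
```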
ARMA(p,q)

An ARMA(p,q) process is defined as
$\phi(L) Y_t = \theta(L) \epsilon_t$
where $\phi(\cdot)$ and $\theta(\cdot)$ are p-th and q-th order lag polynomials.

The process is stationary if all the roots of
$\phi(z) \equiv 1 - \phi_1 z - \phi_2 z^2 - \dots - \phi_{p-1} z^{p-1} - \phi_p z^p = 0$
lie outside the unit circle; hence it admits the MA(∞) representation:
$Y_t = \phi(L)^{-1} \theta(L) \epsilon_t$

The process is invertible if all the roots of
$\theta(z) \equiv 1 + \theta_1 z + \theta_2 z^2 + \dots + \theta_q z^q = 0$
lie outside the unit circle; hence it admits the AR(∞) representation:
$\epsilon_t = \theta(L)^{-1} \phi(L) Y_t$
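Numerically, an ARMA(p,q) is just a rational lag-polynomial filter applied to white noise. A minimal sketch (ours) using `scipy.signal.lfilter`, whose `(b, a)` arguments play the role of the MA polynomial $\theta(L)$ and the AR polynomial $\phi(L)$:

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(3)
phi, theta = 0.7, 0.4
eps = rng.normal(size=100_000)

# phi(L) Y_t = theta(L) eps_t  <=>  Y_t = phi*Y_{t-1} + eps_t + theta*eps_{t-1}
y = lfilter([1.0, theta], [1.0, -phi], eps)
print(y[:5])
```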
Estimation

For an AR(p)
$Y_t = \phi_1 Y_{t-1} + \phi_2 Y_{t-2} + \dots + \phi_p Y_{t-p} + \epsilon_t$
OLS is consistent and, under Gaussianity, asymptotically equivalent to MLE
→ asymptotically efficient.

For example, the full likelihood of an AR(1) can be written as
$L(\theta) \equiv f(y_T, y_{T-1}, \dots, y_1; \theta) = \underbrace{f_{Y_1}(y_1; \theta)}_{\text{marginal first obs}} \cdot \underbrace{\prod_{t=2}^{T} f_{Y_t|Y_{t-1}}(y_t|y_{t-1}; \theta)}_{\text{conditional likelihood}}$
Under normality, OLS = MLE.

For a general ARMA(p,q)
$Y_t = \phi_1 Y_{t-1} + \dots + \phi_p Y_{t-p} + \epsilon_t + \theta_1 \epsilon_{t-1} + \dots + \theta_q \epsilon_{t-q}$
$Y_{t-1}$ is correlated with $\epsilon_{t-1}, \dots, \epsilon_{t-q}$ ⇒ OLS is not consistent
→ MLE with numerical optimization procedures.
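For the AR(1) case the OLS estimator is a one-line computation; a simulation sketch (ours, $\phi = 0.7$ illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
phi_true, n = 0.7, 50_000

eps = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi_true * y[t - 1] + eps[t]

# OLS regression of Y_t on Y_{t-1}: phi_hat = sum(Y_t * Y_{t-1}) / sum(Y_{t-1}^2)
x, z = y[:-1], y[1:]
phi_hat = np.dot(x, z) / np.dot(x, x)
print(phi_hat)   # ~ 0.7
```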
Prediction

Write the model in its AR(∞) representation:
$\eta(L)(Y_t - \mu) = \epsilon_t$

Then the optimal prediction of $Y_{t+s}$ is given by
$E[Y_{t+s} | Y_t, Y_{t-1}, \dots] = \mu + \left[ \dfrac{\eta(L)^{-1}}{L^s} \right]_+ \eta(L)(Y_t - \mu)$
with $\left[ L^k \right]_+ = 0$ for $k < 0$,
which is known as the Wiener-Kolmogorov prediction formula.

In the case of an AR(p) process, the prediction formula can also be written as
$E[Y_{t+s} | Y_t, Y_{t-1}, \dots] = \mu + f_{11}^{(s)} (Y_t - \mu) + f_{12}^{(s)} (Y_{t-1} - \mu) + \dots + f_{1p}^{(s)} (Y_{t-p+1} - \mu)$
where $f_{1i}^{(j)}$ is the element $(1, i)$ of the matrix $F^j$.

The easiest way to compute predictions from an AR(p) model is, however, through recursive methods.
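A sketch of such a recursive method (our helper `ar_forecast`, with hypothetical inputs): future innovations are set to their expectation of zero and the AR recursion is iterated $s$ times:

```python
import numpy as np

def ar_forecast(y, phis, mu, s):
    """s-step-ahead AR(p) forecast by recursion:
    E[Y_{t+h}] = mu + sum_i phi_i (E[Y_{t+h-i}] - mu), with future eps set to 0."""
    p = len(phis)
    hist = list(y[-p:])                  # last p observations, oldest first
    for _ in range(s):
        dev = sum(phi * (hist[-1 - i] - mu) for i, phi in enumerate(phis))
        hist.append(mu + dev)
    return hist[-1]

# hypothetical AR(2) example
y = np.array([0.1, 0.3, 0.2, 0.5])
print(ar_forecast(y, phis=[0.5, 0.2], mu=0.0, s=3))
```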
Box-Jenkins Approach

Check for stationarity: if not stationary, try a transformation (e.g. differencing → ARIMA models).

Identification:
check the autocorrelation function (ACF): a q-correlated process indicates an MA(q) model
check the partial autocorrelation function (PACF):
for an AR(p) process, while the k-lag ACF can be interpreted as the simple regression $Y_t = \rho(k) Y_{t-k} + \text{error}$, the k-lag PACF can be seen as the multiple regression
$Y_t = b_1 Y_{t-1} + b_2 Y_{t-2} + \dots + b_k Y_{t-k} + \text{error}$
It can be computed by solving the Yule-Walker system (see the sketch after this slide):

$$\begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_k \end{pmatrix} = \begin{pmatrix} \gamma(0) & \gamma(1) & \dots & \gamma(k-1) \\ \gamma(1) & \gamma(0) & \dots & \gamma(k-2) \\ \vdots & \vdots & \ddots & \vdots \\ \gamma(k-1) & \gamma(k-2) & \dots & \gamma(0) \end{pmatrix}^{-1} \begin{pmatrix} \gamma(1) \\ \gamma(2) \\ \vdots \\ \gamma(k) \end{pmatrix}$$

Importantly, AR(p) processes are "p-partially correlated" ⇒ identification of the AR order.

Validation: check the appropriateness of the model by some measure of fit:
$\text{AIC (Akaike)} = T \log \hat{\sigma}_e^2 + 2m$
$\text{BIC (Schwarz)} = T \log \hat{\sigma}_e^2 + m \log T$
with $\hat{\sigma}_e^2$ the estimated error variance, $m = p + q + 1$ the number of parameters, and $T$ the number of observations.

Diagnostic checking of the residuals.
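A sketch (ours) of the PACF computed exactly as above: for each $k$, solve the Yule-Walker system and keep the last coefficient $b_k$:

```python
import numpy as np
from scipy.linalg import toeplitz

def pacf_yule_walker(x, max_lag):
    """PACF via Yule-Walker: for each k, solve Gamma_k b = (gamma(1),...,gamma(k))'
    and take b_k as the k-lag partial autocorrelation."""
    x = x - x.mean()
    n = len(x)
    gamma = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(max_lag + 1)])
    pacf = []
    for k in range(1, max_lag + 1):
        Gamma = toeplitz(gamma[:k])          # k x k autocovariance matrix
        b = np.linalg.solve(Gamma, gamma[1:k + 1])
        pacf.append(b[-1])
    return np.array(pacf)

# for an AR(1) with phi = 0.8, the PACF is ~0.8 at lag 1 and ~0 beyond
rng = np.random.default_rng(5)
eps = rng.normal(size=100_000)
y = np.zeros_like(eps)
for t in range(1, len(y)):
    y[t] = 0.8 * y[t - 1] + eps[t]
print(pacf_yule_walker(y, 3))
```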
ARIMA

Integrated ARMA model:

ARIMA(p,1,q) denotes a nonstationary process $Y_t$ whose first difference
$Y_t - Y_{t-1} = (1 - L) Y_t$ is a stationary ARMA(p,q) process
⇓
$Y_t$ is said to be integrated of order 1, or I(1).

If two differencings of $Y_t$ are necessary to get a stationary process, i.e. $(1 - L)^2 Y_t$
⇓
the process $Y_t$ is said to be integrated of order 2, or I(2).

I(0) indicates a stationary process.
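A one-line illustration (ours): a random walk is I(1), and one differencing recovers the stationary noise:

```python
import numpy as np

rng = np.random.default_rng(6)
eps = rng.normal(size=10_000)
y = np.cumsum(eps)                 # random walk: nonstationary, I(1)
dy = np.diff(y)                    # (1 - L) Y_t
print(np.allclose(dy, eps[1:]))    # True: the first difference is white noise again
```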
ARFIMA

The n-difference operator $(1 - L)^n$ with integer $n$ can be generalized to a fractional difference operator $(1 - L)^d$ with $0 < d < 1$, defined by the binomial expansion
$(1 - L)^d = 1 - dL + d(d - 1)L^2/2! - d(d - 1)(d - 2)L^3/3! + \dots$
obtaining a fractionally integrated process of order $d$, i.e. I(d).

If $d < 0.5$ the process is covariance stationary and admits an AR(∞) representation.

The usefulness of the fractional filter $(1 - L)^d$ is that it produces hyperbolically decaying autocorrelations, i.e. the so-called long memory. In fact, for ARFIMA(p,d,q) processes
$\phi(L)(1 - L)^d Y_t = \theta(L)\epsilon_t$
the autocorrelation function is proportional to
$\rho(k) \approx c\, k^{2d-1}$
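The binomial expansion gives a simple recursion for the weights of $(1 - L)^d$; a sketch (ours, $d = 0.4$ illustrative):

```python
import numpy as np

def frac_diff_weights(d, n_weights):
    """Coefficients of (1 - L)^d from the binomial expansion:
    w_0 = 1, w_j = w_{j-1} * (j - 1 - d) / j, i.e. w_j = (-1)^j C(d, j)."""
    w = np.empty(n_weights)
    w[0] = 1.0
    for j in range(1, n_weights):
        w[j] = w[j - 1] * (j - 1 - d) / j
    return w

print(frac_diff_weights(0.4, 5))
# [1, -0.4, -0.12, -0.064, -0.0416] -- i.e. 1 - dL + d(d-1)L^2/2! - ...
```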
ARCH and GARCH models
Basic Structure and Properties of ARMA models

Standard time series models have:
$Y_t = E[Y_t | \Omega_{t-1}] + \epsilon_t$
$E[Y_t | \Omega_{t-1}] = f(\Omega_{t-1}; \theta)$
$\text{Var}[Y_t | \Omega_{t-1}] = E[\epsilon_t^2 | \Omega_{t-1}] = \sigma^2$

hence,
Conditional mean: varies with $\Omega_{t-1}$
Conditional variance: constant (unfortunately)
k-step-ahead mean forecasts: generally depend on $\Omega_{t-1}$
k-step-ahead variance forecasts: depend only on k, not on $\Omega_{t-1}$ (again unfortunately)
Unconditional mean: constant
Unconditional variance: constant
AutoRegressive Conditional Heteroskedasticity (ARCH) model

Engle (1982, Econometrica) introduced the ARCH models:
$Y_t = E[Y_t | \Omega_{t-1}] + \epsilon_t$
$E[Y_t | \Omega_{t-1}] = f(\Omega_{t-1}; \theta)$
$\text{Var}[Y_t | \Omega_{t-1}] = E[\epsilon_t^2 | \Omega_{t-1}] = \sigma(\Omega_{t-1}; \theta) \equiv \sigma_t^2$

hence,
Conditional mean: varies with $\Omega_{t-1}$
Conditional variance: varies with $\Omega_{t-1}$
k-step-ahead mean forecasts: generally depend on $\Omega_{t-1}$
k-step-ahead variance forecasts: generally depend on $\Omega_{t-1}$
Unconditional mean: constant
Unconditional variance: constant
ARCH(q)

How to parameterize $E[\epsilon_t^2 | \Omega_{t-1}] = \sigma(\Omega_{t-1}; \theta) \equiv \sigma_t^2$?

ARCH(q) postulates that the conditional variance is a linear function of the past q squared innovations:
$\sigma_t^2 = \omega + \sum_{i=1}^{q} \alpha_i \epsilon_{t-i}^2 = \omega + \alpha(L)\epsilon_{t-1}^2$

Defining $v_t = \epsilon_t^2 - \sigma_t^2$, the ARCH(q) model can be written as
$\epsilon_t^2 = \omega + \alpha(L)\epsilon_{t-1}^2 + v_t$

Since $E_{t-1}(v_t) = 0$, the model corresponds directly to an AR(q) model for the squared innovations $\epsilon_t^2$.

The process is covariance stationary if and only if the sum of the (positive) AR parameters is less than 1. Then the unconditional variance of $\epsilon_t$ is
$\text{Var}(\epsilon_t) = \sigma^2 = \omega / (1 - \alpha_1 - \alpha_2 - \dots - \alpha_q).$
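A simulation sketch of an ARCH(1) (ours; $\omega = 0.2$, $\alpha = 0.5$ illustrative) checking the unconditional variance $\omega/(1-\alpha)$:

```python
import numpy as np

rng = np.random.default_rng(7)
omega, alpha, n = 0.2, 0.5, 200_000

eps = np.zeros(n)
sig2 = np.zeros(n)
sig2[0] = omega / (1 - alpha)                  # start at the unconditional variance
for t in range(1, n):
    sig2[t] = omega + alpha * eps[t - 1]**2    # ARCH(1) conditional variance
    eps[t] = np.sqrt(sig2[t]) * rng.normal()

print(eps.var(), omega / (1 - alpha))          # both ~ 0.4
```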
AR(1)-ARCH(1)

Example: the AR(1)-ARCH(1) model
$Y_t = \phi Y_{t-1} + \epsilon_t$
$\sigma_t^2 = \omega + \alpha \epsilon_{t-1}^2$
$\epsilon_t \sim N(0, \sigma_t^2)$

Conditional mean: $E(Y_t | \Omega_{t-1}) = \phi Y_{t-1}$
Conditional variance: $E([Y_t - E(Y_t | \Omega_{t-1})]^2 | \Omega_{t-1}) = \omega + \alpha \epsilon_{t-1}^2$
Unconditional mean: $E(Y_t) = 0$
Unconditional variance: $E(Y_t - E(Y_t))^2 = \dfrac{\omega}{(1 - \phi^2)(1 - \alpha)}$

Note that the unconditional distribution of $\epsilon_t$ has fat tails. In fact, for the unconditional kurtosis $E(\epsilon_t^4)/E(\epsilon_t^2)^2$ we have
$E[\epsilon_t^4] = E[E(\epsilon_t^4 | \Omega_{t-1})] = 3E[\sigma_t^4] = 3[\text{Var}(\sigma_t^2) + E(\sigma_t^2)^2] = 3[\underbrace{\text{Var}(\sigma_t^2)}_{>0} + E(\epsilon_t^2)^2] > 3E(\epsilon_t^2)^2.$
Hence,
$\text{Kurtosis}(\epsilon_t) = E(\epsilon_t^4)/E(\epsilon_t^2)^2 > 3$
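A simulation check (ours) of the fat tails: with $\alpha = 0.5$ the fourth moment exists (since $3\alpha^2 < 1$) and the sample kurtosis comes out well above the Gaussian value of 3:

```python
import numpy as np

rng = np.random.default_rng(8)
omega, alpha, n = 0.2, 0.5, 1_000_000

eps = np.zeros(n)
for t in range(1, n):
    sig2 = omega + alpha * eps[t - 1]**2       # ARCH(1) conditional variance
    eps[t] = np.sqrt(sig2) * rng.normal()

kurt = np.mean(eps**4) / np.mean(eps**2)**2
print(kurt)   # well above 3, even though eps_t | past is Gaussian
```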
GARCH(p,q)

Problem: empirical volatility is very persistent ⇒ large q, i.e. too many α's.

Bollerslev (1986, J. of Econometrics) proposed the Generalized ARCH model. The GARCH(p,q) is defined as
$\sigma_t^2 = \omega + \sum_{i=1}^{q} \alpha_i \epsilon_{t-i}^2 + \sum_{j=1}^{p} \beta_j \sigma_{t-j}^2 = \omega + \alpha(L)\epsilon_{t-1}^2 + \beta(L)\sigma_{t-1}^2$

As before, the GARCH(p,q) can also be rewritten as
$\epsilon_t^2 = \omega + [\alpha(L) + \beta(L)] \epsilon_{t-1}^2 - \beta(L) v_{t-1} + v_t$
which defines an ARMA[max(p,q), p] model for $\epsilon_t^2$.
GARCH(1,1)

By far the most commonly used is the GARCH(1,1):
$\sigma_t^2 = \omega + \alpha \epsilon_{t-1}^2 + \beta \sigma_{t-1}^2$
with $\omega > 0$, $\alpha > 0$, $\beta > 0$.

By recursive substitution, the GARCH(1,1) may be written as the following ARCH(∞):
$\sigma_t^2 = \dfrac{\omega}{1 - \beta} + \alpha \sum_{i=1}^{\infty} \beta^{i-1} \epsilon_{t-i}^2$
which reduces to an exponentially weighted moving average filter for $\omega = 0$ and $\alpha + \beta = 1$ (sometimes referred to as Integrated GARCH, or IGARCH(1,1)).

Moreover, the GARCH(1,1) implies an ARMA(1,1) representation for $\epsilon_t^2$:
$\epsilon_t^2 = \omega + [\alpha + \beta]\epsilon_{t-1}^2 - \beta v_{t-1} + v_t$

Forecasting. Denoting the unconditional variance $\sigma^2 \equiv \omega(1 - \alpha - \beta)^{-1}$, we have:
$\hat{\sigma}_{t+h|t}^2 = \sigma^2 + (\alpha + \beta)^{h-1} (\sigma_{t+1}^2 - \sigma^2)$
showing that forecasts of the conditional variance revert to the long-run unconditional variance at an exponential rate dictated by $\alpha + \beta$.
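A simulation sketch (ours; parameter values illustrative but typical of daily returns) of the GARCH(1,1) recursion and its mean-reverting variance forecast:

```python
import numpy as np

rng = np.random.default_rng(9)
omega, alpha, beta, n = 0.05, 0.08, 0.90, 100_000
sigma2_bar = omega / (1 - alpha - beta)        # unconditional variance = 2.5

eps = np.zeros(n)
sig2 = np.full(n, sigma2_bar)
for t in range(1, n):
    sig2[t] = omega + alpha * eps[t - 1]**2 + beta * sig2[t - 1]
    eps[t] = np.sqrt(sig2[t]) * rng.normal()

print(eps.var(), sigma2_bar)                   # both ~ 2.5

# h-step-ahead variance forecast: mean reversion at rate alpha + beta
sig2_next = omega + alpha * eps[-1]**2 + beta * sig2[-1]    # sigma^2_{t+1}
for h in (1, 10, 100):
    fc = sigma2_bar + (alpha + beta)**(h - 1) * (sig2_next - sigma2_bar)
    print(h, fc)
```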
Asymmetric GARCH

In the standard GARCH model:
$\sigma_t^2 = \omega + \alpha r_{t-1}^2 + \beta \sigma_{t-1}^2$
$\sigma_t^2$ responds symmetrically to past returns: the so-called "news impact curve" (conditional variance as a function of the standardized lagged shock) is a parabola.

[Figure: news impact curves of symmetric vs asymmetric GARCH; conditional variance plotted against standardized lagged shocks]

Empirically, negative $r_{t-1}$ impact volatility more than positive ones → asymmetric news impact curve.

GJR or T-GARCH:
$\sigma_t^2 = \omega + \alpha r_{t-1}^2 + \gamma r_{t-1}^2 D_{t-1} + \beta \sigma_{t-1}^2$
with
$D_t = 1$ if $r_t < 0$, and $D_t = 0$ otherwise.

- Positive returns (good news): $\alpha$
- Negative returns (bad news): $\alpha + \gamma$
- Empirically $\gamma > 0$ → "leverage effect"

Exponential GARCH (EGARCH):
$\ln(\sigma_t^2) = \omega + \alpha \dfrac{r_{t-1}}{\sigma_{t-1}} + \gamma \left| \dfrac{r_{t-1}}{\sigma_{t-1}} \right| + \beta \ln(\sigma_{t-1}^2)$
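A sketch (ours, with hypothetical parameter values) of the GJR news impact curve, holding $\sigma_{t-1}^2$ fixed at its unconditional level:

```python
import numpy as np

def news_impact(r_lag, omega, alpha, beta, gamma=0.0, sigma2_bar=1.0):
    """News impact curve: sigma_t^2 as a function of the lagged shock r_{t-1},
    with sigma^2_{t-1} held fixed (GJR/T-GARCH form; gamma=0 gives plain GARCH)."""
    D = (r_lag < 0).astype(float)            # bad-news indicator D_{t-1}
    return omega + alpha * r_lag**2 + gamma * r_lag**2 * D + beta * sigma2_bar

r = np.linspace(-5, 5, 11)
sym = news_impact(r, omega=0.1, alpha=0.10, beta=0.85)               # parabola
asym = news_impact(r, omega=0.1, alpha=0.05, beta=0.85, gamma=0.10)  # steeper for r < 0
print(sym[0], sym[-1])     # equal: symmetric response to +/- shocks
print(asym[0], asym[-1])   # larger for negative shocks: leverage effect
```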
Estimation

A GARCH process with Gaussian innovations:
$r_t | \Omega_{t-1} \sim N(\mu_t(\theta), \sigma_t^2(\theta))$
has conditional densities:
$f(r_t | \Omega_{t-1}; \theta) = \dfrac{1}{\sqrt{2\pi}} \sigma_t^{-1}(\theta) \exp\left( -\dfrac{1}{2} \dfrac{(r_t - \mu_t(\theta))^2}{\sigma_t^2(\theta)} \right)$

Using the "prediction-error" decomposition of the likelihood
$L(r_T, r_{T-1}, \dots, r_1; \theta) = f(r_T | \Omega_{T-1}; \theta) \times f(r_{T-1} | \Omega_{T-2}; \theta) \times \dots \times f(r_1 | \Omega_0; \theta)$
the log-likelihood becomes:
$\log L(r_T, r_{T-1}, \dots, r_1; \theta) = -\dfrac{T}{2} \log(2\pi) - \sum_{t=1}^{T} \log \sigma_t(\theta) - \dfrac{1}{2} \sum_{t=1}^{T} \dfrac{(r_t - \mu_t(\theta))^2}{\sigma_t^2(\theta)}$

Non-linear function of $\theta$ ⇒ numerical optimization techniques.
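A sketch (ours) of that numerical MLE for a zero-mean Gaussian GARCH(1,1), using `scipy.optimize.minimize` on the negative log-likelihood above; the initialization at the sample variance and the crude constraint handling are common practical choices, not from the slides:

```python
import numpy as np
from scipy.optimize import minimize

def garch11_neg_loglik(params, r):
    """Negative Gaussian log-likelihood of a zero-mean GARCH(1,1),
    i.e. minus the slide's log L with mu_t(theta) = 0."""
    omega, alpha, beta = params
    if omega <= 0 or alpha < 0 or beta < 0 or alpha + beta >= 1:
        return np.inf                      # crude positivity/stationarity constraints
    sig2 = np.empty_like(r)
    sig2[0] = r.var()                      # initialize at the sample variance
    for t in range(1, len(r)):
        sig2[t] = omega + alpha * r[t - 1]**2 + beta * sig2[t - 1]
    return 0.5 * np.sum(np.log(2 * np.pi) + np.log(sig2) + r**2 / sig2)

# simulate a GARCH(1,1) and roughly recover its parameters by numerical MLE
rng = np.random.default_rng(10)
omega, alpha, beta, n = 0.05, 0.08, 0.90, 10_000
r, sig2 = np.zeros(n), np.full(n, omega / (1 - alpha - beta))
for t in range(1, n):
    sig2[t] = omega + alpha * r[t - 1]**2 + beta * sig2[t - 1]
    r[t] = np.sqrt(sig2[t]) * rng.normal()

res = minimize(garch11_neg_loglik, x0=[0.1, 0.1, 0.8], args=(r,), method="Nelder-Mead")
print(res.x)   # ~ [0.05, 0.08, 0.90]
```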