Chapter 6: Multivariate Cointegration Analysis


Lehrstuhl für Empirische Wirtschaftsforschung und Ökonometrie (Department of Empirical Research and Econometrics)

Contents:
VI. Multivariate Cointegration Analysis: The Johansen Test
VI.1 The Simplest Case: p = 1, VAR(1)
VI.2 The VAR(p) Model
VI.3 Model Specification
VI.4 Testing the Rank of Cointegration: An Example
VI. Multivariate Cointegration Analysis: The Johansen Test

VI.1 The Simplest Case: p = 1, VAR(1)

For example, consider a three-dimensional vector Y consisting of the three-month interest rates for the US dollar, the euro and the yen. Among these three I(1) variables we can find up to two cointegrating relations, due to interest rate parity and stationary expected changes in the exchange rates, for instance:

z_{1t} = y_{1t} - y_{2t},   z_{2t} = y_{1t} - y_{3t}
As we have seen before, we have a VAR(1) model for the M I(1) variables in levels. In this simple case we can write:

Y_t = µ + Γ Y_{t-1} + ε_t

where Y_t, µ and ε_t are (M×1) vectors and Γ is an (M×M) matrix.
By subtracting the lagged vector Y_{t-1} from both sides of the equation we obtain the following relation:

Y_t - Y_{t-1} = µ + Γ Y_{t-1} - Y_{t-1} + ε_t

or

ΔY_t = µ + (Γ - I) Y_{t-1} + ε_t

In this equation we have an I(0) vector on the left-hand side. On the right-hand side there is a vector of constants as well as another I(0) vector ε_t. Thus, the term (Γ - I) Y_{t-1} must also be I(0). If the variables are not cointegrated, the matrix Γ must be the identity matrix I. If, on the other hand, there exist r cointegrating relations (z is an (r×1) vector), this term can be written as an I(0) variable:

(Γ - I) Y_{t-1} = λ γ' Y_{t-1} = λ z_{t-1}

where γ' is the (r×M) matrix of cointegration coefficients and λ is an (M×r) matrix.
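The reduced-rank structure above can be sketched numerically: build Γ = I + λγ' from hypothetical loadings λ and cointegrating vectors γ (all numbers invented for illustration) and confirm that Γ - I has rank r.

```python
import numpy as np

# Hypothetical r = 2 cointegrating vectors for M = 3 interest rates,
# e.g. the two spreads y1 - y2 and y1 - y3 (gamma is r x M).
gamma = np.array([[1.0, -1.0, 0.0],
                  [1.0, 0.0, -1.0]])

# Hypothetical error-correction loadings (lam is M x r).
lam = np.array([[-0.3, -0.1],
                [0.2, 0.0],
                [0.0, 0.25]])

# Implied VAR(1) coefficient matrix: Gamma = I + lam @ gamma'.
Gamma = np.eye(3) + lam @ gamma

# The rank of (Gamma - I) equals the number of cointegrating relations.
print(np.linalg.matrix_rank(Gamma - np.eye(3)))  # -> 2
```

Since (Γ - I) is the product of an (M×r) and an (r×M) matrix of full rank, its rank is exactly r = 2 here.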
Multiplying λ by the cointegration matrix γ' yields the (M×M) matrix (Γ - I). This term is I(0), and λ can be interpreted as the (M×r) matrix of error correction coefficients:

ΔY_t = µ + λ z_{t-1} + ε_t

This model is a generalization of the ECM of the previous section. In the case of a VAR(1) model, no lagged differences appear in the error correction model. If the initial model is a VAR(p) model, the error correction representation additionally contains (p-1) difference terms. Since the matrix (Γ - I) can be represented as the product of an (M×r) and an (r×M) matrix, it has rank r. This means that the number of cointegrating relations is determined by the rank of this matrix. In the marginal case r = 0, i.e. Γ = I, the model reduces to a VAR model in differences (M independent random walks). If r equals M, we are dealing with M stationary level variables, I(0).
The approach of Johansen is based on maximum likelihood estimation of the matrix (Γ - I) under the assumption of normally distributed errors. After estimation, the hypotheses r = 0, r = 1, ..., r = M-1 are tested using likelihood ratio (LR) tests. In the formulation of a VAR(p) model we obtain the equation:

Δy_t = A_0 + Π y_{t-1} + Σ_{i=1}^{p-1} Γ_i Δy_{t-i} + B x_t + ε_t

Since all terms in this equation except Π y_{t-1} are clearly stationary, if the variables are cointegrated then Π y_{t-1} must be stationary as well. Furthermore, every cointegrating relationship has to appear in Π; indeed, their number is given by the rank of Π. Π can be decomposed as Π = αβ', where the elements of the α matrix are the adjustment coefficients and the β matrix contains the cointegrating vectors. As the interest lies in α and β, the system should be reduced to one containing only them.
To do that, one regresses Δy_t on Δy_{t-1}, ..., Δy_{t-(p-1)} and then y_{t-1} on the same variables. The residuals are denoted R_0t and R_1t, respectively. The regression equation then reduces to

R_0t = α β' R_1t + e_t

This is a multivariate regression problem. Define the matrices of sums of squares and products of R_0t and R_1t:

S_ij = (1/T) Σ_{t=1}^{T} R_it R_jt',   i, j = 0, 1

Johansen (1991) shows that the asymptotic variance of β'R_1t is β'Σ_11 β, the asymptotic variance of R_0t is Σ_00, and the asymptotic covariance matrix of β'R_1t and R_0t is β'Σ_10, where Σ_00, Σ_10 and Σ_11 are the population counterparts of S_00, S_10 and S_11. The procedure is to maximize the likelihood function first with respect to α holding β fixed, and then to maximize with respect to β. For α the result is:

α̂' = (β' S_11 β)^{-1} β' S_10
The conditional maximum of the likelihood function with respect to β is

L(β)^{-2/T} = |S_00 - S_01 β (β' S_11 β)^{-1} β' S_10|

So maximization of the likelihood function with respect to β means minimization of this determinant. Further mathematical manipulation shows that this is equivalent to finding the roots of the characteristic equation:

|λ S_11 - S_10 S_00^{-1} S_01| = 0

The roots of this equation are the squared canonical correlations between R_0t and R_1t. This means that those linear combinations of y_{t-1} are selected that are most highly correlated with linear combinations of Δy_t, after conditioning on the lagged differences Δy_{t-1}, ..., Δy_{t-(p-1)}.
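The characteristic equation above is a generalized eigenvalue problem and can be solved directly. A self-contained numerical sketch, where the residual series are simulated stand-ins rather than output of an actual VAR regression:

```python
import numpy as np

rng = np.random.default_rng(0)
T, M = 500, 3

# Stand-ins for R_0t (residuals of Delta y_t on lagged differences)
# and R_1t (residuals of y_{t-1} on the same regressors).
R0 = rng.standard_normal((T, M))
R1 = rng.standard_normal((T, M)) + 0.5 * R0  # induce some correlation

# Product-moment matrices S_ij = (1/T) sum_t R_it R_jt'
S00 = R0.T @ R0 / T
S11 = R1.T @ R1 / T
S01 = R0.T @ R1 / T
S10 = S01.T

# |lambda S_11 - S_10 S_00^{-1} S_01| = 0  <=>  eigenvalues of
# S_11^{-1} S_10 S_00^{-1} S_01; the roots are the squared
# canonical correlations between R_0t and R_1t.
lam_hat = np.linalg.eigvals(np.linalg.solve(S11, S10 @ np.linalg.inv(S00) @ S01))
lam_hat = np.sort(lam_hat.real)[::-1]
print(lam_hat)  # all roots lie between 0 and 1
```

Because the roots are squared canonical correlations, they always fall in [0, 1), which is what makes the log terms in the test statistics below well defined.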
Denoting the characteristic roots by λ̂_i, the maximized likelihood function is (under the assumption of normally distributed error terms):

L_max^{-2/T} = |S_00| Π_{i=1}^{n} (1 - λ̂_i)

The estimation problem is therefore a canonical correlation analysis of the current Δy_t and the lagged y_{t-1}.
The trace statistic is

λ_trace = -T Σ_{i=r+1}^{n} ln(1 - λ̂_i)

where λ̂_{r+1}, ..., λ̂_n are the smallest characteristic roots. If the statistic exceeds the critical value, the null hypothesis of at most r cointegrating vectors is rejected.

The maximum eigenvalue statistic is

λ_max = -T ln(1 - λ̂_{r+1})

If this statistic exceeds the critical value, the null hypothesis of exactly r cointegrating vectors is rejected. The critical values for both tests are derived from the trace and maximum eigenvalue of the stochastic matrix and depend on whether a trend (linear or quadratic) or a constant is included in the VAR model. Since we are dealing not with stationary variables but with I(1) variables, the test statistics are not χ²-distributed; they follow a different distribution, tabulated by Johansen and Juselius.
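Given estimated eigenvalues, both statistics are one-liners. A sketch with hypothetical eigenvalues and sample size (not taken from the example below):

```python
import numpy as np

# Hypothetical estimated eigenvalues, sorted in decreasing order.
lam_hat = np.array([0.35, 0.15, 0.01])
T = 110  # hypothetical effective sample size

def trace_stat(lam, T, r):
    """Trace statistic for H0: at most r cointegrating vectors."""
    return -T * np.sum(np.log(1.0 - lam[r:]))

def max_eig_stat(lam, T, r):
    """Maximum-eigenvalue statistic for H0: exactly r cointegrating vectors."""
    return -T * np.log(1.0 - lam[r])

for r in range(len(lam_hat)):
    print(r, trace_stat(lam_hat, T, r), max_eig_stat(lam_hat, T, r))
```

Note that the trace statistic for a given r equals the sum of the maximum-eigenvalue statistics for r, r+1, ..., n-1, which is why the two tests can disagree only about where the sequence of rejections stops.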
VI.2 The VAR(p) Model

Consider a VAR of order p with M I(1) variables in levels:

y_t = A_0 + A_1 y_{t-1} + A_2 y_{t-2} + ... + A_p y_{t-p} + B x_t + ε_t

Subtracting y_{t-1} from both sides:

Δy_t = A_0 + (A_1 - I) y_{t-1} + A_2 y_{t-2} + A_3 y_{t-3} + ... + A_p y_{t-p} + B x_t + ε_t

Adding and subtracting (A_1 - I) y_{t-2} on the right-hand side:

Δy_t = A_0 + (A_1 - I) Δy_{t-1} + (A_2 + A_1 - I) y_{t-2} + A_3 y_{t-3} + ... + A_p y_{t-p} + B x_t + ε_t

Adding and subtracting (A_2 + A_1 - I) y_{t-3} in the same way:

Δy_t = A_0 + (A_1 - I) Δy_{t-1} + (A_2 + A_1 - I) Δy_{t-2} + (A_3 + A_2 + A_1 - I) y_{t-3} + ... + A_p y_{t-p} + B x_t + ε_t

Continuing until lag p yields

Δy_t = A_0 + Γ_1 Δy_{t-1} + Γ_2 Δy_{t-2} + ... + Γ_{p-1} Δy_{t-(p-1)} + Γ_p y_{t-p} + B x_t + ε_t

with Γ_i = (A_i + A_{i-1} + ... + A_1 - I) and I the identity matrix, where y_{t-p} is I(1) and Γ_p y_{t-p} is I(0).
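The rewriting above is purely algebraic, so the level form and the error-correction form must produce the same Δy_t. This can be checked numerically for, say, p = 2, with arbitrary hypothetical coefficients:

```python
import numpy as np

rng = np.random.default_rng(1)
M = 3
A0 = rng.standard_normal(M)
A1 = rng.standard_normal((M, M))
A2 = rng.standard_normal((M, M))
y_lag1, y_lag2, eps = (rng.standard_normal(M) for _ in range(3))

# VAR(2) in levels: y_t = A_0 + A_1 y_{t-1} + A_2 y_{t-2} + eps_t
y_t = A0 + A1 @ y_lag1 + A2 @ y_lag2 + eps

# Error-correction form with Gamma_1 = A_1 - I, Gamma_2 = A_2 + A_1 - I:
# Delta y_t = A_0 + Gamma_1 Delta y_{t-1} + Gamma_2 y_{t-2} + eps_t
G1 = A1 - np.eye(M)
G2 = A2 + A1 - np.eye(M)
dy_t = A0 + G1 @ (y_lag1 - y_lag2) + G2 @ y_lag2 + eps

print(np.allclose(y_t - y_lag1, dy_t))  # -> True
```

The two expressions agree for any coefficients, confirming that the error-correction representation is just a reparameterization of the VAR in levels.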
Γ_p forms stationary linear combinations of the nonstationary y, and the rows of Γ_p are the cointegrating vectors for the elements of y:

z_{t-p} := Γ_p y_{t-p} is I(0)

Alternatively, we may rewrite the VAR with the level term at lag one:

Δy_t = A_0 + Π y_{t-1} + Σ_{i=1}^{p-1} Γ_i Δy_{t-i} + B x_t + ε_t

with

Π = Σ_{i=1}^{p} A_i - I   and   Γ_i = - Σ_{j=i+1}^{p} A_j

where y_t is a k-vector of nonstationary I(1) variables, x_t is a d-vector of deterministic variables, and ε_t is a vector of innovations. Note that the Γ_i of this parameterization differ from those of the previous representation.
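The definitions of Π and Γ_i in this parameterization translate directly into code. A sketch for a hypothetical VAR(2) with invented coefficients:

```python
import numpy as np

# Hypothetical VAR(2) coefficient matrices A_1, A_2 for M = 2 variables.
A = [np.array([[0.6, 0.1],
               [0.0, 0.8]]),
     np.array([[0.3, -0.1],
               [0.1, 0.2]])]
M, p = 2, len(A)

# Pi = sum_{i=1}^{p} A_i - I
Pi = sum(A) - np.eye(M)

# Gamma_i = -sum_{j=i+1}^{p} A_j, for i = 1, ..., p-1
Gammas = [-sum(A[j] for j in range(i + 1, p)) for i in range(p - 1)]

print(Pi)
print(Gammas[0])  # for p = 2 this is simply -A_2
```

The rank of Π computed this way is what the Johansen procedure tests; with p = 2 the only lagged-difference coefficient is Γ_1 = -A_2.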
VI.3 Model Specification

EViews considers the following five cases treated by Johansen (1995):

1. The level data y_t have no deterministic trends and the cointegrating equations have no intercepts:
H(r): Π y_{t-1} + B x_t = α β' y_{t-1}

2. The level data y_t have no deterministic trends and the cointegrating equations have intercepts:
H(r): Π y_{t-1} + B x_t = α (β' y_{t-1} + ρ_0)

3. The level data y_t have linear trends but the cointegrating equations have only intercepts:
H(r): Π y_{t-1} + B x_t = α (β' y_{t-1} + ρ_0) + α⊥ γ_0
4. The level data y_t and the cointegrating equations have linear trends:
H(r): Π y_{t-1} + B x_t = α (β' y_{t-1} + ρ_0 + ρ_1 t) + α⊥ γ_0

5. The level data y_t have quadratic trends and the cointegrating equations have linear trends:
H(r): Π y_{t-1} + B x_t = α (β' y_{t-1} + ρ_0 + ρ_1 t) + α⊥ (γ_0 + γ_1 t)

The terms associated with α⊥ are the deterministic terms outside the cointegrating relations. When a deterministic term appears both inside and outside the cointegrating relation, the decomposition is not uniquely identified. Johansen (1995) identifies the part that belongs inside the error correction term by orthogonally projecting the exogenous terms onto the α⊥ space, where α⊥ is the orthogonal complement of α such that α' α⊥ = 0. EViews uses a different identification method, such that the error correction term has a sample mean of zero: the part inside the error correction term is identified by regressing the cointegrating relations β' y_t on a constant (and linear trend).
VI.4 Testing the Rank of Cointegration: An Example

a) The Choice of the Optimal Lag Length

[Lag-order selection table with columns Lag, LogL, LR, FPE, AIC, SC, HQ; the numerical entries were garbled in transcription and are omitted.]

* indicates lag order selected by the criterion
LR: sequential modified LR test statistic (each test at 5% level)
FPE: Final prediction error
AIC: Akaike information criterion
SC: Schwarz information criterion
HQ: Hannan-Quinn information criterion
b) Trace statistics

Unrestricted Cointegration Rank Test (Trace)

Hypothesized No. of CE(s) | Eigenvalue | Trace Statistic | 0.05 Critical Value | Prob.**
None *                    | ...        | ...             | ...                 | ...
At most 1 *               | ...        | ...             | ...                 | ...
At most 2                 | ...        | ...             | ...                 | ...

[The numerical entries were garbled in transcription; as discussed below, the trace statistic for r = 0 is 48.75 against a critical value of 29.79, and for r ≤ 1 it is 15.91 against 15.49.]

Trace test indicates 2 cointegrating eqn(s) at the 0.05 level
* denotes rejection of the hypothesis at the 0.05 level
** MacKinnon-Haug-Michelis (1999) p-values
This portion of the output tells you whether there is cointegration and the number of cointegrating vectors. Here one cannot reject the null of at most two cointegrating vectors using the trace test. We saw in class the differences between the trace and maximal eigenvalue tests; the latter can be evaluated from the column of eigenvalues provided. The trace statistic reported in the first block tests the null hypothesis of r cointegrating relations against the alternative of k cointegrating relations, where k is the number of endogenous variables. We can see from the eigenvalue column that the first two eigenvalues are much larger than the last one, which lies near zero. This suggests that there exist two cointegrating relations. The null hypotheses r = 0 and r ≤ 1 can clearly be rejected: the calculated test value of 48.75 exceeds the critical value of 29.79, and the second test value of 15.91 is also higher than its critical value of 15.49.
c) Maximum eigenvalue statistics

Unrestricted Cointegration Rank Test (Maximum Eigenvalue)

Hypothesized No. of CE(s) | Eigenvalue | Max-Eigen Statistic | 0.05 Critical Value | Prob.**
None *                    | ...        | ...                 | ...                 | ...
At most 1 *               | ...        | ...                 | ...                 | ...
At most 2                 | ...        | ...                 | ...                 | ...

[The numerical entries were garbled in transcription and are omitted.]

Max-eigenvalue test indicates 2 cointegrating eqn(s) at the 0.05 level
* denotes rejection of the hypothesis at the 0.05 level
** MacKinnon-Haug-Michelis (1999) p-values