2.5 Variance decomposition and innovation accounting

Consider the VAR(p) model
$$
\Phi(L) y_t = \epsilon_t,
$$
where
$$
\Phi(L) = I_m - \Phi_1 L - \Phi_2 L^2 - \cdots - \Phi_p L^p
$$
is the lag polynomial of order $p$ with $m \times m$ coefficient matrices $\Phi_i$, $i = 1, \dots, p$. Provided that the stationarity condition holds, we may obtain a vector MA representation of $y_t$ by left multiplication with $\Phi^{-1}(L)$ as
$$
y_t = \Phi^{-1}(L)\,\epsilon_t = \Psi(L)\,\epsilon_t,
$$
where
$$
\Phi^{-1}(L) = \Psi(L) = I_m + \Psi_1 L + \Psi_2 L^2 + \cdots
$$

The $m \times m$ coefficient matrices $\Psi_1, \Psi_2, \dots$ may be obtained from the identity
$$
\Phi(L)\Psi(L) = \Big(I_m - \sum_{i=1}^{p} \Phi_i L^i\Big)\Big(I_m + \sum_{i=1}^{\infty} \Psi_i L^i\Big) = I_m
$$
by multiplying out and setting the resulting coefficient matrix for each power of $L$ equal to zero. This yields
$$
\Psi_j = \sum_{i=1}^{j} \Psi_{j-i}\,\Phi_i,
$$
with $\Psi_0 = I_m$ and $\Phi_i = 0$ when $i > p$.

For example, start with $L^1$: the coefficient is $\Psi_1 - \Phi_1 = 0$, so $\Psi_1 = \Phi_1$. Consider next $L^2$: the coefficient is $\Psi_2 - \Psi_1\Phi_1 - \Phi_2 = 0$, so
$$
\Psi_2 = \Psi_1\Phi_1 + \Phi_2 = \sum_{i=1}^{2} \Psi_{2-i}\,\Phi_i.
$$
The result generalizes to any power $L^j$, which yields the transformation formula given above.
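
As a quick numerical illustration of this recursion, here is a minimal numpy sketch; the VAR(2) coefficient matrices are hypothetical and chosen only for the example.

```python
import numpy as np

def ma_matrices(phis, horizon):
    """MA matrices Psi_0,...,Psi_horizon from the recursion
    Psi_j = sum_{i=1}^{j} Psi_{j-i} Phi_i, with Psi_0 = I and Phi_i = 0 for i > p."""
    m = phis[0].shape[0]
    psis = [np.eye(m)]
    for j in range(1, horizon + 1):
        psi_j = np.zeros((m, m))
        for i in range(1, min(j, len(phis)) + 1):
            psi_j += psis[j - i] @ phis[i - 1]
        psis.append(psi_j)
    return psis

# Hypothetical bivariate VAR(2) coefficient matrices (illustration only)
phi1 = np.array([[0.5, 0.1],
                 [0.2, 0.3]])
phi2 = np.array([[0.1, 0.0],
                 [0.0, 0.1]])
psis = ma_matrices([phi1, phi2], horizon=4)
print(np.allclose(psis[1], phi1))                    # Psi_1 = Phi_1
print(np.allclose(psis[2], psis[1] @ phi1 + phi2))   # Psi_2 = Psi_1 Phi_1 + Phi_2
```

The $ij$-th elements of these matrices are exactly the impulse response coefficients discussed next.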

Now, since
$$
y_{t+s} = \Psi(L)\epsilon_{t+s} = \epsilon_{t+s} + \sum_{i=1}^{\infty} \Psi_i\,\epsilon_{t+s-i},
$$
we have that the effect of a unit change in $\epsilon_t$ on $y_{t+s}$ is
$$
\frac{\partial y_{t+s}}{\partial \epsilon_t'} = \Psi_s.
$$
Now the $\epsilon_t$'s represent shocks in the system. Therefore the $\Psi_i$ matrices represent the model's response to a unit shock (or innovation) at time point $t$ in each of the variables, $i$ periods ahead. Economists call such parameters dynamic multipliers.

The response of $y_i$ to a unit shock in $y_j$ is therefore given by the sequence below, known as the impulse response function,
$$
\psi_{ij,0},\ \psi_{ij,1},\ \psi_{ij,2},\ \dots,
$$
where $\psi_{ij,k}$ is the $ij$-th element of the matrix $\Psi_k$ ($i, j = 1, \dots, m$).

For example, if we were told that the first element in $\epsilon_t$ changes by $\delta_1$ at the same time that the second element changed by $\delta_2$, ..., and the $m$-th element by $\delta_m$, then the combined effect of these changes on the value of the vector $y_{t+s}$ would be given by
$$
\Delta y_{t+s} = \frac{\partial y_{t+s}}{\partial \epsilon_{1t}}\,\delta_1 + \cdots + \frac{\partial y_{t+s}}{\partial \epsilon_{mt}}\,\delta_m = \Psi_s\,\delta,
$$
where $\delta = (\delta_1, \dots, \delta_m)'$. Generally, an impulse response function traces the effect of a one-time shock to one of the innovations on current and future values of the endogenous variables.

Example: Exogeneity in MA representation

Suppose we have a bivariate VAR system such that $x_t$ does not Granger-cause $y_t$. Then we can write
$$
\begin{pmatrix} y_t \\ x_t \end{pmatrix}
= \begin{pmatrix} \phi_{11}^{(1)} & 0 \\ \phi_{21}^{(1)} & \phi_{22}^{(1)} \end{pmatrix}
  \begin{pmatrix} y_{t-1} \\ x_{t-1} \end{pmatrix}
+ \cdots
+ \begin{pmatrix} \phi_{11}^{(p)} & 0 \\ \phi_{21}^{(p)} & \phi_{22}^{(p)} \end{pmatrix}
  \begin{pmatrix} y_{t-p} \\ x_{t-p} \end{pmatrix}
+ \begin{pmatrix} \epsilon_{1,t} \\ \epsilon_{2,t} \end{pmatrix},
$$
that is, the AR coefficient matrices are lower triangular. Then the coefficient matrices $\Psi_j = \sum_{i=1}^{j} \Psi_{j-i}\Phi_i$ in the corresponding MA representation are lower triangular as well (exercise):
$$
\begin{pmatrix} y_t \\ x_t \end{pmatrix}
= \begin{pmatrix} \epsilon_{1,t} \\ \epsilon_{2,t} \end{pmatrix}
+ \sum_{i=1}^{\infty}
  \begin{pmatrix} \psi_{11}^{(i)} & 0 \\ \psi_{21}^{(i)} & \psi_{22}^{(i)} \end{pmatrix}
  \begin{pmatrix} \epsilon_{1,t-i} \\ \epsilon_{2,t-i} \end{pmatrix}.
$$
Hence, we see that variable $y$ does not react to a shock in $x$.
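
Using the same recursion as above, this triangularity claim is easy to check numerically; a minimal sketch with hypothetical lower-triangular AR matrices:

```python
import numpy as np

# Hypothetical lower-triangular AR matrices of a bivariate VAR(2) (illustration only)
phi1 = np.array([[0.5, 0.0],
                 [0.3, 0.4]])
phi2 = np.array([[0.2, 0.0],
                 [0.1, 0.1]])
phis = [phi1, phi2]

psis = [np.eye(2)]
for j in range(1, 8):
    psi_j = np.zeros((2, 2))
    for i in range(1, min(j, len(phis)) + 1):
        psi_j += psis[j - i] @ phis[i - 1]
    psis.append(psi_j)

# The upper-right element of every Psi_j stays zero: y never reacts to shocks in x
print(all(abs(p[0, 1]) < 1e-12 for p in psis))   # True
```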

Ambiguity of impulse response functions

Consider a bivariate VAR model in vector MA representation, that is, $y_t = \Psi(L)\epsilon_t$ with $E(\epsilon_t\epsilon_t') = \Sigma$, where $\Psi(L)$ gives the response of $y_t = (y_{1t}, y_{2t})'$ to both elements of $\epsilon_t$, that is, $\epsilon_{1t}$ and $\epsilon_{2t}$. Just as well we might be interested in evaluating responses of $y_t$ to linear combinations of $\epsilon_{1t}$ and $\epsilon_{2t}$, for example to unit movements in $\epsilon_{1t}$ and $\epsilon_{2t} + 0.5\,\epsilon_{1t}$. This may be done by defining new shocks $\nu_{1t} = \epsilon_{1t}$ and $\nu_{2t} = \epsilon_{2t} + 0.5\,\epsilon_{1t}$, or in matrix notation $\nu_t = Q\epsilon_t$ with
$$
Q = \begin{pmatrix} 1 & 0 \\ 0.5 & 1 \end{pmatrix}.
$$

The vector MA representation of our VAR in terms of the new shocks then becomes
$$
y_t = \Psi(L)\epsilon_t = \Psi(L)Q^{-1}Q\epsilon_t =: \Psi^*(L)\nu_t
$$
with
$$
\Psi^*(L) = \Psi(L)Q^{-1}.
$$
Note that both representations are observationally equivalent (they produce the same $y_t$), but yield different impulse response functions. In particular,
$$
\Psi^*_0 = \Psi_0 Q^{-1} = I\,Q^{-1} = Q^{-1},
$$
which implies that single component shocks may now have contemporaneous effects on more than one component of $y_t$. Also the covariance matrix of the residuals will change, since
$$
E(\nu_t\nu_t') = E(Q\epsilon_t\epsilon_t'Q') = Q\Sigma Q' \neq \Sigma \quad \text{unless } Q = I.
$$
But the fact that both representations are observationally equivalent implies that it is our own choice which linear combination of the $\epsilon_{ti}$'s we find most useful to look at in the response analysis!
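
A minimal numpy sketch of these two facts, using a hypothetical $\Sigma$, $Q$ and $\Psi_1$: the contribution of $\Psi_1\epsilon_{t-1}$ to $y_t$ is unchanged when it is rewritten as $\Psi^*_1\nu_{t-1}$, while the shock covariance becomes $Q\Sigma Q'$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical residual covariance, transformation Q, and MA matrix Psi_1 (illustration only)
sigma = np.array([[1.0, 0.6],
                  [0.6, 2.0]])
Q = np.array([[1.0, 0.0],
              [0.5, 1.0]])
Qinv = np.linalg.inv(Q)
psi1 = np.array([[0.5, 0.1],
                 [0.2, 0.3]])

eps = rng.multivariate_normal(np.zeros(2), sigma, size=100_000)  # draws of eps_t (rows)
nu = eps @ Q.T                                                   # nu_t = Q eps_t
psi1_star = psi1 @ Qinv                                          # Psi*_1 = Psi_1 Q^{-1}

# Observational equivalence: Psi_1 eps_t equals Psi*_1 nu_t for every draw
print(np.allclose(eps @ psi1.T, nu @ psi1_star.T))               # True

# The shock covariance changes from sigma to Q sigma Q'
print(np.round(np.cov(nu, rowvar=False), 2))
print(Q @ sigma @ Q.T)
```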

Orthogonalized impulse response functions

Usually the components of $\epsilon_t$ are contemporaneously correlated. For example, in our VAR model of the equity-bond data the contemporaneous residual correlations are:

[Table: contemporaneous residual correlations among FTA, DIV, R and TBILL; numerical values not reproduced here.]

If the correlations are high, it doesn't make much sense to ask "what if $\epsilon_{1t}$ has a unit impulse" with no change in $\epsilon_{2t}$, since both usually come at the same time. For impulse response analysis it is therefore desirable to express the VAR in such a way that the shocks become orthogonal (that is, the $\epsilon_{ti}$'s are uncorrelated). Additionally it is convenient to rescale the shocks so that they have a unit variance.

So we want to pick a $Q$ such that $E(\nu_t\nu_t') = I$. This may be accomplished by choosing $Q$ such that $Q\Sigma Q' = I$, since then
$$
E(\nu_t\nu_t') = E(Q\epsilon_t\epsilon_t'Q') = Q\Sigma Q' = I.
$$
Unfortunately there are many different $Q$'s whose inverses $S = Q^{-1}$ act as "square roots" of $\Sigma$, that is, $SS' = \Sigma$. This may be seen as follows. Choose any orthogonal matrix $R$ (that is, $RR' = I$) and set $\tilde{S} = SR$. We have then
$$
\tilde{S}\tilde{S}' = SRR'S' = SS' = \Sigma.
$$
Which of the many possible $S$'s, respectively $Q$'s, should we choose?
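
A small sketch of this non-uniqueness with a hypothetical covariance matrix: the Cholesky factor and any orthogonal rotation of it are equally valid square roots of $\Sigma$.

```python
import numpy as np

# Hypothetical covariance matrix (illustration only)
sigma = np.array([[1.0, 0.6],
                  [0.6, 2.0]])

S = np.linalg.cholesky(sigma)        # one particular "square root": S S' = sigma

# Any orthogonal R gives another square root S~ = S R
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
S_tilde = S @ R

print(np.allclose(S @ S.T, sigma))              # True
print(np.allclose(S_tilde @ S_tilde.T, sigma))  # True, although S_tilde differs from S
```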

Before turning to a clever choice of $Q$ (resp. $S$), let us briefly restate our results obtained so far in terms of $S = Q^{-1}$. If we find a matrix $S$ such that $SS' = \Sigma$, and transform our VAR residuals such that $\nu_t = S^{-1}\epsilon_t$, then we obtain an observationally equivalent VAR where the shocks are orthogonal (i.e. uncorrelated with unit variance), that is,
$$
E(\nu_t\nu_t') = S^{-1}E(\epsilon_t\epsilon_t')S'^{-1} = S^{-1}\Sigma S'^{-1} = I.
$$
The new vector MA representation becomes
$$
y_t = \Psi^*(L)\nu_t = \sum_{i=0}^{\infty} \Psi^*_i\,\nu_{t-i},
$$
where $\Psi^*_i = \Psi_i S$ ($m \times m$ matrices), so that $\Psi^*_0 = \Psi_0 S = S$, since $\Psi_0 = I_m$. The impulse response function of $y_i$ to a unit shock in $y_j$ is then given by the orthogonalized impulse response function
$$
\psi^*_{ij,0},\ \psi^*_{ij,1},\ \psi^*_{ij,2},\ \dots
$$

Cholesky Decomposition and Ordering of Variables

Note that every orthogonalization of correlated shocks in the original VAR leads to contemporaneous effects of single component shocks $\nu_{ti}$ on more than one component of $y_t$, since $\Psi^*_0 = S$ will not be diagonal unless $\Sigma$ was diagonal already. One generally used method for choosing $S$ is the Cholesky decomposition, which results in a lower triangular matrix with positive main diagonal elements for $\Psi^*_0$, e.g.
$$
\begin{pmatrix} y_{1,t} \\ y_{2,t} \end{pmatrix}
= \begin{pmatrix} \psi^{*(0)}_{11} & 0 \\ \psi^{*(0)}_{21} & \psi^{*(0)}_{22} \end{pmatrix}
  \begin{pmatrix} \nu_{1,t} \\ \nu_{2,t} \end{pmatrix}
+ \Psi^*_1\nu_{t-1} + \dots
$$
Hence the Cholesky decomposition of $\Sigma$ implies that the second shock $\nu_{2,t}$ does not affect the first variable $y_{1,t}$ contemporaneously, but both shocks can have a contemporaneous effect on $y_{2,t}$ (and on all following variables, if we had chosen an example with more than two components). Hence the ordering of variables is important!
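
The ordering dependence can be illustrated with a hypothetical covariance matrix: Cholesky factors computed under the two possible orderings of a bivariate system give different contemporaneous impact matrices $\Psi^*_0$, although both are valid square roots of $\Sigma$.

```python
import numpy as np

# Hypothetical residual covariance of a bivariate system (illustration only)
sigma = np.array([[1.0, 0.6],
                  [0.6, 2.0]])

# Ordering (y1, y2): lower triangular S, so shock 2 has no contemporaneous effect on y1
S_12 = np.linalg.cholesky(sigma)

# Reversed ordering (y2, y1): permute, factor, and map back to the original variable order
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])
S_21 = P @ np.linalg.cholesky(P @ sigma @ P.T) @ P

print(np.allclose(S_12 @ S_12.T, sigma))   # True
print(np.allclose(S_21 @ S_21.T, sigma))   # True: also a valid square root of sigma
print(S_12)                                # contemporaneous responses Psi*_0 under ordering (y1, y2)
print(S_21)                                # different Psi*_0 under ordering (y2, y1)
```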

It is recommended to try various orderings to see whether the resulting interpretations are consistent. The principle is that the first variable should be selected such that it is the only one with potential immediate impact on all other variables. The second variable may have an immediate impact on the last $m-1$ components of $y_t$, but not on $y_{1t}$, the first component, and so on. Of course this is usually a difficult task in practice.

Variance decomposition

The uncorrelatedness of the $\nu_t$'s allows us to decompose the error variance of the $s$ step-ahead forecast of $y_{it}$ into components accounted for by these shocks, or innovations (this is why this technique is usually called innovation accounting). Because the innovations have unit variances (besides being uncorrelated), the component of this error variance accounted for by innovations to $y_j$ is given by
$$
\sum_{l=0}^{s-1} \big(\psi^{*(l)}_{ij}\big)^2,
$$
as we shall see below.

Consider an orthogonalized VAR with $m$ components in vector MA representation,
$$
y_t = \sum_{l=0}^{\infty} \Psi^*_l\,\nu_{t-l}.
$$
The $s$ step-ahead forecast for $y_t$ is then
$$
E_t(y_{t+s}) = \sum_{l=s}^{\infty} \Psi^*_l\,\nu_{t+s-l}.
$$
Defining the $s$ step-ahead forecast error as
$$
e_{t+s} = y_{t+s} - E_t(y_{t+s}),
$$
we get
$$
e_{t+s} = \sum_{l=0}^{s-1} \Psi^*_l\,\nu_{t+s-l}.
$$
Its $i$-th component is given by
$$
e_{i,t+s} = \sum_{l=0}^{s-1}\sum_{j=1}^{m} \psi^{*(l)}_{ij}\,\nu_{j,t+s-l}
          = \sum_{j=1}^{m}\sum_{l=0}^{s-1} \psi^{*(l)}_{ij}\,\nu_{j,t+s-l}.
$$

Now, because the shocks are both serially and contemporaneously uncorrelated, we get for the error variance
$$
V(e_{i,t+s}) = \sum_{j=1}^{m}\sum_{l=0}^{s-1} V\big(\psi^{*(l)}_{ij}\,\nu_{j,t+s-l}\big)
             = \sum_{j=1}^{m}\sum_{l=0}^{s-1} \big(\psi^{*(l)}_{ij}\big)^2\,V(\nu_{j,t+s-l}).
$$
Now, recalling that all shock components have unit variance, this implies that
$$
V(e_{i,t+s}) = \sum_{j=1}^{m}\sum_{l=0}^{s-1} \big(\psi^{*(l)}_{ij}\big)^2,
$$
where $\sum_{l=0}^{s-1} \big(\psi^{*(l)}_{ij}\big)^2$ accounts for the error variance generated by innovations to $y_j$, as claimed. Comparing this to the total error variance, we get a relative measure of how important variable $j$'s innovations are in explaining the variation in variable $i$ at different step-ahead forecasts, i.e.,
$$
R^2_{ij,s} = \frac{\sum_{l=0}^{s-1} \big(\psi^{*(l)}_{ij}\big)^2}
                  {\sum_{k=1}^{m}\sum_{l=0}^{s-1} \big(\psi^{*(l)}_{ik}\big)^2}.
$$
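
A minimal sketch of this computation: given the orthogonalized MA matrices $\Psi^*_0, \Psi^*_1, \dots$ (the values below are hypothetical), the decomposition shares are obtained by summing squared elements over horizons and normalizing each row by the total error variance.

```python
import numpy as np

def fevd(psi_stars, s):
    """Forecast error variance decomposition at horizon s.

    psi_stars : list of orthogonalized MA matrices Psi*_0, Psi*_1, ...
    Returns an (m x m) matrix whose (i, j) entry is the share of the s-step
    forecast error variance of variable i accounted for by orthogonal
    innovations to variable j.
    """
    acc = sum(p ** 2 for p in psi_stars[:s])        # elementwise squares, summed over l = 0..s-1
    return acc / acc.sum(axis=1, keepdims=True)     # normalize each row by the total error variance

# Hypothetical orthogonalized MA matrices (illustration only)
psi0 = np.array([[1.0, 0.0],
                 [0.4, 1.2]])
psi1 = np.array([[0.5, 0.1],
                 [0.2, 0.3]])
shares = fevd([psi0, psi1], s=2)
print(shares)              # decomposition shares at horizon 2
print(shares.sum(axis=1))  # each row sums to one
```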

Thus, while impulse response functions trace the effects of a shock to one endogenous variable onto the other variables in the VAR, variance decomposition separates the variation in an endogenous variable into the component shocks to the VAR.

Example. Let us choose in our example two orderings: one according to the feedback analysis (I: FTA, DIV, R, TBILL), and an ordering based upon the relative timing of the trading hours of the markets (II: TBILL, R, DIV, FTA). In EViews the order is simply defined in the Cholesky ordering option. Below are results in graphs with I: FTA, DIV, R, TBILL; II: TBILL, R, DIV, FTA; and III: the generalized impulse response function.

Impulse responses, ordering I {FTA, DIV, R, TBILL}:
[Figure: Response to Cholesky one S.D. innovations with standard error bands; 4 x 4 panel of responses of DFTA, DDIV, DR and DTBILL to innovations in DFTA, DDIV, DR and DTBILL.]

Impulse responses, ordering II {TBILL, R, DIV, FTA}:
[Figure: Response to Cholesky one S.D. innovations with standard error bands; the same 4 x 4 panel of responses under the reversed ordering.]

Impulse responses continued, III:
[Figure: Response to generalized one S.D. innovations with standard error bands; 4 x 4 panel of responses of DFTA, DDIV, DR and DTBILL to innovations in DFTA, DDIV, DR and DTBILL.]

The generalized impulse response functions are defined as
$$
GI(j, \delta_i, F_{t-1}) = E[\,y_{t+j} \mid \epsilon_{it} = \delta_i, F_{t-1}\,] - E[\,y_{t+j} \mid F_{t-1}\,],
$$
that is, the difference of conditional expectations given that a one-time shock occurs in series $i$. These coincide with the orthogonalized impulse responses if the residual covariance matrix is diagonal.

Pesaran, M. Hashem and Yongcheol Shin (1998). Generalized Impulse Response Analysis in Linear Multivariate Models, Economics Letters, 58, 17-29.
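
For a one-standard-deviation shock, the generalized responses can be computed from the closed-form expression $\sigma_{jj}^{-1/2}\,\Psi_h\Sigma e_j$ given by Pesaran and Shin (1998); a minimal sketch with hypothetical MA matrices and residual covariance:

```python
import numpy as np

def generalized_irf(psis, sigma, j):
    """Generalized impulse responses of all variables to a one-standard-deviation
    shock in variable j, using the closed form Psi_h Sigma e_j / sqrt(sigma_jj)."""
    e_j = np.zeros(sigma.shape[0])
    e_j[j] = 1.0
    scale = 1.0 / np.sqrt(sigma[j, j])
    return [scale * (psi @ sigma @ e_j) for psi in psis]

# Hypothetical MA matrices and residual covariance (illustration only)
psis = [np.eye(2),
        np.array([[0.5, 0.1],
                  [0.2, 0.3]])]
sigma = np.array([[1.0, 0.6],
                  [0.6, 2.0]])
print(generalized_irf(psis, sigma, j=0))   # responses at horizons 0 and 1
```

Unlike the Cholesky-based responses, these do not depend on the ordering of the variables.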

Variance decomposition graphs of the equity-bond data:
[Figure: Variance decomposition with standard error bands; 4 x 4 panel showing the percentage of the forecast error variance of DFTA, DDIV, DR and DTBILL due to innovations in DFTA, DDIV, DR and DTBILL.]

On estimation of the impulse response coefficients

Consider the VAR(p) model
$$
\Phi(L)y_t = \epsilon_t, \quad \text{with} \quad \Phi(L) = I_m - \Phi_1 L - \cdots - \Phi_p L^p.
$$
Then under stationarity the vector MA representation is
$$
y_t = \epsilon_t + \Psi_1\epsilon_{t-1} + \Psi_2\epsilon_{t-2} + \cdots
$$
When we have estimates of the AR matrices $\Phi_i$, denoted by $\hat\Phi_i$, $i = 1, \dots, p$, the next problem is to construct estimates $\hat\Psi_j$ for the MA matrices $\Psi_j$. Recall that
$$
\Psi_j = \sum_{i=1}^{j} \Psi_{j-i}\,\Phi_i,
$$
with $\Psi_0 = I_m$ and $\Phi_i = 0$ when $i > p$. The estimates $\hat\Psi_j$ can be obtained by replacing the $\Phi_i$'s by their corresponding estimates $\hat\Phi_i$.

Next we have to obtain the orthogonalized impulse response coefficients. This can be done easily, for letting $S$ be the Cholesky decomposition of $\Sigma$ such that $\Sigma = SS'$, we can write
$$
y_t = \sum_{i=0}^{\infty} \Psi_i\epsilon_{t-i}
    = \sum_{i=0}^{\infty} \Psi_i S S^{-1}\epsilon_{t-i}
    = \sum_{i=0}^{\infty} \Psi^*_i\nu_{t-i},
$$
where $\Psi^*_i = \Psi_i S$ and $\nu_t = S^{-1}\epsilon_t$. Then
$$
\mathrm{Cov}(\nu_t) = S^{-1}\Sigma S'^{-1} = I.
$$
The estimates for $\Psi^*_i$ are obtained by replacing the $\Psi_i$ with their estimates $\hat\Psi_i$ and using the Cholesky decomposition of $\hat\Sigma$.
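
A minimal end-to-end sketch of this procedure on simulated data (plain numpy; the true parameters are hypothetical): estimate the AR matrix by OLS, build the $\hat\Psi_j$ recursively, and multiply by the Cholesky factor of $\hat\Sigma$. Dedicated packages (e.g. the VAR class in statsmodels) implement the same steps.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a bivariate VAR(1) with hypothetical true parameters (illustration only)
phi1_true = np.array([[0.5, 0.1],
                      [0.2, 0.3]])
sigma_true = np.array([[1.0, 0.4],
                       [0.4, 1.5]])
T, m, p = 500, 2, 1
chol_true = np.linalg.cholesky(sigma_true)
y = np.zeros((T, m))
for t in range(1, T):
    y[t] = y[t - 1] @ phi1_true.T + chol_true @ rng.standard_normal(m)

# OLS estimation of Phi_1 (no intercept, for brevity)
X, Y = y[:-1], y[1:]
phi1_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T
resid = Y - X @ phi1_hat.T
sigma_hat = resid.T @ resid / (len(Y) - m * p)

# Estimated MA matrices and orthogonalized impulse responses
psis_hat = [np.eye(m)]
for j in range(1, 11):
    psis_hat.append(psis_hat[j - 1] @ phi1_hat)   # VAR(1): Psi_j = Psi_{j-1} Phi_1
S_hat = np.linalg.cholesky(sigma_hat)
orth_irfs = [psi @ S_hat for psi in psis_hat]     # Psi*_j = Psi_j S
print(orth_irfs[0])                               # estimated contemporaneous responses Psi*_0
```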
