2.5 Variance decomposition and innovation accounting


Consider the VAR(p) model
$$\Phi(L) y_t = \epsilon_t,$$
where
$$\Phi(L) = I_m - \Phi_1 L - \Phi_2 L^2 - \cdots - \Phi_p L^p$$
is the lag polynomial of order p with $m \times m$ coefficient matrices $\Phi_i$, $i = 1, \ldots, p$. Provided that the stationarity condition holds, we may obtain a vector MA representation of $y_t$ by left multiplication with $\Phi^{-1}(L)$ as
$$y_t = \Phi^{-1}(L)\epsilon_t = \Psi(L)\epsilon_t,$$
where
$$\Phi^{-1}(L) = \Psi(L) = I_m + \Psi_1 L + \Psi_2 L^2 + \cdots.$$

The $m \times m$ coefficient matrices $\Psi_1, \Psi_2, \ldots$ may be obtained from the identity
$$\Phi(L)\Psi(L) = \Big(I_m - \sum_{i=1}^{p} \Phi_i L^i\Big)\Big(I_m + \sum_{i=1}^{\infty} \Psi_i L^i\Big) = I_m$$
as
$$\Psi_j = \sum_{i=1}^{j} \Psi_{j-i}\Phi_i,$$
with $\Psi_0 = I_m$ and $\Phi_i = 0$ when $i > p$, by multiplying out and setting the resulting coefficient matrix for each power of $L$ equal to zero. For example, start with $L^1 = L$:
$$-\Phi_1 L + \Psi_1 L = (\Psi_1 - \Phi_1)L = 0 \quad\Longrightarrow\quad \Psi_1 = \Phi_1 = \sum_{i=1}^{1} \Psi_{1-i}\Phi_i.$$
Consider next $L^2$:
$$\Psi_2 L^2 - \Psi_1\Phi_1 L^2 - \Phi_2 L^2 = 0 \quad\Longrightarrow\quad \Psi_2 = \Psi_1\Phi_1 + \Phi_2 = \sum_{i=1}^{2} \Psi_{2-i}\Phi_i.$$
The result generalizes to any power $L^j$, which yields the transformation formula given above.
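
A minimal numpy sketch of this recursion; the bivariate VAR(1) coefficient matrix below is a made-up illustration, not taken from the notes.

    import numpy as np

    def ma_coefficients(Phi, horizon):
        """MA matrices Psi_0, ..., Psi_horizon from the VAR matrices Phi_1, ..., Phi_p
        via Psi_j = sum_{i=1}^{j} Psi_{j-i} Phi_i (Psi_0 = I, Phi_i = 0 for i > p)."""
        p, m = len(Phi), Phi[0].shape[0]
        Psi = [np.eye(m)]
        for j in range(1, horizon + 1):
            Psi.append(sum(Psi[j - i] @ Phi[i - 1] for i in range(1, min(j, p) + 1)))
        return Psi

    # Hypothetical bivariate VAR(1), used purely for illustration
    Phi = [np.array([[0.5, 0.1],
                     [0.2, 0.3]])]
    Psi = ma_coefficients(Phi, horizon=4)   # Psi[s] is the dynamic multiplier at lag s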

Now, since
$$y_{t+s} = \Psi(L)\epsilon_{t+s} = \epsilon_{t+s} + \sum_{i=1}^{\infty} \Psi_i \epsilon_{t+s-i},$$
we have that the effect of a unit change in $\epsilon_t$ on $y_{t+s}$ is
$$\frac{\partial y_{t+s}}{\partial \epsilon_t'} = \Psi_s.$$
Now the $\epsilon_t$'s represent shocks in the system. Therefore the $\Psi_i$ matrices represent the model's response to a unit shock (or innovation) at time point $t$ in each of the variables $i$ periods ahead. Economists call such parameters dynamic multipliers. The response of $y_i$ to a unit shock in $y_j$ is therefore given by the sequence below, known as the impulse response function,
$$\psi_{ij,0},\ \psi_{ij,1},\ \psi_{ij,2},\ \ldots,$$
where $\psi_{ij,k}$ is the $ij$-th element of the matrix $\Psi_k$ ($i, j = 1, \ldots, m$).
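
Continuing the numpy sketch above, the sequence $\psi_{ij,0}, \psi_{ij,1}, \ldots$ is simply the $(i, j)$ entry of each $\Psi_k$:

    # Response of variable i to a unit shock in variable j (0-based indices here)
    def impulse_response(Psi, i, j):
        return np.array([Psi_k[i, j] for Psi_k in Psi])

    irf_12 = impulse_response(Psi, 0, 1)   # response of the first variable to a shock in the second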

For example, if we were told that the first element of $\epsilon_t$ changes by $\delta_1$ at the same time that the second element changed by $\delta_2, \ldots,$ and the $m$-th element by $\delta_m$, then the combined effect of these changes on the value of the vector $y_{t+s}$ would be given by
$$\Delta y_{t+s} = \frac{\partial y_{t+s}}{\partial \epsilon_{1t}}\delta_1 + \cdots + \frac{\partial y_{t+s}}{\partial \epsilon_{mt}}\delta_m = \Psi_s\,\delta,$$
where $\delta = (\delta_1, \ldots, \delta_m)'$. Generally, an impulse response function traces the effect of a one-time shock to one of the innovations on current and future values of the endogenous variables.

Example: Exogeneity in the MA representation. Suppose we have a bivariate VAR system such that $x_t$ does not Granger cause $y_t$. Then we can write
$$\begin{pmatrix} y_t \\ x_t \end{pmatrix} = \begin{pmatrix} \phi_{11}^{(1)} & 0 \\ \phi_{21}^{(1)} & \phi_{22}^{(1)} \end{pmatrix}\begin{pmatrix} y_{t-1} \\ x_{t-1} \end{pmatrix} + \cdots + \begin{pmatrix} \phi_{11}^{(p)} & 0 \\ \phi_{21}^{(p)} & \phi_{22}^{(p)} \end{pmatrix}\begin{pmatrix} y_{t-p} \\ x_{t-p} \end{pmatrix} + \begin{pmatrix} \epsilon_{1,t} \\ \epsilon_{2,t} \end{pmatrix}.$$
Then the coefficient matrices $\Psi_j = \sum_{i=1}^{j}\Psi_{j-i}\Phi_i$ in the corresponding MA representation are lower triangular as well (exercise):
$$\begin{pmatrix} y_t \\ x_t \end{pmatrix} = \begin{pmatrix} \epsilon_{1,t} \\ \epsilon_{2,t} \end{pmatrix} + \sum_{i=1}^{\infty}\begin{pmatrix} \psi_{11}^{(i)} & 0 \\ \psi_{21}^{(i)} & \psi_{22}^{(i)} \end{pmatrix}\begin{pmatrix} \epsilon_{1,t-i} \\ \epsilon_{2,t-i} \end{pmatrix}.$$
Hence, we see that variable $y$ does not react to a shock in $x$.
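
A quick numerical check of the exercise, reusing the ma_coefficients sketch above with made-up lower-triangular coefficient matrices:

    # If every Phi_i is lower triangular, the Psi_j stay lower triangular too.
    Phi_lt = [np.array([[0.4, 0.0],
                        [0.3, 0.2]]),
              np.array([[0.1, 0.0],
                        [0.1, 0.1]])]          # hypothetical lower-triangular VAR(2)
    Psi_lt = ma_coefficients(Phi_lt, horizon=6)
    assert all(abs(P[0, 1]) < 1e-12 for P in Psi_lt)   # the upper-right element stays zero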

Ambiguity of impulse response functions. Consider a bivariate VAR model in vector MA representation, that is, $y_t = \Psi(L)\epsilon_t$ with $E(\epsilon_t\epsilon_t') = \Sigma$, where $\Psi(L)$ gives the response of $y_t = (y_{1t}, y_{2t})'$ to both elements of $\epsilon_t$, that is, $\epsilon_{1t}$ and $\epsilon_{2t}$. Just as well we might be interested in evaluating responses of $y_t$ to linear combinations of $\epsilon_{1t}$ and $\epsilon_{2t}$, for example to unit movements in $\epsilon_{1t}$ and in $\epsilon_{2t} + c\,\epsilon_{1t}$ for some constant $c$. This may be done by defining new shocks $\nu_{1t} = \epsilon_{1t}$ and $\nu_{2t} = \epsilon_{2t} + c\,\epsilon_{1t}$, or in matrix notation $\nu_t = Q\epsilon_t$ with
$$Q = \begin{pmatrix} 1 & 0 \\ c & 1 \end{pmatrix}.$$

The vector MA representation of our VAR in terms of the new shocks then becomes
$$y_t = \Psi(L)\epsilon_t = \Psi(L)Q^{-1}Q\epsilon_t =: \Psi^*(L)\nu_t,$$
with
$$\Psi^*(L) = \Psi(L)Q^{-1}.$$
Note that both representations are observationally equivalent (they produce the same $y_t$), but yield different impulse response functions. In particular,
$$\psi^*_0 = \psi_0 Q^{-1} = I\,Q^{-1} = Q^{-1},$$
which implies that single component shocks may now have contemporaneous effects on more than one component of $y_t$. Also the covariance matrix of the residuals will change, since $E(\nu_t\nu_t') = E(Q\epsilon_t\epsilon_t'Q') = Q\Sigma Q' \neq \Sigma$ unless $Q = I$. But the fact that both representations are observationally equivalent implies that it is our own choice which linear combination of the $\epsilon_{ti}$'s we find most useful to look at in the response analysis!
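
A small numerical check of this observational equivalence, reusing the $\Psi$ matrices from the earlier sketch and picking an arbitrary value for the constant $c$:

    # Psi*(L) applied to the transformed shocks reproduces exactly the same y_t.
    c = 0.5                                   # illustrative value, not from the notes
    Q = np.array([[1.0, 0.0],
                  [c,   1.0]])
    Qinv = np.linalg.inv(Q)

    eps = np.random.default_rng(0).normal(size=2)   # one draw of the original shocks
    nu = Q @ eps                                    # transformed shocks

    s = 3
    assert np.allclose(Psi[s] @ eps, (Psi[s] @ Qinv) @ nu)   # identical contribution to y_{t+s}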

Orthogonalized impulse response functions. Usually the components of $\epsilon_t$ are contemporaneously correlated. For example, in our VAR model of the equity-bond data the contemporaneous residual correlations are

    [correlation table of the residuals for FTA, DIV, R and TBILL; numerical values not recoverable from the transcription]

If the correlations are high, it doesn't make much sense to ask "what if $\epsilon_{1t}$ has a unit impulse" with no change in $\epsilon_{2t}$, since both usually come at the same time. For impulse response analysis it is therefore desirable to express the VAR in such a way that the shocks become orthogonal (that is, the $\epsilon_{ti}$'s are uncorrelated). Additionally, it is convenient to rescale the shocks so that they have a unit variance.

So we want to pick a $Q$ such that $E(\nu_t\nu_t') = I$. This may be accomplished by choosing $Q$ such that $Q\Sigma Q' = I$, since then
$$E(\nu_t\nu_t') = E(Q\epsilon_t\epsilon_t'Q') = Q\Sigma Q' = I.$$
Unfortunately there are many different $Q$'s whose inverses $S = Q^{-1}$ act as "square roots" of $\Sigma$, that is, $SS' = \Sigma$. This may be seen as follows. Choose any orthogonal matrix $R$ (that is, $RR' = I$) and set $\tilde S = SR$. We then have $\tilde S\tilde S' = SRR'S' = SS' = \Sigma$. Which of the many possible $S$'s, respectively $Q$'s, should we choose?
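
A numpy illustration of this non-uniqueness, with a hypothetical residual covariance matrix: the Cholesky factor $S$ and any rotation $SR$ are equally valid square roots of $\Sigma$.

    Sigma = np.array([[1.0, 0.4],
                      [0.4, 2.0]])            # hypothetical residual covariance matrix
    S = np.linalg.cholesky(Sigma)             # lower triangular, S S' = Sigma

    theta = 0.7                               # arbitrary rotation angle
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    S_tilde = S @ R                           # another matrix with S~ S~' = Sigma

    assert np.allclose(S @ S.T, Sigma)
    assert np.allclose(S_tilde @ S_tilde.T, Sigma)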

Before turning to a clever choice of $Q$ (resp. $S$), let us briefly restate our results obtained so far in terms of $S = Q^{-1}$. If we find a matrix $S$ such that $SS' = \Sigma$ and transform our VAR residuals such that $\nu_t = S^{-1}\epsilon_t$, then we obtain an observationally equivalent VAR where the shocks are orthogonal (i.e. uncorrelated with a unit variance), that is,
$$E(\nu_t\nu_t') = S^{-1}E(\epsilon_t\epsilon_t')S'^{-1} = S^{-1}\Sigma S'^{-1} = I.$$
The new vector MA representation becomes
$$y_t = \Psi^*(L)\nu_t = \sum_{i=0}^{\infty}\psi^*_i\nu_{t-i},$$
where $\psi^*_i = \psi_i S$ ($m \times m$ matrices), so that $\psi^*_0 = S \neq I_m$. The impulse response function of $y_i$ to a unit shock in $y_j$ is then given by the orthogonalised impulse response function
$$\psi^*_{ij,0},\ \psi^*_{ij,1},\ \psi^*_{ij,2},\ \ldots.$$
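
In the numpy sketches above, the orthogonalised responses are obtained by post-multiplying each $\Psi_i$ by the (hypothetical) Cholesky factor $S$:

    # Orthogonalised impulse responses psi*_i = Psi_i S (S from the previous sketch).
    Psi_orth = [P @ S for P in Psi]                        # psi*_0 = S, psi*_1 = Psi_1 S, ...
    orth_irf_21 = np.array([P[1, 0] for P in Psi_orth])    # response of y_2 to a one-s.d. shock in y_1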

Cholesky decomposition and ordering of variables. Note that every orthogonalization of correlated shocks in the original VAR leads to contemporaneous effects of single component shocks $\nu_{ti}$ on more than one component of $y_t$, since $\psi^*_0 = S$ will not be diagonal unless $\Sigma$ was diagonal already. One generally used method of choosing $S$ is the Cholesky decomposition, which results in a lower triangular matrix with positive main diagonal elements for $\psi^*_0$, e.g.
$$\begin{pmatrix} y_{1,t} \\ y_{2,t} \end{pmatrix} = \begin{pmatrix} \psi^*_{11}(0) & 0 \\ \psi^*_{21}(0) & \psi^*_{22}(0) \end{pmatrix}\begin{pmatrix} \nu_{1,t} \\ \nu_{2,t} \end{pmatrix} + \Psi^*(1)\nu_{t-1} + \cdots$$
Hence the Cholesky decomposition of $\Sigma$ implies that the second shock $\nu_{2,t}$ does not affect the first variable $y_{1,t}$ contemporaneously, but both shocks can have a contemporaneous effect on $y_{2,t}$ (and on all following variables, had we chosen an example with more than two components). Hence the ordering of variables is important!
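
A short illustration that the ordering matters, continuing the hypothetical $\Sigma$ from above: permuting the variables before taking the Cholesky factor yields a genuinely different orthogonalisation.

    # Swap the two variables, factor, and map back to the original ordering.
    P_perm = np.array([[0.0, 1.0],
                       [1.0, 0.0]])
    S_perm = np.linalg.cholesky(P_perm @ Sigma @ P_perm.T)
    S_back = P_perm.T @ S_perm @ P_perm

    print(S)        # lower triangular in the original ordering
    print(S_back)   # not lower triangular: different contemporaneous effects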

It is recommended to try various orderings to see whether the resulting interpretations are consistent. The principle is that the first variable should be selected such that it is the only one with a potential immediate impact on all other variables. The second variable may have an immediate impact on the last $m - 1$ components of $y_t$, but not on $y_{1t}$, the first component, and so on. Of course, this is usually a difficult task in practice.

Variance decomposition. The uncorrelatedness of the $\nu_t$'s allows us to decompose the error variance of the $s$-step-ahead forecast of $y_{it}$ into components accounted for by these shocks, or innovations (this is why this technique is usually called innovation accounting). Because the innovations have unit variances (besides being uncorrelated), the component of this error variance accounted for by innovations to $y_j$ is given by $\sum_{l=0}^{s-1}\big(\psi^{(l)}_{ij}\big)^2$, as we shall see below.

Consider an orthogonalized VAR with $m$ components in vector MA representation,
$$y_t = \sum_{l=0}^{\infty}\psi^*(l)\nu_{t-l}.$$
The $s$-step-ahead forecast for $y_{t+s}$ is then
$$E_t(y_{t+s}) = \sum_{l=s}^{\infty}\psi^*(l)\nu_{t+s-l}.$$
Defining the $s$-step-ahead forecast error as
$$e_{t+s} = y_{t+s} - E_t(y_{t+s}),$$
we get
$$e_{t+s} = \sum_{l=0}^{s-1}\psi^*(l)\nu_{t+s-l}.$$
Its $i$-th component is given by
$$e_{i,t+s} = \sum_{l=0}^{s-1}\sum_{j=1}^{m}\psi^{(l)}_{ij}\nu_{j,t+s-l} = \sum_{j=1}^{m}\sum_{l=0}^{s-1}\psi^{(l)}_{ij}\nu_{j,t+s-l}.$$

Now, because the shocks are both serially and contemporaneously uncorrelated, we get for the error variance
$$V(e_{i,t+s}) = \sum_{j=1}^{m}\sum_{l=0}^{s-1} V\big(\psi^{(l)}_{ij}\nu_{j,t+s-l}\big) = \sum_{j=1}^{m}\sum_{l=0}^{s-1}\big(\psi^{(l)}_{ij}\big)^2 V(\nu_{j,t+s-l}).$$
Now, recalling that all shock components have unit variance, this implies that
$$V(e_{i,t+s}) = \sum_{j=1}^{m}\sum_{l=0}^{s-1}\big(\psi^{(l)}_{ij}\big)^2,$$
where $\sum_{l=0}^{s-1}\big(\psi^{(l)}_{ij}\big)^2$ accounts for the error variance generated by innovations to $y_j$, as claimed. Comparing this to the total error variance, we get a relative measure of how important variable $j$'s innovations are in explaining the variation in variable $i$ at different step-ahead forecasts, i.e.,
$$R_{ij,s} = \frac{\sum_{l=0}^{s-1}\big(\psi^{(l)}_{ij}\big)^2}{\sum_{k=1}^{m}\sum_{l=0}^{s-1}\big(\psi^{(l)}_{ik}\big)^2}.$$
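
Continuing the numpy sketches, these variance shares can be computed directly from the orthogonalised response matrices:

    # Forecast error variance decomposition: share of the s-step-ahead forecast
    # error variance of variable i that is due to orthogonal shock j.
    def fevd(Psi_orth, s):
        contrib = sum(P ** 2 for P in Psi_orth[:s])           # elementwise (psi_ij^(l))^2, l = 0, ..., s-1
        return contrib / contrib.sum(axis=1, keepdims=True)   # rows sum to one

    shares = fevd(Psi_orth, s=4)    # shares[i, j] corresponds to R_{ij,4}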

Thus, while impulse response functions trace the effects of a shock to one endogenous variable on to the other variables in the VAR, variance decomposition separates the variation in an endogenous variable into the component shocks to the VAR.

Example. Let us choose in our example two orderings: one according to the feedback analysis (I: FTA, DIV, R, TBILL), and an ordering based upon the relative timing of the trading hours of the markets (II: TBILL, R, DIV, FTA). In EViews the order is simply defined in the Cholesky ordering option. Below are results in graphs with I: FTA, DIV, R, TBILL; II: R, TBILL, DIV, FTA; and III: the generalized impulse response function.

Impulse responses, order {FTA, DIV, R, TBILL}: response to Cholesky one S.D. innovations ± S.E. (4 × 4 panel of graphs showing the response of each of DFTA, DDIV, DR and DTBILL to each of DFTA, DDIV, DR and DTBILL).

Impulse responses, order {TBILL, R, DIV, FTA}: response to Cholesky one S.D. innovations ± S.E. (same 4 × 4 panel layout).

Impulse responses continued: response to generalized one S.D. innovations ± S.E. (4 × 4 panel of graphs, same layout as above).

The generalized impulse response function is defined as
$$GI(j, \delta_i, F_{t-1}) = E[y_{t+j} \mid \epsilon_{it} = \delta_i, F_{t-1}] - E[y_{t+j} \mid F_{t-1}].$$
That is, it is the difference of conditional expectations given that a one-time shock $\delta_i$ occurs in series $i$ at time $t$. These coincide with the orthogonalized impulse responses if the residual covariance matrix is diagonal.

Pesaran, M. Hashem and Yongcheol Shin (1998). Generalized Impulse Response Analysis in Linear Multivariate Models, Economics Letters, 58, 17-29.
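
A minimal numpy sketch, using the Pesaran-Shin closed form $\sigma_{jj}^{-1/2}\,\Psi_s\,\Sigma\,e_j$ for a one-standard-deviation shock to variable $j$ and the hypothetical $\Psi$ and $\Sigma$ from the earlier sketches:

    # Generalized impulse responses at horizons 0, ..., horizon to a one-s.d. shock in variable j.
    def generalized_irf(Psi, Sigma, j, horizon):
        e_j = np.zeros(Sigma.shape[0])
        e_j[j] = 1.0
        scale = 1.0 / np.sqrt(Sigma[j, j])
        return np.array([scale * (Psi[s] @ Sigma @ e_j) for s in range(horizon + 1)])

    gi = generalized_irf(Psi, Sigma, j=1, horizon=4)   # responses of both variables to a shock in y_2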

Variance decomposition graphs of the equity-bond data: variance decomposition ± S.E. (4 × 4 panel of graphs showing the percentage of the forecast error variance of each of DFTA, DDIV, DR and DTBILL due to each of DFTA, DDIV, DR and DTBILL).

On estimation of the impulse response coefficients. Consider the VAR(p) model
$$\Phi(L)y_t = \epsilon_t, \quad \text{with } \Phi(L) = I_m - \Phi_1 L - \cdots - \Phi_p L^p.$$
Then under stationarity the vector MA representation is
$$y_t = \epsilon_t + \Psi_1\epsilon_{t-1} + \Psi_2\epsilon_{t-2} + \cdots$$
When we have estimates of the AR matrices $\Phi_i$, denoted by $\hat\Phi_i$, $i = 1, \ldots, p$, the next problem is to construct estimates $\hat\Psi_j$ for the MA matrices $\Psi_j$. Recall that
$$\Psi_j = \sum_{i=1}^{j}\Psi_{j-i}\Phi_i,$$
with $\Psi_0 = I_m$ and $\Phi_i = 0$ when $i > p$. The estimates $\hat\Psi_j$ can be obtained by replacing the $\Phi_i$'s by their corresponding estimates $\hat\Phi_i$.

Next we have to obtain the orthogonalized impulse response coefficients. This can be done easily, for letting $S$ be the Cholesky decomposition of $\Sigma$ such that $\Sigma = SS'$, we can write
$$y_t = \sum_{i=0}^{\infty}\Psi_i\epsilon_{t-i} = \sum_{i=0}^{\infty}\Psi_i S S^{-1}\epsilon_{t-i} = \sum_{i=0}^{\infty}\Psi^*_i\nu_{t-i},$$
where $\Psi^*_i = \Psi_i S$ and $\nu_t = S^{-1}\epsilon_t$. Then
$$\mathrm{Cov}(\nu_t) = S^{-1}\Sigma S'^{-1} = I.$$
The estimates for $\Psi^*_i$ are obtained by replacing the $\Psi_i$ with their estimates $\hat\Psi_i$ and using the Cholesky decomposition of $\hat\Sigma$.
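
In practice these estimates come from fitting the VAR and applying exactly the transformations above; a sketch assuming the statsmodels VAR interface, where the data array is a placeholder for the T × m matrix of observed series:

    import numpy as np
    from statsmodels.tsa.api import VAR

    data = np.random.default_rng(1).normal(size=(500, 2))   # placeholder data
    res = VAR(data).fit(2)                   # OLS estimates of Phi_1, ..., Phi_p
    Psi_hat = res.ma_rep(maxn=10)            # estimated Psi_j matrices
    S_hat = np.linalg.cholesky(res.sigma_u)  # Cholesky factor of the estimated Sigma
    Psi_orth_hat = np.array([P @ S_hat for P in Psi_hat])   # estimated orthogonalised responses
    # (statsmodels also offers res.orth_ma_rep(maxn=10) and res.irf(10) directly)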
