Technical Appendix to "Are Stocks Really Less Volatile in the Long Run?"


by Ľuboš Pástor and Robert F. Stambaugh

February 14, 2009

This Technical Appendix describes how the predictive variance of multiperiod returns is computed in two alternative frameworks that differ from the basic framework used in the paper. Both alternative frameworks are analyzed to assess the robustness of the paper's results, and both feature noninformative prior beliefs. The first framework is the general VAR form of the predictive system, whose results are reported in Section 4 of the paper. This framework is discussed in Section B1 below. The second framework is the perfect-predictor model, whose results are reported in Section 5 of the paper. That framework is discussed in Section B2 below.

B1. Predictive system: General VAR

We begin working with multiple assets, so that not only x_t but also r_t and μ_t are vectors. The predictive system in its most general form is a VAR for r_t, x_t, and μ_t, with coefficients restricted so that μ_t is the conditional mean of r_{t+1}.(1) We assume that x_t and μ_t are stationary with means E_x and E_r. We work with the first-order VAR(2)

[r_{t+1} − E_r; x_{t+1} − E_x; μ_{t+1} − E_r] = [0, 0, I; A_21, A_22, A_23; A_31, A_32, A_33] [r_t − E_r; x_t − E_x; μ_t − E_r] + [u_{t+1}; v_{t+1}; w_{t+1}].   (B1)

We assume the errors in (B1) are i.i.d. across t = 1, ..., T:

[u_t; v_t; w_t] ~ N(0, Σ),   Σ = [Σ_uu, Σ_uv, Σ_uw; Σ_vu, Σ_vv, Σ_vw; Σ_wu, Σ_wv, Σ_ww].   (B2)

Let Ā denote the entire coefficient matrix in (B1), and let Σ denote the entire covariance matrix in (B2). Define the vector φ_t = [r_t; x_t; μ_t] and let V denote its unconditional covariance matrix. Then

V = [V_rr, V_rx, V_rμ; V_xr, V_xx, V_xμ; V_μr, V_μx, V_μμ] = Ā V Ā' + Σ,   (B3)

which can be solved as

vec(V) = [I − Ā ⊗ Ā]^{−1} vec(Σ),   (B5)

using the well-known identity vec(FGH) = (H' ⊗ F) vec(G).

Footnote 1: The basic framework used in the paper is a special case of (B1) in which the coefficient matrix is restricted as

[0, 0, I; A_21, A_22, A_23; A_31, A_32, A_33] = [0, 0, I; 0, A_22, 0; 0, 0, A_33].   (B4)

Footnote 2: The predictive system in (B1) can also be viewed alternatively as an unrestricted VAR for returns and predictors when some predictors are unobserved. See the Technical Appendix to Pástor and Stambaugh (2009).
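Because (B5) is just a linear system in the elements of V, it is convenient to compute V numerically. The following sketch is ours, not the paper's (Python with numpy assumed); it implements (B5) directly:

```python
import numpy as np

def unconditional_cov(A_bar, Sigma):
    """Unconditional covariance V of phi_t, solving
    vec(V) = [I - A_bar kron A_bar]^{-1} vec(Sigma), equation (B5)."""
    n = A_bar.shape[0]
    # vec() stacks columns, so vec(A V A') = (A kron A) vec(V); use order="F".
    vec_V = np.linalg.solve(np.eye(n * n) - np.kron(A_bar, A_bar),
                            Sigma.flatten(order="F"))
    return vec_V.reshape((n, n), order="F")
```

A solve is preferable to forming the inverse explicitly; stationarity of Ā (all eigenvalues inside the unit circle) guarantees that I − Ā ⊗ Ā is nonsingular.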

Let z_t denote the vector of the observed data at time t,

z_t = [r_t; x_t].

Denote the data we observe through time t as D_t = (z_1, ..., z_t), and note that our complete data consist of D_T. Also define

E_z = [E_r; E_x],   V_zz = [V_rr, V_rx; V_xr, V_xx],   V_zμ = [V_rμ; V_xμ].   (B6)

Let θ denote the full set of parameters in the model, θ = (Ā, Σ, E_z), and let μ̃ denote the full time series of μ_t, t = 0, 1, ..., T. To obtain the joint posterior distribution of θ and μ̃, denoted by p(θ, μ̃ | D_T), we use an MCMC procedure in which we alternate between drawing from the conditional posterior p(μ̃ | θ, D_T) and drawing from the conditional posterior p(θ | μ̃, D_T). The procedure for drawing from p(μ̃ | θ, D_T) is described in Section B1.1. The procedure for drawing from p(θ | μ̃, D_T) ∝ p(θ) p(D_T, μ̃ | θ) is described in Section B1.2.
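The alternation between the two conditional posteriors is a standard two-block Gibbs sampler. A minimal skeleton of the scheme is sketched below (our names; draw_mu and draw_theta are hypothetical callables standing in for the procedures of Sections B1.1 and B1.2):

```python
def gibbs_sampler(data, theta_init, n_draws, draw_mu, draw_theta):
    """Alternate between p(mu | theta, D_T) and p(theta | mu, D_T)."""
    theta, draws = theta_init, []
    for _ in range(n_draws):
        mu = draw_mu(theta, data)      # FFBS step, Section B1.1
        theta = draw_theta(mu, data)   # parameter draws, Section B1.2
        draws.append((theta, mu))
    return draws
```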

B1.1. Drawing the time series of μ_t

To draw the time series of the unobservable values of μ_t conditional on the current parameter draws, we apply the forward filtering, backward sampling (FFBS) approach developed by Carter and Kohn (1994) and Frühwirth-Schnatter (1994). See also West and Harrison (1997, Chapter 15).

B1.1.1. Filtering

The first stage follows the standard methodology of Kalman filtering. Define

a_t = E(μ_t | D_{t−1})   b_t = E(μ_t | D_t)   e_t = E(z_t | μ_t, D_{t−1})   (B7)
f_t = E(z_t | D_{t−1})   P_t = Var(μ_t | D_{t−1})   Q_t = Var(μ_t | D_t)   (B8)
R_t = Var(z_t | μ_t, D_{t−1})   S_t = Var(z_t | D_{t−1})   G_t = Cov(z_t, μ_t | D_{t−1}).   (B9)

Conditioning on the (unknown) parameters of the model is assumed throughout but suppressed in the notation for convenience. First observe that

μ_0 | D_0 ~ N(b_0, Q_0),   (B10)

where D_0 denotes the null information set, so that the unconditional moments of μ_0 are given by b_0 = E_r and Q_0 = V_μμ. Also,

μ_1 | D_0 ~ N(a_1, P_1),   (B11)

where a_1 = E_r and P_1 = V_μμ, and

z_1 | D_0 ~ N(f_1, S_1),   (B12)

where f_1 = E_z and S_1 = V_zz. Note that

G_1 = V_zμ   (B13)

and that

z_1 | μ_1, D_0 ~ N(e_1, R_1),   (B14)

where

e_1 = f_1 + G_1 P_1^{−1} (μ_1 − a_1)   (B15)
R_1 = S_1 − G_1 P_1^{−1} G_1'.   (B16)

Combining this density with equation (B11) using Bayes' rule gives

μ_1 | D_1 ~ N(b_1, Q_1),   (B17)

where

b_1 = a_1 + P_1 (P_1 + G_1' R_1^{−1} G_1)^{−1} G_1' R_1^{−1} (z_1 − f_1)   (B18)
Q_1 = P_1 (P_1 + G_1' R_1^{−1} G_1)^{−1} P_1.   (B19)

Continuing in this fashion, we find that all conditional densities are normally distributed, and we obtain all the required moments for t = 2, ..., T:

a_t = (I − A_31 − A_33) E_r − A_32 E_x + A_31 r_{t−1} + A_32 x_{t−1} + A_33 b_{t−1}   (B20)

f_t = [b_{t−1}; (I − A_22) E_x − (A_21 + A_23) E_r + A_21 r_{t−1} + A_22 x_{t−1} + A_23 b_{t−1}]   (B21)

S_t = [Q_{t−1}, Q_{t−1} A_23'; A_23 Q_{t−1}, A_23 Q_{t−1} A_23'] + [Σ_uu, Σ_uv; Σ_vu, Σ_vv]   (B22)

G_t = [Q_{t−1} A_33'; A_23 Q_{t−1} A_33'] + [Σ_uw; Σ_vw]   (B23)

P_t = A_33 Q_{t−1} A_33' + Σ_ww   (B24)

e_t = f_t + G_t P_t^{−1} (μ_t − a_t)   (B25)

R_t = S_t − G_t P_t^{−1} G_t'   (B26)

b_t = a_t + P_t (P_t + G_t' R_t^{−1} G_t)^{−1} G_t' R_t^{−1} (z_t − f_t)   (B27)
    = a_t + G_t' S_t^{−1} (z_t − f_t)   (B28)

Q_t = P_t (P_t + G_t' R_t^{−1} G_t)^{−1} P_t.   (B29)

The values of {a_t, b_t, Q_t, S_t, G_t, P_t} for t = 1, ..., T are retained for the next stage. Equations (B22) through (B24) are derived as

[S_t, G_t; G_t', P_t] = Var(φ_t | D_{t−1})
= Ā Var(φ_{t−1} | D_{t−1}) Ā' + Σ
= Ā [0, 0, 0; 0, 0, 0; 0, 0, Q_{t−1}] Ā' + Σ
= [Q_{t−1}, Q_{t−1} A_23', Q_{t−1} A_33'; A_23 Q_{t−1}, A_23 Q_{t−1} A_23', A_23 Q_{t−1} A_33'; A_33 Q_{t−1}, A_33 Q_{t−1} A_23', A_33 Q_{t−1} A_33'] + [Σ_uu, Σ_uv, Σ_uw; Σ_vu, Σ_vv, Σ_vw; Σ_wu, Σ_wv, Σ_ww].
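For the scalar-μ_t case implemented in the paper, one filtering step can be coded directly from the block representation just derived rather than from (B20)-(B24) block by block. The sketch below is ours (numpy assumed); φ̂_{t−1} = (r_{t−1}, x_{t−1}', b_{t−1})', and the update uses (B28) together with Q_t = P_t − G_t' S_t^{−1} G_t, which is algebraically equivalent to (B29):

```python
import numpy as np

def filter_step(phi_hat_lag, Q_lag, A_bar, Sigma, E, z_t):
    """One forward-filtering step of Section B1.1.1 for scalar mu_t.
    phi_hat_lag = E(phi_{t-1} | D_{t-1}); Q_lag = Q_{t-1};
    z_t = (r_t, x_t')' is the current observation."""
    n, m = A_bar.shape[0], len(z_t)            # n = K + 2, m = K + 1
    # E(phi_t | D_{t-1}) stacks (f_t', a_t')', as in (B20)-(B21).
    mean = E + A_bar @ (phi_hat_lag - E)
    # Var(phi_t | D_{t-1}) = A_bar Var(phi_{t-1} | D_{t-1}) A_bar' + Sigma,
    # where Var(phi_{t-1} | D_{t-1}) has Q_{t-1} in its last diagonal cell.
    W = np.zeros((n, n)); W[-1, -1] = Q_lag
    V = A_bar @ W @ A_bar.T + Sigma            # blocks (B22)-(B24)
    f, a = mean[:m], mean[m:]
    S, G, P = V[:m, :m], V[:m, m:], V[m:, m:]
    b = a + G.T @ np.linalg.solve(S, z_t - f)  # (B28)
    Q = P - G.T @ np.linalg.solve(S, G)        # equivalent to (B29)
    return b, Q, (a, f, S, G, P)               # retained for sampling
```

The t = 1 step uses the unconditional moments in (B10)-(B19) instead of this recursion.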

B1.1.2. Sampling

We wish to draw (μ_0, μ_1, ..., μ_T) conditional on D_T. The backward-sampling approach relies on the Markov property of the evolution of μ_t and the resulting identity

p(μ_0, μ_1, ..., μ_T | D_T) = p(μ_T | D_T) p(μ_{T−1} | μ_T, D_{T−1}) ··· p(μ_1 | μ_2, D_1) p(μ_0 | μ_1, D_0).   (B30)

We first sample μ_T from p(μ_T | D_T), the normal density obtained in the last step of the filtering. Then, for t = T−1, T−2, ..., 1, 0, we sample μ_t from the conditional density p(μ_t | μ_{t+1}, D_t). (Note that the first two subvectors of φ_t are already observed and thus need not be sampled.) To obtain that conditional density, first note that

φ_{t+1} | D_t ~ N([f_{t+1}; a_{t+1}], [S_{t+1}, G_{t+1}; G_{t+1}', P_{t+1}]),   (B31)

φ_t | D_t ~ N([r_t; x_t; b_t], [0, 0, 0; 0, 0, 0; 0, 0, Q_t]),   (B32)

and

Cov(φ_t, φ_{t+1}' | D_t) = Var(φ_t | D_t) Ā' = [0, 0, 0; 0, 0, 0; 0, 0, Q_t] [0, A_21', A_31'; 0, A_22', A_32'; I, A_23', A_33'] = [0, 0, 0; 0, 0, 0; Q_t, Q_t A_23', Q_t A_33'].   (B33)

Therefore,

φ_t | φ_{t+1}, D_t ~ N(h_t, H_t),   (B34)

where

h_t = E(φ_t | D_t) + Cov(φ_t, φ_{t+1}' | D_t) [Var(φ_{t+1} | D_t)]^{−1} [φ_{t+1} − E(φ_{t+1} | D_t)]
    = [r_t; x_t; b_t] + [0, 0, 0; 0, 0, 0; Q_t, Q_t A_23', Q_t A_33'] [S_{t+1}, G_{t+1}; G_{t+1}', P_{t+1}]^{−1} [z_{t+1} − f_{t+1}; μ_{t+1} − a_{t+1}]

and

H_t = Var(φ_t | D_t) − Cov(φ_t, φ_{t+1}' | D_t) [Var(φ_{t+1} | D_t)]^{−1} Cov(φ_t, φ_{t+1}' | D_t)'
    = [0, 0, 0; 0, 0, 0; 0, 0, Q_t] − [0, 0, 0; 0, 0, 0; Q_t, Q_t A_23', Q_t A_33'] [S_{t+1}, G_{t+1}; G_{t+1}', P_{t+1}]^{−1} [0, 0, 0; 0, 0, 0; Q_t, Q_t A_23', Q_t A_33']'.

The mean and covariance matrix of μ_t are taken as the relevant elements of h_t and H_t.

In the rest of the Appendix, we discuss the special case (implemented in the paper) in which r_t and μ_t are scalars. The dimensions of Ā and Σ are then (K+2) × (K+2), where K is the number of predictors in the vector x_t.
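One backward-sampling step can likewise be written compactly by conditioning within the joint normal implied by (B31)-(B33); only the μ_t element of h_t and H_t is needed, since r_t and x_t are observed. A sketch (our names; numpy assumed; rng is a numpy Generator):

```python
import numpy as np

def backward_sample_step(rng, phi_next, r_t, x_t, b_t, Q_t, A_bar, Sigma, E):
    """Draw mu_t from p(mu_t | mu_{t+1}, D_t), equations (B31)-(B34),
    for scalar mu_t; (b_t, Q_t) come from the forward filter."""
    n = A_bar.shape[0]
    phi_hat = np.concatenate(([r_t], x_t, [b_t]))  # E(phi_t | D_t), (B32)
    W = np.zeros((n, n)); W[-1, -1] = Q_t          # Var(phi_t | D_t)
    C = W @ A_bar.T                                # Cov(phi_t, phi_{t+1}'), (B33)
    V_next = A_bar @ W @ A_bar.T + Sigma           # Var(phi_{t+1} | D_t), (B31)
    mean_next = E + A_bar @ (phi_hat - E)          # (f_{t+1}', a_{t+1}')'
    h = phi_hat + C @ np.linalg.solve(V_next, phi_next - mean_next)  # (B34)
    H = W - C @ np.linalg.solve(V_next, C.T)
    return rng.normal(h[-1], np.sqrt(max(H[-1, -1], 0.0)))
```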

B1.2. Drawing the parameters

This section describes how we obtain the posterior draws of all parameters conditional on the current draw of the time series of μ_t. The parameters are θ = (Ā, Σ, E_xr), where E_xr = (E_x', E_r)'.

B1.2.1. Prior distributions

We specify noninformative priors for all parameters. The prior on Ā is flat, except for the stationarity restriction that all eigenvalues of Ā must lie inside the unit circle. The prior on E_xr is normal, E_xr ~ N(E_xr^0, V_xr^0). The prior covariance matrix V_xr^0 has very large elements on the main diagonal and zeros off the diagonal, so the choice of the prior mean E_xr^0 is unimportant. The prior on Σ is inverted Wishart with a small number of degrees of freedom: Σ ~ IW(Σ_0, ν_0). We choose ν_0 = K + 4, so that ν_0 = 7 in our basic specification with three predictors. The choice of Σ_0, the prior mean of Σ, is unimportant because the prior standard deviation of Σ is very large, corresponding to the posterior from a hypothetical sample of only seven observations. All parameters are independent a priori.
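In code, the only informative ingredient of these priors is the stationarity indicator, a spectral-radius check on Ā (a one-line sketch, our name; it is reused for the rejection step in Section B1.2.2 below):

```python
import numpy as np

def is_stationary(A_bar):
    """Flat-prior support: all eigenvalues of A_bar inside the unit circle."""
    return np.max(np.abs(np.linalg.eigvals(A_bar))) < 1.0
```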

B1.2.2. Posterior distributions

Drawing Σ given (E_xr, Ā)

Given the normal likelihood and the inverted Wishart prior for Σ, the posterior of Σ is also inverted Wishart, Σ | μ̃, E_xr, Ā, D_T ~ IW(S + Σ_0, (T−1) + ν_0), where S = (Y − XB)'(Y − XB) and

Y = [r_2 − E_r, (x_2 − E_x)', μ_2 − E_r; ...; r_T − E_r, (x_T − E_x)', μ_T − E_r]
X = [r_1 − E_r, (x_1 − E_x)', μ_1 − E_r; ...; r_{T−1} − E_r, (x_{T−1} − E_x)', μ_{T−1} − E_r]
B = Ā'.

The dimensions of both X and Y are (T−1) × (K+2).

Drawing E_xr given (Σ, Ā)

The conditional posterior of E_xr is normal, E_xr | μ̃, Σ, Ā, D_T ~ N(Ẽ_xr, Ṽ_xr), where

Ṽ_xr = [(V_xr^0)^{−1} + (T−1) Q̄' Σ^{−1} Q̄]^{−1}   (B35)

Ẽ_xr = Ṽ_xr [(V_xr^0)^{−1} E_xr^0 + Q̄' Σ^{−1} ∑_{t=1}^{T−1} (φ_{t+1} − Ā φ_t)]   (B36)

Q̄ = [0', 0; I_K − A_22, −(A_21 + A_23); −A_32, 1 − A_31 − A_33].   (B37)

Above, Q̄ is (K+2) × (K+1), I_K is a K × K identity matrix, and φ_t is the vector defined before equation (B3).
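Hedged sketches of these two conditional draws are given below (our function names; scipy's invwishart is assumed for the inverted Wishart draw; phi is the T × (K+2) array of (r_t, x_t', μ_t), and Q̄ is built from the blocks of Ā exactly as in (B37)):

```python
import numpy as np
from scipy.stats import invwishart

def draw_Sigma(rng, Y, X, B, Sigma0, nu0):
    """Sigma | mu, E_xr, A_bar ~ IW(S + Sigma0, (T-1) + nu0)."""
    resid = Y - X @ B                       # B = A_bar'
    S = resid.T @ resid
    return invwishart.rvs(df=resid.shape[0] + nu0, scale=S + Sigma0,
                          random_state=rng)

def draw_E_xr(rng, phi, A_bar, Sigma, E_prior, V_prior):
    """E_xr | mu, Sigma, A_bar: normal draw from (B35)-(B37)."""
    K = phi.shape[1] - 2
    A21, A22, A23 = A_bar[1:K+1, 0], A_bar[1:K+1, 1:K+1], A_bar[1:K+1, -1]
    A31, A32, A33 = A_bar[-1, 0], A_bar[-1, 1:K+1], A_bar[-1, -1]
    Q = np.zeros((K + 2, K + 1))            # Q_bar, (B37); first row is zero
    Q[1:K+1, :K] = np.eye(K) - A22
    Q[1:K+1, K] = -(A21 + A23)
    Q[-1, :K] = -A32
    Q[-1, K] = 1.0 - A31 - A33
    resid = phi[1:] - phi[:-1] @ A_bar.T    # rows are (phi_{t+1} - A_bar phi_t)'
    Vp_inv = np.linalg.inv(V_prior)
    SinvQ = np.linalg.solve(Sigma, Q)
    V_post = np.linalg.inv(Vp_inv + (phi.shape[0] - 1) * Q.T @ SinvQ)   # (B35)
    E_post = V_post @ (Vp_inv @ E_prior + SinvQ.T @ resid.sum(axis=0))  # (B36)
    return rng.multivariate_normal(E_post, V_post)
```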

Drawing Ā given (E_xr, Σ)

In order to draw Ā, we need to draw the (K+2) × (K+1) matrix

A = [A_21', A_31'; A_22', A_32'; A_23', A_33'].   (B38)

First, we establish some notation. Denote Z = I_{K+1} ⊗ X, where I_{K+1} is a (K+1) × (K+1) identity matrix, so that Z is [(T−1)(K+1)] × [(K+2)(K+1)]. Let

Ỹ = [(x_2 − E_x)', μ_2 − E_r; ...; (x_T − E_x)', μ_T − E_r]

and let z = vec(Ỹ), so that z is [(T−1)(K+1)] × 1. Denote

u = [u_2; ...; u_T],   V = [v_2'; ...; v_T'],   w = [w_2; ...; w_T],   r = [r_2; ...; r_T],   μ^(l) = [μ_1; ...; μ_{T−1}],   (B39)

where u, w, r, and μ^(l) are (T−1) × 1 vectors and V is a (T−1) × K matrix. Let v = vec(V). Define σ_uu, Σ_1, and Ω so that

Σ = [σ_uu, Σ_1'; Σ_1, Ω].   (B40)

That is, Σ_1' = [Σ_uv, Σ_uw] and Ω = [Σ_vv, Σ_vw; Σ_wv, Σ_ww]. With the above notation, we can write the system (B1) without its first equation as

z = Zb + [v; w],

where b = vec(A). Conditional on u (note that u = r − μ^(l)), the error terms [v; w] are normally distributed with mean (Σ_1 σ_uu^{−1}) ⊗ u and covariance matrix (Ω − Σ_1 σ_uu^{−1} Σ_1') ⊗ I_{T−1}. Recall that the prior distribution on b is given by 1_{b∈S}, which is equal to one when the stationarity restriction on Ā is satisfied and zero otherwise. Given the normal likelihood, the full conditional posterior distribution of b is then given by

b | μ̃, E_xr, Σ, D_T ~ N(b̃, Ṽ_b) 1_{b∈S},

where

b̃ = (Z'Z)^{−1} Z' [z − (Σ_1 σ_uu^{−1}) ⊗ (r − μ^(l))]   (B41)
Ṽ_b = (Ω − Σ_1 σ_uu^{−1} Σ_1') ⊗ (X'X)^{−1}.   (B42)

We obtain the posterior draws of b by making draws from N(b̃, Ṽ_b) and retaining only the draws that satisfy b ∈ S. The posterior draws of Ā are constructed from the posterior draws of b = vec(A).
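A sketch of the Ā draw by rejection is below (our names; numpy assumed). The Kronecker structure of Z makes (Z'Z)^{−1}Z' equal to I_{K+1} ⊗ (X'X)^{−1}X', which avoids forming Z explicitly:

```python
import numpy as np

def draw_A_bar(rng, X, Y_tilde, r, mu_lag, Sigma, max_tries=1000):
    """A_bar | mu, E_xr, Sigma: draw b = vec(A) from (B41)-(B42), keeping
    only draws that satisfy the stationarity restriction b in S.
    X is the (T-1) x (K+2) matrix of lagged deviations; Y_tilde is the
    (T-1) x (K+1) matrix of current (x, mu) deviations."""
    K2, K1 = X.shape[1], Y_tilde.shape[1]
    s_uu, Sig1, Omega = Sigma[0, 0], Sigma[1:, 0], Sigma[1:, 1:]
    u = r - mu_lag                            # u = r - mu^(l)
    z = Y_tilde.flatten(order="F")
    adj = z - np.kron(Sig1 / s_uu, u)         # conditioning on u
    XtX_inv = np.linalg.inv(X.T @ X)
    b_mean = np.kron(np.eye(K1), XtX_inv @ X.T) @ adj                # (B41)
    V_b = np.kron(Omega - np.outer(Sig1, Sig1) / s_uu, XtX_inv)      # (B42)
    for _ in range(max_tries):
        b = rng.multivariate_normal(b_mean, V_b)
        A = b.reshape((K2, K1), order="F")    # A as in (B38)
        A_bar = np.zeros((K2, K2))            # rebuild A_bar: first row (0,...,0,1)
        A_bar[0, -1] = 1.0
        A_bar[1:, :] = A.T
        if is_stationary(A_bar):              # indicator from Section B1.2.1
            return b, A_bar
    raise RuntimeError("no stationary draw obtained")
```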

B1.3. Predictive variance for general VAR

In addition to the notation from equations (B1), (B2), and (B3), define also

E = [E_r; E_x; E_r],   η_t = [u_t; v_t; w_t],   (B43)

and

φ̃_t = φ_t − E.   (B44)

Equation (B1) can then be written as

φ̃_{t+1} = Ā φ̃_t + η_{t+1}.   (B45)

For i > 1, successive substitution using (B45) gives

φ̃_{t+i} = Ā^i φ̃_t + Ā^{i−1} η_{t+1} + Ā^{i−2} η_{t+2} + ... + η_{t+i}.   (B46)

Define

φ_{T,T+K} = ∑_{i=1}^{K} φ_{T+i},   φ̃_{T,T+K} = ∑_{i=1}^{K} φ̃_{T+i} = φ_{T,T+K} − K E.

Summing (B45) over K periods then gives

φ̃_{T,T+K} = (∑_{i=1}^{K} Ā^i) φ̃_T + (I + Ā + ... + Ā^{K−1}) η_{T+1} + (I + Ā + ... + Ā^{K−2}) η_{T+2} + ... + η_{T+K}
           = (D̄_{K+1} − I) φ̃_T + D̄_K η_{T+1} + D̄_{K−1} η_{T+2} + ... + η_{T+K},   (B47)

where

D̄_i = I + Ā + ... + Ā^{i−1} = (I − Ā)^{−1} (I − Ā^i).   (B48)

It then follows that

E(φ̃_{T,T+K} | μ_T, θ, D_T) = (D̄_{K+1} − I) φ̃_T,
E(φ_{T,T+K} | μ_T, θ, D_T) = (D̄_{K+1} − I) φ̃_T + K E,   (B49)

and

Var(φ_{T,T+K} | μ_T, θ, D_T) = Var(φ̃_{T,T+K} | μ_T, θ, D_T) = ∑_{i=1}^{K} D̄_i Σ D̄_i'.   (B50)
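The matrices D̄_i and the conditional variance (B50) translate directly into code (a sketch, our names; numpy assumed):

```python
import numpy as np

def D_bar(A_bar, i):
    """D_bar_i = I + A_bar + ... + A_bar^{i-1}
       = (I - A_bar)^{-1} (I - A_bar^i), equation (B48)."""
    n = A_bar.shape[0]
    return np.linalg.solve(np.eye(n) - A_bar,
                           np.eye(n) - np.linalg.matrix_power(A_bar, i))

def var_given_mu_T(A_bar, Sigma, K):
    """Var(phi_{T,T+K} | mu_T, theta, D_T) = sum_i D_bar_i Sigma D_bar_i',
    equation (B50)."""
    total = np.zeros_like(Sigma)
    for i in range(1, K + 1):
        D = D_bar(A_bar, i)
        total += D @ Sigma @ D.T
    return total
```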

The first and second moments of (B49) given D_T and θ are given by

E(φ_{T,T+K} | D_T, θ) = (D̄_{K+1} − I) [r_T − E_r; x_T − E_x; b_T − E_r] + K E   (B51)

and

Var[E(φ_{T,T+K} | μ_T, θ, D_T) | D_T, θ] = (D̄_{K+1} − I) [0, 0, 0; 0, 0, 0; 0, 0, Q_T] (D̄_{K+1} − I)'.   (B52)

Combining (B50) and (B52) gives

Var(φ_{T,T+K} | D_T, θ) = E[Var(φ_{T,T+K} | μ_T, θ, D_T) | D_T, θ] + Var[E(φ_{T,T+K} | μ_T, θ, D_T) | D_T, θ]
= ∑_{i=1}^{K} D̄_i Σ D̄_i' + (D̄_{K+1} − I) [0, 0, 0; 0, 0, 0; 0, 0, Q_T] (D̄_{K+1} − I)'.   (B53)

By evaluating (B51) and (B53) for repeated draws of θ from its posterior, the predictive variance of φ_{T,T+K} can be computed using the decomposition

Var(φ_{T,T+K} | D_T) = E{Var(φ_{T,T+K} | θ, D_T) | D_T} + Var{E(φ_{T,T+K} | θ, D_T) | D_T}.   (B54)

Finally, the predictive variance of r_{T,T+K} is the (1,1) element of Var(φ_{T,T+K} | D_T).
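Putting (B51), (B53), and (B54) together across posterior draws of θ gives the full computation. The sketch below reuses D_bar and var_given_mu_T from the previous block (our names; np.cov with bias=True supplies the posterior variance of the conditional means):

```python
import numpy as np

def predictive_variance(theta_draws, phi_hat_T, Q_T, K):
    """Predictive variance of phi_{T,T+K} via (B51), (B53), and (B54).
    phi_hat_T = (r_T, x_T', b_T)' and Q_T come from the forward filter;
    theta_draws iterates over posterior draws (A_bar, Sigma, E).
    The (1,1) element of the result is Var(r_{T,T+K} | D_T)."""
    n = len(phi_hat_T)
    means, variances = [], []
    for A_bar, Sigma, E in theta_draws:
        DK1 = D_bar(A_bar, K + 1) - np.eye(n)            # D_bar_{K+1} - I
        means.append(DK1 @ (phi_hat_T - E) + K * E)      # (B51)
        W = np.zeros((n, n)); W[-1, -1] = Q_T
        variances.append(var_given_mu_T(A_bar, Sigma, K)
                         + DK1 @ W @ DK1.T)              # (B53)
    # (B54): E[Var | theta] + Var[E | theta] across the draws
    return np.mean(variances, axis=0) + np.cov(np.array(means).T, bias=True)
```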

B2. Perfect predictors

Note: The notation in this section is distinct from the notation in the previous section.

The paper focuses on the realistic scenario in which the observable predictors x_t are imperfect, in that they do not perfectly capture the conditional expected return μ_t. In this section, we discuss the calculation of predictive variance in an alternative framework with perfect predictors, for which μ_t = a + b'x_t. In that case, the predictive system is replaced by a model consisting of the equations

r_{t+1} = a + b'x_t + e_{t+1}   (B55)
x_{t+1} = θ + A x_t + v_{t+1},   (B56)

combined with the following distributional assumption on the residuals:

[e_t; v_t] ~ N(0, Σ).   (B57)

Let δ denote the full set of parameters in equations (B55), (B56), and (B57). Let Σ denote the covariance matrix in (B57), let B denote the matrix of the slope coefficients in (B55) and (B56),

B = [a, θ'; b, A'],   Σ = [σ_e^2, σ_ve'; σ_ve, Σ_vv],

and let c = vec(B). Note that δ consists of the elements of c and Σ.

B2.1. Posterior distributions under perfect predictors

We specify the prior distribution on δ as p(δ) = p(c) p(Σ). The priors on c and Σ are noninformative, except for the restriction that the spectral radius of A, λ(A), is less than 1. The prior on c is p(c) ∝ I[λ(A) < 1], where I[·] denotes the indicator function. The prior on Σ is p(Σ) ∝ |Σ|^{−(m+1)/2}, where m is the number of rows in Σ (i.e., x_t is (m−1) × 1).

To obtain the posterior distribution of δ, the prior p(δ) is combined with the normal likelihood function p(D_T | δ) implied by equation (B57). The posterior draws of δ can be obtained by applying standard results from the multivariate regression model (e.g., Zellner, 1971). Define the following notation: r = [r_1, r_2, ..., r_T]', X̃_+ = [x_1, x_2, ..., x_T]', X̃_− = [x_0, x_1, ..., x_{T−1}]', X = [ι_T, X̃_−], where ι_T denotes a T × 1 vector of ones, Y = [r, X̃_+], B̂ = (X'X)^{−1} X'Y, and S = (Y − XB̂)'(Y − XB̂). We first draw Σ^{−1} from a Wishart distribution with T − m degrees of freedom and parameter matrix S^{−1}. Given that draw of Σ^{−1}, we then draw c from a normal distribution with mean ĉ = vec(B̂) and covariance matrix Σ ⊗ (X'X)^{−1}. That draw of δ is retained as a draw from p(δ | D_T) if λ(A) < 1.
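A sketch of one posterior draw of δ follows (our names; scipy's wishart assumed; the predictor history x_0, ..., x_T is assumed available so that X̃_+ and X̃_− both have T rows):

```python
import numpy as np
from scipy.stats import wishart

def draw_delta(rng, r, X_plus, X_minus, max_tries=1000):
    """One draw from p(delta | D_T) under perfect predictors (Section B2.1):
    Sigma^{-1} ~ Wishart(T - m, S^{-1}), then c | Sigma ~ N(c_hat,
    Sigma kron (X'X)^{-1}), retained only if the spectral radius of A < 1."""
    T = len(r)
    X = np.column_stack([np.ones(T), X_minus])      # [iota_T, X_tilde_minus]
    Y = np.column_stack([r, X_plus])                # [r, X_tilde_plus]
    m = Y.shape[1]                                  # rows of Sigma
    B_hat = np.linalg.lstsq(X, Y, rcond=None)[0]    # (X'X)^{-1} X'Y
    S = (Y - X @ B_hat).T @ (Y - X @ B_hat)
    XtX_inv = np.linalg.inv(X.T @ X)
    for _ in range(max_tries):
        Sigma = np.linalg.inv(wishart.rvs(df=T - m, scale=np.linalg.inv(S),
                                          random_state=rng))
        c = rng.multivariate_normal(B_hat.flatten(order="F"),
                                    np.kron(Sigma, XtX_inv))
        B = c.reshape(B_hat.shape, order="F")       # B = [a, theta'; b, A']
        A = B[1:, 1:].T                             # slopes of (B56)
        if np.max(np.abs(np.linalg.eigvals(A))) < 1.0:
            return B, Sigma
    raise RuntimeError("no draw with spectral radius below one")
```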

B2.2. Predictive variance under perfect predictors

The conditional moments of the k-period return r_{T,T+k} are given by

E(r_{T,T+k} | D_T, δ) = k a + b' F_{k−1} θ + b' D_k x_T   (B58)

Var(r_{T,T+k} | D_T, δ) = k σ_e^2 + 2 b' F_{k−1} σ_ve + ∑_{i=1}^{k−1} b' D_i Σ_vv D_i' b,   (B59)

where

D_i = I + A + ... + A^{i−1} = (I − A)^{−1} (I − A^i)   (B60)
F_{k−1} = D_1 + D_2 + ... + D_{k−1} = (I − A)^{−1} [k I − (I − A)^{−1} (I − A^k)].   (B61)

The first term in (B59) reflects i.i.d. uncertainty. The second term reflects correlation between unexpected returns and innovations in future x_{T+i}'s, which deliver innovations in future μ_{T+i}'s. That term can be positive or negative, and it captures any mean reversion. The third term, always positive, reflects uncertainty about future x_{T+i}'s, and thus uncertainty about future μ_{T+i}'s. This third term, which contains a summation, can also be written without the summation as

∑_{i=1}^{k−1} b' D_i Σ_vv D_i' b = (b' ⊗ b') [(I − A)^{−1} ⊗ (I − A)^{−1}] [k I − D_k ⊗ I − I ⊗ D_k + (I − A ⊗ A)^{−1} (I − (A ⊗ A)^k)] vec(Σ_vv).

Applying the standard variance decomposition

Var(r_{T,T+k} | D_T) = E{Var(r_{T,T+k} | D_T, δ) | D_T} + Var{E(r_{T,T+k} | D_T, δ) | D_T},   (B62)

the predictive variance Var(r_{T,T+k} | D_T) can be computed as the sum of the posterior mean of the right-hand side of equation (B59) and the posterior variance of the right-hand side of equation (B58). These posterior moments are computed from the posterior draws of δ, which are described in Section B2.1.

References

Carter, Chris K., and Robert Kohn, 1994, On Gibbs sampling for state space models, Biometrika 81, 541-553.

Frühwirth-Schnatter, Sylvia, 1994, Data augmentation and dynamic linear models, Journal of Time Series Analysis 15, 183-202.

Pástor, Ľuboš, and Robert F. Stambaugh, 2009, Predictive systems: Living with imperfect predictors, Journal of Finance, forthcoming.

West, Mike, and Jeff Harrison, 1997, Bayesian Forecasting and Dynamic Models (Springer-Verlag, New York, NY).

Zellner, Arnold, 1971, An Introduction to Bayesian Inference in Econometrics (Wiley, New York, NY).
