Estimation of Vector Autoregressive Processes


1 Estimation of Vector Autoregressive Processes
Based on Chapter 3 of the book by H. Lütkepohl: New Introduction to Multiple Time Series Analysis
Yordan Mahmudiev, Pavol Majher
December 13th, 2011

2 Contents
1 Introduction
2 Multivariate Least Squares Estimation: Asymptotic Properties of the LSE; Example; Small Sample Properties of the LSE
3 Least Squares Estimation with Mean-Adjusted Data: Estimation when the process mean is known; Estimation when the process mean is unknown
4 The Yule-Walker Estimator
5 Maximum Likelihood Estimation: The Likelihood Function; The ML Estimators; Properties of the ML Estimator

3 Introduction
Basic assumptions:
- $y_t = (y_{1t}, \ldots, y_{Kt})' \in \mathbb{R}^K$; the available time series $y_1, \ldots, y_T$ is known to be generated by a stationary, stable VAR($p$) process
  $y_t = \nu + A_1 y_{t-1} + \cdots + A_p y_{t-p} + u_t$   (1)
  where
- $\nu = (\nu_1, \ldots, \nu_K)'$ is a $K \times 1$ vector of intercept terms
- $A_i$ are $K \times K$ coefficient matrices
- $u_t$ is white noise with nonsingular covariance matrix $\Sigma_u$
- moreover, $p$ presample values for each variable, $y_{-p+1}, \ldots, y_0$, are assumed to be available
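
As a concrete illustration of the model above, the following Python sketch (numpy assumed; not part of the original slides) simulates a stable VAR(p). The function name simulate_var and the bivariate parameter values are illustrative assumptions, not taken from the slides.

```python
import numpy as np

def simulate_var(nu, A, Sigma_u, T, burn=200, seed=0):
    """Simulate T observations (plus p presample values) from a stable VAR(p).

    nu      : (K,) intercept vector
    A       : list of p coefficient matrices, each (K, K)
    Sigma_u : (K, K) white noise covariance matrix
    """
    rng = np.random.default_rng(seed)
    K, p = len(nu), len(A)
    chol = np.linalg.cholesky(Sigma_u)
    y = np.zeros((burn + p + T, K))
    for t in range(p, burn + p + T):
        u_t = chol @ rng.standard_normal(K)
        y[t] = nu + sum(A[i] @ y[t - 1 - i] for i in range(p)) + u_t
    return y[burn:]          # rows: y_{-p+1}, ..., y_0, y_1, ..., y_T

# illustrative bivariate VAR(1) (values chosen only for stability, not from the slides)
nu = np.array([0.02, 0.03])
A1 = np.array([[0.5, 0.1], [0.4, 0.5]])
Sigma_u = np.diag([1.0, 0.5])
data = simulate_var(nu, [A1], Sigma_u, T=100)
```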

4 Multivariate Least Squares Estimation: Notation
$Y := (y_1, \ldots, y_T)$   ($K \times T$)
$B := (\nu, A_1, \ldots, A_p)$   ($K \times (Kp+1)$)
$Z_t := (1, y_t', y_{t-1}', \ldots, y_{t-p+1}')'$   ($(Kp+1) \times 1$)
$Z := (Z_0, \ldots, Z_{T-1})$   ($(Kp+1) \times T$)
$U := (u_1, \ldots, u_T)$   ($K \times T$)
$y := \operatorname{vec}(Y)$   ($KT \times 1$)
$\beta := \operatorname{vec}(B)$   ($(K^2 p + K) \times 1$)
$b := \operatorname{vec}(B')$   ($(K^2 p + K) \times 1$)
$u := \operatorname{vec}(U)$   ($KT \times 1$)

5 Multivariate Least Squares Estimation: Estimation (1)
- Using this notation, the VAR(p) model (1) can be written as $Y = BZ + U$.
- Applying the vec operator and the Kronecker product we obtain
  $\operatorname{vec}(Y) = \operatorname{vec}(BZ) + \operatorname{vec}(U) = (Z' \otimes I_K)\operatorname{vec}(B) + \operatorname{vec}(U)$,
  which is equivalent to $y = (Z' \otimes I_K)\beta + u$.
- Note that the covariance matrix of $u$ is $I_T \otimes \Sigma_u$.

6 Multivariate Least Squares Estimation: Estimation (2)
- Multivariate LS estimation (or GLS estimation) of $\beta$ minimizes
  $S(\beta) = u'(I_T \otimes \Sigma_u)^{-1}u = \left[y - (Z' \otimes I_K)\beta\right]'(I_T \otimes \Sigma_u^{-1})\left[y - (Z' \otimes I_K)\beta\right]$.
- Note that
  $S(\beta) = y'(I_T \otimes \Sigma_u^{-1})y + \beta'(ZZ' \otimes \Sigma_u^{-1})\beta - 2\beta'(Z \otimes \Sigma_u^{-1})y$.
- The first order conditions
  $\partial S(\beta)/\partial\beta = 2(ZZ' \otimes \Sigma_u^{-1})\beta - 2(Z \otimes \Sigma_u^{-1})y = 0$
  after a simple algebraic exercise yield the LS estimator
  $\hat{\beta} = \left((ZZ')^{-1}Z \otimes I_K\right)y$.

7 Multivariate Least Squares Estimation: Estimation (3)
- The Hessian of $S(\beta)$, $\partial^2 S/\partial\beta\,\partial\beta' = 2(ZZ' \otimes \Sigma_u^{-1})$, is positive definite, so $\hat{\beta}$ is the minimizing vector.
- The LS estimator can be written in different ways:
  $\hat{\beta} = \beta + \left((ZZ')^{-1}Z \otimes I_K\right)u = \operatorname{vec}\!\left(YZ'(ZZ')^{-1}\right)$.
- Another possible representation is $\hat{b} = \left(I_K \otimes (ZZ')^{-1}Z\right)\operatorname{vec}(Y')$, from which we can see that multivariate LS estimation is equivalent to OLS estimation of each of the $K$ equations of (1).
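
A minimal numerical sketch of $\hat{B} = YZ'(ZZ')^{-1}$ and of its equivalence to equation-by-equation OLS, continuing the simulation sketch above (numpy assumed; var_ls is a hypothetical helper name, not from the slides).

```python
import numpy as np

def var_ls(data, p):
    """Multivariate LS estimator B_hat = Y Z' (Z Z')^{-1} for a VAR(p) with intercept.

    data : ((T + p) x K) array containing y_{-p+1}, ..., y_T (presample values first)
    """
    K = data.shape[1]
    T = data.shape[0] - p
    Y = data[p:].T                                   # K x T, columns y_1, ..., y_T
    # column t of Z is Z_t = (1, y_t', y_{t-1}', ..., y_{t-p+1}')', t = 0, ..., T-1
    Z = np.vstack([np.ones((1, T))] +
                  [data[p - i:p - i + T].T for i in range(1, p + 1)])   # (Kp+1) x T
    B_hat = Y @ Z.T @ np.linalg.inv(Z @ Z.T)         # K x (Kp+1): (nu_hat, A_1_hat, ..., A_p_hat)
    U_hat = Y - B_hat @ Z                            # K x T residual matrix
    return B_hat, U_hat, Y, Z

B_hat, U_hat, Y, Z = var_ls(data, p=1)

# equation-by-equation OLS reproduces the rows of B_hat
b_eq0, *_ = np.linalg.lstsq(Z.T, Y[0], rcond=None)
assert np.allclose(b_eq0, B_hat[0])
```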

8 Multivariate Least Squares Estimation / Asymptotic Properties of the LSE: Asymptotic Properties (1)
Definition (standard white noise). A white noise process $u_t = (u_{1t}, \ldots, u_{Kt})'$ is called standard white noise if the $u_t$ are continuous random vectors satisfying $E(u_t) = 0$, $\Sigma_u = E(u_t u_t')$ is nonsingular, $u_t$ and $u_s$ are independent for $s \neq t$, and $E\,|u_{it}u_{jt}u_{kt}u_{mt}| \le c$ for $i, j, k, m = 1, \ldots, K$ and all $t$, for some finite constant $c$.
We need this property as a sufficient condition for the following results:
- $\Gamma := \operatorname{plim}\, ZZ'/T$ exists and is nonsingular
- $\frac{1}{\sqrt{T}}\sum_{t=1}^{T}\operatorname{vec}(u_t Z_{t-1}') = \frac{1}{\sqrt{T}}(Z \otimes I_K)u \xrightarrow{d} N(0, \Gamma \otimes \Sigma_u)$

9 Multivariate Least Squares Estimation / Asymptotic Properties of the LSE: Asymptotic Properties (2)
The above conditions provide for consistency and asymptotic normality of the LS estimator.
Proposition (Asymptotic Properties of the LS Estimator). Let $y_t$ be a stable, $K$-dimensional VAR(p) process with standard white noise residuals and let $\hat{B}$ be the LS estimator of the VAR coefficients $B$. Then
$\hat{B} \xrightarrow{p} B$
and
$\sqrt{T}(\hat{\beta} - \beta) = \sqrt{T}\operatorname{vec}(\hat{B} - B) \xrightarrow{d} N(0, \Gamma^{-1} \otimes \Sigma_u)$.

10 Multivariate Least Squares Estimation / Asymptotic Properties of the LSE: Asymptotic Properties (3)
Proposition (Asymptotic Properties of the White Noise Covariance Matrix Estimators). Let $y_t$ be a stable, $K$-dimensional VAR(p) process with standard white noise residuals and let $\bar{B}$ be an estimator of the VAR coefficients $B$ so that $\sqrt{T}\operatorname{vec}(\bar{B} - B)$ converges in distribution. Furthermore suppose that
$\bar{\Sigma}_u = \frac{(Y - \bar{B}Z)(Y - \bar{B}Z)'}{T - c}$,
where $c$ is a fixed constant. Then
$\sqrt{T}\left(\bar{\Sigma}_u - UU'/T\right) \xrightarrow{p} 0$.

11 Multivariate Least Squares Estimation / Example: Example (1)
- three-dimensional system, data for West Germany
- fixed investment ($y_1$)
- disposable income ($y_2$)
- consumption expenditures ($y_3$)

12 Multivariate Least Squares Estimation / Example: Example (2)
- assumption: data generated by a VAR(2) process
- [table of LS estimates $\hat{\nu}$, $\hat{A}_1$, $\hat{A}_2$ not reproduced]
- stability of the estimated process is satisfied, since all roots of the polynomial $\det(I_3 - \hat{A}_1 z - \hat{A}_2 z^2)$ have modulus greater than 1

13 Multivariate Least Squares Estimation / Example: Example (3)
- we can calculate the matrix of t-ratios [not reproduced]
- these quantities can be compared with critical values from a t-distribution with d.f. $= KT - K^2p - K = 198$ or d.f. $= T - Kp - 1 = 66$
- for a two-tailed test with significance level 5% we get critical values of approximately $\pm 2$ in both cases
- apparently several coefficients are not significant, so the model contains unnecessarily many free parameters
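
Such t-ratios can be computed from the LS output as in the following hedged sketch, which continues the code above on the simulated data (not the West German data of the example). It uses the covariance estimator with divisor $T - c$ and the estimated covariance $(ZZ')^{-1} \otimes \hat{\Sigma}_u$ of $\hat{\beta}$; the choice $c = Kp + 1$ is one common convention, matching d.f. $= T - Kp - 1$.

```python
import numpy as np

# continuing from the LS sketch above: B_hat, U_hat, Z are assumed available
K, T = U_hat.shape
Kp1 = Z.shape[0]                                   # Kp + 1 regressors per equation

# white noise covariance estimator with degrees-of-freedom correction c = Kp + 1
Sigma_u_hat = U_hat @ U_hat.T / (T - Kp1)

# estimated covariance of beta_hat = vec(B_hat) is (ZZ')^{-1} (x) Sigma_u_hat, so
# se(B_hat[k, j]) = sqrt(Sigma_u_hat[k, k] * (ZZ')^{-1}[j, j])
ZZ_inv = np.linalg.inv(Z @ Z.T)
se = np.sqrt(np.outer(np.diag(Sigma_u_hat), np.diag(ZZ_inv)))   # K x (Kp+1)
t_ratios = B_hat / se
print(np.round(t_ratios, 2))
```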

14 Multivariate Least Squares Estimation / Small Sample Properties of the LSE: Small Sample Properties
- it is difficult to derive small sample properties of LS estimation analytically
- numerical experiments are used instead, such as the Monte Carlo method
- example: a bivariate VAR(2) process $y_t = \nu + A_1 y_{t-1} + A_2 y_{t-2} + u_t$ with diagonal white noise covariance matrix $\Sigma_u$ [coefficient values not reproduced]
- time series of length $T = 30$ (plus 2 presample values) are generated with $u_t \sim N(0, \Sigma_u)$
(A Monte Carlo sketch of this type of experiment is given below.)
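
A Monte Carlo sketch of the small-sample experiment, reusing the helpers defined above. The slide's exact coefficient values did not survive transcription, so the parameter values below are purely hypothetical placeholders.

```python
import numpy as np

# hypothetical bivariate VAR(2) design (NOT the values used on the slide)
nu = np.array([0.02, 0.03])
A = [np.array([[0.5, 0.1], [0.4, 0.5]]),           # A_1 (hypothetical)
     np.array([[0.0, 0.0], [0.25, 0.0]])]          # A_2 (hypothetical)
Sigma_u = np.diag([1.0, 0.25])                     # hypothetical diagonal Sigma_u
T, p, n_rep = 30, 2, 1000

estimates = []
for r in range(n_rep):
    sim = simulate_var(nu, A, Sigma_u, T=T, seed=r)   # reuses the simulator sketched earlier
    B_hat_r, *_ = var_ls(sim, p=p)
    estimates.append(B_hat_r)
estimates = np.array(estimates)                     # n_rep x K x (Kp+1)

print("Monte Carlo mean of B_hat:\n", estimates.mean(axis=0).round(3))
print("Monte Carlo std. dev. of B_hat:\n", estimates.std(axis=0).round(3))
```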

15 Multivariate Least Squares Estimation / Small Sample Properties of the LSE: Empirical Results
[table of Monte Carlo results not reproduced]

16 Least Squares Estimation with Mean-Adjusted Data / Estimation when the process mean is known: Process with Known Mean (1)
- The process mean $\mu$ is known.
- The mean-adjusted VAR(p) is given by
  $(y_t - \mu) = A_1(y_{t-1} - \mu) + \cdots + A_p(y_{t-p} - \mu) + u_t$.
- One can use LS estimation by defining:
  $Y^0 := (y_1 - \mu, \ldots, y_T - \mu)$
  $A := (A_1, \ldots, A_p)$
  $Y_t^0 := \left((y_t - \mu)', \ldots, (y_{t-p+1} - \mu)'\right)'$
  $X := (Y_0^0, \ldots, Y_{T-1}^0)$

17 Least Squares Estimation with Mean-Adjusted Data / Estimation when the process mean is known: Process with Known Mean (2)
- $y^0 := \operatorname{vec}(Y^0)$, $\alpha := \operatorname{vec}(A)$.
- Then the mean-adjusted VAR(p) can be rewritten as $Y^0 = AX + U$ or $y^0 = (X' \otimes I_K)\alpha + u$, where $u$ is defined as before.
- The LS estimator is
  $\hat{\alpha} = \left((XX')^{-1}X \otimes I_K\right)y^0$ or $\hat{A} = Y^0 X'(XX')^{-1}$.
- If $y_t$ is stable and $u_t$ is white noise, it follows that
  $\sqrt{T}(\hat{\alpha} - \alpha) \xrightarrow{d} N(0, \Sigma_{\hat{\alpha}})$, where $\Sigma_{\hat{\alpha}} = \Gamma_Y(0)^{-1} \otimes \Sigma_u$ with $\Gamma_Y(0) := E\!\left(Y_t^0 Y_t^{0\prime}\right)$.
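
A sketch of the mean-adjusted LS estimator $\hat{A} = Y^0 X'(XX')^{-1}$, assuming the simulated VAR(1) data from the earlier sketch; var_ls_mean_adjusted is a hypothetical helper name, and the "known" mean is taken as the implied process mean $(I_K - A_1)^{-1}\nu$ of that simulation.

```python
import numpy as np

def var_ls_mean_adjusted(data, p, mu):
    """LS estimator A_hat = Y0 X' (X X')^{-1} for mean-adjusted data with known mean mu.

    data : ((T + p) x K) array of y_{-p+1}, ..., y_T;  mu : (K,) known process mean
    """
    d = data - mu                                     # mean-adjusted observations
    K, T = data.shape[1], data.shape[0] - p
    Y0 = d[p:].T                                      # K x T
    X = np.vstack([d[p - i:p - i + T].T for i in range(1, p + 1)])   # Kp x T
    A_hat = Y0 @ X.T @ np.linalg.inv(X @ X.T)         # K x Kp: (A_1_hat, ..., A_p_hat)
    return A_hat

mu_true = np.linalg.solve(np.eye(2) - A1, nu)         # process mean of the simulated VAR(1)
A_hat = var_ls_mean_adjusted(data, p=1, mu=mu_true)
```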

18 Least Squares Estimation with Mean-Adjusted Data / Estimation when the process mean is unknown: Process with Unknown Mean (1)
- Usually the process mean is not known and we have to estimate it:
  $\bar{y} = \frac{1}{T}\sum_{t=1}^{T} y_t$.
- Plugging in each $y_t$ expressed from $(y_t - \mu) = A_1(y_{t-1} - \mu) + \cdots + A_p(y_{t-p} - \mu) + u_t$ and rewriting gives the expression on the next slide.

19 Least Squares Estimation with Mean-Adjusted Data / Estimation when the process mean is unknown: Process with Unknown Mean (2)
$\bar{y} = \mu + A_1\left[\bar{y} + \tfrac{1}{T}(y_0 - y_T) - \mu\right] + \cdots + A_p\left[\bar{y} + \tfrac{1}{T}(y_{-p+1} + \cdots + y_0 - y_{T-p+1} - \cdots - y_T) - \mu\right] + \tfrac{1}{T}\sum_{t=1}^{T} u_t$
(terms such as $y_{-p+1}$ are presample observations).
Equivalently,
$(I_K - A_1 - \cdots - A_p)(\bar{y} - \mu) = \tfrac{1}{T} z_T + \tfrac{1}{T}\sum_{t=1}^{T} u_t$,
where $z_T = \sum_{i=1}^{p} A_i \sum_{j=0}^{i-1} (y_{-j} - y_{T-j})$.

20 Least Squares Estimation with Mean-Adjusted Data / Estimation when the process mean is unknown: Process with Unknown Mean (3)
- Obviously $E\!\left(z_T/\sqrt{T}\right) = \frac{1}{\sqrt{T}} E(z_T) = 0$.
- Moreover, as $y_t$ is stable, $\operatorname{var}\!\left(z_T/\sqrt{T}\right) = \frac{1}{T}\operatorname{Var}(z_T) \to 0$ as $T \to \infty$, so $z_T/\sqrt{T}$ converges to zero in mean square.
- Hence $\sqrt{T}(I_K - A_1 - \cdots - A_p)(\bar{y} - \mu)$ has the same asymptotic distribution as $\frac{1}{\sqrt{T}}\sum_{t=1}^{T} u_t$.
- By the CLT, $\frac{1}{\sqrt{T}}\sum_{t=1}^{T} u_t \xrightarrow{d} N(0, \Sigma_u)$.

21 Least Squares Estimation with Mean-Adjusted Data / Estimation when the process mean is unknown: Process with Unknown Mean (4)
- Therefore, if $y_t$ is stable and $u_t$ is white noise,
  $\sqrt{T}(\bar{y} - \mu) \xrightarrow{d} N(0, \Sigma_{\bar{y}})$ with $\Sigma_{\bar{y}} = (I_K - A_1 - \cdots - A_p)^{-1}\,\Sigma_u\,(I_K - A_1 - \cdots - A_p)'^{-1}$.
- Another way of estimating the mean is obtained from the LS estimator:
  $\hat{\mu} = (I_K - \hat{A}_1 - \cdots - \hat{A}_p)^{-1}\hat{\nu}$.
- These two estimators are asymptotically equivalent; a small numerical comparison is sketched below.
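
A hedged sketch comparing the two mean estimators on the simulated VAR(1) data, reusing B_hat from the LS sketch above (the variable names are mine, not from the slides).

```python
import numpy as np

# two asymptotically equivalent estimators of the process mean
p, K = 1, B_hat.shape[0]
y_bar = data[p:].mean(axis=0)                        # sample mean of y_1, ..., y_T

nu_hat = B_hat[:, 0]                                 # intercept estimate
A_sum = sum(B_hat[:, 1 + i * K:1 + (i + 1) * K] for i in range(p))
mu_hat = np.linalg.solve(np.eye(K) - A_sum, nu_hat)  # (I_K - A_1_hat - ... - A_p_hat)^{-1} nu_hat

print("y_bar :", y_bar.round(4))
print("mu_hat:", mu_hat.round(4))                    # typically close to y_bar
```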

22 Least Squares Estimation with Mean-Adjusted Data / Estimation when the process mean is unknown: Process with Unknown Mean (5)
- Replacing $\mu$ with $\bar{y}$ in the vectors and matrices from before, e.g. $\hat{Y}^0 := (y_1 - \bar{y}, \ldots, y_T - \bar{y})$, gives the corresponding LS estimator
  $\hat{\alpha} = \left((\hat{X}\hat{X}')^{-1}\hat{X} \otimes I_K\right)\hat{y}^0$.
- This estimator is asymptotically equivalent to the LS estimator for a process with known mean:
  $\sqrt{T}(\hat{\alpha} - \alpha) \xrightarrow{d} N\!\left(0, \Gamma_Y(0)^{-1} \otimes \Sigma_u\right)$ with $\Gamma_Y(0) := E\!\left(Y_t^0 Y_t^{0\prime}\right)$.

23 The Yule-Walker Estimator: The Yule-Walker Estimator (1)
- Recall from the lecture slides that for a VAR(1) it holds that $A_1 = \Gamma_y(1)\Gamma_y(0)^{-1}$, and in general $\Gamma_y(h) = A_1\Gamma_y(h-1) = A_1^h\,\Gamma_y(0)$.
- Extending to VAR(p):
  $\Gamma_y(h) = [A_1, \ldots, A_p]\begin{pmatrix}\Gamma_y(h-1)\\ \vdots\\ \Gamma_y(h-p)\end{pmatrix}$,
  or, collecting $h = 1, \ldots, p$,
  $[\Gamma_y(1), \ldots, \Gamma_y(p)] = [A_1, \ldots, A_p]\begin{pmatrix}\Gamma_y(0) & \cdots & \Gamma_y(p-1)\\ \vdots & \ddots & \vdots\\ \Gamma_y(-p+1) & \cdots & \Gamma_y(0)\end{pmatrix}$.

24 The Yule-Walker Estimator: The Yule-Walker Estimator (2)
- In compact form, $[\Gamma_y(1), \ldots, \Gamma_y(p)] = A\,\Gamma_Y(0)$. Hence $A = [\Gamma_y(1), \ldots, \Gamma_y(p)]\,\Gamma_Y(0)^{-1}$.
- If $p$ presample observations are available, the mean $\mu$ can be estimated by
  $\bar{y} = \frac{1}{T+p}\sum_{t=-p+1}^{T} y_t$.
- Then
  $\hat{\Gamma}_y(h) = \frac{1}{T+p-h}\sum_{t=-p+h+1}^{T} (y_t - \bar{y})(y_{t-h} - \bar{y})'$.
A code sketch of this estimator follows.
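
A minimal sketch of the Yule-Walker estimator on the simulated data (numpy assumed; var_yule_walker is a hypothetical helper name, and the divisors follow the formulas reconstructed above).

```python
import numpy as np

def var_yule_walker(data, p):
    """Yule-Walker estimator A_hat = [G(1), ..., G(p)] Gamma_Y(0)^{-1} of a VAR(p)."""
    Tp, K = data.shape                                 # Tp = T + p observations
    y_bar = data.mean(axis=0)                          # uses all T + p observations
    d = data - y_bar

    def gamma(h):                                      # Gamma_hat_y(h), h >= 0
        return sum(np.outer(d[t], d[t - h]) for t in range(h, Tp)) / (Tp - h)

    G = [gamma(h) for h in range(p + 1)]
    # Kp x Kp block matrix Gamma_Y(0) with (i, j) block Gamma_y(j - i), Gamma_y(-h) = Gamma_y(h)'
    GY0 = np.block([[G[j - i] if j >= i else G[i - j].T for j in range(p)]
                    for i in range(p)])
    G1p = np.hstack([G[h] for h in range(1, p + 1)])   # [Gamma_y(1), ..., Gamma_y(p)]
    return G1p @ np.linalg.inv(GY0)                    # (A_1_hat, ..., A_p_hat)

A_yw = var_yule_walker(data, p=1)
```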

25 The Yule-Walker Estimator: The Yule-Walker Estimator (3)
- The Yule-Walker estimator has the same asymptotic properties as the LS estimator for stable VAR processes. However, it can be less attractive in small samples.
- The following example shows that asymptotically equivalent estimators can give different results in small samples (here $T = 73$):
  $\bar{y} = \ldots$,  $\hat{\mu} = \left(I_3 - \hat{A}_1 - \hat{A}_2\right)^{-1}\hat{\nu} = \ldots$ [numerical values not reproduced]

26 The Yule-Walker Estimator: The Yule-Walker Estimator (4)
$\hat{A} = (\hat{A}_1, \hat{A}_2) = \ldots$,  $\hat{A}_{YW} = \ldots$ [numerical values not reproduced]

27 Maximum Likelihood Estimation / The Likelihood Function: Maximum Likelihood Estimation
- Assume that the VAR(p) is Gaussian, i.e.
  $u = \operatorname{vec}(U) = \begin{pmatrix} u_1 \\ \vdots \\ u_T \end{pmatrix} \sim N(0, I_T \otimes \Sigma_u)$.
- The probability density of $u$ is
  $f_u(u) = \frac{1}{(2\pi)^{KT/2}}\,|I_T \otimes \Sigma_u|^{-1/2}\exp\!\left[-\tfrac{1}{2}\,u'(I_T \otimes \Sigma_u)^{-1}u\right]$.

28 Maximum Likelihood Estimation / The Likelihood Function: The Log-Likelihood Function
- From the probability density of $u$, a probability density for $y := \operatorname{vec}(Y)$, $f_y(y)$, can be derived.
- After some manipulation the log-likelihood function is given by
  $\ln l(\mu, \alpha, \Sigma_u) = -\frac{KT}{2}\ln 2\pi - \frac{T}{2}\ln|\Sigma_u| - \frac{1}{2}\operatorname{tr}\!\left[(Y^0 - AX)'\,\Sigma_u^{-1}\,(Y^0 - AX)\right]$.
- From $\partial\ln l/\partial\mu$, $\partial\ln l/\partial\alpha$, and $\partial\ln l/\partial\Sigma_u$ we get the system of normal equations, which can be solved for the estimators.
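
As a check of the log-likelihood expression above, the following hedged sketch evaluates it for given values of $(\mu, A, \Sigma_u)$ on the simulated data; the ML estimators are the values that maximize this criterion. The helper name and the plugged-in estimates (from the earlier sketches) are mine, not from the slides.

```python
import numpy as np

def var_log_likelihood(data, p, mu, A, Sigma_u):
    """Gaussian log-likelihood of a VAR(p) in mean-adjusted form (a minimal sketch).

    A : (K x Kp) matrix (A_1, ..., A_p);  mu : (K,) mean;  Sigma_u : (K x K) covariance.
    """
    d = data - mu
    K, T = data.shape[1], data.shape[0] - p
    Y0 = d[p:].T                                                     # K x T
    X = np.vstack([d[p - i:p - i + T].T for i in range(1, p + 1)])   # Kp x T
    R = Y0 - A @ X                                                   # residual matrix
    Sinv = np.linalg.inv(Sigma_u)
    _, logdet = np.linalg.slogdet(Sigma_u)
    return (-K * T / 2 * np.log(2 * np.pi)
            - T / 2 * logdet
            - 0.5 * np.trace(R.T @ Sinv @ R))

ll = var_log_likelihood(data, p=1, mu=mu_hat, A=A_yw, Sigma_u=Sigma_u_hat)
print("log-likelihood at the plugged-in estimates:", round(ll, 2))
```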

29 Maximum Likelihood Estimation / The ML Estimators: The ML Estimators
The three ML estimators:
$\tilde{\mu} = \left(I_K - \sum_{i=1}^{p}\tilde{A}_i\right)^{-1}\frac{1}{T}\sum_{t=1}^{T}\left(y_t - \sum_{i=1}^{p}\tilde{A}_i y_{t-i}\right)$
$\tilde{\alpha} = \left((\tilde{X}\tilde{X}')^{-1}\tilde{X} \otimes I_K\right)\tilde{y}^0$, where $\tilde{X}$ and $\tilde{y}^0$ denote $X$ and $y^0$ with $\mu$ replaced by $\tilde{\mu}$
$\tilde{\Sigma}_u = \frac{1}{T}\left(\tilde{Y}^0 - \tilde{A}\tilde{X}\right)\left(\tilde{Y}^0 - \tilde{A}\tilde{X}\right)'$

30 Maximum Likelihood Estimation / Properties of the ML Estimator: Properties of the ML Estimator (1)
The estimators are consistent and asymptotically normally distributed:
$\sqrt{T}\begin{pmatrix}\tilde{\mu} - \mu\\ \tilde{\alpha} - \alpha\\ \tilde{\sigma} - \sigma\end{pmatrix} \xrightarrow{d} N\!\left(0, \begin{pmatrix}\Sigma_{\tilde{\mu}} & 0 & 0\\ 0 & \Sigma_{\tilde{\alpha}} & 0\\ 0 & 0 & \Sigma_{\tilde{\sigma}}\end{pmatrix}\right)$,
where $\sigma = \operatorname{vech}(\Sigma_u)$ and
$\Sigma_{\tilde{\mu}} = \left(I_K - \sum_{i=1}^{p} A_i\right)^{-1}\Sigma_u\left(I_K - \sum_{i=1}^{p} A_i\right)'^{-1}$.

31 Maximum Likelihood Estimation / Properties of the ML Estimator: Properties of the ML Estimator (2)
$\Sigma_{\tilde{\alpha}} = \Gamma_Y(0)^{-1} \otimes \Sigma_u$
$\Sigma_{\tilde{\sigma}} = 2 D_K^{+}(\Sigma_u \otimes \Sigma_u)\left(D_K^{+}\right)'$,
where $D_K$ is the duplication matrix, given by $\operatorname{vec}(\Sigma_u) = D_K \operatorname{vech}(\Sigma_u)$, and $D_K^{+}$ is its Moore-Penrose generalized inverse. For $K = 3$,
$\sigma = \operatorname{vech}(\Sigma_u) = \operatorname{vech}\begin{pmatrix}\sigma_{11} & \sigma_{12} & \sigma_{13}\\ \sigma_{21} & \sigma_{22} & \sigma_{23}\\ \sigma_{31} & \sigma_{32} & \sigma_{33}\end{pmatrix} = \begin{pmatrix}\sigma_{11}\\ \sigma_{21}\\ \sigma_{31}\\ \sigma_{22}\\ \sigma_{32}\\ \sigma_{33}\end{pmatrix}$.
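
A small sketch of the duplication matrix $D_K$ and of the covariance formula $\Sigma_{\tilde{\sigma}} = 2 D_K^{+}(\Sigma_u \otimes \Sigma_u)(D_K^{+})'$, using an arbitrary illustrative symmetric matrix in place of $\Sigma_u$ (numpy assumed; duplication_matrix is a helper name of my own).

```python
import numpy as np

def duplication_matrix(K):
    """D_K with vec(S) = D_K vech(S) for any symmetric K x K matrix S."""
    rows = []
    for j in range(K):             # column index of S (vec stacks columns)
        for i in range(K):         # row index of S
            a, b = (i, j) if i >= j else (j, i)              # map to lower-triangular entry
            pos = sum(K - m for m in range(b)) + (a - b)     # position of (a, b) in vech(S)
            e = np.zeros(K * (K + 1) // 2)
            e[pos] = 1.0
            rows.append(e)
    return np.array(rows)          # (K^2) x (K(K+1)/2)

K = 3
D = duplication_matrix(K)
D_plus = np.linalg.pinv(D)                                   # Moore-Penrose generalized inverse

# sanity check: vec(S) = D_K vech(S) for a symmetric S
S = np.array([[1.0, 0.2, 0.3], [0.2, 2.0, 0.4], [0.3, 0.4, 3.0]])   # illustrative symmetric matrix
vech_S = np.concatenate([S[j:, j] for j in range(K)])        # (s11, s21, s31, s22, s32, s33)'
assert np.allclose(D @ vech_S, S.flatten(order="F"))

# Sigma_sigma_tilde = 2 D_K^+ (Sigma_u (x) Sigma_u) D_K^+' with Sigma_u taken as S here
Sigma_sigma = 2 * D_plus @ np.kron(S, S) @ D_plus.T
```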

32 Thank you for your attention!
