Simultaneous Equations

1 Motivation and Examples

Now we relax the assumption that E[X′u] = 0. This will require new techniques:

1. Instrumental variables
2. 2- and 3-stage least squares
3. Limited (LIML) and full (FIML) information maximum likelihood

Also, it is no longer clear whether you can estimate the parameters at all; this is the identification problem.

1.1 Example 1: Consumption

c_t = α + β·y_t + u_t;  y_t = c_t + i_t

The second equation is called an identity because there is no error. The two equations together are the structure of the model, or the structural equations. In this model, c_t and y_t are endogenous because they are determined inside the model, and i_t and u_t are exogenous because they are determined outside the model. In the first equation, the constant (1) and y_t are the explanatory variables. If c_{t-1} were in the model, it would be predetermined, and therefore might have many of the characteristics of exogenous variables, but it would not be exogenous.

In this model, there is correlation between y_t (one of the explanatory variables) and u_t (the error). Why? We can write our structure as

c_t = α + β·y_t + u_t = α + β·(c_t + i_t) + u_t
    = α/(1−β) + [β/(1−β)]·i_t + [1/(1−β)]·u_t                       (1)

y_t = c_t + i_t = α/(1−β) + [β/(1−β)]·i_t + i_t + [1/(1−β)]·u_t
    = α/(1−β) + [1/(1−β)]·i_t + [1/(1−β)]·u_t                       (2)

Equations (1) and (2) are called reduced form equations because they represent each endogenous variable as a function of only exogenous variables.
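
The correlation between y_t and u_t is easy to see in a simulation. The sketch below is illustrative only; the parameter values and the names (alpha, beta, and so on) are invented for the example, not part of the notes.

```python
import numpy as np

rng = np.random.default_rng(0)
T, alpha, beta = 100_000, 1.0, 0.6   # hypothetical parameter values
i = rng.normal(0.0, 1.0, T)          # exogenous investment
u = rng.normal(0.0, 1.0, T)          # structural error, sigma_u^2 = 1

# Solve the structure for the endogenous variables (equations (1) and (2)):
c = alpha/(1 - beta) + beta/(1 - beta)*i + u/(1 - beta)
y = c + i

# Theory: Cov(y_t, u_t) = sigma_u^2 / (1 - beta) = 1/0.4 = 2.5
print(np.cov(y, u)[0, 1])            # close to 2.5, so y_t is endogenous
```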

We can write the reduced form equations as

c_t = π_01 + π_11·i_t + v_1t
y_t = π_02 + π_12·i_t + v_2t

where

π_01 = α/(1−β),  π_11 = β/(1−β),  π_02 = α/(1−β),  π_12 = 1/(1−β)

are reduced form parameters and

v_1t = v_2t = [1/(1−β)]·u_t

are reduced form errors. Note the restrictions on the reduced form parameters and errors, such as (but not limited to) π_01 = π_02 and v_1t = v_2t. Also note that we can solve for (possibly nonunique) values of the structural parameters in terms of the reduced form parameters. For example,

β = π_11/π_12;  α = π_01·(1 − π_11/π_12)                            (3)

So a feasible estimation approach is to estimate π using OLS (OLS provides a consistent estimate of π; why?), and then solve for (α̂, β̂) using equation (3). This is called the indirect least squares estimate of (α, β), and it is consistent (why?). However, (α̂, β̂) is biased (why?) and nonunique (why?).
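
As a concrete sketch of indirect least squares, continuing the simulation above (the data and names such as pi_c are mine, for illustration only):

```python
# Indirect least squares: OLS on each reduced form equation, then solve
# for the structural parameters via equation (3).
I = np.column_stack([np.ones(T), i])          # reduced form regressors [1, i_t]
pi_c = np.linalg.lstsq(I, c, rcond=None)[0]   # (pi_01, pi_11)
pi_y = np.linalg.lstsq(I, y, rcond=None)[0]   # (pi_02, pi_12)

beta_ils = pi_c[1] / pi_y[1]                  # beta = pi_11 / pi_12
alpha_ils = pi_c[0] * (1 - beta_ils)          # alpha = pi_01 (1 - pi_11/pi_12)
# nonuniqueness: beta could also be recovered as 1 - 1/pi_y[1]
print(beta_ils, alpha_ils)                    # close to (0.6, 1.0)
```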

If, instead, we used OLS on the structural equation for c_t, we would get (with tildes denoting deviations from sample means)

β̂ = Σ_t ỹ_t·c̃_t / Σ_t ỹ_t² = Σ_t ỹ_t·(β·ỹ_t + ũ_t) / Σ_t ỹ_t² = β + Σ_t ỹ_t·ũ_t / Σ_t ỹ_t².

But ỹ_t = [1/(1−β)]·ĩ_t + [1/(1−β)]·ũ_t from the reduced form equation, so

E[ỹ_t·ũ_t] = E[([1/(1−β)]·ĩ_t + [1/(1−β)]·ũ_t)·ũ_t] = [1/(1−β)]·E[ũ_t²] = σ_u²/(1−β).

Thus,

plim β̂ = β + plim[(1/T)·Σ_t ỹ_t·ũ_t] / plim[(1/T)·Σ_t ỹ_t²]
       = β + [σ_u²/(1−β)] / [(σ_i² + σ_u²)/(1−β)²]
       = β + (1−β)·σ_u² / (σ_i² + σ_u²) > β,

so OLS on the structural equation is inconsistent (and, for 0 < β < 1, biased upward in the limit).
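
A quick numerical check of this probability limit (again a hypothetical, self-contained simulation, not part of the original notes):

```python
import numpy as np

rng = np.random.default_rng(1)
T, alpha, beta = 200_000, 1.0, 0.6
sigma_i, sigma_u = 1.0, 1.0
i = rng.normal(0.0, sigma_i, T)
u = rng.normal(0.0, sigma_u, T)
y = alpha/(1 - beta) + (i + u)/(1 - beta)     # reduced form for y_t
c = alpha + beta*y + u                        # structural equation

y_dm, c_dm = y - y.mean(), c - c.mean()       # deviations from means
beta_ols = (y_dm @ c_dm) / (y_dm @ y_dm)

plim = beta + (1 - beta)*sigma_u**2 / (sigma_i**2 + sigma_u**2)
print(beta_ols, plim)                         # both close to 0.8
```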

1.2 Example 2: Bananas

Consider the structural model below describing the supply and demand for bananas:

q_t^d = α + β·p_t + e_t
q_t^s = γ + δ·p_t + u_t
q_t^d = q_t^s

We never observe supply or demand; we observe only the equilibrium price and quantity. Consider the scatter plot of combinations of equilibrium prices and quantities: it is not at all obvious how to fit a supply and a demand curve to them, especially when both curves are moving around over time because of the errors e_t and u_t. We analyze this problem more rigorously below, so we don't slip on the banana problem.

First define q_t = q_t^d = q_t^s, and rewrite the structural equations as

q_t = α + β·p_t + e_t
q_t = γ + δ·p_t + u_t

If we regress q_t on p_t, what are we estimating? Consider the reduced form of the model:

α + β·p_t + e_t = γ + δ·p_t + u_t

⟹  p_t = (γ−α)/(β−δ) + (u_t − e_t)/(β−δ)
    q_t = α + β·(γ−α)/(β−δ) + e_t + β·(u_t − e_t)/(β−δ)

We can estimate

π_p = (γ−α)/(β−δ)  and  π_q = α + β·(γ−α)/(β−δ).

But knowledge of π_p and π_q will not provide us with any information about the structural parameters. We say that none of the structural parameters is identified.

Now consider changing the structural model to

q_t^d = α + β·p_t + θ·y_t + e_t
q_t^s = γ + δ·p_t + u_t
q_t^d = q_t^s

In the new model, we know that the demand curve is shifting in a way predicted by the exogenous variable y_t. This is going to allow us to plot out a supply curve. However, we still won't be able to say much about the demand curve. Consider the problem more rigorously. First, as before, write

q_t = α + β·p_t + θ·y_t + e_t
q_t = γ + δ·p_t + u_t

Now, when we solve for the reduced form, we get

p_t = (α−γ)/(δ−β) + [θ/(δ−β)]·y_t + (e_t − u_t)/(δ−β)
    = π_0p + π_1p·y_t + v_pt

q_t = (δα − γβ)/(δ−β) + [δθ/(δ−β)]·y_t + (δ·e_t − β·u_t)/(δ−β)
    = π_0q + π_1q·y_t + v_qt

So, given consistent estimates of π_1p and π_1q (e.g., from OLS), we can estimate

δ̂ = π̂_1q / π̂_1p,

and thus δ is identified.
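
A simulation sketch of this result (all parameter values are hypothetical, and the demand shifter y_t is drawn as an independent normal):

```python
import numpy as np

rng = np.random.default_rng(2)
T = 200_000
alpha, beta, theta = 10.0, -1.0, 2.0   # demand: q = alpha + beta*p + theta*y + e
gamma, delta = 1.0, 1.5                # supply: q = gamma + delta*p + u
y = rng.normal(0.0, 1.0, T)            # observed demand shifter
e = rng.normal(0.0, 1.0, T)
u = rng.normal(0.0, 1.0, T)

# Market equilibrium (the reduced form):
p = (alpha - gamma)/(delta - beta) + theta/(delta - beta)*y + (e - u)/(delta - beta)
q = gamma + delta*p + u

Y = np.column_stack([np.ones(T), y])
pi_p = np.linalg.lstsq(Y, p, rcond=None)[0]   # (pi_0p, pi_1p)
pi_q = np.linalg.lstsq(Y, q, rcond=None)[0]   # (pi_0q, pi_1q)

delta_hat = pi_q[1] / pi_p[1]                 # supply slope, close to 1.5
gamma_hat = pi_q[0] - delta_hat*pi_p[0]       # supply intercept, close to 1.0
print(delta_hat, gamma_hat)
```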

Similarly, we can set

γ̂ = π̂_0q − δ̂·π̂_0p,

so γ is identified as well. The students should try at home the structural model

q_t^d = α + β·p_t + θ·y_t + e_t
q_t^s = γ + δ·p_t + φ·z_t + u_t
q_t^d = q_t^s

Note that, if a parameter is not identified, then no amount of data will allow us to estimate it. Thus, before we can even discuss estimation, we must first determine identification. Over the rest of this topic, we will discuss:

1. Concepts and Terminology
2. Identification
3. Estimation
4. Properties of Estimators

2 Concepts and Terminology

The general structural specification of our model is

B·y_t = C·x_t + u_t,

where B is n×n, y_t is n×1, C is n×m, x_t is m×1, and u_t is n×1. By stacking the observations for t = 1, ..., T next to each other, we get

B·Y = C·X + U,

where Y is n×T, X is m×T, and U is n×T. We assume that:

1. B is nonsingular;
2. All diagonal elements of B are 1 (scaling convention);
3. u_t ~ iid(0, Ω) with Ω positive definite;
4. x_t are exogenous;
5. There are (a priori, from theory) restrictions on B and C.

The reduced form of the model is

Y = B⁻¹C·X + B⁻¹U = A·X + V.
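
In matrix terms, moving from the structure to the reduced form is just a linear solve. A minimal sketch (the example matrices are made up for illustration):

```python
import numpy as np

# Hypothetical 2-equation structure B y_t = C x_t + u_t (values made up):
B = np.array([[1.0, -0.5],
              [0.4,  1.0]])          # unit diagonal (scaling convention)
C = np.array([[2.0, 0.0, 1.0],
              [0.0, 3.0, 1.0]])      # n = 2 endogenous, m = 3 exogenous

# Reduced form coefficients A = B^{-1} C, so that Y = A X + V:
A = np.linalg.solve(B, C)
print(A.shape)   # (2, 3): one reduced form equation per endogenous variable
print(A)
```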

3 Identification

We observe Â, but we want B̂ and Ĉ. There are n² − n unknown parameters in B (n² elements less the n normalized diagonal elements), nm unknown parameters in C, and nm known parameters in Â. That is, we have nm equations (A = B⁻¹C) to explain n² − n + nm parameters. We can't do it; we need to place restrictions from theory on B and C to reduce the number of unknown parameters.

Possible restrictions:

1. B_ij = 0 or C_ij = 0 (exclusion, or zero, restrictions);
2. Σ_{i,j} γ_ij·B_ij + Σ_{i,j} δ_ij·C_ij = α (linear restrictions);
3. Restrictions on Ω, or nonlinear restrictions.

By far, the most popular restrictions are exclusion restrictions. Things that can happen:

1. No structural parameters are identified;
2. Some structural parameters are identified;
3. Some structural parameters are identified, and there are restrictions on the reduced form parameters (overidentified);
4. All structural parameters are identified, and there are restrictions on the reduced form parameters (overidentified);
5. All structural parameters are identified, with no restrictions on the reduced form parameters (just identified).

We can also discuss identifying an equation: if all of the structural parameters in a structural equation are identified, then the equation is identified.

3.1 Necessary and Sufficient Conditions for Identification

Let P = [B C], with rows P_1, P_2, ..., P_n, so that P_i collects the structural coefficients of the ith equation. Write the jth restriction on the ith equation as

P_i·φ_ij = 0.

Assume there are R_i restrictions on the ith equation, and let Φ_i = [φ_i1, φ_i2, ..., φ_iR_i]. Then we can write all of the restrictions on the ith equation as

P_i·Φ_i = 0.

Example 1. Consider

y_1t = β_12·y_2t + u_1t
y_2t = γ_21·x_1t + γ_22·x_2t + u_2t

Then

P = [B C] = [  1     −β_12  γ_11  γ_12 ]
            [ −β_21   1     γ_21  γ_22 ]

with restrictions γ_11 = γ_12 = 0 (no exogenous variables in the first equation) and β_21 = 0 (y_1t does not appear in the second equation), i.e.,

Φ_1 = [ 0  0 ]        Φ_2 = [ 1 ]
      [ 0  0 ]              [ 0 ]
      [ 1  0 ]              [ 0 ]
      [ 0  1 ],             [ 0 ].

Theorem 2. The ith equation is identified iff Rank(P·Φ_i) = n − 1.

An implication of this theorem is that a necessary condition for the ith equation to be identified is R_i ≥ n − 1. This is called the order condition. While it is not sufficient for identification, it is necessary and very easy to verify.

Example 3 (continued from Example 1).

P·Φ_1 = [ γ_11  γ_12 ]
        [ γ_21  γ_22 ]

If the restrictions are true, then

P·Φ_1 = [ 0     0    ]
        [ γ_21  γ_22 ],

so Rank(P·Φ_1) = 1 = n − 1 unless γ_21 = γ_22 = 0. (What would it mean for γ_21 = γ_22 = 0?) Therefore the first equation is identified unless γ_21 = γ_22 = 0. Also,

P·Φ_2 = [  1    ]  = [ 1 ]
        [ −β_21 ]    [ 0 ],

whose rank is 1, so the second equation is identified.
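
The rank condition is easy to check numerically once P and Φ_i are written down. A sketch for the example above (arbitrary nonzero values stand in for the unrestricted coefficients):

```python
import numpy as np

n = 2
# P = [B C] for the example, with arbitrary values for the free coefficients
# and the restrictions gamma_11 = gamma_12 = 0 and beta_21 = 0 imposed:
P = np.array([[1.0, -0.8, 0.0, 0.0],
              [0.0,  1.0, 0.7, 0.4]])

Phi1 = np.array([[0, 0], [0, 0], [1, 0], [0, 1]], dtype=float)
Phi2 = np.array([[1], [0], [0], [0]], dtype=float)

for eq, Phi in enumerate([Phi1, Phi2], start=1):
    rank = np.linalg.matrix_rank(P @ Phi)
    print(f"equation {eq}: rank(P Phi) = {rank}, identified = {rank == n - 1}")
```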

Example 4. Consider a crime rate example developed by students in class.

3.1.1 Some Formalism and Proof of the Identification Theorem

Definition 5. A structure of a model is a set of parameters and functions which completely define the stochastic relationships between the endogenous variables and the exogenous variables. We can denote the true structure as (B⁰, C⁰, f⁰), where f is the joint density of the errors.

Definition 6. Two structures, (B⁰, C⁰, f⁰) and (B¹, C¹, f¹), are equivalent if they imply the same joint distribution for the endogenous variables (equivalently, iff they have the same reduced form).

Definition 7. Suppose (B⁰, C⁰, f⁰) is the true structure. Then, if α is a parameter of the model and α = α⁰ in all structures equivalent to (B⁰, C⁰, f⁰), α is identified.

Theorem 8. The structure (B¹, C¹, f¹) is equivalent to (B⁰, C⁰, f⁰) iff there exists a nonsingular matrix D such that B¹ = D·B⁰, C¹ = D·C⁰, and u¹_t has the same distribution as D·u⁰_t.

Proof. (⇐) Assume B¹ = D·B⁰, C¹ = D·C⁰, and u¹_t has the same distribution as D·u⁰_t. Then

(B¹)⁻¹C¹ = (D·B⁰)⁻¹(D·C⁰) = (B⁰)⁻¹C⁰,

and (B¹)⁻¹u¹_t has the same distribution as (D·B⁰)⁻¹D·u⁰_t = (B⁰)⁻¹u⁰_t, so the two structures have the same reduced form.

(⇒) Assume (B⁰, C⁰, f⁰) and (B¹, C¹, f¹) are equivalent structures. Then they have the same reduced form:

(B¹)⁻¹C¹ = (B⁰)⁻¹C⁰  ⟹  C¹ = D·C⁰, where D = B¹(B⁰)⁻¹  ⟹  B¹ = D·B⁰,

and u¹_t has the same distribution as B¹(B⁰)⁻¹u⁰_t = D·u⁰_t. ∎
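
Theorem 8 is the identification problem in a nutshell: premultiplying the structure by any admissible D leaves the reduced form, and hence the distribution of the data, unchanged. A numerical illustration (matrices invented for the purpose):

```python
import numpy as np

# A hypothetical "true" structure (unit diagonal in B0):
B0 = np.array([[ 1.0, -0.6],
               [-0.3,  1.0]])
C0 = np.array([[1.0, 0.0],
               [0.5, 2.0]])

# Premultiply by any nonsingular D to get a new structure:
D = np.array([[1.0, 0.2],
              [0.0, 1.0]])
B1, C1 = D @ B0, D @ C0

# Both structures imply the same reduced form A = B^{-1} C ...
print(np.allclose(np.linalg.solve(B0, C0), np.linalg.solve(B1, C1)))  # True

# ... but B1 violates the unit-diagonal normalization; the normalization
# and the exclusion restrictions are what rule such D out (Theorem 9).
print(np.diag(B1))   # not all ones
```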

Theorem 9. Suppose the true structure is P⁰ = [B⁰ C⁰] and the restriction matrix for the ith equation is Φ_i. Then the ith equation is identified iff Rank(P⁰·Φ_i) = n − 1.

Proof. (⇐) Suppose P¹ = D·P⁰ is an equivalent structure satisfying the restrictions, P¹_i·Φ_i = 0. Then

D_i·(P⁰·Φ_i) = P¹_i·Φ_i = 0,

where D_i is the ith row of D, and

e_i′·(P⁰·Φ_i) = P⁰_i·Φ_i = 0,

where e_i is the vector with 1 in the ith element and 0 everywhere else. So e_i′ and D_i are both elements of the left null space of P⁰·Φ_i. But that null space has dimension 1, because Rank(P⁰·Φ_i) = n − 1. Hence D_i = λ·e_i′ for some scalar constant λ, and the normalization of B (unit diagonal) ensures that λ = 1. Therefore P¹_i = P⁰_i, and the ith equation is identified.

(⇒) Conversely, suppose Rank(P⁰·Φ_i) < n − 1. Then the left null space of P⁰·Φ_i has dimension greater than 1, so it contains some D_i ≠ λ·e_i′ (adding e_i′ if necessary so that its ith element is 1). Use D_i as the ith row of an otherwise-identity matrix D, and define P¹ = D·P⁰. Then P¹ is an equivalent structure satisfying the restrictions on the ith equation, but P¹_i ≠ P⁰_i: if P¹_i = P⁰_i, we would have (D_i − e_i′)·P⁰ = 0 with D_i − e_i′ ≠ 0, so P⁰ would have less than full row rank, which is impossible because B⁰ has full rank (and hence P⁰ = [B⁰ C⁰] has full row rank). So the ith equation is not identified. ∎

4 Instrumental Variables

Consider the structural equation

y = X·β + Q·γ + u,

where y is T×1, X is T×k with E[X′u] ≠ 0 (i.e., X are endogenous explanatory variables), and Q is exogenous. We want to find k regressors, Z, with the properties:

1. plim(Z′X/T) is invertible;
2. plim(Z′u/T) = 0.

Note that an implication of plim(Z′X/T) being invertible is that plim(Z̄′X̄/T) below is invertible. Z satisfying these two conditions are called instruments. Let

Z̄ = [Z Q]

be the set of exogenous variables, let

X̄ = [X Q]

be the set of explanatory variables, and let β̄ = (β′, γ′)′ be the set of structural parameters (ignoring covariance parameters). Then our model can be written as

y = X̄·β̄ + u.

If we premultiply our equation by Z̄′, we get

Z̄′y = Z̄′X̄·β̄ + Z̄′u.

Consider the estimator that solves the orthogonality condition E[Z̄′y − Z̄′X̄·β̄] = 0:

β̂ = (Z̄′X̄)⁻¹·Z̄′y.

This is the instrumental variables estimator of β̄. It is too difficult to derive the finite-sample moments of β̂, and in many cases they do not even exist. The trouble occurs because the X in the "denominator" is endogenous and, therefore, random. But we can derive the asymptotic properties of β̂:

β̂ = (Z̄′X̄)⁻¹Z̄′y = (Z̄′X̄)⁻¹Z̄′(X̄·β̄ + u) = β̄ + (Z̄′X̄)⁻¹Z̄′u,

so

plim β̂ = β̄ + plim(Z̄′X̄/T)⁻¹ · plim(Z̄′u/T) = β̄ + plim(Z̄′X̄/T)⁻¹ · 0 = β̄.
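
A minimal sketch of this estimator on simulated data (instrument strength and parameter values are arbitrary choices of mine, not from the notes):

```python
import numpy as np

rng = np.random.default_rng(3)
T, beta, gamma = 50_000, 0.6, 1.0
z = rng.normal(size=T)                     # excluded exogenous instrument
q = rng.normal(size=T)                     # included exogenous regressor
u = rng.normal(size=T)
x = 0.8*z + 0.5*u + rng.normal(size=T)     # endogenous: correlated with u
y = beta*x + gamma*q + u

Xbar = np.column_stack([x, q])             # explanatory variables [X Q]
Zbar = np.column_stack([z, q])             # exogenous variables  [Z Q]

# IV estimator: beta_hat = (Zbar' Xbar)^{-1} Zbar' y
beta_iv = np.linalg.solve(Zbar.T @ Xbar, Zbar.T @ y)
beta_ols = np.linalg.solve(Xbar.T @ Xbar, Xbar.T @ y)
print(beta_iv)    # close to (0.6, 1.0)
print(beta_ols)   # the OLS coefficient on x is biased upward
```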

Next,

√T·(β̂ − β̄) = √T·(Z̄′X̄)⁻¹Z̄′u = (Z̄′X̄/T)⁻¹·(Z̄′u/√T),

so the asymptotic variance is

plim(Z̄′X̄/T)⁻¹ · plim(Z̄′u·u′Z̄/T) · plim(X̄′Z̄/T)⁻¹.

Note that E[Z̄′u·u′Z̄ | Z̄] = Z̄′(σ²I)Z̄ = σ²·Z̄′Z̄, so plim(Z̄′u·u′Z̄/T) = σ²·plim(Z̄′Z̄/T). Therefore

√T·(β̂ − β̄) →_d N(0, V), with V = σ²·plim(Z̄′X̄/T)⁻¹ · plim(Z̄′Z̄/T) · plim(X̄′Z̄/T)⁻¹.

Choice of instruments:

1. Excluded exogenous variables;
2. Lagged or leading regressors (maybe);
3. Excluded exogenous variables from implicit equations;
4. "Out of thin air" regressors;
5. 2SLS first-stage predictors.

5 Two Stage Least Squares

Two Stage Least Squares (2SLS) is a special case of instrumental variables with some nice optimality properties. Consider the model

y = X·β + Q·γ + u,                                                  (4)

where X (T×k) are endogenous explanatory variables, and let Z (T×k*, with Rank(Z) = k* ≥ k) be a set of instruments. Let Z̄ = [Z Q], X̄ = [X Q], and β̄ = (β′, γ′)′ as before, define the projection matrix P_Z̄ = Z̄(Z̄′Z̄)⁻¹Z̄′ (symmetric and idempotent), and let

X̂_i = P_Z̄·X_i,  X̂ = P_Z̄·X,  and  P_Z̄·X̄ = [X̂ Q]

(the last equality because P_Z̄·Q = Q; Q is part of Z̄). X̂ satisfies the conditions for instruments for X (and [X̂ Q] for X̄) as long as Z satisfies them. Now consider the 2SLS estimator

β̂ = (X̄′P_Z̄·X̄)⁻¹·X̄′P_Z̄·y.

First, this is equivalent to running a regression of y on [X̂ Q] = P_Z̄·X̄. To see this, write equation (4) as y = X̄·β̄ + u; the OLS estimator from that regression is

[(P_Z̄X̄)′(P_Z̄X̄)]⁻¹(P_Z̄X̄)′y = (X̄′P_Z̄X̄)⁻¹X̄′P_Z̄·y = β̂,

using P_Z̄′ = P_Z̄ and P_Z̄·P_Z̄ = P_Z̄. It is also the instrumental variables estimator that uses P_Z̄·X̄ as the instruments for X̄. We can write

β̂ = (X̄′P_Z̄X̄)⁻¹X̄′P_Z̄·(X̄·β̄ + u) = β̄ + (X̄′P_Z̄X̄)⁻¹X̄′P_Z̄·u.

Then

plim β̂ = β̄ + plim(X̄′P_Z̄X̄/T)⁻¹ · plim(X̄′P_Z̄·u/T).

For the first factor, write X̄ = [X Q] and use P_Z̄·Q = Q to partition

X̄′P_Z̄X̄ = [ X′P_Z̄X  X′Q ]
           [ Q′X      Q′Q ],

where each block divided by T has a probability limit; for example,

X′P_Z̄X/T = (X′Z̄/T)·(Z̄′Z̄/T)⁻¹·(Z̄′X/T),

and all of the plim terms on the right exist. For the second factor,

plim X̄′P_Z̄·u/T = [ plim (X′Z̄/T)(Z̄′Z̄/T)⁻¹(Z̄′u/T) ] = [ 0 ]
                  [ plim Q′u/T                     ]   [ 0 ],

because plim(Z̄′u/T) = 0 and plim(Q′u/T) = 0. Hence plim β̂ = β̄: 2SLS is consistent.
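
A two-stage implementation sketch in the spirit of the IV example above (all data simulated; here there are k* = 2 instruments for one endogenous regressor, so 2SLS and simple IV differ):

```python
import numpy as np

rng = np.random.default_rng(4)
T, beta, gamma = 50_000, 0.6, 1.0
z1, z2 = rng.normal(size=T), rng.normal(size=T)   # two excluded instruments
q = rng.normal(size=T)
u = rng.normal(size=T)
x = 0.5*z1 + 0.5*z2 + 0.5*u + rng.normal(size=T)  # endogenous regressor
y = beta*x + gamma*q + u

Zbar = np.column_stack([z1, z2, q])               # [Z Q]
Xbar = np.column_stack([x, q])                    # [X Q]

# Stage 1: project Xbar on Zbar to get the fitted values P_Zbar Xbar
coef = np.linalg.lstsq(Zbar, Xbar, rcond=None)[0]
Xbar_hat = Zbar @ coef

# Stage 2: regress y on the fitted values
beta_2sls = np.linalg.solve(Xbar_hat.T @ Xbar_hat, Xbar_hat.T @ y)
print(beta_2sls)   # close to (0.6, 1.0)
```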

Next,

√T·(β̂ − β̄) = (X̄′P_Z̄X̄/T)⁻¹·(X̄′P_Z̄·u/√T),

and

plim(X̄′P_Z̄X̄/T)⁻¹ · plim(X̄′P_Z̄·u·u′P_Z̄X̄/T) · plim(X̄′P_Z̄X̄/T)⁻¹
 = σ²·plim(X̄′P_Z̄X̄/T)⁻¹ · plim(X̄′P_Z̄X̄/T) · plim(X̄′P_Z̄X̄/T)⁻¹
 = σ²·plim(X̄′P_Z̄X̄/T)⁻¹,

using E[u·u′] = σ²I and P_Z̄·P_Z̄ = P_Z̄. So

√T·(β̂ − β̄) →_d N(0, V), with V = σ²·plim(X̄′P_Z̄X̄/T)⁻¹.

(Discuss testing.)

Relationship between 2SLS and IV:

1. 2SLS is a way to use k* > k instruments optimally.
2. If k* = k, then 2SLS = IV = ILS.

6 3SLS

Basic idea: do 2SLS, use the residuals to estimate Ω consistently, and then use Ω̂ in GLS (a generalization of 2SLS). Details: consider

y_i = X̄_i·β̄_i + u_i,  i = 1, 2, ..., n,

with common instruments Z̄ for each equation. The equation-by-equation 2SLS estimates and residuals are

β̂_i = [X̄_i′Z̄(Z̄′Z̄)⁻¹Z̄′X̄_i]⁻¹·X̄_i′Z̄(Z̄′Z̄)⁻¹Z̄′y_i = (X̄_i′P_Z̄X̄_i)⁻¹·X̄_i′P_Z̄·y_i,

û_i = y_i − X̄_i·β̂_i = [I − X̄_i(X̄_i′P_Z̄X̄_i)⁻¹X̄_i′P_Z̄]·y_i.

Now stack the equations:

[ y_1 ]   [ X̄_1   0   ···   0  ] [ β̄_1 ]   [ u_1 ]
[ y_2 ] = [  0   X̄_2  ···   0  ] [ β̄_2 ] + [ u_2 ]
[  ⋮  ]   [  ⋮         ⋱       ] [  ⋮  ]   [  ⋮  ]
[ y_n ]   [  0    0   ···  X̄_n ] [ β̄_n ]   [ u_n ]

or y = X̄·β̄ + u. Let I_n ⊗ Z̄ be the instruments for X̄. Then

(I_n ⊗ Z̄′)·y = (I_n ⊗ Z̄′)·X̄·β̄ + (I_n ⊗ Z̄′)·u.                       (5)

Note that

Var[(I_n ⊗ Z̄′)·u] = (I_n ⊗ Z̄′)(Ω ⊗ I_T)(I_n ⊗ Z̄) = Ω ⊗ Z̄′Z̄.

Therefore, if we apply GLS to equation (5), we get

β̂ = [X̄′(I_n ⊗ Z̄)(Ω ⊗ Z̄′Z̄)⁻¹(I_n ⊗ Z̄′)X̄]⁻¹ · [X̄′(I_n ⊗ Z̄)(Ω ⊗ Z̄′Z̄)⁻¹(I_n ⊗ Z̄′)y]
  = [X̄′(Ω⁻¹ ⊗ Z̄(Z̄′Z̄)⁻¹Z̄′)X̄]⁻¹ · [X̄′(Ω⁻¹ ⊗ Z̄(Z̄′Z̄)⁻¹Z̄′)y]
  = [X̄′(Ω⁻¹ ⊗ P_Z̄)X̄]⁻¹ · [X̄′(Ω⁻¹ ⊗ P_Z̄)y],

with Ω replaced in practice by the estimate Ω̂ built from the 2SLS residuals û_i. Students should derive the asymptotic properties of 3SLS.
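
A feasible-3SLS sketch of these formulas on simulated data (two equations with correlated errors; Omega_hat comes from the 2SLS residuals, and every name and value here is mine, for illustration). The Kronecker blocks are computed directly, using X_i′·P_Z̄·X_j = X̂_i′·X̂_j:

```python
import numpy as np

rng = np.random.default_rng(5)
T = 20_000
Z = np.column_stack([np.ones(T), rng.normal(size=T), rng.normal(size=T)])
Omega_true = np.array([[1.0, 0.5], [0.5, 1.0]])
U = rng.multivariate_normal([0.0, 0.0], Omega_true, size=T)

# Two equations y_i = x_i beta_i + u_i, each x_i endogenous, common instruments Z
beta_true = np.array([0.6, -0.4])
X = [Z[:, 1] + 0.5*U[:, i] + rng.normal(size=T) for i in range(2)]
y = [beta_true[i]*X[i] + U[:, i] for i in range(2)]

def tsls(Xi, yi):                          # equation-by-equation 2SLS
    Xi_hat = Z @ np.linalg.lstsq(Z, Xi, rcond=None)[0]
    return (Xi_hat @ yi) / (Xi_hat @ Xi)

b2 = [tsls(X[i], y[i]) for i in range(2)]
resid = np.column_stack([y[i] - X[i]*b2[i] for i in range(2)])
Omega_hat = resid.T @ resid / T
W = np.linalg.inv(Omega_hat)               # Omega_hat^{-1}

Xhat = [Z @ np.linalg.lstsq(Z, X[i], rcond=None)[0] for i in range(2)]
# Blocks of Xbar'(Omega^{-1} (x) P_Z)Xbar and Xbar'(Omega^{-1} (x) P_Z)y:
G = np.array([[W[i, j]*(Xhat[i] @ Xhat[j]) for j in range(2)] for i in range(2)])
g = np.array([sum(W[i, j]*(Xhat[i] @ y[j]) for j in range(2)) for i in range(2)])
beta_3sls = np.linalg.solve(G, g)
print(beta_3sls)                           # close to (0.6, -0.4)
```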

7 FIML

The basic idea in FIML is to write down the log likelihood function of the full system and maximize it over the parameters of interest. We describe the details later. Advantages and disadvantages of FIML:

1. FIML allows for cross-equation restrictions.
2. FIML allows for nonlinearity more naturally.
3. FIML is more sensitive to misspecification of the model.
4. Small-sample bias is larger for FIML than for IV, and larger for IV than for OLS.
5. All methods trade off efficiency against robustness.
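
For reference, a sketch of the objective FIML maximizes: this is the standard Gaussian system likelihood for B·y_t = C·x_t + u_t, stated under the added assumption u_t ~ N(0, Ω); it is not a formula taken from these notes.

```latex
\log L(B, C, \Omega)
  = -\frac{nT}{2}\log(2\pi) + T\log\lvert\det B\rvert
    - \frac{T}{2}\log\lvert\Omega\rvert
    - \frac{1}{2}\sum_{t=1}^{T} (B y_t - C x_t)'\,\Omega^{-1}\,(B y_t - C x_t)
```

The Jacobian term T·log|det B| comes from the change of variables u_t = B·y_t − C·x_t and is what distinguishes the system likelihood from equation-by-equation least squares.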
