ECON 5360 Class Notes Systems of Equations


Introduction

Here, we consider two types of systems of equations. The first type is a system of Seemingly Unrelated Regressions (SUR), introduced by Arnold Zellner (1962). Sets of equations with distinct dependent and independent variables are often linked together by some common unmeasurable factor. Examples include systems of factor demands by a particular firm, agricultural supply-response equations, and capital-asset pricing models. The methods presented here can also be thought of as an alternative estimation framework for panel-data models. The second type is a Simultaneous Equations System, which involves the interaction of multiple endogenous variables within a system of equations. Estimating the parameters of such a system is typically not as simple as doing OLS equation-by-equation. Issues such as identification (whether the parameters are even estimable) and endogeneity bias are the primary topics in this section.

Seemingly Unrelated Regressions (SUR) Model

Consider the following set of equations

$$y_i = X_i \beta_i + \epsilon_i \qquad (1)$$

for $i = 1, \ldots, M$, where the matrices $y_i$, $X_i$ and $\beta_i$ are of dimension $(T \times 1)$, $(T \times K_i)$ and $(K_i \times 1)$, respectively. The stacked system in matrix form is

$$\begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_M \end{bmatrix} = \begin{bmatrix} X_1 & 0 & \cdots & 0 \\ 0 & X_2 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & X_M \end{bmatrix} \begin{bmatrix} \beta_1 \\ \beta_2 \\ \vdots \\ \beta_M \end{bmatrix} + \begin{bmatrix} \epsilon_1 \\ \epsilon_2 \\ \vdots \\ \epsilon_M \end{bmatrix}, \qquad \text{or } Y = X\beta + \epsilon.$$

Although each of the M equations may seem unrelated (i.e., each has potentially distinct coefficient vectors, dependent variables and explanatory variables), the equations in (1) are linked through their (mean-zero) error structure. (Although each equation typically represents a separate time series, it is possible that T instead denotes the number of cross sections within an equation.)
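To make the stacking concrete, here is a minimal sketch (in Python) that builds the block-diagonal regressor matrix and the stacked vectors for a small simulated two-equation system; the dimensions, parameter values and variable names are all illustrative rather than part of the notes.

```python
import numpy as np
from scipy.linalg import block_diag

rng = np.random.default_rng(0)
T, M = 100, 2                       # illustrative sample size and number of equations

# Two equations with distinct regressor sets (K_1 = 3, K_2 = 2), each with a constant
X1 = np.column_stack([np.ones(T), rng.normal(size=(T, 2))])
X2 = np.column_stack([np.ones(T), rng.normal(size=(T, 1))])
beta1, beta2 = np.array([1.0, 0.5, -0.3]), np.array([2.0, 0.8])

# Contemporaneously correlated (mean-zero) errors are what link the equations
Sigma = np.array([[1.0, 0.6],
                  [0.6, 2.0]])
eps = rng.multivariate_normal(np.zeros(M), Sigma, size=T)      # T x M

y1 = X1 @ beta1 + eps[:, 0]
y2 = X2 @ beta2 + eps[:, 1]

# Stacked system: y is (MT x 1) and X is block-diagonal (MT x sum of K_i)
y = np.concatenate([y1, y2])
X = block_diag(X1, X2)
print(y.shape, X.shape)             # (200,), (200, 5)
```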

The error covariance structure is

$$E(\epsilon\epsilon') = \begin{bmatrix} \sigma_{11} I_T & \sigma_{12} I_T & \cdots & \sigma_{1M} I_T \\ \sigma_{21} I_T & \sigma_{22} I_T & \cdots & \sigma_{2M} I_T \\ \vdots & & & \vdots \\ \sigma_{M1} I_T & \sigma_{M2} I_T & \cdots & \sigma_{MM} I_T \end{bmatrix} = \Sigma \otimes I_T \equiv \Omega \quad (MT \times MT),$$

where the $(M \times M)$ matrix $\Sigma = [\sigma_{ij}]$ is the variance-covariance matrix for each $t = 1, \ldots, T$ error vector.

Generalized Least Squares (GLS)

The system resembles the one we studied earlier on nonspherical disturbances. The efficient estimator in this context is the GLS estimator

$$\hat\beta = (X'\Omega^{-1}X)^{-1}(X'\Omega^{-1}Y) = [X'(\Sigma^{-1} \otimes I)X]^{-1}[X'(\Sigma^{-1} \otimes I)Y].$$

Assuming all the classical assumptions hold (other than that of spherical disturbances), GLS is the best linear unbiased estimator. There are two important conditions under which GLS does not provide any efficiency gains over OLS:

1. $\sigma_{ij} = 0$ for $i \neq j$. When all the contemporaneous correlations across equations equal zero, the equations are not linked in any fashion and GLS does not provide any efficiency gains. In fact, one can show that $b = \hat\beta$, where $b$ denotes equation-by-equation OLS.

2. $X_1 = X_2 = \cdots = X_M$. When the explanatory variables are identical across equations, $b = \hat\beta$.

As a rule, the efficiency gains of GLS over OLS tend to be greater when the contemporaneous correlation in errors across equations ($\sigma_{ij}$) is greater and there is less correlation between the X's across equations.

Feasible Generalized Least Squares (FGLS)

Typically, $\Sigma$ is not known. Assuming that a consistent estimator of $\Sigma$ is available, the feasible GLS estimator is

$$\hat\beta_{FGLS} = (X'\hat\Omega^{-1}X)^{-1}(X'\hat\Omega^{-1}Y) = [X'(\hat\Sigma^{-1} \otimes I)X]^{-1}[X'(\hat\Sigma^{-1} \otimes I)Y]. \qquad (2)$$
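A minimal sketch of the FGLS computation in (2), continuing the simulated two-equation system above; the estimator of $\Sigma$ anticipates equation (3) in the next subsection (OLS residual cross-products), and all names are illustrative.

```python
# Feasible GLS for the stacked SUR system (continues the simulated example above).
# Step 1: equation-by-equation OLS residuals.
b1, *_ = np.linalg.lstsq(X1, y1, rcond=None)
b2, *_ = np.linalg.lstsq(X2, y2, rcond=None)
e = np.column_stack([y1 - X1 @ b1, y2 - X2 @ b2])      # T x M residual matrix

# Step 2: estimate Sigma from residual cross-products, sigma_ij = e_i'e_j / T.
Sigma_hat = e.T @ e / T

# Step 3: beta_FGLS = [X'(Sigma^-1 kron I_T)X]^-1 [X'(Sigma^-1 kron I_T)y].
Omega_inv = np.kron(np.linalg.inv(Sigma_hat), np.eye(T))
beta_fgls = np.linalg.solve(X.T @ Omega_inv @ X, X.T @ Omega_inv @ y)
print(beta_fgls)      # compare with the true values (1.0, 0.5, -0.3) and (2.0, 0.8)
```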

This FGLS estimator will be a consistent estimator of $\beta$. The typical estimator of $\Sigma$ is

$$\hat\Sigma = \begin{bmatrix} \hat\sigma_{11} & \hat\sigma_{12} & \cdots & \hat\sigma_{1M} \\ \hat\sigma_{21} & \hat\sigma_{22} & \cdots & \hat\sigma_{2M} \\ \vdots & & & \vdots \\ \hat\sigma_{M1} & \hat\sigma_{M2} & \cdots & \hat\sigma_{MM} \end{bmatrix} = \begin{bmatrix} e_1'e_1/T & e_1'e_2/T & \cdots & e_1'e_M/T \\ e_2'e_1/T & e_2'e_2/T & \cdots & e_2'e_M/T \\ \vdots & & & \vdots \\ e_M'e_1/T & e_M'e_2/T & \cdots & e_M'e_M/T \end{bmatrix}, \qquad (3)$$

where $e_i$, $i = 1, \ldots, M$, represent the OLS residuals. Degrees-of-freedom corrections for the elements in $\hat\Sigma$ are possible, but will not generally produce unbiasedness. It is also possible to iterate on (2) and (3) until convergence, which will produce the maximum likelihood estimator under multivariate normal errors. In other words, $\hat\beta_{FGLS}$ and $\hat\beta_{ML}$ will have the same limiting distributions, such that

$$\hat\beta_{ML,FGLS} \overset{asy}{\sim} N(\beta, V),$$

where $V$ is consistently estimated by $\hat V = [X'(\hat\Sigma^{-1} \otimes I)X]^{-1}$.

Maximum Likelihood

Although asymptotically equivalent, maximum likelihood is an alternative estimator to FGLS that will provide different answers in small samples. Begin by rewriting the model for the t-th observation as

$$Y_t = \begin{bmatrix} y_{1,t} \\ y_{2,t} \\ \vdots \\ y_{M,t} \end{bmatrix} = \begin{bmatrix} \beta_1' x_t' \\ \beta_2' x_t' \\ \vdots \\ \beta_M' x_t' \end{bmatrix} + \begin{bmatrix} \epsilon_{1,t} \\ \epsilon_{2,t} \\ \vdots \\ \epsilon_{M,t} \end{bmatrix},$$

where $x_t$ is the row vector of all the different explanatory variables in the system and each $\beta_i$ is the column vector of coefficients for the i-th equation (unless each equation contains all explanatory variables, there will be zeros in $\beta_i$ to allow for exclusion restrictions). Assuming multivariate normally distributed errors, the log likelihood function is

$$\log L = \sum_{t=1}^{T} \log L_t = -\frac{MT}{2}\log(2\pi) - \frac{T}{2}\log|\Sigma| - \frac{1}{2}\sum_{t=1}^{T} \epsilon_t'\Sigma^{-1}\epsilon_t, \qquad (4)$$

where, as defined earlier, $\Sigma = E(\epsilon_t\epsilon_t')$. The maximum likelihood estimates are found by taking the derivatives of (4) with respect to $\beta$ and $\Sigma$, setting them equal to zero and solving.
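Iterating between (2) and (3) until the coefficients stop changing delivers the normal-errors MLE described above; a sketch under the same simulated setup (all names illustrative):

```python
# Iterated FGLS: alternate between (3) and (2) until convergence.
beta_it = beta_fgls.copy()
for _ in range(100):
    resid = y - X @ beta_it
    res_mat = resid.reshape(M, T).T                    # back to a T x M residual matrix
    Sigma_it = res_mat.T @ res_mat / T                 # update Sigma as in (3)
    Omega_inv = np.kron(np.linalg.inv(Sigma_it), np.eye(T))
    beta_new = np.linalg.solve(X.T @ Omega_inv @ X, X.T @ Omega_inv @ y)
    if np.max(np.abs(beta_new - beta_it)) < 1e-8:      # convergence check
        beta_it = beta_new
        break
    beta_it = beta_new

# Estimated asymptotic covariance [X'(Sigma^-1 kron I)X]^-1 at the final Sigma
V_hat = np.linalg.inv(X.T @ np.kron(np.linalg.inv(Sigma_it), np.eye(T)) @ X)
```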

Hypothesis Testing

We consider two types of tests: tests for contemporaneous correlation between errors and tests for linear restrictions on the coefficients.

Contemporaneous Correlation

If there is no contemporaneous correlation between errors in different equations (i.e., $\Sigma$ is diagonal), then OLS equation-by-equation is fully efficient. Therefore, it is useful to test the following restriction:

$$H_0: \sigma_{ij} = 0 \;\; \forall \; i \neq j \qquad H_A: H_0 \text{ false.}$$

Breusch and Pagan suggest using the Lagrange multiplier test statistic

$$\lambda = T \sum_{i=2}^{M}\sum_{j=1}^{i-1} r_{ij}^2,$$

where $r_{ij}$ is calculated using the OLS residuals as follows:

$$r_{ij} = \frac{e_i'e_j}{\sqrt{(e_i'e_i)(e_j'e_j)}}.$$

Under the null hypothesis, $\lambda$ is asymptotically chi-squared with $M(M-1)/2$ degrees of freedom.

Restrictions on Coefficients

The general F test presented earlier can be extended to the SUR system. However, since the statistic requires using $\hat\Sigma$, the test will only be valid asymptotically. Consider testing the following J linear restrictions:

$$H_0: R\beta = q \qquad H_A: H_0 \text{ false,}$$

where $\beta = (\beta_1', \beta_2', \ldots, \beta_M')'$. Within the SUR framework, it is possible to test coefficient restrictions across equations. One possible test statistic is

$$W = (R\hat\beta_{FGLS} - q)'[R\,\widehat{var}(\hat\beta_{FGLS})\,R']^{-1}(R\hat\beta_{FGLS} - q),$$

which has an asymptotic chi-square distribution with J degrees of freedom.
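Both the statistic and its degrees of freedom are easy to compute from the OLS residuals; here is a sketch of the Breusch-Pagan LM statistic using the T x M residual matrix e built earlier (scipy is used only for the chi-squared p-value):

```python
from scipy import stats

# Breusch-Pagan LM test for diagonal Sigma: lambda = T * sum_{i>j} r_ij^2
cross = e.T @ e                                   # e is the T x M OLS residual matrix
d = np.sqrt(np.diag(cross))
r = cross / np.outer(d, d)                        # r_ij = e_i'e_j / sqrt(e_i'e_i e_j'e_j)
lower = np.tril_indices(M, k=-1)                  # pairs with i > j
lm_stat = T * np.sum(r[lower] ** 2)
df = M * (M - 1) // 2
p_value = stats.chi2.sf(lm_stat, df)
print(lm_stat, df, p_value)
```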

Autocorrelation

Heteroscedasticity and autocorrelation are possibilities within the SUR framework. I will focus on autocorrelation because SUR systems are often comprised of time-series observations for each equation. Assume the errors follow

$$\epsilon_{i,t} = \rho_i \epsilon_{i,t-1} + u_{i,t},$$

where $u_{i,t}$ is white noise. The overall error structure will now be

$$E(\epsilon\epsilon') = \begin{bmatrix} \sigma_{11}\Omega_{11} & \sigma_{12}\Omega_{12} & \cdots & \sigma_{1M}\Omega_{1M} \\ \sigma_{21}\Omega_{21} & \sigma_{22}\Omega_{22} & \cdots & \sigma_{2M}\Omega_{2M} \\ \vdots & & & \vdots \\ \sigma_{M1}\Omega_{M1} & \sigma_{M2}\Omega_{M2} & \cdots & \sigma_{MM}\Omega_{MM} \end{bmatrix} \quad (MT \times MT),$$

where

$$\sigma_{ij}\Omega_{ij} = \frac{\sigma_{ij}}{1-\rho_i\rho_j}\begin{bmatrix} 1 & \rho_j & \rho_j^2 & \cdots & \rho_j^{T-1} \\ \rho_i & 1 & \rho_j & \cdots & \rho_j^{T-2} \\ \rho_i^2 & \rho_i & 1 & \cdots & \rho_j^{T-3} \\ \vdots & & & \ddots & \vdots \\ \rho_i^{T-1} & \rho_i^{T-2} & \rho_i^{T-3} & \cdots & 1 \end{bmatrix}.$$

The following three-step approach is recommended:

1. Run OLS equation-by-equation. Compute a consistent estimate of $\rho_i$ (e.g., $\hat\rho_i = (\sum_{t=2}^{T} e_{i,t}e_{i,t-1})/(\sum_{t=2}^{T} e_{i,t-1}^2)$) and transform the data, using either Prais-Winsten or Cochrane-Orcutt, to remove the autocorrelation (as sketched below).

2. Estimate $\Sigma$ using the transformed data as suggested in (3).

3. Use $\hat\Sigma$ and equation (2) to calculate the FGLS estimates.
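A sketch of step 1 for a single equation: estimate $\rho_i$ from that equation's OLS residuals and quasi-difference the data. The Prais-Winsten version keeps a rescaled first observation, while Cochrane-Orcutt simply drops it; function and variable names are illustrative.

```python
def prais_winsten_transform(y_i, X_i, e_i):
    """AR(1) correction for one SUR equation: estimate rho from the OLS residuals,
    then quasi-difference the data (keeping the scaled first observation)."""
    rho = (e_i[1:] @ e_i[:-1]) / (e_i[:-1] @ e_i[:-1])   # consistent estimate of rho_i
    y_star = np.empty_like(y_i)
    X_star = np.empty_like(X_i)
    scale = np.sqrt(1.0 - rho ** 2)
    y_star[0], X_star[0] = scale * y_i[0], scale * X_i[0]
    y_star[1:] = y_i[1:] - rho * y_i[:-1]
    X_star[1:] = X_i[1:] - rho * X_i[:-1]
    return rho, y_star, X_star

# Example: transform equation 1 of the simulated system, then redo steps 2 and 3
rho1, y1_star, X1_star = prais_winsten_transform(y1, X1, e[:, 0])
```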

SUR Gauss Application

Consider the data taken from Wooldridge. The model attempts to explain wages and fringe benefits for workers:

$$Wages_t = X_{1,t}\beta_1 + \epsilon_{1,t}$$
$$Benefits_t = X_{2,t}\beta_2 + \epsilon_{2,t},$$

where $X_{1,t} = X_{2,t}$ so that OLS and FGLS will produce equivalent results. Although OLS and FGLS are equivalent, one advantage of FGLS within a SUR framework is that it allows you to test coefficient restrictions across equations. Doing OLS equation-by-equation would not allow such tests. The variables are defined as follows:

Dependent Variables
- Wages: hourly earnings in 1999 dollars per hour
- Benefits: hourly benefits (vacation, sick leave, insurance and pension) in 1999 dollars per hour

Explanatory Variables
- Education: years of schooling
- Experience: years of work experience
- Tenure: years with current employer
- Union: one if union member, zero otherwise
- South: one if live in the south, zero otherwise
- Northeast: one if live in the northeast, zero otherwise
- Northcentral: one if live in the northcentral region, zero otherwise
- Married: one if married, zero otherwise
- White: one if white, zero otherwise
- Male: one if male, zero otherwise

See Gauss example 9 for further details.

The Simultaneous Equation Model

The simultaneous system can be written as

$$Y\Gamma + XB = E, \qquad (5)$$

where the variable matrices are

$$Y = \begin{bmatrix} Y_{11} & Y_{12} & \cdots & Y_{1M} \\ Y_{21} & Y_{22} & \cdots & Y_{2M} \\ \vdots & & & \vdots \\ Y_{T1} & Y_{T2} & \cdots & Y_{TM} \end{bmatrix}_{T \times M}, \quad X = \begin{bmatrix} X_{11} & X_{12} & \cdots & X_{1K} \\ X_{21} & X_{22} & \cdots & X_{2K} \\ \vdots & & & \vdots \\ X_{T1} & X_{T2} & \cdots & X_{TK} \end{bmatrix}_{T \times K}, \quad E = \begin{bmatrix} \epsilon_{11} & \epsilon_{12} & \cdots & \epsilon_{1M} \\ \epsilon_{21} & \epsilon_{22} & \cdots & \epsilon_{2M} \\ \vdots & & & \vdots \\ \epsilon_{T1} & \epsilon_{T2} & \cdots & \epsilon_{TM} \end{bmatrix}_{T \times M}$$

and the coefficient matrices are

$$\Gamma = \begin{bmatrix} \gamma_{11} & \gamma_{12} & \cdots & \gamma_{1M} \\ \gamma_{21} & \gamma_{22} & \cdots & \gamma_{2M} \\ \vdots & & & \vdots \\ \gamma_{M1} & \gamma_{M2} & \cdots & \gamma_{MM} \end{bmatrix}_{M \times M}, \qquad B = \begin{bmatrix} \beta_{11} & \beta_{12} & \cdots & \beta_{1M} \\ \beta_{21} & \beta_{22} & \cdots & \beta_{2M} \\ \vdots & & & \vdots \\ \beta_{K1} & \beta_{K2} & \cdots & \beta_{KM} \end{bmatrix}_{K \times M}.$$

Some definitions: $Y_{t,j}$ is the jth endogenous variable and $X_{t,j}$ is the jth exogenous or predetermined variable. Equations (5) are referred to as structural equations, and $\Gamma$ and $B$ are the structural parameters. To examine the assumptions about the error terms, rewrite the E matrix as

$$\tilde{E} = vec(E) = (\epsilon_{11}, \epsilon_{21}, \ldots, \epsilon_{T1}, \epsilon_{12}, \epsilon_{22}, \ldots, \epsilon_{T2}, \ldots, \epsilon_{1M}, \epsilon_{2M}, \ldots, \epsilon_{TM})'.$$

We assume

$$E(\tilde{E}) = 0, \qquad E(\tilde{E}\tilde{E}') = \Sigma \otimes I_T,$$

where the variance-covariance matrix for $\epsilon_t = (\epsilon_{t1}, \epsilon_{t2}, \ldots, \epsilon_{tM})'$ is

$$\Sigma = \begin{bmatrix} \sigma_{11} & \cdots & \sigma_{1M} \\ \vdots & & \vdots \\ \sigma_{M1} & \cdots & \sigma_{MM} \end{bmatrix}.$$

Reduced Form

The reduced-form solution to (5) is

$$Y = -XB\Gamma^{-1} + E\Gamma^{-1} = X\Pi + V,$$

where $\Pi = -B\Gamma^{-1}$ and $V = E\Gamma^{-1}$.

The reduced-form error vector $\tilde{V} = vec(V)$ satisfies

$$E(\tilde{V}) = 0, \qquad E(\tilde{V}\tilde{V}') = (\Gamma^{-1})'\Sigma\Gamma^{-1} \otimes I_T = \Omega \otimes I_T,$$

where $\Omega = (\Gamma^{-1})'\Sigma\Gamma^{-1}$.

Demand and Supply Example

Consider the following demand and supply equations:

$$Q_t^s = \alpha_0 + \alpha_1 P_t + \alpha_2 W_t + \alpha_3 Z_t + \epsilon_t^s$$
$$Q_t^d = \beta_0 + \beta_1 P_t + \beta_2 Z_t + \epsilon_t^d$$
$$Q_t^s = Q_t^d,$$

where $Q_t^s$, $Q_t^d$ and $P_t$ are endogenous variables and $W_t$ and $Z_t$ are exogenous variables. Let $Q_t = Q_t^s = Q_t^d$. In matrix form, the system can be written as $Y\Gamma + XB = E$ with

$$Y = \begin{bmatrix} Q_1 & P_1 \\ \vdots & \vdots \\ Q_T & P_T \end{bmatrix}, \quad X = \begin{bmatrix} 1 & W_1 & Z_1 \\ \vdots & \vdots & \vdots \\ 1 & W_T & Z_T \end{bmatrix}, \quad E = \begin{bmatrix} \epsilon_1^s & \epsilon_1^d \\ \vdots & \vdots \\ \epsilon_T^s & \epsilon_T^d \end{bmatrix}$$

and

$$\Gamma = \begin{bmatrix} 1 & 1 \\ -\alpha_1 & -\beta_1 \end{bmatrix}, \qquad B = \begin{bmatrix} -\alpha_0 & -\beta_0 \\ -\alpha_2 & 0 \\ -\alpha_3 & -\beta_2 \end{bmatrix}.$$

Identification

Identification Question: Given data on X and Y, can we identify $\Gamma$, $B$ and $\Sigma$?

Estimation of $\Pi$ and $\Omega$

Begin by making the standard assumptions about the reduced form $Y = X\Pi + V$:

1. $plim(\frac{1}{T}X'X) = Q$
2. $plim(\frac{1}{T}X'V) = 0$
3. $plim(\frac{1}{T}V'V) = \Omega$

These assumptions imply that the equation-by-equation OLS estimates of $\Pi$ and $\Omega$ will be consistent.
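A small numeric check of the mapping $\Pi = -B\Gamma^{-1}$ and of the consistency of reduced-form OLS, using the demand-and-supply system above with made-up parameter values (everything in the sketch is illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
T = 5000
a0, a1, a2, a3 = 2.0, 1.5, 0.8, 0.4        # hypothetical supply parameters
b0, b1, b2 = 10.0, -1.2, 0.5               # hypothetical demand parameters

Gamma = np.array([[1.0, 1.0],              # columns: (supply, demand); rows: (Q, P)
                  [-a1, -b1]])
B = np.array([[-a0, -b0],                  # rows: (constant, W, Z)
              [-a2, 0.0],
              [-a3, -b2]])
Pi = -B @ np.linalg.inv(Gamma)

# Simulate the structural system and confirm OLS on the reduced form recovers Pi
X = np.column_stack([np.ones(T), rng.normal(size=(T, 2))])           # (1, W, Z)
E = rng.multivariate_normal(np.zeros(2), [[1.0, 0.3], [0.3, 1.0]], size=T)
Y = (E - X @ B) @ np.linalg.inv(Gamma)                               # Y = X Pi + V
Pi_hat = np.linalg.lstsq(X, Y, rcond=None)[0]
print(np.round(Pi, 3))
print(np.round(Pi_hat, 3))
```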

Relationship Between $(\Pi, \Omega)$ and $(\Gamma, B, \Sigma)$

With these estimates ($\hat\Pi$ and $\hat\Omega$) in hand, the question is whether we can map back to $\Gamma$, $B$ and $\Sigma$. We know the following:

$$\Pi = -B\Gamma^{-1} \qquad \text{and} \qquad \Omega = (\Gamma^{-1})'\Sigma\Gamma^{-1}.$$

To see if identification is possible, we can count the number of known elements on the left-hand side and compare with the number of unknown elements on the right-hand side.

Number of known elements: $KM$ elements in $\Pi$ and $M(M+1)/2$ elements in $\Omega$, for a total of $M(K + (M+1)/2)$.

Number of unknown elements: $M^2$ elements in $\Gamma$, $M(M+1)/2$ elements in $\Sigma$ and $KM$ elements in $B$, for a total of $M(M + K + (M+1)/2)$.

Therefore, we are $M^2$ pieces of information shy of identifying the structural parameters. In other words, there is more than one set of structural parameters that are consistent with the reduced form. We say the model is underidentified.

Identification Conditions

There are several possibilities for obtaining identification:

- Normalization (i.e., set $\gamma_{ii} = 1$ in $\Gamma$ for $i = 1, \ldots, M$)
- Identities (e.g., the national income accounting identity)
- Exclusion restrictions (e.g., demand and supply shift factors)
- Other linear (and nonlinear) restrictions (e.g., the Blanchard-Quah long-run restriction)

Rank and Order Conditions

Begin by rewriting the i-th equation from $\Pi\Gamma + B = 0$ in matrix form as

$$\begin{bmatrix} \Pi & I_K \end{bmatrix}\begin{bmatrix} \Gamma_i \\ B_i \end{bmatrix} = 0, \qquad (6)$$

where $\Gamma_i$ and $B_i$ represent the i-th columns of $\Gamma$ and $B$, respectively. Since the rank of $[\Pi \;\; I_K]$ equals K, (6) represents a system of K equations in $M + K - 1$ unknowns (after normalization). In achieving identification, we will introduce linear restrictions as follows:

$$R_i\begin{bmatrix} \Gamma_i \\ B_i \end{bmatrix} = 0, \qquad (7)$$

where $Rank(R_i) = J$. Putting equations (6) and (7) together and redefining $a_i = (\Gamma_i', B_i')'$ gives

$$\begin{bmatrix} \Pi \;\; I_K \\ R_i \end{bmatrix} a_i = 0.$$

From this discussion, it is clear that $R_i$ must provide at least $M - 1$ new pieces of information. Here are the formal rank and order conditions.

Order Condition: The order condition states that $Rank(R_i) = J \geq M - 1$ is a necessary but not sufficient condition for identification. A situation where the order condition is not sufficient is when $R_i a_j = 0$ for some other equation j. More details on the order condition below.

Rank Condition: Letting $A = (\Gamma', B')'$ denote the $(M + K) \times M$ matrix whose i-th column is $a_i$, the rank condition states that $Rank(R_i A) = M - 1$ is a necessary and sufficient condition for identification.

We can now summarize all possible identification outcomes.

Under Identification: If either $Rank(R_i) < M - 1$ or $Rank(R_i A) < M - 1$, the i-th equation is underidentified.

Exact Identification: If $Rank(R_i) = M - 1$ and $Rank(R_i A) = M - 1$, the i-th equation is exactly identified.

Over Identification: If $Rank(R_i) > M - 1$ and $Rank(R_i A) = M - 1$, the i-th equation is overidentified.

Identification Conditions in the Demand and Supply Example

Begin with supply and note that $M - 1 = 1$. The order condition is simple. Since all the variables are in the supply equation, there is no restriction matrix $R_s$, so that $Rank(R_s) = 0 < 1$. The supply equation is underidentified. There is no need to look at the rank condition.

Next, consider demand. The only restriction is that $W_t$ is excluded from the demand equation, so the relevant matrix equations are

$$\begin{bmatrix} \Pi & I_3 \end{bmatrix}\begin{bmatrix} \Gamma_d \\ B_d \end{bmatrix} = 0, \qquad R_d\begin{bmatrix} \Gamma_d \\ B_d \end{bmatrix} = \begin{bmatrix} 0 & 0 & 0 & 1 & 0 \end{bmatrix}\begin{bmatrix} \Gamma_d \\ B_d \end{bmatrix} = 0,$$

for which the order condition is clearly satisfied (i.e., $Rank(R_d) = 1 = M - 1$). For the rank condition, we need to find the rank of

$$R_d A = \begin{bmatrix} 0 & 0 & 0 & 1 & 0 \end{bmatrix}\begin{bmatrix} 1 & 1 \\ -\alpha_1 & -\beta_1 \\ -\alpha_0 & -\beta_0 \\ -\alpha_2 & 0 \\ -\alpha_3 & -\beta_2 \end{bmatrix} = \begin{bmatrix} -\alpha_2 & 0 \end{bmatrix}.$$

Clearly, $Rank(R_d A) = 1 = M - 1$ (provided $\alpha_2 \neq 0$), so that the demand equation is exactly identified.
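The order and rank conditions for the demand equation can be verified mechanically with numpy's matrix_rank; a sketch with hypothetical parameter values (any values with $\alpha_2 \neq 0$ give the same ranks):

```python
import numpy as np

a0, a1, a2, a3 = 2.0, 1.5, 0.8, 0.4          # hypothetical supply parameters
b0, b1, b2 = 10.0, -1.2, 0.5                 # hypothetical demand parameters
M = 2

# All structural coefficients stacked: rows ordered (Q, P, constant, W, Z)
A = np.array([[1.0, 1.0],
              [-a1, -b1],
              [-a0, -b0],
              [-a2, 0.0],
              [-a3, -b2]])

# Demand excludes W, so R_d selects the W-coefficient of the demand equation
R_d = np.array([[0.0, 0.0, 0.0, 1.0, 0.0]])

print(np.linalg.matrix_rank(R_d))        # order condition: 1 = M - 1
print(np.linalg.matrix_rank(R_d @ A))    # rank condition:  1 = M - 1
```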

Limited-Information Estimation

We will consider five different limited-information estimation techniques: OLS, indirect least squares (ILS), instrumental variable (IV) estimation, two-stage least squares (2SLS) and limited-information maximum likelihood (LIML). The term limited information refers to equation-by-equation estimation, as opposed to full-information estimation, which uses the linkages among the different equations. Begin by writing the i-th equation as

$$Y\Gamma_i + XB_i = \epsilon_i$$
$$y_i = Y_i\gamma_i + Y_i^*\gamma_i^* + X_i\beta_i + X_i^*\beta_i^* + \epsilon_i,$$

where $Y_i$ represents the matrix of endogenous variables (other than $y_i$) included in the i-th equation, $Y_i^*$ represents the matrix of endogenous variables excluded from the i-th equation, and similarly for $X_i$ and $X_i^*$. Therefore, $\gamma_i^* = 0$ and $\beta_i^* = 0$, so that

$$y_i = Y_i\gamma_i + X_i\beta_i + \epsilon_i = \begin{bmatrix} Y_i & X_i \end{bmatrix}\begin{bmatrix} \gamma_i \\ \beta_i \end{bmatrix} + \epsilon_i = Z_i\delta_i + \epsilon_i.$$

Ordinary Least Squares (OLS)

The OLS estimator of $\delta_i$ is

$$\hat\delta_i^{OLS} = (Z_i'Z_i)^{-1}(Z_i'y_i).$$

The expected value of $\hat\delta_i^{OLS}$ is

$$E(\hat\delta_i^{OLS}) = \delta_i + E[(Z_i'Z_i)^{-1}Z_i'\epsilon_i].$$

However, since $y_i$ and $Y_i$ are jointly determined (recall $Z_i$ contains $Y_i$), we cannot expect that $E(Z_i'\epsilon_i) = 0$ or $plim(\frac{1}{T}Z_i'\epsilon_i) = 0$. Therefore, OLS estimates will be biased and inconsistent. This is commonly known as simultaneity or endogeneity bias.
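The simultaneity bias is easy to see in a small Monte Carlo on the demand-and-supply system: OLS on the demand equation is biased because price is endogenous, while using the excluded supply shifter W as an instrument (the IV idea developed below) centers the estimates on the true slope. All parameter values are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
a0, a1, a2, a3 = 2.0, 1.5, 0.8, 0.4          # hypothetical supply parameters
b0, b1, b2 = 10.0, -1.2, 0.5                 # hypothetical demand parameters
T, reps = 200, 1000
ols_b1, iv_b1 = [], []

for _ in range(reps):
    W, Z = rng.normal(size=T), rng.normal(size=T)
    e_s, e_d = rng.normal(size=T), rng.normal(size=T)
    # Solve the two structural equations for the equilibrium price and quantity
    P = (b0 - a0 - a2 * W + (b2 - a3) * Z + e_d - e_s) / (a1 - b1)
    Q = b0 + b1 * P + b2 * Z + e_d
    Zmat = np.column_stack([np.ones(T), P, Z])       # demand-equation regressors Z_i
    Wmat = np.column_stack([np.ones(T), W, Z])       # instruments W_i: W replaces P
    ols_b1.append(np.linalg.lstsq(Zmat, Q, rcond=None)[0][1])
    iv_b1.append(np.linalg.solve(Wmat.T @ Zmat, Wmat.T @ Q)[1])

print(np.mean(ols_b1), np.mean(iv_b1), "true slope:", b1)   # OLS is badly biased
```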

Indirect Least Squares (ILS)

The indirect least squares estimator simply uses the consistent reduced-form estimates ($\hat\Pi$ and $\hat\Omega$) and the relations $\Pi = -B\Gamma^{-1}$ and $\Omega = (\Gamma^{-1})'\Sigma\Gamma^{-1}$ to solve for $\Gamma$, $B$ and $\Sigma$. The ILS estimator is only feasible if the system is exactly identified. To see this, consider the i-th equation as given in (6),

$$\Pi\Gamma_i = -B_i,$$

where $\hat\Pi = (X'X)^{-1}X'Y$. Substituting $\hat\Pi$ and the normalized columns $\Gamma_i$ and $B_i$ (with $\gamma_i^* = 0$ and $\beta_i^* = 0$) gives

$$(X'X)^{-1}X'(y_i - Y_i\hat\gamma_i) = \begin{bmatrix} \hat\beta_i \\ 0 \end{bmatrix}.$$

Multiplying through by $(X'X)$ gives

$$X'y_i - X'Y_i\hat\gamma_i = X'X_i\hat\beta_i \quad\Rightarrow\quad X'y_i = X'Y_i\hat\gamma_i + X'X_i\hat\beta_i = X'Z_i\hat\delta_i.$$

Therefore, the ILS estimator for the i-th equation can be written as

$$\hat\delta_i^{ILS} = (X'Z_i)^{-1}(X'y_i).$$

There are three cases:

- If the i-th equation is exactly identified, then $X'Z_i$ is square and invertible.
- If the i-th equation is underidentified, then $X'Z_i$ is not square (it has fewer rows than columns).
- If the i-th equation is overidentified, then $X'Z_i$ is not square (it has more rows than columns), although a subset of its rows could be used to obtain consistent albeit inefficient estimates of $\delta_i$.

Instrumental Variable (IV) Estimation

Let $W_i$ be an instrument matrix of dimension $T \times (K_i + M_i)$ satisfying

- $plim(\frac{1}{T}W_i'Z_i) = \Sigma_{wz}$, a finite invertible matrix
- $plim(\frac{1}{T}W_i'W_i) = \Sigma_{ww}$, a finite positive-definite matrix
- $plim(\frac{1}{T}W_i'\epsilon_i) = 0.$

Since $plim(\frac{1}{T}Z_i'\epsilon_i) \neq 0$, we can instead examine

$$plim(\tfrac{1}{T}W_i'y_i) = plim(\tfrac{1}{T}W_i'Z_i)\delta_i + plim(\tfrac{1}{T}W_i'\epsilon_i) = plim(\tfrac{1}{T}W_i'Z_i)\delta_i.$$

Naturally, the instrumental variable estimator is

$$\hat\delta_i^{IV} = (W_i'Z_i)^{-1}(W_i'y_i),$$

which is consistent and has asymptotic variance-covariance matrix

$$asy.var(\hat\delta_i^{IV}) = \frac{\sigma_{ii}}{T}\left[\Sigma_{wz}^{-1}\Sigma_{ww}\Sigma_{zw}^{-1}\right].$$

This can be estimated by

$$est.asy.var(\hat\delta_i^{IV}) = \hat\sigma_{ii}(W_i'Z_i)^{-1}W_i'W_i(Z_i'W_i)^{-1}, \qquad \hat\sigma_{ii} = \frac{1}{T}(y_i - Z_i\hat\delta_i^{IV})'(y_i - Z_i\hat\delta_i^{IV}).$$

A degrees-of-freedom correction is optional. Notice that ILS is a special case of IV estimation for an exactly identified equation, where $W_i = X$.

Two-Stage Least Squares (2SLS)

When an equation in the system is overidentified (i.e., $rows(X'Z_i) > cols(X'Z_i)$), a convenient and intuitive IV estimator is the two-stage least squares estimator. The 2SLS estimator works as follows:

Stage #1: Regress $Y_i$ on $X$ and form $\hat Y_i = X\hat\Pi_i^{OLS}$.

Stage #2: Estimate $\delta_i$ by an OLS regression of $y_i$ on $\hat Y_i$ and $X_i$.

More formally, let $\hat Z_i = (\hat Y_i, X_i)$. The 2SLS estimator is given by

$$\hat\delta_i^{2SLS} = (\hat Z_i'\hat Z_i)^{-1}(\hat Z_i'y_i),$$

where the asymptotic variance-covariance matrix for $\hat\delta_i^{2SLS}$ can be estimated consistently by

$$est.asy.var(\hat\delta_i^{2SLS}) = \hat\sigma_{ii}(\hat Z_i'\hat Z_i)^{-1}, \qquad \hat\sigma_{ii} = \frac{1}{T}(y_i - Z_i\hat\delta_i^{2SLS})'(y_i - Z_i\hat\delta_i^{2SLS}).$$

Limited-Information Maximum Likelihood (LIML)

Limited-information maximum likelihood estimation refers to ML estimation of a single equation in the system. For example, if we assume normally distributed errors, then we can form the joint probability distribution function of $(y_i, Y_i)$ and maximize it by choosing $\delta_i$ and the appropriate elements of $\Sigma$. Since the LIML estimator is more complex but asymptotically equivalent to the 2SLS estimator, it is not widely used.

Note: When the i-th equation is exactly identified, $\hat\delta_i^{ILS} = \hat\delta_i^{IV} = \hat\delta_i^{2SLS} = \hat\delta_i^{LIML}$.
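A sketch of the two stages for a generic equation, with all exogenous variables in the system used as instruments; applied to the exactly identified demand equation from the Monte Carlo above, it reproduces the IV estimate, illustrating the equivalence noted in the text. Names are illustrative.

```python
import numpy as np

def two_stage_least_squares(y_i, Y_i, X_i, X):
    """2SLS: stage 1 regresses the included endogenous Y_i on all exogenous X;
    stage 2 is OLS of y_i on (Y_i_hat, X_i)."""
    Pi_hat = np.linalg.lstsq(X, Y_i, rcond=None)[0]        # stage 1 reduced-form fit
    Y_hat = X @ Pi_hat
    Z_hat = np.column_stack([Y_hat, X_i])
    d_hat = np.linalg.lstsq(Z_hat, y_i, rcond=None)[0]     # stage 2
    resid = y_i - np.column_stack([Y_i, X_i]) @ d_hat      # sigma_ii uses Z_i, not Z_i_hat
    sigma_ii = resid @ resid / len(y_i)
    cov = sigma_ii * np.linalg.inv(Z_hat.T @ Z_hat)
    return d_hat, cov

# Continuing the simulated demand equation: y_i = Q, Y_i = P, X_i = (1, Z), X = (1, W, Z)
d_2sls, cov_2sls = two_stage_least_squares(
    Q, P.reshape(-1, 1), np.column_stack([np.ones(T), Z]), Wmat)
```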

Full-Information Estimation

The five estimators mentioned above are not fully efficient because they ignore cross-equation relationships between error terms and any omitted endogenous variables. We consider two fully efficient estimators below.

Three-Stage Least Squares (3SLS)

Begin by writing the system (5) as

$$\begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_M \end{bmatrix} = \begin{bmatrix} Z_1 & 0 & \cdots & 0 \\ 0 & Z_2 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & Z_M \end{bmatrix}\begin{bmatrix} \delta_1 \\ \delta_2 \\ \vdots \\ \delta_M \end{bmatrix} + \begin{bmatrix} \epsilon_1 \\ \epsilon_2 \\ \vdots \\ \epsilon_M \end{bmatrix} \quad\Rightarrow\quad y = Z\delta + \epsilon,$$

where $\epsilon = \tilde{E}$ and $E(\epsilon\epsilon') = \Sigma \otimes I_T$. Then, applying the principle from SUR estimation, the fully efficient estimator is

$$\hat\delta = [\bar W'(\Sigma \otimes I)^{-1}Z]^{-1}[\bar W'(\Sigma \otimes I)^{-1}y],$$

where $\bar W$ indicates an instrumental variable matrix in the form of $Z$. Zellner and Theil (1962) suggest the following three-stage procedure for estimating $\delta$ (a code sketch of the three stages follows below):

Stage #1: Calculate $\hat Y_i$ for each equation ($i = 1, \ldots, M$) using OLS and the reduced form.

Stage #2: Use $\hat Y_i$ to calculate $\hat\delta_i^{2SLS}$ and $\hat\sigma_{ij} = \frac{1}{T}(y_i - Z_i\hat\delta_i^{2SLS})'(y_j - Z_j\hat\delta_j^{2SLS})$.

Stage #3: Calculate the IV-GLS estimator

$$\hat\delta^{3SLS} = [Z'(\hat\Sigma^{-1} \otimes X(X'X)^{-1}X')Z]^{-1}[Z'(\hat\Sigma^{-1} \otimes X(X'X)^{-1}X')y] = [\hat Z'(\hat\Sigma^{-1} \otimes I)\hat Z]^{-1}[\hat Z'(\hat\Sigma^{-1} \otimes I)y].$$

The asymptotic variance-covariance matrix can be estimated by

$$est.asy.var(\hat\delta^{3SLS}) = [Z'(\hat\Sigma^{-1} \otimes X(X'X)^{-1}X')Z]^{-1} = [\hat Z'(\hat\Sigma^{-1} \otimes I)\hat Z]^{-1}.$$

Full-Information Maximum Likelihood (FIML)

The full-information maximum likelihood estimator is asymptotically efficient. Assuming multivariate normally distributed errors, we maximize

$$\ln L(\Gamma, B, \Sigma \,|\, y, Z) = -\frac{MT}{2}\ln(2\pi) - \frac{T}{2}\ln|\Sigma| + T\ln\lVert\Gamma\rVert - \frac{1}{2}(y - Z\delta)'(\Sigma^{-1} \otimes I_T)(y - Z\delta)$$

by choosing $\Gamma$, $B$ and $\Sigma$ (here $\lVert\Gamma\rVert$ denotes the absolute value of the determinant of $\Gamma$, the Jacobian term). The FIML estimator can be computationally burdensome and has the same asymptotic distribution as the 3SLS estimator. As a result, most researchers use 3SLS.
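A compact sketch of the Zellner-Theil three stages for a general system, written in terms of the stacked objects defined above; it assumes every equation uses the full exogenous matrix X as the instrument set, and all names are illustrative.

```python
import numpy as np
from scipy.linalg import block_diag

def three_sls(y_list, Z_list, X):
    """3SLS: 2SLS equation by equation, estimate Sigma from the 2SLS residuals,
    then apply IV-GLS to the stacked system."""
    T, M = X.shape[0], len(y_list)
    P_X = X @ np.linalg.solve(X.T @ X, X.T)              # projection X(X'X)^-1 X'

    # Stages 1-2: 2SLS residuals and the residual covariance matrix
    resid = np.empty((T, M))
    for i, (y_i, Z_i) in enumerate(zip(y_list, Z_list)):
        Z_hat = P_X @ Z_i
        d_i = np.linalg.solve(Z_hat.T @ Z_i, Z_hat.T @ y_i)
        resid[:, i] = y_i - Z_i @ d_i
    Sigma_hat = resid.T @ resid / T

    # Stage 3: delta = [Z'(Sigma^-1 kron P_X)Z]^-1 [Z'(Sigma^-1 kron P_X)y]
    Z = block_diag(*Z_list)
    y = np.concatenate(y_list)
    Wgt = np.kron(np.linalg.inv(Sigma_hat), P_X)
    delta = np.linalg.solve(Z.T @ Wgt @ Z, Z.T @ Wgt @ y)
    cov = np.linalg.inv(Z.T @ Wgt @ Z)
    return delta, cov, Sigma_hat
```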

Simultaneous Equations Gauss Application

Consider estimating a traditional Keynesian consumption function using quarterly data. The simultaneous system is

$$C_t = \beta_0 + \beta_1 DI_t + \epsilon_t^c \qquad (8)$$
$$DI_t = C_t + I_t + G_t + NX_t + \epsilon_t^y, \qquad (9)$$

where the variables are defined as follows:

Endogenous Variables
- $C_t$: Consumption
- $DI_t$: Disposable Income

Exogenous Variables
- $I_t$: Investment
- $G_t$: Government Spending
- $NX_t$: Net Exports

The conditions for identification of (8), the more interesting equation to be estimated, are shown below. Begin by writing the system as $Y\Gamma + XB = E$ with

$$\Gamma = \begin{bmatrix} 1 & -1 \\ -\beta_1 & 1 \end{bmatrix}, \qquad B = \begin{bmatrix} -\beta_0 & 0 \\ 0 & -1 \\ 0 & -1 \\ 0 & -1 \end{bmatrix},$$

where $Y_t = (C_t, DI_t)$ and $X_t = (1, I_t, G_t, NX_t)$.

The order condition depends on the rank of the restriction matrix for (8),

$$R_1 = \begin{bmatrix} 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix},$$

which imposes the exclusion of $I_t$, $G_t$ and $NX_t$ from (8) and obviously has $Rank(R_1) = 3 > M - 1 = 1$. Therefore, the model is overidentified if the rank condition is satisfied (i.e., $Rank(R_1 A) = M - 1 = 1$). The relevant matrix for the rank condition is

$$R_1 A = R_1\begin{bmatrix} \Gamma \\ B \end{bmatrix} = \begin{bmatrix} 0 & -1 \\ 0 & -1 \\ 0 & -1 \end{bmatrix},$$

which has rank 1. Therefore, equation (8) is overidentified.

Another way to see that (8) is overidentified is to solve for the reduced-form representation of $C_t$:

$$C_t = \beta_0 + \beta_1[C_t + I_t + G_t + NX_t + \epsilon_t^y] + \epsilon_t^c = \frac{1}{1-\beta_1}\left[\beta_0 + \beta_1 I_t + \beta_1 G_t + \beta_1 NX_t + (\epsilon_t^c + \beta_1\epsilon_t^y)\right]$$

or

$$C_t = \pi_0 + \pi_1 I_t + \pi_2 G_t + \pi_3 NX_t + v_t. \qquad (10)$$

Clearly, estimation of (10) is likely to produce three distinct estimates of $\beta_1$ (one from each of $\pi_1$, $\pi_2$ and $\pi_3$, since each equals $\beta_1/(1-\beta_1)$), which reflects the overidentification problem. See the Gauss examples for OLS and 2SLS estimation of (8).
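Finally, a sketch of the overidentification point in (10) on simulated data: the three unrestricted reduced-form slopes each imply an estimate of $\beta_1$ via $\beta_1 = \pi_j/(1 + \pi_j)$. The data-generating process and parameter values below are made up; the actual application uses the quarterly series in the Gauss examples.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 200
beta0, beta1 = 50.0, 0.7                       # hypothetical structural parameters

I_t = 100 + rng.normal(scale=10, size=T)
G_t = 80 + rng.normal(scale=8, size=T)
NX_t = rng.normal(scale=5, size=T)
e_c = rng.normal(scale=4, size=T)
e_y = rng.normal(scale=4, size=T)

# Solve (8)-(9) for consumption: C = [beta0 + beta1*(I + G + NX + e_y) + e_c] / (1 - beta1)
C_t = (beta0 + beta1 * (I_t + G_t + NX_t + e_y) + e_c) / (1 - beta1)

# Unrestricted reduced-form OLS of C on (1, I, G, NX) as in (10)
Xr = np.column_stack([np.ones(T), I_t, G_t, NX_t])
pi_hat = np.linalg.lstsq(Xr, C_t, rcond=None)[0]
implied_beta1 = pi_hat[1:] / (1 + pi_hat[1:])
print(implied_beta1)        # three distinct estimates, all near the true value 0.7
```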
