Gauss-Markov Theorem
The Gauss-Markov Theorem is stated for the following regression model and assumptions.

The regression model:

$$y_i = \beta_1 + \beta_2 x_i + u_i, \qquad i = 1, \dots, n \tag{1}$$

and either Assumptions (A) or Assumptions (B).

Assumptions (A):
- $E u_i = 0$ for all $i$
- $\mathrm{Var}(u_i) = \sigma^2$ for all $i$ (homoscedasticity)
- $\mathrm{Cov}(u_i, u_j) = 0$ for all $i \neq j$
- $x_i$ is a nonstochastic constant

The textbook uses Assumptions (B) (see p. 588 of the text):

Assumptions (B):
- $E(u_i \mid x_1, \dots, x_n) = 0$ for all $i$
- $\mathrm{Var}(u_i \mid x_1, \dots, x_n) = \sigma^2$ for all $i$ (homoscedasticity)
- $\mathrm{Cov}(u_i, u_j \mid x_1, \dots, x_n) = 0$ for all $i \neq j$

If we use Assumptions (B), we need the law of iterated expectations to prove the BLUE property, and the property then holds conditionally on $x_1, \dots, x_n$. Here we use Assumptions (A). The Gauss-Markov Theorem is stated in the boxed statement below.
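
As a concrete illustration, here is a minimal simulation of model (1) under Assumptions (A). The sample size, coefficient values, and error scale are arbitrary choices for the sketch, not taken from the notes:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 50
beta1, beta2 = 2.0, 0.5          # true intercept and slope (arbitrary)
sigma = 1.0                      # error standard deviation (arbitrary)

x = np.linspace(0.0, 10.0, n)    # nonstochastic regressor, fixed across samples
u = rng.normal(0.0, sigma, n)    # E u_i = 0, Var(u_i) = sigma^2, Cov(u_i, u_j) = 0
y = beta1 + beta2 * x + u        # the regression model (1)
```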

Gauss-Markov Theorem. Under Assumptions (A), the OLS estimators $\hat\beta_1$ and $\hat\beta_2$ are the Best Linear Unbiased Estimators (BLUE), that is:

1. Unbiased: $E\hat\beta_1 = \beta_1$ and $E\hat\beta_2 = \beta_2$.
2. Best: $\hat\beta_1$ and $\hat\beta_2$ have the smallest variances among the class of all linear unbiased estimators.

Real data seldom satisfy Assumptions (A) or Assumptions (B). Accordingly, we might think that the Gauss-Markov theorem holds only in a never-never land. However, it is important to understand the theorem on two grounds:

1. We may treat the world of the Gauss-Markov theorem as equivalent to the world of perfect competition in microeconomic theory.
2. The mathematical exercises are good for your soul.

We shall prove the Gauss-Markov theorem using the simple regression model of equation (1). The theorem can also be proved for the multiple regression model

$$y_i = \beta_1 + \beta_2 x_{i2} + \dots + \beta_k x_{ik} + u_i, \qquad i = 1, \dots, n \tag{2}$$

To do so, however, we need vector and matrix language (linear algebra). In fact, once you learn linear algebra, the proof of the Gauss-Markov theorem is far more straightforward than the proof for the simple regression model (1). The Gauss-Markov theorem is discussed in the textbook; you should take a look at those pages.
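
The BLUE property can be seen in a small Monte Carlo experiment: compare the OLS slope with another linear unbiased estimator, for example the endpoint estimator $(y_n - y_1)/(x_n - x_1)$. Both are unbiased, but OLS should show the smaller variance. A sketch, reusing the hypothetical simulation setup above:

```python
import numpy as np

rng = np.random.default_rng(1)
n, beta1, beta2, sigma = 50, 2.0, 0.5, 1.0
x = np.linspace(0.0, 10.0, n)

ols, endpoint = [], []
for _ in range(20000):
    y = beta1 + beta2 * x + rng.normal(0.0, sigma, n)
    b2 = np.sum((x - x.mean()) * y) / np.sum((x - x.mean()) ** 2)  # OLS slope
    ols.append(b2)
    endpoint.append((y[-1] - y[0]) / (x[-1] - x[0]))  # another linear unbiased estimator

print(np.mean(ols), np.mean(endpoint))  # both close to beta2 = 0.5 (unbiased)
print(np.var(ols), np.var(endpoint))    # OLS variance should be much smaller (best)
```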

Proving the Gauss-Markov Theorem

The unbiasedness of $\hat\beta_1$ and of $\hat\beta_2$ was shown in the Comments on the Midterm Examination and in the answers to Assignment #5, so here we prove the minimum-variance property. There are generally two ways to prove bestness: (i) using linear algebra, and (ii) using calculus. We prove bestness using linear algebra first, and leave the proof using calculus to the Appendix.

First we prove that $\hat\beta_1$ has the smallest variance among all linear unbiased estimators of $\beta_1$.

Proof that $\hat\beta_1$ is best. We need to re-express $\hat\beta_1$ first:

$$\hat\beta_1 = \bar y - \hat\beta_2 \bar x = \frac{1}{n}\sum y_i - \left(\frac{\sum (x_i - \bar x) y_i}{s_{xx}}\right)\bar x = \sum \left(\frac{1}{n} - \frac{(x_i - \bar x)\,\bar x}{s_{xx}}\right) y_i = \sum w_i y_i,$$

where $s_{xx} = \sum (x_i - \bar x)^2 = \sum x_i^2 - n\bar x^2$ and

$$w_i = \frac{1}{n} - \frac{(x_i - \bar x)\,\bar x}{s_{xx}}.$$

The BLUE property looks only at linear estimators of $\beta_1$, which are estimators of the form

$$\tilde\beta_1 = \sum_{i=1}^{n} a_i y_i.$$

In passing we notice that if $a_i = w_i$ for all $i = 1, \dots, n$, then $\tilde\beta_1 = \hat\beta_1$. We have to make $\tilde\beta_1$ unbiased. To take the expectation of $\tilde\beta_1$, we first substitute equation (1), $y_i = \beta_1 + \beta_2 x_i + u_i$, for $y_i$:

$$\tilde\beta_1 = \sum_{i=1}^{n} a_i y_i = \sum_{i=1}^{n} a_i (\beta_1 + \beta_2 x_i + u_i) = \beta_1 \sum a_i + \beta_2 \sum a_i x_i + \sum a_i u_i$$

$$E\tilde\beta_1 = \beta_1 \sum a_i + \beta_2 \sum a_i x_i + \sum a_i E u_i = \beta_1 \sum a_i + \beta_2 \sum a_i x_i,$$
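
The re-expression $\hat\beta_1 = \sum w_i y_i$ is easy to check numerically. A minimal sketch with hypothetical data:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(5.0, 2.0, 30)
y = 2.0 + 0.5 * x + rng.normal(0.0, 1.0, 30)

xbar, Sxx = x.mean(), np.sum((x - x.mean()) ** 2)
b2 = np.sum((x - xbar) * y) / Sxx            # OLS slope
b1 = y.mean() - b2 * xbar                    # OLS intercept: ybar - b2 * xbar

w = 1.0 / len(x) - (x - xbar) * xbar / Sxx   # the weights w_i derived above
print(np.allclose(b1, np.sum(w * y)))        # True: intercept = sum_i w_i y_i
```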

since $E u_i = 0$ for all $i$. We see that

$$E\tilde\beta_1 = \beta_1 \iff \sum a_i = 1 \ \text{ and } \ \sum a_i x_i = 0$$

($\iff$ means "if and only if"). We take the variance of $\tilde\beta_1$:

$$\mathrm{Var}(\tilde\beta_1) \equiv E(\tilde\beta_1 - E\tilde\beta_1)^2 = E(\tilde\beta_1 - \beta_1)^2 \qquad \text{since } E\tilde\beta_1 = \beta_1$$

$$= E\Big(\sum a_i u_i\Big)^2 = a_1^2 E u_1^2 + \dots + a_n^2 E u_n^2 + 2 a_1 a_2 E u_1 u_2 + \dots + 2 a_{n-1} a_n E u_{n-1} u_n = \sigma^2 \sum a_i^2,$$

since $E u_i^2 = \sigma^2$ and $E u_i u_j = 0$ for $i \neq j$. The variance of the OLS estimator is $\mathrm{Var}(\hat\beta_1) = \sigma^2 \sum w_i^2$. We see that

$$\mathrm{Var}(\tilde\beta_1) \geq \mathrm{Var}(\hat\beta_1) \iff \sum_{i=1}^{n} a_i^2 \geq \sum_{i=1}^{n} w_i^2.$$

Since the $a_i$ are arbitrary nonstochastic constants, we can write $a_i = w_i + d_i$. Earlier we saw that $\tilde\beta_1$ is unbiased if and only if $\sum a_i = 1$ and $\sum a_i x_i = 0$. So

$$\sum a_i = \sum w_i + \sum d_i = 1, \qquad \sum a_i x_i = \sum w_i x_i + \sum d_i x_i = 0.$$

But

$$\sum w_i = \sum \left(\frac{1}{n} - \frac{(x_i - \bar x)\,\bar x}{s_{xx}}\right) = 1 - \bar x\,\frac{\sum (x_i - \bar x)}{s_{xx}} = 1$$

and

$$\sum w_i x_i = \frac{1}{n}\sum x_i - \bar x\,\frac{\sum (x_i - \bar x) x_i}{s_{xx}} = \bar x - \bar x = 0.$$

Hence $\sum d_i = 0$ and $\sum d_i x_i = 0$. We square $a_i$ and sum over $i = 1, \dots, n$:

$$\sum a_i^2 = \sum (w_i + d_i)^2 = \sum w_i^2 + \sum d_i^2 + 2\sum w_i d_i = \sum w_i^2 + \sum d_i^2,$$
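
The key decomposition, that any unbiased weights $a_i = w_i + d_i$ force $\sum d_i = 0$ and $\sum d_i x_i = 0$ and hence $\sum a_i^2 = \sum w_i^2 + \sum d_i^2$, can also be verified numerically. The sketch below builds a valid perturbation $d$ by projecting a random vector onto the space orthogonal to $(1, \dots, 1)$ and $(x_1, \dots, x_n)$; the data are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 30
x = rng.normal(5.0, 2.0, n)
xbar, Sxx = x.mean(), np.sum((x - x.mean()) ** 2)
w = 1.0 / n - (x - xbar) * xbar / Sxx

# Build d with sum(d) = 0 and sum(d * x) = 0 by projecting out (1, x).
Z = np.column_stack([np.ones(n), x])
d = rng.normal(size=n)
d -= Z @ np.linalg.lstsq(Z, d, rcond=None)[0]  # least-squares residual: orthogonal to 1 and x

a = w + d  # still satisfies the unbiasedness constraints
print(np.allclose([a.sum(), (a * x).sum()], [1.0, 0.0]))
print(np.allclose((a ** 2).sum(), (w ** 2).sum() + (d ** 2).sum()))  # cross term vanishes
print((a ** 2).sum() >= (w ** 2).sum())  # Var(tilde beta_1) >= Var(OLS)
```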

since the cross-product term is zero:

$$\sum w_i d_i = \sum \left(\frac{1}{n} - \frac{(x_i - \bar x)\,\bar x}{s_{xx}}\right) d_i = \frac{1}{n}\sum d_i - \frac{\bar x}{s_{xx}}\Big(\sum d_i x_i - \bar x \sum d_i\Big) = 0.$$

Hence

$$\sum a_i^2 = \sum w_i^2 + \sum d_i^2 \geq \sum w_i^2,$$

and this concludes the proof.

Proof that $\hat\beta_2$ is best. We have

$$\hat\beta_2 = \frac{\sum (x_i - \bar x) y_i}{s_{xx}} = \sum \left(\frac{x_i - \bar x}{s_{xx}}\right) y_i = \sum v_i y_i, \qquad \text{where } v_i = \frac{x_i - \bar x}{s_{xx}}.$$

We shall use the facts that

$$\sum v_i = 0 \qquad \text{and} \qquad \sum v_i^2 = \frac{1}{s_{xx}}.$$

The variance of $\hat\beta_2$ is given by

$$\mathrm{Var}(\hat\beta_2) = \sigma^2 \sum v_i^2.$$

Let $\tilde\beta_2$ be a linear estimator of $\beta_2$:

$$\tilde\beta_2 = \sum b_i y_i.$$

We need to find the conditions that make $\tilde\beta_2$ unbiased. Taking the expectation, we have

$$E\tilde\beta_2 = E\sum b_i(\beta_1 + \beta_2 x_i + u_i) = \beta_1 \sum b_i + \beta_2 \sum b_i x_i,$$

and thus

$$E\tilde\beta_2 = \beta_2 \iff \sum b_i = 0 \ \text{ and } \ \sum b_i x_i = 1.$$

The variance of $\tilde\beta_2$ is $\mathrm{Var}(\tilde\beta_2) = \sigma^2 \sum b_i^2$. Let $b_i = v_i + c_i$;
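
The properties of the slope weights $v_i$, namely $\sum v_i = 0$, $\sum v_i x_i = 1$, and $\sum v_i^2 = 1/s_{xx}$, can likewise be checked directly. A sketch with hypothetical data:

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(5.0, 2.0, 30)
y = 2.0 + 0.5 * x + rng.normal(size=30)

Sxx = np.sum((x - x.mean()) ** 2)
v = (x - x.mean()) / Sxx                      # OLS slope weights

print(np.isclose(v.sum(), 0.0))               # sum v_i = 0
print(np.isclose((v * x).sum(), 1.0))         # sum v_i x_i = 1
print(np.isclose((v ** 2).sum(), 1.0 / Sxx))  # sum v_i^2 = 1 / s_xx
print(np.isclose((v * y).sum(),               # beta2_hat = sum v_i y_i
                 np.sum((x - x.mean()) * y) / Sxx))
```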

then

$$\sum b_i = \sum v_i + \sum c_i = \sum c_i = 0, \qquad \sum b_i x_i = \sum v_i x_i + \sum c_i x_i = 1 \implies \sum c_i x_i = 0,$$

since

$$\sum v_i x_i = \frac{\sum (x_i - \bar x) x_i}{s_{xx}} = \frac{\sum x_i^2 - n\bar x^2}{s_{xx}} = 1.$$

So the variance of $\tilde\beta_2$ becomes

$$\mathrm{Var}(\tilde\beta_2) = \sigma^2 \sum b_i^2 = \sigma^2 \sum (v_i + c_i)^2 = \sigma^2\Big(\sum v_i^2 + \sum c_i^2 + 2\sum v_i c_i\Big) = \sigma^2 \sum v_i^2 + \sigma^2 \sum c_i^2 \geq \mathrm{Var}(\hat\beta_2),$$

since

$$\sum v_i c_i = \frac{\sum (x_i - \bar x) c_i}{s_{xx}} = \frac{\sum x_i c_i - \bar x \sum c_i}{s_{xx}} = 0.$$

Appendix: Proving Bestness Using Calculus

Another way to prove that the OLS estimators $\hat\beta_1$ and $\hat\beta_2$ are best is to use calculus to find the minimum variance. Since the variance is a quadratic function, it is twice differentiable, and thus we may use calculus to find the minimum.

Proving that $\hat\beta_1$ is best. The variance of a linear unbiased estimator is $\sigma^2 \sum a_i^2$ with the two linear constraints $\sum a_i = 1$ and $\sum a_i x_i = 0$. Hence we may form the following minimization problem subject to the linear constraints:

$$\min_{a_1, \dots, a_n} \ \sigma^2 \sum a_i^2 \qquad \text{subject to} \qquad \sum a_i = 1, \quad \sum a_i x_i = 0.$$
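
This is a quadratic program with linear equality constraints, so its first-order (KKT) conditions form a linear system that can be solved directly on a computer. A sketch of that solve, assuming $\sigma^2 = 1$ (the scale does not affect the minimizer) and hypothetical data; the Lagrangian derivation that follows confirms the same answer analytically:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 30
x = rng.normal(5.0, 2.0, n)
sigma2 = 1.0

# KKT system for: min sigma2 * sum(a^2)  s.t.  sum(a) = 1, sum(a * x) = 0
ones = np.ones(n)
K = np.zeros((n + 2, n + 2))
K[:n, :n] = 2.0 * sigma2 * np.eye(n)
K[:n, n], K[:n, n + 1] = -ones, -x    # FOC rows: 2*sigma2*a_i - lambda1 - lambda2*x_i = 0
K[n, :n], K[n + 1, :n] = ones, x      # the two constraint rows
rhs = np.zeros(n + 2)
rhs[n] = 1.0                          # sum(a) = 1

a = np.linalg.solve(K, rhs)[:n]

xbar, Sxx = x.mean(), np.sum((x - x.mean()) ** 2)
w = 1.0 / n - (x - xbar) * xbar / Sxx
print(np.allclose(a, w))              # the minimizer is exactly the OLS weight w_i
```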

We form the Lagrangian

$$\Lambda = \sigma^2 \sum a_i^2 - \lambda_1\Big(\sum a_i - 1\Big) - \lambda_2 \sum a_i x_i.$$

The first-order conditions are

$$\frac{\partial \Lambda}{\partial a_1} = 2\sigma^2 a_1 - \lambda_1 - \lambda_2 x_1 = 0 \tag{1}$$
$$\frac{\partial \Lambda}{\partial a_2} = 2\sigma^2 a_2 - \lambda_1 - \lambda_2 x_2 = 0 \tag{2}$$
$$\vdots$$
$$\frac{\partial \Lambda}{\partial a_n} = 2\sigma^2 a_n - \lambda_1 - \lambda_2 x_n = 0 \tag{n}$$
$$\frac{\partial \Lambda}{\partial \lambda_1} = -\sum a_i + 1 = 0 \tag{n+1}$$
$$\frac{\partial \Lambda}{\partial \lambda_2} = -\sum a_i x_i = 0 \tag{n+2}$$

Adding the left-hand and right-hand sides of equations (1)–(n), we have

$$2\sigma^2 \sum a_i - n\lambda_1 - \lambda_2 \sum x_i = 0.$$

Since $\sum a_i = 1$,

$$2\sigma^2 - n\lambda_1 - \lambda_2\, n\bar x = 0. \tag{*}$$

Multiplying the left-hand and right-hand sides of equations (1)–(n) by $x_1, x_2, \dots, x_n$ respectively and adding up, we have

$$2\sigma^2 \sum a_i x_i - \lambda_1 \sum x_i - \lambda_2 \sum x_i^2 = 0.$$

Since $\sum a_i x_i = 0$, we have

$$n\bar x\, \lambda_1 + \lambda_2 \sum x_i^2 = 0. \tag{**}$$

Equations (*) and (**) form a linear equation system in $\lambda_1$ and $\lambda_2$:

$$n\lambda_1 + \lambda_2\, n\bar x = 2\sigma^2, \qquad n\bar x\, \lambda_1 + \lambda_2 \sum x_i^2 = 0.$$

Solving for $\lambda_1$ and $\lambda_2$, we have

$$\lambda_1 = \frac{2\sigma^2 \sum x_i^2}{n\, s_{xx}}, \qquad \lambda_2 = -\frac{2\sigma^2 \bar x}{s_{xx}}.$$

From equations (1)–(n) we have

$$2\sigma^2 a_i = \lambda_1 + \lambda_2 x_i, \qquad i = 1, \dots, n.$$
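
The closed forms for the multipliers can be checked against a direct numerical solve of the 2x2 system (*), (**). A sketch, again assuming $\sigma^2 = 1$ and hypothetical data:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 30
x = rng.normal(5.0, 2.0, n)
sigma2 = 1.0
xbar, Sxx = x.mean(), np.sum((x - x.mean()) ** 2)

# The 2x2 system (*) and (**) in lambda1, lambda2:
A = np.array([[n,        n * xbar],
              [n * xbar, np.sum(x ** 2)]])
b = np.array([2.0 * sigma2, 0.0])
lam1, lam2 = np.linalg.solve(A, b)

print(np.isclose(lam1, 2.0 * sigma2 * np.sum(x ** 2) / (n * Sxx)))  # closed form for lambda1
print(np.isclose(lam2, -2.0 * sigma2 * xbar / Sxx))                 # closed form for lambda2
```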

Substituting for $\lambda_1$ and $\lambda_2$, we obtain

$$2\sigma^2 a_i = \frac{2\sigma^2 \sum x_j^2}{n\, s_{xx}} - \frac{2\sigma^2 \bar x\, x_i}{s_{xx}}, \qquad i = 1, \dots, n,$$

or

$$a_i = \frac{\sum x_j^2 - n\bar x\, x_i}{n\, s_{xx}} = \frac{1}{n} - \frac{(x_i - \bar x)\,\bar x}{s_{xx}} = w_i, \qquad i = 1, \dots, n,$$

since $\sum x_j^2 = s_{xx} + n\bar x^2$; here $w_i$ is the weight of the OLS estimator of $\beta_1$, $\hat\beta_1$. The second-order conditions involve

$$\frac{\partial^2 \Lambda}{\partial a_1^2} = 2\sigma^2, \ \dots, \ \frac{\partial^2 \Lambda}{\partial a_n^2} = 2\sigma^2, \qquad \frac{\partial^2 \Lambda}{\partial \lambda_1^2} = 0, \qquad \frac{\partial^2 \Lambda}{\partial \lambda_2^2} = 0,$$

the cross-derivatives

$$\frac{\partial^2 \Lambda}{\partial a_i \partial a_j} = 0, \quad i \neq j,$$

and

$$\frac{\partial^2 \Lambda}{\partial \lambda_1 \partial a_i} = -1, \qquad \frac{\partial^2 \Lambda}{\partial \lambda_2 \partial a_i} = -x_i, \qquad i = 1, \dots, n.$$

Hence the bordered Hessian becomes

$$\bar H = \begin{pmatrix}
\dfrac{\partial^2 \Lambda}{\partial a_1^2} & \cdots & \dfrac{\partial^2 \Lambda}{\partial a_1 \partial a_n} & \dfrac{\partial^2 \Lambda}{\partial a_1 \partial \lambda_1} & \dfrac{\partial^2 \Lambda}{\partial a_1 \partial \lambda_2} \\
\vdots & \ddots & \vdots & \vdots & \vdots \\
\dfrac{\partial^2 \Lambda}{\partial a_n \partial a_1} & \cdots & \dfrac{\partial^2 \Lambda}{\partial a_n^2} & \dfrac{\partial^2 \Lambda}{\partial a_n \partial \lambda_1} & \dfrac{\partial^2 \Lambda}{\partial a_n \partial \lambda_2} \\
\dfrac{\partial^2 \Lambda}{\partial \lambda_1 \partial a_1} & \cdots & \dfrac{\partial^2 \Lambda}{\partial \lambda_1 \partial a_n} & \dfrac{\partial^2 \Lambda}{\partial \lambda_1^2} & \dfrac{\partial^2 \Lambda}{\partial \lambda_1 \partial \lambda_2} \\
\dfrac{\partial^2 \Lambda}{\partial \lambda_2 \partial a_1} & \cdots & \dfrac{\partial^2 \Lambda}{\partial \lambda_2 \partial a_n} & \dfrac{\partial^2 \Lambda}{\partial \lambda_2 \partial \lambda_1} & \dfrac{\partial^2 \Lambda}{\partial \lambda_2^2}
\end{pmatrix}$$

This becomes

$$\bar H = \begin{pmatrix}
2\sigma^2 & 0 & \cdots & 0 & -1 & -x_1 \\
0 & 2\sigma^2 & \cdots & 0 & -1 & -x_2 \\
\vdots & & \ddots & \vdots & \vdots & \vdots \\
0 & 0 & \cdots & 2\sigma^2 & -1 & -x_n \\
-1 & -1 & \cdots & -1 & 0 & 0 \\
-x_1 & -x_2 & \cdots & -x_n & 0 & 0
\end{pmatrix},$$

and it can be shown that $\bar H$ satisfies the sign conditions for a constrained minimum; hence the solution

$$a_1 = w_1, \ a_2 = w_2, \ \dots, \ a_n = w_n$$

yields the minimum variance.

Proving that $\hat\beta_2$ is best. The constrained minimization problem becomes

$$\min_{b_1, \dots, b_n} \ \sigma^2 \sum b_i^2 \qquad \text{subject to} \qquad \sum b_i = 0, \quad \sum b_i x_i = 1.$$

The first-order conditions are

$$\frac{\partial \Lambda}{\partial b_1} = 2\sigma^2 b_1 - \lambda_1 - \lambda_2 x_1 = 0 \tag{1}$$
$$\vdots$$
$$\frac{\partial \Lambda}{\partial b_n} = 2\sigma^2 b_n - \lambda_1 - \lambda_2 x_n = 0 \tag{n}$$
$$\frac{\partial \Lambda}{\partial \lambda_1} = -\sum b_i = 0 \tag{n+1}$$
$$\frac{\partial \Lambda}{\partial \lambda_2} = -\sum b_i x_i + 1 = 0 \tag{n+2}$$

We proceed just as we did before and obtain

$$n\lambda_1 + n\bar x\, \lambda_2 = 0, \qquad n\bar x\, \lambda_1 + \Big(\sum x_i^2\Big)\lambda_2 = 2\sigma^2.$$
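
As with the intercept problem, this 2x2 system can be solved numerically as a check on the algebra; the closed forms tested below are the ones derived next. A sketch with $\sigma^2 = 1$ and hypothetical data:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 30
x = rng.normal(5.0, 2.0, n)
sigma2 = 1.0
xbar, Sxx = x.mean(), np.sum((x - x.mean()) ** 2)

# The 2x2 system in lambda1, lambda2 for the slope problem:
A = np.array([[n,        n * xbar],
              [n * xbar, np.sum(x ** 2)]])
rhs = np.array([0.0, 2.0 * sigma2])
lam1, lam2 = np.linalg.solve(A, rhs)

print(np.isclose(lam1, -2.0 * sigma2 * xbar / Sxx))  # matches lambda1 derived below
print(np.isclose(lam2, 2.0 * sigma2 / Sxx))          # matches lambda2 derived below
```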

Solving for $\lambda_1$ and $\lambda_2$, we obtain

$$\lambda_1 = -\frac{2\sigma^2 \bar x}{s_{xx}}, \qquad \lambda_2 = \frac{2\sigma^2}{s_{xx}}.$$

Substituting for $\lambda_1$ and $\lambda_2$, we obtain

$$2\sigma^2 b_i = \lambda_1 + \lambda_2 x_i = -\frac{2\sigma^2 \bar x}{s_{xx}} + \frac{2\sigma^2 x_i}{s_{xx}} = \frac{2\sigma^2 (x_i - \bar x)}{s_{xx}}.$$

Hence

$$b_i = \frac{x_i - \bar x}{s_{xx}} = v_i, \qquad i = 1, \dots, n.$$

The second-order conditions are obtained in a similar way, and the bordered Hessian again satisfies the conditions for a constrained minimum.
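
Finally, the whole calculus proof for $\hat\beta_2$ can be replayed numerically: solve the KKT system for the weights $b_i$ and confirm they equal $v_i$, then check the second-order condition in its equivalent form, that the objective's Hessian $2\sigma^2 I$ is positive definite on the constraint tangent space (a restatement of the bordered-Hessian argument, not its literal minor-by-minor computation). A sketch with $\sigma^2 = 1$ and hypothetical data:

```python
import numpy as np

rng = np.random.default_rng(8)
n = 30
x = rng.normal(5.0, 2.0, n)
sigma2 = 1.0

# KKT system for: min sigma2 * sum(b^2)  s.t.  sum(b) = 0, sum(b * x) = 1
ones = np.ones(n)
K = np.zeros((n + 2, n + 2))
K[:n, :n] = 2.0 * sigma2 * np.eye(n)
K[:n, n], K[:n, n + 1] = -ones, -x    # FOC rows: 2*sigma2*b_i - lambda1 - lambda2*x_i = 0
K[n, :n], K[n + 1, :n] = ones, x      # constraint rows
rhs = np.zeros(n + 2)
rhs[n + 1] = 1.0                      # sum(b * x) = 1

b = np.linalg.solve(K, rhs)[:n]
v = (x - x.mean()) / np.sum((x - x.mean()) ** 2)
print(np.allclose(b, v))              # the minimizer is the OLS slope weight v_i

# Second-order check: 2*sigma2*I restricted to the constraints' tangent space.
J = np.vstack([ones, x])              # Jacobian of the two constraints
N = np.linalg.svd(J)[2][2:].T         # columns span the null space of J
print(np.all(np.linalg.eigvalsh(N.T @ (2.0 * sigma2 * np.eye(n)) @ N) > 0))  # minimum
```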
