The Laws of Linear Combination


The Laws of Linear Combination
James H. Steiger
Department of Psychology and Human Development
Vanderbilt University
James H. Steiger (Vanderbilt University) 1 / 21

1 Goals for This Module
2 What is a Linear Combination? An Example: Course Grades
3 Learning to Read a Linear Combination
  The Sample Mean as a Linear Combination
4 The Mean of a Linear Combination
5 The Variance of a Linear Combination
  An Example
6 Covariance of Two Linear Combinations
7 The General Heuristic Rule

Goals for This Module

In this module, we cover:

- What is a linear combination? Basic definitions and terminology
- Key aspects of the behavior of linear combinations:
  - The mean of a linear combination
  - The variance of a linear combination
  - The covariance between two linear combinations
  - The correlation between two linear combinations
- Applications

What is a Linear Combination? An Example: Course Grades

A linear combination of two variables X and Y is any variable C of the form

    C = aX + bY    (1)

where a and b are constants called linear weights. The ith score C_i on variable C is produced from the ith scores on X and Y via the formula

    C_i = aX_i + bY_i    (2)

I sometimes refer loosely to these two formulas as the linear combination rule in variable form and the linear combination rule in score form, respectively.
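The score-form rule translates directly into a few lines of code. This is a minimal sketch of my own; the function name `linear_combination` is illustrative, not from the slides.

```python
# A minimal sketch of the linear combination rule in score form,
# C_i = a*X_i + b*Y_i.  The name is illustrative, not from the slides.
def linear_combination(a, b, X, Y):
    """Return the scores C_i = a*X_i + b*Y_i for paired lists X and Y."""
    return [a * x + b * y for x, y in zip(X, Y)]

# Example: weights a = b = 1 give the elementwise sum of the two lists.
C = linear_combination(1, 1, [1.0, 2.0, 3.0], [3.0, 1.0, 2.0])  # [4.0, 3.0, 5.0]
```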

What is a Linear Combination? An Example: Course Grades

Let's consider a classic example. Consider a course in which there are two exams, a midterm and a final. The exams are not weighted equally: the final exam counts twice as much as the midterm. The course grade for the ith individual is produced with the following equation

    G_i = (1/3)X_i + (2/3)Y_i    (3)

where X_i is the midterm grade and Y_i the final exam grade. We say that G is a linear combination of X and Y with linear weights 1/3 and 2/3.

What is a Linear Combination? An Example: Course Grades What is a Linear Combination? An Example: Course Grades We can think of a linear combination as a recipe that combines ingredients to produce a particular result. For a given set of variables, the linear combination is defined by the linear weights. Suppose we have two lists of numbers, X and Y. Below is a table of some common linear combination. X Y Name +1 +1 Sum +1 1 Difference + 1 2 + 1 2 Average James H. Steiger (Vanderbilt University) 6 / 21

Learning to Read a Linear Combination

It is important to be able to examine an expression and determine the following:

- Is the expression a linear combination (LC)?
- What quantities are being linearly combined?
- What are the linear weights?

For example, consider the following expression. It shows how to compute the overall mean X̄ that results when two samples of sizes n_1 and n_2, with sample means X̄_1 and X̄_2, are combined into one group of size n_1 + n_2:

    X̄ = (n_1 X̄_1 + n_2 X̄_2) / (n_1 + n_2)    (4)

Learning to read a Linear Combination Learning to read a Linear Combination The expression is a linear combination of X 1 and X 2. If you look carefully, you can see that the expression can be rewritten as ( ) ( ) n1 n2 X = X 1 + X 2 (5) n 1 + n 2 n 1 + n 2 This is an expression of the form X = p 1 X 1 + p 2 X 2 (6) in which p 1 is the proportion of the total sample size that is from the first sample, and p 2 is the proportion of the total size that is from the second sample. Naturally, these two proportions sum to 1. James H. Steiger (Vanderbilt University) 8 / 21

Learning to Read a Linear Combination: The Sample Mean as a Linear Combination

Consider the sample mean

    X̄ = (Σ_{i=1}^n X_i) / n    (7)

The sample mean can be written in the form

    X̄ = Σ_{i=1}^n (1/n) X_i    (8)

In that form, it is easier to see that the sample mean is a linear combination of all the X_i in which all the linear weights are 1/n.
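To make the weights-of-1/n reading concrete, here is a tiny check with data of my own choosing:

```python
# The sample mean of X as a linear combination with every weight equal to 1/n.
X = [2.0, 4.0, 6.0, 8.0]
n = len(X)

as_weighted_sum = sum((1 / n) * x for x in X)  # weight 1/n applied to each score
as_plain_mean = sum(X) / n                     # usual mean formula
```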

The Mean of a Linear Combination

When we linearly combine two or more variables, the mean and variance of the resulting linear combination have a known relationship to the means and variances of the original variables. Consider the following data, in which final grades G_i for 4 students are computed as G_i = (1/3)X_i + (2/3)Y_i.

The Mean of a Linear Combination

    Student   Midterm (X)   Final (Y)   Grade (G)
    A             90            78          82
    B             82            88          86
    C             74            86          82
    D             90            96          94
    Mean          84            87          86

Notice that the mean for G can be computed from the means of the midterm (X) and final (Y) using the same linear weights used to produce the G_i, namely

    Ḡ = (1/3)X̄ + (2/3)Ȳ = (1/3)(84) + (2/3)(87) = 28 + 58 = 86
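The table can be verified in a few lines; this sketch of my own uses the slide's numbers:

```python
# Slide data: G_i = (1/3)*X_i + (2/3)*Y_i for four students.
midterm = [90.0, 82.0, 74.0, 90.0]   # X
final   = [78.0, 88.0, 86.0, 96.0]   # Y
grades  = [x / 3 + 2 * y / 3 for x, y in zip(midterm, final)]

def mean(v):
    return sum(v) / len(v)

mean_of_grades = mean(grades)                   # 86 (up to float rounding)
rule = mean(midterm) / 3 + 2 * mean(final) / 3  # (1/3)*84 + (2/3)*87 = 86
```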

The Mean of a Linear Combination The Mean of a Linear Combination The preceding example demonstrates the linear combination rule for means, i.e., the mean of a linear combination is applied by applying the linear weights to the means of the variables that were linearly combined. So, if G i = ax i + by i (9) then G = (1/3)X + (2/3)Y (10) James H. Steiger (Vanderbilt University) 12 / 21

The Variance of a Linear Combination

Although the rule for the mean of an LC is simple, the rule for the variance is not so simple. Proving this rule is trivial with matrix algebra, but much more challenging with summation algebra. At this stage, we will present this rule as a heuristic rule: a procedure that is easy to carry out and yields the correct answer. We will demonstrate the heuristic rule step by step, using as an example the simple linear combination W = X + Y, and derive an expression for the variance of W.

The Variance of a Linear Combination The Variance of a Linear Combination Write the linear combination as a rule for variables. In this case, we write X + Y Algebraically square the previous expression. In this case, we get (X + Y ) 2 = X 2 + Y 2 + 2XY Apply a natural conversion rule to the preceding expression. The rule is: 1 Replace a squared variable by its variance; 2 Replace the product of two variables by their covariance. Applying the conversion rule, we obtain S 2 W = S 2 X +Y = S 2 X + S 2 Y + 2S XY James H. Steiger (Vanderbilt University) 14 / 21

The Variance of a Linear Combination: An Example

As an example, generate an expression for the variance of the linear combination X - 2Y.

The Variance of a Linear Combination: An Example

Squaring and then applying the conversion rule, we get

    (X - 2Y)² = X² + 4Y² - 4XY

    S²_{X-2Y} = S²_X + 4S²_Y - 4S_XY
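A numeric spot-check of this result, again with data of my own and population formulas:

```python
def var(v):
    """Population variance: mean squared deviation from the mean."""
    m = sum(v) / len(v)
    return sum((x - m) ** 2 for x in v) / len(v)

def cov(u, v):
    """Population covariance of two paired lists."""
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / len(u)

X = [1.0, 2.0, 3.0, 6.0]
Y = [2.0, 5.0, 4.0, 9.0]
D = [x - 2 * y for x, y in zip(X, Y)]   # D = X - 2Y

direct = var(D)                                    # variance from the D scores
heuristic = var(X) + 4 * var(Y) - 4 * cov(X, Y)    # S²_X + 4*S²_Y - 4*S_XY
```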

Covariance of Two Linear Combinations

It is quite common to compute more than one linear combination on the same columns of numbers. Consider the following example, in which we compute the sum (X + Y) and the difference (X - Y) on two columns of numbers.

    X   Y   X + Y   X - Y
    1   3     4      -2
    2   1     3       1
    3   2     5       1

Covariance of Two Linear Combinations Covariance of Two Linear Combinations The two new variable have a covariance, and we can derive an expression for the covariance using a slight modification of the heuristic rule. In this case, we simply compute the algebraic product of the two linear combinations, then apply the same conversion rule used previously. (X + Y )(X Y ) = X 2 Y 2 S X +Y,X Y = S 2 X S 2 Y We see that the covariance between these two linear combinations is not a function of the covariance between the two original variables. Moreover, if X and Y have the same variance, the covariance between X + Y and X Y is always zero! James H. Steiger (Vanderbilt University) 18 / 21

The General Heuristic Rule

Theorem (The General Heuristic Rule). A general rule that allows computation of the variance of any linear combination or transformation, as well as the covariance between any two linear transformations or combinations, is the following:

- For the variance of a single expression, write the expression, square it, and apply the conversion rule described below.
- For the covariance of any two expressions, write the two expressions, compute their algebraic product, then apply the conversion rule described below.

The conversion rule is as follows:

- All constants are carried forward.
- If a term has the product of two variables, replace the product with the covariance of the two variables.
- If a term has the square of a single variable, replace the squared variable with its variance.
- Any term without the product of two variables or the square of a variable is deleted.

The General Heuristic Rule

Example (The General Heuristic Rule). Suppose X and Y are random variables, and you compute the following new random variables:

    W = X - Y
    M = 2X + 5

Construct formulas for

1. σ²_W
2. σ²_M
3. σ_{W,M}

(Answers on the next slide...)

The General Heuristic Rule

Example (The General Heuristic Rule). Answers.

1. To get σ²_W, we square X - Y, obtaining X² + Y² - 2XY, and apply the conversion rule to get

       σ²_W = σ²_X + σ²_Y - 2σ_{X,Y}

2. To get σ²_M, we square 2X + 5, obtaining 4X² + 20X + 25. Applying the conversion rule, we drop the last two terms, neither of which has the square of a variable or the product of two variables. We are left with the first term, which yields

       σ²_M = 4σ²_X

3. To get σ_{W,M}, we begin by computing (X - Y)(2X + 5) = 2X² - 2XY + 5X - 5Y. We drop the last two terms and obtain

       σ_{W,M} = 2σ²_X - 2σ_{X,Y}
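All three answers can be spot-checked numerically; this final sketch uses data of my own and population formulas throughout.

```python
def var(v):
    """Population variance: mean squared deviation from the mean."""
    m = sum(v) / len(v)
    return sum((x - m) ** 2 for x in v) / len(v)

def cov(u, v):
    """Population covariance of two paired lists."""
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / len(u)

X = [1.0, 2.0, 3.0, 6.0]
Y = [2.0, 5.0, 4.0, 9.0]
W = [x - y for x, y in zip(X, Y)]   # W = X - Y
M = [2 * x + 5 for x in X]          # M = 2X + 5

check_var_W = var(W) == var(X) + var(Y) - 2 * cov(X, Y)
check_var_M = var(M) == 4 * var(X)   # the additive constant 5 drops out
check_cov_WM = cov(W, M) == 2 * var(X) - 2 * cov(X, Y)
```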