Matrix Algebra and Applications




Dudley Cooke, Trinity College Dublin

EC2040 Topic 2 - Matrices and Matrix Algebra

Reading
1 Chapters 4 and 5 of CW
2 Chapters 11, 12 and 13 of PR

Plan
1 Matrices and Matrix Algebra
2 Transpose, Inverse, and Determinant of a Matrix
3 Solutions to Systems of Linear Equations

Matrices and Systems of Equations

A matrix is a rectangular array of numbers. Some examples are:

A = [ 2 3 ]    B = [ 5 10 ]    C = [ 2 5 1 4 ]
    [ 2 3 ]        [ 7  2 ]
                   [ 1  1 ]

Notation: we shall use a capital letter to denote a matrix and the corresponding small letter to denote its individual elements. The element in the (2, 1) position (2nd row, 1st column) of A will be denoted a21. So, in general,

A = [ a11 a12 ]
    [ a21 a22 ]

The number of rows and columns in a matrix is also called its order or dimension. For instance, the matrix A above has order 2 × 2.
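As a quick aside (not part of the original slides), these objects can be set up in Python with numpy, which is assumed here purely for illustration; the entries are the example matrices above.

    import numpy as np

    # The example matrices above (entries as reconstructed on this slide).
    A = np.array([[2, 3],
                  [2, 3]])          # order 2 x 2
    B = np.array([[5, 10],
                  [7, 2],
                  [1, 1]])          # order 3 x 2
    C = np.array([[2, 5, 1, 4]])    # order 1 x 4, a row matrix

    print(A.shape)   # (2, 2): the order m x n
    print(A[1, 0])   # the (2, 1) element a21; numpy indexes rows and columns from zero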

Special Types of Matrices

A square matrix has the same number of rows as columns (order n × n). A row matrix has order 1 × n, and a column matrix has order n × 1. A vector is a matrix having either a single row or a single column.

Two special types of square matrices are the null matrix (all entries zero) and the identity matrix, which has 1 along the diagonal and zeros everywhere else:

I = [ 1 0 ... 0 ]
    [ 0 1 ... 0 ]
    [ ...       ]
    [ 0 0 ... 1 ]

A square matrix whose only non-zero entries are on the diagonal is also called a diagonal matrix.
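A minimal numpy sketch of these special matrices (again an illustrative aside, with arbitrary sizes and diagonal entries):

    import numpy as np

    null_3 = np.zeros((3, 3))           # null matrix: every entry is zero
    identity_3 = np.eye(3)              # identity matrix: 1 on the diagonal, 0 elsewhere
    diagonal_3 = np.diag([4, 7, 2])     # diagonal matrix built from a given diagonal

    row_vec = np.array([[1, 2, 3]])      # a row matrix, order 1 x 3
    col_vec = np.array([[1], [2], [3]])  # a column matrix, order 3 x 1

    print(identity_3)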

Matrices in Economics

One example of a matrix in economics is the IS-LM macro model. (Recall that IS, the first three equations below, together with LM gives AD, where Ḡ and M̄ are government policies.)

Y = C + I + Ḡ        (resource constraint)
C = a + b(1 − t)Y    (consumption function)
I = e − lR           (investment function)
M̄ = kY − hR          (money demand)

We can write the above system in matrix notation as:

[ 1        −1  −1   0 ] [ Y ]   [ Ḡ ]
[ −b(1−t)   1   0   0 ] [ C ] = [ a ]
[ 0         0   1   l ] [ I ]   [ e ]
[ k         0   0  −h ] [ R ]   [ M̄ ]

Question: what happens to consumption when we increase the money supply (expansionary monetary policy)?

Arithmetical Operations on Matrices

Addition and subtraction of matrices is very easy but can be done only when all the matrices have the same dimension. Thus, A (m × n) + B (p × q) is defined only when m = p and n = q.

Example:

[ 4 3 ]   [ 1 2 ]   [ 4+1  3+2 ]   [ 5 5 ]
[ 1 2 ] + [ 5 6 ] = [ 1+5  2+6 ] = [ 6 8 ]

That is, m = p = n = q = 2.

Clearly, a 2 × 2 matrix [ a11 a12 ; a21 a22 ] and a 3 × 1 matrix [ c11 ; c12 ; c13 ] cannot be added together.

Arithmetical Operations on Matrices

Formal definition: suppose that we have two matrices A = [aij] (m × n) and B = [bij] (m × n). Then the matrix A + B is simply [aij + bij] (m × n). E.g., for the m = n = 2 case,

[ a11 a12 ]   [ b11 b12 ]   [ a11+b11  a12+b12 ]
[ a21 a22 ] + [ b21 b22 ] = [ a21+b21  a22+b22 ]

Similarly, the matrix A − B is simply [aij − bij] (m × n).

This process easily extends to the case when we have many matrices. For instance, if we have three matrices A, B, C of the same dimension, then A + B + C is the matrix formed by adding the corresponding (i, j)th entries of A, B and C.

Scalar Multiplication

Scalar multiplication by a number c just multiplies each element of the matrix by the number c.

For instance, if

A = [ 2 1 ]
    [ 3 9 ]
    [ 4 5 ]

and c = 5/2, then cA is the 3 × 2 matrix

cA = [ 5    5/2  ]
     [ 15/2 45/2 ]
     [ 10   25/2 ]

Note that scalar multiplication applies to any matrix. We can combine the operations of scalar multiplication and addition: for instance, you should be able to say what 5A + 3B means if A and B have the same dimension.
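The scalar-multiplication example above can be checked directly in numpy (an illustrative aside; the matrix B used for 5A + 3B below is arbitrary, since the slide leaves it unspecified):

    import numpy as np

    A = np.array([[2, 1],
                  [3, 9],
                  [4, 5]])
    c = 5 / 2

    print(c * A)    # each entry scaled by 5/2: 5, 5/2, 15/2, 45/2, 10, 25/2

    # Combining scalar multiplication and addition, e.g. 5A + 3B,
    # with an arbitrary B of the same order:
    B = np.ones((3, 2))
    print(5 * A + 3 * B)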

Matrix Multiplication

Matrix multiplication is somewhat more complicated (but all it really requires is concentration).

Suppose that we have two matrices A (m × n) and B (p × q). The product AB is defined only when n = p, that is, when the number of columns in A equals the number of rows in B.

1 This means that the product BA is defined only when q = m.
2 It is possible that AB is defined but BA is not defined.
3 Furthermore, even if AB and BA are both defined, it is possible that they do not give the same matrix.

Matrix Multiplication

The bottom line is that the order of operation is important in matrix multiplication. Mathematically, we say that matrix multiplication is not commutative (whereas the scalar case is). All this is very different from ordinary multiplication.

Take the previous example:

A = [ 4 3 ]    B = [ 1 2 ]
    [ 1 2 ]        [ 5 6 ]

What are AB and BA?

Matrix Multiplication for the 2 by 2 Case

First, AB is

AB = [ 4 3 ] [ 1 2 ]   [ 4(1)+3(5)  4(2)+3(6) ]   [ 19 26 ]
     [ 1 2 ] [ 5 6 ] = [ 1(1)+2(5)  1(2)+2(6) ] = [ 11 14 ]

However, BA is

BA = [ 1 2 ] [ 4 3 ]   [ 1(4)+2(1)  1(3)+2(2) ]   [  6  7 ]
     [ 5 6 ] [ 1 2 ] = [ 5(4)+6(1)  5(3)+6(2) ] = [ 26 27 ]

So, AB is not BA. Again, think about this versus the scalar case: if a = 5 and b = 6, ab = ba = 30.
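The two products can be checked with numpy's @ operator (an aside; numpy is assumed only for illustration):

    import numpy as np

    A = np.array([[4, 3],
                  [1, 2]])
    B = np.array([[1, 2],
                  [5, 6]])

    print(A @ B)   # [[19 26]
                   #  [11 14]]
    print(B @ A)   # [[ 6  7]
                   #  [26 27]]
    # The two products differ: matrix multiplication is not commutative.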

Formal Definition of Matrix Multiplication

Suppose we have two matrices A = [aij] (m × n) and B = [bij] (n × p). The product AB is an m × p matrix [cij] where

cij = ai1 b1j + ai2 b2j + ... + ain bnj

Loosely speaking, cij is the product of the ith row of A and the jth column of B. Again, for the 2 by 2 case, we have:

AB = [ a11 a12 ] [ b11 b12 ]   [ a11 b11 + a12 b21   a11 b12 + a12 b22 ]   [ c11 c12 ]
     [ a21 a22 ] [ b21 b22 ] = [ a21 b11 + a22 b21   a21 b12 + a22 b22 ] = [ c21 c22 ] = C
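The definition of cij translates directly into code. The sketch below is illustrative only (in practice one would use numpy's built-in @ operator); it loops over every row of A and column of B:

    import numpy as np

    def matmul(A, B):
        """Matrix product from the definition c_ij = a_i1 b_1j + ... + a_in b_nj."""
        m, n = A.shape
        n2, p = B.shape
        if n != n2:
            raise ValueError("columns of A must equal rows of B")
        C = np.zeros((m, p))
        for i in range(m):
            for j in range(p):
                C[i, j] = sum(A[i, k] * B[k, j] for k in range(n))
        return C

    print(matmul(np.array([[4, 3], [1, 2]]), np.array([[1, 2], [5, 6]])))
    # [[19. 26.]
    #  [11. 14.]]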

Example

Suppose

A = [ 1 2 ]    B = [ 7  8 ]
    [ 3 4 ]        [ 9 10 ]
    [ 5 6 ]

and AB = C, if possible.

Since A has order 3 × 2 and B has order 2 × 2, it follows that C = AB is defined and the product is a matrix of dimension 3 × 2. The individual elements of C (e.g., c11 = a11 b11 + a12 b21) are:

c11 = (1 × 7) + (2 × 9) = 25,   c12 = (1 × 8) + (2 × 10) = 28,
c21 = (3 × 7) + (4 × 9) = 57,   c22 = (3 × 8) + (4 × 10) = 64,
c31 = (5 × 7) + (6 × 9) = 89,   c32 = (5 × 8) + (6 × 10) = 100.

Hence,

C = [ 25  28 ]
    [ 57  64 ]
    [ 89 100 ]

Note that BA is not defined: B has two columns but A has three rows, so elements such as a31 = 5 and a32 = 6 cannot be matched with elements of B.

Null and Identity Matrix Multiplication

The null and identity matrix cases are trivial. The identity matrix plays the role of 1 in the scalar case.

Scalar case: 3 × 1 = 3.

Matrix case:

[ 4 3 ] [ 1 0 ]   [ 4(1)+3(0)  4(0)+3(1) ]   [ 4 3 ]
[ 1 2 ] [ 0 1 ] = [ 1(1)+2(0)  1(0)+2(1) ] = [ 1 2 ]

The null matrix plays the role of zero. As with 3 × 0 = 0, so

[ 4 3 ] [ 0 0 ]   [ 4(0)+3(0)  4(0)+3(0) ]   [ 0 0 ]
[ 1 2 ] [ 0 0 ] = [ 1(0)+2(0)  1(0)+2(0) ] = [ 0 0 ]

Recap: Rules for Scalars versus Matrices

Scalar case:
1 Commutative: a + b = b + a
2 Associative: (a + b) + c = a + (b + c) and (ab)c = a(bc)
3 Distributive: a(b + c) = ab + ac and (b + c)a = ba + ca

Matrix case:
1 Addition
  Commutative law: A + B = B + A
  Associative law: (A + B) + C = A + (B + C)
2 Multiplication
  Associative law: (AB)C = A(BC)
  Distributive law: A(B + C) = AB + AC and (B + C)A = BA + CA

Transpose of a Matrix

The transpose of a matrix A is formed by interchanging the rows and columns of A. If A has order m × n, then its transpose has order n × m. The transpose of A is generally denoted A^T and sometimes A′.

If A = A^T, then we say that A is a symmetric matrix. A symmetric matrix must be a square matrix, e.g.

A = A^T = [ 1 3 ]
          [ 3 1 ]

Transpose of a square matrix:

A = [ a11 a12 ]    A^T = [ a11 a21 ]
    [ a21 a22 ]          [ a12 a22 ]

Properties of the Transpose

The transpose operation has the following properties.

1 (A^T)^T = A

2 (A + B)^T = A^T + B^T. For example, with

  A = [ 4 3 ]   A^T = [ 4 1 ]   and   B = [ 1 2 ]   B^T = [ 1 5 ]
      [ 1 2 ]         [ 3 2 ]             [ 5 6 ]         [ 2 6 ]

  Try this at home.

3 (AB)^T = B^T A^T. More generally, if we have n matrices A1, A2, ..., An, then

  (A1 A2 ... An)^T = An^T An-1^T ... A2^T A1^T
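These properties are easy to confirm numerically (a small check, not in the original slides), using the running A and B:

    import numpy as np

    A = np.array([[4, 3],
                  [1, 2]])
    B = np.array([[1, 2],
                  [5, 6]])

    print(np.array_equal((A + B).T, A.T + B.T))   # True
    print(np.array_equal((A @ B).T, B.T @ A.T))   # True: note the reversed order
    print(np.array_equal((A @ B).T, A.T @ B.T))   # False in general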

The Inverse of a Square Matrix

For a square matrix A (n × n), the following is always true:

A In = In A = A

If we can find a matrix B (n × n) satisfying AB = BA = In, then we say that B is the inverse matrix of A. The inverse matrix of A is denoted A^-1.

Not all square matrices have inverses.
1 A matrix which has an inverse is called non-singular.
2 A matrix which does not have an inverse is called singular.

The Determinant of a Square Matrix

We denote the determinant of A by |A|. The determinant of a 1 × 1 matrix is trivial: it is the number itself. The determinant of the 2 × 2 matrix

A = [ a11 a12 ]
    [ a21 a22 ]

is |A| = a11 a22 − a12 a21.

The procedure for bigger matrices is more complicated. Let A be an n × n matrix. Let Mij denote the determinant of the matrix derived from A by deleting the ith row and jth column, and let Cij = (−1)^(i+j) Mij. We refer to Mij as a minor and Cij as a co-factor.

Note: a square matrix A has an inverse (is non-singular) if and only if its determinant is non-zero.

The Determinant of a Square Matrix

Pick any row or column in the matrix. Suppose we pick the ith row. Then the determinant of the matrix A is:

|A| = ai1 Ci1 + ai2 Ci2 + ... + ain Cin

If we pick a column instead, say the jth one, then the determinant is computed as:

|A| = a1j C1j + a2j C2j + ... + anj Cnj

Observe that we could have picked any row or column: we will always get the same answer. For practical purposes, we try to pick a row or column which has the maximum number of zeros.
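The expansion formula can be written as a short recursive routine (a sketch of the definition only; numpy's np.linalg.det is what one would use in practice):

    import numpy as np

    def det_cofactor(A):
        """Determinant by cofactor expansion along the first row."""
        A = np.asarray(A, dtype=float)
        n = A.shape[0]
        if n == 1:
            return A[0, 0]
        total = 0.0
        for j in range(n):
            minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)  # delete row 1, column j
            total += A[0, j] * (-1) ** j * det_cofactor(minor)     # (-1)^(1+j) with zero-based j
        return total

    print(det_cofactor([[1, 2], [3, 4]]))   # -2.0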

The Determinant of a Square Matrix, 2 by 2 Case

Suppose we have the matrix A = [ 1 2 ; 3 4 ]. We know |A| = (1 × 4) − (2 × 3) = −2.

Although it's a little trivial, we can relate that to the technique for larger matrices, as a first step. We have:

|A| = (a11 C11) + (a12 C12)
    = [a11 (−1)^(1+1) M11] + [a12 (−1)^(1+2) M12]
    = a11 M11 − a12 M12,   where M11 = 4 and M12 = 3
    = a11 × 4 − a12 × 3 = (1 × 4) − (2 × 3) = −2

Why is this all so easy? In the 2 by 2 case, the Mij's are scalars. In the 3 by 3 case, the Mij are determinants of 2 by 2 submatrices, which we have to work out first, etc.

3 by 3 Example

Suppose we have the following 3 by 3 matrix:

A = [ 1 2 3 ]
    [ 4 2 6 ]
    [ 7 8 9 ]

Let us compute the determinant via the co-factors of the first row. We have Cij = (−1)^(i+j) Mij, so

C11 = (−1)^(1+1) M11,   where   M11 = | 2 6 |
                                      | 8 9 |

But we know that M11 = (2 × 9) − (6 × 8) = 18 − 48 = −30. We conclude that C11 = −30.

3 by 3 Example Continued

We can do this for the other two elements of the first row. We find:

C12 = (−1)^(1+2) | 4 6 | = 6        C13 = (−1)^(1+3) | 4 2 | = 18
                 | 7 9 |                             | 7 8 |

Hence the determinant, call it Δ for short, is

Δ = a11 C11 + a12 C12 + a13 C13 = [1 × (−30)] + (2 × 6) + (3 × 18) = 36

We could have chosen any other row or column. If we choose the third column, then we have C13 = 18, C23 = 6, C33 = −6, and therefore Δ = (3 × 18) + (6 × 6) + [9 × (−6)] = 36.
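The claim that any row or column gives the same answer can be checked numerically for this matrix (an illustrative aside using numpy):

    import numpy as np

    A = np.array([[1, 2, 3],
                  [4, 2, 6],
                  [7, 8, 9]], dtype=float)

    def minor(M, i, j):
        return np.delete(np.delete(M, i, axis=0), j, axis=1)

    # Expansion along the first row (index 0) and along the third column (index 2):
    first_row = sum(A[0, j] * (-1) ** j * np.linalg.det(minor(A, 0, j)) for j in range(3))
    third_col = sum(A[i, 2] * (-1) ** i * np.linalg.det(minor(A, i, 2)) for i in range(3))

    print(round(first_row), round(third_col))   # 36 36 -- the same answer either way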

Properties of the Determinant

There are some useful properties of determinants (that we won't prove) which can simplify the computation of the determinant significantly.

1 If a row or column of A is multiplied by c, then the determinant of the new matrix is c|A|.
2 Multiplying a row (column) by a non-zero constant and adding it to another row (column) has no effect on the determinant.

Extended Example with Common Factors

Using common factors basically involves taking things outside the brackets, as we usually do, except here we use the properties of the determinant.

Consider the same matrix as before:

A = [ 1 2 3 ]
    [ 4 2 6 ]
    [ 7 8 9 ]

Note that the second row can be written as 2 × [ 2 1 3 ] = [ 4 2 6 ]. We know that multiplying a row by a constant leads the determinant to change by the same factor. It follows that

|A| = 2 × | 1 2 3 |
          | 2 1 3 |
          | 7 8 9 |

Example Continued

Note that column three also has a common factor of 3, i.e., [ 3 ; 3 ; 9 ] = 3 × [ 1 ; 1 ; 3 ]. It follows that

|A| = (2 × 3) × | 1 2 1 |  =  6 × | 1 2 1 |
                | 2 1 1 |         | 2 1 1 |
                | 7 8 3 |         | 7 8 3 |

There are now no common factors; however, we can now make use of the row and column operations.

Example Continued

Since the determinant is not changed by multiplying a row (column) by a constant and adding it to another row (column), we can use this to make some entries in a row or column zero.

Multiplying the first row by −1 and adding it to the second gives

|A| = 6 × | 1  2 1 |
          | 1 −1 0 |
          | 7  8 3 |

What did we do? Well, we started with

6 × | 1 2 1 |
    | 2 1 1 |
    [ 7 8 3 ]

and noted that (−1) × [ 1 2 1 ] + [ 2 1 1 ] = [ 1 −1 0 ], which is the new middle row.

Example Continued

Similarly, multiplying the first row by −3 and adding it to the third row gives

|A| = 6 × | 1  2 1 |
          | 1 −1 0 |
          | 4  2 0 |

Following the same idea as before, we start from

|A| = 6 × | 1  2 1 |
          | 1 −1 0 |
          | 7  8 3 |

and note that (−3) × [ 1 2 1 ] + [ 7 8 3 ] = [ 4 2 0 ], which is the new bottom row.

Example Continued

Why did we do all of this? Well,

|A| = | 1 2 3 |
      | 4 2 6 |
      | 7 8 9 |

is not that easy to compute. However,

|A| = 6 × | 1  2 1 |
          | 1 −1 0 |
          | 4  2 0 |

is easy to compute, as we can take the cofactors of the third column. In that case, we only have to compute one cofactor, since two entries in the column are zero. Therefore, we have

|A| = 6 × [ 1 × (−1)^(1+3) | 1 −1 | ] = 6 × 6 = 36
                           | 4  2 |
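A quick numerical check (not in the slides) that factoring out 2 and 3 and applying the two row operations really left the determinant unchanged:

    import numpy as np

    A = np.array([[1, 2, 3],
                  [4, 2, 6],
                  [7, 8, 9]], dtype=float)

    # The reduced matrix reached above, with the factor 6 taken outside.
    B = np.array([[1, 2, 1],
                  [1, -1, 0],
                  [4, 2, 0]], dtype=float)

    print(np.linalg.det(A))       # approximately 36
    print(6 * np.linalg.det(B))   # approximately 36 as well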

Computing the Inverse Using the Determinant

We observed before that we can find the inverse of a square matrix A if and only if |A| ≠ 0. When the determinant is non-zero, it can be shown that the inverse is given by

A^-1 = (1/|A|) [cij]^T

where [cij] is the matrix of cofactors of A. In words, what we do is the following:
1 Replace each element of A by its cofactor.
2 Take the transpose of the resulting matrix.
3 Scalar multiply this matrix by 1/|A|.

Note: the notation adj(A) = [cij]^T is also used; this matrix is called the adjugate (or adjoint) of A.
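The three steps can be sketched directly in code (illustrative only; np.linalg.inv is the practical tool, and the cofactors here are themselves computed with np.linalg.det):

    import numpy as np

    def inverse_via_cofactors(A):
        """Inverse from A^-1 = (1/|A|) [c_ij]^T, following the three steps above."""
        A = np.asarray(A, dtype=float)
        n = A.shape[0]
        det_A = np.linalg.det(A)
        if np.isclose(det_A, 0.0):
            raise ValueError("matrix is singular: no inverse exists")
        cof = np.zeros((n, n))
        for i in range(n):
            for j in range(n):
                minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
                cof[i, j] = (-1) ** (i + j) * np.linalg.det(minor)  # cofactor C_ij
        return cof.T / det_A    # transpose of the cofactor matrix, scaled by 1/|A|

    A = np.array([[1, 2, 3],
                  [4, 2, 6],
                  [7, 8, 9]])
    print(inverse_via_cofactors(A) @ A)   # approximately the identity matrix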

A 2 by 2 Example of Matrix Inversion

Consider the 2 by 2 matrix

A = [ 3 2 ]
    [ 1 0 ]

We know that |A| = (3 × 0) − (1 × 2) = −2. Since |A| ≠ 0, an inverse must exist.

The cofactor matrix (notice how easy it is in this case, again due to the zeros) and its transpose are

[cij] = [  0 −1 ]        [cij]^T = [  0 −2 ]
        [ −2  3 ]                  [ −1  3 ]

Then,

A^-1 = (1/|A|) [cij]^T = (1/(−2)) [  0 −2 ]  =  [  0    1  ]
                                  [ −1  3 ]     [ 1/2 −3/2 ]
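The same answer falls out of numpy directly (a check, not part of the slides):

    import numpy as np

    A = np.array([[3.0, 2.0],
                  [1.0, 0.0]])

    A_inv = np.linalg.inv(A)
    print(A_inv)        # [[ 0.   1. ]
                        #  [ 0.5 -1.5]]
    print(A @ A_inv)    # the identity matrix, up to rounding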

The Inverse of a Square Matrix

Consider the 2 by 2 matrix

A = [ 1 λ ]
    [ 3 4 ]

Does this matrix have an inverse?

We know an inverse only exists if the matrix is non-singular. For this to hold, we require |A| ≠ 0.

We know that if |A| = (1 × 4) − (3 × λ) = 0 the matrix is singular. That is, we can't find the inverse of A if λ = 4/3.

Singularity tends to become a problem if we have very big matrices (say we have a big model) which are relatively sparse (that is, have lots of zeros in them).

3 by 3 Example

Let us compute the inverse of the matrix that we have seen before:

A = [ 1 2 3 ]
    [ 4 2 6 ]
    [ 7 8 9 ]

We already know the following:

C11 = −30, C12 = 6, C13 = 18, C23 = 6, C33 = −6, and |A| = 36.

Recall we can use the C1j's or the Ci3's to get |A|. The remaining cofactors are C21 = 6, C22 = −12, C31 = 6, C32 = 6. Try this at home.

Matrix Inversion Example Continued

Here the matrix of cofactors is

[cij] = [ −30   6  18 ]
        [   6 −12   6 ]
        [   6   6  −6 ]

The transpose is

[cij]^T = [ −30   6   6 ]
          [   6 −12   6 ]
          [  18   6  −6 ]

The inverse matrix is, as |A| = 36,

A^-1 = (1/|A|) [cij]^T = [ −5/6  1/6   1/6 ]
                         [  1/6 −1/3   1/6 ]
                         [  1/2  1/6  −1/6 ]

Cramer's Rule

Now suppose we have a big linear model. In its most general specification it is written as a system of linear equations:

a11 x1 + a12 x2 + ... + a1n xn = b1
   ...
an1 x1 + an2 x2 + ... + ann xn = bn

This is a system of n linear equations in n unknowns. It can be represented compactly in matrix notation as Ax = b, where

A = [ a11 ... a1n ]      x = [ x1 ]      b = [ b1 ]
    [ ...         ]          [ ...]          [ ...]
    [ an1 ... ann ]          [ xn ]          [ bn ]

Cramer's Rule

We note that, provided |A| ≠ 0 (that is, A has an inverse), we can solve for x by just multiplying both sides of the equation Ax = b by A^-1:

(A^-1 A)x = In x = x,   so   x = A^-1 b

Cramer's rule is just an alternative formula for this; the advantage of the rule is that we can find each xi individually, which may be useful if we don't want to find the values of all the xi.

Cramer's rule states that:

xi = |A_b,i| / |A|

Here A_b,i is the matrix where the ith column of A is replaced by the column vector b.
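Cramer's rule is short to code up (a sketch, assuming |A| is not zero; np.linalg.solve is the practical alternative):

    import numpy as np

    def cramer_solve(A, b):
        """Solve Ax = b one component at a time: x_i = |A with column i replaced by b| / |A|."""
        A = np.asarray(A, dtype=float)
        b = np.asarray(b, dtype=float).ravel()
        det_A = np.linalg.det(A)
        if np.isclose(det_A, 0.0):
            raise ValueError("|A| = 0: Cramer's rule does not apply")
        x = np.empty(A.shape[1])
        for i in range(A.shape[1]):
            A_i = A.copy()
            A_i[:, i] = b                        # replace the i-th column by b
            x[i] = np.linalg.det(A_i) / det_A
        return x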

Examples of Solving for a Specific Variable

Consider the system

[ 2 3 7 ] [ w ]   [ 6 ]
[ 4 4 4 ] [ y ] = [ 2 ]
[ 1 5 1 ] [ z ]   [ 1 ]

Here, |A| = 80; hence the system is solvable.

We can compute w independently of y and z as w = |A_b,1| / |A|, where A_b,1 replaces the first column of A by b:

A_b,1 = [ 6 3 7 ]
        [ 2 4 4 ]
        [ 1 5 1 ]

so w = −48/80 = −3/5. Computation of y and z is left as an exercise.
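A numerical check of the hand calculation (taking the system's entries as transcribed above):

    import numpy as np

    A = np.array([[2, 3, 7],
                  [4, 4, 4],
                  [1, 5, 1]], dtype=float)
    b = np.array([6, 2, 1], dtype=float)

    A_w = A.copy()
    A_w[:, 0] = b                                   # replace the first column by b
    print(np.linalg.det(A_w) / np.linalg.det(A))    # -0.6, i.e. w = -3/5
    print(np.linalg.solve(A, b)[0])                 # the same value from a direct solve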

Why Does Cramer's Rule Work?

Cramer's rule is equivalent to finding the inverse of A. To see this more formally, consider the system

[ a11 a12 a13 ] [ w ]   [ b1 ]
[ a21 a22 a23 ] [ y ] = [ b2 ]
[ a31 a32 a33 ] [ z ]   [ b3 ]

Now note that the inverse of A can be written as

A^-1 = (1/|A|) [ C11 C21 C31 ]
               [ C12 C22 C32 ]
               [ C13 C23 C33 ]

where Cij is the co-factor of aij.

Why Does Cramer's Rule Work?

Hence, we have

[ w ]             (1/|A|) [ b1 C11 + b2 C21 + b3 C31 ]
[ y ] = A^-1 b =          [ b1 C12 + b2 C22 + b3 C32 ]
[ z ]                     [ b1 C13 + b2 C23 + b3 C33 ]

Now note that b1 C11 + b2 C21 + b3 C31 is nothing but the determinant of the matrix where the first column of A has been replaced by the vector b. Similarly, b1 C12 + b2 C22 + b3 C32 is the determinant of the matrix where the second column of A has been replaced by the vector b. You can check that a similar observation holds for b1 C13 + b2 C23 + b3 C33.

This shows that the inverse approach and Cramer's rule are identical for a system of three linear equations, but the argument extends easily to larger systems.

A Demand and Supply Example

Consider a linear model of demand and supply (of ice-creams).
1 Demand for ice-creams depends on the price of ice-creams, the income of individuals and the temperature.
2 The supply of ice-creams also depends on how much the ice-cream company can charge for its product and how hot it is.

qd = γm − bp + αt
qs = δp − εt + d

If income goes up, demand goes up. If the temperature goes up, demand goes up but supply goes down (some of the ice-cream melts before it is sold). If price goes up, demand falls and supply rises. In equilibrium, qd = qs.

A Demand and Supply Example

It is very easy to solve this model. The solution for the price is

p = [γm + (α + ε)t − d] / (δ + b)

Higher temperatures result in higher prices, as people eat more ice-cream and ice-cream melts, reducing supply. We can also draw the model in (p, q) space and shift the curves.

More importantly, we can also represent the model in matrix form:

[  b  1 ] [ p ]   [ γm + αt ]
[ −δ  1 ] [ q ] = [ d − εt  ]

Now we need |A| = b + δ ≠ 0. If we look at p, that makes a lot of economic sense: we can't have an infinite price for ice-cream.

Again, we can also use Cramer's rule to solve for shocks to income or temperature.
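With illustrative parameter values (chosen arbitrarily here; the slide keeps everything symbolic), the matrix form can be solved numerically and compared with the closed-form price:

    import numpy as np

    # Arbitrary illustrative parameters.
    gamma, b, alpha = 0.5, 2.0, 0.3      # demand: q = gamma*m - b*p + alpha*t
    delta, eps, d = 1.5, 0.2, 4.0        # supply: q = delta*p - eps*t + d
    m, t = 100.0, 25.0                   # income and temperature

    A = np.array([[b,      1.0],
                  [-delta, 1.0]])
    rhs = np.array([gamma * m + alpha * t,
                    d - eps * t])

    p, q = np.linalg.solve(A, rhs)
    p_formula = (gamma * m + (alpha + eps) * t - d) / (delta + b)
    print(p, p_formula)   # the two prices agree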

Demand and Supply

The supply and demand example is trivial. However, we might imagine that the entire economy (not just one market) consists of many demand and supply schedules (what we called a big linear model).

The demand and supply schedules will probably display some form of interdependence. That is, consider an economy with ice-creams and apples. If the temperature goes up, the demand for apples might drop. Why? Because, with given income, the demand for ice-cream rises.

If this is the case, our economy consists of an entire vector of prices and a vector of quantities. We want to know the prices and quantities in equilibrium. That turns out to be very complicated. Having a grasp of matrix algebra is then very useful, as we want to know whether an equilibrium exists.

Returning to the Example at the Very Beginning

Consider the macroeconomic model we wrote down at the beginning:

C = a + b(1 − t)Y
I = e − lR
G = Ḡ
L = kY − hR
M = M̄

We want to solve this model to determine the endogenous variables (Y, C, I, R) in terms of the exogenous variables (Ḡ, a, b, t, e, k, h, l, M̄).

We can use all the new tools we have developed to solve the model. We'll focus on solving for consumption as a function of monetary and fiscal policy.

Macro Model Example

The equilibrium in this system is determined by the conditions:

Y = C + I + Ḡ
C = a + b(1 − t)Y
I = e − lR
M̄ = kY − hR

We can write the above system in matrix notation as:

[ 1        −1  −1   0 ] [ Y ]   [ Ḡ ]
[ −b(1−t)   1   0   0 ] [ C ] = [ a ]
[ 0         0   1   l ] [ I ]   [ e ]
[ k         0   0  −h ] [ R ]   [ M̄ ]

The first thing to do is to compute |A|, which is needed no matter which approach we use to solve this system. We choose one of the rows or columns with the maximum number of zeros, since that will simplify the computation.

Example Continued

Suppose we choose row 2. We know Δ = a21 C21 + a22 C22 + a23 C23 + a24 C24. Then, since a21 = −b(1 − t), a22 = 1 and a23 = a24 = 0,

Δ = −b(1 − t) C21 + C22

Also Cij = (−1)^(i+j) Mij. However, it is not as easy as before, because the M2j's are determinants of 3 by 3 matrices. We only need two of them in this case:

C21 = (−1)^(2+1) | −1 −1  0 |        C22 = (−1)^(2+2) | 1 −1  0 |
                 |  0  1  l |                         | 0  1  l |
                 |  0  0 −h |                         | k  0 −h |

Example Continued

Expanding the 3 × 3 determinant in the expression for C21 about its first column (two zero elements), we get

C21 = −[(−1)(1 × (−h) − 0 × l)] = −h

For C22 we can expand the 3 × 3 determinant about its first column (one zero element):

C22 = (−1)^(1+1) × 1 × | 1  l |  +  (−1)^(3+1) × k × | −1 0 |  =  −h − kl
                       | 0 −h |                      |  1 l |

Hence the determinant of the original 4 × 4 matrix is

Δ = bh(1 − t) − h − kl = −h(1 − b(1 − t)) − kl = −[h(1 − b(1 − t)) + kl]

Example Continued

We can now compute the value of any of the endogenous variables using Cramer's rule. Suppose we want to compute C. We replace the second column of A with [ Ḡ a e M̄ ]^T. Then

C = (1/Δ) | 1        Ḡ  −1   0 |
          | −b(1−t)  a   0   0 |
          | 0        e   1   l |
          | k        M̄   0  −h |

Computing the determinant of this 4 × 4 matrix, call it Δ_C, by expanding around the second row, we have

Δ_C = −b(1 − t) × (−1)^(2+1) | Ḡ −1  0 |  +  a × (−1)^(2+2) | 1 −1  0 |
                             | e  1  l |                    | 0  1  l |
                             | M̄  0 −h |                    | k  0 −h |

Example Continued

The second 3 × 3 determinant above has already been found: it equals −h − kl. Expanding the first 3 × 3 determinant around its first row, we have

| Ḡ −1  0 |
| e  1  l |  =  Ḡ (−1)^2 [1 × (−h) − 0 × l] + (−1)(−1)^3 [e × (−h) − M̄ l]
| M̄  0 −h |

This is −(hḠ + he + M̄l). Hence,

Δ_C = −b(1 − t)(hḠ + he + M̄l) − a(h + kl) = −[b(1 − t)(hḠ + he + M̄l) + a(h + kl)]

It follows that

C = [b(1 − t)(hḠ + he + M̄l) + a(h + kl)] / [h(1 − b(1 − t)) + kl]

Everything on the right-hand side is exogenous, and Ḡ and M̄ are policy variables.
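As a final check (with arbitrary illustrative parameter values, since the slides keep everything symbolic), the closed form for C agrees with a direct numerical solve of the 4 × 4 system:

    import numpy as np

    # Arbitrary illustrative parameters.
    a, b, t = 50.0, 0.8, 0.25
    e, l, k, h = 100.0, 20.0, 0.5, 40.0
    G_bar, M_bar = 200.0, 150.0

    A = np.array([[1.0,          -1.0, -1.0,  0.0],
                  [-b * (1 - t),  1.0,  0.0,  0.0],
                  [0.0,           0.0,  1.0,  l],
                  [k,             0.0,  0.0, -h]])
    rhs = np.array([G_bar, a, e, M_bar])

    Y, C, I, R = np.linalg.solve(A, rhs)

    C_formula = (b * (1 - t) * (h * G_bar + h * e + M_bar * l) + a * (h + k * l)) / \
                (h * (1 - b * (1 - t)) + k * l)
    print(C, C_formula)   # both approximately 442.31 with these parameters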

Roundup

You should now be able to do the following:
1 Add/subtract/multiply matrices.
2 Find the inverse of a matrix and its determinant (which is basically matrix manipulation).
3 Apply these techniques to standard models and use Cramer's rule to solve for endogenous variables as functions of exogenous variables.