Math 117 Chapter 2: Systems of Linear Equations and Matrices
Flathead Valley Community College

1. Systems of Linear Equations

A linear equation in n unknowns is defined by

    a_1 x_1 + a_2 x_2 + ... + a_n x_n = k

where a_1, a_2, ..., a_n and k are real numbers and x_1, x_2, ..., x_n are the variables. An equation is nonlinear if any of the variables are raised to a power, multiplied together, or involved in logarithms, exponentials or trigonometric functions. We do not deal with nonlinear equations in this course.

A system of linear equations is a set of one or more linear equations. In both the Supply/Demand and the Break-Even Analysis problems in Chapter 1 you derived two linear equations with the goal of finding where the two equations were equal to each other. In each case a system of linear equations was solved. A solution to the system is the point or points that satisfy all of the equations.

1.1. Possible Solutions to a System of Linear Equations

Given any system of linear equations there are three possible outcomes when trying to solve the system.

1. One Solution: There is one and only one point (x_1, x_2, ..., x_n) that satisfies all of the equations. Such a system is said to have a unique solution. With two equations and two unknowns the solution is the point where the graphs of the equations intersect.

2. No Solution: There are no points that satisfy all of the equations. With two equations and two unknowns the graphs of the equations are parallel lines with different y-intercepts, so there is no point that satisfies both equations.

3. Infinitely Many Solutions: There are infinitely many points that satisfy every equation in the system. With two equations and two unknowns the graphs of the equations are identical. Warning: infinitely many solutions does not mean that any point will satisfy the system. What it does mean is that any point satisfying one equation will satisfy all of the equations.

Now for just a little more vocabulary. Any system that has a solution, one or more, is called consistent. If no solution exists, the system is called inconsistent. A system that has infinitely many solutions is called dependent. As will be shown later, the solutions to a dependent system are found by choosing any value for one or more of the variables, which in turn determines the values of the remaining variables. If there is one solution or no solution, the system is called independent. The following table summarizes the vocabulary alongside the possible outcomes of a system of linear equations.

    One Solution    Infinitely Many Solutions    No Solution
    Consistent      Consistent                   Inconsistent
    Independent     Dependent                    Independent
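For instance, with two equations and two unknowns: the system x + y = 3, x - y = 1 has the unique solution (2, 1) and is consistent and independent; the system x + y = 3, x + y = 5 has no solution (parallel lines) and is inconsistent and independent; and the system x + y = 3, 2x + 2y = 6 has infinitely many solutions (both equations graph the same line) and is consistent and dependent.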

1.2. Equivalent Equation Operations

One method used to solve systems of linear equations is to transform the original system into a different system that has exactly the same solution(s). The transformations must make the system simpler without changing the answers. If the new system has the same solutions, it is called an equivalent system. There are three transformations that result in an equivalent system.

1. Equation Exchange: The order of any two equations can be switched.

2. Multiply an Equation by a Constant: Both sides of an equation can be multiplied by a nonzero constant. This transformation is usually performed when there is a common factor in every term of the equation or when one wishes to remove fractions from an equation.

3. Add a Multiple of One Equation to Another Equation: This transformation will be the most useful in solving systems. It is very important to really understand this transformation, as one equation is changed drastically by this action. The new equation replaces the equation added to, not the equation that was multiplied by the constant. For example, if one wishes to multiply the first equation by 5 and add it to the third equation, it is the third equation that is changed. The first equation remains exactly the same.

1.3. Echelon Method

The first method for solving systems is called the Echelon Method. The goal of the Echelon Method is to use the equivalent equation operations to rewrite the system into what is called triangular form. For three equations with three unknowns, triangular form looks like

    x + ay + bz = c    (E1)
        y + dz  = e    (E2)
             z  = f.   (E3)

Once triangular form has been achieved it is easy to solve the system using back-substitution. Back-substitution starts at the bottom of the system and works back up the system. In the example above the value for z is given in equation (E3). Substitute the value for z from equation (E3) back into equation (E2) and solve for y. The final step is to substitute the values for z and y back into equation (E1) and solve for x.

To find the triangular form of a system, start at the upper left corner, usually with the variable x, and use equation operations to eliminate x in all of the equations below. Move down one equation and to the right and repeat the elimination process. This is a very mechanical method that, once mastered, will allow one to solve a system with many equations and many variables.
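Back-substitution is mechanical enough to write as a short program. Below is a minimal sketch in Python (the notes themselves work by hand and on the TI calculator); the coefficient values a through f are made up just to match the triangular form above.

    # Back-substitution for the triangular system
    #     x + a*y + b*z = c    (E1)
    #         y + d*z   = e    (E2)
    #               z   = f    (E3)
    # The numbers below are made-up coefficients for illustration.
    a, b, c = 2.0, -1.0, 3.0
    d, e = 4.0, 5.0
    f = 1.0

    z = f                   # (E3) gives z directly
    y = e - d * z           # substitute z into (E2) and solve for y
    x = c - a * y - b * z   # substitute y and z into (E1) and solve for x

    print(x, y, z)          # prints 2.0 1.0 1.0 for these coefficients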

Inconsistent System

Triangular form is very useful for determining whether the system is consistent or not. If the system is inconsistent, the last equation will lead to an impossible equation such as 0 = 5. At this point stop, and state that the system has no solution.

Dependent System

If the final equation ends up as 0 = 0, then the system is probably dependent. In the three-equation, three-unknown case this will look like

    x + ay + bz = c    (E1)
        y + dz  = e    (E2)
              0 = 0.   (E3)

Now both x and y depend on the variable z. The z variable is called a parameter or the free variable. Choosing any value for z and substituting back into equations (E1) and (E2) will result in one solution to the system. Since z can be any number, there are infinitely many solutions to the system.
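For example, if triangular form ends as x + y - z = 2, y + 2z = 3, 0 = 0, then choosing z = t gives y = 3 - 2t and x = 2 - y + z = -1 + 3t, so every value of the parameter t produces a solution (-1 + 3t, 3 - 2t, t).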

2. Matrices and Systems of Linear Equations

After practicing solving systems using the Echelon Method one will quickly find it a bit cumbersome to keep track of all the variables in the equation operations. Matrices can be used to rewrite the system in a more manageable format. A matrix is nothing more than an array of numbers. Each number in the matrix is called an element or entry. The elements of a matrix are referenced by their location in the matrix. To be consistent, the entries will be identified first by the row and then by the column location in the array.

Before rewriting the system it is important that all of the variables are aligned in the same vertical location. Now, by taking only the coefficients of the variables, the system can be written as an augmented matrix. For three equations and three unknowns the x coefficients go in the first column, the y coefficients in the second column, the z coefficients in the third column, and the last column holds all of the constants to the right of the equal sign. It is very important to keep track of what every column represents. For example, the system

     x + 3y - 6z =  7
    2x -  y + 2z =  0
     x +  y + 2z = -1

has the augmented matrix

    [ 1   3  -6 |  7 ]
    [ 2  -1   2 |  0 ]
    [ 1   1   2 | -1 ]
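The same bookkeeping can be done in software as well as on the calculator. A minimal sketch in Python with NumPy (the notes use the TI calculator; the array name aug is just a label chosen here):

    import numpy as np

    # Augmented matrix for the system
    #    x + 3y - 6z =  7
    #   2x -  y + 2z =  0
    #    x +  y + 2z = -1
    # Columns: x coefficients, y coefficients, z coefficients, constants.
    aug = np.array([
        [1.0,  3.0, -6.0,  7.0],
        [2.0, -1.0,  2.0,  0.0],
        [1.0,  1.0,  2.0, -1.0],
    ])

    print(aug.shape)   # (3, 4): 3 rows and 4 columns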

Notice that each row of the augmented matrix gives the coefficients of the corresponding equation. The equivalent operations for the augmented matrix are almost identical to the equivalent equation operations.

2.1. Equivalent Row Operations

1. Row Exchange: Any two rows can be switched.

2. Multiply a Row by a Constant: A row can be multiplied by a nonzero constant.

3. Add a Multiple of One Row to Another Row: Again, this operation will be the most useful in solving the matrix problem. As before, it is very important to understand that the new row replaces the row added to, while the row that was multiplied by the constant stays the same. For example, if one wishes to multiply the first row by 5 and add it to the third row, it is the third row that is changed. The first row remains exactly the same.
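Each row operation is a one-line array update in NumPy. The sketch below is my own illustration, continuing the aug array defined earlier and numbered to match the list above; note that NumPy counts rows from 0, so row 1 of the notes is index 0.

    import numpy as np

    aug = np.array([
        [1.0,  3.0, -6.0,  7.0],
        [2.0, -1.0,  2.0,  0.0],
        [1.0,  1.0,  2.0, -1.0],
    ])

    # 3. Add a multiple of one row to another row: add -2 times row 1 to row 2,
    #    then -1 times row 1 to row 3.  Only the row "added to" changes.
    aug[1] = aug[1] + (-2.0) * aug[0]
    aug[2] = aug[2] + (-1.0) * aug[0]

    # 2. Multiply a row by a nonzero constant: scale the new row 2 by -1/7.
    aug[1] = (-1.0 / 7.0) * aug[1]

    # 1. Row exchange: swapping rows 2 and 3 is a single assignment.
    aug[[1, 2]] = aug[[2, 1]]

    print(aug)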

2.2. Gauss Method

The Gauss Method is the same method as the Echelon Method but uses matrices and equivalent row operations instead of equations and equation operations. In fact, with an augmented matrix the Gauss Method is often called the Echelon Method. The goal is to rewrite the augmented matrix in the triangular form

    [ 1  *  * | * ]
    [ 0  1  * | * ]
    [ 0  0  1 | * ]

A matrix of this form is said to be in row-echelon form; the *'s will be real numbers. To finish solving the system, rewrite the triangular matrix as a system of equations and use back-substitution to solve.

In row-echelon form the first nonzero entry in each row is called a pivot. While it is desirable to have the pivots all ones, this is not necessary. Often it is easier to leave the pivots as they are until triangular form has been established, and then at the end use multiplication by a constant to make each pivot a one.

In row-echelon form a system with no solution will look similar to

    [ 1  *  * | * ]
    [ 0  1  * | * ]
    [ 0  0  0 | 5 ]

Here the last row of the matrix reads 0 = 5, an impossibility.

In row-echelon form a dependent system will result in a last row of zeros:

    [ 1  *  * | * ]
    [ 0  1  * | * ]
    [ 0  0  0 | 0 ]

Notice there is no pivot in the z column. In this case z is referred to as a free variable.

When solving a system using the augmented matrix it is very important to start at the upper left entry, establish a pivot, and use row operations to get zeros for all of the other entries below the pivot. Next move down and to the right to the first nonzero entry for the next pivot and get zeros below that pivot. Repeat this process until the desired triangular form is produced. Warning: deviating from the process of working from upper left to lower right will not lead to the desired triangular matrix.

2.3. Gauss-Jordan Method

It is possible to skip the algebraic back-substitution entirely and still arrive at the correct solution to the system. This is called the Gauss-Jordan Method or reduced row-echelon form. After row operations have been used to achieve row-echelon form, start from the lower right and use row operations to get zeros above the pivot. Move up and to the left to the next pivot and repeat until the matrix is of the form

    [ 1  0  0 | * ]
    [ 0  1  0 | * ]
    [ 0  0  1 | * ]

Now it is just a matter of writing down the solution. When there is no solution to the system, there is no need to continue past row-echelon form; if there is no solution, there will still be no solution after more row operations. Reduced row-echelon form is very useful when there are infinitely many solutions: the columns with a pivot correspond to the dependent variables and the columns without pivots correspond to the independent (free) variables.

2.4. Solving Systems with the Calculator

With a little practice, solving simple systems of two or three equations with two or three unknowns is not that difficult. For these simpler systems you will be expected to show your work solving the systems using row operations on the augmented matrix. For larger systems, and to check solutions to simpler systems, the calculator can be a valuable and time-saving asset.

1. Enter the Augmented Matrix into the Calculator: Once the system has been written as an augmented matrix, the matrix is entered into the calculator. Press 2ND then MATRIX to enter the matrix environment. Move over to EDIT, move down to any matrix you wish, e.g. 1:[A], and press ENTER. First set the size of the matrix, called the dimension of the matrix. The dimension of a matrix is always the number of rows × the number of columns. Now enter the entries of the augmented matrix. After all of the entries have been entered, go back to the main screen using 2ND QUIT. To check the augmented matrix, enter the matrix environment, 2ND MATRIX, and select 1:[A] 3×4. For example, the system

     x + 3y - 6z =  7
    2x -  y + 2z =  0
     x +  y + 2z = -1

appears in the matrix environment as the 3×4 matrix entered above.

2. Row-Echelon Form: To express the augmented matrix in row-echelon form (ref), press 2ND then MATRIX to enter the matrix environment. Arrow over to MATH, arrow up to A:ref( and press ENTER. Again enter the matrix environment, 2ND MATRIX, and select the matrix where the augmented matrix was entered, 1:[A]. Finally close the parentheses, ), and press ENTER. The resulting matrix will be in row-echelon form. Note: row-echelon form is not unique. Do not be surprised if the calculator's result does not match your hand calculations. For the previous system the calculator displays its row-echelon form on the screen.

3. Reduced Row-Echelon Form: The procedure to express the augmented matrix in reduced row-echelon form (rref) is very similar.

Press 2ND then MATRIX to enter the matrix environment. Arrow over to MATH, arrow up to B:rref( and press ENTER. Again enter the matrix environment, 2ND MATRIX, and select the matrix where the augmented matrix was entered, 1:[A]. Finally close the parentheses, ), and press ENTER. The resulting matrix will be in reduced row-echelon form. Reduced row-echelon form is unique, so now your hand calculations should match the calculator. The reduced row-echelon form for our example is

    [ 1  0  0 |  1 ]
    [ 0  1  0 |  0 ]
    [ 0  0  1 | -1 ]

The solution to the system is x = 1, y = 0, z = -1.

As before, we are not going to use spreadsheets to solve systems (method 3 in the textbook). You will, however, be responsible for hand calculations as well as calculator solutions when solving systems of linear equations.
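The same rref check can be done in software. A minimal sketch with SymPy (my own cross-check of the calculator steps; SymPy keeps exact fractions rather than decimals):

    from sympy import Matrix

    aug = Matrix([
        [1,  3, -6,  7],
        [2, -1,  2,  0],
        [1,  1,  2, -1],
    ])

    rref_matrix, pivot_columns = aug.rref()
    print(rref_matrix)    # Matrix([[1, 0, 0, 1], [0, 1, 0, 0], [0, 0, 1, -1]])
    print(pivot_columns)  # (0, 1, 2): a pivot in every variable column, so a unique solution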

3. Matrix Addition and Subtraction

In the previous section matrices were used to solve systems of linear equations, but the range of applications that use matrices is far greater than systems of linear equations alone. In Chapter 1 the calculator was used to find the linear function that best fits data; this is actually a very involved matrix application. At the conclusion of this chapter we will investigate the use of matrices in the economic application of Input-Output models.

Before we get to the applications, the arithmetic of matrices must be discussed. The main components of any arithmetic are addition, an additive identity, additive inverses (subtraction), multiplication (two types), a multiplicative identity and multiplicative inverses (not division). Once the operations and identities are understood, it is important to establish which properties of the arithmetic hold. These properties include the associative, commutative and distributive laws. While many of the properties of real numbers will be the same for matrices, there are a couple of surprises along the way.

3.1. Matrix Vocabulary and Definitions

When entering matrices into the calculator, the size or dimension of the matrix had to be known. A matrix A with m rows and n columns is defined to be an m × n matrix, written dim(A) = m × n. A square matrix has the same number of rows and columns, and thus has dimension n × n. A row matrix or row vector has dimension 1 × n. A column matrix or column vector has dimension m × 1.

In general, capital letters will be used to represent a matrix while lowercase letters will represent elements of the matrix. Subscripts will be used to indicate the location of an element in a matrix. Remember that for matrices it is row first, then column. For example, given the matrix

        [ 1   3  -6   7 ]
    A = [ 2  -1   2   0 ]
        [ 1   1   2  -1 ]

the element in the second row and third column is 2 and will be written a_23 = 2. In general, the element in the i-th row and j-th column is written a_ij. For a matrix with dimension m × n it is also often neater to express the matrix A as A = [a_ij] for 1 ≤ i ≤ m and 1 ≤ j ≤ n instead of writing out the entire matrix. This notation will be used to define the arithmetic operations on matrices.

Matrix Equality

Two matrices are equal if they have the same size and if each pair of corresponding elements is equal. That is, A = B if and only if dim(A) = dim(B) and a_ij = b_ij for all i and j.
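In NumPy the same vocabulary shows up as the shape of an array; a minimal sketch (keeping in mind that NumPy numbers rows and columns from 0 rather than 1):

    import numpy as np

    A = np.array([
        [1,  3, -6,  7],
        [2, -1,  2,  0],
        [1,  1,  2, -1],
    ])

    print(A.shape)   # (3, 4): dim(A) = 3 x 4, rows first and then columns
    print(A[1, 2])   # 2: the element a_23 (second row, third column) with 0-based indices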

3.2. Matrix Addition

Now that the basic definitions and notation are understood it is time to look at the arithmetic of matrices. Once matrix addition has been defined, an identity matrix and an additive inverse follow directly. To verify these definitions it will be useful to enter the following matrices into your calculator:

    A = [ 2  1 ],    B = [ -3   1 ],    C = [ 2  3 ],
        [ 3  5 ]         [  2  -2 ]         [ 4  5 ]

    D = [ 2  1 ],    E = [ 3  5 ],    F = [ -3 ],    G = [  1 ].
                                          [  2 ]         [ -2 ]

Adding Matrices

The sum of two m × n matrices X and Y is the m × n matrix X + Y in which each element is the sum of the corresponding elements of X and Y. Equivalently, given X = [x_ij] and Y = [y_ij], then X + Y = [x_ij + y_ij]. Try using your calculator to add matrices with the same dimension, A + B, as well as matrices with different dimensions, A + D.

Identity Matrix for Addition

An additive identity is a matrix 0 such that A + 0 = 0 + A = A. The identity for matrix addition is called the zero matrix. The zero matrix adjusts its dimension to match that of the matrix A: it is an m × n matrix with 0 for every entry.

Additive Inverse

Given an m × n matrix A, the additive inverse or negative of A is the m × n matrix -A such that A + (-A) = (-A) + A = 0. Given A = [a_ij], the additive inverse is defined by -A = [-a_ij]; that is, each element of A is multiplied by -1.

Subtracting Matrices

Now that the negative of a matrix has been defined, subtraction of matrices follows exactly as with real numbers. For two m × n matrices X and Y, the difference X - Y is the m × n matrix defined by X - Y = X + (-Y). Equivalently, given X = [x_ij] and Y = [y_ij], the difference X - Y is defined by X - Y = [x_ij - y_ij].
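A quick NumPy check of these definitions, using the 2 × 2 matrices A and B and the row matrix D listed above (a sketch; the calculator experiments in the notes work the same way):

    import numpy as np

    A = np.array([[2, 1],
                  [3, 5]])
    B = np.array([[-3,  1],
                  [ 2, -2]])
    D = np.array([[2, 1]])        # 1 x 2 row matrix

    print(A + B)                  # element-by-element sum
    print(A - B)                  # the same as A + (-B)
    print(A + np.zeros((2, 2)))   # adding the zero matrix returns A
    print(A + (-A))               # the additive inverse gives the zero matrix

    # Caution: unlike the calculator, NumPy "broadcasts" some mismatched shapes,
    # so A + D does not raise a dimension error; matrix addition in the sense of
    # the definition above is only the same-dimension case.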

4. Matrix Multiplication

Up to this point matrices have closely followed the properties and definitions of real numbers. Matrix multiplication will be the first deviation from the real numbers. To begin with, there are two different kinds of multiplication: scalar multiplication and matrix multiplication.

4.1. Scalar Multiplication

A scalar is defined to be any real number, usually represented by a lowercase k, l, m or n. Scalar multiplication most closely resembles repeated addition of real numbers. For real numbers, 3 + 3 + 3 + 3 + 3 is the sum of the number 3 five times, or 5 · 3. For matrices the definition is similar: A + A + A + A + A is defined as 5 · A, or just 5A. More formally,

Product of a Matrix and a Scalar

The product of a scalar k and a matrix X is the matrix kX, each of whose elements is k times the corresponding element of X: kX = [kx_ij].
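In NumPy a scalar multiple is written with *, and the repeated-addition view can be checked directly (a small sketch using the matrix A from above):

    import numpy as np

    A = np.array([[2, 1],
                  [3, 5]])

    print(5 * A)              # every element of A multiplied by the scalar 5
    print(A + A + A + A + A)  # the same matrix obtained by repeated addition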

4.2. Multiplying Two Matrices

The ultimate goal of this section is to determine the requirements and method for multiplying two matrices. Before jumping to the general case, consider only the product of a row matrix and a column matrix. Using the matrices entered into your calculator earlier, try multiplying D · F. Now try D · G. In both cases a 1 × 2 matrix multiplies a 2 × 1 matrix and the product is a 1 × 1 matrix. Now notice

    D · F = [ 2  1 ] [ -3 ] = 2(-3) + 1(2) = -6 + 2 = -4
                     [  2 ]

and

    D · G = [ 2  1 ] [  1 ] = 2(1) + 1(-2) = 2 - 2 = 0.
                     [ -2 ]

Row Matrix times a Column Matrix

In order to multiply a row matrix times a column matrix, the number of elements in the row must be the same as the number of elements in the column. The resulting product is a 1 × 1 matrix whose sole element is the sum of the products of the corresponding elements of the row and column.

Now is probably a good time to point out that order is very important here. The definition above is only for (row matrix) × (column matrix). Use your calculator to try F · D. This leads to our first big difference between the real numbers and matrices. In general, matrix multiplication is not commutative: A · B ≠ B · A. From this point forward make certain the row matrix is on the left and the column matrix is on the right.

Next notice that the columns of matrix

    B = [ -3   1 ]
        [  2  -2 ]

are identical to F and G above. Any guesses about the product D · B? Try it.

of things: 1. dim(d) = 1 2, dim(b) = 2 2 and dim(db) = 1 2. In general, the number of columns of the first matrix must equal the number of rows in the second matrix in order to multiply row by column. 2. The entries in DB are the same as DF and DG, respectively. DB = D [ F G ] = [ D F D G ] In other words, to find the entry in the first row and first column of the product of DB, multiply the first row of D by the first column of B. To find the entry in the first row and second column of the product of DB, multiply the first row of D by the second column of B From these examples the definition of matrix multiplication arises. Product of Two Matrices Let A be an m n matrix and B be an n k matrix. To find the element in the i th row and j th column of the product matrix AB, multiply the i th row of A by the j th column of B. The product matrix AB is an m k matrix. To reiterate there are two very important properties of matrix multiplication: 1. AB is defined if and only if the number of columns of A is the same as the number of rows of B. If dim(a) = m n and dim(b) = n k, then dim(ab) = m k. 2. In general matrix multiplication is not commutative, AB BA. Page 20 of 28

5. Matrix Inverses

In matrix arithmetic there is no division, only a multiplicative inverse. Before an inverse matrix can be defined, one must define a multiplicative identity for matrices.

5.1. Identity Matrix

For real numbers the multiplicative identity is the number 1: for any real number a, 1 · a = a · 1 = a. Taking a similar requirement for matrices, an identity matrix I must satisfy A · I = I · A = A. In this definition the commutative property not only holds, but is required. In order for both multiplications to be defined, the identity I and the matrix A must both be square.

Identity Matrix

The identity matrix is the unique square matrix I = [e_ij] such that

    e_ij = 1 for i = j,
    e_ij = 0 for i ≠ j.

Do not be alarmed by the notation; the identity is a square matrix with ones along the main diagonal and zeros everywhere else. Below are the 2 × 2, 3 × 3 and 4 × 4 identity matrices.

    I_2 = [ 1  0 ],    I_3 = [ 1  0  0 ],    I_4 = [ 1  0  0  0 ]
          [ 0  1 ]           [ 0  1  0 ]           [ 0  1  0  0 ]
                             [ 0  0  1 ]           [ 0  0  1  0 ]
                                                   [ 0  0  0  1 ]

5.2. The Multiplicative Inverse

Once the identity matrix is defined it is relatively easy to define an inverse for a matrix A.

Multiplicative Inverse Matrix

Given a square matrix A, the inverse of A, denoted A^-1, is a matrix such that A · A^-1 = A^-1 · A = I.

Recall that the real number 0 has no multiplicative inverse: there is no number a such that a · 0 = 1. Just as with the real numbers, not every square matrix A has an inverse. Actually finding a multiplicative inverse of A is much more work.
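A quick NumPy check of the identity matrix and of a matrix with no inverse (the singular matrix S below is a made-up example):

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [3.0, 5.0]])
    I = np.eye(2)                 # the 2 x 2 identity matrix

    print(A @ I)                  # A times I returns A
    print(I @ A)                  # I times A returns A

    S = np.array([[1.0, 2.0],
                  [2.0, 4.0]])    # ad - bc = 1*4 - 2*2 = 0, so S has no inverse
    try:
        np.linalg.inv(S)
    except np.linalg.LinAlgError:
        print("S is singular: it has no multiplicative inverse")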

5.3. Finding A^-1

While defining A^-1 is relatively straightforward, finding a multiplicative inverse for A is much more work. The formula for finding the inverse of a 2 × 2 matrix will be given; for all higher dimensions the calculator will be used.

Inverse of a 2 × 2 Matrix

Given

    A = [ a  b ],
        [ c  d ]

the inverse of A is

    A^-1 = 1/(ad - bc) [  d  -b ].
                       [ -c   a ]

Notice the scalar in front, 1/(ad - bc). If ad - bc = 0, a division-by-zero error is encountered; in this case the matrix A does not have an inverse. For the 2 × 2 case it is often faster to check whether ad - bc = 0 and use the formula above than to use your calculator to find the inverse.
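The 2 × 2 formula translates into a few lines of Python. This is a sketch of the boxed formula above (the function name inverse_2x2 is just a label chosen here), returning None when ad - bc = 0:

    import numpy as np

    def inverse_2x2(A):
        """Inverse of a 2 x 2 matrix via A^-1 = 1/(ad - bc) * [[d, -b], [-c, a]]."""
        a, b = A[0, 0], A[0, 1]
        c, d = A[1, 0], A[1, 1]
        det = a * d - b * c
        if det == 0:
            return None                          # no inverse when ad - bc = 0
        return (1.0 / det) * np.array([[ d, -b],
                                       [-c,  a]])

    A = np.array([[2.0, 1.0],
                  [3.0, 5.0]])
    print(inverse_2x2(A))           # [[ 5/7, -1/7], [-3/7, 2/7]] as decimals
    print(A @ inverse_2x2(A))       # the 2 x 2 identity, up to rounding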

Finding A^-1 by Calculator

For square matrices with dimension larger than 2 × 2 we will use the textbook's method 2 only. Finding an inverse matrix is actually a very computationally challenging task that is executed easily with your calculator.

1. First enter the matrix A into your calculator and return to the computation screen.

2. Select the matrix A from the matrix environment: 2ND, MATRIX, 1:[A].

3. Press the x^-1 button (this is actually the same button you have been using to enter the matrix environment) and press ENTER.

4. More than likely the resulting matrix has a lot of decimals. It is usually possible to express the inverse matrix with fractional entries: press MATH, 1:►Frac and then ENTER.

5.4. Solving Systems Using A^-1

To end a very long chapter, let us return to the beginning: solving systems of linear equations. Earlier we solved the system

     x + 3y - 6z =  7
    2x -  y + 2z =  0
     x +  y + 2z = -1

using an augmented matrix and the rref command. Consider the system as two equal column matrices,

    [  x + 3y - 6z ]   [  7 ]
    [ 2x -  y + 2z ] = [  0 ].
    [  x +  y + 2z ]   [ -1 ]

The trick here is to consider only the left side of the equation and realize that the column matrix is really the product of two matrices, one known and one unknown:

    [  x + 3y - 6z ]   [ 1   3  -6 ] [ x ]
    [ 2x -  y + 2z ] = [ 2  -1   2 ] [ y ].
    [  x +  y + 2z ]   [ 1   1   2 ] [ z ]

The known matrix is called the coefficient matrix. This is nothing more than the augmented matrix without the column of constants. Let the coefficient matrix be called A and the unknown matrix x:

    A = [ 1   3  -6 ],    x = [ x ].
        [ 2  -1   2 ]         [ y ]
        [ 1   1   2 ]         [ z ]

Do not confuse the column matrix x with the variable x in the system. Finally, let the column of constants be labeled b:

    b = [  7 ].
        [  0 ]
        [ -1 ]

Now the system is of the form Ax = b.

Solving Ax = b Using Matrix Inverses

To solve a system of equations Ax = b, where A is the matrix of coefficients, x is the matrix of variables, and b is the matrix of constants:

1. Find A^-1.

2. Multiply both sides of the equation on the left by A^-1 to get x = A^-1 b.
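Carrying out x = A^-1 b in NumPy for this example (a sketch; np.linalg.solve gives the same answer without forming the inverse explicitly and is usually preferred):

    import numpy as np

    A = np.array([[1.0,  3.0, -6.0],
                  [2.0, -1.0,  2.0],
                  [1.0,  1.0,  2.0]])
    b = np.array([[ 7.0],
                  [ 0.0],
                  [-1.0]])

    x = np.linalg.inv(A) @ b        # x = A^-1 b
    print(x)                        # [[ 1.], [ 0.], [-1.]]: x = 1, y = 0, z = -1

    print(np.linalg.solve(A, b))    # same solution without computing A^-1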

6. Input-Output Models

The final application of this chapter brings together all of the matrix arithmetic learned into a very useful economic modeling technique. In this model it is assumed there are n commodities produced, and each commodity uses some or all of the other commodities in its production. An input-output matrix, or technological matrix, is used to represent what commodities are used to produce one unit of another commodity. Each column represents the commodity being produced and each row represents the amount of that commodity used in the production of the commodity in the column. For example, the input-output matrix A may represent the production of Agriculture, Manufacturing and Transportation:

                          Agriculture  Manufacturing  Transportation
        Agriculture     [      0            1/4            1/3      ]
    A = Manufacturing   [     1/2            0             1/4      ]
        Transportation  [     1/4           1/4             0       ]

The first column shows that 1/2 unit of manufacturing and 1/4 unit of transportation will be used to create one unit of agriculture.

A production matrix X is a column matrix representing the total number of units of each commodity produced. Keep in mind that some of the units produced are used to produce other commodities, so the production matrix is a gross production, not the total units available. To find the number of units available, the number of units used in production must be subtracted from the production matrix.

Multiplying the input-output matrix by the production matrix, AX, yields the number of units used in production. Finally, the total units available for sale is given by X - AX.

In practice the production matrix is not given, but must be found. In general, an economy will be able to use a given quantity of each commodity. The quantity the economy desires to use is called the demand matrix, D, which is also a column matrix. The producers of the commodities want to have enough units available after production to satisfy the demand. In matrix language this amounts to solving the matrix equation X - AX = D. A little matrix algebra allows one to solve for X:

    X - AX = D
    (I - A)X = D
    X = (I - A)^-1 D

There is yet another method that may be used to find the production matrix X. Once the matrix equation is in the form (I - A)X = D, it may be written as an augmented matrix [(I - A) | D] and reduced row-echelon form used to produce the desired result. Personally, the second method is more appealing and less computationally expensive.
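A NumPy version of the open-model calculation for the agriculture/manufacturing/transportation matrix above; the demand numbers are made up for illustration, since the notes do not fix a particular demand matrix:

    import numpy as np

    # Input-output (technological) matrix: column = commodity being produced,
    # row = amount of that commodity used per unit produced.
    A = np.array([[0.0,  0.25, 1/3 ],
                  [0.5,  0.0,  0.25],
                  [0.25, 0.25, 0.0 ]])

    # Hypothetical demand (agriculture, manufacturing, transportation).
    D = np.array([[100.0],
                  [200.0],
                  [300.0]])

    I = np.eye(3)
    X = np.linalg.inv(I - A) @ D      # production needed: X = (I - A)^-1 D
    print(X)

    print(np.linalg.solve(I - A, D))  # same result, solving (I - A)X = D directly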

The problem above is called an open model. In the open model the goal is to have some surplus of each commodity to meet the demand of the public. In a closed input-output model there is no surplus: X = AX. A closed model may be desirable in a commune or a community that is self-sufficient. Now the matrix problem amounts to solving (I - A)X = 0. In this case the augmented matrix must be used. The augmented matrix in the closed case, [(I - A) | 0], has a last column of zeros that will not change during row operations, so it is only necessary to find rref(I - A). Two very important things to remember:

1. Don't forget the column of zeros that belongs in the last column of the augmented matrix. Include the zeros before rewriting the solution equations.

2. There will always be infinitely many solutions; that is, there will be a free variable in rref(I - A). In most cases choose the smallest value for the free variable that leaves whole-number solutions.
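For the closed case, here is a sketch with SymPy using a made-up closed input-output matrix (each column sums to 1, unlike the open-model matrix above); the rref of the augmented matrix shows the expected free variable:

    from sympy import Matrix, Rational, eye, zeros

    # Hypothetical closed input-output matrix: every column sums to 1.
    A = Matrix([[Rational(1, 2), Rational(1, 4), Rational(1, 4)],
                [Rational(1, 4), Rational(1, 2), Rational(1, 4)],
                [Rational(1, 4), Rational(1, 4), Rational(1, 2)]])

    M = (eye(3) - A).row_join(zeros(3, 1))   # augmented matrix [I - A | 0]
    rref_matrix, pivots = M.rref()
    print(rref_matrix)   # rows read x1 - x3 = 0, x2 - x3 = 0, 0 = 0
    print(pivots)        # (0, 1): column 3 has no pivot, so x3 is the free variable

    # Choosing the smallest whole-number value x3 = 1 gives the solution (1, 1, 1).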