SCHOOL OF MATHEMATICS MATHEMATICS FOR PART I ENGINEERING. Self Study Course


MODULE 17 MATRICES II

Module Topics
1. Inverse of a matrix using cofactors
2. Sets of linear equations
3. Solution of sets of linear equations using the elimination method
4. Inverse of a matrix using the elimination method

A: Work Scheme based on JAMES (FOURTH EDITION)

1. You discovered in Module 16 how to determine the adjoint matrix. The latter provides a direct method for calculating the inverse of a matrix. Turn to p.341 and study section 5.4, up to the start of Example 5.21. The inverse of a matrix is defined near the start of this section. There are two cases to consider, depending on whether |A| = 0 (A is singular) or |A| ≠ 0 (A is non-singular). When A is non-singular the inverse A^-1 exists, and can be calculated using the shaded formula on p.341. This is often called the direct method of calculating the inverse, or the cofactor method.

Study Example 5.21. In the solution to part (a), the matrix adj A is calculated by first obtaining the minors. For a 2 × 2 matrix, omitting a row and a column leaves the minor as the determinant of a single element, which is the element itself. Hence, for A one obtains M_11 = |3| = 3 and A_11 = (-1)^(1+1) M_11 = 3. The other cofactors can be found by a similar argument. The value of the determinant is clearly 1(3) - 2(2) = 3 - 4 = -1, as stated in the solution. The inverse A^-1 can then easily be calculated using the general formula.

2. Study the theory on p.343, and work through Example 5.22. The inverses A^-1 and B^-1 are found by the method used in Example 5.21.

***Do Exercise 51 on p.345***

***Do Exercise A: Use the direct method to find the inverse of the matrix
( 1 2 )
( 2 1 )  ***

3. An extremely important use of matrices is in the solution of sets (or systems) of linear equations. These occur most frequently in the numerical solution of problems. In such situations the number of equations can be huge, and solutions are possible only with the use of computers.
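The cofactor method described in section 1 can be sketched in a few lines of code. The following is a minimal Python illustration (not from the course text; the function names and the use of exact `Fraction` arithmetic are my own choices), computing A^-1 = adj(A) / |A| with determinants evaluated by Laplace expansion:

```python
from fractions import Fraction

def inverse_by_cofactors(A):
    """Invert a square matrix via the cofactor (adjoint) formula
    A^-1 = adj(A) / det(A).  Raises ValueError when A is singular."""
    n = len(A)
    A = [[Fraction(x) for x in row] for row in A]

    def minor(M, i, j):
        # delete row i and column j
        return [row[:j] + row[j + 1:] for k, row in enumerate(M) if k != i]

    def det(M):
        if len(M) == 1:
            return M[0][0]
        # Laplace expansion along the first row
        return sum((-1) ** j * M[0][j] * det(minor(M, 0, j))
                   for j in range(len(M)))

    d = det(A)
    if d == 0:
        raise ValueError("matrix is singular (|A| = 0)")
    # cofactor matrix; adj(A) is its transpose
    cof = [[(-1) ** (i + j) * det(minor(A, i, j)) for j in range(n)]
           for i in range(n)]
    return [[cof[j][i] / d for j in range(n)] for i in range(n)]

# Example 5.21's matrix, with |A| = 1(3) - 2(2) = -1:
print(inverse_by_cofactors([[1, 2], [2, 3]]))  # inverse is [[-3, 2], [2, -1]]
```

As the text notes, this is fine for small matrices but becomes very expensive for large ones, since the cost of evaluating determinants this way grows factorially.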
It is important, however, that you understand the principles underlying these numerical methods, and in this module the theory is presented and applied by hand to small systems.

Study section 5.5, which starts on p.347, up to the beginning of Example 5.24. The simultaneous linear equations (5.18) are a system of n equations in n unknowns, so if the system is written in the matrix form AX = b then A is an n × n square matrix. There are four different cases to consider, depending on whether or not |A| = 0 and whether or not b = 0. The types of solution to expect in these four cases, (a)-(d), are stated on pp.347 and 348.

4. Work through all five parts of Example 5.24. The purpose of this Example is to give you practice in classifying systems of equations. The determination of the solutions, where they exist, is looked at in more detail in the following sections.

Study Example 5.25. The details of calculating A^-1 are omitted, but you should be able to determine the inverse and find the unique solution X. Work through Examples 5.26 and 5.27. You will need to fill in the details of the calculation of the determinant in Example 5.26. Eigenvalues and eigenvectors, which appear in Example 5.27, are investigated in Module 18.

5. Read more of section 5.5 on pp.351 and 352, up to the beginning of Example 5.27. Cramer's rule on p.352 is a neat way of writing the solution. It should be emphasised, however, that it is an inefficient way of calculating the solution, since determinants of large matrices are very time-consuming to evaluate.

***Do Exercise 63 on p.354***

6. Having discussed the types of solution that can arise, it is now important to discover appropriate methods for finding those solutions. Elimination methods are commonly used, and these are discussed in section 5.5.2, starting on p.356. Study this section, stopping three lines above expression (5.25) in the middle of p.358. Note that the row operations apply to the elements of A and b, but leave the elements of X unchanged.

The elementary row operations introduced on p.357 can be used to convert a matrix A into upper-triangular form. The general procedure for solving a system of equations in upper-triangular form is stated on pp.358 and 359, but it is easier to understand by looking at Example 5.30, which is discussed below.

Study Example 5.30. The system is written in matrix form, and the elementary row operations are then used to convert the equations to upper-triangular form. The reduced system of equations can then be written

x + 2y + 3z = 10,
y + (4/3)z = 10/3,
z = 1.
With back-substitution you work back through these reduced equations, starting with the last. Clearly the third equation gives z = 1, and this can then be substituted into the next-to-last equation to give

y = 10/3 - (4/3)z = 10/3 - (4/3)(1) = 6/3 = 2.

Substituting for y and z into the first equation then enables x to be calculated easily. (At the end of the calculation you should always substitute your final answer into the original equations to make sure that your solution is correct. If the equations are not satisfied, try to find the error.)

The above procedure can be written as separate algorithms for special matrices, but the basic method is unchanged. Hence, omit the sections on the tridiagonal (or Thomas) algorithm and Gauss elimination, and move on to p.364. Work through Examples 5.31 and 5.32.

***Do Exercise 64 on p.354***

***Do Exercise 73 on p.369 using the standard elimination method***

7. As mentioned earlier, the cofactor method is not efficient for calculating the inverses of large matrices, since it takes a long time to evaluate large determinants. Fortunately there are better methods of calculating inverses, and one of them involves the elimination method. This procedure is not discussed in J., so it is given below.

Suppose you require the inverse of the n × n square matrix A. Form a new matrix (A | I), in which the n × n unit matrix I is appended to A as extra columns, so that the new matrix is n × 2n. The basic method is to use row operations to reduce A to the unit matrix I. By simultaneously carrying out the same row operations on I, the latter changes to A^-1. Hence the matrix (A | I) becomes (I | A^-1).
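The back-substitution step described above can be sketched in code. The following is a minimal Python illustration (not part of the course text; the names are my own), applied to the reduced upper-triangular system from Example 5.30:

```python
from fractions import Fraction

def back_substitute(U, b):
    """Solve Ux = b for an upper-triangular matrix U, working back
    through the equations from the last one upwards."""
    n = len(U)
    x = [Fraction(0)] * n
    for i in range(n - 1, -1, -1):
        # subtract the already-known terms, then divide by the diagonal entry
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (Fraction(b[i]) - s) / U[i][i]
    return x

# Reduced system from Example 5.30:
#   x + 2y + 3z = 10,  y + (4/3)z = 10/3,  z = 1
U = [[Fraction(1), Fraction(2), Fraction(3)],
     [Fraction(0), Fraction(1), Fraction(4, 3)],
     [Fraction(0), Fraction(0), Fraction(1)]]
b = [Fraction(10), Fraction(10, 3), Fraction(1)]
print(back_substitute(U, b))  # x = 3, y = 2, z = 1
```

Exact `Fraction` arithmetic is used here so the hand calculation with thirds is reproduced without rounding error.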

The elimination method discussed in section 6 reduces a matrix to upper-triangular form. During this process the elements below the leading diagonal in a column are made zero. To reduce A to the unit matrix it is also necessary to make the elements above the leading diagonal zero. Start with the first column, and then consider columns 2, 3, ... in turn. We illustrate the method with the example below.

Example: If

A = (  1  2  3 )
    ( -1  1  1 )
    (  0  1 -1 )

use the elimination method to determine A^-1.

Start with the matrix (A | I):

(  1  2  3 | 1 0 0 )
( -1  1  1 | 0 1 0 )
(  0  1 -1 | 0 0 1 )

Add row 1 to row 2:

( 1  2  3 | 1 0 0 )
( 0  3  4 | 1 1 0 )
( 0  1 -1 | 0 0 1 )

Now move to the second column, where we need zeros both above and below the element 3. Divide row 2 by 3:

( 1  2    3 |  1   0   0 )
( 0  1  4/3 | 1/3 1/3  0 )
( 0  1   -1 |  0   0   1 )

Subtract 2 × row 2 from row 1:

( 1  0  1/3 | 1/3 -2/3  0 )
( 0  1  4/3 | 1/3  1/3  0 )
( 0  1   -1 |  0    0   1 )

Subtract row 2 from row 3:

( 1  0   1/3 |  1/3  -2/3  0 )
( 0  1   4/3 |  1/3   1/3  0 )
( 0  0  -7/3 | -1/3  -1/3  1 )

Multiply row 3 by -3/7:

( 1  0  1/3 | 1/3  -2/3    0 )
( 0  1  4/3 | 1/3   1/3    0 )
( 0  0    1 | 1/7   1/7 -3/7 )

The first and second columns are now correct, so we move on to the third column. Subtract (1/3) × row 3 from row 1:

( 1  0  0 | 2/7  -5/7   1/7 )
( 0  1  4/3 | 1/3  1/3    0 )
( 0  0  1 | 1/7   1/7  -3/7 )

Subtract (4/3) × row 3 from row 2:

( 1  0  0 | 2/7  -5/7   1/7 )
( 0  1  0 | 1/7   1/7   4/7 )
( 0  0  1 | 1/7   1/7  -3/7 )

The matrix A has been changed to I, and the theory says that the right-hand side I has become A^-1. Hence it has been shown that

A^-1 = ( 2/7  -5/7   1/7 )
       ( 1/7   1/7   4/7 )
       ( 1/7   1/7  -3/7 )

The evaluation of A^-1 by the above method appears lengthy, but for large matrices it can be shown to be more efficient than determining the inverse by the cofactor method.
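The reduction of (A | I) to (I | A^-1) illustrated above can be written as a short routine. The following is a minimal Python sketch (my own, not from J.; for simplicity it performs no row interchanges, so it assumes every pivot encountered is non-zero, as in the example above):

```python
from fractions import Fraction

def inverse_by_elimination(A):
    """Gauss-Jordan elimination: reduce the augmented matrix (A | I)
    to (I | A^-1) using elementary row operations."""
    n = len(A)
    # build the n x 2n augmented matrix (A | I) in exact arithmetic
    M = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        # scale the pivot row so the pivot becomes 1 (pivot assumed non-zero)
        p = M[col][col]
        M[col] = [x / p for x in M[col]]
        # clear the rest of the column, both above and below the pivot
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    # the right-hand half is now A^-1
    return [row[n:] for row in M]

# The worked example above:
A = [[1, 2, 3], [-1, 1, 1], [0, 1, -1]]
for row in inverse_by_elimination(A):
    print(row)   # rows of A^-1: (2/7, -5/7, 1/7), (1/7, 1/7, 4/7), (1/7, 1/7, -3/7)
```

A production routine would interchange rows to bring the largest available pivot into position (partial pivoting); that refinement is omitted here to keep the sketch close to the hand calculation.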

***Do Exercise B: Using the elimination method, determine the inverses of the matrices

(i)  ( 1 2 )      (ii)  ( 1 2 1 )
     ( 2 1 )            ( 0 1 2 )
                        ( 1 4 1 )  ***

8. To complete the module it is instructive to read the section on ill-conditioning, which starts on p.367. Look through Example 5.35, which shows that difficulties arise when solving the system AX = b when |A| is very small. Read the remainder of this section, up to the beginning of Exercises 5.5.3.

B: Work Scheme based on STROUD (SIXTH EDITION)

S. covers part, but not all, of this Module. The calculation of the inverse of a matrix using cofactors can be found in Programme 5 of S. Start at p.569 and work through frames 28-34.

Before reading the section on solving sets of linear equations, it would be helpful to work through sections 3 and 4 of the scheme based on J., which state the general cases that can arise. Then return to p.572 of S. and study frames 35-46. The latter show how to solve systems of n equations in n unknowns, AX = b, using either the matrix inverse or the elimination method. Note, however, that S. considers only the straightforward situation in which the determinant of A is non-zero, so that its inverse A^-1 exists.

The calculation of the inverse of a matrix using the elimination method is not in S., so to complete the module work through section 7 of A: Work Scheme based on JAMES (FOURTH EDITION).
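The ill-conditioning discussed in section 8 is easy to demonstrate on a small system. The following hypothetical 2 × 2 example is my own, not Example 5.35 from the text: the two lines are nearly parallel, so |A| is very small, and a tiny change in the right-hand side b moves the solution a long way.

```python
from fractions import Fraction as F

def solve_2x2(a, b, c, d, e, f):
    """Solve ax + by = e, cx + dy = f by Cramer's rule."""
    det = a * d - b * c          # |A|, which is tiny for this system
    return ((e * d - b * f) / det, (a * f - e * c) / det)

# Lines x + y = 2 and x + 1.001y = 2 are almost parallel: |A| = 0.001.
x1, y1 = solve_2x2(F(1), F(1), F(1), F("1.001"), F(2), F(2))
# Perturb the second right-hand side by only 0.001 ...
x2, y2 = solve_2x2(F(1), F(1), F(1), F("1.001"), F(2), F("2.001"))
print((x1, y1), (x2, y2))  # ... and the solution jumps from (2, 0) to (1, 1)
```

A change of 0.05% in one entry of b has changed both unknowns by 1, i.e. by 50% or more; with such systems, rounding errors in a computer solution can be amplified in just the same way.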

Specimen Test 17

1. (i) State a formal expression, involving cofactors, for the inverse of a square matrix A.
(ii) Hence find the inverse of
( 1 3 )
( 2 4 )

2. (i) State a condition on the matrix A for the set of n inhomogeneous equations in n unknowns AX = b to have a unique solution.
(ii) State a condition on the matrix A for the set of n homogeneous equations in n unknowns AX = 0 to have a non-trivial solution.

3. Find the value of α which is necessary for the set of equations
αx + y - z = 0
2x + 3y + 3z = 0
4x + 5y + z = 0
to have a non-trivial solution. (You are not asked to solve the equations.)

4. Use the elimination method to solve the set of equations
x + 2y + z = 10
2x + y + z = 6
10x - y + 3z = 2

5. Use the elimination method to find the inverse of the matrix
( 1 1 1 )
( 2 0 2 )
( 2 2 1 )