# Solving Systems of Linear Equations


## LECTURE 5. Solving Systems of Linear Equations

Recall that we introduced the notion of matrices as a way of standardizing the expression of systems of linear equations. In today's lecture I shall show how this matrix machinery can also be used to solve such systems. Before we embark on solving systems of equations via matrices, however, let me remind you of what such solutions should look like.

## 1. The Geometry of Linear Systems

Consider a linear system of $m$ equations in $n$ unknowns:

(5.1)
$$
\begin{aligned}
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &= b_1 \\
a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n &= b_2 \\
&\;\;\vdots \\
a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n &= b_m
\end{aligned}
$$

What does the solution set look like? Let's examine this question case by case, in a setting where we can easily visualize the solution sets.

**0 equations in 3 unknowns.** This corresponds to a situation where you have 3 variables $x_1, x_2, x_3$ with no relations between them. Being free to choose whatever value we want for each of the 3 variables, it is clear that the solution set is just $\mathbb{R}^3$, the 3-dimensional space of ordered triples of real numbers.

**1 equation in 3 unknowns.** In this case, use the equation to express one variable, say $x_1$ (assuming $a_{11} \neq 0$), in terms of the other variables:
$$
x_1 = \frac{1}{a_{11}}\left(b_1 - a_{12}x_2 - a_{13}x_3\right).
$$
The remaining variables $x_2$ and $x_3$ are then unrestricted. Letting these variables range freely over $\mathbb{R}$ fills out a 2-dimensional plane in $\mathbb{R}^3$. Thus, in the case of 1 equation in 3 unknowns, the solution space is a 2-dimensional plane.

**2 equations in 3 unknowns.** As in the preceding example, the solution set of each individual equation is a 2-dimensional plane, and the solution set of the pair of equations is the intersection of these two planes (for the points common to both planes are exactly the points satisfying both equations). Here there are three possibilities:

(1) The two planes do not intersect; that is, the two planes are parallel but distinct. Since they share no common point, there is no simultaneous solution of both equations.
(2) The intersection of the two planes is a line. This is the generic case.
(3) The two planes coincide. In this case, the two equations must be somehow redundant.

Thus we have either no solution, a 1-dimensional solution space (i.e., a line), or a 2-dimensional solution space.

**3 equations in 3 unknowns.** In this case, the solution set corresponds to the intersection of the three planes determined by the 3 equations. There are four possibilities:

(1) The three planes share no common point. In this case there is no solution.
(2) The three planes have exactly one point in common. This is the generic situation.

(3) The three planes share a single line.
(4) The three planes all coincide.

Thus, we either have no solution, a 0-dimensional solution (i.e., a point), a 1-dimensional solution (i.e., a line), or a 2-dimensional solution. We now summarize and generalize this discussion as follows.

**Theorem 5.1.** *Consider a linear system (5.1) of $m$ equations in $n$ unknowns. The solution set of such a system is either:*

(1) *the empty set $\{\}$; i.e., there is no solution; or*
(2) *a hyperplane of dimension greater than or equal to $n - m$.*

## 2. Elementary Row Operations

In the preceding lecture we remarked that our new-fangled matrix algebra allows us to represent linear systems such as (5.1) succinctly as matrix equations:

(5.2)
$$
\begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & & & \vdots \\
a_{m1} & a_{m2} & \cdots & a_{mn}
\end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}
=
\begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{bmatrix}
$$

For example, the linear system

(5.3)
$$
\begin{aligned}
x_1 + 3x_2 &= 3 \\
x_1 + x_2 &= 1
\end{aligned}
$$

can be represented as

(5.4)
$$
\begin{bmatrix} 1 & 3 \\ 1 & 1 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}
=
\begin{bmatrix} 3 \\ 1 \end{bmatrix}
$$

Now, when solving linear systems like (5.3) it is very common to create new but equivalent equations by, for example, multiplying an equation by a constant or adding one equation to another. In fact, we have

**Theorem 5.2.** *Let $S$ be a system of $m$ linear equations in $n$ unknowns and let $S'$ be another system of $m$ equations in $n$ unknowns obtained from $S$ by applying some combination of the following operations:*

- *interchanging the order of two equations;*
- *multiplying one equation by a non-zero constant;*
- *replacing an equation with the sum of itself and a multiple of another equation in the system.*

*Then the solution sets of $S$ and $S'$ are identical.*

In particular, the solution set of (5.3) coincides with the solution set of

(5.5)
$$
\begin{aligned}
x_1 + 3x_2 &= 3 \\
-x_1 - x_2 &= -1
\end{aligned}
$$

(where we have multiplied the second equation by $-1$), as well as the solution set of

(5.6)
$$
\begin{aligned}
x_1 + 3x_2 &= 3 \\
2x_2 &= 2
\end{aligned}
$$

(where we have replaced the second row in (5.5) by its sum with the first row), as well as the solution set of

(5.7)
$$
\begin{aligned}
x_1 &= 0 \\
2x_2 &= 2
\end{aligned}
$$

(where we have replaced the first row in (5.6) by its sum with $-3/2$ times the second row). We can thus solve a linear system by means of the elementary operations described in the theorem above.

Now because there is a matrix equation corresponding to every system of linear equations, each of the operations described in the theorem above corresponds to a matrix operation. To translate these operations into our matrix language, we first associate with the linear system (5.1) its **augmented matrix**

$$
\begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1n} & b_1 \\
a_{21} & a_{22} & \cdots & a_{2n} & b_2 \\
\vdots & & & & \vdots \\
a_{m1} & a_{m2} & \cdots & a_{mn} & b_m
\end{bmatrix}
$$

This is just the $m \times (n+1)$ matrix obtained by appending the column vector $\mathbf{b}$ to the columns of the $m \times n$ matrix $A$. We shall use the notation $[A \mid \mathbf{b}]$ to denote the augmented matrix of $A$ and $\mathbf{b}$. The augmented matrices corresponding to equations (5.5), (5.6), and (5.7) are thus, respectively,

$$
\begin{bmatrix} 1 & 3 & 3 \\ -1 & -1 & -1 \end{bmatrix}, \qquad
\begin{bmatrix} 1 & 3 & 3 \\ 0 & 2 & 2 \end{bmatrix}, \qquad
\begin{bmatrix} 1 & 0 & 0 \\ 0 & 2 & 2 \end{bmatrix}.
$$

From this example, we can see that the operations in Theorem 5.2 translate to the following operations on the corresponding augmented matrices:

- **Row Interchange:** the interchange of two rows of the augmented matrix.
- **Row Scaling:** multiplication of a row by a non-zero scalar.
- **Row Addition:** replacing a row by its sum with a multiple of another row.

Henceforth, we shall refer to these operations as **Elementary Row Operations**.

## 3. Solving Linear Equations

Let's now reverse direction and think about how to recognize when the system of equations corresponding to a given matrix is easily solved. (To keep our discussion simple, in the examples given below we consider systems of $n$ equations in $n$ unknowns.)
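The three elementary row operations are straightforward to express in code. Below is a minimal Python sketch (the function names and the list-of-lists matrix representation are my own choices, not the lecture's), replaying a reduction of the kind carried out above on the augmented matrix of the small system $x_1 + 3x_2 = 3$, $x_1 + x_2 = 1$:

```python
def row_interchange(M, i, j):
    """Row Interchange: swap rows i and j of the augmented matrix M."""
    M[i], M[j] = M[j], M[i]

def row_scale(M, i, c):
    """Row Scaling: multiply row i by a non-zero scalar c."""
    M[i] = [c * entry for entry in M[i]]

def row_add(M, i, j, c):
    """Row Addition: replace row i by its sum with c times row j."""
    M[i] = [a + c * b for a, b in zip(M[i], M[j])]

# Reduce the augmented matrix of x1 + 3*x2 = 3, x1 + x2 = 1 (rows indexed from 0).
M = [[1, 3, 3],
     [1, 1, 1]]
row_scale(M, 1, -1)       # multiply the second row by -1
row_add(M, 1, 0, 1)       # add the first row to the second row
row_add(M, 0, 1, -3 / 2)  # add -3/2 times the second row to the first row
print(M)                  # the final matrix encodes x1 = 0 and 2*x2 = 2
```

Note that `row_scale` and `row_add` rebuild the affected row rather than mutating its entries one by one; either style works.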

### 3.1 Diagonal Matrices

A matrix equation $A\mathbf{x} = \mathbf{b}$ is trivial to solve if the matrix $A$ is purely diagonal, for then the augmented matrix has the form

$$
[A \mid \mathbf{b}] =
\begin{bmatrix}
a_{11} & 0 & \cdots & 0 & b_1 \\
0 & a_{22} & \cdots & 0 & b_2 \\
\vdots & & \ddots & & \vdots \\
0 & 0 & \cdots & a_{nn} & b_n
\end{bmatrix}
$$

and the corresponding system of equations reduces to

$$
a_{11}x_1 = b_1 \;\Rightarrow\; x_1 = \frac{b_1}{a_{11}}, \qquad
a_{22}x_2 = b_2 \;\Rightarrow\; x_2 = \frac{b_2}{a_{22}}, \qquad \ldots, \qquad
a_{nn}x_n = b_n \;\Rightarrow\; x_n = \frac{b_n}{a_{nn}}.
$$

### 3.2 Lower Triangular Matrices

If the coefficient matrix $A$ has the form

$$
A =
\begin{bmatrix}
a_{11} & 0 & \cdots & 0 & 0 \\
a_{21} & a_{22} & \cdots & 0 & 0 \\
\vdots & & \ddots & & \vdots \\
a_{n-1,1} & a_{n-1,2} & \cdots & a_{n-1,n-1} & 0 \\
a_{n1} & a_{n2} & \cdots & a_{n,n-1} & a_{nn}
\end{bmatrix}
$$

(with zeros everywhere above the diagonal from $a_{11}$ to $a_{nn}$), then it is called **lower triangular**. A matrix equation $A\mathbf{x} = \mathbf{b}$ in which $A$ is lower triangular is also fairly easy to solve, for it is equivalent to a system of equations of the form

$$
\begin{aligned}
a_{11}x_1 &= b_1 \\
a_{21}x_1 + a_{22}x_2 &= b_2 \\
a_{31}x_1 + a_{32}x_2 + a_{33}x_3 &= b_3 \\
&\;\;\vdots \\
a_{n1}x_1 + a_{n2}x_2 + a_{n3}x_3 + \cdots + a_{nn}x_n &= b_n
\end{aligned}
$$

To find the solution of such a system, one solves the first equation for $x_1$ and then substitutes its solution $b_1/a_{11}$ for the variable $x_1$ in the second equation:

$$
a_{21}\left(\frac{b_1}{a_{11}}\right) + a_{22}x_2 = b_2
\;\Rightarrow\;
x_2 = \frac{1}{a_{22}}\left(b_2 - a_{21}\frac{b_1}{a_{11}}\right).
$$

One can now substitute the numerical expressions for $x_1$ and $x_2$ into the third equation and get a numerical expression for $x_3$. Continuing in this manner we can solve the system completely. We call this method solution by **forward substitution**.

### 3.3 Upper Triangular Matrices

We can do a similar thing for systems of equations characterized by an **upper triangular** matrix

$$
A =
\begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1,n-1} & a_{1n} \\
0 & a_{22} & \cdots & a_{2,n-1} & a_{2n} \\
\vdots & & \ddots & & \vdots \\
0 & 0 & \cdots & a_{n-1,n-1} & a_{n-1,n} \\
0 & 0 & \cdots & 0 & a_{nn}
\end{bmatrix}
$$

(that is to say, a matrix with zeros everywhere below and to the left of the diagonal). Then the corresponding system of equations will be

$$
\begin{aligned}
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &= b_1 \\
a_{22}x_2 + \cdots + a_{2n}x_n &= b_2 \\
&\;\;\vdots \\
a_{n-1,n-1}x_{n-1} + a_{n-1,n}x_n &= b_{n-1} \\
a_{nn}x_n &= b_n
\end{aligned}
$$

which can be solved by substituting the solution $x_n = b_n/a_{nn}$ of the last equation into the preceding equation and solving for $x_{n-1}$,

$$
x_{n-1} = \frac{1}{a_{n-1,n-1}}\left(b_{n-1} - a_{n-1,n}\frac{b_n}{a_{nn}}\right),
$$

and then substituting these results into the third-from-last equation, and so on. This method is called solution by **back-substitution**.

### 3.4 Solution via Row Reduction

#### 3.4.1 Gaussian Reduction

In the general case a matrix will be neither upper triangular nor lower triangular, and so neither forward- nor back-substitution can be used directly to solve the corresponding system of equations. However, using the elementary row operations discussed in the preceding section, we can always convert the augmented matrix of a (self-consistent) system of linear equations into an equivalent matrix that is upper triangular; and having done that, we can then use back-substitution to solve the corresponding set of equations. We call this method **Gaussian reduction**. Let me demonstrate the Gaussian method with an example.

**Example 5.3.** Solve the following system of equations using Gaussian reduction:
$$
\begin{aligned}
x_1 + x_2 - x_3 &= 0 \\
x_1 + 2x_2 - 3x_3 &= -3 \\
x_1 + 2x_2 - x_3 &= 3
\end{aligned}
$$

First we write down the corresponding augmented matrix:
$$
\begin{bmatrix}
1 & 1 & -1 & 0 \\
1 & 2 & -3 & -3 \\
1 & 2 & -1 & 3
\end{bmatrix}
$$

We now use elementary row operations to convert this matrix into one that is upper triangular. Adding $-1$ times the first row to the second row produces
$$
\begin{bmatrix}
1 & 1 & -1 & 0 \\
0 & 1 & -2 & -3 \\
1 & 2 & -1 & 3
\end{bmatrix}
$$

Adding $-1$ times the first row to the third row produces
$$
\begin{bmatrix}
1 & 1 & -1 & 0 \\
0 & 1 & -2 & -3 \\
0 & 1 & 0 & 3
\end{bmatrix}
$$

Adding $-1$ times the second row to the third row produces
$$
\begin{bmatrix}
1 & 1 & -1 & 0 \\
0 & 1 & -2 & -3 \\
0 & 0 & 2 & 6
\end{bmatrix}
$$

The augmented matrix is now in upper triangular form. It corresponds to the following system of equations, which can easily be solved via back-substitution:
$$
\begin{aligned}
x_1 + x_2 - x_3 &= 0 \\
x_2 - 2x_3 &= -3 \\
2x_3 &= 6
\end{aligned}
$$
$$
2x_3 = 6 \;\Rightarrow\; x_3 = 3
$$
$$
x_2 - 2(3) = -3 \;\Rightarrow\; x_2 = 3
$$
$$
x_1 + 3 - 3 = 0 \;\Rightarrow\; x_1 = 0
$$

In summary, solution by Gaussian reduction consists of the following steps:

(1) Write down the augmented matrix corresponding to the system of linear equations to be solved.
(2) Use elementary row operations to convert the augmented matrix $[A \mid \mathbf{b}]$ into one that is upper triangular. This is achieved by systematically using the first row to eliminate the first entries in the rows below it, the second row to eliminate the second entries in the rows below it, etc.
(3) Once the augmented matrix has been reduced to upper triangular form, write down the corresponding set of linear equations and use back-substitution to complete the solution.

**Remark 5.4.** The text refers to matrices that are upper triangular as being in row-echelon form.

**Definition 5.5.** A matrix is in **row-echelon form** if it satisfies two conditions:

(1) All rows containing only zeros appear below rows with non-zero entries.
(2) The first non-zero entry in a row appears in a column to the right of the first non-zero entry of any preceding row.

For such a matrix, the first non-zero entry in a row is called the **pivot** for that row.

#### 3.4.2 Gauss-Jordan Reduction

In the Gaussian reduction method described above, one carries out row operations until the augmented matrix is upper triangular and then finishes the problem by converting back into a linear system and using back-substitution to complete the solution. Thus, row reduction is used to carry out only about half the work involved in solving a linear system. It is also possible to construct a solution of a linear system $A\mathbf{x} = \mathbf{b}$ using only row operations. This technique is called **Gauss-Jordan reduction**. The idea is this:

(1) Write down the augmented matrix corresponding to the system of linear equations to be solved.
(2) Use elementary row operations to convert the augmented matrix $[A \mid \mathbf{b}]$ into one that is upper triangular. This is achieved by systematically using the first row to eliminate the first entries in the rows below it, the second row to eliminate the second entries in the rows below it, etc.:
$$
[A \mid \mathbf{b}] =
\begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1n} & b_1 \\
a_{21} & a_{22} & \cdots & a_{2n} & b_2 \\
\vdots & & & & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nn} & b_n
\end{bmatrix}
\xrightarrow{\text{row operations}}
\begin{bmatrix}
a'_{11} & a'_{12} & \cdots & a'_{1n} & b'_1 \\
0 & a'_{22} & \cdots & a'_{2n} & b'_2 \\
\vdots & & \ddots & & \vdots \\
0 & 0 & \cdots & a'_{nn} & b'_n
\end{bmatrix}
$$

(3) Continue to use row operations on the augmented matrix until all the entries above the diagonal of the first block have been eliminated, and only $1$'s appear along the diagonal:
$$
\begin{bmatrix}
a'_{11} & a'_{12} & \cdots & a'_{1n} & b'_1 \\
0 & a'_{22} & \cdots & a'_{2n} & b'_2 \\
\vdots & & \ddots & & \vdots \\
0 & 0 & \cdots & a'_{nn} & b'_n
\end{bmatrix}
\xrightarrow{\text{row operations}}
\begin{bmatrix}
1 & 0 & \cdots & 0 & b''_1 \\
0 & 1 & \cdots & 0 & b''_2 \\
\vdots & & \ddots & & \vdots \\
0 & 0 & \cdots & 1 & b''_n
\end{bmatrix}
= [\,I \mid \mathbf{b}''\,]
$$
(4) The solution of the linear system corresponding to the augmented matrix $[\,I \mid \mathbf{b}''\,]$ is trivial: $\mathbf{x} = I\mathbf{x} = \mathbf{b}''$. Moreover, since $[\,I \mid \mathbf{b}''\,]$ was obtained from $[A \mid \mathbf{b}]$ by row operations, $\mathbf{x} = \mathbf{b}''$ must also be the solution of $A\mathbf{x} = \mathbf{b}$.

**Example 5.6.** Solve the following system of equations using Gauss-Jordan reduction:
$$
\begin{aligned}
x_1 + x_2 - x_3 &= 0 \\
x_1 + 2x_2 - 3x_3 &= -3 \\
x_1 + 2x_2 - x_3 &= 3
\end{aligned}
$$

First we write down the corresponding augmented matrix:
$$
\begin{bmatrix}
1 & 1 & -1 & 0 \\
1 & 2 & -3 & -3 \\
1 & 2 & -1 & 3
\end{bmatrix}
$$

In the preceding example, I demonstrated that this augmented matrix is row equivalent to
$$
\begin{bmatrix}
1 & 1 & -1 & 0 \\
0 & 1 & -2 & -3 \\
0 & 0 & 2 & 6
\end{bmatrix}
$$

We'll now continue to apply row operations until the first block has the form of a $3 \times 3$ identity matrix. Multiplying the third row by $1/2$ yields
$$
\begin{bmatrix}
1 & 1 & -1 & 0 \\
0 & 1 & -2 & -3 \\
0 & 0 & 1 & 3
\end{bmatrix}
$$
Replacing the second row by its sum with $2$ times the third row yields
$$
\begin{bmatrix}
1 & 1 & -1 & 0 \\
0 & 1 & 0 & 3 \\
0 & 0 & 1 & 3
\end{bmatrix}
$$
Replacing the first row by its sum with the third row yields
$$
\begin{bmatrix}
1 & 1 & 0 & 3 \\
0 & 1 & 0 & 3 \\
0 & 0 & 1 & 3
\end{bmatrix}
$$
Replacing the first row by its sum with $-1$ times the second row yields
$$
\begin{bmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 3 \\
0 & 0 & 1 & 3
\end{bmatrix}
$$
The system of equations corresponding to this augmented matrix is
$$
x_1 = 0, \qquad x_2 = 3, \qquad x_3 = 3.
$$
This is our solution.

To maintain contact with the definitions used in the text, we have

**Definition 5.7.** A matrix is in **reduced row-echelon form** if it is in row-echelon form with all pivots equal to $1$ and with zeros in every entry above and below each pivot in its column.
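Both procedures can be sketched in a few lines of Python. The following is a minimal illustration (my own code, not from the text); it assumes a square coefficient matrix whose pivots are never zero in the order encountered, so no row interchanges are needed. A full implementation would interchange rows to avoid dividing by zero.

```python
def gaussian_solve(A, b):
    """Gaussian reduction: row-reduce [A b] to upper triangular form,
    then finish by back-substitution (assumes square A with non-zero
    pivots, so no row interchanges are needed)."""
    n = len(b)
    M = [[float(v) for v in row] + [float(bi)] for row, bi in zip(A, b)]
    for k in range(n):                      # use row k to clear column k below it
        for i in range(k + 1, n):
            c = M[i][k] / M[k][k]
            M[i] = [u - c * v for u, v in zip(M[i], M[k])]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):          # back-substitution
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

def gauss_jordan_solve(A, b):
    """Gauss-Jordan reduction: row-reduce [A b] all the way to [I b''];
    the last column is then the solution (same assumptions as above)."""
    n = len(b)
    M = [[float(v) for v in row] + [float(bi)] for row, bi in zip(A, b)]
    for k in range(n):
        M[k] = [v / M[k][k] for v in M[k]]  # scale row k so its pivot is 1
        for i in range(n):
            if i != k:                      # clear column k above and below
                c = M[i][k]
                M[i] = [u - c * v for u, v in zip(M[i], M[k])]
    return [M[i][n] for i in range(n)]

# A small 3x3 system; both methods agree, as Theorem 5.2 guarantees.
A = [[1, 1, -1], [1, 2, -3], [1, 2, -1]]
b = [0, -3, 3]
print(gaussian_solve(A, b), gauss_jordan_solve(A, b))
```

`gaussian_solve` does roughly half its work with row operations and half with back-substitution, while `gauss_jordan_solve` uses row operations alone, mirroring the two methods described above.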

**Theorem 5.8.** *Let $A\mathbf{x} = \mathbf{b}$ be a linear system and suppose $[A' \mid \mathbf{b}']$ is row equivalent to $[A \mid \mathbf{b}]$, with $A'$ a matrix in row-echelon form. Then:*

(1) *The system $A\mathbf{x} = \mathbf{b}$ is inconsistent if and only if the augmented matrix $[A' \mid \mathbf{b}']$ has a row with all entries to the left of the partition equal to zero and a non-zero entry to the right of the partition.*
(2) *If $A\mathbf{x} = \mathbf{b}$ is consistent and every column of $A'$ contains a pivot, the system has a unique solution.*
(3) *If $A\mathbf{x} = \mathbf{b}$ is consistent and some column of $A'$ has no pivot, the system has infinitely many solutions, with as many free variables as there are pivot-free columns in $A'$.*

## 4. Elementary Matrices

I shall now show that all elementary row operations can be carried out by means of matrix multiplication. Even though carrying out row operations by matrix multiplication is grossly inefficient from a computational point of view, this equivalence remains an important theoretical fact, as you'll see in some of the forthcoming proofs.

**Definition 5.9.** An **elementary matrix** is a matrix that can be obtained from the identity matrix by means of a single elementary row operation.

**Theorem 5.10.** *Let $A$ be an $m \times n$ matrix, and let $E$ be an $m \times m$ elementary matrix. Then multiplication of $A$ on the left by $E$ effects the same elementary row operation on $A$ as that which was performed on the $m \times m$ identity matrix to produce $E$.*

**Corollary 5.11.** *If $A'$ is a matrix that was obtained from $A$ by a sequence of row operations, then there exists a corresponding sequence of elementary matrices $E_1, E_2, \ldots, E_r$ such that*
$$
A' = E_r \cdots E_2 E_1 A.
$$
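Theorem 5.10 is easy to check numerically. The sketch below (my own code; rows are indexed from 0 here) builds an elementary matrix $E$ by applying a single row operation to the identity matrix, and then verifies that left-multiplying a matrix $A$ by $E$ applies the same row operation to $A$:

```python
def identity(n):
    """The n x n identity matrix as a list of lists."""
    return [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]

def matmul(A, B):
    """Ordinary matrix product of two lists-of-lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

# Elementary matrix E: apply "replace row 2 by its sum with -1 times row 0"
# to the 3 x 3 identity matrix.
E = identity(3)
E[2] = [u - v for u, v in zip(E[2], E[0])]

A = [[1.0, 1.0, -1.0],
     [1.0, 2.0, -3.0],
     [1.0, 2.0, -1.0]]
EA = matmul(E, A)

# Theorem 5.10: E * A equals A with the same row operation applied.
expected = [A[0], A[1], [u - v for u, v in zip(A[2], A[0])]]
assert EA == expected
```

Chaining such matrices, as in Corollary 5.11, amounts to composing `matmul` calls: each successive row operation contributes one more elementary factor on the left.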

### Solving Systems of Linear Equations

LECTURE 5 Solving Systems of Linear Equations Recall that we introduced the notion of matrices as a way of standardizing the expression of systems of linear equations In today s lecture I shall show how

### MATH10212 Linear Algebra. Systems of Linear Equations. Definition. An n-dimensional vector is a row or a column of n numbers (or letters): a 1.

MATH10212 Linear Algebra Textbook: D. Poole, Linear Algebra: A Modern Introduction. Thompson, 2006. ISBN 0-534-40596-7. Systems of Linear Equations Definition. An n-dimensional vector is a row or a column

### MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS Systems of Equations and Matrices Representation of a linear system The general system of m equations in n unknowns can be written a x + a 2 x 2 + + a n x n b a

### Solving Linear Systems, Continued and The Inverse of a Matrix

, Continued and The of a Matrix Calculus III Summer 2013, Session II Monday, July 15, 2013 Agenda 1. The rank of a matrix 2. The inverse of a square matrix Gaussian Gaussian solves a linear system by reducing

### Solving Systems of Linear Equations Using Matrices

Solving Systems of Linear Equations Using Matrices What is a Matrix? A matrix is a compact grid or array of numbers. It can be created from a system of equations and used to solve the system of equations.

### 1.2 Solving a System of Linear Equations

1.. SOLVING A SYSTEM OF LINEAR EQUATIONS 1. Solving a System of Linear Equations 1..1 Simple Systems - Basic De nitions As noticed above, the general form of a linear system of m equations in n variables

### MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 2. x n. a 11 a 12 a 1n b 1 a 21 a 22 a 2n b 2 a 31 a 32 a 3n b 3. a m1 a m2 a mn b m

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS 1. SYSTEMS OF EQUATIONS AND MATRICES 1.1. Representation of a linear system. The general system of m equations in n unknowns can be written a 11 x 1 + a 12 x 2 +

### Linear Equations ! 25 30 35\$ & " 350 150% & " 11,750 12,750 13,750% MATHEMATICS LEARNING SERVICE Centre for Learning and Professional Development

MathsTrack (NOTE Feb 2013: This is the old version of MathsTrack. New books will be created during 2013 and 2014) Topic 4 Module 9 Introduction Systems of to Matrices Linear Equations Income = Tickets!

### Systems of Linear Equations

Systems of Linear Equations Beifang Chen Systems of linear equations Linear systems A linear equation in variables x, x,, x n is an equation of the form a x + a x + + a n x n = b, where a, a,, a n and

### Row Echelon Form and Reduced Row Echelon Form

These notes closely follow the presentation of the material given in David C Lay s textbook Linear Algebra and its Applications (3rd edition) These notes are intended primarily for in-class presentation

### December 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B. KITCHENS

December 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B KITCHENS The equation 1 Lines in two-dimensional space (1) 2x y = 3 describes a line in two-dimensional space The coefficients of x and y in the equation

### Section 8.2 Solving a System of Equations Using Matrices (Guassian Elimination)

Section 8. Solving a System of Equations Using Matrices (Guassian Elimination) x + y + z = x y + 4z = x 4y + z = System of Equations x 4 y = 4 z A System in matrix form x A x = b b 4 4 Augmented Matrix

### Lecture 1: Systems of Linear Equations

MTH Elementary Matrix Algebra Professor Chao Huang Department of Mathematics and Statistics Wright State University Lecture 1 Systems of Linear Equations ² Systems of two linear equations with two variables

### MATH 304 Linear Algebra Lecture 18: Rank and nullity of a matrix.

MATH 304 Linear Algebra Lecture 18: Rank and nullity of a matrix. Nullspace Let A = (a ij ) be an m n matrix. Definition. The nullspace of the matrix A, denoted N(A), is the set of all n-dimensional column

### MATH2210 Notebook 1 Fall Semester 2016/2017. 1 MATH2210 Notebook 1 3. 1.1 Solving Systems of Linear Equations... 3

MATH0 Notebook Fall Semester 06/07 prepared by Professor Jenny Baglivo c Copyright 009 07 by Jenny A. Baglivo. All Rights Reserved. Contents MATH0 Notebook 3. Solving Systems of Linear Equations........................

### 5.5. Solving linear systems by the elimination method

55 Solving linear systems by the elimination method Equivalent systems The major technique of solving systems of equations is changing the original problem into another one which is of an easier to solve

### a 11 x 1 + a 12 x 2 + + a 1n x n = b 1 a 21 x 1 + a 22 x 2 + + a 2n x n = b 2.

Chapter 1 LINEAR EQUATIONS 1.1 Introduction to linear equations A linear equation in n unknowns x 1, x,, x n is an equation of the form a 1 x 1 + a x + + a n x n = b, where a 1, a,..., a n, b are given

### Methods for Finding Bases

Methods for Finding Bases Bases for the subspaces of a matrix Row-reduction methods can be used to find bases. Let us now look at an example illustrating how to obtain bases for the row space, null space,

### Abstract: We describe the beautiful LU factorization of a square matrix (or how to write Gaussian elimination in terms of matrix multiplication).

MAT 2 (Badger, Spring 202) LU Factorization Selected Notes September 2, 202 Abstract: We describe the beautiful LU factorization of a square matrix (or how to write Gaussian elimination in terms of matrix

### 1.5 SOLUTION SETS OF LINEAR SYSTEMS

1-2 CHAPTER 1 Linear Equations in Linear Algebra 1.5 SOLUTION SETS OF LINEAR SYSTEMS Many of the concepts and computations in linear algebra involve sets of vectors which are visualized geometrically as

### 2x + y = 3. Since the second equation is precisely the same as the first equation, it is enough to find x and y satisfying the system

1. Systems of linear equations We are interested in the solutions to systems of linear equations. A linear equation is of the form 3x 5y + 2z + w = 3. The key thing is that we don t multiply the variables

### Direct Methods for Solving Linear Systems. Matrix Factorization

Direct Methods for Solving Linear Systems Matrix Factorization Numerical Analysis (9th Edition) R L Burden & J D Faires Beamer Presentation Slides prepared by John Carroll Dublin City University c 2011

### 5 Homogeneous systems

5 Homogeneous systems Definition: A homogeneous (ho-mo-jeen -i-us) system of linear algebraic equations is one in which all the numbers on the right hand side are equal to : a x +... + a n x n =.. a m

### Systems of Linear Equations

Chapter 1 Systems of Linear Equations 1.1 Intro. to systems of linear equations Homework: [Textbook, Ex. 13, 15, 41, 47, 49, 51, 65, 73; page 11-]. Main points in this section: 1. Definition of Linear

### by the matrix A results in a vector which is a reflection of the given

Eigenvalues & Eigenvectors Example Suppose Then So, geometrically, multiplying a vector in by the matrix A results in a vector which is a reflection of the given vector about the y-axis We observe that

### Using row reduction to calculate the inverse and the determinant of a square matrix

Using row reduction to calculate the inverse and the determinant of a square matrix Notes for MATH 0290 Honors by Prof. Anna Vainchtein 1 Inverse of a square matrix An n n square matrix A is called invertible

### Notes on Determinant

ENGG2012B Advanced Engineering Mathematics Notes on Determinant Lecturer: Kenneth Shum Lecture 9-18/02/2013 The determinant of a system of linear equations determines whether the solution is unique, without

### Solutions to Math 51 First Exam January 29, 2015

Solutions to Math 5 First Exam January 29, 25. ( points) (a) Complete the following sentence: A set of vectors {v,..., v k } is defined to be linearly dependent if (2 points) there exist c,... c k R, not

### Solution of Linear Systems

Chapter 3 Solution of Linear Systems In this chapter we study algorithms for possibly the most commonly occurring problem in scientific computing, the solution of linear systems of equations. We start

### MATH 304 Linear Algebra Lecture 9: Subspaces of vector spaces (continued). Span. Spanning set.

MATH 304 Linear Algebra Lecture 9: Subspaces of vector spaces (continued). Span. Spanning set. Vector space A vector space is a set V equipped with two operations, addition V V (x,y) x + y V and scalar

### Arithmetic and Algebra of Matrices

Arithmetic and Algebra of Matrices Math 572: Algebra for Middle School Teachers The University of Montana 1 The Real Numbers 2 Classroom Connection: Systems of Linear Equations 3 Rational Numbers 4 Irrational

### 1 VECTOR SPACES AND SUBSPACES

1 VECTOR SPACES AND SUBSPACES What is a vector? Many are familiar with the concept of a vector as: Something which has magnitude and direction. an ordered pair or triple. a description for quantities such

### Lecture Notes 2: Matrices as Systems of Linear Equations

2: Matrices as Systems of Linear Equations 33A Linear Algebra, Puck Rombach Last updated: April 13, 2016 Systems of Linear Equations Systems of linear equations can represent many things You have probably

### 160 CHAPTER 4. VECTOR SPACES

160 CHAPTER 4. VECTOR SPACES 4. Rank and Nullity In this section, we look at relationships between the row space, column space, null space of a matrix and its transpose. We will derive fundamental results

### Linear Algebra Notes

Linear Algebra Notes Chapter 19 KERNEL AND IMAGE OF A MATRIX Take an n m matrix a 11 a 12 a 1m a 21 a 22 a 2m a n1 a n2 a nm and think of it as a function A : R m R n The kernel of A is defined as Note

### SYSTEMS OF EQUATIONS AND MATRICES WITH THE TI-89. by Joseph Collison

SYSTEMS OF EQUATIONS AND MATRICES WITH THE TI-89 by Joseph Collison Copyright 2000 by Joseph Collison All rights reserved Reproduction or translation of any part of this work beyond that permitted by Sections

### Au = = = 3u. Aw = = = 2w. so the action of A on u and w is very easy to picture: it simply amounts to a stretching by 3 and 2, respectively.

Chapter 7 Eigenvalues and Eigenvectors In this last chapter of our exploration of Linear Algebra we will revisit eigenvalues and eigenvectors of matrices, concepts that were already introduced in Geometry

### 10.2 ITERATIVE METHODS FOR SOLVING LINEAR SYSTEMS. The Jacobi Method

578 CHAPTER 1 NUMERICAL METHODS 1. ITERATIVE METHODS FOR SOLVING LINEAR SYSTEMS As a numerical technique, Gaussian elimination is rather unusual because it is direct. That is, a solution is obtained after

### Reduced echelon form: Add the following conditions to conditions 1, 2, and 3 above:

Section 1.2: Row Reduction and Echelon Forms Echelon form (or row echelon form): 1. All nonzero rows are above any rows of all zeros. 2. Each leading entry (i.e. left most nonzero entry) of a row is in

### ( ) which must be a vector

MATH 37 Linear Transformations from Rn to Rm Dr. Neal, WKU Let T : R n R m be a function which maps vectors from R n to R m. Then T is called a linear transformation if the following two properties are

### 1 Determinants and the Solvability of Linear Systems

1 Determinants and the Solvability of Linear Systems In the last section we learned how to use Gaussian elimination to solve linear systems of n equations in n unknowns The section completely side-stepped

### Recall that two vectors in are perpendicular or orthogonal provided that their dot

Orthogonal Complements and Projections Recall that two vectors in are perpendicular or orthogonal provided that their dot product vanishes That is, if and only if Example 1 The vectors in are orthogonal

### Linearly Independent Sets and Linearly Dependent Sets

These notes closely follow the presentation of the material given in David C. Lay s textbook Linear Algebra and its Applications (3rd edition). These notes are intended primarily for in-class presentation

### 2.2/2.3 - Solving Systems of Linear Equations

c Kathryn Bollinger, August 28, 2011 1 2.2/2.3 - Solving Systems of Linear Equations A Brief Introduction to Matrices Matrices are used to organize data efficiently and will help us to solve systems of

### 1 0 5 3 3 A = 0 0 0 1 3 0 0 0 0 0 0 0 0 0 0

Solutions: Assignment 4.. Find the redundant column vectors of the given matrix A by inspection. Then find a basis of the image of A and a basis of the kernel of A. 5 A The second and third columns are

### 8 Square matrices continued: Determinants

8 Square matrices continued: Determinants 8. Introduction Determinants give us important information about square matrices, and, as we ll soon see, are essential for the computation of eigenvalues. You

### Section 1.7 22 Continued

Section 1.5 23 A homogeneous equation is always consistent. TRUE - The trivial solution is always a solution. The equation Ax = 0 gives an explicit descriptions of its solution set. FALSE - The equation

### 7 Gaussian Elimination and LU Factorization

7 Gaussian Elimination and LU Factorization In this final section on matrix factorization methods for solving Ax = b we want to take a closer look at Gaussian elimination (probably the best known method

### SOLVING LINEAR SYSTEMS

SOLVING LINEAR SYSTEMS Linear systems Ax = b occur widely in applied mathematics They occur as direct formulations of real world problems; but more often, they occur as a part of the numerical analysis

### Factorization Theorems

Chapter 7 Factorization Theorems This chapter highlights a few of the many factorization theorems for matrices While some factorization results are relatively direct, others are iterative While some factorization

### Subspaces of R n LECTURE 7. 1. Subspaces

LECTURE 7 Subspaces of R n Subspaces Definition 7 A subset W of R n is said to be closed under vector addition if for all u, v W, u + v is also in W If rv is in W for all vectors v W and all scalars r

### LS.6 Solution Matrices

LS.6 Solution Matrices In the literature, solutions to linear systems often are expressed using square matrices rather than vectors. You need to get used to the terminology. As before, we state the definitions

### NOTES ON LINEAR TRANSFORMATIONS

NOTES ON LINEAR TRANSFORMATIONS Definition 1. Let V and W be vector spaces. A function T : V W is a linear transformation from V to W if the following two properties hold. i T v + v = T v + T v for all

### 1 Introduction to Matrices

1 Introduction to Matrices In this section, important definitions and results from matrix algebra that are useful in regression analysis are introduced. While all statements below regarding the columns

### Continued Fractions and the Euclidean Algorithm

Continued Fractions and the Euclidean Algorithm Lecture notes prepared for MATH 326, Spring 997 Department of Mathematics and Statistics University at Albany William F Hammond Table of Contents Introduction

### MAT 200, Midterm Exam Solution. a. (5 points) Compute the determinant of the matrix A =

MAT 200, Midterm Exam Solution. (0 points total) a. (5 points) Compute the determinant of the matrix 2 2 0 A = 0 3 0 3 0 Answer: det A = 3. The most efficient way is to develop the determinant along the

### Elementary Matrices and The LU Factorization

lementary Matrices and The LU Factorization Definition: ny matrix obtained by performing a single elementary row operation (RO) on the identity (unit) matrix is called an elementary matrix. There are three

### Linear Codes. Chapter 3. 3.1 Basics

Chapter 3 Linear Codes In order to define codes that we can encode and decode efficiently, we add more structure to the codespace. We shall be mainly interested in linear codes. A linear code of length

### Orthogonal Diagonalization of Symmetric Matrices

MATH10212 Linear Algebra Brief lecture notes 57 Gram Schmidt Process enables us to find an orthogonal basis of a subspace. Let u 1,..., u k be a basis of a subspace V of R n. We begin the process of finding

### 1 Sets and Set Notation.

LINEAR ALGEBRA MATH 27.6 SPRING 23 (COHEN) LECTURE NOTES Sets and Set Notation. Definition (Naive Definition of a Set). A set is any collection of objects, called the elements of that set. We will most

### 5 Systems of Equations

Systems of Equations Concepts: Solutions to Systems of Equations-Graphically and Algebraically Solving Systems - Substitution Method Solving Systems - Elimination Method Using -Dimensional Graphs to Approximate

### University of Lille I PC first year list of exercises n 7. Review

University of Lille I PC first year list of exercises n 7 Review Exercise Solve the following systems in 4 different ways (by substitution, by the Gauss method, by inverting the matrix of coefficients

### 6. Cholesky factorization

6. Cholesky factorization EE103 (Fall 2011-12) triangular matrices forward and backward substitution the Cholesky factorization solving Ax = b with A positive definite inverse of a positive definite matrix

### 4.5 Linear Dependence and Linear Independence

4.5 Linear Dependence and Linear Independence 267 32. {v 1, v 2 }, where v 1, v 2 are collinear vectors in R 3. 33. Prove that if S and S are subsets of a vector space V such that S is a subset of S, then

### Homogeneous systems of algebraic equations. A homogeneous (ho-mo-geen -ius) system of linear algebraic equations is one in which

Homogeneous systems of algebraic equations A homogeneous (ho-mo-geen -ius) system of linear algebraic equations is one in which all the numbers on the right hand side are equal to : a x + + a n x n = a

### 7. LU factorization. factor-solve method. LU factorization. solving Ax = b with A nonsingular. the inverse of a nonsingular matrix

7. LU factorization EE103 (Fall 2011-12) factor-solve method LU factorization solving Ax = b with A nonsingular the inverse of a nonsingular matrix LU factorization algorithm effect of rounding error sparse

### Practical Guide to the Simplex Method of Linear Programming

Practical Guide to the Simplex Method of Linear Programming Marcel Oliver Revised: April, 0 The basic steps of the simplex algorithm Step 1: Write the linear programming problem in standard form Linear

### Linear Equations in Linear Algebra

1 Linear Equations in Linear Algebra 1.5 SOLUTION SETS OF LINEAR SYSTEMS HOMOGENEOUS LINEAR SYSTEMS A system of linear equations is said to be homogeneous if it can be written in the form Ax = 0, where A

### Lecture 3: Finding integer solutions to systems of linear equations

Lecture 3: Finding integer solutions to systems of linear equations Algorithmic Number Theory (Fall 2014) Rutgers University Swastik Kopparty Scribe: Abhishek Bhrushundi 1 Overview The goal of this lecture

### Question 2: How do you solve a matrix equation using the matrix inverse?

Question 2: How do you solve a matrix equation using the matrix inverse? In the previous question, we wrote systems of equations as a matrix equation AX = B. In this format, the matrix A contains the coefficients
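
For a 2 x 2 system the inverse-matrix method X = A^(-1) B can be carried out by hand; this sketch (illustrative names and data) mirrors that computation:

```python
def inv_2x2(A):
    """Inverse of a 2x2 matrix via the adjugate formula; assumes det != 0."""
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matvec(A, x):
    """Matrix-vector product A x."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[1.0, 2.0], [3.0, 4.0]]
b = [5.0, 11.0]
x = matvec(inv_2x2(A), b)   # X = A^(-1) B
```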

### Math 115A HW4 Solutions University of California, Los Angeles. det [5 - 2i, 6 + 4i; -3 + i, 7i] = (5 - 2i)(7i) - (6 + 4i)(-3 + i) = 35i + 14 - (-22 - 6i) = 36 + 41i.

Math 115A HW4 Solutions September 5, 202 University of California, Los Angeles Problem 4..3b Calculate the determinant of [5 - 2i, 6 + 4i; -3 + i, 7i]. Solution: The textbook's instructions give us (5 - 2i)(7i) - (6 + 4i)(
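
Python's built-in complex arithmetic can confirm the 2 x 2 determinant computed in that solution:

```python
# det of [[5 - 2i, 6 + 4i], [-3 + i, 7i]], expanded as ad - bc:
det = (5 - 2j) * (7j) - (6 + 4j) * (-3 + 1j)   # → (36+41j)
```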

### These axioms must hold for all vectors ū, v, and w in V and all scalars c and d.

DEFINITION: A vector space is a nonempty set V of objects, called vectors, on which are defined two operations, called addition and multiplication by scalars (real numbers), subject to the following axioms

### Solution to Homework 2

Solution to Homework 2 Olena Bormashenko September 23, 2011 Section 1.4: 1(a)(b)(i)(k), 4, 5, 14; Section 1.5: 1(a)(b)(c)(d)(e)(n), 2(a)(c), 13, 16, 17, 18, 27 Section 1.4 1. Compute the following, if

### Orthogonal Projections

Orthogonal Projections and Reflections (with exercises) by D. Klain Version .. Corrections and comments are welcome! Orthogonal Projections Let X_1, ..., X_k be a family of linearly independent (column) vectors
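
The basic one-vector case of orthogonal projection (project v onto the line spanned by u) can be sketched as follows; the names and data are illustrative:

```python
def proj(u, v):
    """Orthogonal projection of v onto the line spanned by u."""
    def dot(x, y):
        return sum(a * b for a, b in zip(x, y))
    c = dot(v, u) / dot(u, u)
    return [c * ui for ui in u]

p = proj([1.0, 0.0], [3.0, 4.0])   # → [3.0, 0.0]
```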

### The Determinant: a Means to Calculate Volume

The Determinant: a Means to Calculate Volume Bo Peng August 20, 2007 Abstract This paper gives a definition of the determinant and lists many of its well-known properties Volumes of parallelepipeds are
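
The volume interpretation is easy to check on a box-shaped parallelepiped, where the expected volume is the product of the edge lengths; this 3 x 3 cofactor expansion is an illustrative sketch:

```python
def det3(M):
    """3x3 determinant by cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

# Volume of the parallelepiped spanned by the rows (an axis-aligned box here):
vol = abs(det3([[1.0, 0.0, 0.0], [0.0, 2.0, 0.0], [0.0, 0.0, 3.0]]))   # → 6.0
```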

### Math 312 Homework 1 Solutions

Math 312 Homework 1 Solutions Last modified: July 15, 01 This homework is due on Thursday, July 1th, 01 at 1:10pm Please turn it in during class, or in my mailbox in the main math office (next to 4W1) Please

### LINEAR ALGEBRA W W L CHEN

LINEAR ALGEBRA W W L CHEN c W W L Chen, 1997, 2008 This chapter is available free to all individuals, on understanding that it is not to be used for financial gain, and may be downloaded and/or photocopied,

### General Framework for an Iterative Solution of Ax = b. Jacobi's Method

2.6 Iterative Solutions of Linear Systems Consistent linear systems in real life are solved in one of two ways: by direct calculation (using a matrix factorization,
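
Jacobi's method, named in this excerpt, updates every unknown from the previous iterate using only the other unknowns; a minimal sketch with an illustrative diagonally dominant system:

```python
def jacobi(A, b, iters=50):
    """Jacobi iteration for A x = b, starting from the zero vector.
    Converges for strictly diagonally dominant A."""
    n = len(A)
    x = [0.0] * n
    for _ in range(iters):
        # Every component is computed from the *previous* iterate x.
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

# 2x + y = 3 and x + 2y = 3 have the exact solution x = 1, y = 1.
x = jacobi([[2.0, 1.0], [1.0, 2.0]], [3.0, 3.0])
```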

### Chapter 19. General Matrices. An n x m matrix is an array A = [a_11 a_12 ... a_1m; a_21 a_22 ... a_2m; ...; a_n1 a_n2 ... a_nm]. The matrix A has n row vectors

Chapter 19. General Matrices An n x m matrix is an array A = [a_ij] with rows i = 1, ..., n and columns j = 1, ..., m. The matrix A has n row vectors and m column vectors: row_i(A) = [a_i1, a_i2, ..., a_im] in R^m and col_j(A) = [a_1j, a_2j, ..., a_nj]

### MATH 423 Linear Algebra II Lecture 38: Generalized eigenvectors. Jordan canonical form (continued).

MATH 423 Linear Algebra II Lecture 38: Generalized eigenvectors Jordan canonical form (continued) Jordan canonical form A Jordan block is a square matrix J of the form [λ 1 0 ... 0; 0 λ 1 ... 0; ...; 0 0 0 ... λ], with λ on the diagonal, 1 on the superdiagonal, and 0 elsewhere

### Vector Spaces 4.4 Spanning and Independence

Vector Spaces 4.4 Spanning and Independence October 18 Goals Discuss two important basic concepts: Define linear combination of vectors. Define Span(S) of a set S of vectors. Define linear independence of a set

### The matrix form, the vector form, and the augmented matrix form, respectively, for the system of equations are

Solving Systems of Linear Equations in Matrix Form with rref Learning Goals Determine the solution of a system of equations from the augmented matrix Determine the reduced row echelon form of the augmented
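
A small Gauss-Jordan sketch of rref applied to an augmented matrix; the function name and example system are illustrative:

```python
def rref(M):
    """Reduced row echelon form by Gauss-Jordan elimination."""
    M = [row[:] for row in M]          # work on a copy
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        # Find a pivot in column c at or below row r.
        pivot = next((i for i in range(r, rows) if abs(M[i][c]) > 1e-12), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        M[r] = [v / M[r][c] for v in M[r]]          # scale pivot row to 1
        for i in range(rows):
            if i != r:                              # clear the rest of column c
                factor = M[i][c]
                M[i] = [vi - factor * vr for vi, vr in zip(M[i], M[r])]
        r += 1
    return M

# Augmented matrix for x + y = 3, x - y = 1:
R = rref([[1.0, 1.0, 3.0], [1.0, -1.0, 1.0]])
```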

### Lectures notes on orthogonal matrices (with exercises) 92.222 - Linear Algebra II - Spring 2004 by D. Klain

Lectures notes on orthogonal matrices (with exercises) 92.222 - Linear Algebra II - Spring 2004 by D. Klain 1. Orthogonal matrices and orthonormal sets An n n real-valued matrix A is said to be an orthogonal
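
The defining property of an orthogonal matrix (A^T A = I, i.e. orthonormal columns) is easy to test numerically; this checker and the rotation example are illustrative:

```python
import math

def is_orthogonal(A, tol=1e-12):
    """True if A^T A = I, i.e. the columns of A are orthonormal."""
    n = len(A)
    for i in range(n):
        for j in range(n):
            # Dot product of column i with column j.
            s = sum(A[k][i] * A[k][j] for k in range(n))
            if abs(s - (1.0 if i == j else 0.0)) > tol:
                return False
    return True

c, s = math.cos(0.3), math.sin(0.3)
rot = is_orthogonal([[c, -s], [s, c]])   # rotation matrices are orthogonal
```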

### 9 Multiplication of Vectors: The Scalar or Dot Product

Arkansas Tech University MATH 934: Calculus III Dr. Marcel B Finan 9 Multiplication of Vectors: The Scalar or Dot Product Up to this point we have defined what vectors are and discussed basic notation

### Linear Equations in Linear Algebra

Linear Equations in Linear Algebra WEB INTRODUCTORY EXAMPLE Linear Models in Economics and Engineering It was late summer in 1949. Harvard

### THE DIMENSION OF A VECTOR SPACE

THE DIMENSION OF A VECTOR SPACE KEITH CONRAD This handout is a supplementary discussion leading up to the definition of dimension and some of its basic properties. Let V be a vector space over a field

### Mathematics Course 111: Algebra I Part IV: Vector Spaces

Mathematics Course 111: Algebra I Part IV: Vector Spaces D. R. Wilkins Academic Year 1996-7 9 Vector Spaces A vector space over some field K is an algebraic structure consisting of a set V on which are

### Numerical Analysis Lecture Notes

Numerical Analysis Lecture Notes Peter J. Olver 5. Inner Products and Norms The norm of a vector is a measure of its size. Besides the familiar Euclidean norm based on the dot product, there are a number

### Linear Algebra Notes for Marsden and Tromba Vector Calculus

Linear Algebra Notes for Marsden and Tromba Vector Calculus n-dimensional Euclidean Space and Matrices Definition of n space As was learned in Math b, a point in Euclidean three space can be thought of

### Similar matrices and Jordan form

Similar matrices and Jordan form We've nearly covered the entire heart of linear algebra once we've finished singular value decompositions we'll have seen all the most central topics. A^T A is positive

### Inner product. Definition of inner product

Math 20F Linear Algebra Lecture 25 1 Inner product Review: Definition of inner product. Slide 1 Norm and distance. Orthogonal vectors. Orthogonal complement. Orthogonal basis. Definition of inner product

### Math 215 HW #6 Solutions

Math 215 HW #6 Solutions Problem 34 Show that x - y is orthogonal to x + y if and only if ||x|| = ||y|| Proof First, suppose x - y is orthogonal to x + y; note <x, y> = <y, x>. In other words, 0 = <x - y, x + y> = (x - y)^T
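
The identity behind that homework problem, <x - y, x + y> = ||x||^2 - ||y||^2, can be verified numerically on two vectors of equal norm (illustrative data):

```python
def dot(x, y):
    """Standard dot product."""
    return sum(a * b for a, b in zip(x, y))

x, y = [3.0, 4.0], [5.0, 0.0]          # both have norm 5
# <x - y, x + y> should vanish exactly when ||x|| = ||y||.
d = dot([a - b for a, b in zip(x, y)], [a + b for a, b in zip(x, y)])
```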

### OPRE 6201 : 2. Simplex Method

OPRE 6201 : 2. Simplex Method 1 The Graphical Method: An Example Consider the following linear program: Max 4x_1 + 3x_2 Subject to: 2x_1 + 3x_2 <= 6 (1) 3x_1 + 2x_2 <= 3 (2) 2x_2 <= 5 (3) 2x_1 + x_2 <= 4 (4) x_1, x_2

### Solving Simultaneous Equations and Matrices

Solving Simultaneous Equations and Matrices The following represents a systematic investigation for the steps used to solve two simultaneous linear equations in two unknowns. The motivation for considering

### 8.2. Solution by Inverse Matrix Method. Introduction. Prerequisites. Learning Outcomes

Solution by Inverse Matrix Method 8.2 Introduction The power of matrix algebra is seen in the representation of a system of simultaneous linear equations as a matrix equation. Matrix algebra allows us

### Linear Programming. March 14, 2014

Linear Programming March 14, 2014 Parts of this introduction to linear programming were adapted from Chapter 29 of Introduction to Algorithms, Second Edition, by Cormen, Leiserson, Rivest and Stein [1]. 1

### Lecture 14: Section 3.3

Lecture 14: Section 3.3 Shuanglin Shao October 23, 2013 Definition. Two nonzero vectors u and v in R n are said to be orthogonal (or perpendicular) if u v = 0. We will also agree that the zero vector in

### Similarity and Diagonalization. Similar Matrices

MATH10212 Linear Algebra Brief lecture notes 48 Similarity and Diagonalization Similar Matrices Let A and B be n n matrices. We say that A is similar to B if there is an invertible n n matrix P such that
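
Similarity B = P^(-1) A P preserves quantities such as the trace and determinant; a small numeric check with illustrative matrices:

```python
def matmul(A, B):
    """Matrix product of two (compatibly sized) matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[2.0, 1.0], [0.0, 3.0]]
P = [[1.0, 1.0], [0.0, 1.0]]
Pinv = [[1.0, -1.0], [0.0, 1.0]]       # inverse of P, computed by hand
B = matmul(Pinv, matmul(A, P))          # B is similar to A
same_trace = (B[0][0] + B[1][1]) == (A[0][0] + A[1][1])
```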