Introduction to Linear Systems


Outline

- Linear Systems: Nonhomogeneous and Homogeneous
- The Three Possibilities: Examples and Explanations
- Analytic Geometry
- Toolkit: Reversible Rules and Equivalent Systems
- Reduced Echelon Systems: Recognition and Form
- Writing a Standard Parametric Solution: Last Frame Algorithm
- From Reduced Echelon System to General Solution
- Elimination Algorithm
- Documenting the Toolkit: Equations to Matrices and Matrices to Equations
- The Toolkit and Maple
- RREF Defined, with Illustration
- Matrix Frame Sequence, with Illustration

Linear Nonhomogeneous System
Given numbers a_11, ..., a_mn, b_1, ..., b_m, consider the nonhomogeneous system of m linear equations in n unknowns x_1, x_2, ..., x_n:

(1)    a_11 x_1 + a_12 x_2 + ... + a_1n x_n = b_1,
       a_21 x_1 + a_22 x_2 + ... + a_2n x_n = b_2,
       ...
       a_m1 x_1 + a_m2 x_2 + ... + a_mn x_n = b_m.

Constants a_11, ..., a_mn are called the coefficients of system (1). Constants b_1, ..., b_m are collectively referenced as the right-hand side, right side, or RHS.

Linear Homogeneous System
Given numbers a_11, ..., a_mn, consider the homogeneous system of m linear equations in n unknowns x_1, x_2, ..., x_n:

(2)    a_11 x_1 + a_12 x_2 + ... + a_1n x_n = 0,
       a_21 x_1 + a_22 x_2 + ... + a_2n x_n = 0,
       ...
       a_m1 x_1 + a_m2 x_2 + ... + a_mn x_n = 0.

Constants a_11, ..., a_mn are called the coefficients of system (2).

The Three Possibilities
Solutions of the general linear system (1) may be classified into exactly three possibilities:
1. No solution.
2. Infinitely many solutions.
3. A unique solution.

Examples
A solution (x, y) of a 2 x 2 system is a pair of values that simultaneously satisfies both equations. Consider the following three systems:

System 1 (No Solution):            x - y = 0,    0 = 1.
System 2 (Infinitely Many):        x - 2y = 0,   0 = 0.
System 3 (Unique Solution):        3x + 2y = 1,  x - y = 2.

The Three Possibilities Explained
The signal equation 0 = 1, a false equation, implies system 1 has no solution. Geometrically, the system is realized by two parallel lines. System 2 represents two identical lines. It has infinitely many solutions, one solution (x, y) for each point on the line x - 2y = 0. Analytic geometry writes the solutions for -∞ < t_1 < ∞ as the parametric equations

    x = 2 t_1,
    y = t_1.

System 3 represents two non-parallel lines. It has the unique solution x = 1, y = -1.
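The three-way classification of a 2 x 2 system can be checked in code. The sketch below is not from the notes: `classify_2x2` is a hypothetical helper that decides the case by Cramer's rule when the coefficient determinant is nonzero, and otherwise inspects the augmented minors, using Python's `fractions` module for exact arithmetic.

```python
from fractions import Fraction as F

def classify_2x2(a, b, c, d, e, f):
    """Classify the system  a*x + b*y = e,  c*x + d*y = f.

    Returns ('unique', (x, y)), ('infinite', None) or ('none', None).
    classify_2x2 is a hypothetical helper name, not from the notes.
    """
    det = a * d - b * c
    if det != 0:
        # Unique solution by Cramer's rule.
        return ('unique', (F(e * d - b * f, det), F(a * f - c * e, det)))
    # det == 0: an equation with zero coefficients but nonzero right side
    # is a signal equation, so there is no solution.
    if (a == b == 0 and e != 0) or (c == d == 0 and f != 0):
        return ('none', None)
    # Otherwise the system is consistent exactly when the augmented
    # 2x2 minors also vanish, giving infinitely many solutions.
    if a * f - c * e == 0 and b * f - d * e == 0:
        return ('infinite', None)
    return ('none', None)

# The three example systems above:
assert classify_2x2(1, -1, 0, 0, 0, 1) == ('none', None)       # system 1
assert classify_2x2(1, -2, 0, 0, 0, 0) == ('infinite', None)   # system 2
assert classify_2x2(3, 2, 1, -1, 1, 2) == ('unique', (1, -1))  # system 3
```

The helper reproduces the geometric story: nonzero determinant means non-parallel lines, a vanishing determinant with vanishing augmented minors means identical lines.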

Toolkit: Combo, Swap, Mult
The toolkit rules neither create nor destroy solutions of the original system.

Swap: Two equations can be interchanged without changing the solution set.
Mult: An equation can be multiplied by m ≠ 0 without changing the solution set.
Combo: A multiple of one equation can be added to a different equation without changing the solution set.

Reversible Rules
The mult and combo rules replace an existing equation by a new one, whereas swap replaces two equations. The three operations are reversible:
- The swap rule is reversed by repeating it.
- The mult rule is reversed by repeating it with multiplier 1/m.
- The combo rule is reversed by repeating it with c replaced by -c.
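The reversibility claims can be made checkable in code. This is a minimal Python sketch (the function names swap, mult, combo mirror the toolkit; exact arithmetic via `fractions`); each rule is undone by a rule of the same kind:

```python
from fractions import Fraction as F

def swap(A, s, t):
    """Interchange rows s and t (0-based indices); returns a new matrix."""
    B = [row[:] for row in A]
    B[s], B[t] = B[t], B[s]
    return B

def mult(A, t, m):
    """Multiply row t by a nonzero constant m; returns a new matrix."""
    assert m != 0
    B = [row[:] for row in A]
    B[t] = [m * x for x in B[t]]
    return B

def combo(A, s, t, c):
    """Add c times row s to a different row t; returns a new matrix."""
    assert s != t
    B = [row[:] for row in A]
    B[t] = [x + c * y for x, y in zip(B[t], B[s])]
    return B

A = [[F(3), F(4), F(1)], [F(5), F(-6), F(2)]]   # a sample augmented matrix
assert swap(swap(A, 0, 1), 0, 1) == A           # swap reversed by repeating it
assert mult(mult(A, 0, F(2)), 0, F(1, 2)) == A  # mult reversed with 1/m
assert combo(combo(A, 0, 1, F(7)), 0, 1, F(-7)) == A  # combo with c -> -c
```

Because every operation has an inverse of the same kind, the original system and any system reached by toolkit steps have exactly the same solutions.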

Reduced Echelon Systems
The leading term in an equation is the first nonzero term in variable list order. The leading coefficient is the nonzero number multiplying the variable in the leading term. A lead variable is a variable that appears exactly once in the system of equations, in a leading term with leading coefficient one.

A system of linear algebraic equations in which each nonzero equation has a lead variable is called a reduced echelon system. The conventions:
- Within an equation, variables must appear in variable list order.
- Equations with lead variables are listed in variable list order of their lead variables. Following them are any zero equations.
- A nonzero equation with no variables is not allowed. Such an equation, called a signal equation, looks like 0 = 1.

A free variable in a reduced echelon system is any variable that is not a lead variable. A variable that does not appear at all is a free variable. Otherwise, detection of a free variable must be made from a triangular system in which all lead variables are identified.

Recognition of Reduced Echelon Systems
A linear system (1) is recognized as convertible into a reduced echelon system provided the leading term in each equation has its variable missing from all other equations. Conversion is made with the mult and swap toolkit rules.

Form of a Reduced Echelon System
A reduced echelon system can be written in the special form

(3)    x_i1 + E_11 x_j1 + ... + E_1k x_jk = D_1,
       x_i2 + E_21 x_j1 + ... + E_2k x_jk = D_2,
       ...
       x_im + E_m1 x_j1 + ... + E_mk x_jk = D_m.

Symbols Defined
The numbers E_11, ..., E_mk and D_1, ..., D_m are known constants. Variables x_i1, ..., x_im are the lead variables. The remaining variables x_j1, ..., x_jk are the free variables.

Writing the Solution when Free Variables Exist
Assume variable list order x, y, z, w, u, v for the reduced echelon system below. The lead variables are x, y, z; the remaining variables are free.

(4)    x + 4w + u + v = 1,
       y - u + v = 2,
       z - w + 2u - v = 0.

Last Frame Algorithm
If the reduced echelon system has zero free variables, then the unique solution is already displayed. Otherwise, there is at least one free variable, and the two-step algorithm below writes out the general solution.
1. Set the free variables equal to invented symbols t_1, ..., t_k. Each symbol can assume any value from -∞ to ∞.
2. Solve equations (4) for the lead variables and then back-substitute the free variables to obtain a standard general solution.

From Reduced Echelon System To General Solution
Assume variable list order x, y, z, w, u, v in the reduced echelon system

(5)    x + 4w + u + v = 1,
       y - u + v = 2,
       z - w + 2u - v = 0.

The lead variables in (5) are x, y, z and the free variables are w, u, v. Assign invented symbols t_1, t_2, t_3 to the free variables and back-substitute in (5) to obtain the standard general solution

    x = 1 - 4 t_1 - t_2 - t_3,
    y = 2 + t_2 - t_3,
    z = t_1 - 2 t_2 + t_3,
    w = t_1,
    u = t_2,
    v = t_3.
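The back-substitution result can be verified mechanically. The sketch below (hypothetical helper names `solution` and `residuals`; exact arithmetic with `fractions`) substitutes the parametric formulas of the general solution into each equation of the system and checks that every equation balances for arbitrary parameter values:

```python
from fractions import Fraction as F

def solution(t1, t2, t3):
    """The standard general solution, in variable order x, y, z, w, u, v."""
    x = 1 - 4*t1 - t2 - t3
    y = 2 + t2 - t3
    z = t1 - 2*t2 + t3
    return (x, y, z, t1, t2, t3)   # w = t1, u = t2, v = t3

def residuals(x, y, z, w, u, v):
    """Left side minus right side for each equation of the system."""
    return (x + 4*w + u + v - 1,
            y - u + v - 2,
            z - w + 2*u - v)

# Every choice of parameter values must zero all three residuals.
for t in [(0, 0, 0), (1, 2, 3), (F(-5, 2), 7, F(1, 3))]:
    assert residuals(*solution(*t)) == (0, 0, 0)
```

The loop checks requirement "satisfies the system for all parameter values" only at sample points; the symbolic cancellation holds identically because the free variables cancel term by term.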

Elimination Algorithm
The algorithm employs at each algebraic step one of the three toolkit rules: mult, swap and combo. The objective of each algebraic step is to increase the number of lead variables. The process stops when no more lead variables can be found, in which case the last system of equations is a reduced echelon system. It may also stop when a signal equation is found. Otherwise, equations with lead variables migrate to the top, in variable list order dictated by the lead variable, and equations with no variables are swapped to the end. Within each equation, variables appear in variable list order, left to right. Reversibility of the algebraic steps means that no solutions are created or destroyed: the original system and all systems in the intermediate steps have exactly the same solutions. The final reduced echelon system has a standard general solution. This expression is either the unique solution of the system, or else, in the case of infinitely many solutions, a parametric solution using invented symbols t_1, ..., t_k.
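The elimination algorithm can be sketched compactly in code. The following is a minimal Python sketch (not the notes' Maple code) that works on an augmented matrix: for each column it finds a pivot, then applies swap, mult and combo steps to create a leading one and clear the rest of the column, using `fractions` for exact arithmetic.

```python
from fractions import Fraction as F

def rref(A):
    """Reduce an augmented matrix (list of rows) to reduced row echelon
    form using only swap, mult and combo steps."""
    A = [[F(x) for x in row] for row in A]
    rows, cols = len(A), len(A[0])
    lead = 0                           # next pivot row to create
    for col in range(cols):
        # Find a row at or below `lead` with a nonzero entry in this column.
        piv = next((r for r in range(lead, rows) if A[r][col] != 0), None)
        if piv is None:
            continue                   # no lead variable in this column
        A[lead], A[piv] = A[piv], A[lead]        # swap
        m = A[lead][col]
        A[lead] = [x / m for x in A[lead]]       # mult: make a leading one
        for r in range(rows):                    # combo: clear the column
            if r != lead and A[r][col] != 0:
                c = A[r][col]
                A[r] = [x - c * y for x, y in zip(A[r], A[lead])]
        lead += 1
        if lead == rows:
            break
    return A

# A dependent 2 x 2 system reduces to one lead variable plus a zero row.
assert rref([[1, -2, 0], [2, -4, 0]]) == [[1, -2, 0], [0, 0, 0]]
```

Each pass either increases the number of lead variables or skips a column, so the loop terminates with a reduced echelon system.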

Theorem 1 (Method of Elimination) Every linear system has either no solution or else it has exactly the same solutions as an equivalent reduced echelon system, obtained by repeated application of the three toolkit rules swap, mult and combo.

Documenting the 3 Rules
Throughout, symbols s and t stand for source and target equations, and symbols c and m stand for constant and multiplier. The multiplier must be nonzero. The symbol R used in textbooks abbreviates Row, which corresponds exactly to equation.

Swap(s,t): Interchange equation/row s and equation/row t.
    Textbooks: SWAP R_s, R_t.  Blackboard: swap(s,t).
Mult(t,m): Multiply equation/row t by m ≠ 0.
    Textbooks: (m) R_t.  Blackboard: mult(t,m).
Combo(s,t,c): Multiply equation/row s by c and then add it to a different equation/row t.
    Textbooks: R_t = (c) R_s + R_t.  Blackboard: combo(s,t,c).

From Equations to Matrices
The system

    3x + 4y = 1,
    5x - 6y = 2

can be represented as the augmented matrix

    C = [ 3   4   1 ]
        [ 5  -6   2 ].

The process is automated for any number of variables by observing that each column of C is a partial derivative: column 1 of C is the partial on x, column 2 of C is the partial on y. The rightmost column of C contains the RHS of each equation.

From Matrices to Equations
The variable names x, y are written above columns 1 and 2 of the augmented matrix, as below.

      x   y
    [ 3   4   1 ]
    [ 5  -6   2 ]

The scalar system is reconstructed from the augmented matrix by taking dot products of the symbols on the top row against the rows of the matrix. Each dot product answer is followed by an equal sign and then the number in the third column.
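The dot-product reconstruction is easy to mimic in code. Below, `to_equations` is a hypothetical helper (not part of the notes) that rebuilds each scalar equation as a string from a row of the augmented matrix, treating the last entry of each row as the RHS:

```python
def to_equations(C, names):
    """Rebuild scalar equations from an augmented matrix.

    C is a list of rows; names lists the variables written above the
    coefficient columns. to_equations is a hypothetical helper name.
    """
    eqs = []
    for row in C:
        # Dot product of the variable names against the coefficient part.
        terms = [f"{a}*{v}" for a, v in zip(row, names) if a != 0]
        eqs.append(" + ".join(terms) + f" = {row[-1]}")
    return eqs

C = [[3, 4, 1], [5, -6, 2]]
# -> ['3*x + 4*y = 1', '5*x + -6*y = 2']
print(to_equations(C, ["x", "y"]))
```

Because `zip` stops at the shorter sequence, the RHS column is automatically excluded from the dot product.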

The Three Rules and Maple
Newer versions of Maple use the LinearAlgebra package and a single function RowOperation() to accomplish what is done by the three functions addrow(), mulrow(), swaprow() in the older linalg package. A conversion table appears below.

    Hand-written      maple linalg         maple LinearAlgebra
    swap(s,t)         swaprow(A,s,t)       RowOperation(A,[t,s])
    mult(t,c)         mulrow(A,t,c)        RowOperation(A,t,c)
    combo(s,t,c)      addrow(A,s,t,c)      RowOperation(A,[t,s],c)

Handwritten to Maple Code Conversion
Ref: Utah Maple Tutorial, course web site.

    swap := (A,s,t) -> linalg[swaprow](A,s,t):
    mult := (A,t,m) -> linalg[mulrow](A,t,m):
    combo := (A,s,t,c) -> linalg[addrow](A,s,t,c):
    C1 := Matrix([[3,2,2],[3,1,2],[1,1,1]]);
    C2 := combo(C1,1,2,-1);
    C3 := combo(C2,1,3,-1/3);

RREF
The reduced row-echelon form of a matrix, or rref, is specified by the following requirements. The matrix can be A for A x = 0, or the augmented matrix aug(A, b) for A x = b; in the augmented case a signal equation 0 = 1 is possible, depending on interpretation.
- Zero rows appear last.
- Each nonzero row has first nonzero element 1, called a leading one.
- The column in which a leading one appears, called a pivot column, has all other entries equal to zero.
- The pivot columns appear as consecutive initial columns of the identity matrix I. Trailing columns of I might be absent.
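The requirements translate directly into a checker. The sketch below (hypothetical helper name `is_rref`) tests each bullet in turn: zero rows last, leading ones, cleared pivot columns, and pivots appearing in left-to-right order, which is what "consecutive initial columns of I" forces.

```python
def is_rref(A):
    """Check the rref requirements on a matrix given as a list of rows.
    is_rref is a hypothetical helper name, not from the notes."""
    last_lead = -1
    seen_zero_row = False
    for row in A:
        nz = [j for j, x in enumerate(row) if x != 0]
        if not nz:
            seen_zero_row = True          # zero rows must come last
            continue
        if seen_zero_row:
            return False                  # nonzero row after a zero row
        j = nz[0]
        if row[j] != 1:
            return False                  # leading entry must be 1
        if j <= last_lead:
            return False                  # pivots must move strictly right
        if any(r[j] != 0 for r in A if r is not row):
            return False                  # pivot column otherwise zero
        last_lead = j
    return True

assert is_rref([[1, 2, 0, 3], [0, 0, 1, 4], [0, 0, 0, 0]])
assert not is_rref([[2, 0], [0, 1]])      # leading entry is 2, not 1
```

A checker like this is handy for verifying by machine that a hand computation really produced an rref.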

RREF Illustration
The matrix (6) below is a typical rref satisfying the preceding properties. Displayed second is the reduced echelon system (7) in the variables x_1, ..., x_8 represented by the augmented matrix (6).

(6)    [ 1  2  0  3  4  0   5  0   6 ]
       [ 0  0  1  7  8  0   9  0  10 ]
       [ 0  0  0  0  0  1  11  0  12 ]
       [ 0  0  0  0  0  0   0  1  13 ]
       [ 0  0  0  0  0  0   0  0   0 ]
       [ 0  0  0  0  0  0   0  0   0 ]
       [ 0  0  0  0  0  0   0  0   0 ]

(7)    x_1 + 2 x_2 + 3 x_4 + 4 x_5 + 5 x_7 = 6
             x_3 + 7 x_4 + 8 x_5 + 9 x_7 = 10
                             x_6 + 11 x_7 = 12
                                      x_8 = 13
                                        0 = 0
                                        0 = 0
                                        0 = 0

Matrix (6) is an rref because it passes the two checkpoints: the pivot columns 1, 3, 6, 8 appear as the initial 4 columns of the 7 x 7 identity matrix I, in natural order, and the trailing 3 columns of I are absent.

Linear System Frame Sequence
A sequence of swap, multiply and combination steps applied to a system of equations is called a frame sequence. Each step (a frame) consists of exactly one operation. The First Frame is the original system and the Last Frame is the reduced echelon system. Frames are documented by the acronyms swap, combo, mult. Arithmetic detail is suppressed: only results appear. The viewpoint is that a camera is pointed at the workspace of an assistant who writes the mathematics, and after the completion of each step, a photo is taken. The filmstrip of photos is the frame sequence. The sequence is related to a video of the events, because it is a filmstrip of certain video frames.

Matrix Frame Sequences
The same terminology applies to systems A x = b represented by an augmented matrix C = aug(A, b). The First Frame is C and the Last Frame is rref(C).

A Matrix Frame Sequence Illustration
Steps in a frame sequence can be documented by the notation swap(s,t), mult(t,m), combo(s,t,c), each written next to the target row. During the sequence, initial columns of the identity, called pivot columns, are created as steps toward the rref, using the three operations mult, swap, combo. Trailing columns of the identity might not appear. Required is that pivot columns occur as consecutive initial columns of the identity matrix.

Frame 1 (original augmented matrix):
    [ 1  2 -1  0  1 ]
    [ 1  4 -1  0  2 ]
    [ 0  1  1  0  1 ]
    [ 0  0  0  0  0 ]

Frame 2, combo(1,2,-1), pivot column 1 completed:
    [ 1  2 -1  0  1 ]
    [ 0  2  0  0  1 ]
    [ 0  1  1  0  1 ]
    [ 0  0  0  0  0 ]

Frame 3, swap(2,3):
    [ 1  2 -1  0  1 ]
    [ 0  1  1  0  1 ]
    [ 0  2  0  0  1 ]
    [ 0  0  0  0  0 ]

Frame 4, combo(2,3,-2):
    [ 1  2 -1  0  1 ]
    [ 0  1  1  0  1 ]
    [ 0  0 -2  0 -1 ]
    [ 0  0  0  0  0 ]

Frame 5, combo(2,1,-2), pivot column 2 completed:
    [ 1  0 -3  0 -1 ]
    [ 0  1  1  0  1 ]
    [ 0  0 -2  0 -1 ]
    [ 0  0  0  0  0 ]

Frame 6, mult(3,-1/2):
    [ 1  0 -3  0  -1  ]
    [ 0  1  1  0   1  ]
    [ 0  0  1  0  1/2 ]
    [ 0  0  0  0   0  ]

Frame 7, combo(3,2,-1):
    [ 1  0 -3  0  -1  ]
    [ 0  1  0  0  1/2 ]
    [ 0  0  1  0  1/2 ]
    [ 0  0  0  0   0  ]

Last Frame, combo(3,1,3), pivot column 3 completed, rref found:
    [ 1  0  0  0  1/2 ]
    [ 0  1  0  0  1/2 ]
    [ 0  0  1  0  1/2 ]
    [ 0  0  0  0   0  ]
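The documented steps of the frame sequence can be replayed by machine. This sketch re-implements the three toolkit operations as in-place Python functions with 1-based row numbers (matching the blackboard notation), applies the listed operations combo(1,2,-1), swap(2,3), combo(2,3,-2), combo(2,1,-2), mult(3,-1/2), combo(3,2,-1), combo(3,1,3) to the original augmented matrix, and checks the Last Frame:

```python
from fractions import Fraction as F

def swap(A, s, t):      # interchange rows s and t (1-based)
    A[s-1], A[t-1] = A[t-1], A[s-1]

def mult(A, t, m):      # multiply row t by m != 0
    A[t-1] = [m * x for x in A[t-1]]

def combo(A, s, t, c):  # add c times row s to a different row t
    A[t-1] = [x + c * y for x, y in zip(A[t-1], A[s-1])]

A = [[F(1), 2, -1, 0, 1],   # Frame 1: original augmented matrix
     [1, 4, -1, 0, 2],
     [0, 1, 1, 0, 1],
     [0, 0, 0, 0, 0]]

combo(A, 1, 2, -1)          # Frame 2: pivot column 1 completed
swap(A, 2, 3)               # Frame 3
combo(A, 2, 3, -2)          # Frame 4
combo(A, 2, 1, -2)          # Frame 5: pivot column 2 completed
mult(A, 3, F(-1, 2))        # Frame 6
combo(A, 3, 2, -1)          # Frame 7
combo(A, 3, 1, 3)           # Last Frame: rref found

assert A == [[1, 0, 0, 0, F(1, 2)],
             [0, 1, 0, 0, F(1, 2)],
             [0, 0, 1, 0, F(1, 2)],
             [0, 0, 0, 0, 0]]
```

Using `Fraction` keeps every intermediate frame exact, so the replay matches the hand computation entry for entry.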

Definition 1 (Parametric Equations)
The terminology parametric equations refers to a set of equations of the form

(8)    x_1 = d_1 + c_11 t_1 + ... + c_1k t_k,
       x_2 = d_2 + c_21 t_1 + ... + c_2k t_k,
       ...
       x_n = d_n + c_n1 t_1 + ... + c_nk t_k.

The numbers d_1, ..., d_n, c_11, ..., c_nk are known constants and the variable names t_1, ..., t_k are parameters. The symbols t_1, ..., t_k are therefore allowed to take on any value from -∞ to ∞.

Analytic Geometry
In calculus courses, parametric equations are encountered as scalar equations of lines (k = 1) and planes (k = 2). If no symbols t_1, t_2, ... appear, then the equations describe a point.

Definition 2 (General Solution)
A general solution of (1) is a set of parametric equations (8) plus two additional requirements:

(9)     Equations (8) satisfy (1) for all real values of t_1, ..., t_k.
(10)    Any solution of (1) can be obtained from (8) by specializing the values of the parameter symbols t_1, t_2, ..., t_k.