5 Homogeneous systems




Definition: A homogeneous (ho-mo-jeen'-i-us) system of linear algebraic equations is one in which all the numbers on the right hand side are equal to 0:

    a_11 x_1 + ... + a_1n x_n = 0
        .
        .
    a_m1 x_1 + ... + a_mn x_n = 0

In matrix form, this reads Ax = 0, where A is m × n, x = (x_1, ..., x_n)^T is the n × 1 column of unknowns, and 0 is the m × 1 column of zeros.

The homogeneous system Ax = 0 always has the solution x = 0. It follows that any homogeneous system of equations is consistent.

Definition: Any non-zero solutions to Ax = 0, if they exist, are called non-trivial solutions.

These may or may not exist. We can find out by row reducing the corresponding augmented matrix (A|0).

Example: Given the augmented matrix

    (A|0) = [  1  2  0 -1 | 0 ]
            [ -2 -3  4  5 | 0 ]
            [  2  4  0 -2 | 0 ],

row reduction leads quickly to the echelon form

    [ 1  2  0 -1 | 0 ]
    [ 0  1  4  3 | 0 ]
    [ 0  0  0  0 | 0 ].

Observe that nothing happened to the last column: row operations do nothing to a column of zeros. Equivalently, doing a row operation on a system of homogeneous equations doesn't change the fact that it's homogeneous. For this reason, when working with homogeneous systems, we'll just use the matrix A rather than the augmented matrix. The echelon form of A is

    [ 1  2  0 -1 ]
    [ 0  1  4  3 ]
    [ 0  0  0  0 ].
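The row reduction above can also be checked mechanically. A minimal sketch, assuming sympy is available (the notes themselves, of course, work by hand); `Matrix.rref` carries the elimination all the way to the reduced (Gauss-Jordan) form that we derive step by step below:

```python
from sympy import Matrix

# Coefficient matrix A from the example (the column of zeros is omitted,
# since row operations never change it)
A = Matrix([[ 1,  2, 0, -1],
            [-2, -3, 4,  5],
            [ 2,  4, 0, -2]])

# rref() returns the reduced echelon form together with the indices of the
# columns that contain leading entries
R, pivot_cols = A.rref()
assert R == Matrix([[1, 0, -8, -7],
                    [0, 1,  4,  3],
                    [0, 0,  0,  0]])
assert pivot_cols == (0, 1)   # leading entries in the first two columns (0-indexed)
```

The pivot columns reported by `rref` are exactly the columns of the leading variables discussed next.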

Here, the leading variables are x_1 and x_2, while x_3 and x_4 are the free variables, since there are no leading entries in the third or fourth columns. Continuing along, we obtain the Gauss-Jordan form (you should be working out the details on your scratch paper as we go along...):

    [ 1  0 -8 -7 ]
    [ 0  1  4  3 ]
    [ 0  0  0  0 ].

No further simplification is possible, because any new row operation will destroy the structure of the columns with leading entries. The system of equations now reads

    x_1 - 8x_3 - 7x_4 = 0
    x_2 + 4x_3 + 3x_4 = 0.

In principle, we're finished with the problem in the sense that we have the solution in hand. But it's customary to rewrite the solution in vector form so that its properties are more evident. First, we solve for the leading variables; everything else goes on the right hand side:

    x_1 = 8x_3 + 7x_4
    x_2 = -4x_3 - 3x_4.

Assigning any values we choose to the two free variables x_3 and x_4 gives us one of the many solutions to the original homogeneous system. This is, of course, why the variables are called free. For example, taking x_3 = 1, x_4 = 0 gives the solution x_1 = 8, x_2 = -4.

We can distinguish the free variables from the leading variables by denoting them as s, t, u, etc. This is not logically necessary; it just makes things more transparent. Thus, setting x_3 = s, x_4 = t, we rewrite the solution in the form

    x_1 = 8s + 7t
    x_2 = -4s - 3t
    x_3 = s
    x_4 = t.

More compactly, the solution can also be written in matrix and set notation as

    x_H = { (x_1, x_2, x_3, x_4)^T = s(8, -4, 1, 0)^T + t(7, -3, 0, 1)^T : s, t ∈ R }.    (1)

The curly brackets { } are standard notation for a set or collection of objects. We call this the general solution to the homogeneous equation. Notice that x_H is an infinite set of objects (one for each possible choice of s and t) and not a single vector. The notation is somewhat misleading, since the left hand side x_H looks like a single vector, while the right hand side clearly represents an infinite collection of objects with 2 degrees of freedom. We'll improve this later.

5.1 Some comments about free and leading variables

Let's go back to the previous set of equations

    x_1 = 8x_3 + 7x_4
    x_2 = -4x_3 - 3x_4.

Notice that we can rearrange things: we could solve the second equation for x_3 to get

    x_3 = -(1/4)(x_2 + 3x_4),

and then substitute this into the first equation, giving

    x_1 = -2x_2 + x_4.

Now it looks as though x_2 and x_4 are the free variables! This is perfectly all right: the specific algorithm (Gaussian elimination) we used to solve the original system leads to the form in which x_3 and x_4 are the free variables, but solving the system a different way (which is perfectly legal) could result in a different set of free variables. Mathematicians would say that the concepts of free and leading variables are not invariant notions. Different computational schemes lead to different results.

BUT... what is invariant (i.e., independent of the computational details) is the number of free variables (2) and the number of leading variables (also 2 here). No matter how you solve the system, you'll always wind up being able to express 2 of the variables in terms of the other 2! This is not obvious. Later we'll see that it's a consequence of a general result called the dimension or rank-nullity theorem.

The reason we use s and t as the parameters in the system above, and not x_3 and x_4 (or some other pair), is that we don't want the notation to single out any particular variables as free or otherwise; they're all to be on an equal footing.

5.2 Properties of the homogeneous system for an m × n matrix A

If we were to carry out the above procedure on a general homogeneous system A_{m×n} x = 0, we'd establish the following facts:

- The number of leading variables is at most min(m, n).

- The number of non-zero equations in the echelon form of the system is equal to the number of leading entries.

- The number of free variables plus the number of leading variables = n, the number of columns of A.

- The homogeneous system Ax = 0 has non-trivial solutions if and only if there are free variables.

- If there are more unknowns than equations, the homogeneous system always has non-trivial solutions. Why? If m < n, there are at most m < n leading variables, so there must be at least one free variable. This is one of the few cases in which we can tell something about the solutions without doing any work.

- A homogeneous system of equations is always consistent (i.e., always has at least one solution).

Exercises:

1. What sort of geometric object does x_H represent?

2. Suppose A is 4 × 7. How many leading variables can Ax = 0 have? How many free variables?

3. (*) If the Gauss-Jordan form of A has a row of zeros, are there necessarily any free variables? If there are free variables, is there necessarily a row of zeros?

5.3 Linear combinations and the superposition principle

There are two other fundamental properties of the homogeneous system:

1. Theorem: If x is a solution to Ax = 0, then so is cx for any real number c.

Proof: x is a solution means Ax = 0. But A(cx) = c(Ax) = c0 = 0, so cx is also a solution.

2. Theorem: If x and y are two solutions to the homogeneous equation, then so is x + y.

Proof: A(x + y) = Ax + Ay = 0 + 0 = 0, so x + y is also a solution.
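Both theorems can be spot-checked numerically on the example system. A minimal sketch, again assuming sympy; the vectors x and y are the two basic solutions appearing in Eqn (1):

```python
from sympy import Matrix

# Coefficient matrix of the example, and the two basic solutions from Eqn (1)
A = Matrix([[1, 2, 0, -1], [-2, -3, 4, 5], [2, 4, 0, -2]])
x = Matrix([8, -4, 1, 0])
y = Matrix([7, -3, 0, 1])
zero = Matrix([0, 0, 0])

assert A * x == zero and A * y == zero   # x and y solve Ax = 0
assert A * (5 * x) == zero               # Theorem 1: scalar multiples are solutions
assert A * (x + y) == zero               # Theorem 2: sums are solutions
assert A * (3 * x - 2 * y) == zero       # hence any linear combination is a solution
```

The particular scalars 5, 3, and -2 are arbitrary choices for illustration; any values work, which is exactly the content of the two theorems.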

These two properties constitute the famous principle of superposition, which holds for homogeneous systems (but NOT for inhomogeneous ones).

Definition: If x and y are vectors and s and t are scalars, then sx + ty is called a linear combination of x and y.

Example: 3x - 4πy is a linear combination of x and y.

We can restate the superposition principle as:

Superposition principle: If x and y are two solutions to the homogeneous equation Ax = 0, then any linear combination of x and y is also a solution.

Remark: This is just a compact way of restating the two properties: if x and y are solutions, then by property 1, sx and ty are also solutions, and by property 2, their sum sx + ty is a solution. Conversely, if sx + ty is a solution to the homogeneous equation for all s, t, then taking t = 0 gives property 1, and taking s = t = 1 gives property 2.

You have seen this principle at work in your calculus courses.

Example: Suppose φ(x, y) satisfies Laplace's equation

    ∂²φ/∂x² + ∂²φ/∂y² = 0.

We write this as

    Δφ = 0, where Δ = ∂²/∂x² + ∂²/∂y².

The differential operator Δ has the same property as matrix multiplication, namely: if φ(x, y) and ψ(x, y) are two differentiable functions, and s and t are any two real numbers, then

    Δ(sφ + tψ) = sΔφ + tΔψ.

Exercise: Verify this. That is, show that the two sides are equal by using properties of the derivative.

The functions φ and ψ are, in fact, vectors in an infinite-dimensional space called Hilbert space. It follows that if φ and ψ are two solutions to Laplace's equation, then any linear combination of φ and ψ is also a solution. The principle of superposition also holds for solutions to the wave equation, Maxwell's equations in free space, and Schrödinger's equation in quantum mechanics. For those of you who know the language, these are all (systems of) homogeneous linear differential equations.

Example: Start with white light (e.g., sunlight); it's a collection of electromagnetic waves which satisfy Maxwell's equations. Pass the light through a prism, obtaining red, orange, ..., violet light; these are also solutions to Maxwell's equations. The original solution (white light) is seen to be a superposition of many other solutions, corresponding to the various different colors (i.e., frequencies). The process can be reversed to obtain white light again by passing the different colors of the spectrum through an inverted prism. This is one of the experiments Isaac Newton did when he was your age.

Referring back to the example (see Eqn (1)), if we set

    x = (8, -4, 1, 0)^T  and  y = (7, -3, 0, 1)^T,

then the superposition principle tells us that any linear combination of x and y is also a solution. In fact, these are all of the solutions to this system, as we've proven above.
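As a closing check, the full solution set can be recovered computationally. A sketch, assuming sympy: `Matrix.nullspace` returns a basis for {x : Ax = 0}, and for the example it reproduces exactly the two vectors x and y of Eqn (1):

```python
from sympy import Matrix

A = Matrix([[1, 2, 0, -1], [-2, -3, 4, 5], [2, 4, 0, -2]])

# nullspace() returns a basis for the solution set of Ax = 0; every
# solution is a linear combination of these basis vectors
basis = A.nullspace()
assert basis == [Matrix([8, -4, 1, 0]), Matrix([7, -3, 0, 1])]

# Two basis vectors = two free variables = 2 degrees of freedom in x_H
assert len(basis) == A.cols - A.rank()
```

That the number of basis vectors equals n minus the rank is the rank-nullity theorem previewed in Section 5.1.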