Homogeneous systems of algebraic equations

A homogeneous (ho-mo-geen'-ius) system of linear algebraic equations is one in which all the numbers on the right-hand side are equal to 0:

$$a_{11}x_1 + \cdots + a_{1n}x_n = 0$$
$$\vdots$$
$$a_{m1}x_1 + \cdots + a_{mn}x_n = 0$$

In matrix form, this reads $Ax = 0$, where $A$ is $m \times n$, $x = (x_1, \ldots, x_n)^T$ is $n \times 1$, and $0$ is the $m \times 1$ column vector of zeros.

The homogeneous system $Ax = 0$ always has the solution $x = 0$, called the trivial solution. Any non-zero solutions, if they exist, are said to be non-trivial solutions. These may or may not exist. We can find out by row reducing the corresponding augmented matrix $(A|0)$.

Example: Given the augmented matrix

$$(A|0) = \begin{pmatrix} 1 & 2 & 0 & -1 & 0 \\ 2 & 3 & -4 & -5 & 0 \\ 2 & 4 & 0 & -2 & 0 \end{pmatrix},$$

row reduction leads quickly to the echelon form

$$\begin{pmatrix} 1 & 2 & 0 & -1 & 0 \\ 0 & 1 & 4 & 3 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix}.$$
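If you want to check the row reduction by machine, here is a minimal sketch using Python's sympy library (an assumption on my part; any computer algebra system would do). It applies one possible sequence of elementary row operations:

```python
from sympy import Matrix

# the augmented matrix (A|0) from the example above
M = Matrix([
    [1, 2,  0, -1, 0],
    [2, 3, -4, -5, 0],
    [2, 4,  0, -2, 0],
])

M[1, :] = M[1, :] - 2 * M[0, :]   # R2 <- R2 - 2*R1
M[2, :] = M[2, :] - 2 * M[0, :]   # R3 <- R3 - 2*R1
M[1, :] = -M[1, :]                # R2 <- -R2

print(M)  # rows: (1, 2, 0, -1, 0), (0, 1, 4, 3, 0), (0, 0, 0, 0, 0)
```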

Nothing happened to the last column: row operations don't do anything to a column of zeros. In particular, doing a row operation on a system of homogeneous equations doesn't change the fact that it's homogeneous. For this reason, when working with homogeneous equations, we just use the matrix $A$. The echelon form of $A$ is

$$\begin{pmatrix} 1 & 2 & 0 & -1 \\ 0 & 1 & 4 & 3 \\ 0 & 0 & 0 & 0 \end{pmatrix}.$$

After the matrix has been put into echelon form, we can distinguish two different types of variables. The leading entries are in columns 1 and 2, and the variables corresponding to these entries, namely $x_1$ and $x_2$, are called leading variables. Variables which are not leading are, by default, free variables. In the present case, $x_3$ and $x_4$ are free variables, since there are no leading entries in the third or fourth columns.

Continuing along, we obtain the Gauss-Jordan form. (You are working out all the details on your scratch paper as we go along, aren't you!?)

$$\begin{pmatrix} 1 & 0 & -8 & -7 \\ 0 & 1 & 4 & 3 \\ 0 & 0 & 0 & 0 \end{pmatrix}$$

No further simplification is possible. Meaning: any further non-trivial row operations will destroy the Gauss-Jordan structure of the columns with leading entries. The resulting system of equations reads

$$x_1 - 8x_3 - 7x_4 = 0$$
$$x_2 + 4x_3 + 3x_4 = 0.$$
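The Gauss-Jordan form can be double-checked the same way; sympy's rref method (same assumption as above) returns the reduced form together with the indices of the columns that contain leading entries:

```python
from sympy import Matrix

A = Matrix([
    [1, 2,  0, -1],
    [2, 3, -4, -5],
    [2, 4,  0, -2],
])

R, pivots = A.rref()  # reduced row echelon (Gauss-Jordan) form
print(R)       # rows: (1, 0, -8, -7), (0, 1, 4, 3), (0, 0, 0, 0)
print(pivots)  # (0, 1): leading entries in the first two columns,
               # so x1 and x2 are leading while x3 and x4 are free
```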

In principle, we're done in the sense that we have the solution in hand. However, it's customary to rewrite the solution so that its properties are more clearly displayed. First, we solve for the leading variables:

$$x_1 = 8x_3 + 7x_4$$
$$x_2 = -4x_3 - 3x_4.$$

Assigning any values we choose to the two free variables $x_3$ and $x_4$ gives us a solution to the original homogeneous system. This is, of course, why the variables are called "free". We can distinguish the free variables from the leading variables by denoting them as $s$, $t$, $u$, etc. Thus, setting $x_3 = s$, $x_4 = t$, we can now rewrite the solution as

$$x_1 = 8s + 7t$$
$$x_2 = -4s - 3t$$
$$x_3 = s$$
$$x_4 = t.$$

Better yet, the solution can also be written in matrix form as

$$x = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix} = s \begin{pmatrix} 8 \\ -4 \\ 1 \\ 0 \end{pmatrix} + t \begin{pmatrix} 7 \\ -3 \\ 0 \\ 1 \end{pmatrix}.$$
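As a sanity check, sympy's nullspace method (same assumption) produces exactly these two column vectors, because it does what we just did by hand: set one free variable to 1 and the other to 0:

```python
from sympy import Matrix

A = Matrix([
    [1, 2,  0, -1],
    [2, 3, -4, -5],
    [2, 4,  0, -2],
])

for v in A.nullspace():
    print(v.T)  # (8, -4, 1, 0), then (7, -3, 0, 1)
```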

We won't do it here, but if we were to carry out the above procedure on a general homogeneous system $A_{m \times n}\,x = 0$, we'd establish the following facts.

Properties of the homogeneous system for $A_{m \times n}$:

1. The number of leading variables is at most $\min(m, n)$.
2. The number of non-zero equations in the echelon form of the system is equal to the number of leading entries.
3. The number of free variables plus the number of leading variables equals $n$, the number of columns of $A$.
4. The homogeneous system $Ax = 0$ has non-trivial solutions if and only if there are free variables.
5. If there are more unknowns than equations, the homogeneous system always has non-trivial solutions. Why? This is one of the few cases where you can tell something about the solutions without doing any row reduction.
6. A homogeneous system of equations is always consistent (i.e., always has at least one solution). Why? Can you explain this geometrically?

There are two other properties:

1. If $x$ is a solution to $Ax = 0$, then so is $cx$ for any real number $c$. Proof: $x$ is a solution means $Ax = 0$. But $A(cx) = c(Ax) = c \cdot 0 = 0$, so $cx$ is also a solution.
2. If $x$ and $y$ are two solutions to the homogeneous equation, then so is $x + y$. Proof: $A(x + y) = Ax + Ay = 0 + 0 = 0$.
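Both properties can be verified symbolically, leaving the scalar as an unevaluated symbol; a short sketch (sympy again), using the two solutions found in the example:

```python
from sympy import Matrix, symbols

A = Matrix([
    [1, 2,  0, -1],
    [2, 3, -4, -5],
    [2, 4,  0, -2],
])
x = Matrix([8, -4, 1, 0])  # a solution from the example
y = Matrix([7, -3, 0, 1])  # another solution

c = symbols('c')
print(A * (c * x))  # zero vector for every c: property 1
print(A * (x + y))  # zero vector: property 2
```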

These two properties constitute the famous principle of superposition, which holds for homogeneous systems (but NOT for inhomogeneous ones).

Definition: If $x$ and $y$ are two vectors and $s$ and $t$ two real numbers (scalars), then $sx + ty$ is called a linear combination of $x$ and $y$.

We can restate the superposition principle as:

Superposition principle: If $x$ and $y$ are two solutions to the homogeneous equation $Ax = 0$, then any linear combination of $x$ and $y$ is also a solution.

Remark: This is just a compact way of restating the two properties. If $x$ and $y$ are solutions, then by property 1, $sx$ and $ty$ are also solutions, and by property 2, their sum $sx + ty$ is a solution. Conversely, if $sx + ty$ is a solution to the homogeneous equation for all $s, t$, then taking $t = 0$ gives property 1, and taking $s = t = 1$ gives property 2.

You have seen this principle at work in your calculus courses. Example: Suppose $\phi(x, y)$ satisfies Laplace's equation

$$\frac{\partial^2 \phi}{\partial x^2} + \frac{\partial^2 \phi}{\partial y^2} = 0.$$

We write this as $\Delta \phi = 0$, where

$$\Delta = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}.$$

The differential operator $\Delta$ has the same property as matrix multiplication, namely: if $\phi(x, y)$ and $\psi(x, y)$ are two twice-differentiable functions, and $s$ and $t$ are any two real numbers, then

$$\Delta(s\phi + t\psi) = s\,\Delta\phi + t\,\Delta\psi.$$

It follows that if $\phi$ and $\psi$ are two solutions to Laplace's equation, then any linear combination of $\phi$ and $\psi$ is also a solution.
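Here is a quick symbolic check of that linearity (a sketch; the two harmonic functions $x^2 - y^2$ and $e^x \sin y$ are just convenient choices, not taken from the notes):

```python
from sympy import symbols, diff, simplify, exp, sin

x, y, s, t = symbols('x y s t')

def laplacian(f):
    """The operator from above: d^2 f / dx^2 + d^2 f / dy^2."""
    return diff(f, x, 2) + diff(f, y, 2)

phi = x**2 - y**2       # harmonic: laplacian(phi) == 0
psi = exp(x) * sin(y)   # harmonic as well

# any linear combination of solutions is again a solution
print(simplify(laplacian(s * phi + t * psi)))  # 0
```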

The principle of superposition also holds for solutions to the wave equation, Maxwell's equations in free space, and Schrödinger's equation. Example: Start with "white" light (i.e., sunlight); it's a collection of electromagnetic waves which satisfy Maxwell's equations. Pass the light through a prism, obtaining red, orange, ..., violet light; these are also solutions to Maxwell's equations. The original solution (white light) is seen to be a superposition of many other solutions, corresponding to the various different colors. The process can be reversed to obtain white light again by passing the different colors of the spectrum through an inverted prism.

Referring back to the example we just worked out, if we set

$$x = \begin{pmatrix} 8 \\ -4 \\ 1 \\ 0 \end{pmatrix} \quad \text{and} \quad y = \begin{pmatrix} 7 \\ -3 \\ 0 \\ 1 \end{pmatrix},$$

then the superposition principle tells us that any linear combination of $x$ and $y$ is also a solution. In fact, these are all of the solutions to this system. We write

$$x_h = \{\, sx + ty : s, t \text{ real} \,\}$$

and say that $x_h$ is the general solution to the homogeneous system $Ax = 0$.