PROVING STATEMENTS IN LINEAR ALGEBRA

Mathematics V2010y Linear Algebra, Spring 2007

Linear algebra is different from calculus: you cannot understand it properly without some simple proofs. Knowing statements of formulas isn't enough. You also have to know their contexts, and how to deduce one from another. That's what these notes are intended to help with. But proof is as much an art as a science, so these are merely guidelines.

What is a proof? Just a convincing argument to explain why a mathematical statement is true. However, long experience has led us to develop very clear standards for what is convincing and what isn't. We will try to explain the steps that most frequently arise in a typical proof. But let us be clear: a proof does not have to be one of those mechanical constructions from high school with statements on the left and reasons on the right. Rather, it should be an argument (preferably in complete sentences) that is intelligible and convincing to a human being.

Let's start by considering what kinds of basic objects we are dealing with.

What are our basic objects of study? There are two answers.

Scalars, vectors, and matrices. First, there are real numbers, also known as scalars. Then there are m × n matrices. There are also vectors in R^m, but we regard them as m × 1 matrices. You might say that scalars should also be regarded as 1 × 1 matrices. We might sometimes want to bend the rules this way, but it's usually better not to. For example, if λ is a scalar and A is a 2 × 2 matrix, then the product λA is defined, but you can't multiply a 1 × 1 matrix by a 2 × 2 matrix. In any equation, the same type of object should appear on both sides. It's an error to equate a scalar to a vector, for example.

Sets. The other basic objects that we consider are sets. This is what many proofs are about. A set is just any collection of elements, which can be any kind of objects: scalars, vectors, matrices, or even apples and oranges. It's an error to equate a set with an element. For example, Span is a set, not a single vector, so it's an error to write Span(u, v) = λu + µv.

An example of a set is R, the set of all real numbers. Another is the interval [0, 1], the set of all real numbers x with 0 ≤ x ≤ 1. Yet another is R^n, the set of all n-vectors. We write x ∈ T (spoken as "x is in T") if x is an element of T. We write S ⊆ T and say S is a subset of T if every element of S is also an element of T. For example, [0, 1] ⊆ R. It's an error to confuse ∈, belonging of an element, with ⊆, inclusion of a subset. However, notice that for any element x ∈ T, there is a set {x} whose sole element is x, and then {x} ⊆ T.
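The scalar-versus-1 × 1-matrix distinction above can be seen concretely in NumPy. This sketch is only an illustration of the bookkeeping, not part of the notes' mathematics: scalar multiplication works for any matrix size, while a 1 × 1 matrix obeys the usual shape rules of matrix multiplication.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
lam = 3.0          # a scalar: lam * A is defined for A of any size
scaled = lam * A   # [[3, 6], [9, 12]]

L = np.array([[3.0]])   # a 1x1 matrix is not a scalar:
try:
    L @ A               # matrix product (1x1)(2x2) is undefined
    mismatch = False
except ValueError:
    mismatch = True     # NumPy rejects the shape mismatch
assert mismatch
```

So "bending the rules" and treating scalars as 1 × 1 matrices really does break ordinary matrix arithmetic, exactly as the notes warn.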

How can I write down a set by name? A few of them, like R^n, have standard names. Others have a finite number of elements, which we can list inside curly braces. This is called roster notation. For example, {3, π, 4} ⊆ R. For another example, {(1, 3), (2, 4), (3, 7)} ⊆ R^2.

What about sets with an infinite number of elements? We don't have time to write them down this way! Roster notation can be extended in three ways to deal with this. First, you can write dots: {1, 3, 5, 7, 9, ...} ⊆ R. You should only do this when the pattern is absolutely clear. Second, you can define a subset by writing an expression with free variables, which you specify after a vertical bar (read as "such that"): {(t, t) ∈ R^2 | t ∈ R} ⊆ R^2. That's the correct way to describe the span: Span(u, v) = {λu + µv | λ, µ ∈ R}. Third, you can determine a subset of a set S by imposing some condition on the elements, which you again write down after the bar: [0, 1] = {x ∈ R | 0 ≤ x ≤ 1}.
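As a loose analogy (Python sets are finite, unlike most of the sets above), Python supports both styles of notation: curly-brace rosters, and comprehensions that impose a condition, just like set-builder notation with a bar.

```python
# Roster notation: list the elements inside curly braces.
odds_roster = {1, 3, 5, 7, 9}

# Set-builder notation: {x in {0, ..., 9} | x is odd},
# written in Python as a set comprehension.
odds_builder = {x for x in range(10) if x % 2 == 1}

assert odds_roster == odds_builder  # the same set, described two ways
```

Two descriptions, one set: equality of sets depends only on the elements, which is exactly the point of the next section.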

How do I show that two objects are equal? That depends what kind of objects they are.

For vectors or matrices. You need to show that they have the same size (usually obvious enough to go without saying) and that all entries are the same.

Example 1. Prove that (A^T)^T = A.

Proof. If A is m × n, then A^T is n × m, so (A^T)^T is m × n. Moreover, the i, j entry of (A^T)^T is ((A^T)^T)_{ij} = (A^T)_{ji} = A_{ij}, that is, it's the same as the i, j entry of A. Hence (A^T)^T = A.

For sets. You need to show that they have the same elements. That is, every element of S is an element of T, and vice versa. Equivalently, you need to show S ⊆ T and T ⊆ S. Sometimes you can do both at once, but sometimes you have to do them separately.

Example 2. If u, v ∈ R^n, prove that Span(u, v) = Span(u + v, v).

Proof. By definition, the left-hand side is the set of all linear combinations of u and v. So any element of the left-hand side can be expressed, for some λ, µ ∈ R, as λu + µv = λ(u + v) + (µ − λ)v, which is a linear combination of u + v and v and hence is an element of the right-hand side. Likewise, any element of the right-hand side can be expressed, for some λ, µ ∈ R, as λ(u + v) + µv = λu + (λ + µ)v, which is an element of the left-hand side. Hence the two sides are equal.
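A quick numerical sanity check is never a proof, but it can catch a wrong conjecture before you waste effort proving it. This sketch checks Example 1 on one random matrix and checks the key algebraic identity behind Example 2 for one choice of λ, µ:

```python
import numpy as np

rng = np.random.default_rng(0)

# Example 1: (A^T)^T = A, checked on a single random 3x4 matrix.
A = rng.standard_normal((3, 4))
assert np.array_equal(A.T.T, A)

# Example 2's key identity: lam*u + mu*v = lam*(u + v) + (mu - lam)*v,
# which rewrites a combination of u, v as a combination of u + v, v.
u, v = rng.standard_normal(5), rng.standard_normal(5)
lam, mu = 2.0, -3.0
assert np.allclose(lam * u + mu * v, lam * (u + v) + (mu - lam) * v)
```

The proof is still needed: the check covers one matrix and one pair of coefficients, while the proof covers all of them at once.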

How can I prove an if-and-only-if statement? "P if and only if Q" really means two things: P implies Q, and Q implies P. One can imagine at least two strategies for proving such a statement:

(I) Prove both at once, by connecting P to Q through a chain of equivalences.

(II) Prove them separately: first P implies Q, then Q implies P.

Example 3. Prove that A is symmetric if and only if I − 2A is.

Proof. A is symmetric ⟺ A^T = A ⟺ −2A^T = −2A ⟺ I − 2A^T = I − 2A ⟺ (I − 2A)^T = I − 2A ⟺ I − 2A is symmetric. [Here we followed strategy (I).]

Example 4. Prove that Ax = 0 has a unique solution if and only if Ax = b has a unique solution for any b where it has a solution at all.

Proof. ⟸: Suppose the second statement is true. Then x = 0, being a solution of Ax = 0, must be the unique solution. This proves the first statement.

⟹: Conversely, suppose the first statement is true. Then for any two solutions x, x′ of Ax = b, we have A(x − x′) = Ax − Ax′ = b − b = 0, so x − x′ must be the unique solution of Ax = 0, namely 0. Hence x = x′, that is, any two solutions of Ax = b must be equal. This proves the second statement. [Here we followed strategy (II).]
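An if-and-only-if claim can be spot-checked in both directions at once: on any single instance, the two sides should have the same truth value. Here is such a check for Example 3 on one symmetric and one non-symmetric matrix (a sanity check, not a proof):

```python
import numpy as np

def is_symmetric(M):
    return np.array_equal(M, M.T)

I = np.eye(2)

A = np.array([[1.0, 2.0],
              [2.0, 5.0]])   # symmetric
assert is_symmetric(A) and is_symmetric(I - 2 * A)

B = np.array([[1.0, 2.0],
              [3.0, 5.0]])   # not symmetric
assert not is_symmetric(B) and not is_symmetric(I - 2 * B)
```

Note that testing both a positive and a negative instance exercises both implications of the "if and only if".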

How can I prove a for-all statement? That is, something like "For all elements of a set S, such-and-such is true." Begin with "Let x ∈ S." That is, give it an open-ended name that could apply to any element of S. Then argue that for this x, such-and-such must hold.

Example 5. If AB = I, prove that for every c ∈ R^n, Ax = c has a solution.

Proof. Let c ∈ R^n. If x = Bc, then Ax = A(Bc) = (AB)c = Ic = c, so x is a solution.

How can I prove a there-exists statement? That is, something like "There exists an element of S satisfying so-and-so." Say "Let x = ..." and specify accurately what it is. That is, give it a precise name ensuring that it satisfies what you want. Then argue that for this x, so-and-so must hold.

Example 6. If A is n × n with rank n, prove that it has an inverse.

Proof. What we need to show can be rephrased as: if A is n × n with rank n, then there exists an n × n matrix B satisfying BA = I. Any such A has reduced row echelon form equal to I, so there exist elementary matrices E_1, ..., E_r with E_r ⋯ E_1 A = I. Let B = E_r ⋯ E_1; then BA = I.

Now, here is an example where both kinds of "let" appear.

Example 7. If T : R^n → R^m is linear and onto, prove that −T is onto.

Proof. For T to be onto means that for all x ∈ R^m, there exists y ∈ R^n such that T(y) = x. We need to show that −T is onto, meaning that for all x ∈ R^m, there exists z ∈ R^n such that (−T)(z) = x. Let x ∈ R^m [this is the open-ended "let", which goes with "for all"]. Since T is onto, there exists y ∈ R^n such that T(y) = x. Now let z = −y [this is the specific "let", which goes with "there exists"]. Then (−T)(z) = −T(−y) = −(−T(y)) = −(−x) = x by linearity of T, so −T is onto.
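The recipe in Example 5 is constructive: given c, it names a solution, x = Bc. That makes it easy to test numerically. In this sketch B is simply taken to be A^{−1}, one concrete way to satisfy the hypothesis AB = I (a check of the recipe on one instance, not a proof):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])
B = np.linalg.inv(A)      # chosen so that AB = I, Example 5's hypothesis
c = np.array([3.0, 5.0])

x = B @ c                 # the candidate solution x = Bc
assert np.allclose(A @ x, c)   # Ax = A(Bc) = (AB)c = Ic = c
```

The chain of equalities in the comment mirrors the proof exactly: the code only confirms it at one point, while the proof holds for every c at once.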

What is mathematical induction? A method for proving a statement for all natural numbers. There are two steps: (a) Check the statement for n = 1 (usually easy). (b) Then, assume it's true for a given n (called the induction hypothesis) and deduce it for n + 1 (called the induction step). This isn't circular: you assume it for only one n, but you prove it for all.

Example 8. Prove that for A invertible and for all n > 0, (A^n)^{−1} = (A^{−1})^n.

Proof by induction. (a) For n = 1, this is just (A^1)^{−1} = A^{−1} = (A^{−1})^1. (b) Now, assume (A^n)^{−1} = (A^{−1})^n for a given n. Then A^{n+1}(A^{−1})^{n+1} = A A^n (A^{−1})^n A^{−1} = A I A^{−1} = I (where the second equality uses the induction hypothesis), so (A^{−1})^{n+1} is the inverse of A^{n+1}.
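Induction proves Example 8 for every n; a numerical check can only confirm it for particular values, but it is a useful way to convince yourself the statement is plausible. Here is such a check at n = 5 for one invertible matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])   # invertible: upper triangular, nonzero diagonal
n = 5

lhs = np.linalg.inv(np.linalg.matrix_power(A, n))   # (A^n)^{-1}
rhs = np.linalg.matrix_power(np.linalg.inv(A), n)   # (A^{-1})^n
assert np.allclose(lhs, rhs)
```

Note the use of np.allclose rather than exact equality: the two sides are computed by different sequences of floating-point operations, so they agree only up to rounding error.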

Any concluding general advice? Sometimes it's better to go back to the original definitions and unravel them; other times it's better to rely on previously established theorems. Also, sometimes it's better to take things apart with indices; other times it's better to work with matrices as a whole.

Example 9. Prove that (A + B)^T = A^T + B^T.

Proof. By the definition of transpose, the i, j entry of (A + B)^T is (A + B)_{ji}. By the definition of matrix sum, (A + B)_{ji} = A_{ji} + B_{ji}. But also by the definition of matrix sum, the i, j entry of A^T + B^T is (A^T)_{ij} + (B^T)_{ij}. By the definition of transpose, (A^T)_{ij} = A_{ji} and (B^T)_{ij} = B_{ji}. So the i, j entry of A^T + B^T is A_{ji} + B_{ji}. Hence (A + B)^T and A^T + B^T have the same i, j entries for each i and j, so they're equal. [Here we unraveled the definitions, working from the outside in on both sides. Also, we took things apart using indices.]

Example 10. Prove that det A^{−1} = 1/det A.

Proof. Since I = AA^{−1}, taking the determinant of both sides yields det I = det(AA^{−1}). But previous theorems tell us that det I = 1 and det(AB) = det A det B, so 1 = det A det A^{−1} and the result follows. [Here we relied on previous theorems. And we treated the matrix as a whole.]
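Both closing examples can be spot-checked numerically as well. The transpose identity is exact even in floating point (transposing and adding just rearrange the same entries), while the determinant identity holds only up to rounding error. A sketch, on random 3 × 3 matrices (generic random matrices are invertible, which Example 10 assumes):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))   # generically invertible
B = rng.standard_normal((3, 3))

# Example 9: transpose distributes over sums, entry by entry.
assert np.array_equal((A + B).T, A.T + B.T)

# Example 10: det(A^{-1}) = 1 / det(A), up to rounding error.
assert np.isclose(np.linalg.det(np.linalg.inv(A)), 1.0 / np.linalg.det(A))
```

The contrast between array_equal and isclose echoes the two proof styles: the index-by-index argument of Example 9 is exact entrywise, while Example 10 goes through determinant computations that accumulate floating-point error.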