Diagonal, Symmetric and Triangular Matrices


Contents

1 Diagonal, Symmetric and Triangular Matrices
2 Diagonal Matrices
  2.1 Products, Powers and Inverses of Diagonal Matrices
    2.1.1 Theorem (Powers of Matrices)
  2.2 Multiplying Matrices on the Left and Right by Diagonal Matrices
    2.2.1 Theorem (Multiplying on the left and right by a diagonal matrix)
3 Symmetric Matrices
  3.1 Example: $AA^T$ and $A^TA$
  3.2 Example: Evaluating $A^n$
  3.3 Theorem (Properties of Symmetric Matrices)
4 Triangular Matrices
  4.1 Triangularity and Transpose
    4.1.1 Theorem (Transpose of Triangular Matrices)
  4.2 Triangularity and Inverses
    4.2.1 Theorem (Products and Inverses of Upper Triangular Matrices)

Diagonal, Symmetric and Triangular Matrices

In this section we look at some matrices that play a special role. Each one has a visual property that is reflected in the values of the entries with special subscripts. All matrices considered are square.

Diagonal Matrices

Diagonal matrices are zero off the diagonal. This means the matrix looks like

$$D = \begin{bmatrix} * & 0 & \cdots & 0 \\ 0 & * & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & * \end{bmatrix},$$

where the entries denoted by $*$ may be zero or nonzero. In terms of the subscripts, $D = [d_{ij}]$ is a diagonal matrix if $d_{ij} = 0$ whenever $i \neq j$. If $D = [d_{ij}]$ is an $n \times n$ diagonal matrix, it is common to write $D = \mathrm{diag}(d_1, d_2, \ldots, d_n)$, where $d_1, d_2, \ldots, d_n$ are the diagonal entries.
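A minimal numerical sketch of the definition, assuming NumPy is available (a convenience here, not part of the original notes): it builds $\mathrm{diag}(d_1, \ldots, d_n)$ from its diagonal entries and checks the defining condition $d_{ij} = 0$ for $i \neq j$.

```python
import numpy as np

# Build D = diag(d_1, ..., d_n) from its diagonal entries.
d = [3.0, -1.0, 5.0]
D = np.diag(d)

# Verify the defining property: d_ij = 0 whenever i != j.
n = D.shape[0]
assert all(D[i, j] == 0 for i in range(n) for j in range(n) if i != j)
print(D)
```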

Products, Powers and Inverses of Diagonal Matrices

It is very easy to verify the special rule for taking products of diagonal matrices. If $R = \mathrm{diag}(r_1, \ldots, r_n)$ and $S = \mathrm{diag}(s_1, \ldots, s_n)$ are diagonal matrices, then

$$RS = \mathrm{diag}(r_1 s_1, r_2 s_2, \ldots, r_n s_n).$$

Proof: The $i$-$j$ entry of $RS$ is $\sum_{k=1}^{n} r_{i,k} s_{k,j}$. Clearly $r_{i,k} s_{k,j} = 0$ unless both $i = k$ and $k = j$. Hence the only way to get a nonzero sum is to have $i = j$, in which case its value is $r_{i,i} s_{i,i}$.

In particular, if $D = \mathrm{diag}(d_1, \ldots, d_n)$ is a diagonal matrix, then $D^2 = \mathrm{diag}(d_1^2, \ldots, d_n^2)$ and, more generally, $D^k = \mathrm{diag}(d_1^k, \ldots, d_n^k)$.

If $d_1, d_2, \ldots, d_n$ are all nonzero, then clearly the product of the matrix $D = \mathrm{diag}(d_1, \ldots, d_n)$ and $\mathrm{diag}(d_1^{-1}, \ldots, d_n^{-1})$ is $I$, so the inverse of $D$ is known. On the other hand, if some $d_i = 0$, then $D$ has an all-zero row, so the matrix is not invertible.

Theorem (Powers of Matrices): If $D = \mathrm{diag}(d_1, \ldots, d_n)$ is a diagonal matrix, then $D^k = \mathrm{diag}(d_1^k, \ldots, d_n^k)$ for $k = 1, 2, 3, \ldots$. $D$ is invertible if and only if $d_i \neq 0$ for $i = 1, \ldots, n$. When $D$ is invertible, $D^k = \mathrm{diag}(d_1^k, \ldots, d_n^k)$ for all integers $k$.

Multiplying Matrices on the Left and Right by Diagonal Matrices

Suppose that $A = [a_{ij}]$ is an $n \times n$ matrix and $D = \mathrm{diag}(d_1, \ldots, d_n)$ is a diagonal matrix. Let us consider a typical element in the $i$-th row of $DA$: the $i$-$j$ entry of $DA$ is $d_i a_{ij}$. Considering the row as a whole, each element is multiplied by $d_i$; in other words, the row $R_i$ becomes $d_i R_i$. The same remains true, of course, for every row $i$. This can also be viewed as a sequence of elementary row operations $R_i \to d_i R_i$ for $i = 1, \ldots, n$. The corresponding elementary matrices are $E_i = \mathrm{diag}(1, \ldots, 1, d_i, 1, \ldots, 1)$, and it is easy to take the product of these diagonal matrices: $E_1 E_2 \cdots E_n = D$. So the sequence of elementary row operations has the same effect as multiplying on the left by $D$.
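The facts above are easy to confirm numerically. A short sketch, again assuming NumPy: the product of two diagonal matrices is the diagonal matrix of entrywise products, powers act entrywise on the diagonal, and inverting $D$ inverts each diagonal entry.

```python
import numpy as np

r = np.array([2.0, 3.0, 5.0])
s = np.array([1.0, -4.0, 0.5])
R, S = np.diag(r), np.diag(s)

# RS = diag(r_1 s_1, ..., r_n s_n)
assert np.allclose(R @ S, np.diag(r * s))

# D^k = diag(d_1^k, ..., d_n^k)
D = np.diag(r)
assert np.allclose(np.linalg.matrix_power(D, 3), np.diag(r ** 3))

# D^{-1} = diag(1/d_1, ..., 1/d_n) when all d_i are nonzero
assert np.allclose(np.linalg.inv(D), np.diag(1.0 / r))
```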

A similar analysis of multiplication of a matrix on the right by a diagonal matrix reveals that the columns are multiplied by the diagonal entries of that matrix.

Theorem (Multiplying on the left and right by a diagonal matrix): If $A$ is an $n \times n$ matrix and $R = \mathrm{diag}(r_1, \ldots, r_n)$ and $S = \mathrm{diag}(s_1, \ldots, s_n)$ are diagonal matrices, then:

The effect of $RA$ is to multiply each entry of the $i$-th row of $A$ by $r_i$. Symbolically, $(RA)_{i,j} = r_i a_{i,j}$ for $i = 1, \ldots, n$.

The effect of $AS$ is to multiply each entry of the $j$-th column of $A$ by $s_j$. Symbolically, $(AS)_{i,j} = a_{i,j} s_j$ for $j = 1, \ldots, n$.

Symmetric Matrices

A matrix $A = [a_{ij}]$ is symmetric if $A = A^T$. In other words, $a_{i,j} = a_{j,i}$ for all $i, j$. An example of a symmetric matrix is

$$\begin{bmatrix} 1 & 2 & 3 \\ 2 & 4 & 5 \\ 3 & 5 & 6 \end{bmatrix}.$$

Notice the geometric relationship for a typical pair $a_{i,j}$ and $a_{j,i}$ within the matrix $A$: the two entries occupy mirror-image positions across the main diagonal. This means that the part of the matrix above the diagonal is the mirror image of the part below the diagonal and vice versa. Another way of saying this: if you flip the matrix over using the main diagonal as an axis, the matrix is unchanged.
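A small sketch of the theorem (NumPy assumed, with an arbitrary choice of $A$): left multiplication by a diagonal matrix scales the rows, right multiplication scales the columns.

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])
r = np.array([2.0, 3.0, 5.0])
s = np.array([-1.0, 4.0, 0.5])

# RA multiplies the i-th row of A by r_i ...
assert np.allclose(np.diag(r) @ A, r[:, None] * A)
# ... while AS multiplies the j-th column of A by s_j.
assert np.allclose(A @ np.diag(s), A * s[None, :])
```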

Example: $AA^T$ and $A^TA$

The following example gives a method of constructing symmetric matrices: for any matrix $A$, both $AA^T$ and $A^TA$ are symmetric.

Proof: Remember that $(A^T)^T = A$ and $(AB)^T = B^T A^T$. Hence $(AA^T)^T = (A^T)^T A^T = AA^T$, which is precisely the condition necessary for $AA^T$ to be symmetric. The proof for $A^TA$ is essentially identical. Another way of seeing the symmetry is to observe that $(AA^T)_{i,j}$ and $(AA^T)_{j,i}$ are both equal to the inner product of the rows $R_i$ and $R_j$ of $A$. Computing both products for any particular $A$ yields two matrices, both of which are visibly symmetric.

Example: Evaluating $A^n$

Let $A = U = \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}$. We wish to evaluate $A^n$. Notice that $U^2 = 2I$, so the even powers are $U^{2k} = (U^2)^k = 2^k I$. Notice also that the odd powers are $U^{2k+1} = U^{2k} U = 2^k U$. Hence $A^n = 2^{n/2} I$ when $n$ is even, and $A^n = 2^{(n-1)/2} A$ when $n$ is odd.
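Both examples can be checked numerically. In the sketch below (NumPy assumed), the rectangular matrix $A$ is an arbitrary choice, and $U = \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}$ is the symmetric matrix with $U^2 = 2I$ used above.

```python
import numpy as np

# Any A yields two symmetric matrices: AA^T and A^T A.
A = np.array([[1.0, 2.0, 0.0],
              [3.0, -1.0, 4.0]])
assert np.allclose(A @ A.T, (A @ A.T).T)
assert np.allclose(A.T @ A, (A.T @ A).T)

# The A^n trick: U^2 = 2I, so even powers are 2^k I
# and odd powers are 2^k U.
U = np.array([[1.0, 1.0], [1.0, -1.0]])
assert np.allclose(U @ U, 2 * np.eye(2))
assert np.allclose(np.linalg.matrix_power(U, 6), 2**3 * np.eye(2))
assert np.allclose(np.linalg.matrix_power(U, 7), 2**3 * U)
```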

Theorem (Properties of Symmetric Matrices): If $A$ and $B$ are symmetric matrices of order $n$, then:

$A^T$ is symmetric;
$A + B$ and $A - B$ are symmetric;
$rA$ is symmetric for any real number $r$;
if $A$ is invertible, then $A^{-1}$ is symmetric.

Proofs: Remember that $(A^{-1})^T = (A^T)^{-1}$. Since $A = A^T$ and $B = B^T$, it follows that:

$(A^T)^T = A = A^T$, so $A^T$ is symmetric;
$(A + B)^T = A^T + B^T = A + B$ and $(A - B)^T = A^T - B^T = A - B$;
$(rA)^T = r(A^T) = rA$;
$(A^{-1})^T = (A^T)^{-1} = A^{-1}$, so $A^{-1}$ is symmetric.
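A quick numerical confirmation of these closure properties, with $A$ and $B$ chosen as arbitrary small symmetric matrices (NumPy assumed):

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 3.0]])   # symmetric and invertible
B = np.array([[0.0, 5.0], [5.0, -1.0]])  # symmetric

def is_symmetric(M):
    return np.allclose(M, M.T)

# Closure properties from the theorem above.
assert is_symmetric(A + B) and is_symmetric(A - B)
assert is_symmetric(2.5 * A)
assert is_symmetric(np.linalg.inv(A))
```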

Triangular Matrices

Triangular matrices come in two varieties: upper triangular and lower triangular. An upper triangular matrix is zero below the diagonal. This means the matrix looks like

$$\begin{bmatrix} * & * & \cdots & * \\ 0 & * & \cdots & * \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & * \end{bmatrix},$$

where $*$ may be zero or nonzero. Viewed another way, if $A = [a_{ij}]$ then $A$ is upper triangular if $a_{ij} = 0$ whenever $i > j$. Similarly, a lower triangular matrix is zero above the diagonal. This means the matrix looks like

$$\begin{bmatrix} * & 0 & \cdots & 0 \\ * & * & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ * & * & \cdots & * \end{bmatrix},$$

where $*$ may be zero or nonzero. In this case $A$ is lower triangular if $a_{ij} = 0$ whenever $i < j$. A matrix is triangular if it is either upper triangular or lower triangular.

Triangularity and Transpose

Theorem (Transpose of Triangular Matrices):

The transpose of an upper triangular matrix is lower triangular.
The transpose of a lower triangular matrix is upper triangular.
The transpose of a triangular matrix is triangular.

Proofs: The $i$-$j$ entry of $A$ is the $j$-$i$ entry of $A^T$. If $A$ is lower triangular, then the $i$-$j$ entry of $A$ is zero if $i < j$, hence so is the $j$-$i$ entry of $A^T$. This means that $A^T$ is upper triangular. A similar argument holds if $A$ is upper triangular, and the general triangular case follows from the other two. Informally: flipping the matrix over its main diagonal carries the triangle of possibly nonzero entries from one side of the diagonal to the other.
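A sketch of the transpose theorem on a concrete upper triangular matrix (NumPy assumed); the two predicates below implement the subscript conditions from the definitions:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [0.0, 4.0, 5.0],
              [0.0, 0.0, 6.0]])  # upper triangular

def is_upper(M):
    # Upper triangular: a_ij = 0 whenever i > j.
    return all(M[i, j] == 0 for i in range(M.shape[0])
               for j in range(M.shape[1]) if i > j)

def is_lower(M):
    # Lower triangular: a_ij = 0 whenever i < j.
    return all(M[i, j] == 0 for i in range(M.shape[0])
               for j in range(M.shape[1]) if i < j)

# Transposing swaps upper and lower triangularity.
assert is_upper(A) and is_lower(A.T)
```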

Triangularity and Inverses

Theorem (Products and Inverses of Upper Triangular Matrices): Suppose $A$ and $B$ are two square matrices of order $n$. Then:

If $A$ and $B$ are upper triangular, then so is $AB$.
An upper triangular $A$ is invertible if and only if each diagonal entry is nonzero.
If $A$ is upper triangular and invertible, then so is $A^{-1}$.

Informally: products and inverses of upper triangular matrices are again upper triangular.

Proofs: If $A$ and $B$ are both upper triangular, then $a_{ij} = b_{ij} = 0$ whenever $i > j$. Now consider the $i$-$j$ entry in the product $AB$:

$$(AB)_{i,j} = \sum_{k=1}^{n} a_{i,k} b_{k,j}.$$

The upper triangularity of $A$ and $B$ implies that $a_{i,k} = 0$ when $i > k$ and $b_{k,j} = 0$ when $k > j$, so for a term of this sum to be nonzero we need $i \le k \le j$, and hence $i \le j$. This means that $(AB)_{i,j} = 0$ whenever $i > j$, so $AB$ is upper triangular.

Next, suppose $A$ is upper triangular and we want to solve $AX = I$. The matrix itself is already in the form for Gaussian elimination! If all the diagonal entries are nonzero, we can back substitute and solve $AX = I$. Alternatively, we can change each diagonal entry to 1 with an elementary row operation and change all entries above the diagonal to zero to get $I_n$ as the reduced row echelon form. In either case, $A$ is invertible. On the other hand, if there is a zero on the diagonal, then there is a free variable, and $A$ is not invertible.

Finally, suppose that $A$ is upper triangular and invertible. This means we can start with the matrix $[A \mid I]$ and, by elementary row operations, reduce it to $[I \mid A^{-1}]$. For each diagonal element $d_i$ we have $d_i \neq 0$, hence the elementary row operations $R_i \to (1/d_i) R_i$ for $i = 1, \ldots, n$ change the diagonal entries to 1. To change an entry above the diagonal, say $a_{i,j}$, to zero we use $R_i \to R_i - a_{i,j} R_j$. Note that $i < j$ since the entry is above the diagonal. The entries below the diagonal are already zero. What is the effect on the right side of $[A \mid I]$ when we follow this process to reduce $A$ to $I$? The first part just multiplies rows by scalars and changes the diagonal entries of the right side to $1/d_i$. The second part, since $i < j$, just changes entries that are above the diagonal. Hence the right side of $[I \mid A^{-1}]$ must be upper triangular, and the proof of the theorem is complete.
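Finally, a sketch checking the theorem on concrete upper triangular matrices (NumPy assumed; np.triu zeroes out the entries below the diagonal, so comparing $M$ against np.triu(M) tests upper triangularity):

```python
import numpy as np

A = np.array([[2.0, 1.0, 3.0],
              [0.0, 4.0, 5.0],
              [0.0, 0.0, 6.0]])
B = np.array([[1.0, -2.0, 0.0],
              [0.0, 3.0, 7.0],
              [0.0, 0.0, 5.0]])

def is_upper(M):
    return np.allclose(M, np.triu(M))

# Products of upper triangular matrices are upper triangular.
assert is_upper(A @ B)
# A is invertible (all diagonal entries nonzero), and its
# inverse is again upper triangular.
assert np.all(np.diag(A) != 0)
assert is_upper(np.linalg.inv(A))
```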