
MAT 2038 LINEAR ALGEBRA II 15.05.2015
Dokuz Eylül University, Faculty of Science, Department of Mathematics
Instructor: Engin Mermut    Course assistant: Zübeyir Türkoğlu
web: http://kisi.deu.edu.tr/engin.mermut/

SELF-ADJOINT (= HERMITIAN), UNITARY AND NORMAL MATRICES AND OPERATORS, AND THE SPECTRAL THEOREM

See the lecture notes for the proofs of all of the results below that are given without proof.

Remember that if we identify 1×1 complex matrices with complex numbers (that is, for every c ∈ C we identify the 1×1 complex matrix [c] with the complex number c), then the standard inner product on the vector space C^n over the field C of complex numbers can be expressed, for all column vectors X = (x_1, x_2, ..., x_n)^T and Y = (y_1, y_2, ..., y_n)^T in C^n, using the matrix product and the adjoint (= conjugate transpose) of matrices:

    <X, Y> = Y* X = (the product of the 1×n matrix Y* and the n×1 matrix X),

where Y* = conj(Y)^T = [conj(y_1) conj(y_2) ... conj(y_n)]. This holds because <X, Y> = x_1 conj(y_1) + x_2 conj(y_2) + ... + x_n conj(y_n) and

    Y* X = conj(Y)^T X = [conj(y_1) conj(y_2) ... conj(y_n)] (x_1, x_2, ..., x_n)^T = conj(y_1) x_1 + conj(y_2) x_2 + ... + conj(y_n) x_n = x_1 conj(y_1) + x_2 conj(y_2) + ... + x_n conj(y_n).

Remember that for an n×n complex matrix A = [a_ij], we define:

(i) conj(A) = [conj(a_ij)] = (the conjugate of A, obtained by taking the conjugate of every entry of A), and

(ii) A* = conj(A)^T = conj(A^T) = (the adjoint of A) = (the conjugate transpose of A) = [b_ij], where b_ij = conj(a_ji).

The main property of the adjoint (= conjugate transpose) of matrices is its relation with the standard inner product on C^n: for all vectors X, Y in the vector space C^n over the field C,

    <AX, Y> = <X, A*Y>.

Let A = [a_ij] be an n×n complex square matrix, that is, an n×n matrix with complex number entries.

(i) The matrix A is said to be a self-adjoint (= Hermitian) matrix if A* = A.
If A is a self-adjoint (= Hermitian) matrix, then by the above property of the adjoint (= conjugate transpose) of matrices we have, for all vectors X, Y in the vector space C^n over the field C,

    <AX, Y> = <X, AY>,    if A* = A.

(ii) The matrix A is said to be a unitary matrix if

    A* A = I,    where I is the n×n identity matrix,

equivalently, if A is an invertible matrix with inverse A^{-1} = A*. Observe that:

    A is a unitary matrix  ⇔  the columns of A form an orthonormal basis for C^n with its standard inner product.

(iii) The matrix A is said to be a normal matrix if A and A* commute, that is, if A* A = A A*.

Firstly observe that self-adjoint (= Hermitian) matrices and unitary matrices are normal matrices; explain why.

Self-adjoint and unitary matrices generalize symmetric matrices and orthogonal matrices from the real case: observe that if A is an n×n real square matrix, that is, if all entries of A are real numbers, then conj(A) = A and so A* = A^T, which gives us that:

(i) When A is a real matrix: A is a self-adjoint matrix  ⇔  A^T = A, that is, A is a symmetric matrix.

(ii) When A is a real matrix: A is a unitary matrix  ⇔  A^T A = I, that is, A is an orthogonal matrix.
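As a quick numerical sketch of these definitions (an illustration added here, not part of the original notes; the helper names `adjoint` and `is_normal` are ours), NumPy can verify that Hermitian and unitary matrices are both normal:

```python
import numpy as np

rng = np.random.default_rng(0)

def adjoint(A):
    # A* = conjugate transpose of A
    return A.conj().T

def is_normal(A):
    # A is normal iff A* A = A A*
    return np.allclose(adjoint(A) @ A, A @ adjoint(A))

# A random Hermitian matrix: H = B + B* satisfies H* = H.
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
H = B + adjoint(B)
assert np.allclose(adjoint(H), H)

# A unitary matrix from the QR factorization of a random matrix: Q* Q = I.
Q, _ = np.linalg.qr(B)
assert np.allclose(adjoint(Q) @ Q, np.eye(3))

# Both are normal.
print(is_normal(H), is_normal(Q))  # True True
```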

Let V be a finite-dimensional inner product space over C, that is, V is a finite-dimensional vector space over the field C of complex numbers with an inner product over C: a function <·,·> : V × V → C that assigns to each pair u, v ∈ V a complex number <u, v> such that the following properties hold for all u, v, w ∈ V and c ∈ C:

(i) <v, u> = conj(<u, v>).

(ii) <cu, v> = c <u, v> (and then by (i), we obtain <u, cv> = conj(c) <u, v>).

(iii) <u + v, w> = <u, w> + <v, w> (and then by (i), we obtain <u, v + w> = <u, v> + <u, w>).

(iv) <v, v> ≥ 0 for all v ∈ V (note that by (i), <v, v> = conj(<v, v>), so <v, v> is a real number), and for each v ∈ V, <v, v> = 0 ⇔ v = 0 = (the zero vector of V).

By the last property (iv), since <v, v> is a nonnegative real number, we can define the norm of a vector v ∈ V to be ||v|| = sqrt(<v, v>). Thus property (iv) gives ||v|| = 0 ⇔ v = 0 = (the zero vector of V).

Question: Given a linear operator T : V → V, that is, a linear transformation from the vector space V to itself, under which conditions does the finite-dimensional inner product space V over C have an orthonormal basis consisting of eigenvectors of T? That is, when does there exist an orthonormal basis B for V such that [T]_B is a diagonal matrix?

We shall first consider this problem concretely for matrices. What we know in the real case is:

Theorem 1. Spectral Theorem for Symmetric Matrices over R. Let A be an n×n matrix with real number entries. Then A is orthogonally diagonalizable (that is, there exists an orthogonal n×n real matrix Q such that Q^{-1} A Q = Q^T A Q = D is a diagonal matrix) if and only if A^T = A, that is, A is a symmetric matrix.

The analog of this theorem for complex matrices is not obtained by simply replacing symmetric matrices with self-adjoint (= Hermitian) matrices; as we shall see below, for a complex square matrix A, being unitarily diagonalizable turns out to be equivalent to the matrix being normal.
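Theorem 1 can be illustrated numerically (a sketch added here, not part of the original notes): NumPy's `eigh` routine returns an orthogonal matrix Q of eigenvectors of a real symmetric matrix, and Q^T A Q is then diagonal.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# A random real symmetric matrix: A^T = A.
M = rng.standard_normal((n, n))
A = M + M.T

# np.linalg.eigh returns eigenvalues and an orthogonal eigenvector matrix Q.
eigvals, Q = np.linalg.eigh(A)

# Q is orthogonal: Q^T Q = I, so Q^{-1} = Q^T.
assert np.allclose(Q.T @ Q, np.eye(n))

# Q^T A Q is the diagonal matrix of eigenvalues.
D = Q.T @ A @ Q
assert np.allclose(D, np.diag(eigvals))
print("A is orthogonally diagonalizable")
```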
Let A and B be n×n complex matrices. We say that B is unitarily equivalent to A if there exists an n×n unitary matrix U such that B = U^{-1} A U = U* A U. We say that A is unitarily diagonalizable if a diagonal matrix D is unitarily equivalent to A, that is, if U^{-1} A U = U* A U = D is a diagonal matrix for some n×n unitary matrix U.

To prove the Spectral Theorem for Normal Matrices, we shall use the following lemmas:

Lemma 2. (i) Diagonal n×n complex matrices are normal matrices. (ii) If A is an n×n complex matrix that is unitarily diagonalizable, then A is a normal matrix, that is, A and A* commute: A* A = A A*.

Triangular normal matrices are just the diagonal matrices:

Lemma 3. If A is an n×n complex matrix that is upper triangular or lower triangular, then: A is a normal matrix ⇔ A is a diagonal matrix.

Using Schur's Theorem, which we shall prove after we see the adjoint of linear operators, we obtain:

Lemma 4. If A is an n×n complex matrix, then there exists a unitary matrix U such that U^{-1} A U = U* A U is an upper triangular matrix.

The promised theorem characterizing normal matrices, the essence of this notion, is then:

Theorem 5. The Spectral Theorem for Normal Matrices over C. For every n×n complex matrix A, we have: A is unitarily diagonalizable ⇔ A is a normal matrix, that is, A* A = A A*.

The above-mentioned Schur's Theorem says the following; prove it following the steps given in the lectures.

Theorem 6. Schur's Theorem. Let V be a finite-dimensional complex inner product space and let T : V → V be a linear operator. Then there exists an orthonormal basis B for V such that the matrix [T]_B is upper triangular.

For an n×n matrix A, applying Schur's Theorem to the linear operator µ_A : C^n → C^n defined by µ_A(x) = Ax for all x ∈ C^n gives the above Lemma 4. Explain how, using the Change-of-Basis Formula.
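The easy direction of Theorem 5, which is Lemma 2(ii), can be checked numerically (a sketch added here, not part of the original notes): build A = U D U* from a unitary U and a complex diagonal D, and verify that A commutes with A*.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4

# Build a unitary U as the Q-factor of a random complex matrix.
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
U, _ = np.linalg.qr(M)

# Any diagonal matrix with complex entries is normal (Lemma 2(i)).
D = np.diag(rng.standard_normal(n) + 1j * rng.standard_normal(n))

# A is unitarily diagonalizable by construction: U* A U = D.
A = U @ D @ U.conj().T

# Lemma 2(ii): A must then be normal, i.e. A* A = A A*.
print(np.allclose(A.conj().T @ A, A @ A.conj().T))  # True
```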

We shall now extend the concepts self-adjoint (= Hermitian), unitary and normal from matrices to linear operators. First, we shall see how to define the adjoint T* : V → V of a linear operator T : V → V on a finite-dimensional inner product space V. The idea we shall use is the main property of the adjoint (= conjugate transpose) of matrices: for all vectors X, Y in the vector space C^n over the field C,

    <AX, Y> = <X, A*Y>,

where <·,·> denotes the standard inner product on C^n. To treat inner product spaces over R and over C simultaneously, let V be a finite-dimensional inner product space over the field F, where F = R or F = C.

For each fixed y ∈ V, the function h_y : V → F defined by h_y(v) = <v, y> for all v ∈ V is a linear transformation. Moreover, every linear transformation from V to F is necessarily of this form:

Theorem 7. Let V be a finite-dimensional inner product space over the field F, where F = R or F = C. If g : V → F is a linear transformation, then there exists a unique y ∈ V such that g(v) = <v, y> for all v ∈ V.

Proof. If B = {v_1, v_2, ..., v_n} is an orthonormal basis for V, check that y = Σ_{i=1}^{n} conj(g(v_i)) v_i works, by showing that the linear transformation h_y : V → F defined by h_y(v) = <v, y> for all v ∈ V satisfies h_y(v_j) = g(v_j) for j = 1, 2, ..., n, and so h_y = g since they agree at every basis element in B. Complete the details of the proof.

For the proof of the uniqueness part of this theorem, we use:

Lemma 8. Let V be an inner product space over the field F, where F = R or F = C. Let y, z ∈ V. If <x, y> = <x, z> for all x ∈ V, then y = z.

Using the above theorem, we shall be able to define the adjoint of a linear operator.

Theorem 9. Definition of the adjoint T* : V → V of a linear operator T : V → V on a finite-dimensional inner product space V. Let V be a finite-dimensional inner product space over the field F, where F = R or F = C. Let T : V → V be a linear operator on V.
Then there exists a unique function, denoted by T* : V → V, such that

    <T(x), y> = <x, T*(y)>    for all x, y ∈ V.

Furthermore, the function T* : V → V is a linear transformation, and it is called the adjoint of the linear operator T : V → V. We also have <x, T(y)> = <T*(x), y> for all x, y ∈ V.

Proof. Define the function T* : V → V for each y ∈ V as follows. The function g : V → F, defined for all x ∈ V by g(x) = <T(x), y>, is a linear transformation, and so by the above theorem there exists a unique vector y' ∈ V such that g(x) = <x, y'> for all x ∈ V. We define T*(y) = y'. Thus we obtain <T(x), y> = <x, T*(y)> for every x ∈ V and y ∈ V. Uniqueness of such a function follows from the above lemma. Now show that T* : V → V is a linear transformation. Complete the details of the proof.

Theorem 10. With the hypothesis as in the above theorem, if B = {v_1, v_2, ..., v_n} is an orthonormal basis for V, then [T*]_B = ([T]_B)*, that is, if A = [T]_B, then [T*]_B = A* = conj(A)^T.

Proof. If B = [b_ij] = [T*]_B and A = [a_ij] = [T]_B, then show that for all i, j ∈ {1, 2, ..., n},

    a_ij = <T(v_j), v_i>  and  b_ij = <T*(v_j), v_i>,

and obtain

    b_ij = <T*(v_j), v_i> = conj(<v_i, T*(v_j)>) = conj(<T(v_i), v_j>) = conj(a_ji).

This means B = A*. Complete the details of the proof.

Proposition 11. If A is an n×n matrix over the field F, where F = R or F = C, and the linear transformations µ_A : F^n → F^n and µ_{A*} : F^n → F^n are defined by µ_A(x) = Ax and µ_{A*}(x) = A*x for all x ∈ F^n, then

    (µ_A)* = µ_{A*},

that is, µ_{A*} : F^n → F^n is the adjoint of the linear operator µ_A : F^n → F^n, where we take the standard inner product on F^n.

Proof. In the standard orthonormal basis E = {e_1, e_2, ..., e_n} for F^n, the above theorem gives

    [(µ_A)*]_E = ([µ_A]_E)* = A* = [µ_{A*}]_E,

which implies that (µ_A)* = µ_{A*}.
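The defining property of the adjoint, <Ax, y> = <x, A*y>, can be spot-checked numerically (a sketch added here, not part of the original notes; the helper `inner` implements the standard inner product <x, y> = y* x from the first page):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

def inner(x, y):
    # Standard inner product on C^n: <x, y> = y* x.
    return y.conj() @ x

x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
y = rng.standard_normal(n) + 1j * rng.standard_normal(n)

A_star = A.conj().T
# Defining property of the adjoint: <Ax, y> = <x, A* y>.
print(np.isclose(inner(A @ x, y), inner(x, A_star @ y)))  # True
```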

Theorem 12. Let V be a finite-dimensional inner product space over the field F, where F = R or F = C. For all linear operators T : V → V and U : V → V, and for all c ∈ F, we have:

(i) (T + U)* = T* + U*.

(ii) (cT)* = conj(c) T*.

(iii) (TU)* = U* T*, where TU = T ∘ U means the composition of the functions T and U, and similarly for UT; remember this is the algebra structure of End_F(V) = Hom_F(V, V), which consists of all linear operators on V: it is both a vector space over F and a ring whose multiplication of linear operators on V is just their composition, and these two structures are related by (cT)U = T(cU) = c(TU).

(iv) (T*)* = T.

(v) I* = I, where I : V → V is the identity linear operator defined by I(v) = v for all v ∈ V.

Proof. Prove all of these using Theorem 9 (which gives the defining property of the adjoint of a linear operator) and Lemma 8.

Let V be a finite-dimensional inner product space over the field F, where F = R or F = C. Say n = dim(V). Let T : V → V be a linear operator. Using the notion of the adjoint T* : V → V of the linear operator T : V → V, we define self-adjoint (= Hermitian), unitary and normal operators just as for matrices:

(i) The linear operator T : V → V is said to be a self-adjoint (= Hermitian) operator if T* = T. Note then that

    <T(x), y> = <x, T(y)>    for all x, y ∈ V,

if T : V → V is a self-adjoint operator, that is, if T* = T.

(ii) The linear operator T : V → V is said to be a unitary operator if T* T = I, that is, the composition T* ∘ T = I, where I : V → V is the identity linear operator defined by I(v) = v for all v ∈ V.

(iii) The linear operator T : V → V is said to be a normal operator if T and T* commute, that is, if T* T = T T*, which means the equality of the compositions T* ∘ T = T ∘ T*.

Firstly observe that self-adjoint (= Hermitian) operators and unitary operators are normal operators; explain why.
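The identities of Theorem 12 translate directly to matrices via Theorem 10, so they can be spot-checked numerically (a sketch added here, not part of the original notes; `adj` is our shorthand for the conjugate transpose):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 3

def adj(X):
    # Matrix adjoint: X* = conjugate transpose.
    return X.conj().T

A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
c = 2 - 3j

assert np.allclose(adj(A + B), adj(A) + adj(B))      # (T + U)* = T* + U*
assert np.allclose(adj(c * A), np.conj(c) * adj(A))  # (cT)* = conj(c) T*
assert np.allclose(adj(A @ B), adj(B) @ adj(A))      # (TU)* = U* T*
assert np.allclose(adj(adj(A)), A)                   # (T*)* = T
assert np.allclose(adj(np.eye(n)), np.eye(n))        # I* = I
print("adjoint identities verified")
```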
In terms of the matrix A = [T]_B representing the linear operator T : V → V in any ORTHONORMAL BASIS B = {v_1, v_2, ..., v_n} for V, we have that:

(i) T : V → V is a self-adjoint (= Hermitian) operator ⇔ the matrix A = [T]_B is a self-adjoint (= Hermitian) matrix.

(ii) T : V → V is a unitary operator ⇔ the matrix A = [T]_B is a unitary matrix.

(iii) T : V → V is a normal operator ⇔ the matrix A = [T]_B is a normal matrix.

Be careful: for these equivalences, we need the basis B = {v_1, v_2, ..., v_n} to be an orthonormal basis for V. To prove them, use Theorem 10; for example,

    T : V → V is a self-adjoint (= Hermitian) operator ⇔ T* = T ⇔ [T*]_B = [T]_B ⇔ ([T]_B)* = [T]_B ⇔ A* = A ⇔ A is a self-adjoint (= Hermitian) matrix.

Using the above characterization in terms of matrices, now use the Spectral Theorem for Normal Matrices over C to prove:

Theorem 13. The Spectral Theorem for Normal Operators on a finite-dimensional inner product space over C. Let T : V → V be a linear operator, where V is a finite-dimensional inner product space over the field C of complex numbers. Then T : V → V is a normal operator if and only if there exists an orthonormal basis B for V consisting of eigenvectors of T, that is, if and only if there exists an orthonormal basis B for V such that the matrix [T]_B is a diagonal matrix (whose diagonal entries are necessarily the eigenvalues of T, counted with their algebraic multiplicities).
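Theorem 13 can be glimpsed numerically (a sketch added here, not part of the original notes): for a normal matrix whose eigenvalues are distinct, the eigenspaces are one-dimensional and mutually orthogonal by Theorem 15(iv) below, so the unit eigenvectors returned by `np.linalg.eig` already form an orthonormal basis, up to rounding error.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 3

# A normal matrix with distinct eigenvalues: A = U D U* for unitary U.
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
U, _ = np.linalg.qr(M)
D = np.diag(np.array([1 + 1j, 2 - 1j, 3 + 2j]))
A = U @ D @ U.conj().T

assert np.allclose(A.conj().T @ A, A @ A.conj().T)  # A is normal

# The unit-norm eigenvector columns from eig are mutually orthogonal,
# so they form an orthonormal basis of C^3, as Theorem 13 predicts.
eigvals, V = np.linalg.eig(A)
assert np.allclose(V.conj().T @ V, np.eye(n), atol=1e-6)
print("orthonormal eigenbasis found")
```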

Similarly, using the Spectral Theorem for Symmetric Matrices over R, prove:

Theorem 14. The Spectral Theorem for Self-adjoint Operators on a finite-dimensional inner product space over R. Let T : V → V be a linear operator, where V is a finite-dimensional inner product space over the field R of real numbers. Then T : V → V is a self-adjoint (= Hermitian) operator if and only if there exists an orthonormal basis B for V consisting of eigenvectors of T, that is, if and only if there exists an orthonormal basis B for V such that the matrix [T]_B is a diagonal matrix (whose diagonal entries are necessarily the eigenvalues of T, counted with their algebraic multiplicities).

The theorems below give some main properties of self-adjoint (= Hermitian) operators, unitary operators and normal operators; their proofs are left to you as Homework Problems.

Theorem 15. Properties of Normal Operators. Let V be a finite-dimensional inner product space over the field F, where F = R or F = C. Let T : V → V be a NORMAL OPERATOR on V. Then we have:

(i) ||T(v)|| = ||T*(v)|| for every v ∈ V.

(ii) T − cI is also a normal operator for every c ∈ F, where I : V → V is the identity linear operator defined by I(v) = v for all v ∈ V.

(iii) If v ∈ V is an eigenvector of T with eigenvalue λ ∈ F, that is, if 0 ≠ v ∈ V and T(v) = λv, then v is also an eigenvector of T* with eigenvalue conj(λ), that is, T*(v) = conj(λ) v.

(iv) If λ_1 and λ_2 in F are distinct eigenvalues of the linear operator T : V → V, then the eigenspaces E(λ_1) = Ker(T − λ_1 I) and E(λ_2) = Ker(T − λ_2 I) are orthogonal subspaces, that is, if v_1 is an eigenvector of the linear operator T with eigenvalue λ_1 and v_2 is an eigenvector of T with eigenvalue λ_2, then <v_1, v_2> = 0.

Proposition 16. A characterization of normal operators. Let V be a finite-dimensional inner product space over the field F, where F = R or F = C. Let T : V → V be a linear operator on V. Then: T : V → V is a normal operator ⇔ ||T(v)|| = ||T*(v)|| for every v ∈ V.

Theorem 17.
Eigenvalues of self-adjoint operators are always real numbers. Let V be a finite-dimensional inner product space over the field F, where F = R or F = C. Let T : V → V be a SELF-ADJOINT (= HERMITIAN) OPERATOR on V. Then:

(i) Every eigenvalue of T in F is a real number.

(ii) The characteristic polynomial p(t) of T has the form p(t) = (−1)^n (t − λ_1)(t − λ_2) ··· (t − λ_n) for some real numbers λ_1, λ_2, ..., λ_n, where λ_1, λ_2, ..., λ_n are not necessarily distinct.

Theorem 18. Characterizations of unitary operators. Let V be a finite-dimensional inner product space over the field F, where F = R or F = C. The following are equivalent for a linear operator T : V → V on V:

(i) T : V → V is a UNITARY OPERATOR, that is, T* T = I, where I : V → V is the identity linear operator defined by I(v) = v for all v ∈ V.

(ii) T* T = T T* = I, where I : V → V is the identity linear operator.

(iii) T : V → V is an invertible linear operator with T^{-1} = T*.

(iv) T : V → V preserves the inner product, that is, <T(x), T(y)> = <x, y> for all x, y ∈ V.

(v) T : V → V sends every orthonormal basis of V to an orthonormal basis, that is, T(B) = {T(v_1), T(v_2), ..., T(v_n)} is an orthonormal basis for V for every orthonormal basis B = {v_1, v_2, ..., v_n} for V.

(vi) There exists an orthonormal basis B = {v_1, v_2, ..., v_n} for V such that T(B) = {T(v_1), T(v_2), ..., T(v_n)} is also an orthonormal basis for V.

(vii) T : V → V preserves the norm, that is, ||T(x)|| = ||x|| for every x ∈ V.

To prove (vii) ⇒ (iv) in the above theorem, first prove the Polarization Identities, which express the inner product in terms of the norm in an inner product space:

(i) Polarization identity for REAL inner product spaces. If V is an inner product space over the field R of real numbers, then for every u, v ∈ V,

    <u, v> = (1/4) [ ||u + v||² − ||u − v||² ].

(ii) Polarization identity for COMPLEX inner product spaces. If V is an inner product space over the field C of complex numbers, then for every u, v ∈ V,

    <u, v> = (1/4) Σ_{k=1}^{4} i^k ||u + i^k v||² = (1/4) [ ||u + v||² − ||u − v||² ] + (i/4) [ ||u + iv||² − ||u − iv||² ].

Proposition 19. A characterization of self-adjoint operators on complex inner product spaces. Let V be a finite-dimensional inner product space over the field C of complex numbers and let T : V → V be a linear operator on V. Then T : V → V is a self-adjoint (= Hermitian) operator if and only if <T(v), v> is a real number for every v ∈ V.

Theorem 20. Let V be a finite-dimensional inner product space over the field C of complex numbers. The following are equivalent for a linear operator T : V → V on V:

(i) T = proj_W : V → V is the orthogonal projection onto a subspace W of V, that is, for each b ∈ V, T(b) = p is the unique vector p ∈ W such that b − p ∈ W^⊥.

(ii) T² = T = T*.

Remember that the Spectral Theorem for Symmetric Matrices over R was also formulated using projections; in terms of projections, we state the Spectral Theorem for normal operators over C and self-adjoint operators over R as follows:

Theorem 21. THE SPECTRAL THEOREM. Let V be a finite-dimensional inner product space over the field F, where F = R or F = C. Say n = dim(V). Let T : V → V be a linear operator on V. Suppose that T : V → V is a normal operator if F = C, and that T : V → V is a self-adjoint operator if F = R. Then the characteristic polynomial p(t) of T : V → V has the form

    p(t) = (−1)^n (t − λ_1)^{m_1} (t − λ_2)^{m_2} ··· (t − λ_k)^{m_k}

for some k ∈ Z⁺ and distinct eigenvalues λ_1, λ_2, ..., λ_k of T : V → V in F with algebraic multiplicities m_1, m_2, ..., m_k ∈ Z⁺.
For each i = 1, 2, ..., k, let W_i = Ker(T − λ_i I) be the eigenspace of T : V → V corresponding to the eigenvalue λ_i, and let T_i = proj_{W_i} : V → V be the orthogonal projection map onto the subspace W_i. Then we have the following:

(i) V = W_1 ⊕ W_2 ⊕ ··· ⊕ W_k is the direct sum of its subspaces W_1, W_2, ..., W_k.

(ii) For each i = 1, 2, ..., k, if W_i' = ⊕_{j≠i} W_j is the direct sum of the subspaces W_j for j ∈ {1, 2, ..., k} \ {i}, then we have W_i^⊥ = W_i'.

(iii) For every i, j ∈ {1, 2, ..., k}, T_i T_j = δ_ij T_i, where δ_ij = 0 if i ≠ j, and δ_ij = 1 if i = j.

(iv) I = T_1 + T_2 + ··· + T_k, where I : V → V is the identity linear operator defined by I(v) = v for all v ∈ V.

(v) T = λ_1 T_1 + λ_2 T_2 + ··· + λ_k T_k.

In the above theorem:

(i) The set {λ_1, λ_2, ..., λ_k} of eigenvalues of T : V → V is called the spectrum of T : V → V.

(ii) The sum I = T_1 + T_2 + ··· + T_k is called the resolution of the identity operator induced by T : V → V.

(iii) The sum T = λ_1 T_1 + λ_2 T_2 + ··· + λ_k T_k is called the spectral decomposition of T : V → V.

For further results about operators, and for exercises, see Chapter 6 of the following Supplementary Textbook:

Friedberg, S. H., Insel, A. J., and Spence, L. E. Linear Algebra. 4th edition.
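The conclusions (iii), (iv) and (v) of Theorem 21 can be verified numerically for a real symmetric matrix (a sketch added here, not part of the original notes): the orthogonal projection onto the span of a unit eigenvector q is the rank-one matrix q q^T, and summing the weighted projections recovers the matrix.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4

# A real symmetric matrix: the operator µ_A is self-adjoint on R^n.
M = rng.standard_normal((n, n))
A = M + M.T

# Orthonormal eigenbasis from eigh (for a random symmetric matrix the
# eigenvalues are distinct, so each eigenspace is one-dimensional).
eigvals, Q = np.linalg.eigh(A)

# Orthogonal projection onto each eigenspace: P_i = q_i q_i^T.
projs = [np.outer(Q[:, i], Q[:, i]) for i in range(n)]

# (iii) P_i P_j = δ_ij P_i, (iv) Σ P_i = I, (v) A = Σ λ_i P_i.
assert all(np.allclose(projs[i] @ projs[j],
                       projs[i] if i == j else np.zeros((n, n)))
           for i in range(n) for j in range(n))
assert np.allclose(sum(projs), np.eye(n))
assert np.allclose(sum(l * P for l, P in zip(eigvals, projs)), A)
print("spectral decomposition verified")
```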