Complex Linear Algebra


p. 1/31 Complex Linear Algebra

The basic mathematical objects in quantum mechanics are state vectors and linear operators (matrices). Because the theory is fundamentally linear, and the probability amplitudes are complex numbers, the mathematics underlying quantum mechanics is complex linear algebra. The vectors are members of a complex vector space, or Hilbert space, with an associated inner product. It is important to remember that these abstract mathematical objects represent physical things, and should be thought of as independent of any particular representation obtained by choosing a basis.

p. 2/31 Dirac notation

State vectors in quantum mechanics are written in Dirac notation. The basic object is the ket-vector $|\psi\rangle$, which (given a particular basis) can be represented as a column vector. The adjoint of a ket-vector is a bra-vector $\langle\psi|$, represented as a row vector:
$$|\psi\rangle = \begin{pmatrix} \alpha_1 \\ \vdots \\ \alpha_N \end{pmatrix}, \qquad \langle\psi| = (\alpha_1^* \,\cdots\, \alpha_N^*) = |\psi\rangle^\dagger.$$
If the vector $|\psi\rangle$ is normalized, that means $\langle\psi|\psi\rangle = \sum_{j=1}^N |\alpha_j|^2 = 1$.
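
As an illustrative numpy sketch (the amplitudes are an arbitrary example, not from the slides), a ket is a column vector, the bra is its conjugate transpose, and normalization means $\langle\psi|\psi\rangle = 1$:

    import numpy as np

    # A ket is a column vector of complex amplitudes (example values).
    ket = np.array([[1 + 1j], [2 - 1j]])
    ket = ket / np.linalg.norm(ket)    # normalize so <psi|psi> = 1

    # The bra is the conjugate transpose (adjoint) of the ket.
    bra = ket.conj().T

    print(bra @ ket)                   # [[1.+0.j]]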

p. 3/31 Inner and Outer Products

Given two vectors $|\psi\rangle$ and $|\phi\rangle$,
$$|\psi\rangle = \begin{pmatrix} \alpha_1 \\ \vdots \\ \alpha_N \end{pmatrix}, \qquad |\phi\rangle = \begin{pmatrix} \beta_1 \\ \vdots \\ \beta_N \end{pmatrix},$$
the inner product between them is written
$$\langle\phi|\psi\rangle = (\beta_1^* \,\cdots\, \beta_N^*) \begin{pmatrix} \alpha_1 \\ \vdots \\ \alpha_N \end{pmatrix} = \sum_j \beta_j^* \alpha_j.$$
The inner product is independent of the choice of basis.

p. 4/31 $\langle\phi|\psi\rangle$ is called a bracket. Note that $\langle\phi|\psi\rangle = \langle\psi|\phi\rangle^*$, and for a normalized vector $\langle\psi|\psi\rangle = 1$. If two vectors are orthogonal then $\langle\psi|\phi\rangle = 0$.

It is also handy to write the outer product (also sometimes called a dyad):
$$|\psi\rangle\langle\phi| = \begin{pmatrix} \alpha_1 \\ \vdots \\ \alpha_N \end{pmatrix} (\beta_1^* \,\cdots\, \beta_N^*) = \begin{pmatrix} \alpha_1\beta_1^* & \cdots & \alpha_1\beta_N^* \\ \vdots & \ddots & \vdots \\ \alpha_N\beta_1^* & \cdots & \alpha_N\beta_N^* \end{pmatrix}.$$
A dyad $|\psi\rangle\langle\phi|$ is a linear operator. As we shall see, it is common (and often convenient) to write more general operators as linear combinations of dyads.
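
A short numpy sketch of both products (illustrative vectors only); note that np.vdot conjugates its first argument, matching the $\sum_j \beta_j^* \alpha_j$ convention:

    import numpy as np

    psi = np.array([1j, 2, 0])         # |psi>, arbitrary example amplitudes
    phi = np.array([1, 1 - 1j, 3j])    # |phi>

    inner = np.vdot(phi, psi)          # <phi|psi> = sum_j beta_j^* alpha_j
    print(inner, np.conj(np.vdot(psi, phi)))   # equals <psi|phi>^*

    dyad = np.outer(psi, phi.conj())   # |psi><phi| as an N x N matrix
    print(dyad.shape)                  # (3, 3)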

p. 5/31 Linear operators

A linear operator $\hat{O}$ transforms states to states such that $\hat{O}(a|\psi\rangle + b|\phi\rangle) = a\hat{O}|\psi\rangle + b\hat{O}|\phi\rangle$ for all states $|\psi\rangle, |\phi\rangle$ and complex numbers $a, b$. Given a choice of basis, an operator can be represented by a matrix
$$\hat{O} = \begin{pmatrix} a_{11} & \cdots & a_{1N} \\ \vdots & \ddots & \vdots \\ a_{N1} & \cdots & a_{NN} \end{pmatrix} \equiv [a_{ij}].$$
The matrix representation depends on the choice of basis. We will only be dealing with orthonormal bases in this class.

p. 6/31 Expectation values

Given a state $|\psi\rangle$ and an operator $\hat{O}$, we can calculate a number
$$\langle\hat{O}\rangle \equiv \langle\psi|\hat{O}|\psi\rangle = \langle\psi|\left(\hat{O}|\psi\rangle\right),$$
which is called the expectation value of $\hat{O}$ in the state $|\psi\rangle$. Given a particular choice of basis, we can express this number in terms of the elements of $\hat{O}$ and $|\psi\rangle$: $\langle\hat{O}\rangle = \sum_{i,j} \alpha_i^* a_{ij} \alpha_j$. As we will see, when $\hat{O}$ is Hermitian, its expectation value gives the average result of some measurement on a system in the state $|\psi\rangle$.
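
A minimal sketch of this formula in numpy (the operator and state are arbitrary choices for illustration):

    import numpy as np

    psi = np.array([1, 1j]) / np.sqrt(2)    # a normalized example state
    O = np.array([[0, 1], [1, 0]])          # an example operator (Pauli X)

    # <O> = <psi|O|psi> = sum_{i,j} alpha_i^* a_ij alpha_j
    expval = np.vdot(psi, O @ psi)
    print(expval.real)                       # 0.0 for this choice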

p. 7/31 Matrix elements

Similar to an expectation value is a matrix element $\langle\psi|\hat{O}|\phi\rangle$, where $|\psi\rangle \neq |\phi\rangle$. If $|\psi\rangle$ and $|\phi\rangle$ are both members of the orthonormal basis $\{|j\rangle\}$ then $\langle i|\hat{O}|j\rangle = a_{ij}$, where $a_{ij}$ is an element of the matrix representing $\hat{O}$ in the basis $\{|j\rangle\}$. The operator can be written as a sum over outer products, $\hat{O} = \sum_{ij} a_{ij} |i\rangle\langle j|$.
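
The dyad expansion is easy to verify numerically; here is a sketch that rebuilds an arbitrary example matrix from its matrix elements:

    import numpy as np

    O = np.array([[1, 2j], [3, 4]])    # an arbitrary example operator
    N = O.shape[0]
    basis = np.eye(N)                  # columns are the basis kets |j>

    # O = sum_ij a_ij |i><j|, with a_ij = <i|O|j>
    rebuilt = sum(O[i, j] * np.outer(basis[:, i], basis[:, j].conj())
                  for i in range(N) for j in range(N))
    print(np.allclose(O, rebuilt))     # True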

p. 8/31 Hermitian Conjugation

One of the most important operations in complex linear algebra is Hermitian conjugation. The Hermitian conjugate $\hat{O}^\dagger$ is the complex conjugate of the transpose of an operator $\hat{O}$. If in a particular basis $\hat{O} = [a_{ij}]$ then $\hat{O}^\dagger = [a_{ji}^*]$. Hermitian conjugation works similarly to transposition in real linear algebra: $(\hat{A}\hat{B})^\dagger = \hat{B}^\dagger\hat{A}^\dagger$. When applied to state vectors, $(|\psi\rangle)^\dagger = \langle\psi|$. Similarly, for dyads $(|\psi\rangle\langle\phi|)^\dagger = |\phi\rangle\langle\psi|$. Note that Hermitian conjugation is not linear, but rather is antilinear: $(a\hat{O})^\dagger = a^*\hat{O}^\dagger$, $(a|\psi\rangle)^\dagger = a^*\langle\psi|$.
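
In numpy the Hermitian conjugate is .conj().T; a quick sketch (with random example matrices) of the product and antilinearity rules:

    import numpy as np

    rng = np.random.default_rng(2)
    A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    B = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))

    dag = lambda M: M.conj().T          # Hermitian conjugate

    print(np.allclose(dag(A @ B), dag(B) @ dag(A)))    # (AB)^dag = B^dag A^dag
    print(np.allclose(dag(2j * A), -2j * dag(A)))      # antilinear: (aO)^dag = a^* O^dag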

p. 9/31 Orthonormal bases and the trace

It is normally most convenient to choose a particular orthonormal basis $\{|j\rangle\}$ and work in terms of it. As long as one works within a fixed basis, one can treat state vectors as column vectors and operators as matrices. An orthonormal basis for an $N$-dimensional space has $N$ vectors that satisfy
$$\langle i|j\rangle = \delta_{ij}, \qquad \sum_{j=1}^N |j\rangle\langle j| = \hat{I}.$$
The trace of an operator is the sum of the diagonal elements: $\mathrm{Tr}\{\hat{O}\} = \sum_j \langle j|\hat{O}|j\rangle = \sum_j a_{jj}$. A traceless operator has $\mathrm{Tr}\{\hat{O}\} = 0$.

p. 10/31 The trace is independent of the choice of basis. If $\{|j\rangle\}$ and $\{|\phi_k\rangle\}$ are both orthonormal bases, then
$$\mathrm{Tr}\{\hat{O}\} = \sum_j \langle j|\hat{O}|j\rangle = \sum_k \langle\phi_k|\hat{O}|\phi_k\rangle.$$
The trace also has the useful cyclic property $\mathrm{Tr}\{\hat{A}\hat{B}\} = \mathrm{Tr}\{\hat{B}\hat{A}\}$. This applies to products of any number of operators: $\mathrm{Tr}\{\hat{A}\hat{B}\hat{C}\} = \mathrm{Tr}\{\hat{C}\hat{A}\hat{B}\} = \mathrm{Tr}\{\hat{B}\hat{C}\hat{A}\}$. This invariance implies that $\mathrm{Tr}\{|\phi\rangle\langle\psi|\} = \langle\psi|\phi\rangle$.
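
A quick numerical check of the cyclic property, using randomly chosen example matrices:

    import numpy as np

    rng = np.random.default_rng(0)
    A, B, C = (rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
               for _ in range(3))

    print(np.allclose(np.trace(A @ B @ C), np.trace(C @ A @ B)))   # True
    print(np.allclose(np.trace(A @ B @ C), np.trace(B @ C @ A)))   # True

    # Tr{|phi><psi|} = <psi|phi>
    psi = rng.normal(size=3) + 0j
    phi = rng.normal(size=3) + 0j
    print(np.isclose(np.trace(np.outer(phi, psi.conj())), np.vdot(psi, phi)))  # True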

p. 11/31 Normal operators

A normal operator satisfies $\hat{O}^\dagger\hat{O} = \hat{O}\hat{O}^\dagger$. Operators are diagonalizable in an orthonormal basis if and only if they are normal. That is, for normal $\hat{O}$ we can always find an orthonormal basis $\{|\phi_j\rangle\}$ such that
$$\hat{O} = \sum_j \lambda_j |\phi_j\rangle\langle\phi_j|, \qquad \mathrm{Tr}\{\hat{O}\} = \sum_j \lambda_j,$$
and any operator diagonalizable in this way must be normal. These values $\lambda_j$ are the eigenvalues of $\hat{O}$ and $\{|\phi_j\rangle\}$ the corresponding eigenvectors, $\hat{O}|\phi_j\rangle = \lambda_j|\phi_j\rangle$. If $\hat{O}$ is nondegenerate (i.e., all the $\lambda_j$ are distinct), then the eigenvectors are unique (up to a phase). Otherwise there is some freedom in choosing this eigenbasis.
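
As an illustrative sketch, one can diagonalize a normal operator (here Hermitian, for simplicity) with numpy and rebuild it from its eigenvalues and eigenvectors:

    import numpy as np

    H = np.array([[2, 1 - 1j], [1 + 1j, 3]])   # Hermitian, hence normal
    lam, V = np.linalg.eigh(H)                  # columns of V are |phi_j>

    # O = sum_j lambda_j |phi_j><phi_j|
    rebuilt = sum(l * np.outer(V[:, j], V[:, j].conj())
                  for j, l in enumerate(lam))
    print(np.allclose(H, rebuilt))              # True
    print(np.isclose(np.trace(H), lam.sum()))   # Tr{O} = sum of eigenvalues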

p. 12/31 Hermitian operators

One very useful class of operators are the Hermitian operators $\hat{H}$ that satisfy $\hat{H}^\dagger = \hat{H}$. These are the complex analogue of symmetric matrices. They are obviously normal: $\hat{H}^\dagger\hat{H} = \hat{H}^2 = \hat{H}\hat{H}^\dagger$. The eigenvalues of a Hermitian matrix are always real. We will look at this in more detail later. We have already seen an example: the Pauli matrices.
$$\hat{X} = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad \hat{Y} = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \qquad \hat{Z} = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}.$$
These matrices are obviously Hermitian. It is easy to see that any $2\times 2$ Hermitian matrix can be written $a\hat{I} + b\hat{X} + c\hat{Y} + d\hat{Z}$ for some real values $a, b, c, d$.
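
To illustrate the last claim, here is a sketch that extracts the coefficients as $\mathrm{Tr}\{\hat\sigma\hat H\}/2$ for each of $\hat I, \hat X, \hat Y, \hat Z$ (this uses the trace orthogonality of the Paulis, discussed below; the matrix $H$ is an arbitrary example):

    import numpy as np

    I2 = np.eye(2)
    X = np.array([[0, 1], [1, 0]])
    Y = np.array([[0, -1j], [1j, 0]])
    Z = np.array([[1, 0], [0, -1]])

    H = np.array([[2, 1 - 3j], [1 + 3j, -1]])   # an arbitrary Hermitian matrix

    # Each of I, X, Y, Z squares to I, and all but I are traceless,
    # so c = Tr{sigma H} / 2 picks out each real coefficient.
    coeffs = [np.trace(P @ H).real / 2 for P in (I2, X, Y, Z)]
    rebuilt = sum(c * P for c, P in zip(coeffs, (I2, X, Y, Z)))
    print(coeffs)                    # [0.5, 1.0, 3.0, 1.5]
    print(np.allclose(H, rebuilt))   # True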

p. 13/31 The Commutator

Matrix multiplication is noncommutative, in general. That is, in general $\hat{A}\hat{B} \neq \hat{B}\hat{A}$. Given two operators $\hat{A}$ and $\hat{B}$, their commutator is $[\hat{A},\hat{B}] \equiv \hat{A}\hat{B} - \hat{B}\hat{A}$. $[\hat{A},\hat{B}] = 0$ if and only if $\hat{A}$ and $\hat{B}$ commute. Occasionally, one will encounter matrices that anticommute: $\hat{A}\hat{B} = -\hat{B}\hat{A}$. For example, the Pauli matrices anticommute with each other. In these cases, it is sometimes helpful to define the anticommutator: $\{\hat{A},\hat{B}\} \equiv \hat{A}\hat{B} + \hat{B}\hat{A}$. If two normal operators $\hat{A}$ and $\hat{B}$ commute, it is possible to find an eigenbasis which simultaneously diagonalizes both of them. (The converse is also true.)
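
A small sketch checking these relations for the Pauli matrices, which satisfy $[\hat X,\hat Y] = 2i\hat Z$ and $\{\hat X,\hat Y\} = 0$:

    import numpy as np

    X = np.array([[0, 1], [1, 0]])
    Y = np.array([[0, -1j], [1j, 0]])
    Z = np.array([[1, 0], [0, -1]])

    comm = X @ Y - Y @ X    # commutator [X, Y]
    anti = X @ Y + Y @ X    # anticommutator {X, Y}

    print(np.allclose(comm, 2j * Z))             # True: [X, Y] = 2iZ
    print(np.allclose(anti, np.zeros((2, 2))))   # True: X and Y anticommute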

p. 14/31 Orthogonal projectors

An orthogonal projector $\hat{P}$ is an Hermitian operator that obeys $\hat{P}^2 = \hat{P}$. All the eigenvalues of $\hat{P}$ are either 0 or 1. The complement of a projector, $\hat{I} - \hat{P}$, is also a projector. Note that $|\psi\rangle\langle\psi|$ is a projector for any normalized state vector $|\psi\rangle$. Such a projector is called one-dimensional; a projector more generally has dimension $d = \mathrm{Tr}\{\hat{P}\}$. In dimension 2, any one-dimensional projector can be written in the form
$$\hat{P} = \left(\hat{I} + \vec{n}\cdot\hat{\vec{\sigma}}\right)/2 = |\psi_{\vec{n}}\rangle\langle\psi_{\vec{n}}|,$$
where $\vec{n}$ is a (real) unit three-vector $(n_x, n_y, n_z)$ (i.e., with $n_x^2 + n_y^2 + n_z^2 = 1$) and $\hat{\vec{\sigma}} = (\hat{X}, \hat{Y}, \hat{Z})$. In the Bloch sphere picture, $\vec{n}$ is the direction in space, and $\hat{P}$ is the projector onto the state $|\psi_{\vec{n}}\rangle$ that is spin up along that axis.
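
A brief sketch verifying the projector properties for an arbitrary Bloch direction:

    import numpy as np

    X = np.array([[0, 1], [1, 0]])
    Y = np.array([[0, -1j], [1j, 0]])
    Z = np.array([[1, 0], [0, -1]])

    n = np.array([1.0, 2.0, 2.0])
    n = n / np.linalg.norm(n)                  # unit Bloch vector

    P = (np.eye(2) + n[0] * X + n[1] * Y + n[2] * Z) / 2
    print(np.allclose(P, P.conj().T))          # Hermitian
    print(np.allclose(P @ P, P))               # P^2 = P
    print(np.isclose(np.trace(P).real, 1.0))   # one-dimensional: Tr{P} = 1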

p. 15/31 The Spectral Theorem

Suppose $\hat{O}$ is a normal operator, and $\lambda_j$ are its distinct eigenvalues (i.e., not including any repetition) for $j = 1, \ldots, M \le N$. There is a unique set of orthogonal projectors $\hat{P}_j$ such that
$$\hat{O} = \sum_{j=1}^M \lambda_j \hat{P}_j, \qquad \hat{P}_j\hat{P}_k = \delta_{jk}\hat{P}_j, \qquad \sum_j \hat{P}_j = \hat{I}.$$
Such a set of projectors is called a decomposition of the identity. For a Hermitian operator, the eigenvalues $\lambda_j$ will all be real. If $\lambda_j$ is an eigenvalue of $\hat{O}$ that is not repeated, then $\hat{P}_j = |\phi_j\rangle\langle\phi_j|$, where $|\phi_j\rangle$ is an eigenvector of $\hat{O}$.

p. 16/31 Unitary Operators

A unitary operator satisfies $\hat{U}^\dagger\hat{U} = \hat{U}\hat{U}^\dagger = \hat{I}$. It is clearly a normal operator. All of its eigenvalues have unit norm; that is, $|\lambda_j| = 1$ for all $j$. This means that
$$\lambda_j = \exp(i\theta_j)$$
for real $0 \le \theta_j < 2\pi$. There is a correspondence between Hermitian and unitary operators: for every unitary operator $\hat{U}$ there is an Hermitian operator $\hat{H}$ such that $\hat{U} = \exp(i\hat{H})$. (We will clarify what $\exp(i\hat{H})$ means shortly.)
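
As a sketch (anticipating the operator-function discussion below), one can build $\exp(i\hat H)$ from the eigendecomposition of a Hermitian example matrix and check that the result is unitary:

    import numpy as np

    H = np.array([[1, 2 - 1j], [2 + 1j, -1]])   # an arbitrary Hermitian matrix
    lam, V = np.linalg.eigh(H)

    # exp(iH) = sum_j exp(i lambda_j) |phi_j><phi_j|
    U = (V * np.exp(1j * lam)) @ V.conj().T
    print(np.allclose(U.conj().T @ U, np.eye(2)))   # True: U is unitary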

p. 17/31 We have already seen that a unitary operator is equivalent to a change of basis. This is easy to check: if $\{|j\rangle\}$ is an orthonormal basis, then so is $\{\hat{U}|j\rangle\}$:
$$(\hat{U}|i\rangle)^\dagger(\hat{U}|j\rangle) = \langle i|\hat{U}^\dagger\hat{U}|j\rangle = \langle i|j\rangle = \delta_{ij}.$$
For spin-1/2, the most general $2\times 2$ unitary can be written (up to a global phase)
$$\hat{U} = \cos(\theta/2)\hat{I} + i\sin(\theta/2)\,\vec{n}\cdot\hat{\vec{\sigma}},$$
where $\vec{n}$ is again a real unit three-vector $(n_x, n_y, n_z)$ and $\hat{\vec{\sigma}} = (\hat{X}, \hat{Y}, \hat{Z})$. In the Bloch sphere picture, $\hat{U}$ is a rotation by $\theta$ about the axis $\vec{n}$.
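
Because $(\vec n\cdot\hat{\vec\sigma})^2 = \hat I$, this formula is just $\exp(i\theta\,\vec n\cdot\hat{\vec\sigma}/2)$; a quick numerical sketch with example values:

    import numpy as np

    X = np.array([[0, 1], [1, 0]])
    Y = np.array([[0, -1j], [1j, 0]])
    Z = np.array([[1, 0], [0, -1]])

    theta = 0.7
    n = np.array([0.0, 0.6, 0.8])                    # a unit three-vector
    n_sigma = n[0] * X + n[1] * Y + n[2] * Z

    U = np.cos(theta / 2) * np.eye(2) + 1j * np.sin(theta / 2) * n_sigma
    print(np.allclose(U.conj().T @ U, np.eye(2)))    # True: unitary

    # Same operator via the eigendecomposition of (theta/2) * n.sigma:
    lam, V = np.linalg.eigh(theta / 2 * n_sigma)
    print(np.allclose(U, (V * np.exp(1j * lam)) @ V.conj().T))   # True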

p. 18/31 Operator space

The space of all operators on a particular Hilbert space of dimension $N$ is itself a Hilbert space, of dimension $N^2$; sometimes this fact can be very useful. If $\hat{A}$ and $\hat{B}$ are operators, so is $a\hat{A} + b\hat{B}$ for any complex $a, b$. One can define an inner product on operator space. The most commonly used one is $(\hat{A},\hat{B}) \equiv \mathrm{Tr}\{\hat{A}^\dagger\hat{B}\}$ (the Frobenius or Hilbert-Schmidt inner product). It is easy to see that $(\hat{A},\hat{B}) = (\hat{B},\hat{A})^*$, and $(\hat{A},\hat{A}) \ge 0$ with equality only for $\hat{A} = 0$. With respect to this inner product, the Pauli matrices together with the identity form an orthogonal basis for all operators on 2D Hilbert space. A linear transformation on operator space is often referred to as a superoperator.
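
A sketch checking the claimed orthogonality of $\{\hat I, \hat X, \hat Y, \hat Z\}$ under the Hilbert-Schmidt inner product:

    import numpy as np

    I2 = np.eye(2)
    X = np.array([[0, 1], [1, 0]])
    Y = np.array([[0, -1j], [1j, 0]])
    Z = np.array([[1, 0], [0, -1]])

    def hs_inner(A, B):
        # Hilbert-Schmidt inner product (A, B) = Tr{A^dagger B}
        return np.trace(A.conj().T @ B)

    gram = np.array([[hs_inner(P, Q) for Q in (I2, X, Y, Z)]
                     for P in (I2, X, Y, Z)])
    print(np.allclose(gram, 2 * np.eye(4)))   # orthogonal, each with norm^2 = 2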

p. 19/31 Functions of Operators

It is common to write a function of an operator $f(\hat{O})$ (where $f$ is ordinarily a function on the complex numbers) which is itself an operator. Usually $f(x)$ is defined by a Taylor series:
$$f(x) = c_0 + c_1(x - x_0) + c_2(x - x_0)^2 + \cdots$$
For the operator version, we write
$$f(\hat{O}) = c_0\hat{I} + c_1(\hat{O} - x_0\hat{I}) + c_2(\hat{O} - x_0\hat{I})^2 + \cdots$$
For particular functions and operators, this series can sometimes be summed explicitly. For instance, for a nilpotent operator with $\hat{O}^2 = 0$, the series obviously truncates after the first-order term (taking $x_0 = 0$). For projectors, $\hat{P}^n = \hat{P}$ for all $n \ge 1$. For involutions obeying $\hat{O}^2 = \hat{I}$ (such as the Pauli matrices), one can sum the even and odd terms separately.

p. 20/31 If $\hat{O}$ is normal, we simplify by writing $\hat{O}$ in diagonal form:
$$\hat{O} = \sum_j \lambda_j |\phi_j\rangle\langle\phi_j|, \qquad f(\hat{O}) = \sum_j f(\lambda_j) |\phi_j\rangle\langle\phi_j|.$$
The most commonly-used function is the exponential $\exp(x) = 1 + x + x^2/2! + \cdots$, but others also occur from time to time:
$$\cos(x) = 1 - x^2/2! + x^4/4! - \cdots$$
$$\sin(x) = x - x^3/3! + x^5/5! - \cdots$$
$$\log(1+x) = x - x^2/2 + x^3/3 - \cdots$$
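
As a sketch, since $\hat X^2 = \hat I$, summing the even and odd terms gives $\cos(\theta\hat X) = \cos\theta\,\hat I$ and $\sin(\theta\hat X) = \sin\theta\,\hat X$; this is easy to confirm through the eigenbasis:

    import numpy as np

    X = np.array([[0, 1], [1, 0]])
    theta = 1.3

    lam, V = np.linalg.eigh(theta * X)        # eigenvalues are +/- theta
    # f(O) = sum_j f(lambda_j) |phi_j><phi_j|
    cos_tX = (V * np.cos(lam)) @ V.conj().T
    sin_tX = (V * np.sin(lam)) @ V.conj().T

    print(np.allclose(cos_tX, np.cos(theta) * np.eye(2)))   # True
    print(np.allclose(sin_tX, np.sin(theta) * X))           # True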

p. 21/31 Polar Decomposition

There are certain special forms in which operators can always be written. One of these is the polar decomposition. For any linear operator $\hat{O}$, there is a unitary operator $\hat{U}$ and positive operators $\hat{A}$ and $\hat{B}$ such that
$$\hat{O} = \hat{U}\hat{A} = \hat{B}\hat{U},$$
which we call the left and right polar decompositions of $\hat{O}$. A positive operator is an Hermitian operator with all nonnegative eigenvalues. In this case, the positive operators $\hat{A}$ and $\hat{B}$ are uniquely given by
$$\hat{A} \equiv \sqrt{\hat{O}^\dagger\hat{O}}, \qquad \hat{B} \equiv \sqrt{\hat{O}\hat{O}^\dagger}.$$
If $\hat{O}$ is invertible then $\hat{U}$ is also unique; otherwise not.
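
A sketch computing both factors from the singular value decomposition (introduced on the next slide), which is how one would typically implement this in numpy; the operator is an arbitrary example:

    import numpy as np

    O = np.array([[1, 2], [3j, 4]])       # an arbitrary (invertible) operator
    W, s, Vh = np.linalg.svd(O)           # O = W diag(s) Vh

    U = W @ Vh                            # the unitary factor
    A = Vh.conj().T @ np.diag(s) @ Vh     # sqrt(O^dag O), positive
    B = W @ np.diag(s) @ W.conj().T       # sqrt(O O^dag), positive

    print(np.allclose(O, U @ A))          # O = U A
    print(np.allclose(O, B @ U))          # O = B U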

p. 22/31 Singular Value Decomposition

For any operator $\hat{O}$ we can find unitary operators $\hat{U}$ and $\hat{V}$ and a diagonal real matrix $\hat{D}$ with all nonnegative entries, such that
$$\hat{O} = \hat{U}\hat{D}\hat{V}.$$
The operator $\hat{D}$ is unique up to a permutation of the order of the diagonal entries. If the diagonal elements of $\hat{D}$ are all nonzero, then $\hat{O}$ is invertible.
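
In numpy, np.linalg.svd returns the factors with the last unitary already in this form (a sketch, with an arbitrary example matrix):

    import numpy as np

    O = np.array([[0, 2], [1, 1j]])
    U, d, V = np.linalg.svd(O)                 # here V plays the role of V-hat

    print(d)                                   # nonnegative singular values
    print(np.allclose(O, U @ np.diag(d) @ V))  # O = U D V
    print(np.allclose(U.conj().T @ U, np.eye(2)))   # U unitary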

p. 23/31 Tensor products

The tensor (or Kronecker) product is a way of combining two Hilbert spaces to produce a higher-dimensional space. Let $|\psi\rangle$ be a state in a $D_1$-dimensional Hilbert space $\mathcal{H}_1$ and $|\phi\rangle$ be a state in a $D_2$-dimensional Hilbert space $\mathcal{H}_2$. Then we define $|\psi\rangle\otimes|\phi\rangle$ to be a state in the $D_1 D_2$-dimensional space $\mathcal{H}_1\otimes\mathcal{H}_2$. Such a state is called a product state. Any state in this larger space can be written as a linear combination of product states,
$$|\Psi\rangle = \sum_l \alpha_l |\psi_l\rangle\otimes|\phi_l\rangle,$$
where $|\psi_l\rangle \in \mathcal{H}_1$ and $|\phi_l\rangle \in \mathcal{H}_2$.

p. 24/31 What are the properties of this product?
$$(a|\psi\rangle + b|\psi'\rangle)\otimes|\phi\rangle = a|\psi\rangle\otimes|\phi\rangle + b|\psi'\rangle\otimes|\phi\rangle,$$
$$|\psi\rangle\otimes(a|\phi\rangle + b|\phi'\rangle) = a|\psi\rangle\otimes|\phi\rangle + b|\psi\rangle\otimes|\phi'\rangle.$$
We need also to define bra-vectors, and the inner product:
$$(|\psi\rangle\otimes|\phi\rangle)^\dagger = \langle\psi|\otimes\langle\phi|,$$
$$(\langle\psi|\otimes\langle\phi|)(|\psi'\rangle\otimes|\phi'\rangle) = \langle\psi|\psi'\rangle\langle\phi|\phi'\rangle.$$

p. 25/31 If $\{|j\rangle_1\}$ is a basis for $\mathcal{H}_1$ and $\{|k\rangle_2\}$ is a basis for $\mathcal{H}_2$, then $\{|j\rangle_1\otimes|k\rangle_2\}$ is a basis for $\mathcal{H}_1\otimes\mathcal{H}_2$. Given two states in $\mathcal{H}_1$ and $\mathcal{H}_2$,
$$|\psi\rangle = \sum_{j=1}^{D_1} \alpha_j |j\rangle_1, \qquad |\phi\rangle = \sum_{k=1}^{D_2} \beta_k |k\rangle_2,$$
in terms of this basis
$$|\psi\rangle\otimes|\phi\rangle = \sum_{j,k} \alpha_j\beta_k\, |j\rangle_1\otimes|k\rangle_2.$$
Generic states in $\mathcal{H}_1\otimes\mathcal{H}_2$ are not product states:
$$|\Psi\rangle = \sum_{j,k} t_{jk}\, |j\rangle_1\otimes|k\rangle_2.$$
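
In numpy, the Kronecker product np.kron implements exactly this construction for column vectors (a sketch with example states):

    import numpy as np

    psi = np.array([1, 2j]) / np.sqrt(5)    # state in H1 (D1 = 2)
    phi = np.array([3, 0, 4]) / 5.0         # state in H2 (D2 = 3)

    product = np.kron(psi, phi)             # |psi> tensor |phi>, length D1*D2
    print(product.shape)                    # (6,)

    # Amplitudes are alpha_j * beta_k in lexicographic order (j, k):
    print(np.allclose(product.reshape(2, 3), np.outer(psi, phi)))   # True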

p. 26/31 Operator Tensor Products

If $\hat{A}$ is an operator on $\mathcal{H}_1$ and $\hat{B}$ on $\mathcal{H}_2$, we construct a similar product to get a new operator $\hat{A}\otimes\hat{B}$ on $\mathcal{H}_1\otimes\mathcal{H}_2$. Its properties are similar to tensor products of states:
$$(a\hat{A} + b\hat{A}')\otimes\hat{B} = a\hat{A}\otimes\hat{B} + b\hat{A}'\otimes\hat{B},$$
$$\hat{A}\otimes(a\hat{B} + b\hat{B}') = a\hat{A}\otimes\hat{B} + b\hat{A}\otimes\hat{B}',$$
$$(\hat{A}\otimes\hat{B})^\dagger = \hat{A}^\dagger\otimes\hat{B}^\dagger,$$
$$(\hat{A}\otimes\hat{B})(\hat{A}'\otimes\hat{B}') = \hat{A}\hat{A}'\otimes\hat{B}\hat{B}',$$
$$\mathrm{Tr}\{\hat{A}\otimes\hat{B}\} = \mathrm{Tr}\{\hat{A}\}\,\mathrm{Tr}\{\hat{B}\}.$$
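
These identities are easy to spot-check with np.kron (a sketch with random example matrices):

    import numpy as np

    rng = np.random.default_rng(1)
    A, A2, B, B2 = (rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
                    for _ in range(4))

    print(np.allclose(np.kron(A, B).conj().T,
                      np.kron(A.conj().T, B.conj().T)))     # dagger rule
    print(np.allclose(np.kron(A, B) @ np.kron(A2, B2),
                      np.kron(A @ A2, B @ B2)))             # product rule
    print(np.isclose(np.trace(np.kron(A, B)),
                     np.trace(A) * np.trace(B)))            # trace rule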

p. 27/31 We can also apply these tensor product operators to tensor product states:
$$(\hat{A}\otimes\hat{B})(|\psi\rangle\otimes|\phi\rangle) = \hat{A}|\psi\rangle\otimes\hat{B}|\phi\rangle,$$
$$(\langle\psi|\otimes\langle\phi|)(\hat{A}\otimes\hat{B}) = \langle\psi|\hat{A}\otimes\langle\phi|\hat{B}.$$
A general operator on $\mathcal{H}_1\otimes\mathcal{H}_2$ is not a product operator, but can be written as a linear combination of product operators:
$$\hat{O} = \sum_l \hat{A}_l\otimes\hat{B}_l.$$
Tensor products play an important role in quantum mechanics! They describe how the Hilbert spaces of subsystems are combined.
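
And the action on product states, in a one-check sketch (operators and states are example choices):

    import numpy as np

    A = np.array([[0, 1], [1, 0]])        # example: X on spin 1
    B = np.array([[1, 0], [0, -1]])       # example: Z on spin 2
    psi = np.array([1, 2]) / np.sqrt(5)
    phi = np.array([3, 4j]) / 5.0

    lhs = np.kron(A, B) @ np.kron(psi, phi)
    rhs = np.kron(A @ psi, B @ phi)
    print(np.allclose(lhs, rhs))          # True: (A kron B)(psi kron phi) = A psi kron B phi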

p. 28/31 Matrix representation

What does the matrix representation of a tensor product look like? If $|\psi\rangle$ has amplitudes $(\alpha_1, \ldots, \alpha_{D_1})$ and $|\phi\rangle$ has amplitudes $(\beta_1, \ldots, \beta_{D_2})$, the state $|\psi\rangle\otimes|\phi\rangle$ in $\mathcal{H}_1\otimes\mathcal{H}_2$ is represented as a $D_1 D_2$-dimensional column vector:
$$|\psi\rangle\otimes|\phi\rangle = \begin{pmatrix} \alpha_1|\phi\rangle \\ \alpha_2|\phi\rangle \\ \vdots \\ \alpha_{D_1}|\phi\rangle \end{pmatrix} = \begin{pmatrix} \alpha_1\beta_1 \\ \alpha_1\beta_2 \\ \vdots \\ \alpha_1\beta_{D_2} \\ \alpha_2\beta_1 \\ \vdots \\ \alpha_{D_1}\beta_{D_2} \end{pmatrix}.$$

p. 29/31 Similarly, if $\hat{A} = [a_{ij}]$ and $\hat{B} = [b_{ij}]$ then
$$\hat{A}\otimes\hat{B} = \begin{pmatrix} a_{11}\hat{B} & \cdots & a_{1D_1}\hat{B} \\ \vdots & \ddots & \vdots \\ a_{D_1 1}\hat{B} & \cdots & a_{D_1 D_1}\hat{B} \end{pmatrix}.$$
For brevity, $|\psi\rangle\otimes|\phi\rangle$ is often written $|\psi\rangle|\phi\rangle$ or even $|\psi\phi\rangle$. One can similarly condense the notation for operators; but one should be very careful not to confuse $\hat{A}\otimes\hat{B}$ with $\hat{A}\hat{B}$.

p. 30/31 Example: Two Spins

Given two states in the $Z$ basis, $|\psi\rangle = \alpha_1|{\uparrow}\rangle + \alpha_2|{\downarrow}\rangle$ and $|\phi\rangle = \beta_1|{\uparrow}\rangle + \beta_2|{\downarrow}\rangle$, we can write
$$|\psi\rangle\otimes|\phi\rangle = \alpha_1\beta_1|{\uparrow\uparrow}\rangle + \alpha_1\beta_2|{\uparrow\downarrow}\rangle + \alpha_2\beta_1|{\downarrow\uparrow}\rangle + \alpha_2\beta_2|{\downarrow\downarrow}\rangle.$$
A general state of two spins would be
$$|\Psi\rangle = t_{11}|{\uparrow\uparrow}\rangle + t_{12}|{\uparrow\downarrow}\rangle + t_{21}|{\downarrow\uparrow}\rangle + t_{22}|{\downarrow\downarrow}\rangle.$$
As a column vector this is
$$|\Psi\rangle = \begin{pmatrix} t_{11} \\ t_{12} \\ t_{21} \\ t_{22} \end{pmatrix}.$$
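
A sketch contrasting a product state with an entangled one: for a product state the amplitude matrix $t_{jk} = \alpha_j\beta_k$ has rank 1, while an entangled state (such as the Bell-like state below, chosen as an example) does not:

    import numpy as np

    up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])

    # Product state: t_jk = alpha_j * beta_k, a rank-1 amplitude matrix.
    prod = np.kron(up, (up + down) / np.sqrt(2))
    print(np.linalg.matrix_rank(prod.reshape(2, 2)))   # 1

    # (|up,up> + |down,down>)/sqrt(2): not writable as |psi> kron |phi>.
    bell = (np.kron(up, up) + np.kron(down, down)) / np.sqrt(2)
    print(np.linalg.matrix_rank(bell.reshape(2, 2)))   # 2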

p. 31/31 Similarly, operators on two spins can be written as linear combinations of tensor products of the Pauli matrices and the identity. For instance,
$$\hat{I}\otimes\hat{I} = \begin{pmatrix} 1&0&0&0 \\ 0&1&0&0 \\ 0&0&1&0 \\ 0&0&0&1 \end{pmatrix}, \qquad \hat{I}\otimes\hat{X} = \begin{pmatrix} 0&1&0&0 \\ 1&0&0&0 \\ 0&0&0&1 \\ 0&0&1&0 \end{pmatrix},$$
$$\hat{Z}\otimes\hat{I} = \begin{pmatrix} 1&0&0&0 \\ 0&1&0&0 \\ 0&0&-1&0 \\ 0&0&0&-1 \end{pmatrix}, \qquad \hat{X}\otimes\hat{Y} = \begin{pmatrix} 0&0&0&-i \\ 0&0&i&0 \\ 0&-i&0&0 \\ i&0&0&0 \end{pmatrix}.$$
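
These can all be checked in one line each with np.kron (a final sketch):

    import numpy as np

    I2 = np.eye(2)
    X = np.array([[0, 1], [1, 0]])
    Y = np.array([[0, -1j], [1j, 0]])
    Z = np.array([[1, 0], [0, -1]])

    print(np.kron(I2, X))   # block-diagonal copies of X
    print(np.kron(Z, I2))   # diag(1, 1, -1, -1)
    print(np.kron(X, Y))    # off-diagonal blocks of Y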