Quantum Physics II (8.05) Fall 2013 Assignment 4




Massachusetts Institute of Technology
Physics Department

September 27, 2013

Problem Set 4
Due October 4, 2013, 3:00 pm

1. Identities for commutators (based on Griffiths Prob. 3.13) [10 points]

In the following problem A, B, and C are linear operators, as are q and p.

(a) Prove the following commutator identity:

$[A, BC] = [A, B]\, C + B\, [A, C].$

This is the derivation property of the commutator: the commutator with A, that is, the object $[A, \cdot\,]$, acts like a derivative on the product BC. In the result the commutator is first taken with B and then with C, while the operator that stays untouched is positioned in its expected place.

(b) Prove the Jacobi identity:

$[[A, B], C] + [[B, C], A] + [[C, A], B] = 0.$

(c) Using $[q, p] = i\hbar$ and the result of (a), show that

$[q^n, p] = i\hbar\, n\, q^{n-1}.$

(d) For any function f(q) that can be expanded in a power series in q, use (c) to show that $[f(q), p] = i\hbar\, f'(q)$.

(e) On the space of position-dependent wavefunctions, the operator f(x) acts multiplicatively and p acts as $\frac{\hbar}{i}\frac{\partial}{\partial x}$. Calculate $[f(x), p]$ by letting this operator act on an arbitrary wavefunction.

2. Useful operator identities and translations [10 points]

Suppose that A and B are two operators that do not commute, $[A, B] \neq 0$.

(a) Let t be a formal variable. Show that

$\frac{d}{dt}\, e^{t(A+B)} = (A + B)\, e^{t(A+B)} = e^{t(A+B)}\, (A + B).$
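As an aside (not part of the assignment), the identities of Problem 1(a) and 1(b) hold for arbitrary linear operators, so they can be spot-checked numerically with random complex matrices standing in for A, B, and C:

```python
import numpy as np

# Hedged sketch: random 4x4 complex matrices as stand-ins for the
# abstract operators A, B, C of Problem 1.
rng = np.random.default_rng(0)

def comm(X, Y):
    """Commutator [X, Y] = XY - YX."""
    return X @ Y - Y @ X

A, B, C = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
           for _ in range(3))

# (a) derivation property: [A, BC] = [A, B] C + B [A, C]
assert np.allclose(comm(A, B @ C), comm(A, B) @ C + B @ comm(A, C))

# (b) Jacobi identity: [[A, B], C] + [[B, C], A] + [[C, A], B] = 0
jacobi = comm(comm(A, B), C) + comm(comm(B, C), A) + comm(comm(C, A), B)
assert np.allclose(jacobi, 0)
```

Such a check is no substitute for the algebraic proof, but it catches sign errors quickly.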

(b) Now suppose $[A, B] = c$, where c is a c-number (a complex number times the identity operator). Prove that

$e^{A} B e^{-A} = B + c. \qquad (1)$

[Hint: Define an operator-valued function $F(t) \equiv e^{tA} B e^{-tA}$. What is F(0)? Derive a differential equation for F(t) and integrate it.]

Comment: Equation (1) is a special case of the Hadamard lemma, to be considered below.

(c) Let a be a real number and $\hat{p}$ be the momentum operator. Show that the unitary translation operator

$\hat{T}(a) \equiv e^{-i a \hat{p}/\hbar}$

translates the position operator:

$\hat{T}^{\dagger}(a)\, \hat{x}\, \hat{T}(a) = \hat{x} + a.$

If a state $|\psi\rangle$ is described by the wave function $\langle x|\psi\rangle = \psi(x)$, show that the state $\hat{T}(a)|\psi\rangle$ is described by the wave function $\psi(x - a)$.

3. Proof of the Hadamard lemma [10 points]

Prove that for two operators A and B we have

$e^{A} B e^{-A} = B + [A, B] + \frac{1}{2!}[A, [A, B]] + \frac{1}{3!}[A, [A, [A, B]]] + \cdots. \qquad (1)$

Define $f(t) \equiv e^{tA} B e^{-tA}$ and calculate the first few derivatives of f(t) evaluated at t = 0. Then use Taylor expansions. Calculating the first three derivatives explicitly suffices to obtain (1). Do things to all orders by finding the form of the (n+1)-th term on the right-hand side of (1). To write the answer in a neat form we define the operator $\mathrm{ad}_A$ that acts on operators X to give operators via the commutator,

$\mathrm{ad}_A(X) \equiv [A, X].$

Confirm that with this notation the complete version of equation (1) becomes

$e^{A} B e^{-A} = e^{\mathrm{ad}_A}(B).$

4. Special case of the Baker-Hausdorff theorem [10 points]

Consider two operators A and B such that $[A, B] = c\, I$, where c is a complex number and I is the identity operator. You will prove here the following identity:

$e^{A+B} = e^{B} e^{A} e^{c/2} = e^{A} e^{B} e^{-c/2}. \qquad (2)$

For this purpose consider the operator-valued function

$G(t) \equiv e^{t(A+B)}\, e^{-tA}.$
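As a side check (not part of the assignment), the Hadamard lemma of Problem 3 can be verified numerically: with random matrices standing in for A and B, the conjugation $e^{A} B e^{-A}$ should match the truncated adjoint series. The Taylor-series matrix exponential below is an illustrative stand-in, adequate because A is chosen with small norm:

```python
import numpy as np

# Hedged sketch: numerical check of the Hadamard lemma with random matrices.
rng = np.random.default_rng(1)
A = 0.1 * rng.standard_normal((4, 4))   # small norm => fast convergence
B = rng.standard_normal((4, 4))

def expm(X, terms=30):
    """Plain Taylor-series matrix exponential (fine for small-norm X)."""
    out, T = np.eye(4), np.eye(4)
    for n in range(1, terms):
        T = T @ X / n                   # X^n / n!
        out = out + T
    return out

lhs = expm(A) @ B @ expm(-A)

# Right-hand side: B + [A,B] + [A,[A,B]]/2! + ... (nested commutators)
term, rhs = B, B
for n in range(1, 15):
    term = (A @ term - term @ A) / n    # ad_A^n(B) / n!
    rhs = rhs + term

assert np.allclose(lhs, rhs)
```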

(a) Using operator properties and identities you derived previously, show that

$G^{-1}(t)\, \frac{dG}{dt} = B + ct. \qquad (3)$

(b) Note that (3) is equivalent to $\frac{dG}{dt} = G(t)\,(B + ct)$. Verify that the solution to this equation is

$G(t) = G(0)\, e^{tB}\, e^{\frac{1}{2} c t^2}. \qquad (4)$

(c) Consider G(1) to prove the first equality in (2). Rename the operators to obtain the other equality.

Comment: The full Baker-Hausdorff formula is of the form

$e^{X} e^{Y} = e^{\, X + Y + \frac{1}{2}[X, Y] + \frac{1}{12}\left([X, [X, Y]] - [Y, [X, Y]]\right) + \cdots}$

and there is no simple closed form for general X and Y.

5. Bras and kets [5 points]

Consider a three-dimensional Hilbert space with an orthonormal basis $|1\rangle, |2\rangle, |3\rangle$. Using complex constants a and b, define the kets

$|\psi\rangle = a|1\rangle - b|2\rangle + a|3\rangle, \qquad |\varphi\rangle = b|1\rangle + a|2\rangle.$

(a) Write down $\langle\psi|$ and $\langle\varphi|$. Calculate $\langle\varphi|\psi\rangle$ and $\langle\psi|\varphi\rangle$. Check that $\langle\varphi|\psi\rangle = \langle\psi|\varphi\rangle^{*}$.

(b) Express $|\psi\rangle$ and $|\varphi\rangle$ as column vectors in the $|1\rangle, |2\rangle, |3\rangle$ basis and repeat (a).

(c) Let $A = |\varphi\rangle\langle\psi|$. Find the $3 \times 3$ matrix that represents A in the given basis.

(d) Let $Q = |\psi\rangle\langle\psi| + |\varphi\rangle\langle\varphi|$. Is Q Hermitian? Give a simple argument (no computation) to show that Q has a zero eigenvalue.

6. Shankar 1.8.8, p. 43. Hermitian matrices and anticommutators [5 points]

7. Orthogonal projections and approximations (based on Axler) [15 points]

Consider a vector space V with an inner product, and a subspace U of V that is spanned by rather simple vectors. (You can picture this by taking V to be three-dimensional space and U some plane through the origin.) The question is: given a vector $v \in V$ that is not in U, what is the vector in U that best approximates v? Since we also have a norm, we can ask a more precise question: what is the vector $u \in U$ for which $\|v - u\|$ is smallest? The answer is surprisingly simple: the vector u is given by $P_U v$, the orthogonal projection of v onto U!

(a) Prove the above claim by showing that for any $u \in U$ one has $\|v - u\| \geq \|v - P_U v\|$.
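The claim of Problem 7(a) can be visualized numerically in the three-dimensional picture: project v onto a plane U and confirm that no other point of U comes closer. This sketch (with an arbitrary choice of spanning vectors, not taken from the problem) uses least squares to compute $P_U v$:

```python
import numpy as np

# Hedged sketch for Problem 7(a): U is a plane through the origin in R^3,
# spanned by two arbitrary illustrative vectors u1, u2.
rng = np.random.default_rng(2)
u1, u2 = np.array([1.0, 0.0, 1.0]), np.array([0.0, 2.0, 1.0])
U = np.column_stack([u1, u2])
v = np.array([1.0, 2.0, 3.0])

# P_U v via least squares: coefficients c minimizing ||v - U c||
c, *_ = np.linalg.lstsq(U, v, rcond=None)
Pv = U @ c

# The residual v - P_U v is orthogonal to U ...
assert np.isclose((v - Pv) @ u1, 0) and np.isclose((v - Pv) @ u2, 0)

# ... and no other u in U beats the projection
best = np.linalg.norm(v - Pv)
for _ in range(1000):
    u = U @ rng.standard_normal(2)
    assert np.linalg.norm(v - u) >= best - 1e-12
```

The orthogonality of the residual to U is exactly the fact the proof in (a) rests on.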

As an application, consider the infinite-dimensional vector space of real functions on the interval $x \in [-\pi, \pi]$. The inner product of two functions f and g on this interval is taken to be

$(f, g) = \int_{-\pi}^{\pi} f(x)\, g(x)\, dx.$

We will take U to be the six-dimensional subspace of functions with a basis given by $(1, x, x^2, x^3, x^4, x^5)$. In this problem please use an algebraic manipulator that does integrals!

(b) Use the Gram-Schmidt algorithm to find an orthonormal basis $(e_1, \ldots, e_6)$ for U.

(c) Consider approximating the functions $\sin x$ and $\cos x$ by the best possible representatives from U. Calculate these two representatives exactly, and write them as polynomials in x with coefficients that depend on powers of $\pi$ and other constants. Also write the polynomials using numerical coefficients with six significant digits.

(d) For each of the functions ($\sin x$ and $\cos x$), make a plot showing the original function, its best approximation in U calculated above, and the approximation in U that corresponds to the truncated Taylor expansion about x = 0.
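One possible "algebraic manipulator" is SymPy. The sketch below (the helper names `ip` and `ortho` are ad hoc, not prescribed by the problem) runs Gram-Schmidt on $(1, x, \ldots, x^5)$ with the inner product above and then computes the projection of $\sin x$ onto U as $\sum_i (\sin, e_i)\, e_i$:

```python
import sympy as sp

# Hedged sketch for Problem 7(b)-(c) using SymPy.
x = sp.symbols('x')

def ip(f, g):
    """Inner product (f, g) = integral of f*g over [-pi, pi]."""
    return sp.integrate(f * g, (x, -sp.pi, sp.pi))

# Gram-Schmidt on the monomial basis of U
basis, ortho = [x**k for k in range(6)], []
for v in basis:
    for e in ortho:
        v = v - ip(v, e) * e            # subtract projections onto earlier e_i
    ortho.append(sp.simplify(v / sp.sqrt(ip(v, v))))

# Best approximation of sin(x) in U: the orthogonal projection P_U sin
proj = sp.expand(sum(ip(sp.sin(x), e) * e for e in ortho))
print(proj)
```

Because the weight is symmetric, the even basis functions contribute nothing to the projection of the odd function $\sin x$, so the result is an odd quintic; evaluating its coefficients numerically gives the six-digit form asked for in (c).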

MIT OpenCourseWare
http://ocw.mit.edu

8.05 Quantum Physics II
Fall 2013

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.