Introduction to Quantum Mechanics


Introduction to Quantum Mechanics
William Gordon Ritter
Harvard University, Department of Physics
17 Oxford St., Cambridge, MA 02138
(Dated: January 11, 2005)

I. THE HARMONIC OSCILLATOR

Define operators a = (1/√2)(X + D) and a† = (1/√2)(X − D), where X is multiplication by x and D is the derivative with respect to x. It follows that a + a† = √2 X, so a† = √2 X − a. Define f_0 to be such that a f_0 = 0. Any solution of this equation is a constant times f_0 = e^{−x²/2}. Define f_n = (a†)^n f_0, which implies a f_n = n f_{n−1} and

    f_{n+1} = a† f_n = (√2 X − a) f_n,  so  f_{n+1}(x) = √2 x f_n(x) − n f_{n−1}(x).

This is known as a recurrence relation. Its most natural representation is in terms of a generating function. Suppose there exists F(x, z) such that

    F(x, z) = Σ_{n≥0} (f_n(x)/n!) z^n.

We may then calculate the derivative with respect to z:

    D_z F = Σ_{n≥1} (f_n/(n−1)!) z^{n−1} = Σ_{n≥0} (f_{n+1}/n!) z^n
          = Σ_{n≥0} ((√2 x f_n − n f_{n−1})/n!) z^n
          = √2 x F − z Σ_{n≥1} (f_{n−1}/(n−1)!) z^{n−1}
          = (√2 x − z) F.

The separable differential equation D_z F = (√2 x − z)F has solution F(x, z) = f_0(x) e^{√2 xz − z²/2}, the x-dependent prefactor being fixed by the condition F(x, 0) = f_0(x).

In quantum mechanics, momenta are represented as eigenvalues of the operator P = −iD. The energy H is the sum of the kinetic energy, P²/(2m), and the potential energy, given by a function V(x). For the quadratic potential V(x) = ½(x² − 1), in units where m = 1, we have

    H = ½ P² + V(X) = ½(−D² + X² − 1) = a† a.
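These ladder-operator identities are easy to verify symbolically. The following is a minimal sketch using sympy (the helper names a, adag, and H are ours, not the notes'); it checks, for small n, the relation a f_n = n f_{n−1}, the recurrence, the generating function, and the eigenvalue equation H f_n = n f_n proved as Theorem 1 below.

```python
import sympy as sp

x, z = sp.symbols('x z')
f0 = sp.exp(-x**2 / 2)

def a(g):      # annihilation operator a = (1/√2)(X + D)
    return (x * g + sp.diff(g, x)) / sp.sqrt(2)

def adag(g):   # creation operator a† = (1/√2)(X − D)
    return (x * g - sp.diff(g, x)) / sp.sqrt(2)

def H(g):      # Hamiltonian H = ½(−D² + X² − 1)
    return (-sp.diff(g, x, 2) + x**2 * g - g) / 2

# build f_n = (a†)^n f_0 for n = 0 .. 5
f = [f0]
for n in range(5):
    f.append(sp.expand(adag(f[n])))

# a f_n = n f_{n-1}
for n in range(1, 6):
    assert sp.simplify(a(f[n]) - n * f[n - 1]) == 0

# the recurrence f_{n+1} = √2 x f_n − n f_{n-1}
for n in range(1, 5):
    assert sp.simplify(f[n + 1] - sp.sqrt(2) * x * f[n] + n * f[n - 1]) == 0

# the generating function: the n-th z-derivative of F at z = 0 reproduces f_n
F = f0 * sp.exp(sp.sqrt(2) * x * z - z**2 / 2)
for n in range(4):
    assert sp.simplify(sp.diff(F, z, n).subs(z, 0) - f[n]) == 0

# the eigenvalue equation H f_n = n f_n
for n in range(6):
    assert sp.simplify(H(f[n]) - n * f[n]) == 0

print("ladder relations, recurrence, generating function, and H f_n = n f_n verified")
```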

Theorem 1. H f_n = n f_n for all integers n ≥ 0.

Proof. For any operators A, B, define D_A B = [A, B] and note that D_A satisfies the product rule for derivatives, D_A(BC) = (D_A B)C + B D_A C. We now prove that [a, (a†)^n] = n (a†)^{n−1}. This is obvious for n = 0, and for n = 1 it is the statement that D_a a† = 1, which we have checked previously. We therefore use induction, assuming that D_a (a†)^{n−1} = (n−1)(a†)^{n−2}. By the product rule,

    D_a(a† (a†)^{n−1}) = (D_a a†)(a†)^{n−1} + a† D_a (a†)^{n−1}
                       = (a†)^{n−1} + a† (n−1)(a†)^{n−2} = n (a†)^{n−1}.

Then

    H f_n = a† a (a†)^n f_0 = (a†)^{n+1} a f_0 + a† [a, (a†)^n] f_0
          = a† n (a†)^{n−1} f_0 = n f_n,

where the first term vanishes because a f_0 = 0.

Eigenvalues of H are identified with possible energies of the system; this analysis shows that the oscillator energies are discrete. Aside from their mathematical interest as an orthogonal basis for the space {ψ : ∫ |ψ(x)|² dx < ∞}, the eigenfunctions f_n are also physically meaningful, since |f_n(x)|² represents the probability distribution for a particle's position, if the particle is in the n-th excited state.¹ Graphs of the first few are shown in Fig. 1.

FIG. 1  Plots of f_2, f_3, and f_4.

II. MATRIX REPRESENTATIONS OF SU(2)

Given a set of commutation relations [A, B] = C, etc., an n-dimensional matrix representation of the algebra is a set of n × n matrices A, B, C, etc. which satisfy the given relations. The description of spin in quantum mechanics uses matrix representations of the following algebra:

    [h, x] = 2x,   [h, y] = −2y,   [x, y] = h.   (1)

¹ The last statement is only true if we normalize f_n, dividing it by the constant (∫ |f_n(x)|² dx)^{1/2}, so that |f_n|² becomes a valid probability density function.

This algebra is often called sl₂ due to its fundamental representation in terms of 2 × 2 matrices:

    x = ( 0 1 )     y = ( 0 0 )     h = ( 1  0 )
        ( 0 0 ),        ( 1 0 ),        ( 0 −1 ).       (2)

The choice of the coefficients 2, −2 in equation (1) is unimportant, in the sense that we can multiply x, y, and h by three arbitrary scale factors and obtain results similar to what will follow here. The choice of coefficients in (1) is standard in the mathematics literature (though not the standard in physics) and causes certain other calculations to work out simply; for example, the matrix elements in (2) are all 1 or −1.

In the following, we assume that x, y, and h are operators which act on a finite-dimensional vector space V and which satisfy the commutation relations (1). From this (minimal) set of assumptions, we will obtain much information. If V is one-dimensional, then all matrices act as multiplication by a single number, and we cannot have nontrivial commutation relations, so we assume dim V > 1.

Suppose that v ∈ V is an eigenvector of h with eigenvalue λ, so hv = λv. Then using (1),

    2xv = [h, x]v = hxv − xhv = h(xv) − λ(xv).

Therefore h(xv) = (λ + 2)(xv). In other words, xv is another eigenvector, with eigenvalue λ + 2. To organize these calculations, let V_ρ denote the space of all eigenvectors in V with eigenvalue ρ. We have just shown that x(V_λ) ⊆ V_{λ+2}, and a similar calculation shows that y(V_λ) ⊆ V_{λ−2}, so we can view x and y as raising and lowering operators for eigenvalues of h, similar to the way a† and a are raising and lowering operators for the eigenvalues of the oscillator Hamiltonian H = a†a.

Let X and A be any two noncommuting operators. We will need the following formula for the commutator of X with a power of A (which may easily be proved by induction):

    [X, Aⁿ] = Σ_{k=1}^{n} A^{k−1} [X, A] A^{n−k}.   (3)

Since V is finite-dimensional, h can have at most finitely many different eigenvalues; call them λ_1, …, λ_M. In general, the eigenvalues λ_i could be complex, but we will see this is not the case.
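A quick numerical check (a numpy sketch; the variable names mirror the text) confirms that the fundamental 2 × 2 matrices satisfy the commutation relations (1), and that x raises h-eigenvalues by 2:

```python
import numpy as np

# the fundamental representation of sl2
x = np.array([[0, 1], [0, 0]])
y = np.array([[0, 0], [1, 0]])
h = np.array([[1, 0], [0, -1]])

def comm(A, B):
    return A @ B - B @ A

# the relations (1): [h,x] = 2x, [h,y] = −2y, [x,y] = h
assert np.array_equal(comm(h, x), 2 * x)
assert np.array_equal(comm(h, y), -2 * y)
assert np.array_equal(comm(x, y), h)

# x as a raising operator: v has h-eigenvalue −1, xv has eigenvalue −1 + 2 = 1
v = np.array([0, 1])
assert np.array_equal(h @ v, -1 * v)
w = x @ v
assert np.array_equal(h @ w, (-1 + 2) * w)
print("sl2 relations hold; x raises the h-eigenvalue by 2")
```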
Choose λ_max to be the eigenvalue with the largest real part. Let v_0 be a nonzero eigenvector with eigenvalue λ_max (in this case, v_0 is called a maximal vector).

Theorem 2. λ_max is a nonnegative integer.

Proof. Since x is a raising operator and λ_max + 2 is not an eigenvalue, xv_0 = 0. Then, using (3),

    x yⁿ v_0 = yⁿ x v_0 + [x, yⁿ] v_0 = Σ_{k=1}^{n} y^{k−1} h y^{n−k} v_0   (4)
             = Σ_{k=1}^{n} y^{k−1} (λ_max − 2(n−k)) y^{n−k} v_0
             = ( Σ_{k=1}^{n} (λ_max − 2n + 2k) ) y^{n−1} v_0
             = ( n(λ_max − 2n) + n(n+1) ) y^{n−1} v_0
             = n(λ_max − n + 1) y^{n−1} v_0,

where the term yⁿ x v_0 vanishes. Since the latter formula is valid for all n, it is in particular valid for the largest n such that y^{n−1} v_0 ≠ 0. Since that n was the largest (call it N), we have y^N v_0 = 0 and hence the left side of equation (4) is zero. Therefore N(λ_max − N + 1) y^{N−1} v_0 = 0. If N = 0 then v_0 = 0, a contradiction. We conclude that λ_max = N − 1.

In the process, we have proven that the dimension of the space in which the operators act is N = λ_max + 1.

Theorem 3. Consider a matrix representation with no nontrivial invariant subspace. If [Q, X] = 0 for all X in the algebra, and if Ker(Q) ≠ {0}, then Q = 0.

Proof. Let v ≠ 0 be an element of Ker(Q), so Qv = 0. Then QXv = XQv = 0, so every X in the algebra preserves Ker(Q). If Q ≠ 0, then Ker(Q) is a nontrivial invariant subspace, a contradiction; hence Q = 0.

In the situation of Theorem 3, we infer that if [A, X] = 0 for all X in the algebra, then A = λI for some λ, where I is the identity matrix. This result is called Schur's Lemma; the proof is to apply Theorem 3 with Q = A − λI, where λ is an eigenvalue of A.

III. ANGULAR MOMENTUM

In quantum mechanics, the momentum 3-vector p is represented by the operator −i∇, where ∇ is the gradient. The angular momentum L = r × p then has three components, which are operators satisfying [L_1, L_2] = iL_3 (and cyclic permutations). If we define x = L_1 + iL_2, y = L_1 − iL_2, and h = 2L_3, then we have the commutation relations (1). As the eigenvalues of h are integers separated by two, the eigenvalues of L_3 must be half-integers separated by one. Thus the representation with highest L_3 eigenvalue given by l must have dimension 2l + 1 (note: 2l is λ_max for h).
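Theorem 2 can be made concrete in a small example. Assuming (for illustration; this basis choice is ours) a 3-dimensional representation with basis {v_0, y v_0, y² v_0}, the matrix entries of x follow from the formula x yⁿ v_0 = n(λ_max − n + 1) y^{n−1} v_0 with λ_max = 2. The numpy sketch below checks the relations (1) and that λ_max = N − 1:

```python
import numpy as np

# basis e0 = v0, e1 = y v0, e2 = y² v0; λmax = 2, so dim V = N = 3
h = np.diag([2, 0, -2])
y = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]])   # lowering: y e_k = e_{k+1}
x = np.array([[0, 2, 0], [0, 0, 2], [0, 0, 0]])   # raising: x e_k = k(3−k) e_{k−1}

def comm(A, B):
    return A @ B - B @ A

# the representation satisfies the sl2 relations (1)
assert np.array_equal(comm(h, x), 2 * x)
assert np.array_equal(comm(h, y), -2 * y)
assert np.array_equal(comm(x, y), h)

# λmax is a nonnegative integer equal to dim V − 1
eigs = np.sort(np.linalg.eigvals(h).real)
assert np.allclose(eigs, [-2, 0, 2])
print("eigenvalues of h:", eigs, "so λmax = N − 1 with N =", h.shape[0])
```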

Note that L² = L·L commutes with L_1, L_2, L_3 and hence, by Schur's Lemma, L² = λI in this representation, so every vector is an eigenvector of L². In this context, the eigenvalue equations for L_3 and for L² are differential equations, whose solutions are the spherical harmonics. The latter are special functions which, up to a constant factor, take the form e^{imφ} P_l^m(cos θ), −l ≤ m ≤ l, in spherical coordinates, and which are responsible for the odd-looking shapes of electron orbitals.

In quantum mechanics, the possible outcomes of a measurement are identified with the possible eigenvalues of an operator. In this spirit, possible measurements of the z-component of angular momentum correspond to allowed eigenvalues of L_3. Theorem 2 and the surrounding discussion then imply that these measurement outcomes are not arbitrary; the highest one, l, must be a half-integer, there are 2l + 1 eigenvectors, and the lowering operator y = L_1 − iL_2 lowers the eigenvalue by one each time, so the possible outcomes are m ∈ {l, l − 1, …, −l}. The angular dependence of the corresponding wave function is proportional to e^{imφ} P_l^m(cos θ). Moreover, higher l corresponds to higher energy, so the distinct values of l give the distinct orbitals in an atom, in order of increasing excitation energy.
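For l = 1, the standard spin-1 matrices (physics conventions; a numpy sketch) illustrate both claims: [L_1, L_2] = iL_3, the eigenvalues of L_3 are {1, 0, −1}, and L² = l(l+1)I is a multiple of the identity, as Schur's Lemma requires:

```python
import numpy as np

# spin-1 angular momentum matrices in the L3 eigenbasis {|1⟩, |0⟩, |−1⟩}
s = 1 / np.sqrt(2)
L1 = s * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
L2 = s * np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]])
L3 = np.diag([1, 0, -1]).astype(complex)

# the angular momentum commutation relation
assert np.allclose(L1 @ L2 - L2 @ L1, 1j * L3)

# measurement outcomes m ∈ {1, 0, −1}
assert np.allclose(np.sort(np.linalg.eigvals(L3).real), [-1, 0, 1])

# L² = l(l+1) I with l = 1, a multiple of the identity (Schur's Lemma)
L_sq = L1 @ L1 + L2 @ L2 + L3 @ L3
assert np.allclose(L_sq, 2 * np.eye(3))
print("L² = 2·I for l = 1, as Schur's Lemma requires")
```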