The Schrödinger Equation



When we talked about the axioms of quantum mechanics, we gave a reduced list. We did not talk about how to determine the eigenfunctions for a given situation, or about the time development of wavefunctions. For nonrelativistic particles (which is all we'll consider here), these issues are addressed with the Schrödinger equation. Let's start by motivating this equation.

Consider a traveling plane wave in one dimension, say the x direction. In classical physics, we represented the relation between location and time for this wave by the real part of f(x, t) = exp[i(kx − ωt)] for a frequency ω and wavenumber k = 2π/λ, where λ is the wavelength. In quantum mechanics we know that k = p/ℏ. Therefore, let the (unnormalized) wavefunction be

ψ ∝ exp[i(px/ℏ − ωt)].   (1)

Right away we can see that there is a problem with this. The probability is supposed to be proportional to the squared modulus of the wavefunction, but for this wavefunction that is a constant! Therefore, the integral over all space is infinite unless the constant is zero, which is uninteresting. However, we can still use this form, because any wavefunction may be built up from sums or integrals of functions like this, in the same way that in Fourier analysis any function may be built up from sums of sines and cosines. Therefore, we can treat just this plane wave and generalize later.

From our axioms, we know that every observable should be represented by an operator, such that when the operator acts on a wavefunction we get the original wavefunction times the value of the observable (if that wavefunction is an eigenfunction of the operator). In this case, let's focus on three particular observables: the x position, the x component of the momentum, and the energy.

We are now faced with a choice. We can decide on a position representation, in which all operators are functions of position (including derivatives with respect to position), or a momentum representation, in which all operators are functions of momentum (including derivatives with respect to momentum). As long as we're consistent either one will do, but for convenience and to conform to the usual standard, we will use the position representation.

In this representation, the operator for the one-dimensional position x is simply x itself! Boring, but true. Multiplying ψ by x gives xψ, with the eigenvalue x. In fact, this means that any algebraic function of x has as its operator the function itself. For example, 1/x becomes 1/x, and so on.

What about momentum? In the momentum representation the operator would similarly be just p, but in the position representation we need something else. We note that differentiating the exponential returns the same exponential times a constant:

(∂/∂x) exp[i(px/ℏ − ωt)] = (ip/ℏ) exp[i(px/ℏ − ωt)].   (2)
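If you like to check such manipulations by machine, a minimal sympy sketch along the following lines verifies that the plane wave (1) is an eigenfunction of (ℏ/i)(∂/∂x) with eigenvalue p, which is just equation (2) rearranged. The symbol names below are arbitrary choices made for illustration, not anything standard.

import sympy as sp

# Sketch only: symbol names are arbitrary, not from the notes.
x, t = sp.symbols('x t', real=True)
p, hbar, omega = sp.symbols('p hbar omega', positive=True)

# The plane wave of equation (1), unnormalized
psi = sp.exp(sp.I * (p * x / hbar - omega * t))

# Momentum operator in the position representation: (hbar/i) d/dx
p_psi = (hbar / sp.I) * sp.diff(psi, x)

print(sp.simplify(p_psi - p * psi))   # prints 0: psi is an eigenfunction with eigenvalue p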

From equation (2) we see that (ℏ/i)(∂/∂x)ψ = pψ, so the operator for the momentum is (ℏ/i)(∂/∂x). Note that this means that, unlike in classical physics, the order of operations is important. In classical physics, the position times the momentum equals the momentum times the position (e.g., in angular momentum). In quantum mechanics, however, you don't just multiply, you perform operations. Suppose we have a wavefunction ψ. Then xpψ = x(ℏ/i)(∂ψ/∂x). However, pxψ = (ℏ/i)(∂/∂x)(xψ) = (ℏ/i)ψ + x(ℏ/i)(∂ψ/∂x). This is different from xpψ! Since this is true for any wavefunction ψ, we say that x and p have a nonzero commutator,

[x, p] ≡ xp − px = −(ℏ/i) = iℏ.   (3)

This is shorthand for saying that (xp − px) operating on ψ gives iℏψ, regardless of what ψ is. The fact that x and p don't commute (i.e., the order of operation matters) turns out to be another statement of the uncertainty principle: you can't measure x and p simultaneously with arbitrary precision. Operators that commute don't suffer this restriction. For example, in three dimensions, x, y, and z commute: e.g., xyψ = yxψ. Therefore, there is nothing to prevent perfectly precise simultaneous measurement of x and y. Thinking back on our examples, this makes sense: consider light passing through a two-dimensional hole that is as small as you like.

What about energy? We know that the energy is ℏω. Ask class: by analogy with the momentum, what operator will give this energy from a plane wave wavefunction? We find that the operator iℏ(∂/∂t) gives ℏω, so this is the operator for energy. Incidentally, time is a coordinate just like space, so the operator for time is just t. Therefore, energy and time don't commute: with these assignments, [E, t] = iℏ. Time and energy can't be measured simultaneously with arbitrary precision either. This is a slightly less common version of the uncertainty principle. In three dimensions the mapping from quantity to operator is

r → r,   p → −iℏ∇,   E → iℏ(∂/∂t).   (4)

Now let's assume that the particle is nonrelativistic, with rest mass m. Ask class: in classical physics, what is the kinetic energy of the particle? It is p²/2m. We can also assume some potential energy, which is a function of position: V(r). Ask class: what, then, is the equation for total energy in classical physics? It is just

p²/2m + V(r) = E.   (5)

If we apply this to our quantum mechanical operators, we get the Schrödinger equation:

−(ℏ²/2m)∇²ψ + V(r)ψ = iℏ ∂ψ/∂t.   (6)
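A similar sympy sketch, again with arbitrary symbol names, checks the commutator (3) on an arbitrary test function and confirms that the plane wave with ℏω = p²/2m satisfies the free (V = 0) Schrödinger equation (6) in one dimension.

import sympy as sp

# Sketch only: arbitrary names; f is an arbitrary test wavefunction.
x, t = sp.symbols('x t', real=True)
p, hbar, m = sp.symbols('p hbar m', positive=True)
f = sp.Function('f')(x)

# Commutator [x, p] acting on f, with p = (hbar/i) d/dx
xp_f = x * (hbar / sp.I) * sp.diff(f, x)
px_f = (hbar / sp.I) * sp.diff(x * f, x)
print(sp.simplify(xp_f - px_f))        # prints I*hbar*f(x), i.e. equation (3)

# Plane wave with the nonrelativistic dispersion relation hbar*omega = p**2/(2m)
omega = p**2 / (2 * m * hbar)
psi = sp.exp(sp.I * (p * x / hbar - omega * t))

# Both sides of equation (6) with V = 0 in one dimension
lhs = -(hbar**2 / (2 * m)) * sp.diff(psi, x, 2)
rhs = sp.I * hbar * sp.diff(psi, t)
print(sp.simplify(lhs - rhs))          # prints 0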

Note that equation (6) is linear in the wavefunction ψ. Therefore, we can add solutions together. This justifies our concentration on plane waves: the Schrödinger equation is valid for a sum or integral of plane waves, and therefore it is valid for any wavefunction at all, because any wavefunction may be represented as a sum or integral of plane waves.

By the way, there is an interesting symmetry hidden in this equation. Suppose you have a wavefunction that satisfies it, and you have normalized the wavefunction so that the integral of its squared modulus over all space is 1. If you now multiply the wavefunction by e^{iφ}, where φ is a real phase that is constant over all of space, then the Schrödinger equation is still satisfied and the squared modulus is unaffected, so the probability is still normalized. Therefore, you have the freedom to choose this global phase factor. Put another way, there is a symmetry with respect to the global phase factor. In quantum mechanics, every symmetry implies a physical quantity that is conserved, and for this one it turns out to be the electric charge, although I don't know a simple explanation for why.

The Schrödinger equation therefore tells us how a wavefunction changes with time. If you have the wavefunction at a single time and at all points in space, you can calculate its evolution from that point on. This leads to the question of whether we can find special wavefunctions that don't evolve at all, so that they are time-independent. Let's give that a try. If the wavefunction is truly time-independent, then ∂ψ/∂t = 0. Therefore, the energy is exactly zero. Ask class: what does this imply about the momentum? It means that the momentum also must be zero. Ask class: what does that mean about how the wavefunction is localized in space? If the momentum is exactly zero, then the uncertainty in the momentum is also zero, so the uncertainty in the position is infinite to satisfy the uncertainty principle. Therefore, if we are to consider particles that are even slightly localized in space, the energy has to be nonzero (which we knew already), meaning that there has to be some time dependence in the wavefunction. We're glossing over some issues here (for example, one has to decide on a reference energy relative to which the energy is zero, so it isn't quite as simple as we've portrayed), but this does suggest that our demand of strict time-independence is too restrictive.

Instead, we can ask whether it is possible that the probability density, i.e., the squared modulus of the wavefunction, is time-independent. This is possible if the only time dependence in the wavefunction is a factor of the form exp(−iEt/ℏ). Then, if ψ = Ψ(r) exp(−iEt/ℏ),

iℏ(∂/∂t)ψ = EΨ(r) exp(−iEt/ℏ) = Eψ.   (7)

Any time that an operator acting on a wavefunction gives the same wavefunction times a constant, the wavefunction is an eigenfunction of the operator and the constant is the eigenvalue. Here, the eigenvalue is E. A wavefunction of this type is called a stationary state, because as time goes on the spatial part of the wavefunction doesn't evolve.
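A short sympy sketch (arbitrary names again; the spatial part Ψ is left as an unspecified function) confirms both statements: the energy operator returns Eψ, and the probability density carries no time dependence.

import sympy as sp

# Sketch only: Psi(x) is an unspecified spatial wavefunction.
x, t = sp.symbols('x t', real=True)
E, hbar = sp.symbols('E hbar', positive=True)
Psi = sp.Function('Psi')(x)

psi = Psi * sp.exp(-sp.I * E * t / hbar)   # candidate stationary state

# Equation (7): the energy operator gives E times the wavefunction
print(sp.simplify(sp.I * hbar * sp.diff(psi, t) - E * psi))   # prints 0

# The probability density |psi|^2 has no time dependence
density = sp.simplify(psi * sp.conjugate(psi))
print(sp.simplify(sp.diff(density, t)))                       # prints 0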

For such a wavefunction, the Schrödinger equation becomes

−(ℏ²/2m)∇²Ψ(r) + V(r)Ψ(r) = EΨ(r),   (8)

where we have cancelled the common factor exp(−iEt/ℏ). This is the time-independent Schrödinger equation. When you solve this equation with boundary conditions and constraints, such as the requirement that Ψ(r) be continuous, you find that it is solved only by a limited set of functions, and by a limited set of values for E. These are the eigenfunctions and eigenvalues for the system, and their nature depends on V(r). For example, for V(r) = 0 we return to the plane wave solutions and find that any E ≥ 0 is acceptable, but that's not true in general. This limitation of the energies and wavefunctions (sometimes to discrete possibilities, sometimes to continuous ranges) is what puts the quantum in quantum mechanics.

Let me make one more general point before moving on to an example. If the wavefunction is an energy eigenstate, then the only time dependence is in the exp(−iEt/ℏ) factor. That factor evolves on a time scale t_0(E) ∼ ℏ/E, but the evolution has no measurable effects. Note that the spatial wavefunction is then in general a function of the energy as well as of position: Ψ(r) = Ψ(r, E). Now consider a wavefunction that is a mixture of wavefunctions of two different energies:

ψ(r, t) = Ψ(r, E_1) exp(−iE_1 t/ℏ) + Ψ(r, E_2) exp(−iE_2 t/ℏ).   (9)

In general, Ψ(r, E_2) can be a very different function from Ψ(r, E_1). The first term evolves on a time scale t_1 ∼ ℏ/E_1, but the second evolves on a time scale t_2 ∼ ℏ/E_2. Since these will usually be different, the whole wavefunction changes palpably as time goes on, and thus the probability density is not time-independent. Only if the wavefunction is an energy eigenstate (or a sum of eigenstates with a common energy) is the probability density time-independent. Of course, you can still multiply everything by the same global phase factor exp(iφ) without changing anything observable, so that symmetry is maintained.
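To see the cross term explicitly, here is a small sympy sketch (the spatial parts are left as unspecified functions) showing that the probability density built from equation (9) does depend on time, through terms oscillating at frequency (E_1 − E_2)/ℏ, and that the time dependence disappears when E_1 = E_2.

import sympy as sp

# Sketch only: Psi1, Psi2 are unspecified spatial wavefunctions.
x, t = sp.symbols('x t', real=True)
E1, E2, hbar = sp.symbols('E1 E2 hbar', positive=True)
Psi1 = sp.Function('Psi1')(x)
Psi2 = sp.Function('Psi2')(x)

# The two-energy mixture of equation (9)
psi = Psi1 * sp.exp(-sp.I * E1 * t / hbar) + Psi2 * sp.exp(-sp.I * E2 * t / hbar)

density = sp.expand(psi * sp.conjugate(psi))

# The cross terms make the density time-dependent...
print(sp.simplify(sp.diff(density, t)))                 # nonzero in general
# ...but the time dependence vanishes when the two energies coincide
print(sp.simplify(sp.diff(density, t).subs(E2, E1)))    # prints 0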

Now we'll try an example. Suppose that the potential is that of an infinite square well in one dimension. That is, V(x) = 0 when −a < x < a, but V(x) → ∞ otherwise. We want to find the energy eigenstates. First let's consider the region x > a. There the time-independent Schrödinger equation says

−(ℏ²/2m) d²ψ/dx² = (E − V)ψ, with V → ∞,   (10)

where we use total derivatives because we consider only one dimension. Now, if ψ ≠ 0 then we get infinite second derivatives, which leads to pathologies that are clearly unrealistic (such as an unbounded probability density). Therefore, ψ = 0 when x ≥ a or x ≤ −a. Then, in the region −a ≤ x ≤ a we have the equation

−(ℏ²/2m) d²ψ/dx² = Eψ,   (11)

with the boundary condition ψ = 0 at x = ±a. If E < 0 this leads to exponentials that cannot satisfy the boundary conditions. With E = 0, the second derivative is zero, so ψ is a linear function of x; the only linear function of x that satisfies the boundary conditions is ψ = 0, so there isn't a particle anywhere! We need E > 0, in which case

ψ(x) = A cos(√(2mE/ℏ²) x) + B sin(√(2mE/ℏ²) x).   (12)

We need ψ = 0 at x = ±a. One family of solutions has B = 0 and

a√(2mE/ℏ²) = (n + 1/2)π,   so   E = ℏ²(n + 1/2)²π²/(2ma²),   (13)

where n is a non-negative integer. (Taking A = 0 instead gives the sine solutions with sin(√(2mE/ℏ²) a) = 0; together the two families give E_N = N²π²ℏ²/(8ma²) for N = 1, 2, 3, ....) Note that the uncertainty principle predicts Δp ≳ ℏ/a, or a minimum energy of E ∼ ℏ²/(2ma²), so aside from factors of π this is confirmed by the detailed analysis. The minimum energy of a confined particle increases as the confinement region shrinks. It is straightforward to extend this analysis to rectangular confinement in two or three dimensions.

We've talked a couple of times about how any function may be expressed as a sum of sines and cosines. In quantum mechanics one can show that any function may be expressed as a sum of eigenfunctions of any operator. There is a little bit of trickiness if more than one eigenfunction has the same eigenvalue, but that's the gist of it. If one makes an observation of some quantity, then the only possible value observed is one of the eigenvalues of the operator associated with that quantity. If the wavefunction is not an eigenfunction of the operator, it can still be represented as a sum of two or more of the eigenfunctions, and the measured value will be the eigenvalue of one of the eigenfunctions in that sum. One cannot predict which eigenvalue will be measured, but one can state the relative probability of measuring each of the possible eigenvalues. Therefore, we do not completely lose predictability, despite what you might think when reading commentaries on quantum mechanics. If we know the wavefunction of a system at some time, we can determine its evolution from that point forth (in principle), and can therefore make precise statements about the relative probability of obtaining different values in a given measurement. Once the measurement is made, though, the state of the system changes and things become more complicated. The main point is that indeterminacy does not mean that we just have to throw our hands up and quit!
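As a numerical illustration, here is a short Python/NumPy sketch (with ℏ = m = a = 1, and with a grid size and sample wavefunction chosen arbitrarily) that diagonalizes a finite-difference Hamiltonian for this well, compares the lowest levels with E_N = N²π²ℏ²/(8ma²), and expands a sample wavefunction in the numerical eigenbasis to get the measurement probabilities |c_N|² discussed above.

import numpy as np

# Sketch only: units with hbar = m = a = 1; grid and sample wavefunction are arbitrary.
a = 1.0
n_grid = 1000

# Grid on (-a, a); dropping the endpoints enforces psi(+/-a) = 0
x = np.linspace(-a, a, n_grid + 2)[1:-1]
dx = x[1] - x[0]

# Finite-difference Hamiltonian for -(1/2) d^2/dx^2 inside the well
diag = np.full(n_grid, 1.0 / dx**2)
off = np.full(n_grid - 1, -0.5 / dx**2)
H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

energies, states = np.linalg.eigh(H)        # eigenvalues come out sorted

# Compare the lowest levels with the analytic E_N = N^2 pi^2 / (8 a^2)
for N in range(1, 5):
    print(N, energies[N - 1], N**2 * np.pi**2 / (8 * a**2))

# Expand an arbitrary normalized sample wavefunction in the energy eigenbasis;
# |c_N|^2 is the probability of measuring the energy E_N.
sample = np.exp(-20.0 * (x - 0.3)**2)       # a localized bump, purely illustrative
sample /= np.sqrt(np.sum(sample**2) * dx)
c = np.sqrt(dx) * (states.T @ sample)       # overlap coefficients c_N
probs = c**2
print(probs[:4], probs.sum())               # individual probabilities; the total is 1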