Statistical Thermodynamics. Professor Dmitry Garanin. Statistical physics. October 24, 2012.
I. PREFACE

Statistical physics considers systems of a large number of entities (particles) such as atoms, molecules, spins, etc. For these systems it is impossible, and even does not make sense, to study the full microscopic dynamics. The only relevant information is, say, how many atoms have a particular energy; then one can calculate the observable thermodynamic values. That is, one has to know the distribution function of the particles over energies that defines the macroscopic properties. This gives the name statistical physics and defines the scope of this subject.

The approach outlined above can be used both at and off equilibrium. The branch of physics studying nonequilibrium situations is called kinetics. The equilibrium situations are within the scope of statistical physics in the narrow sense that belongs to this course. It turns out that at equilibrium the energy distribution function has an explicit general form and the only problem is to calculate the observables.

The formalism of statistical physics can be developed for both classical and quantum systems. The resulting energy distribution and the calculation of observables are simpler in the classical case. However, the very formulation of the method is more transparent within the quantum-mechanical formalism. In addition, the absolute value of the entropy, including its correct value at T \to 0, can only be obtained in the quantum case. To avoid double work, we will consider only quantum statistical physics in this course, limiting ourselves to systems without interaction. The more general quantum results recover their classical forms in the classical limit.

II. BASIC DEFINITIONS AND ASSUMPTIONS OF QUANTUM STATISTICAL PHYSICS

From quantum mechanics it follows that the states of the system do not change continuously (like in classical physics) but are quantized. There is a huge number of discrete quantum states, with the corresponding energy values being the main parameter characterizing these states. It is easiest to formulate quantum statistics for systems of noninteracting particles; then the results can be generalized. In the absence of interaction, each particle has its own set of quantum states in which it can be at the moment, and for identical particles these sets of states are identical.

The particles can be distributed over their own quantum states in a great number of different ways, the so-called realizations. Each realization of this distribution is called a microstate of the system. As said above, the information contained in the microstates is excessive, and the only meaningful information is how many particles N_i are in a particular quantum state i. These numbers N_i specify what in statistical physics is called a macrostate. If these numbers are known, the energy and other quantities of the system can be found. It should be noted that the statistical macrostate contains more information than the macroscopic physical quantities that follow from it.

For a large number of particles, each macrostate k can be realized by a very large number w_k of microstates, the so-called thermodynamic probability. The main assumption of statistical physics is that all microstates occur with the same probability. Then the probability of the macrostate k is simply proportional to w_k. One can introduce the true probability of the macrostate k as

p_k = \frac{w_k}{\Omega}, \qquad \Omega \equiv \sum_k w_k.   (1)

The true probability is normalized by 1:

1 = \sum_k p_k.   (2)
Both w_k and p_k depend on the whole set of N_i, such as p_k = p_k(N_1, N_2, \ldots).   (3)
This dependence will be specified below. For an isolated system the number of particles N and the energy U are conserved, thus the numbers N_i satisfy the constraints

\sum_i N_i = N,   (4)

\sum_i N_i \varepsilon_i = U,   (5)

where \varepsilon_i is the energy of the particle in the state i. This limits the variety of the macrostates k. The number of particles N_i averaged over all macrostates k has the form

\langle N_i \rangle = \sum_k N_i^{(k)} p_k,   (6)

where N_i^{(k)} is the number of particles in the state i corresponding to the macrostate k. For different macrostates the thermodynamic probabilities differ by a large amount. Then macrostates having small p_k practically never occur, and the state of the system is dominated by macrostates with the largest p_k. Consideration of particular models shows that the maximum of p_k is very sharp for a large number of particles N, whereas N_i^{(k)} is a smooth function of k. In this case Eq. (6) becomes

\langle N_i \rangle \cong \sum_k N_i^{(k_{\max})} p_k = N_i^{(k_{\max})},   (7)

where the value k_{\max} corresponds to the maximum of p_k. In the case of large N, the dominating true probability p_k should be found by maximization with respect to all N_i obeying the two constraints above.

III. TWO-STATE PARTICLES (COIN TOSSING)

A tossed coin can land in two positions: head up or tail up. Considering the coin as a particle, one can say that this particle has two quantum states, 1 corresponding to the head and 2 corresponding to the tail. If N coins are tossed, this can be considered as a system of N particles with two quantum states each. The microstates of the system are specified by the states occupied by each coin. As each coin has 2 states, there are in total

\Omega = 2^N   (8)

microstates. The macrostates of this system are defined by the numbers of particles in each state, N_1 and N_2. These two numbers satisfy the constraint condition (4), i.e., N_1 + N_2 = N. Thus one can take, say, N_1 as the number k labeling macrostates. The number of microstates in one macrostate (that is, the number of different microstates that belong to the same macrostate) is given by

w_{N_1} = \frac{N!}{N_1! (N - N_1)!} = \binom{N}{N_1}.   (9)

This formula can be derived as follows. We have to pick N_1 particles to be in the state 1; all others will be in the state 2. How many ways are there to do this? The first particle can be picked in N ways, the second one can be picked in N - 1 ways since we can choose out of N - 1 particles only, the third particle can be picked in N - 2 different ways, etc. Thus one obtains the number of different ways to pick the particles as

N (N-1)(N-2) \cdots (N - N_1 + 1) = \frac{N!}{(N - N_1)!},   (10)

where the factorial is defined by

N! \equiv N (N-1) \cdots 2 \cdot 1, \qquad 0! = 1.   (11)

The expression in Eq. (10) is not yet the thermodynamic probability w_{N_1} because it contains multiple counting of the same microstates. The realizations in which the same N_1 particles are picked in different orders have been counted as different microstates, whereas they are the same microstate. To correct for the multiple counting, one has to divide
by the number of permutations N_1! of the N_1 particles, which yields Eq. (9). One can check that the condition (1) is satisfied,

\sum_{N_1=0}^{N} w_{N_1} = \sum_{N_1=0}^{N} \frac{N!}{N_1! (N - N_1)!} = 2^N.   (12)

FIG. 1: Binomial distribution for an ensemble of N two-state systems.

The thermodynamic probability w_{N_1} has a maximum at N_1 = N/2, half of the coins head and half of the coins tail. This macrostate is the most probable state. Indeed, as for an individual coin the probabilities to land head up and tail up are both equal to 0.5, this is what we expect. For large N the maximum of w_{N_1} on N_1 becomes sharp.

To prove that N_1 = N/2 is the maximum of w_{N_1}, one can rewrite Eq. (9) in terms of the new variable p = N_1 - N/2 as

w_{N_1} = \frac{N!}{(N/2 + p)! (N/2 - p)!}.   (13)

One can see that w_p is symmetric around N_1 = N/2, i.e., p = 0. Working out the ratio

\frac{w_{N/2 \pm 1}}{w_{N/2}} = \frac{(N/2)! (N/2)!}{(N/2 + 1)! (N/2 - 1)!} = \frac{N/2}{N/2 + 1} < 1,   (14)

one can see that N_1 = N/2 is indeed the maximum of w_{N_1}.

IV. STIRLING FORMULA AND THERMODYNAMIC PROBABILITY AT LARGE N

Analysis of expressions with large factorials is simplified by the Stirling formula

N! \cong \sqrt{2\pi N} \left( \frac{N}{e} \right)^N.   (15)
In particular, Eq. (13) becomes

w_{N_1} = \frac{\sqrt{2\pi N}\,(N/e)^N}{\sqrt{2\pi (N/2+p)}\,[(N/2+p)/e]^{N/2+p}\,\sqrt{2\pi (N/2-p)}\,[(N/2-p)/e]^{N/2-p}}
= \sqrt{\frac{2}{\pi N}} \frac{1}{\sqrt{1 - (2p/N)^2}} \frac{2^N}{(1 + 2p/N)^{N/2+p} (1 - 2p/N)^{N/2-p}}
= \frac{w_{N/2}}{\sqrt{1 - (2p/N)^2}\,(1 + 2p/N)^{N/2+p} (1 - 2p/N)^{N/2-p}},   (16)

where

w_{N/2} = \sqrt{\frac{2}{\pi N}}\, 2^N   (17)

is the maximal value of the thermodynamic probability. Eq. (16) can be expanded for p \ll N. Since p enters both the bases and the exponents, one has to be careful and expand the logarithm of w_{N_1} rather than w_{N_1} itself. The square-root term in Eq. (16) can be discarded as it gives a negligible contribution of order p^2/N^2. One obtains

\ln w_{N_1} \cong \ln w_{N/2} - \left( \frac{N}{2} + p \right) \ln\left( 1 + \frac{2p}{N} \right) - \left( \frac{N}{2} - p \right) \ln\left( 1 - \frac{2p}{N} \right)
\cong \ln w_{N/2} - \left( \frac{N}{2} + p \right)\left[ \frac{2p}{N} - \frac{1}{2}\left(\frac{2p}{N}\right)^2 \right] + \left( \frac{N}{2} - p \right)\left[ \frac{2p}{N} + \frac{1}{2}\left(\frac{2p}{N}\right)^2 \right]
= \ln w_{N/2} - \frac{2p^2}{N}   (18)

and thus

w_{N_1} \cong w_{N/2} \exp\left( -\frac{2p^2}{N} \right).   (19)

One can see that w_{N_1} becomes very small if |p| \gg \sqrt{N}, which is much smaller than N for large N. That is, w_{N_1} is small in the main part of the interval 0 \le N_1 \le N and is sharply peaked near N_1 = N/2.

V. MANY-STATE PARTICLES

The results obtained in Sec. III for two-state particles can be generalized to n-state particles. We are looking for the number of ways to distribute N particles over n boxes so that there are N_i particles in the i-th box. That is, we look for the number of microstates in the macrostate described by the numbers N_i. The result is given by

w = \frac{N!}{N_1! N_2! \cdots N_n!} = \frac{N!}{\prod_{i=1}^{n} N_i!}.   (20)

This formula can be obtained by using Eq. (9) successively. The number of ways to put N_1 particles in box 1 and the other N - N_1 in the other boxes is given by Eq. (9). Then the number of ways to put N_2 particles in box 2 is given by a similar formula with N \to N - N_1 (there are only N - N_1 particles left after N_1 particles have been put in box 1) and N_1 \to N_2. These numbers of ways should be multiplied. Then one considers box 3, etc., until the last box n. The resulting number of microstates is

w = \frac{N!}{N_1! (N - N_1)!}\, \frac{(N - N_1)!}{N_2! (N - N_1 - N_2)!}\, \frac{(N - N_1 - N_2)!}{N_3! (N - N_1 - N_2 - N_3)!} \cdots \frac{(N_{n-2} + N_{n-1} + N_n)!}{N_{n-2}! (N_{n-1} + N_n)!}\, \frac{(N_{n-1} + N_n)!}{N_{n-1}! N_n!}\, \frac{N_n!}{N_n!\, 0!}.   (21)

In this formula, all numerators except for the first one and all second factors in the denominators cancel each other, so that Eq. (20) follows.
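The combinatorial results of Secs. III-V are easy to verify numerically. The following minimal Python sketch (the values of N and n are arbitrary choices, not taken from the text) checks the normalization of the binomial weights, Eq. (12), the Gaussian asymptotic form of Eqs. (17) and (19), and the fact that the multinomial weights of Eq. (20) add up to the total number of microstates n^N.

```python
from math import comb, exp, factorial, pi, sqrt
from itertools import product

# Eq. (12): the binomial weights w_{N1} = N!/(N1!(N-N1)!) sum up to 2^N.
N = 16
assert sum(comb(N, N1) for N1 in range(N + 1)) == 2 ** N

# Eq. (19): for large N, w_{N1} ~ w_{N/2} exp(-2 p^2/N) with p = N1 - N/2,
# and Eq. (17): w_{N/2} ~ 2^N sqrt(2/(pi N)).
N = 1000
w_max = comb(N, N // 2)
print(w_max / (2 ** N * sqrt(2 / (pi * N))))          # close to 1
for p in (0, 5, 10, 20):
    print(p, comb(N, N // 2 + p) / w_max, exp(-2 * p ** 2 / N))

# Eq. (20): the multinomial weights N!/(N_1!...N_n!), summed over all macrostates
# {N_i} with sum N_i = N, give n^N, the total number of microstates.
N, n = 6, 3
total = 0
for occ in product(range(N + 1), repeat=n):           # candidate sets {N_i}
    if sum(occ) == N:
        w = factorial(N)
        for Ni in occ:
            w //= factorial(Ni)                       # exact integer arithmetic
        total += w
assert total == n ** N
print("all checks passed")
```

The first two checks use floating-point ratios, while the multinomial check is done in exact integer arithmetic, so the assertions hold exactly.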
5 5 VI. THERMODYAMIC PROBABILITY AD ETROPY As said above, the main postulate of statistical physics is that the observed macrostate is the macrostate realized by the greatest number of microstates, the most probable of all macrostates. For instance, a macrostate of an ideal gas with all molecules in one half of the container is much less probable than the macrostate with the molecules equally distributed over the whole container. Like the state with all coins landed head up is much less probable than the state with half of the coins landed head up and the other half tail up). For this reason, if the initial state is all molecules in one half of the container, then in the course of evolution the system will come to the most probably state with the molecules equally distributed over the whole container, and will stay in this state forever. We have seen in thermodynamics that an isolated system, initially in a non-equilibrium state, evolves to the equilibrium state characterized by the maximal entropy. On this way one comes to the idea that entropy S and thermodynamic probability w should be related, one being a monotonic function of the other. The form of this function can be found if one notices that entropy is additive while the thermodynamic probability is miltiplicative. If a system consists of two subsystems that weakly interact with each other that is almost always the case as the intersubsystem interaction is limited to the small region near the interface between them), then S = S + S and w = w 1 w. If one chooses L. Boltzmann) S = k B ln w, ) then S = S + S and w = w 1 w are in accord since ln w 1 w ) = ln w 1 + ln w. In Eq. ) k B is the Boltzmann constant, k B = J K 1. 3) It will be shown that the statistically defined entropy above coincides with the thermodynamic entropy at equilibrium. On the other hand, statistical entropy is well defined for nonequilibrium states as well, where as the thermodynamic entropy is usually undefined off equilibrium. VII. QUATUM STATES AD EERGY LEVELS A. Stationary Schrödinger equation In the formalism of quantum mechanics, quantized states and their energies E are the solutions of the eigenvalue problem for a matrix or for a differential operator. In the latter case the problem is formulated as the so-called stationary Schrödinger equation ĤΨ = EΨ, 4) where Ψ = Ψ r) is the complex function called wave function. The physical interpretation of the wave function is that Ψ r) gives the probability for a particle to be found near the space point r. As above, the number of measurements d of the toral measurements in which the particle is found in the elementary volume d 3 r = dxdydz around r is given by d = Ψ r) d 3 r. 5) The wave function satisfies the normalization condition ˆ 1 = d 3 r Ψ r). 6) The operator Ĥ in Eq. 4) is the so-called Hamilton operator or Hamiltonian. For one particle, it is the sum of kinetic and potential energies where the classical momentum p is replaced by the operator Ĥ = ˆp + Ur), 7) m ˆp = i r. 8)
6 The Schrödinger equation can be formulated both for single particles and the systems of particles. In this course we will restrict ourselves to single particles. In this case the notation ε will be used for single-particle energy levels instead of E. One can see that Eq. 4) is a second-order linear differential equation. It is an ordinary differential equation in one dimension and partial differential equation in two and more dimensions. If there is a potential energy, this is a linear differential equation with variale coefficients that can be solved analytically only in special cases. In the case U = this is a linear differential equation with constant coefficients that is easy to solve analytically. An important component of the quantum formalism is boundary conditions for the wave function. In particular, for a particle inside a box with rigid walls the boundary condition is Ψ= at the walls, so that Ψ r) joins smoothly with the value Ψ r) = everywhere outside the box. In this case it is also guaranteed that Ψ r) is integrable and Ψ can be normalized according to Eq. 6). It turns out that the solution of Eq. 4) that satisfies the boundary conditions exists only for a discrete set of E values that are called eigenvalues. The corresponding Ψ are calles eigenfunctions, and all together is called eigenstates. Eigenvalue problems, both for matrices and differential operators, were known in mathematics before the advent of quantum mechanics. The creators of quantum mechanics, mainly Schrödinger and Heisenberg, realised that this mathematical formalism can accurately describe quantization of energy levels observed in experiments. Whereas Schrödinger formlated his famous Schrödinger equation, Heisenberg made a major contribution into description of quantum systems with matrices. 6 B. Energy levels of a particle in a box As an illustration, consider a particle in a one-dimensional rigid box, x L. In this case the momentum becomes and Eq. 4) takes the form and can be represented as ˆp = i d dx d 9) Ψx) = εψx) 3) m dx d dx Ψx) + k Ψx) =, k mε. 31) The solution of this equation satisfying the boundary conditions Ψ) = ΨL) = has the form Ψ ν x) = A sin k ν x), k ν = π ν, ν = 1,, 3,... 3) L where eigenstates are labeled by the index ν. The constant A following from Eq. 6) is A = /L. The energy eigenvalues are given by ε n = k ν m = π ν ml. 33) One can see the the energy ε is quandratic in the momentum p = k de Broglie relation), as it should be, but the energy levels are discrete because of the quantization. For very large boxes, L, the energy levels become quasicontinuous. The lowest-energy level with ν = 1 is called the ground state. For a three-dimensional box with sides L x, L y, and L z a similar calculation yields the energy levels parametrized by the three quantum numbers ν x, ν y, and ν z : ) ε νx,ν y,ν z = π ν x m L + ν y x L + ν z y L. 34) z Here the ground state corresponds to ν x = ν y = ν z = 1. One can order the states in increasing ε and number them by the index j, the ground state being j = 1. If L x = L y = L z = L, then ε νx,ν y,ν z = π ν ml x + ν y + ν ) π ν z = ml 35) and the same value of ε j can be realized for different sets of ν x, ν y, and ν z, for instance, 1,5,1), 5,1,1), etc. The number of different sets of ν x, ν y, ν z ) having the same ε j is called degeneracy and is denoted as g j. States with ν x = ν y = ν z have g j = 1 and they are called non-degenerate. If only two of the numbers ν x, ν y, and ν z coincide, the degeneracy is g j = 3. 
If all numbers are different, g j = 3! = 6. If one sums over the energy levels parametrized by j, one has to multiply the summands by the corresponding degeneracies.
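As a quick illustration, the degeneracies g_j quoted above, and the state counting that underlies the density of states introduced in the next subsection, can be checked by brute force. This is only a sketch; the cutoff values nu_max and R are arbitrary.

```python
from collections import Counter
from itertools import product
from math import pi

# Degeneracies of the cubic-box levels, Eq. (35): epsilon is proportional to
# nu_x^2 + nu_y^2 + nu_z^2 with nu_alpha = 1, 2, 3, ...
nu_max = 6
levels = Counter(nx * nx + ny * ny + nz * nz
                 for nx, ny, nz in product(range(1, nu_max + 1), repeat=3))
for n2 in sorted(levels)[:6]:
    print("nu^2 =", n2, " g =", levels[n2])
# 3 -> (1,1,1): g = 1;  6 -> (2,1,1) and permutations: g = 3;
# 14 -> (3,2,1) and permutations: g = 6.

# Counting all states with nu_x^2 + nu_y^2 + nu_z^2 <= R^2: for large R the
# count approaches one octant of a sphere, (1/8)(4/3) pi R^3, which is the
# counting used to obtain the density of states in the next subsection.
for R in (10, 20, 40):
    count = sum(1 for nx, ny, nz in product(range(1, R + 1), repeat=3)
                if nx * nx + ny * ny + nz * nz <= R * R)
    print(R, count, pi * R ** 3 / 6)
```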
7 7 C. Density of states For systems of a large size and thus very finely quantized states, one can define the density of states ρε) as the number of energy levels dn ε in the interval dε, that is, dn ε = ρε)dε. 36) It is convenient to start calculation of ρε) by introducing the number of states dn νx ν y ν z dν x dν y dν z, considering ν x, ν y, and ν z as continuous. The result obviously is in the elementary volume dn νx ν y ν z = dν x dν y dν z, 37) that is, the corresponding density of states is 1. ow one can rewrite Eq. 34) in the form where ε = π m ν x + ν y + ν z) = π ν m, 38) ν x ν x L x. 39) The advantage of using this variable is that the vector ν = ν x, ν y, ν z ) is proportional to the momentum p, thus ε ν. Eq. 37) becomes dn νx ν y ν z = V d ν x d ν y d ν z, V = L x L y L z 4) And after that one can go over to the number of states within the shell d ν, as was done for the distribution function of the molecules over velocities. Taking into account that ν x, ν y, and ν z are all positive, this shell is not a complete spherical shell but 1/8 of it. Thus The last step is to change the variable from ν to ε using Eq. 38). With dn ν = V π ν d ν. 41) ν = m m π ε, ν = ε, π d ν dε = 1 m 1 ε π 4) one obtains dn ε = π ) 3/ m 4 V εdε, 43) π that is, in Eq. 36) ρε) = V π) ) 3/ m ε. 44) VIII. BOLTZMA DISTRIBUTIO AD COECTIO WITH THERMODYAMICS In this section we obtain the distribution of particles over the n energy levels as the most probable macrostate by maximizing its thermodynamic probability w. We label quantum states by the index i and use Eq. ). The task is to find the maximum of w with respect to all i that satisfy the constraints 4) and 5). Practically it is more convenient to maximize ln w than w itself. Using the method of Lagrange multipliers, one searches for the maximum of the target function Φ 1,,, n ) = ln w + α n i β i=1 n ε i i. 45) i=1
8 Here α and β are Lagrange multipliers with arbitrary) signs chosen anticipating the final result. satisfies 8 The maximum Φ i =, i = 1,,..., n. 46) As we are interested in the behavior of macroscopic systems with i 1, the factorials can be simplified with the help of Eq. 15) that takes the form eglecting the relatively small last term in this expression, one obtains Φ = ln! ln! = ln + ln π. 47) n j ln j + j=1 n n j + α j β j=1 j=1 n ε j j. 48) In the derivatives Φ/ i, the only contribution comes from the terms with j = i in the above expression. One obtains the equations that yield j=1 Φ i = ln i + α βε i = 49) i = e α βε i, 5) the Boltzmann distribution. The Lagrange multipliers can be found from Eqs. 4) and 5) in terms of and U. Summing Eq. 5) over i, one obtains = e α Z, α = ln /Z), 51) where n Z = e βε i 5) i=1 is the so-called partition function German Zustandssumme) that plays a major role in statistical physics. eliminating α from Eq. 5) yields Then, i = Z e βε i. 53) After that for the internal energy U one obtains or U = n ε i i = Z i=1 n ε i e βεi = Z i=1 Z β 54) U = ln Z β. 55) This formula implicitly defines the Lagrange multiplier β as a function of U. The statistical entropy, Eq. ), within the Stirling approximation becomes ) n S = k B ln w = k B ln i ln i. 56) Inserting here Eq. 5) and α from Eq. 51), one obtains S k B = ln i=1 n i α βε i ) = ln α + βu = ln Z + βu, 57) i=1
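As a sanity check of Eqs. (52)-(57), here is a minimal numerical sketch for a hypothetical three-level system; the level energies, N and T are arbitrary assumptions of this sketch, and units with k_B = 1 are used.

```python
import numpy as np

# Hypothetical three-level system (energies, N and T are assumptions).
eps = np.array([0.0, 1.0, 2.5])        # single-particle energies, units with k_B = 1
N, T = 1.0e6, 1.5
beta = 1.0 / T

Z = np.sum(np.exp(-beta * eps))                     # Eq. (52), partition function
N_i = (N / Z) * np.exp(-beta * eps)                 # Eq. (53), Boltzmann distribution
U = np.sum(eps * N_i)                               # internal energy of the macrostate

# Eq. (55): U = -N d(ln Z)/d(beta), checked by a numerical derivative.
h = 1e-6
dlnZ = (np.log(np.sum(np.exp(-(beta + h) * eps))) -
        np.log(np.sum(np.exp(-(beta - h) * eps)))) / (2 * h)
print(U, -N * dlnZ)

# Eq. (57): S/k_B = N ln Z + beta U, compared with the Stirling form of ln w,
# S/k_B = N ln N - sum_i N_i ln N_i.
S_from_w = N * np.log(N) - np.sum(N_i * np.log(N_i))
print(S_from_w, N * np.log(Z) + beta * U)
```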
9 Let us show now that β is related to the temperature. To this purpose, we differentiate S with respect to U keeping = const : 1 S k B U = ln Z U + U β U + β = ln Z β β U + U β + β. 58) U Here according to Eq. 55) the first and second terms cancel each other leading to S U = k Bβ. 59) ote that the energies of quantum states ε i depend on the geometry of the system. In particular, for quantum particles in a box ε i depend on L x, L y, and L z according to Eq. 34). In the differentiation over U above, ε i were kept constant. From the thermodynamical point of view, this amounts to differentiating over U with the constant volume V. Recalling the thermodynamic formula ) S = 1 6) U V T and comparing it with Eq. 59), one identifies β = 1 k B T. 61) Thus Eq. 55) actually defines the internal energy as a function of the temperature. ow from Eq. 57) one obtains the statistical formula for the free energy F = U T S = k B T ln Z. 6) One can see that the partition function Z contains the complete information of the system s thermodynamics since other quantities such as pressure P follow from F. 9 IX. STATISTICAL THERMODYAMICS OF THE IDEAL GAS In this section we demonstrate how the results for the ideal gas, previously obtained within the molecular theory, follow from the more general framework of statistical physics in the preceding section. Consider an ideal gas in a sufficiently large container. In this case the energy levels of the system, see Eq. 34), are quantized so finely that one can introduce the density of states ρ ε) defined by Eq. 36). The number of particles in quantum states within the energy interval dε is the product of the number of particles in one state ε) and the number of states dn ε in this energy interval. With ε) given by Eq. 53) one obtains d ε = ε)dn ε = Z e βε ρ ε) dε. 63) For finely quantized levels one can replace summation in Eq. 5) and similar formulas by integration as ˆ... dερ ε)... 64) Thus for the partition function 5) one has For quantum particles in a rigid box with the help of Eq. 44) one obtains Z = V π) ) 3/ ˆ m dε εe βε = i ˆ Z = dερ ε) e βε. 65) V π) ) 3/ ) 3/ m π β = V mkb T 3/ π. 66) Using the classical relation ε = mv / and thus dε = mvdv, one can obtain the formula for the number of particles in the speed interval dv d v = fv)dv, 67)
10 1 where the speed distribution function fv) is given by fv) = = 1 Z e βε ρ ε) mv = 1 V π ) 3/ V mk B T π) ) 3/ m 4πv exp mv πk B T k B T ) 3/ m m v mv e βε ). 68) This result coincides with the Maxwell distribution function obtained earlier from the molecular theory of gases. The Plank s constant that links to quantum mechanics disappeared from the final result. The internal energy of the ideal gas is its kinetic energy U = ε, 69) ε = m v / being the average kinetic energy of an atom. The latter can be calculated with the help of the speed distribution function above, as was done in the molecular theory of gases. The result has the form ε = f k BT, 7) where in our case f = 3 corresponding to three translational degrees of freedom. The same result can be obtained from Eq. 55) U = ln Z β = ) 3/ m β V ln π = 3 β β ln β = 3 1 β = 3 k BT. 71) After that the known result for the heat capacity C V = U/ T ) V follows. The pressure P is defined by the thermodynamic formula ) F P = V. 7) With the help of Eqs. 6) and 66) one obtains P = k B T ln Z V T = k B T ln V V that amounts to the equation of state of the ideal gas P V = k B T. = k BT V 73) X. STATISTICAL THERMODYAMICS OF HARMOIC OSCILLATORS Consider an ensemble of identical harmonic oscillators, each of them described by the Hamiltonian Ĥ = ˆp m + kx. 74) Here the momentum operator ˆp is given by Eq. 9) and k is the spring constant. This theoretical model can describe, for instance, vibrational degrees of freedom of diatomic molecules. In this case x is the elongation of the chemical bond between the two atoms, relative to the equilibrium bond length. The stationary Schrödinger equation 4) for a harmonic oscillator becomes d ) m dx + kx Ψx) = εψx). 75) The boundary conditions for this equation require that Ψ± ) = and the integral in Eq. 6) converges. In contrast to Eq. 3), this is a linear differential equation with a variable coefficient. Such differential equations in general do not have solutions in terms of known functions. In some cases the solution can be expressed through special functions such as hypergeometrical, Bessel functions, etc. Solution of Eq. 75) can be found but this task belongs to quantum mechanics courses. The main result that we need here is that the energy eigenvalues of the harmonic oscillator have the form ε ν = ω ν + 1 ), ν =, 1,,..., 76)
11 where ω = k/m is the frequency of oscillations. The energy level with ν = is the ground state. The ground-state energy ε = ω / is not zero, as would be the case for a classical oscillator. This quantum ground-state energy is called zero-point energy. It is irrelevant in the calculation of the heat capacity of an ensemble of harmonic oscillators. The partition function of a harmonic oscillator is Z = e βε ν = e β ω / e β ω ) ν e β ω/ = 1 e β ω = 1 e β ω/ e = β ω/ sinh β ω /), 77) ν= ν= where the result for the geometrical progression was used. The hyperbolic functions are defined by We will be using the formulas and 1 + x + x + x = 1 x) 1, x < 1 78) sinhx) ex e x, coshx) ex + e x tanhx) sinhx) coshx), coshx) cothx) sinhx) = 1 tanhx). 79) sinhx) = coshx), coshx) = sinhx) 8) 11 sinhx) = { { x, x 1 e x /, x 1, tanhx) x, x 1 = 1, x 1. 81) or The internal average) energy of the ensemble of oscillators is given by Eq. 55). This yields U = ln Z β = sinh β ω /) 1 β sinh β ω /) = ω ) coth β ω 8) U = ω ) coth ω. 83) k B T Another way of writing the internal energy is ) 1 U = ω + 1 e β ω 1 84) or the similar in terms of T. The limiting low- and high-temperature cases of this formula are U = { ω /, k B T ω k B T, k B T ω. 85) In the low-temperature case, almost all oscillators are in their ground states, since e β ω 1. Thus the main term that contributes into the partition function in Eq. 77) is ν =. Correspondingly, U is the sum of ground-state energies of all oscillators. At high temperatures, the previously obtained classical result is reproduced. The Planck s constant that links to quantum mechanics disappears. In this case, e β ω is only slightly smaller than one, so that very many different n contribute to Z in Eq. 77). In this case one can replace summation by integration, as was done above for the particle in a potential box. In that case one also obtained the classical results. One can see that the crossover between the two limiting cases corresponds to k B T ω. In the high-temperature limit k B T ω, many low-lying energy levels are populated. The top populated level ν can be estimates from k B T ω ν, so that Probability to find an oscillator in the states with ν ν is exponentially small. ν k BT ω 1. 86)
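The closed forms above are easy to verify numerically. The sketch below, in units where the oscillator quantum and the Boltzmann constant are set to one (an arbitrary choice), sums the series for Z, Eq. (77), directly and evaluates the internal energy per oscillator, Eq. (84), showing the crossover of Eq. (85) from the zero-point energy at low temperature to k_B T at high temperature.

```python
import numpy as np

hbar_w, kB = 1.0, 1.0                          # units with h_bar*omega = k_B = 1
nu = np.arange(0, 2000)

def Z(T):
    # Direct sum of Eq. (77): sum_nu exp[-beta*h_bar*omega*(nu + 1/2)]
    return np.sum(np.exp(-hbar_w * (nu + 0.5) / (kB * T)))

def U(T):
    # Eq. (84), including the zero-point energy
    return hbar_w * (0.5 + 1.0 / np.expm1(hbar_w / (kB * T)))

T = 0.7
print(Z(T), 1.0 / (2.0 * np.sinh(hbar_w / (2 * kB * T))))   # closed form of Eq. (77)
for T in (0.05, 0.2, 1.0, 10.0, 100.0):
    print(T, U(T))     # -> 0.5 (zero-point energy) at low T, -> k_B*T at high T
```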
12 1 C / kb ) k B T / hω ) FIG. : Heat capacity of harmonic oscillators. The heat capacity is defined by C = du dt = ω 1 sinh [ ω /k B T )] ) ω ) ) ω /k B T ) k B T = k B. 87) sinh [ ω /k B T )] This formula has limiting cases C = { ) ) ω k B k BT exp ω k BT, k B T ω k B, k B T ω. 88) One can see that at high temperatures the heat capacity can be written as C = f k B, 89) where the effective number of degrees of freedom for an oscillator is f =. The explanation of the additional factor is that the oscillator has not only the kinetic, but also the potential energy, and the average values of these two energies are the same. Thus the total amount of energy in a vibrational degree of freedom doubles with respect to the translational and rotational degrees of freedom. At low temperatures the vibrational heat capacity above becomes exponentially small. One says that vibrational degrees of freedom are getting frozen out at low temperatures. The average quantum number of an oscillator is given by n ν = 1 Z Using Eq. 76), one can calculate this sum as follows n = 1 Z ν= ) 1 + ν e βεν 1 Z ν= Finally with the help of Eq. 84), one finds 1 = 1 1 e βεν ω Z ν= νe βεν. 9) ν= ε ν e βεν Z Z = 1 1 Z ω Z β 1 = U 1 ω. 91) n = 1 e β ω 1 = 1 exp [ ω /k B T )] 1 9)
13 13 so that U = ω 1 + n ). 93) At low temperatures, k B T ω, the average quantum number n becomes exponentially small. This means that the oscillator is predominantly in its ground state, ν =. At high temperatures, k B T ω, Eq. 9) yields n = k BT ω 94) that has the same order of magnitude as the top populated quantum level number ν given by Eq. 86). To conclude this section, let us reproduce the classical high-temperature results by a simpler method. At high temperatures, many levels are populated and the distribution function e βε ν only slightly changes from one level to the other. Indeed its relative change is e βε ν e βε ν+1 e βε ν = 1 e βεν+1 εν) = 1 e β ω = β ω = ω k B T 1. 95) In this situation one can replace summation in Eq. 77) by integration using the rule in Eq. 64). The density of states ρ ε) for the harmonic oscillator can be easily found. The number of levels dn ν in the interval dν is dn ν = dν 96) that is, there is only one level in the interval dν = 1). Then, changing the variables with the help of Eq. 76), one finds the number of levels dn ε in the interval dε as where dn ε = dν dε dε = ) 1 dε dε = ρ ε) dε, 97) dν ρ ε) = 1 ω. 98) One actually can perform a one-line calculation of the partition function Z via integration. Considering ν as continuous, one writes Z = dνe βε = dε dν dε e βε = 1 ω dεe βε = 1 ω β = k BT ω. 99) One can see that this calculation is in accord with Eq. 98). ow the internal energy U is given by that coincides with the second line of Eq. 85). U = ln Z β = ln 1 β +... = ln β +... = β β β = k BT 1) XI. STATISTICAL THERMODYAMICS OF ROTATORS The rotational kinetic energy of a rigid body, expressed with respect to the principal axes, reads E rot = 1 I 1ω I ω + 1 I 3ω 3. 11) We will restrict our consideration to the axially symmetric body with I 1 = I = I. In this case E rot can be rewritten in terms of the angular momentum components L α = I α ω α as follows E rot = L I ) L I 3 I 3. 1) In the important case of a diatomic molecule with two identical atoms, then the center of mass is located between the two atoms at the distance d/ from each, where d is the distance between the atoms. The moment of inertia of
14 the molecule with respect to the 3 axis going through both atoms is zero, I 3 =. The moments of inertia with respect to the 1 and axis are equal to each other: I 1 = I = I = m 14 ) d = 1 md, 13) where m is the mass of an atom. In our case, I 3 =, there is no kinetic energy associated with rotation around the 3 axis. Then Eq. 1) requires L 3 =, that is, the angular momentum of a diatomic molecule is perpendicular to its axis. In quantum mechanics L and L 3 become operators. The solution of the stationary Schrödinger equation yields eigenvalues of L, L 3, and E rot in terms of three quantum numbers l, m, and n. These eigenvalues are given by ˆL = ll + 1), l =, 1,,... 14) ˆL 3 = n, m, n = l, l + 1,..., l 1, l 15) and E rot = ll + 1) I ) n 16) I 3 I that follows by substitution of Eqs. 14) and 15) into Eq. 1). One can see that the results do not depend on the quantum number m and there are l + 1 degenerate levels because of this. Indeed, the angular momentum of the molecule ˆL can have l + 1 different projections on any axis and this is the reason for the degeneracy. Further, the axis of the molecule can have l + 1 different projections on ˆL parametrized by the quantum number n. In the general case I 3 I these states are non-degenerate. However, they become degenerate for the fully symmetric body, I 1 = I = I 3 = I. In this case E rot = ll + 1) I 17) and the degeneracy of the quantum level l is g l = l + 1). For the diatomic molecule I 3 =, and the only acceptable value of n is n =. Thus one obtains Eq. 17) again but with the degeneracy g l = l + 1. The rotational partition function both for the symmetric body and for a diatomic molecule is given by Z = l g l e βε l, g l = l + 1) ξ, ξ = {, symmetric body 1, diatomic molecule, 18) where ε l E rot. In both cases Z cannot be calculated analytically in general. In the general case I 1 = I I 3 with I 3 one has to perform summation over both l and n. In the low-temperature limit, most of rotators are in their ground state l =, and very few are in the first excited state l = 1. Discarding all other values of l, one obtains ) Z = ξ exp β 19) I at low temperatures. The rotational internal energy is given by U = ln Z = 3ξ β I exp β I ) = 3 ξ I exp ) Ik B T 11) and it is exponentially small. The heat capacity is C V = ) U T V ) = 3 ξ ) k B exp Ik B T Ik B T and it is also exponentially small. In the high-temperature limit, very many energy levels are thermally populated, so that in Eq. 18) one can replace summation by integration. With g l = l) ξ, ε l ε = l I, ε l = l I, l ε = I I Iε = ε 111) 11)
15 15 C / kb ) Fully symmetric body Diatomic molecule IT / h FIG. 3: Rotational heat capacity of a fully symmetric body and a diatomic molecule. one has and further Z = dl g l e βε l = ξ dl l ξ e βε l = ξ dε l ε lξ e βε 113) Z = ξ I = 3ξ 1)/ I β ow the internal energy is given by whereas the heat capacity is dε 1 ε Iε ) ξ/ ) ξ+1)/ ˆ I e βε = 3ξ 1)/ dε ε ξ 1)/ e βε ) ξ+1)/ dx x ξ 1)/ e x β ξ+1)/. 114) U = ln Z β = ξ + 1 ln β β = ξ β = ξ + 1 k BT, 115) C V = ) U T V = ξ + 1 k B. 116) In these two formulas, f = ξ + 1 is the number of degrees of freedom, f = for a diatomic molecule and f = 3 for a fully symmetric rotator. This result confirms the principle of equidistribution of energy over the degrees of freedom in the classical limit that requires temperatures high enough. To conclude this section, let us calculate the partition function classically at high temperatures. The calculation can be performed in the general case of all moments of inertia different. The rotational energy, Eq. 11), can be written as E rot = L 1 I 1 + L I + L 3 I ) The dynamical variables here are the angular momentum components L 1, L, and L 3, so that the partition function
16 16 can be obtained by integration with respect to them Z class = = = dl 1 dl dl 3 exp βe rot ) dl 1 exp βl 1 I 1 I 1 β I β I 3 β ) ) ) dl exp βl dl 3 exp βl 3 I I 3 ) 3 dx e x β 3/. 118) In fact, Z class contains some additional coefficients, for instance, from the integration over the orientations of the molecule. However, these coefficients are irrelevant in the calculation of the the internal energy and heat capacity. For the internal energy one obtains U = ln Z β = 3 ln β β = 3 k BT 119) that coincides with Eq. 115) with ξ =. In the case of the diatomic molecule the energy has the form Then the partition function becomes E rot = L 1 I 1 + L I. 1) Z class = = = dl 1 dl exp βe rot ) dl 1 exp βl 1 I 1 I 1 β I β ) ) dl exp βl I ) dx e x β 1. 11) This leads to U = k B T, 1) in accordance with Eq. 115) with ξ = 1. XII. STATISTICAL PHYSICS OF VIBRATIOS I SOLIDS Above in Sec. X we have studied statistical mechannics of harmonic oscillators. For instance, a diatomic molecule behaves as an oscillator, the chemical bond between the atoms periodically changing its length with time in the classical description. For molecules consisting of atoms, vibrations become more complicated and, as a rule, involve all atoms. Still, within the harmonic approximation the potential energy expanded up to the second-order terms in deviations from the equilibrium positions) the classical equations of motion are linear and can be dealt with matrix algebra. Diagonalizing the dynamical matrix of the system, one can find 3 6 linear combinations of atomic coordinates that dynamically behave as independent harmonic oscillators. Such collective vibrational modes of the system are called normal modes. For instance, a hypothetic triatomic molecule with atoms that are allowed to move along a straight line has normal modes. If the masses of the first and third atoms are the same, one of the normal modes corresponds to antiphase oscillations of the first and third atoms with the second central) atom being at rest. Another normal mode corresponds to the in-phase oscillations of the first and third atoms with the central atom oscillating in the anti-phase to them. For more complicated molecules normal modes can be found numerically. ormal modes can be considered quantum mechanically as independent quantum oscillators as in Sec. X. Then the vibrational partition function of the molecule is a product of partition functions of individual normal modes and the internal energy is a sum of internal energies of these modes. The latter is the consequence of the independence of the normal modes. The solid having a well-defined crystal structure is translationally invariant that simplifies finding normal modes, in spite of a large number of atoms. In the simplest case of one atom in the unit cell normal modes can be found immediately by making Fourier transformation of atomic deviations. The resulting normal modes are sound waves
17 with different wave vectors k and polarizations in the simplest case one longitudinal and two equivalent transverse modes). In the sequel, to illustrate basic principles, we will consider only one type of sound waves say, the longitudinal wave) but we will count it three times. The results can be then generalized for different kinds of waves. For wave vectors small enough, there is the linear relation between the wave vector and the frequency of the oscillations ω k = vk 13) In this formula, ω k depends only on the magnitude k = π/λ and not on the direction of k and v is the speed of sound. The acceptable values of k should satisfy the boundary conditions at the boundaries of the solid body. Similarly to the quantum problem of a particle in a potential box with dimentions L x, L y, and L z, one obtains the quantized values of the wave vector components k α with α = x, y, z so that k = k α = π ν α L α, ν = 1,, 3,..., 14) kx + ky + kxz = π ν x + ν y + ν z = π ν, 17 ν α ν α L α. 15) Similarly to the problem of the particle in a box, one can introduce the frequency density of states ρ ω) via the number of normal modes in the frequency interval dω dn ω = ρ ω) dω. 16) Starting with Eq. 37), one arrives at Eq. 41). Then with the help of ω = vπ ν following from Eqs. 13) 15) one obtains that yields for the frequency density of states The total number of normal modes should be 3: dn ω = 3 V π d ν ν dω = 3 V π ρ ω) = 3 = 3 k 1 vπ) 3 ω dω 17) 3V π v 3 ω. 18) 1 = 3 k x,k y,k x 1. 19) For a body of a box shape = x y z, so that the above equation splits into the three separate equations for α = x, y, z. This results in α = k α,max k α =1 = ν α,max 13) k α,max = π α L α = πa, 131) where a is the lattice spacing, that is, the distance between the atoms. As the solid body is macroscopic and there are very many values of the allowed wave vectors, it is convenient to replace summation my integration and use the density of states:... k ˆ ωmax dωρ ω)... 13) At large k and thus ω, the linear dependence of Eq. 13) is no longer valid, so that Eq. 18) becomes invalid as well. Still, one can make a crude approximation assuming that Eqs. 13) and 18) are valid until some maximal frequency of the sound waves in the body. The model based on this assumption is called Debye model. The maximal frequency or the Debye frequency ω D ) is defined by Eq. 19) that takes the form 3 = ˆ ωd dωρ ω). 133)
18 18 With the use of Eq. 18) one obtains wherefrom 3 = ˆ ωd dω 3V π v 3 ω = V ω3 D π v 3, 134) ω D = 6π ) 1/3 v a, 135) where we used V/) 1/3 = v 1/3 = a, and v is the unit-cell volume. It is convenient to rewrite Eq. 18) in terms of ω D as ρ ω) = 9 ω ω ) D The quantum energy levels of a harmonic oscillator are given by Eq. 76). In our case for a k-oscillator it becomes ε k,νk = ω k ν k + 1 ), ν =, 1,, ) All the k-oscillators contribute additively into the internal energy, U = 3 ε k,νk = 3 ω k ν k + 1 ) = 3 k k k ω k n k + 1 ), 138) where, similarly to Eq. 9), n k = 1 exp β ω k ) ) Replacing summation by integration and rearranging Eq. 138), within the Debye model one obtains U = U + ˆ ωd dωρ ω) ω exp β ω) 1, 14) where U is the zero-temperature value of U due to the zero-point motion. The integral term in Eq. 14) can be calculated analytically at low and high temperatures. For k B T ω D, the integral converges at ω ω D, so that one can extend the upper limit of integration to. One obtains U = U + 9 = U + ω D ) 3 β 4 This result can be written in the form where ω dωρ ω) exp β ω) 1 = U + 9 is the Debye temperature. The heat capacity is given by C V = dx ω 3 D dω ω ω exp β ω) 1 x3 e x 1 = U 9 π 4 + ω D ) 3 β 4 15 = U + 3π4 5 k BT ) 3 kb T. 141) ω D ) 3 U = U + 3π4 T 5 k BT, 14) θ D ) U T V θ D = ω D /k B 143) ) 3 = 1π4 T 5 k B, T θ D. 144) θ D Although the Debye temperature enters Eqs. 14) and 144), these results are accurate and insensitive to the Debye approximation.
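Differentiating Eq. (140) with respect to T gives the standard Debye form of the heat capacity per atom, C/(N k_B) = 9 (T/theta_D)^3 times the integral of x^4 e^x/(e^x - 1)^2 from 0 to theta_D/T. The sketch below evaluates this integral numerically and compares the result with the low-temperature T^3 law, Eq. (144), and with the classical value of 3 k_B per atom obtained below.

```python
import numpy as np

# Heat capacity per atom in the Debye model, from differentiating Eq. (140):
# C/(N k_B) = 9 (T/theta_D)^3 * Int_0^{theta_D/T} x^4 e^x / (e^x - 1)^2 dx.
def debye_C_over_kB(t):                        # t = T / theta_D
    x = np.linspace(1e-8, 1.0 / t, 200001)
    integrand = x**4 * np.exp(x) / np.expm1(x)**2
    return 9 * t**3 * np.sum(integrand) * (x[1] - x[0])

for t in (0.02, 0.1, 0.5, 2.0):
    # last column: low-T law of Eq. (144); high-T values approach 3
    print(t, debye_C_over_kB(t), 12 * np.pi**4 / 5 * t**3)
```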
19 19 C / kb ) T 3 T / θ D FIG. 4: Temperature dependence of the phonon heat capacity. The low-temperature result C T 3 dashed line) holds only for temperatures much smaller than the Debye temperature θ D. For k B T ω D, the exponential in Eq. 14) can be expanded for β ω 1. It is, however, convenient to step back and to do the same in Eq. 138). WIth the help of Eq. 19) one obtains U = U + 3 k For the heat capacity one then obtains ω k β ω k = U + 3k B T k 1 = U + 3k B T. 145) C V = 3k B, 146) k B per each of approximately 3 vibrational modes. One can see that Eqs. 145) and 146) are accurate because they do not use the Debye model. The assumption of the Debye model is really essential at intermediate temperatures, where Eq. 14) is approximate. The temperature dependence of the photon heat capacity, following from Eq. 14), is shown in Fig. 4 together with its low-temperature asymptote, Eq. 144), and the high-temperature asymptote, Eq. 146). Because of the large numerical coefficient in Eq. 144), the law C V T 3 in fact holds only for the temperatures much smaller than the Debye temperature. XIII. SPIS I MAGETIC FIELD Magnetism of solids is in most cases due to the intrinsic angular momentum of the electron that is called spin. In quantum mechanics, spin is an operator represented by a matrix. The value of the electronic angular momentum is s = 1/, so that the quantum-mechanical eigenvalue of the square of the electron s angular momentum in the units of ) is ŝ = s s + 1) = 3/4 147) c.f. Eq. 14). Eigenvalues of ŝ z, projection of a free spin on any direction z, are given by ŝ z = m, m = ±1/ 148) c.f. Eq. 15). Spin of the electron can be interpreted as circular motion of the charge inside this elementary particle. This results in the magnetis moment of the electron ˆµ = gµ B ŝ, 149)
20 where g = is the so-called g-factor and µ B is Bohr s magneton, µ B = e / m e ) = J/T. ote that the model of circularly moving charges inside the electron leads to the same result with g = 1; The true value g = follows from the relativistic quantum theory. The energy the Hamiltonian) of an electron spin in a magnetic field B has the form Ĥ = gµ B ŝ B, 15) the so-called Zeeman Hamiltonian. One can see that the minimum of the energy corresponds to the spin pointing in the direction of B. Choosing the z axis in the direction of B, one obtains Ĥ = gµ B ŝ z B. 151) Thus the energy eigenvalues of the electronic spin in a magnetic field with the help of Eq. 148) become ε m = gµ B mb, 15) where, for the electron, m = ±1/. In an atom, spins of all electrons usually combine in a collective spin S that can be greater than 1/. Thus instead of Eq. 15) one has Eigenvalues of the projections of Ŝ on the direction of B are Ĥ = gµ B Ŝ B. 153) m = S, S + 1,..., S 1, S, 154) all together S + 1 different values, c.f. Eq. 14). The S + 1 different energy levels of the spin are given by Eq. 15). Physical quantities of the spin are determined by the partition function where ε m is given by Eq. 15) and Z S = S m= S e βε m = Eq. 155) can be reduced to a finite geometrical progression S m= S e my, 155) y βgµ B B = gµ BB k B T. 156) Z S = e Sy S k= e y ) k = e ys es+1)y 1 e y 1 = es+1/)y e S+1/)y e y/ e y/ = sinh [S + 1/)y]. 157) sinh y/) For S = 1/ with the help of sinhx) = sinhx) coshx) one can transform Eq. 157) to Z 1/ = cosh y/). 158) The partition function beng found, one can easily obtain thermodynamic quantities.. For instance, the free energy F is given by Eq. 6), The average spin polarization is defined by With the help of Eq. 155) one obtains F = k B T ln Z S = k B T ln S z = m = 1 Z S z = 1 Z S m= S sinh [S + 1/)y]. 159) sinh y/) me βε m. 16) Z y = ln Z y. 161)
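The finite geometric sum of Eq. (155) and the closed form of Eq. (157) can be compared directly; the values of S and y used below are arbitrary.

```python
import numpy as np

# Spin partition function: direct sum of Eq. (155) over m = -S, ..., S
# versus the closed form Z_S = sinh[(S+1/2) y] / sinh(y/2), Eq. (157).
def Z_sum(S, y):
    m = np.arange(-S, S + 1)
    return np.sum(np.exp(m * y))

def Z_closed(S, y):
    return np.sinh((S + 0.5) * y) / np.sinh(0.5 * y)

for S in (0.5, 1.0, 2.5):
    for y in (0.3, 2.0):
        print(S, y, Z_sum(S, y), Z_closed(S, y))
```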
21 1 B S x) S = 1/ 1 5/ S = x FIG. 5: Brillouin function B Sx) for different S. With the use of Eq. 157) and the derivatives sinhx) = coshx), coshx) = sinhx) 16) one obtains S z = b S y), 163) where b S y) = S + 1 ) [ coth S + 1 ) ] y 1 [ ] 1 coth y. 164) Usually the function b S y) is represented in the form b S y) = SB S Sy), where B S x) is the Brillouin function B S x) = ) [ coth ) ] x 1 [ ] 1 S S S coth S x. 165) For S = 1/ from Eq. 158) one obtains S z = b 1/ y) = 1 tanh y 166) and B 1/ x) = tanh x. One can see that b S ± ) = ±S and B S ± ) = ±1. At zero magnetic field, y =, the spin polarization should vanish. It is immediately seen in Eq. 166) but not in Eq. 163). To clarify the behavior of b S y) at small y, one can use coth x = 1/x + x/3 that yields In physical units, this means b S y) = = y 3 S + 1 ) [ 1 S + 1 [ S + 1 ) S z = ) + S + 1 y ) ] 1 = ) ] y 1 [ y + 1 ] y 3 SS + 1) y. 167) 3 SS + 1) gµ B B 3 k B T, gµ BB k B T. 168)
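The limits just derived are reproduced by a direct evaluation of b_S(y), Eq. (164): it saturates at S for y much larger than 1 and has the initial slope S(S+1)/3 of Eq. (167).

```python
import numpy as np

# b_S(y) of Eq. (164); coth(x) is written as 1/tanh(x).
def b(S, y):
    return (S + 0.5) / np.tanh((S + 0.5) * y) - 0.5 / np.tanh(0.5 * y)

for S in (0.5, 1.0, 2.5):
    print("S =", S,
          " b_S(20) =", round(float(b(S, 20.0)), 6),            # saturates at S
          " small-y slope =", round(float(b(S, 1e-4) / 1e-4), 4),
          " S(S+1)/3 =", round(S * (S + 1) / 3, 4))              # Eq. (167)
```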
22 The average magnetic moment of the spin can be obtained from Eq. 149) µ z = gµ B S z. 169) The magnetization of the sample M is defined as the magnetic moment per unit volume. If there is one magnetic atom per unit cell and all of them are uniformly magnetized, the magnetization reads M = µ z v = gµ B v S z, 17) where v is the unit-cell volume. The internal energy U of a system of spins can be obtained from the general formula, Eq. 55). The calculation can be, however, avoided since from Eq. 15) simply follows U = ε m = gµ B B m = gµ B B S z = gµ B Bb S y) = k B T yb S y). 171) In particular, at low magnetic fields or high temperatures) one has U = The magnetic susceptibility per spin is defined by From Eqs. 169) and 163) one obtains where b Sy) db Sy) dy For S = 1/ from Eq. 166) one obtains = SS + 1) 3 gµ B B). 17) k B T χ = µ z B. 173) χ = gµ B) k B T b Sy), 174) S + 1/ sinh [S + 1/) y] ) ) 1/ ) sinh [y/] b 1/ y) = 1 4 cosh y/). 176) From Eq. 167) follows b S ) = SS + 1)/3, thus one obtains the high-temperature limit χ = SS + 1) gµ B ) 3 k B T, k BT gµ B B. 177) In the opposite limit y 1 the function b S y) and thus the susceptibility becomes small. The physical reason for this is that at low temperatures, k B T gµ B B, the spins are already strongly aligned by the magnetic field, S z = S, so that S z becomes hardly sensitive to small changes of B. As a function of temperature, χ has a maximum at intermediate temperatures. The heat capacity C can be obtained from Eq. 171) as C = U T = gµ BBb Sy) y T = k Bb Sy)y. 178) One can see that C has a maximum at intermediate temperatures, y 1. At high temperatures C decreases as 1/T : ) C SS + 1) gµb B = k B, 179) 3 k B T and at low temperatures C becomes exponentially small, except for the case S = see Fig. 6). Finally, Eqs. 174) and 178) imply a simple relation between χ and C that does not depend on the spin value S. χ C = T B 18)
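As a consistency check of Eq. (178), the heat capacity per spin obtained from C = k_B b_S'(y) y^2 can be compared with a numerical temperature derivative of the internal energy per spin, U/N = -g mu_B B b_S(y), which follows from Eq. (171). Units with g mu_B = k_B = 1 and the chosen values of S and B are assumptions of this sketch.

```python
import numpy as np

def b(S, y):
    # b_S(y) of Eq. (164)
    return (S + 0.5) / np.tanh((S + 0.5) * y) - 0.5 / np.tanh(0.5 * y)

S, B = 1.5, 0.3

def U(T):
    # internal energy per spin, Eq. (171) divided by N, with g*mu_B = 1
    return -B * b(S, B / T)

h = 1e-5
for T in (0.05, 0.2, 1.0, 5.0):
    y = B / T                                         # Eq. (156)
    b_prime = (b(S, y + 1e-6) - b(S, y - 1e-6)) / 2e-6
    C_formula = b_prime * y**2                        # Eq. (178) with k_B = 1
    C_numeric = (U(T + h) - U(T - h)) / (2 * h)       # C = dU/dT
    print(T, C_formula, C_numeric)    # maximum at intermediate T, small in both limits
```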
23 3 C / kb ) S = 5/ 1 S = 1/ k T / Sg B) B µ B FIG. 6: Temperature dependence of the heat capacity of spins in a magnetic fields with different values of S. XIV. PHASE TRASITIOS AD THE MEA-FIELD APPROXIMATIO As explained in thermodynamics, with changing thermodynamic parameters such as temperature, pressure, etc., the system can undergo phase transitions. In the first-order phase transitions, the chemical potentials µ of the two competing phases become equal at the phase transition line, while on each side of this line they are unequal and only one of the phases is thermodynamically stable. The first-order trqansitions are thus abrupt. To the contrast, secondorder transitions are gradual: The so-called order parameter continuously grows from zero as the phase-transition line is crossed. Thermodynamic quantities are singular at second-order transitions. Phase transitions are complicated phenomena that arise due to interaction between particles in many-particle systems in the thermodynamic limit. It can be shown that thermodynamic quantities of finite systems are non-singular and, in particular, there are no second-order phase transitions. Up to now in this course we studied systems without interaction. Including the latter requires an extension of the formalism of statistical mechanics that is done in Sec. XV. In such an extended formalism, one has to calculate partition functions over the energy levels of the whole system that, in general, depend on the interaction. In some cases these energy levels are known exactly but they are parametrized by many quantum numbers over which summation has to be carried out. In most of the cases, however, the energy levels are not known analytically and their calculation is a huge quantum-mechanical problem. In both cases calculation of the partition function is a very serious task. There are some models for which thermodynamic quantities had been calculated exactly, including models that possess second-order phase transitions. For other models, approximate numerical methods had been developed that allow calculating phase transition temperatures and critical indices. It was shown that there are no phase transitions in one-dimensional systems with short-range interactions. It is most convenient to illustrate the physics of phase transitions considering magnetic systems, or the systems of spins introduced in Sec. XIII. The simplest forms of interaction between different spins are Ising interaction J ij Ŝ iz Ŝ jz including only z components of the spins and Heisenberg interaction J ij Ŝ i Ŝ j. The exchange interaction J ij follows from quantum mechanics of atoms and is of electrostatic origin. In most cases practically there is only interaction J between spins of the neighboring atoms in a crystal lattice, because J ij decreases exponentially with distance. For the Ising model, all energy levels of the whole system are known exactly while for the Heisenberg model they are not. In the case J > neighboring spins have the lowest interaction energy when they are collinear, and the state with all spins pointing in the same direction is the exact ground state of the system. For the Ising model, all spins in the ground state are parallel or antiparallel with the z axis, that is, it is double-degenerate. For the Heisenberg model the spins can point in any direction in the ground state, so that the ground state has a continuous degeneracy. In both cases, Ising and Heisenberg, the ground state for J > is ferromagnetic. 
The nature of the interaction between spins considered above suggests that at T \to 0 the system should fall into its ground state, so that thermodynamic averages of all spins approach their maximal values, \langle \hat{S}_i \rangle \to S. With increasing
24 temperature, excited levels of the system become populated and the average spin value decreases, Ŝi < S. At high temperatures all energy levels become populated, so that neighboring spins can have all possible orientations with respect to each other. In this case, if there is no external magnetic field acting on the spins see Sec. XIII), the average spin value should be exactly zero, because there as many spins pointing in one direction as there are the spins pointing in the other direction. This is why the high-temperature state is called symmetric state. If now the temperature is lowered, there should be a phase transition temperature T C the Curie temperature for ferromagnets) below which the order parameter average spin value) becomes nonzero. Below T C the symmetry of the state is spontaneously broken. This state is called ordered state. ote that if there is an applied magnetic field, it will break the symmetry by creating a nonzero spin average at all temperatures, so that there is no sharp phase transition, only a gradual increase of Ŝi as the temperature is lowered. Although the scenario outlined above looks persuasive, there are subtle effects that may preclude ordering and ensure Ŝi = at all nonzero temperatures. As said above, ordering does not occur in one dimension spin chains) for both Ising and Heisenberg models. In two dimensions, Ising model shows a phase transition, and the analytical solution of the problem exists and was found by Lars Osager. However, two-dimensional Heisenberg model does not order. In three dimensions, both Ising and Heisenberg models show a phase transition and no analytical solution is available. 4 A. The mean-field approximation for ferromagnets While a rigorous solution of the problem of phase transition is very difficult, there is a simple approximate solution that captures the physics of phase transitions as described above and provides qualitatively correct results in most cases, including three-dimensional systems. The idea of this approximate solution is to reduce the original many-spin problem to an effective self-consistent one-spin problem by considering one spin say i) and replacing other spins in the interaction by their average thermodynamic values, J ij Ŝ i Ŝ j J ij Ŝ i Ŝj. One can see that now the interaction term has mathematically the same form as the term describing interaction of a spin with the applied magnetic field, Eq. 153). Thus one can use the formalism of Sec. XIII) to solve the problem analytically. This aproach is called mean field approximation MFA) or molecular field approximation and it was first suggested by Weiss for ferromagnets. The effective one-spin Hamiltonian within the MFA is similar to Eq: 153) and has the form ) Ĥ = Ŝ gµ B B + Jz Ŝ + 1 Ŝ Jz, 181) where z is the number of nearest neigbors for a spin in the lattice z = 6 for the simple cubic lattice). Here the last Ŝ term ensures that the replacement Ŝ Ŝ in Ŝ Ĥ yields Ĥ = gµ B B 1/)Jz that becomes the correct ground-state energy at T = where Ŝ = S. For the Heisenberg model, in the presence of a whatever weak applied field B the ordering will occur in the direction of B. Choosing the z axis in this direction, one can simplify Eq. 181) to ) Ĥ = gµ B B + Jz Ŝz Ŝz + 1 Ŝz Jz. 18) The same mean-field Hamilotonian is valid for the Ising model as well, if the magnetic field is applied along the z axis. Thus, within the MFA but not in general!) Ising and Heisenberg models are equivalent. The energy levels corresponding to Eq. 
(182) are

\varepsilon_m = -\left( g\mu_B B + Jz \langle \hat{S}_z \rangle \right) m + \frac{1}{2} Jz \langle \hat{S}_z \rangle^2, \qquad m = -S, -S+1, \ldots, S-1, S.   (183)

Now, after calculation of the partition function Z, Eq. (155), one arrives at the free energy per spin

F = -k_B T \ln Z = \frac{1}{2} Jz \langle \hat{S}_z \rangle^2 - k_B T \ln \frac{\sinh\left[ (S + 1/2) y \right]}{\sinh(y/2)},   (184)

where

y \equiv \frac{g\mu_B B + Jz \langle \hat{S}_z \rangle}{k_B T}.   (185)
25 5 T =.5T C T = T C T = T C Ŝ z S = 1/ FIG. 7: Graphic solution of the Curie-Weiss equation. To find the actual value of the order parameter Ŝz at any temperature T and magnetic field B, one has to minimize F with respect to Ŝz, as explained in thermodynamics. One obtains the equation = F = Jz Ŝz Jzb S y), 186) Ŝz where b S y) is defined by Eq. 164). Rearranging terms one arrives at the transcedental Curie-Weiss equation Ŝz = b S gµ BB + Jz Ŝz 187) k B T that defines Ŝz. In fact, this equation could be obtained in a shorter way by using the modified argument, Eq. 185), in Eq. 163). For B =, Eq. 187) has the only solution Ŝz = at high temperatures. As b S y) has the maximal slope at y =, it is sufficient that this slope with respect to Ŝz ) becomes smaller than 1 to exclude any solution other than Ŝz =. Using Eq. 167), one obtains that the only solution Ŝz = is realized for T T C, where SS + 1) Jz T C = 188) 3 k B is the Curie temperature within the mean-field approximation. Below T C the slope of b S y) with respect to Ŝz exceeds 1, so that there are three solutions for Ŝz : One solution Ŝz = and two symmetric solutions Ŝz see Fig. 7). The latter correspond to the lower free energy than the solution Ŝz = thus they are thermodynamically stable see Fig. 8).. These solutions describe the ordered state below T C. Slightly below T C the value of Ŝz is still small and can be found by expanding b S y) up to y 3. In particular, b 1/ y) = 1 tanh y = 1 4 y 1 48 y3. 189)
26 6 F/Jz) T = T C T = T C T =.8T C Ŝ z S = 1/ T =.5T C FIG. 8: Free energy of a ferromagnet vs Ŝz within the MFA. Arbitrary vertical shift.) Using T C = 1/4)Jz/k B and thus Jz = 4k B T C for S = 1/, one can rewrite Eq. 187) with B = in the form Ŝz = T C T Ŝz 4 3 TC T One of the solutions is Ŝz = while the other two solutions are Ŝz S = ± 3 ) 3 Ŝz 3. 19) 1 TTC ), S = 1, 191) where the factor T/T C was replaced by 1 near T C. Although obtained near T C, in this form the result is only by the factor 3 off at T =. The singularity of the order parameter near T C is square root, so that for the magnetization critical index one has β = 1/. Results of the numerical solution of the Curie-Weiss equation with B = for different S are shown in Fig. 9. ote that the magnetization is related to the spin average by Eq. 17). Let us now consider the magnetic susceptibility per spin χ defined by Eq. 173) above T C. Linearizing Eq. 187) with the use of Eq. 167), one obtains Here from one obtains Ŝz = SS + 1) 3 χ = µ z B gµ B B + Jz Ŝz = gµ B k B T Ŝz B = SS + 1) gµ B B 3 k B T + T C Ŝz. 19) T = SS + 1) 3 gµ B ) k B T T C ). 193) In contrast to non-interacting spins, Eq. 177), the susceptibility diverges at T = T C rather than at T =. The inverse susceptivility χ 1 is a straight line crossing the T axis at T C. In the theory of phase transitions the critcal index for the susceptibility γ is defined as χ T T C ) γ. One can see that γ = 1 within the MFA. More precise methods than the MFA yield smaller values of β and larger values of γ that depend on the details of the model such as the number of interacting spin components 1 for the Ising and 3 for the Heisenberg) and the dimension of the lattice. On the other hand, critical indices are insensitive to such factors as lattice structure and the value of the spin S. On the other hand, the value of T C does not possess such universality and depends on all parameters of the problem. Accurate values of T C are by up to 5% lower than their mean-field values in three dimensions.
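The Curie-Weiss equation (187) is easy to solve numerically by fixed-point iteration, which reproduces the behavior shown in Fig. 9 below: the order parameter saturates at low temperature, falls off as the square root of T_C - T just below T_C, and vanishes above T_C given by Eq. (188). The units Jz = k_B = g mu_B = 1 and the tiny symmetry-breaking field are assumptions of this sketch.

```python
from math import tanh

def b(S, y):
    """Spin polarization function b_S(y), Eq. (164), with a small-y series branch."""
    if abs(y) < 1e-5:
        return S * (S + 1) / 3 * y                # Eq. (167), avoids cancellation
    return (S + 0.5) / tanh((S + 0.5) * y) - 0.5 / tanh(0.5 * y)

def mean_spin(S, T, Jz=1.0, B=1e-9, n_iter=5000):
    # Fixed-point iteration of Eq. (187); the tiny field B breaks the symmetry.
    Sz = S                                        # start from the saturated value
    for _ in range(n_iter):
        Sz = b(S, (B + Jz * Sz) / T)
    return Sz

S = 0.5
Tc = S * (S + 1) / 3                              # Eq. (188) with Jz = k_B = 1
for T in (0.1, 0.5, 0.9, 0.99, 1.5):
    print(T, mean_spin(S, T * Tc) / S)            # order parameter vanishes above T_C
```

The simple iteration converges slowly close to T_C, where the slope of the map approaches unity, so a few thousand iterations are used here; a root finder would converge faster but the result is the same.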
FIG. 9: Temperature dependences of the normalized spin averages ⟨Ŝ_z⟩/S for S = 1/2 and S = 5/2, obtained from the numerical solution of the Curie-Weiss equation.

XV. THE GRAND CANONICAL ENSEMBLE

In the previous chapters we considered statistical properties of systems of non-interacting and distinguishable particles. Assigning a quantum state to one such particle is independent of assigning states to the other particles; these processes are statistically independent. Particle 1 in state 1 and particle 2 in state 2, the microstate (1,2), and the microstate (2,1) are counted as different microstates within the same macrostate. However, quantum-mechanical particles such as electrons are indistinguishable from each other. Thus the microstates (1,2) and (2,1) should be counted as a single microstate: one particle in state 1 and one in state 2, without specifying which particle is in which state. This means that assigning quantum states to different particles, as we have done above, is not statistically independent. Accordingly, the method based on the maximization of the Boltzmann entropy of a large system becomes invalid. This changes the foundations of the statistical description of such systems.

In addition to the indistinguishability of quantum particles, it follows from relativistic quantum theory that particles with half-integer spin, called fermions, cannot occupy a quantum state more than once. There is no place for more than one fermion in any quantum state, the so-called exclusion principle. An important example of a fermion is the electron. To the contrary, particles with integer spin, called bosons, can be put into a quantum state in unlimited numbers. Examples of bosons are helium atoms and other atoms with an integer spin. Note that in the case of fermions, even for a large system, one cannot use the Stirling formula, Eq. (15), to simplify combinatorial expressions, since the occupation numbers of quantum states can only be 0 and 1. This is an additional reason why the Boltzmann formalism does not work here. Finally, the Boltzmann formalism of quantum statistics does not work for systems with interaction, because the quantum states of such systems are quantum states of the system as a whole rather than one-particle quantum states.

To construct the statistical mechanics of systems of indistinguishable particles and/or systems with interaction, one has to consider an ensemble of 𝒩 systems. The states of the systems are labeled by ξ, so that

\mathcal{N} = \sum_\xi \mathcal{N}_\xi,    (194)

where 𝒩_ξ is the number of systems in the state ξ, and each state ξ is specified by the set of N_i, the numbers of particles in each quantum state i. We require that the
average number of particles in a system and the average energy of a system over the ensemble are fixed:

\bar N = \frac{1}{\mathcal{N}}\sum_\xi \mathcal{N}_\xi N_\xi, \qquad N_\xi = \sum_i N_i,    (195)

U = \frac{1}{\mathcal{N}}\sum_\xi \mathcal{N}_\xi E_\xi, \qquad E_\xi = \sum_i \varepsilon_i N_i.    (196)

Here the sums over i are performed for the indicated state ξ. Do not confuse N_ξ with 𝒩_ξ. The number of ways W in which a given set {𝒩_ξ} can be realized (i.e., the number of microstates of the ensemble) can be calculated in exactly the same way as was done above in the case of Boltzmann statistics. As the systems we are considering are distinguishable, one obtains

W = \frac{\mathcal{N}!}{\prod_\xi \mathcal{N}_\xi!}.    (197)

Since the number of systems in the ensemble, and thus all 𝒩_ξ, can be arbitrarily high, the actual state of the ensemble is the one maximizing W under the constraints of Eqs. (195) and (196). Within the method of Lagrange multipliers, one has to maximize the target function

\Phi(\mathcal{N}_1, \mathcal{N}_2, \ldots) = \ln W + \alpha\sum_\xi \mathcal{N}_\xi N_\xi - \beta\sum_\xi \mathcal{N}_\xi E_\xi    (198)

[cf. Eq. (45) and below]. Using the Stirling formula, Eq. (15), one obtains

\Phi = \ln\mathcal{N}! - \sum_\xi \mathcal{N}_\xi\ln\mathcal{N}_\xi + \sum_\xi \mathcal{N}_\xi + \alpha\sum_\xi \mathcal{N}_\xi N_\xi - \beta\sum_\xi \mathcal{N}_\xi E_\xi.    (199)

Note that we do not require the N_i to be large; in particular, for fermions one has N_i = 0, 1. Maximizing Φ, one obtains the equations

\frac{\partial\Phi}{\partial\mathcal{N}_\xi} = -\ln\mathcal{N}_\xi + \alpha N_\xi - \beta E_\xi = 0    (200)

that yield the Gibbs distribution of the so-called grand canonical ensemble,

\mathcal{N}_\xi = e^{\alpha N_\xi - \beta E_\xi}.    (201)

The total number of systems is given by

\mathcal{N} = \sum_\xi \mathcal{N}_\xi = \sum_\xi e^{\alpha N_\xi - \beta E_\xi} = Z,    (202)

where Z is the partition function of the grand canonical ensemble. Thus the ensemble average of the internal energy, Eq. (196), becomes

U = \frac{1}{Z}\sum_\xi e^{\alpha N_\xi - \beta E_\xi}E_\xi = -\frac{\partial\ln Z}{\partial\beta}.    (203)

In contrast to Eq. (55), this formula does not contain N explicitly, since the average number of particles in a system is encapsulated in Z. As before, the parameter β can be associated with temperature, β = 1/(k_B T). Thus Eq. (203) can be rewritten as

U = k_B T^2\frac{\partial\ln Z}{\partial T}.    (204)

Now the entropy S can be found thermodynamically as

S = \int_0^T dT'\,\frac{C_V}{T'} = \int_0^T dT'\,\frac{1}{T'}\frac{\partial U}{\partial T'}.    (205)

Integrating by parts and using Eq. (204), one obtains

S = \frac{U}{T} + \int_0^T dT'\,\frac{U}{T'^2} = \frac{U}{T} + \int_0^T dT'\,k_B\frac{\partial\ln Z}{\partial T'} = \frac{U}{T} + k_B\ln Z.    (206)

This yields the free energy

F = U - TS = -k_B T\ln Z.    (207)
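The relations 𝒩 = Z, Eq. (202), and U = −∂ln Z/∂β, Eq. (203), together with the analogous relation N̄ = ∂ln Z/∂α (which follows the same way and is used in the next section), can be checked on a toy model. Here is a minimal sketch; the three level energies and the values of α and β are arbitrary illustrative choices. The sum over ξ runs over all fermionic occupation patterns of three one-particle levels, and the derivatives of ln Z are taken by finite differences.

```python
import numpy as np
from itertools import product

eps = np.array([0.0, 1.0, 2.5])     # illustrative one-particle energies
alpha, beta = -0.3, 1.2             # illustrative Lagrange multipliers

def states():
    """All system states xi: fermionic occupations N_i = 0 or 1 of each level."""
    return product([0, 1], repeat=len(eps))

def lnZ(alpha, beta):
    """ln Z with Z = sum_xi exp(alpha*N_xi - beta*E_xi), Eq. (202)."""
    return np.log(sum(np.exp(alpha * sum(occ) - beta * np.dot(occ, eps))
                      for occ in states()))

# Direct ensemble averages with weights exp(alpha*N_xi - beta*E_xi)/Z:
Z = np.exp(lnZ(alpha, beta))
w = {occ: np.exp(alpha * sum(occ) - beta * np.dot(occ, eps)) / Z for occ in states()}
U    = sum(wt * np.dot(occ, eps) for occ, wt in w.items())   # Eq. (196)
Nbar = sum(wt * sum(occ)         for occ, wt in w.items())   # Eq. (195)

h = 1e-6                            # finite-difference step
dlnZ_dbeta  = (lnZ(alpha, beta + h) - lnZ(alpha, beta - h)) / (2 * h)
dlnZ_dalpha = (lnZ(alpha + h, beta) - lnZ(alpha - h, beta)) / (2 * h)
print("U    =", U,    "  vs  -dlnZ/dbeta  =", -dlnZ_dbeta)   # Eq. (203)
print("Nbar =", Nbar, "  vs   dlnZ/dalpha =",  dlnZ_dalpha)
```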
XVI. STATISTICAL DISTRIBUTIONS OF INDISTINGUISHABLE PARTICLES

The Gibbs distribution looks similar to the Boltzmann distribution. However, this is a distribution of the systems of particles in the statistical ensemble, rather than a distribution of the particles over the energy levels. To obtain the latter, an additional step has to be made: calculating the average of N_i as

\bar N_i = \frac{1}{Z}\sum_\xi \mathcal{N}_\xi N_i = \frac{1}{Z}\sum_\xi e^{\alpha N_\xi - \beta E_\xi}N_i,    (208)

where Z is defined by Eq. (202). As both N_ξ and E_ξ are additive in the N_i, see Eqs. (195) and (196), and the states ξ are specified by their corresponding sets of N_i, the summand of Eq. (202) is multiplicative:

Z = \sum_{N_1}e^{(\alpha-\beta\varepsilon_1)N_1}\sum_{N_2}e^{(\alpha-\beta\varepsilon_2)N_2}\cdots = \prod_j Z_j.    (209)

As a similar factorization occurs in Eq. (208), all factors except the one containing N_i cancel each other, leading to

\bar N_i = \frac{1}{Z_i}\sum_{N_i}N_i\,e^{(\alpha-\beta\varepsilon_i)N_i} = \frac{\partial\ln Z_i}{\partial\alpha}.    (210)

The partition functions for the quantum states i have different forms for bosons and fermions. For fermions N_i takes the values 0 and 1 only, thus

Z_i = 1 + e^{\alpha-\beta\varepsilon_i}.    (211)

For bosons, N_i takes any value from 0 to ∞. Thus

Z_i = \sum_{N_i=0}^{\infty}e^{(\alpha-\beta\varepsilon_i)N_i} = \frac{1}{1-e^{\alpha-\beta\varepsilon_i}}.    (212)

Now N̄_i can be calculated from Eq. (210). Discarding the bar over N_i, one obtains

N_i = \frac{1}{e^{\beta\varepsilon_i-\alpha}\pm 1},    (213)

where (+) corresponds to fermions and (−) corresponds to bosons. In the first case the distribution is called the Fermi-Dirac distribution and in the second case the Bose-Einstein distribution. Eq. (213) should be compared with the Boltzmann distribution. Whereas β = 1/(k_B T), the parameter α is defined by the normalization condition

N = \sum_i N_i = \sum_i \frac{1}{e^{\beta\varepsilon_i-\alpha}\pm 1}.    (214)

It can be shown that the parameter α is related to the chemical potential μ as α = βμ = μ/(k_B T). Thus Eq. (213) can be written as

N_i = f(\varepsilon_i), \qquad f(\varepsilon) = \frac{1}{e^{\beta(\varepsilon-\mu)}\pm 1}.    (215)

At high temperatures μ becomes large negative, so that −βμ ≫ 1. In this case one can neglect the ±1 in the denominator in comparison with the large exponential, and the Bose-Einstein and Fermi-Dirac distributions simplify to the Boltzmann distribution. Replacing summation by integration, one obtains

N = e^{\beta\mu}\int_0^{\infty}d\varepsilon\,\rho(\varepsilon)e^{-\beta\varepsilon} = e^{\beta\mu}Z,    (216)

where the integral is the partition function Z of a single particle. Using Eqs. (66) and (31) yields

e^{-\beta\mu} = \frac{Z}{N} = \frac{1}{n}\left(\frac{mk_BT}{2\pi\hbar^2}\right)^{3/2} \gg 1,    (217)

and our calculations are self-consistent. At low temperatures the ±1 in the denominator of Eq. (215) cannot be neglected, and the system is called degenerate.
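A minimal numerical sketch of Eq. (215) follows (the values of μ, k_B T and the level energies are illustrative, with k_B = 1): in the classical regime, where e^{β(ε−μ)} ≫ 1, the Fermi-Dirac and Bose-Einstein occupation numbers both reduce to the Boltzmann form.

```python
import numpy as np

kT, mu = 1.0, -3.0                    # illustrative values; mu large and negative
eps = np.array([0.0, 1.0, 2.0, 5.0, 10.0])

x = (eps - mu) / kT
fermi = 1.0 / (np.exp(x) + 1.0)       # Fermi-Dirac,   Eq. (215) with the + sign
bose  = 1.0 / (np.exp(x) - 1.0)       # Bose-Einstein, Eq. (215) with the - sign
boltz = np.exp(-x)                    # Boltzmann limit

for e, f, b, c in zip(eps, fermi, bose, boltz):
    print(f"eps = {e:5.1f}   FD = {f:.5f}   BE = {b:.5f}   Boltzmann = {c:.5f}")
# Since exp((eps - mu)/kT) >> 1 here, the +-1 in the denominator is negligible.
```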
XVII. BOSE-EINSTEIN GAS

In the macroscopic limit N → ∞ and V → ∞, such that the concentration of particles n = N/V is constant, the energy levels of the system become so finely quantized that they become quasicontinuous. In this case the summation in Eq. (214) can be replaced by integration. However, it turns out that below some temperature T_B the ideal Bose gas undergoes the so-called Bose condensation: a macroscopic number of particles falls into the ground state. These particles are called the Bose condensate. Setting the energy of the ground state to zero (which can always be done), from Eq. (215) one obtains the number of particles N_0 in the ground state,

N_0 = \frac{1}{e^{-\beta\mu}-1}.    (218)

Resolving this for μ, one obtains

\mu = -k_BT\ln\left(1+\frac{1}{N_0}\right) \cong -\frac{k_BT}{N_0}.    (219)

Since N_0 is very large, one has practically μ = 0 below the Bose condensation temperature. Having μ = 0, one can easily calculate the number of particles in the excited states, N_ex, by integration as

N_{ex} = \int_0^{\infty}\frac{\rho(\varepsilon)\,d\varepsilon}{e^{\beta\varepsilon}-1}.    (220)

The total number of particles is then

N = N_0 + N_{ex}.    (221)

In three dimensions, using the density of states ρ(ε) given by Eq. (44), one obtains

N_{ex} = \frac{V}{(2\pi)^2}\left(\frac{2m}{\hbar^2}\right)^{3/2}\int_0^{\infty}\frac{\sqrt{\varepsilon}\,d\varepsilon}{e^{\beta\varepsilon}-1} = \frac{V}{(2\pi)^2}\left(\frac{2m}{\hbar^2}\right)^{3/2}\frac{1}{\beta^{3/2}}\int_0^{\infty}\frac{\sqrt{x}\,dx}{e^x-1}.    (222)

The integral here is a number, a particular case of the general integral

\int_0^{\infty}\frac{x^{s-1}\,dx}{e^x-1} = \Gamma(s)\zeta(s),    (223)

where Γ(s) is the gamma function satisfying

\Gamma(n+1) = n\Gamma(n) = n!,    (224)

\Gamma(1/2) = \sqrt{\pi}, \qquad \Gamma(3/2) = \sqrt{\pi}/2,    (225)

and ζ(s) is the Riemann zeta function

\zeta(s) = \sum_{k=1}^{\infty}k^{-s},    (226)

\zeta(1) = \infty, \qquad \zeta(3/2) \cong 2.612, \qquad \zeta(5/2) \cong 1.341,    (227)

\frac{\zeta(5/2)}{\zeta(3/2)} \cong 0.513.    (228)

Thus Eq. (222) yields

N_{ex} = \frac{V}{(2\pi)^2}\left(\frac{2mk_BT}{\hbar^2}\right)^{3/2}\Gamma(3/2)\zeta(3/2),    (229)

increasing with temperature. At T = T_B one has N_{ex} = N, that is,

N = \frac{V}{(2\pi)^2}\left(\frac{2mk_BT_B}{\hbar^2}\right)^{3/2}\Gamma(3/2)\zeta(3/2).    (230)
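The integral (223) and the numbers quoted in Eqs. (227)-(228) are easy to verify, for instance with scipy (a minimal sketch; the substitution x = t² merely removes the integrable singularity of the integrand at x = 0):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, zeta

def bose_integral(s):
    """int_0^inf x^(s-1)/(e^x - 1) dx, computed via the substitution x = t**2."""
    integrand = lambda t: 2.0 * t**(2*s - 1) * np.exp(-t**2) / (-np.expm1(-t**2))
    val, _ = quad(integrand, 0.0, np.inf)
    return val

for s in (1.5, 2.5):
    print(f"s = {s}: integral = {bose_integral(s):.6f}, "
          f"Gamma(s)*zeta(s) = {gamma(s) * zeta(s):.6f}")      # Eq. (223)

print("zeta(3/2) =", zeta(1.5), " zeta(5/2) =", zeta(2.5),
      " ratio =", zeta(2.5) / zeta(1.5))                       # Eqs. (227)-(228)
```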
FIG. 10: Condensate fraction N_0(T)/N and the fraction of excited particles N_ex(T)/N for the ideal Bose-Einstein gas.

Thus at T = T_B the condensate disappears, N_0 = 0. From Eq. (230) it follows that

k_BT_B = \frac{\hbar^2}{2m}\left[\frac{(2\pi)^2\,n}{\Gamma(3/2)\zeta(3/2)}\right]^{2/3}, \qquad n = \frac{N}{V},    (231)

that is, T_B ∝ n^{2/3}. In typical situations T_B < 0.1 K, which is a very low temperature that can be reached by special methods such as laser cooling. Now, obviously, Eq. (229) can be rewritten as

\frac{N_{ex}}{N} = \left(\frac{T}{T_B}\right)^{3/2}, \qquad T \le T_B.    (232)

The temperature dependence of the Bose condensate is given by

\frac{N_0}{N} = 1 - \frac{N_{ex}}{N} = 1 - \left(\frac{T}{T_B}\right)^{3/2}, \qquad T \le T_B.    (233)

One can see that at T = 0 all bosons are in the Bose condensate, while at T = T_B the Bose condensate disappears. Indeed, in the whole temperature range T < T_B one has N_0 ≫ 1, and our calculations using μ = 0 in this temperature range are self-consistent.

Above T_B there is no Bose condensate, thus μ ≠ 0 and is defined by the equation

N = \int_0^{\infty}\frac{\rho(\varepsilon)\,d\varepsilon}{e^{\beta(\varepsilon-\mu)}-1}.    (234)

This is a nonlinear equation that has no general analytical solution. At high temperatures the Bose-Einstein (and also the Fermi-Dirac) distribution simplifies to the Boltzmann distribution, and μ can easily be found. Equation (217) with the help of Eq. (231) can be rewritten in the form

e^{-\beta\mu} = \frac{1}{\zeta(3/2)}\left(\frac{T}{T_B}\right)^{3/2} \gg 1,    (235)

so that the high-temperature limit implies T ≫ T_B.

Let us now consider the energy of the ideal Bose gas. Since the energy of the condensate is zero, in the thermodynamic limit one has, in the whole temperature range,

U = \int_0^{\infty}\frac{\varepsilon\,\rho(\varepsilon)\,d\varepsilon}{e^{\beta(\varepsilon-\mu)}-1}.    (236)
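A minimal numerical sketch of Eqs. (231) and (233) in SI units (the atomic mass and the density below are assumed values of the order met in dilute laser-cooled gases, not numbers from the text):

```python
import numpy as np
from scipy.constants import hbar, k as k_B, u as amu
from scipy.special import gamma, zeta

m = 87 * amu        # assumed: mass of a Rb-87 atom
n = 1e19            # assumed: number density in m^-3 (dilute trapped gas)

T_B = hbar**2 / (2 * m * k_B) * ((2 * np.pi)**2 * n / (gamma(1.5) * zeta(1.5)))**(2/3)
print(f"T_B = {T_B:.2e} K")                                   # Eq. (231)

for T in (0.25 * T_B, 0.5 * T_B, 0.75 * T_B):
    print(f"T/T_B = {T / T_B:.2f}: condensate fraction N0/N = {1 - (T / T_B)**1.5:.3f}")  # Eq. (233)
```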
FIG. 11: Chemical potential μ(T) of the ideal Bose-Einstein gas. Dashed line: high-temperature asymptote corresponding to the Boltzmann statistics.

FIG. 12: Heat capacity C(T) of the ideal Bose-Einstein gas.

At high temperatures, T ≫ T_B, the Boltzmann distribution applies, so that U is given by Eq. (71) and the heat capacity is a constant, C = (3/2)Nk_B. For T < T_B one has μ = 0, thus

U = \int_0^{\infty}\frac{\varepsilon\,\rho(\varepsilon)\,d\varepsilon}{e^{\beta\varepsilon}-1},    (237)

which in three dimensions becomes

U = \frac{V}{(2\pi)^2}\left(\frac{2m}{\hbar^2}\right)^{3/2}\int_0^{\infty}\frac{\varepsilon^{3/2}\,d\varepsilon}{e^{\beta\varepsilon}-1} = \frac{V}{(2\pi)^2}\left(\frac{2m}{\hbar^2}\right)^{3/2}\Gamma(5/2)\zeta(5/2)\,(k_BT)^{5/2}.    (238)
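Above T_B, Eq. (234) has to be inverted numerically for μ(T); this is what Fig. 11 displays. Dividing Eq. (234) by Eq. (230), the condition can be written as (T/T_B)^{3/2} g_{3/2}(e^{βμ}) = ζ(3/2), with g_{3/2}(z) = (1/Γ(3/2)) ∫₀^∞ √x dx/(z^{-1}e^x − 1); this reduced form is a standard rewriting, not taken verbatim from the text. A minimal sketch with scipy root finding:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq
from scipy.special import gamma, zeta

def g32(z):
    """g_{3/2}(z) = (1/Gamma(3/2)) * int_0^inf sqrt(x)/(exp(x)/z - 1) dx, via x = t**2."""
    integrand = lambda t: 2.0 * t**2 * z * np.exp(-t**2) / (1.0 - z * np.exp(-t**2))
    val, _ = quad(integrand, 0.0, np.inf)
    return val / gamma(1.5)

def mu_over_kT(T_over_TB):
    """Reduced chemical potential mu/(k_B*T) for T > T_B, from Eq. (234)."""
    target = zeta(1.5) / T_over_TB**1.5          # required value of g_{3/2}(e^{beta*mu})
    z = brentq(lambda z: g32(z) - target, 1e-12, 1.0 - 1e-6)   # fugacity in (0, 1)
    return np.log(z)

for t in (1.05, 1.5, 2.0, 4.0):
    print(f"T/T_B = {t:.2f}:  mu/(k_B T) = {mu_over_kT(t):8.4f}")
# mu -> 0 from below as T -> T_B and becomes large and negative in the Boltzmann
# regime, reproducing the behavior sketched in Fig. 11.
```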
With the help of Eq. (230), the result (238) can be expressed in the form

U = Nk_BT\left(\frac{T}{T_B}\right)^{3/2}\frac{\Gamma(5/2)\zeta(5/2)}{\Gamma(3/2)\zeta(3/2)} = \frac{3}{2}Nk_BT\left(\frac{T}{T_B}\right)^{3/2}\frac{\zeta(5/2)}{\zeta(3/2)}, \qquad T \le T_B.    (239)

The corresponding heat capacity is

C_V = \left(\frac{\partial U}{\partial T}\right)_V = \frac{15}{4}Nk_B\left(\frac{T}{T_B}\right)^{3/2}\frac{\zeta(5/2)}{\zeta(3/2)}, \qquad T \le T_B.    (240)

To calculate the pressure of the ideal Bose gas, one has to start with the entropy, which below T_B is given by

S = \int_0^{T}\frac{dT'}{T'}\,C_V(T') = \frac{2}{3}C_V = \frac{5}{2}Nk_B\left(\frac{T}{T_B}\right)^{3/2}\frac{\zeta(5/2)}{\zeta(3/2)}, \qquad T \le T_B.    (241)

Differentiating the free energy

F = U - TS = -Nk_BT\left(\frac{T}{T_B}\right)^{3/2}\frac{\zeta(5/2)}{\zeta(3/2)}, \qquad T \le T_B,    (242)

one obtains the pressure, i.e., the equation of state,

P = -\left(\frac{\partial F}{\partial V}\right)_T = -\frac{\partial F}{\partial T_B}\left(\frac{\partial T_B}{\partial V}\right)_T = -\frac{F}{V},    (243)

which yields

PV = Nk_BT\left(\frac{T}{T_B}\right)^{3/2}\frac{\zeta(5/2)}{\zeta(3/2)}, \qquad T \le T_B,    (244)

compared to PV = Nk_BT at high temperatures. One can see that the pressure of the ideal Bose gas with the condensate contains an additional factor (T/T_B)^{3/2} < 1. This is because the particles in the condensate are not thermally agitated and thus do not contribute to the pressure.

XVIII. FERMI-DIRAC GAS

Properties of the Fermi gas [plus sign in Eq. (215)] are different from those of the Bose gas because the exclusion principle prevents multiple occupancy of quantum states. As a result, no condensation into the ground state occurs at low temperatures, and for a macroscopic system the chemical potential can be found from the equation

N = \int_0^{\infty}d\varepsilon\,\rho(\varepsilon)f(\varepsilon) = \int_0^{\infty}\frac{\rho(\varepsilon)\,d\varepsilon}{e^{\beta(\varepsilon-\mu)}+1}    (245)

at all temperatures. This is a nonlinear equation for μ that in general can be solved only numerically. In the limit T → 0 fermions fill a certain number of low-lying energy levels to minimize the total energy while obeying the exclusion principle. As we will see, the chemical potential of fermions is positive at low temperatures, μ > 0. For T → 0 (i.e., β → ∞) one has e^{β(ε−μ)} → 0 if ε < μ and e^{β(ε−μ)} → ∞ if ε > μ. Thus the distribution function in Eq. (245) becomes

f(\varepsilon) = \begin{cases} 1, & \varepsilon < \mu \\ 0, & \varepsilon > \mu. \end{cases}    (246)

Then the zero-temperature value μ_0 of μ is defined by the equation

N = \int_0^{\mu_0}d\varepsilon\,\rho(\varepsilon).    (247)

The fermions of main interest are electrons, which have spin 1/2 and correspondingly a degeneracy of 2 because of the two states of the spin. In three dimensions, using Eq. (44) with an additional factor 2 for the degeneracy, one obtains

N = \frac{2V}{(2\pi)^2}\left(\frac{2m}{\hbar^2}\right)^{3/2}\int_0^{\mu_0}d\varepsilon\,\sqrt{\varepsilon} = \frac{2V}{(2\pi)^2}\left(\frac{2m}{\hbar^2}\right)^{3/2}\frac{2}{3}\mu_0^{3/2}.    (248)
FIG. 13: Chemical potential μ(T) of the ideal Fermi-Dirac gas. Dashed line: high-temperature asymptote corresponding to the Boltzmann statistics. Dash-dotted line: low-temperature asymptote.

FIG. 14: Fermi-Dirac distribution function at different temperatures.

From here one obtains

\mu_0 = \frac{\hbar^2}{2m}\left(3\pi^2 n\right)^{2/3} \equiv \varepsilon_F,    (249)

where we have introduced the Fermi energy ε_F, which coincides with μ_0. It is convenient also to introduce the Fermi temperature as

k_BT_F = \varepsilon_F.    (250)

One can see that T_F has the same structure as T_B defined by Eq. (231). In typical metals T_F ~ 10^5 K, so that at room temperature T ≪ T_F and the electron gas is degenerate. It is convenient to express the density of states in
FIG. 15: Heat capacity C(T) of the ideal Fermi-Dirac gas.

three dimensions, Eq. (44), in terms of ε_F:

\rho(\varepsilon) = \frac{3N}{2}\frac{\sqrt{\varepsilon}}{\varepsilon_F^{3/2}}.    (251)

Let us now calculate the internal energy U at T = 0:

U = \int_0^{\mu_0}d\varepsilon\,\rho(\varepsilon)\,\varepsilon = \frac{3N}{2\varepsilon_F^{3/2}}\int_0^{\varepsilon_F}d\varepsilon\,\varepsilon^{3/2} = \frac{3N}{2\varepsilon_F^{3/2}}\,\frac{2}{5}\varepsilon_F^{5/2} = \frac{3}{5}N\varepsilon_F.    (252)

One cannot calculate the heat capacity C_V from this formula, as it requires taking into account the small temperature-dependent corrections in U. This will be done later. One can calculate the pressure at low temperatures, since in this region the entropy S should be small and F = U − TS ≅ U. One obtains

P = -\left(\frac{\partial F}{\partial V}\right)_T \cong -\left(\frac{\partial U}{\partial V}\right)_T = -\frac{3}{5}N\frac{\partial\varepsilon_F}{\partial V} = \frac{2}{5}\frac{N}{V}\varepsilon_F = \frac{2}{5}n\varepsilon_F = \frac{\hbar^2\left(3\pi^2\right)^{2/3}}{5m}\,n^{5/3}.    (253)

To calculate the heat capacity at low temperatures, one has to find the temperature-dependent corrections to Eq. (252). We will need an integral of the general type

M_\eta = \int_0^{\infty}d\varepsilon\,\varepsilon^{\eta}f(\varepsilon) = \int_0^{\infty}\frac{\varepsilon^{\eta}\,d\varepsilon}{e^{(\varepsilon-\mu)/(k_BT)}+1}    (254)

that enters Eq. (245) for N and the similar equation for U. With the use of Eq. (251) one can write

N = \frac{3N}{2\varepsilon_F^{3/2}}\,M_{1/2},    (255)

U = \frac{3N}{2\varepsilon_F^{3/2}}\,M_{3/2}.    (256)

It can be shown that for k_BT ≪ μ the expansion of M_η up to quadratic terms has the form

M_\eta = \frac{\mu^{\eta+1}}{\eta+1}\left[1 + \frac{\pi^2}{6}\,\eta(\eta+1)\left(\frac{k_BT}{\mu}\right)^2\right].    (257)
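A minimal numerical sketch of the zero-temperature results, Eqs. (249), (250), (252) and (253), in SI units (the free-electron density below is an assumed value of the order found in simple metals, not a number from the text):

```python
import numpy as np
from scipy.constants import hbar, m_e, k as k_B, eV

n = 8.5e28     # assumed free-electron density in m^-3 (of the order found in copper)

eps_F = hbar**2 / (2 * m_e) * (3 * np.pi**2 * n)**(2/3)     # Eq. (249)
T_F = eps_F / k_B                                           # Eq. (250)
U_per_N = 0.6 * eps_F                                       # Eq. (252): U/N at T = 0
P = 0.4 * n * eps_F                                         # Eq. (253)

print(f"eps_F = {eps_F / eV:.2f} eV,  T_F = {T_F:.2e} K")
print(f"U/N(T=0) = {U_per_N / eV:.2f} eV,  degeneracy pressure P = {P:.2e} Pa")
# T_F comes out of order 10^5 K, so the electron gas is degenerate at room temperature.
```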
Now Eq. (255) takes the form

\varepsilon_F^{3/2} = \mu^{3/2}\left[1 + \frac{\pi^2}{8}\left(\frac{k_BT}{\mu}\right)^2\right]    (258)

that defines μ(T) up to terms of order T²:

\mu = \varepsilon_F\left[1 + \frac{\pi^2}{8}\left(\frac{k_BT}{\mu}\right)^2\right]^{-2/3} \cong \varepsilon_F\left[1 - \frac{\pi^2}{12}\left(\frac{k_BT}{\mu}\right)^2\right] \cong \varepsilon_F\left[1 - \frac{\pi^2}{12}\left(\frac{k_BT}{\varepsilon_F}\right)^2\right]    (259)

or, with the help of Eq. (250),

\mu = \varepsilon_F\left[1 - \frac{\pi^2}{12}\left(\frac{T}{T_F}\right)^2\right].    (260)

It is not surprising that the chemical potential decreases with temperature, because at high temperatures it takes large negative values, see Eq. (217). Equation (256) becomes

U = \frac{3N}{2\varepsilon_F^{3/2}}\,\frac{\mu^{5/2}}{5/2}\left[1 + \frac{5\pi^2}{8}\left(\frac{k_BT}{\mu}\right)^2\right] = \frac{3N\mu^{5/2}}{5\,\varepsilon_F^{3/2}}\left[1 + \frac{5\pi^2}{8}\left(\frac{k_BT}{\mu}\right)^2\right].    (261)

Using Eq. (260), one obtains

U = \frac{3}{5}N\varepsilon_F\left[1 - \frac{\pi^2}{12}\left(\frac{T}{T_F}\right)^2\right]^{5/2}\left[1 + \frac{5\pi^2}{8}\left(\frac{T}{T_F}\right)^2\right] \cong \frac{3}{5}N\varepsilon_F\left[1 - \frac{5\pi^2}{24}\left(\frac{T}{T_F}\right)^2\right]\left[1 + \frac{5\pi^2}{8}\left(\frac{T}{T_F}\right)^2\right]    (262)

that yields

U = \frac{3}{5}N\varepsilon_F\left[1 + \frac{5\pi^2}{12}\left(\frac{T}{T_F}\right)^2\right].    (263)

At T = 0 this formula reduces to Eq. (252). Now one obtains

C_V = \left(\frac{\partial U}{\partial T}\right)_V = \frac{\pi^2}{2}Nk_B\frac{T}{T_F},    (264)

which is small at T ≪ T_F.

Let us now derive Eq. (257). Integrating Eq. (254) by parts, one obtains

M_\eta = \frac{\varepsilon^{\eta+1}}{\eta+1}f(\varepsilon)\Big|_0^{\infty} - \int_0^{\infty}d\varepsilon\,\frac{\varepsilon^{\eta+1}}{\eta+1}\frac{\partial f(\varepsilon)}{\partial\varepsilon}.    (265)

The first term of this formula is zero. At low temperatures f(ε) is close to a step function, changing rapidly from 1 to 0 in the vicinity of ε = μ. Thus

\frac{\partial f(\varepsilon)}{\partial\varepsilon} = \frac{\partial}{\partial\varepsilon}\,\frac{1}{e^{\beta(\varepsilon-\mu)}+1} = -\frac{\beta e^{\beta(\varepsilon-\mu)}}{\left[e^{\beta(\varepsilon-\mu)}+1\right]^2} = -\frac{\beta}{4\cosh^2[\beta(\varepsilon-\mu)/2]}    (266)

has a sharp negative peak at ε = μ. To the contrary, ε^{η+1} is a slow function of ε that can be expanded in a Taylor series in the vicinity of ε = μ. Up to the second order one has

\varepsilon^{\eta+1} = \mu^{\eta+1} + \left.\frac{\partial\varepsilon^{\eta+1}}{\partial\varepsilon}\right|_{\varepsilon=\mu}(\varepsilon-\mu) + \frac{1}{2}\left.\frac{\partial^2\varepsilon^{\eta+1}}{\partial\varepsilon^2}\right|_{\varepsilon=\mu}(\varepsilon-\mu)^2 = \mu^{\eta+1} + (\eta+1)\mu^{\eta}(\varepsilon-\mu) + \frac{1}{2}\eta(\eta+1)\mu^{\eta-1}(\varepsilon-\mu)^2.    (267)
Introducing x ≡ β(ε − μ)/2 and formally extending the integration in Eq. (265) from −∞ to ∞, one obtains

M_\eta = \frac{\mu^{\eta+1}}{\eta+1}\int_{-\infty}^{\infty}dx\left[1 + \frac{2(\eta+1)}{\beta\mu}x + \frac{2\eta(\eta+1)}{(\beta\mu)^2}x^2\right]\frac{1}{2\cosh^2 x}.    (268)

The contribution of the term linear in x vanishes by symmetry. Using the integrals

\int_{-\infty}^{\infty}\frac{dx}{\cosh^2 x} = 2, \qquad \int_{-\infty}^{\infty}\frac{x^2\,dx}{\cosh^2 x} = \frac{\pi^2}{6},    (269)

one arrives at Eq. (257).
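As a closing numerical check (a minimal sketch in reduced units with k_B = 1 and μ = 1, both arbitrary choices), one can verify the auxiliary integrals (269) and compare the exact M_η of Eq. (254) with the Sommerfeld-type expansion (257) at low temperature:

```python
import numpy as np
from scipy.integrate import quad

def sech2(x):
    """Overflow-safe 1/cosh(x)**2."""
    e = np.exp(-2.0 * abs(x))
    return 4.0 * e / (1.0 + e)**2

def fermi(x):
    """Numerically stable 1/(exp(x) + 1)."""
    return np.exp(-np.logaddexp(0.0, x))

# Auxiliary integrals, Eq. (269):
i0, _ = quad(sech2, -np.inf, np.inf)
i2, _ = quad(lambda x: x**2 * sech2(x), -np.inf, np.inf)
print(f"int dx/cosh^2(x) = {i0:.6f} (exact: 2),  int x^2 dx/cosh^2(x) = {i2:.6f} "
      f"(exact: pi^2/6 = {np.pi**2 / 6:.6f})")

def M_exact(eta, mu, T):
    """M_eta of Eq. (254) with k_B = 1, by direct quadrature."""
    val, _ = quad(lambda e: e**eta * fermi((e - mu) / T), 0.0, np.inf)
    return val

def M_expansion(eta, mu, T):
    """Low-temperature expansion, Eq. (257)."""
    return mu**(eta + 1) / (eta + 1) * (1.0 + np.pi**2 / 6 * eta * (eta + 1) * (T / mu)**2)

mu = 1.0
for T in (0.05, 0.1):
    for eta in (0.5, 1.5):
        print(f"T = {T}, eta = {eta}: exact {M_exact(eta, mu, T):.6f}, "
              f"expansion {M_expansion(eta, mu, T):.6f}")
```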
