Notes on Groups SO(3) and SU(2)


Ivan G. Avramidi
New Mexico Institute of Mining and Technology
October 2011

1 Group SO(3)

1.1 Algebra so(3)

The generators X_i = −X_i^T, i = 1, 2, 3, of the algebra so(3) are the 3×3 real antisymmetric matrices defined by

    X_1 = ( 0 0 0 ; 0 0 −1 ; 0 1 0 ),   X_2 = ( 0 0 1 ; 0 0 0 ; −1 0 0 ),   X_3 = ( 0 −1 0 ; 1 0 0 ; 0 0 0 ),   (1.1)

or

    (X_i)^k{}_j = ε^k{}_{ij} = −ε_{ikj}.   (1.2)

Raising and lowering a vector index does not change anything; it is done just for convenience of notation. These matrices have a number of properties:

    X_i X_j = E_{ji} − I δ_{ij}   (1.3)

and

    ε_{ijk} X_k = E_{ji} − E_{ij},   (1.4)

where

    (E_{ji})^{kl} = δ^k_j δ^l_i.   (1.5)

Therefore, they satisfy the algebra

    [X_i, X_j] = ε^k{}_{ij} X_k.   (1.6)

The Casimir operator is defined by

    X² = X_i X^i.   (1.7)

It is equal to

    X² = −2I   (1.8)

and commutes with all generators,

    [X_i, X²] = 0.   (1.9)
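These relations are easy to check by machine. The following NumPy sketch (added here for illustration, not part of the original notes) builds the generators from eq. (1.2) and verifies eqs. (1.6) and (1.8):

    import numpy as np

    # Levi-Civita symbol eps[i,j,k]
    eps = np.zeros((3, 3, 3))
    eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
    eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1

    # generators (X_i)^k_j = eps^k_ij, eq. (1.2)
    X = [eps[:, i, :] for i in range(3)]

    # Casimir: X^2 = X_i X_i = -2 I, eq. (1.8)
    assert np.allclose(sum(Xi @ Xi for Xi in X), -2*np.eye(3))

    # algebra: [X_i, X_j] = eps^k_ij X_k, eq. (1.6)
    for i in range(3):
        for j in range(3):
            comm = X[i] @ X[j] - X[j] @ X[i]
            assert np.allclose(comm, sum(eps[k, i, j]*X[k] for k in range(3)))

    print("so(3) relations verified")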

A general element of the group SO(3) in canonical coordinates y^i has the form

    exp(X(y)) = I cos r + P (1 − cos r) + X(y) (sin r)/r   (1.10)
              = P + Π cos r + X(y) (sin r)/r,              (1.11)

where

    r = √(y_i y^i),   θ^i = y^i/r,   X(y) = X_i y^i,   (1.12)

and the matrices Π and P are defined by

    P_{ij} = θ_i θ_j,   (1.13)
    Π_{ij} = δ_{ij} − θ_i θ_j.   (1.14)

In particular, for r = π we have

    exp(X(y)) = P − Π.   (1.15)

Notice that

    tr P = 1,   tr Π = 2,   (1.16)

and, therefore,

    tr exp(X(y)) = 1 + 2 cos r.   (1.17)

Also,

    det [sinh X(y) / X(y)] = ((sin r)/r)².   (1.18)
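As a sanity check, the closed form (1.11) and the trace (1.17) can be compared against a direct matrix exponential; the following sketch (illustrative, with an arbitrary test point y) does so:

    import numpy as np
    from scipy.linalg import expm

    eps = np.zeros((3, 3, 3))
    eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
    eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1
    X = [eps[:, i, :] for i in range(3)]          # (X_i)^k_j = eps^k_ij

    y = np.array([0.4, -1.1, 0.7])                # arbitrary test point
    r = np.linalg.norm(y)
    theta = y / r
    Xy = sum(y[i]*X[i] for i in range(3))         # X(y) = X_i y^i
    P = np.outer(theta, theta)                    # P_ij = theta_i theta_j
    Pi = np.eye(3) - P                            # Pi_ij = delta_ij - theta_i theta_j

    lhs = expm(Xy)
    assert np.allclose(lhs, P + Pi*np.cos(r) + Xy*np.sin(r)/r)   # eq. (1.11)
    assert np.isclose(np.trace(lhs), 1 + 2*np.cos(r))            # eq. (1.17)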

1.2 Representations of SO(3)

Let X_i be the generators of SO(3) in a general irreducible representation satisfying the algebra

    [X_i, X_j] = ε^k{}_{ij} X_k.   (1.19)

They are determined by symmetric traceless tensors of type (j, j) (with j a positive integer),

    (X_i)^{l_1...l_j}{}_{k_1...k_j} = j ε^{(l_1}{}_{i(k_1} δ^{l_2}{}_{k_2} ⋯ δ^{l_j)}{}_{k_j)}.   (1.20)

Then

    X² = −j(j+1) I,   (1.21)

where

    I^{l_1...l_j}{}_{k_1...k_j} = δ^{(l_1}{}_{(k_1} ⋯ δ^{l_j)}{}_{k_j)}.   (1.22)

Another basis of generators is (J_+, J_−, J_3), where

    J_+ = iX_1 − X_2,   (1.23)
    J_− = iX_1 + X_2,   (1.24)
    J_3 = iX_3,   (1.25)

so that

    J_+† = J_−,   J_3† = J_3.   (1.26)

They satisfy the commutation relations

    [J_+, J_−] = 2J_3,   [J_3, J_+] = J_+,   [J_3, J_−] = −J_−.   (1.27)

The Casimir operator in this basis is

    −X² = J_+J_− − J_3 + J_3² = J_−J_+ + J_3 + J_3².   (1.28)

From the commutation relations we have

    J_3 J_+ = J_+ (J_3 + 1),   (1.29)
    J_3 J_− = J_− (J_3 − 1),   (1.30)
    J_− J_+ = −X² − J_3² − J_3,   (1.31)
    J_+ J_− = −X² − J_3² + J_3.   (1.32)

Since J_3 commutes with X², they can be diagonalized simultaneously. Since they are both Hermitian, they have real eigenvalues. Notice also that, since the X_i are anti-Hermitian,

    X² ≤ 0.   (1.33)

Let |λ, m⟩ be the basis in which both operators are diagonal, that is,

    X² |λ, m⟩ = −λ |λ, m⟩,   (1.34)
    J_3 |λ, m⟩ = m |λ, m⟩.   (1.35)

There are many ways to show that the eigenvalues m of the operator J_3 are integers. Now, since

    −X_1² − X_2² = −X² + X_3² = −X² − J_3² ≥ 0,   (1.36)

then the eigenvalues of J_3 are bounded by a maximal integer j,

    |m| ≤ j,   (1.37)

that is, m takes (2j + 1) values

    m = −j, −j+1, ..., −1, 0, 1, ..., j−1, j.   (1.38)

From the commutation relations above we have

    J_+ |λ, m⟩ = const |λ, m+1⟩,   (1.39)
    J_− |λ, m⟩ = const |λ, m−1⟩.   (1.40)

Let |λ, 0⟩ be the eigenvector corresponding to the eigenvalue m = 0, that is,

    J_3 |λ, 0⟩ = 0.   (1.41)

Then for positive m > 0

    |λ, m⟩ = a_{λm} J_+^m |λ, 0⟩,   (1.42)
    |λ, −m⟩ = a_{λ,−m} J_−^m |λ, 0⟩.   (1.43)

Then

    J_+ |λ, j⟩ = 0,   (1.44)
    J_− |λ, −j⟩ = 0.   (1.45)

Therefore,

    0 = J_− J_+ |λ, j⟩ = (−X² − J_3² − J_3) |λ, j⟩,   (1.46)
    0 = J_+ J_− |λ, −j⟩ = (−X² − J_3² + J_3) |λ, −j⟩.   (1.47)

Thus,

    (λ − j² − j) |λ, j⟩ = 0.   (1.48)

So, the eigenvalues λ of the Casimir operator −X² are labeled by a non-negative integer j ≥ 0,

    λ_j = j(j+1).   (1.49)
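In the defining representation these relations can be checked directly; the sketch below (illustrative, reusing the matrices of eq. (1.1)) verifies eqs. (1.26) and (1.27):

    import numpy as np

    eps = np.zeros((3, 3, 3))
    eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
    eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1
    X = [eps[:, i, :].astype(complex) for i in range(3)]

    Jp = 1j*X[0] - X[1]          # J_+ = i X_1 - X_2, eq. (1.23)
    Jm = 1j*X[0] + X[1]          # J_- = i X_1 + X_2, eq. (1.24)
    J3 = 1j*X[2]                 # J_3 = i X_3,       eq. (1.25)

    assert np.allclose(Jp.conj().T, Jm)             # eq. (1.26)
    assert np.allclose(Jp @ Jm - Jm @ Jp, 2*J3)     # eq. (1.27)
    assert np.allclose(J3 @ Jp - Jp @ J3, Jp)
    assert np.allclose(J3 @ Jm - Jm @ J3, -Jm)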

These eigenvalues have the multiplicity

    d_j = 2j + 1.   (1.50)

In particular, for j = 1 we get λ_1 = 2, as expected before. From now on we will denote the basis in which the operators X² and J_3 are diagonal by

    |λ_j, m⟩ = |j m⟩.   (1.51)

The matrix elements of the generators J_± can be computed as follows. First of all,

    ⟨j m′| J_3 |j m⟩ = δ_{m′m} m.   (1.52)

We have

    j(j+1) − m² − m = ⟨j m| J_− |j m+1⟩ ⟨j m+1| J_+ |j m⟩   (1.53)

and

    ⟨j m| J_− |j m+1⟩ = ⟨j m+1| J_+ |j m⟩*.   (1.54)

This gives

    ⟨j m′| J_− |j m⟩ = δ_{m′,m−1} √((j+m)(j−m+1)),   (1.55)
    ⟨j m′| J_+ |j m⟩ = δ_{m′,m+1} √((j−m)(j+m+1)).   (1.56)

The matrix elements of the generators X_i are

    ⟨j m′| X_1 |j m⟩ = (1/(2i)) [δ_{m′,m−1} √((j+m)(j−m+1)) + δ_{m′,m+1} √((j−m)(j+m+1))],   (1.57)
    ⟨j m′| X_2 |j m⟩ = (1/2) [δ_{m′,m−1} √((j+m)(j−m+1)) − δ_{m′,m+1} √((j−m)(j+m+1))],   (1.58)
    ⟨j m′| X_3 |j m⟩ = −im δ_{m′m}.   (1.59)
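The matrix elements (1.52), (1.55), (1.56) can be assembled into explicit (2j+1)×(2j+1) matrices; the following sketch (illustrative) does this and re-checks the commutator (1.27) and the Casimir eigenvalue (1.49):

    import numpy as np

    def spin_ops(j):
        """J_3, J_+, J_- in the basis |j m>, m = -j..j, from eqs. (1.52), (1.55), (1.56)."""
        m = np.arange(-j, j + 1)
        J3 = np.diag(m).astype(complex)
        Jp = np.zeros((len(m), len(m)), dtype=complex)
        for k in range(len(m) - 1):
            # <j m+1 | J_+ | j m> = sqrt((j - m)(j + m + 1))
            Jp[k + 1, k] = np.sqrt((j - m[k])*(j + m[k] + 1))
        return J3, Jp, Jp.conj().T

    j = 2
    J3, Jp, Jm = spin_ops(j)
    assert np.allclose(Jp @ Jm - Jm @ Jp, 2*J3)     # eq. (1.27)
    # -X^2 = J_+ J_- + J_3^2 - J_3 = j(j+1) I, eqs. (1.28), (1.49)
    assert np.allclose(Jp @ Jm + J3 @ J3 - J3, j*(j + 1)*np.eye(2*j + 1))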

Thus, irreducible representations of SO(3) are labeled by a non-negative integer j ≥ 0. The matrices of the representation j in canonical coordinates y^i will be denoted by

    D^j_{m′m}(y) = ⟨j m′| exp[X(y)] |j m⟩,   (1.60)

where X(y) = X_i y^i. The characters of the irreducible representations are

    χ_j(y) = Σ_{m=−j}^{j} D^j_{mm}(y) = Σ_{m=−j}^{j} e^{imr} = 1 + 2 Σ_{m=1}^{j} cos(mr).   (1.61)

2 Group SU(2)

2.1 Algebra su(2)

The Pauli matrices σ_i = σ_i†, i = 1, 2, 3, are defined by

    σ_1 = ( 0 1 ; 1 0 ),   σ_2 = ( 0 −i ; i 0 ),   σ_3 = ( 1 0 ; 0 −1 ).   (2.1)

They have the following properties:

    σ_i† = σ_i,   (2.2)
    σ_i² = I,   (2.3)
    det σ_i = −1,   (2.4)
    tr σ_i = 0,   (2.5)
    σ_1 σ_2 = iσ_3,   (2.6)
    σ_2 σ_3 = iσ_1,   (2.7)
    σ_3 σ_1 = iσ_2,   (2.8)
    σ_1 σ_2 σ_3 = iI,   (2.9)

which can be written in the form

    σ_i σ_j = δ_{ij} I + i ε_{ijk} σ_k.   (2.11)
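The multiplication table (2.11) is verified by the following short sketch (illustrative):

    import numpy as np

    s = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

    eps = np.zeros((3, 3, 3))
    eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
    eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1

    for i in range(3):
        for j in range(3):
            rhs = (i == j)*np.eye(2) + 1j*sum(eps[i, j, k]*s[k] for k in range(3))
            assert np.allclose(s[i] @ s[j], rhs)     # eq. (2.11)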

They satisfy the following commutation relations

    [σ_i, σ_j] = 2i ε_{ijk} σ_k.   (2.12)

The traces of products are

    tr σ_i σ_j = 2δ_{ij},   (2.13)
    tr σ_i σ_j σ_k = 2i ε_{ijk}.   (2.14)

Also, there holds

    σ_i σ^i = 3I.   (2.15)

The Pauli matrices satisfy the anti-commutation relations

    σ_i σ_j + σ_j σ_i = 2δ_{ij} I.   (2.16)

Therefore, the Pauli matrices form a representation of the Clifford algebra in 3 dimensions and are, in fact, Dirac matrices σ_i = (σ_i{}^A{}_B), where A, B = 1, 2, in 3 dimensions. The spinor metric E = (E_{AB}) is defined by

    E = ( 0 1 ; −1 0 ).   (2.17)

Notice that

    E = iσ_2,   (2.18)

and the inverse metric E^{−1} = (E^{AB}) is

    E^{−1} = −E.   (2.19)

Then

    σ_i^T = −E σ_i E^{−1}.   (2.20)

This allows one to define the matrices Eσ_i = (σ_{iAB}) with covariant indices:

    Eσ_1 = σ_3 = ( 1 0 ; 0 −1 ),   (2.21)
    Eσ_2 = iI = ( i 0 ; 0 i ),   (2.22)
    Eσ_3 = −σ_1 = ( 0 −1 ; −1 0 ).   (2.23)

Notice that the matrices Eσ_i are all symmetric,

    (Eσ_i)^T = Eσ_i.   (2.24)

The Pauli matrices satisfy the following completeness relation

    σ_i{}^A{}_B σ^i{}^C{}_D = 2δ^A_D δ^C_B − δ^A_B δ^C_D,   (2.25)

which implies

    σ_i{}^{(A}{}_{(B} σ^i{}^{C)}{}_{D)} = δ^{(A}{}_{(B} δ^{C)}{}_{D)}.   (2.26)

This gives

    σ_{iAB} σ^i{}_{CD} = 2E_{AD}E_{CB} − E_{AB}E_{CD},   (2.27)

or, in a more symmetric form,

    σ_{iAB} σ^i{}_{CD} = E_{AD}E_{CB} + E_{BD}E_{CA}.   (2.28)

A spinor is a two-dimensional complex vector ψ = (ψ^A), A = 1, 2, in the spinor space C². We use the metric E to lower the index to get the covariant components

    ψ_A = E_{AB} ψ^B,   (2.29)

which defines the cospinor ψ̃ = (ψ_A),

    ψ̃ = Eψ.   (2.30)

We also define the conjugated spinor ψ̄ = (ψ̄_A) by

    ψ̄_A = (ψ^A)*.   (2.31)

On the complex spinor space there are two invariant bilinear forms,

    ⟨ψ, ϕ⟩ = ψ̄ϕ = ψ̄_A ϕ^A   (2.32)

and

    ψ̃ϕ = ψ_A ϕ^A = E_{AB} ψ^B ϕ^A = ψ^2 ϕ^1 − ψ^1 ϕ^2.   (2.33)

Note that

    ⟨ψ, ψ⟩ = ψ̄ψ = ψ̄_A ψ^A = |ψ^1|² + |ψ^2|²   (2.34)

and

    ψ̃ψ = ψ_A ψ^A = 0.   (2.35)
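The following sketch (illustrative) checks the properties (2.19), (2.20) and (2.24) of the spinor metric:

    import numpy as np

    s = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

    E = np.array([[0, 1], [-1, 0]], dtype=complex)   # spinor metric, E = i sigma_2, eq. (2.18)
    Einv = -E                                        # eq. (2.19)

    for si in s:
        assert np.allclose(si.T, -E @ si @ Einv)     # eq. (2.20)
        assert np.allclose((E @ si).T, E @ si)       # eq. (2.24): E sigma_i is symmetric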

The matrices

    X_i = −(i/2) σ_i   (2.36)

are the generators of the Lie algebra su(2) with the commutation relations

    [X_i, X_j] = ε^k{}_{ij} X_k.   (2.37)

The Casimir operator is

    X² = X_i X^i.   (2.38)

It commutes with all generators,

    [X_i, X²] = 0.   (2.39)

There holds

    X² = −(3/4) I.   (2.40)

Notice that

    tr X_i X_j = −(1/2) δ_{ij}   (2.41)

and

    X² = −j(j+1) I   (2.42)

with j = 1/2, which determines the fundamental representation of SU(2). An arbitrary element of the group SU(2) in canonical coordinates y^i has the form

    exp(−(i/2) σ_j y^j) = I cos(r/2) − i σ_j θ^j sin(r/2).   (2.43)

This can be written in the form

    exp(X(y)) = I cos(r/2) + X(y) sin(r/2)/(r/2),   (2.44)

where X(y) = X_i y^i. In particular, for r = 2π,

    exp(X(y)) = −I.   (2.45)

Obviously,

    tr exp(X(y)) = 2 cos(r/2).   (2.46)
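The group element formula can again be compared against a direct matrix exponential; the sketch below (illustrative) verifies eqs. (2.43), (2.45) and (2.46):

    import numpy as np
    from scipy.linalg import expm

    s = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

    y = np.array([0.9, 0.2, -0.5])
    r = np.linalg.norm(y)
    theta = y / r
    U = expm(-0.5j*sum(y[i]*s[i] for i in range(3)))   # exp(X(y)), X_i = -(i/2) sigma_i

    rhs = np.cos(r/2)*np.eye(2) - 1j*np.sin(r/2)*sum(theta[i]*s[i] for i in range(3))
    assert np.allclose(U, rhs)                         # eq. (2.43)
    assert np.isclose(np.trace(U), 2*np.cos(r/2))      # eq. (2.46)

    # a full turn r = 2*pi gives -I, eq. (2.45)
    U2pi = expm(-0.5j*2*np.pi*sum(theta[i]*s[i] for i in range(3)))
    assert np.allclose(U2pi, -np.eye(2))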

2.2 Representations of SU(2)

We define symmetric contravariant spinors ψ^{A_1...A_{2j}} of rank 2j. Each index here can be lowered by the metric E_{AB}. The number of independent components of such a spinor is precisely

    2j + 1.   (2.47)

Let X_i be the generators of SU(2) in a general irreducible representation satisfying the algebra

    [X_i, X_j] = ε^k{}_{ij} X_k.   (2.48)

They are determined by symmetric spinors of type (2j, 2j):

    (X_k)^{A_1...A_{2j}}{}_{B_1...B_{2j}} = −ij σ_k{}^{(A_1}{}_{(B_1} δ^{A_2}{}_{B_2} ⋯ δ^{A_{2j})}{}_{B_{2j})}.   (2.49)

Then

    X² = −j(j+1) I,   (2.50)

where

    I^{A_1...A_{2j}}{}_{B_1...B_{2j}} = δ^{(A_1}{}_{(B_1} ⋯ δ^{A_{2j})}{}_{B_{2j})}.   (2.51)

Another basis of generators is

    J_+ = iX_1 − X_2,   (2.52)
    J_− = iX_1 + X_2,   (2.53)
    J_3 = iX_3.   (2.54)

They have all the properties of the generators of SO(3) discussed above, except that the eigenvalues m of the operator J_3 need not be integers. One can show that they can be either integers or half-integers, taking (2j + 1) values

    m = −j, −j+1, ..., j−1, j,   |m| ≤ j,   (2.55)

with j being a positive integer or half-integer. If j is an integer, then all eigenvalues m of J_3 are integers, including 0. If j is a half-integer, then all eigenvalues m are half-integers and 0 is excluded.

We choose a basis in which the operators X² and J_3 are diagonal. Let ψ = ψ^{A_1...A_{2j}} be a symmetric spinor of rank 2j. Let e_1 = (e_{1A}) and e_2 = (e_{2A}) be the basis cospinors defined by

    e_{1A} = δ^1_A,   e_{2A} = δ^2_A.   (2.56)

Let

    e_{j,m A_1...A_{j+m} B_1...B_{j−m}} = e_{1(A_1} ⋯ e_{1A_{j+m}} e_{2B_1} ⋯ e_{2B_{j−m})} = δ^1_{(A_1} ⋯ δ^1_{A_{j+m}} δ^2_{B_1} ⋯ δ^2_{B_{j−m})}.   (2.57)

Then this is an eigenspinor of the operator J_3 with the eigenvalue m,

    J_3 e_{j,m} = m e_{j,m}.   (2.58)

Then the spinors

    |j m⟩ = [(2j)! / ((j+m)!(j−m)!)]^{1/2} e_{j,m}   (2.59)

form an orthonormal basis in the space of symmetric spinors of rank 2j. The matrix elements of the generators X_i are given by exactly the same formulas as for SO(3),

    ⟨j m′| X_1 |j m⟩ = (1/(2i)) [δ_{m′,m−1} √((j+m)(j−m+1)) + δ_{m′,m+1} √((j−m)(j+m+1))],   (2.60)
    ⟨j m′| X_2 |j m⟩ = (1/2) [δ_{m′,m−1} √((j+m)(j−m+1)) − δ_{m′,m+1} √((j−m)(j+m+1))],   (2.61)
    ⟨j m′| X_3 |j m⟩ = −im δ_{m′m},   (2.62)

but now j and m are either both integers or both half-integers. Thus, irreducible representations of SU(2) are labeled by a non-negative half-integer j ≥ 0. The matrices of the representation j in canonical coordinates y^i are

    D^j_{m′m}(y) = ⟨j m′| exp[X(y)] |j m⟩,   (2.63)

where X(y) = X_i y^i. The representations of SU(2) with integer j are at the same time representations of SO(3). However, representations with half-integer j do not give representations of SO(3), since for r = 2π

    D^j_{m′m}(y) = −δ_{m′m}.   (2.64)
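Using the matrix elements (2.60)-(2.62) one can build D^j numerically and check that its trace depends only on r, reproducing the character sums (2.65)-(2.66) computed on the next page; the following sketch (illustrative) does this for several j:

    import numpy as np
    from scipy.linalg import expm

    def spin_ops(j):
        m = np.arange(-j, j + 1)
        J3 = np.diag(m).astype(complex)
        Jp = np.zeros((len(m), len(m)), dtype=complex)
        for k in range(len(m) - 1):
            Jp[k + 1, k] = np.sqrt((j - m[k])*(j + m[k] + 1))
        return J3, Jp, Jp.conj().T

    y = np.array([0.3, -0.7, 0.4])
    r = np.linalg.norm(y)
    for j in (0.5, 1.0, 1.5, 2.0):
        J3, Jp, Jm = spin_ops(j)
        X1 = -0.5j*(Jp + Jm)      # X_1 = (J_+ + J_-)/(2i), from eqs. (2.52)-(2.54)
        X2 = 0.5*(Jm - Jp)        # X_2 = (J_- - J_+)/2
        X3 = -1j*J3               # X_3 = -i J_3
        D = expm(X1*y[0] + X2*y[1] + X3*y[2])      # eq. (2.63)
        chi = sum(np.exp(1j*m*r) for m in np.arange(-j, j + 1))
        assert np.isclose(np.trace(D), chi)        # eqs. (2.65), (2.66)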

The characters of the irreducible representations are: for integer j,

    χ_j(y) = Σ_{m=−j}^{j} D^j_{mm}(y) = Σ_{m=−j}^{j} e^{imr} = 1 + 2 Σ_{m=1}^{j} cos(mr),   (2.65)

and for half-integer j,

    χ_j(y) = Σ_{k=0}^{j−1/2} [D^j_{1/2+k, 1/2+k}(y) + D^j_{−1/2−k, −1/2−k}(y)] = 2 Σ_{k=0}^{j−1/2} cos[(1/2 + k) r].   (2.66)

Now, let X_i and Y_j be the generators of two representations and let

    G_i = X_i ⊗ I_Y + I_X ⊗ Y_i.   (2.67)

Then the G_i generate the product representation X ⊗ Y and

    exp[G(y)] = exp[X(y)] ⊗ exp[Y(y)].   (2.68)

Therefore, the character of the product representation factorizes,

    χ_G(y) = χ_X(y) χ_Y(y).   (2.69)

2.3 Double Covering Homomorphism SU(2) → SO(3)

Recall that the matrices Eσ_i = (σ_{iAB}) are symmetric and the matrix E = (E_{AB}) is antisymmetric. Let ψ^{AB} be a symmetric spinor. Then one can form a vector

    A^i = σ^i{}_{BA} ψ^{AB} = tr (Eσ^i ψ).   (2.70)

Conversely, given a vector A^i we get a symmetric spinor of rank 2 by

    ψ^{AB} = (1/2) A^i σ_i{}^{AB},   (2.71)

or

    ψ = (1/2) A^i σ_i E^{−1}.   (2.72)

More generally, given a symmetric spinor of rank 2j we can associate to it a symmetric tensor of rank j by

    A^{i_1...i_j} = σ^{i_1}{}_{A_1B_1} ⋯ σ^{i_j}{}_{A_jB_j} ψ^{A_1...A_jB_1...B_j}.   (2.73)

One can show that this tensor is traceless. Indeed, by using eq. (2.28) and the fact that the spinor is symmetric, one sees that any trace of the tensor A vanishes. The inverse transformation is defined by

    ψ^{A_1...A_jB_1...B_j} = (1/2^j) A^{i_1...i_j} σ_{i_1}{}^{A_1B_1} ⋯ σ_{i_j}{}^{A_jB_j}.   (2.74)

The double covering homomorphism

    Λ : SU(2) → SO(3)   (2.75)

is defined as follows. Let U ∈ SU(2). Then the matrices U σ_i U^{−1} satisfy all the properties of the Pauli matrices and can therefore be expanded in terms of the σ_j:

    U σ_i U^{−1} = Λ^j{}_i(U) σ_j.   (2.76)

That is,

    Λ^i{}_j(U) = (1/2) tr (σ^i U σ_j U^{−1}).   (2.77)

The matrix Λ(U) depends on the matrix U and is, in fact, in SO(3). For infinitesimal transformations

    U = exp(X_k y^k) = I − (i/2) σ_k y^k + ⋯,   (2.78)

the matrix Λ is

    Λ^i{}_j = δ^i_j + ε^i{}_{kj} y^k + ⋯ = [exp(X_k y^k)]^i{}_j,   (2.79)

where on the right-hand side the X_k are the SO(3) generators (1.1). It is easy to see that

    Λ(I) = Λ(−I) = I.   (2.80)

So, in short,

    Λ(exp(X_i y^i)) = exp(Λ(X_i) y^i),   (2.81)

where

    [Λ(σ_k)]^i{}_j = 2i ε^i{}_{kj}.   (2.82)
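The homomorphism Λ can be computed explicitly from eq. (2.77); the sketch below (illustrative) confirms that Λ(U) is a proper rotation, that it agrees with eq. (2.81), and that U and −U give the same rotation:

    import numpy as np
    from scipy.linalg import expm

    s = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]
    eps = np.zeros((3, 3, 3))
    eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
    eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1

    def Lambda(U):
        # Lambda^i_j(U) = (1/2) tr(sigma^i U sigma_j U^{-1}), eq. (2.77)
        return np.array([[0.5*np.trace(s[i] @ U @ s[j] @ U.conj().T).real
                          for j in range(3)] for i in range(3)])

    y = np.array([0.6, -0.3, 1.1])
    U = expm(-0.5j*sum(y[i]*s[i] for i in range(3)))   # U = exp(X(y)) in SU(2)
    Lam = Lambda(U)

    assert np.allclose(Lam @ Lam.T, np.eye(3))         # Lambda is orthogonal
    assert np.isclose(np.linalg.det(Lam), 1.0)         # with det = +1, i.e. in SO(3)

    # eq. (2.81): Lambda(exp X(y)) is the exponential of the so(3) element X_k y^k
    Xso3 = sum(y[k]*eps[:, k, :] for k in range(3))
    assert np.allclose(Lam, expm(Xso3))

    assert np.allclose(Lam, Lambda(-U))                # ker(Lambda) = {I, -I}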

3 Invariant Vector Fields

Let y^i be the canonical coordinates on the group SU(2), ranging over (−π, π). The position of the indices on the coordinates will be irrelevant, that is, y_i = y^i. We introduce the radial coordinate r = |y| = √(y_i y^i), so that the geodesic distance from the origin is equal to r, and the angular coordinates (the coordinates on S²) θ^i = ŷ^i = y^i/r.

Let us consider the fundamental representation of the group SU(2) with generators T_a = −iσ_a, where the σ_a are the Pauli matrices, satisfying the algebra

    [T_a, T_b] = 2ε^c{}_{ab} T_c.   (3.83)

Let T(y) = T_i y^i. Then

    T(y)T(p) = −(y, p) I + T(y × p),   (3.84)

where (y, p) = y^i p_i and (y × p)^k = ε^k{}_{ij} y^i p^j, and

    [T(y)]² = −|y|² I.   (3.85)

Therefore,

    exp[rT(ŷ)] = I cos r + T(ŷ) sin r,   (3.86)

where ŷ is a unit vector. Next, we compute

    exp[T(F(r q̂, ρ p̂))] = [cos r cos ρ − (q̂, p̂) sin r sin ρ] I
        + cos r sin ρ T(p̂) + cos ρ sin r T(q̂) + sin r sin ρ T(q̂ × p̂).   (3.87)

This defines the group multiplication map F(q, p) in canonical coordinates. This map has a number of important properties. In particular, F(0, p) = F(p, 0) = p and F(−p, p) = 0. Another obvious but very useful property of the group map is that if q = F(ω, p), then ω = F(q, −p) and p = F(−ω, q). Also, there is the associativity property

    F(ω, F(p, q)) = F(F(ω, p), q)   (3.88)

and the inverse property

    −F(−p, −ω) = F(ω, p).   (3.89)

This map defines all the properties of the group. We introduce the matrices

    L^α{}_b(x) = ∂F^α(y, x)/∂y^b |_{y=0},   (3.90)
    R^α{}_b(x) = ∂F^α(x, y)/∂y^b |_{y=0}.   (3.91)
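The product formula (3.87) can be verified numerically in the fundamental representation; the following sketch (illustrative) does so for two arbitrary points:

    import numpy as np
    from scipy.linalg import expm

    s = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]
    T = lambda v: -1j*(v[0]*s[0] + v[1]*s[1] + v[2]*s[2])   # T_a = -i sigma_a

    q = np.array([0.5, -0.2, 0.8]); p = np.array([-0.3, 0.9, 0.1])
    r, rho = np.linalg.norm(q), np.linalg.norm(p)
    qh, ph = q/r, p/rho

    lhs = expm(T(q)) @ expm(T(p))
    rhs = (np.cos(r)*np.cos(rho) - (qh @ ph)*np.sin(r)*np.sin(rho))*np.eye(2) \
        + np.cos(r)*np.sin(rho)*T(ph) \
        + np.cos(rho)*np.sin(r)*T(qh) \
        + np.sin(r)*np.sin(rho)*T(np.cross(qh, ph))
    assert np.allclose(lhs, rhs)                            # eq. (3.87)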

Let X and Y be the inverses of the matrices L and R, that is,

    LX = XL = RY = YR = I.   (3.92)

Also let

    D = XR,   D^{−1} = YL,   (3.93)

so that

    L = R D^{−1},   X = D Y.   (3.94)

Let C_a be the matrices of the adjoint representation, defined by

    (C_a)^b{}_c = 2ε^b{}_{ac},   (3.95)

and C = C(x) be the matrix defined by

    C = C_a x^a.   (3.96)

Then one can show that

    X = (exp(C) − I)/C,   Y = (I − exp(−C))/C,   (3.97)
    L = C/(exp(C) − I),   R = C/(I − exp(−C)).   (3.98)

It is easy to see also that

    x^a = L^a{}_b x^b = R^a{}_b x^b = X^a{}_b x^b = Y^a{}_b x^b,   (3.99)

and

    X = Y^T,   L = R^T,   (3.100)

as well as

    XY = YX = (sinh(C/2)/(C/2))²,   (3.101)
    LR = RL = ((C/2)/sinh(C/2))²,   (3.102)
    D = XR = RX = exp(C),   D^T = YL = LY = exp(−C).   (3.103)

Note that

    C² = −4r² Π,   (3.104)

where r = |x| and

    Π^i{}_j = δ^i_j − θ^i θ_j   (3.105)

with θ^i = x^i/r. Therefore,

    C^{2n} = (−1)^n (2r)^{2n} Π,   C^{2n+1} = (−1)^n (2r)^{2n} C.   (3.106)

Therefore, the eigenvalues of the matrix C are (2ir, −2ir, 0), and for any analytic function

    f(C) = f(0)(I − Π) + (1/2) Π [f(2ir) + f(−2ir)] + (1/(4ir)) C [f(2ir) − f(−2ir)].

This enables one to compute

    tr f(C) = f(0) + f(2ir) + f(−2ir),   (3.107)
    det f(C) = f(0) f(2ir) f(−2ir).   (3.108)

Then one can show that

    Y = I − Π + (sin r cos r)/r Π − sin²r/(2r²) C,   (3.109)
    X = I − Π + (sin r cos r)/r Π + sin²r/(2r²) C,   (3.110)
    R = I − Π + r cot r Π + (1/2) C,   (3.111)
    L = I − Π + r cot r Π − (1/2) C,   (3.112)
    D = I − Π + Π cos(2r) + sin(2r)/(2r) C.   (3.113)

Then one can show that

    ∂F^α(x, y)/∂x^μ = L^α{}_γ(F(x, y)) X^γ{}_μ(x),   (3.114)
    ∂F^α(y, x)/∂x^μ = R^α{}_γ(F(y, x)) Y^γ{}_μ(x).   (3.115)

These matrices satisfy the important identities

    ∂_μ X^a{}_ν − ∂_ν X^a{}_μ = 2ε^a{}_{bc} X^b{}_μ X^c{}_ν,   (3.116)
    ∂_μ Y^a{}_ν − ∂_ν Y^a{}_μ = −2ε^a{}_{bc} Y^b{}_μ Y^c{}_ν,   (3.117)

and

    L^μ{}_a ∂_μ L^ν{}_b − L^μ{}_b ∂_μ L^ν{}_a = −2ε^c{}_{ab} L^ν{}_c,   (3.118)
    R^μ{}_a ∂_μ R^ν{}_b − R^μ{}_b ∂_μ R^ν{}_a = 2ε^c{}_{ab} R^ν{}_c.   (3.119)
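The explicit formulas (3.104)-(3.113) follow from the spectral decomposition above and are easy to confirm numerically; the sketch below (illustrative) checks eqs. (3.104), (3.113) and the relation D = XR of eq. (3.93):

    import numpy as np
    from scipy.linalg import expm

    eps = np.zeros((3, 3, 3))
    eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
    eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1

    x = np.array([0.7, 0.1, -0.4])
    r = np.linalg.norm(x)
    th = x / r
    Pi = np.eye(3) - np.outer(th, th)                  # eq. (3.105)
    C = sum(x[a]*2*eps[:, a, :] for a in range(3))     # (C_a)^b_c = 2 eps^b_ac, eq. (3.95)

    assert np.allclose(C @ C, -4*r**2*Pi)              # eq. (3.104)

    D = expm(C)
    assert np.allclose(D, np.eye(3) - Pi + Pi*np.cos(2*r) + np.sin(2*r)/(2*r)*C)   # eq. (3.113)

    R = np.eye(3) - Pi + r/np.tan(r)*Pi + 0.5*C        # eq. (3.111)
    L = np.eye(3) - Pi + r/np.tan(r)*Pi - 0.5*C        # eq. (3.112)
    X = np.linalg.inv(L)                               # X = L^{-1}, eq. (3.92)
    assert np.allclose(X @ R, D)                       # D = X R, eq. (3.93)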

Let us define the one-forms

    σ^a_− = X^a{}_μ(x) dx^μ,   σ^a_+ = Y^a{}_μ(x) dx^μ.

They satisfy the important identities

    dσ^a_− = ε^a{}_{bc} σ^b_− ∧ σ^c_−,   (3.120)
    dσ^a_+ = −ε^a{}_{bc} σ^b_+ ∧ σ^c_+.   (3.121)

Next, we define the vector fields

    K^−_a = L^μ{}_a(x) ∂_μ,   K^+_a = R^μ{}_a(x) ∂_μ.

These are the right-invariant and the left-invariant vector fields on the group. They are related by

    K^−_a = D_a{}^b K^+_b.   (3.122)

They form two representations of the group SU(2) and satisfy the algebra

    [K^+_a, K^+_b] = 2ε^c{}_{ab} K^+_c,   (3.123)
    [K^−_a, K^−_b] = −2ε^c{}_{ab} K^−_c,   (3.124)
    [K^+_a, K^−_b] = 0.   (3.125)

That is, the left-invariant vector fields commute with the right-invariant ones. Now, let T_a be the generators of some representation of the group SU(2) satisfying the same algebra. Then, of course,

    exp[T(ω)] exp[T(p)] = exp[T(F(ω, p))].   (3.126)

First, one can derive a useful commutation formula

    exp[T(ω)] T_b exp[−T(ω)] = D^a{}_b T_a,   (3.127)

where the matrix D = (D^a{}_b) is defined by D = exp C. More generally,

    exp[T(ω)] exp[T(p)] exp[−T(ω)] = exp[T(D(ω)p)].   (3.128)

Next, one can show that

    exp[−T(ω)] (∂/∂ω^i) exp[T(ω)] = Y^a{}_i T_a.   (3.129)

This means that

    (∂/∂ω^i) exp[T(ω)] = Y^a{}_i exp[T(ω)] T_a = X^a{}_i T_a exp[T(ω)].   (3.130)

The Killing vectors have the important properties

    K^+_a exp[T(ω)] = (exp[T(ω)]) T_a,   (3.131)
    K^−_a exp[T(ω)] = T_a (exp[T(ω)]).   (3.132)

This immediately leads to further important equations

    exp[K^+(p)] exp[T(ω)] = exp[T(ω)] exp[T(p)]   (3.133)

and

    exp[K^−(q)] exp[T(ω)] = exp[T(q)] exp[T(ω)],   (3.134)

where K^±(p) = p^a K^±_a. More generally,

    exp[K^+(p) + K^−(q)] exp[T(ω)] = exp[T(q)] exp[T(ω)] exp[T(p)]   (3.135)

and, therefore,

    exp[K^+(p) + K^−(q)] exp[T(ω)] |_{ω=0} = exp[T(q)] exp[T(p)] = exp[T(F(q, p))].   (3.136)

In particular, for the adjoint representation these formulas take the form

    D C_b = D^a{}_b C_a D,   (3.137)
    K^+_a D = D C_a,   (3.138)
    K^−_a D = C_a D,   (3.139)
    exp[K^+(p)] D(x) = D(x) D(p),   (3.140)
    exp[K^−(q)] D(x) = D(q) D(x),   (3.141)
    exp[K^+(p) + K^−(q)] D(x) = D(q) D(x) D(p).   (3.142)

Then for a scalar function f(ω) the action of the right-invariant and left-invariant vector fields is simply

    exp[K^+(p)] f(ω) = f(F(ω, p)),   exp[K^−(q)] f(ω) = f(F(q, ω)).   (3.143)

Since the action of these vector fields commutes, we have more generally

    exp[K^+(p) + K^−(q)] f(ω) = f(F(q, F(ω, p))).   (3.144)

Therefore,

    exp[K^+(p) + K^−(q)] f(ω) |_{ω=0} = f(F(q, p)).   (3.145)

The bi-invariant metric is defined by

    g_{αβ} = δ_{ab} Y^a{}_α Y^b{}_β = δ_{ab} X^a{}_α X^b{}_β,   (3.146)
    g^{αβ} = δ^{ab} L^α{}_a L^β{}_b = δ^{ab} R^α{}_a R^β{}_b,   (3.147)

that is,

    g = (sinh[C/2] / (C/2))² = I − Π + (sin²r/r²) Π.   (3.148)

The Riemannian volume element of the metric g is defined as usual,

    dvol = g^{1/2}(x) dx¹ dx² dx³,   (3.149)

where

    g^{1/2} = (det g_{μν})^{1/2} = sin²r / r².   (3.150)

Thus the bi-invariant volume element of the group is

    dvol(x) = (sin²r / r²) dx = sin²r dr dθ,   (3.151)

where dx = dx¹dx²dx³ and dθ is the volume element on S². The invariance of the volume element means that for any fixed p

    dvol(x) = dvol(F(x, p)) = dvol(F(p, x)).   (3.152)

Notice also that |F(−p, q)| is nothing but the geodesic distance between the points p and q.
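The determinant formula (3.150) follows from eq. (3.148); the following sketch (illustrative) verifies it from the explicit matrix Y of eq. (3.109):

    import numpy as np

    eps = np.zeros((3, 3, 3))
    eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
    eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1

    x = np.array([0.9, -0.6, 0.2])
    r = np.linalg.norm(x)
    th = x / r
    Pi = np.eye(3) - np.outer(th, th)
    C = sum(x[a]*2*eps[:, a, :] for a in range(3))

    # Y from eq. (3.109); the metric g = Y^T Y, eq. (3.146), with Y^T = X, eq. (3.100)
    Y = np.eye(3) - Pi + np.sin(r)*np.cos(r)/r*Pi - np.sin(r)**2/(2*r**2)*C
    g = Y.T @ Y
    assert np.allclose(g, np.eye(3) - Pi + np.sin(r)**2/r**2*Pi)      # eq. (3.148)
    assert np.isclose(np.sqrt(np.linalg.det(g)), np.sin(r)**2/r**2)   # eq. (3.150)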

We will always use the orthonormal basis of the right-invariant vector fields K^+_a and one-forms σ^a_+ and denote the covariant derivative with respect to K^+_a simply by ∇_a. The Levi-Civita connection of the bi-invariant metric in the right-invariant basis is defined by

    ∇_a K^+_b = ε^c{}_{ba} K^+_c,   (3.153)

so that the coefficients of the affine connection are

    ω^a{}_{bc} = σ^a_+(∇_c K^+_b) = ε^a{}_{bc},   (3.154)

so that for an arbitrary vector

    ∇_a V = [(K^+_a V^b) + ε^b{}_{ca} V^c] K^+_b.   (3.155)

Then

    ∇_a K^−_d = M^b{}_{da} K^+_b,   (3.156)

where

    M^b{}_{da} = K^+_a D_d{}^b + ε^b{}_{ca} D_d{}^c = ε^b{}_{ca} D_d{}^c + ε^b{}_{ca} D_d{}^c = 2ε^b{}_{ca} D_d{}^c,   (3.157)

so that

    ∇_a K^−_d = 2ε^b{}_{ca} D_d{}^c K^+_b.   (3.158)

Now, the curvature tensor is

    R^a{}_{bcd} = ε^f{}_{cd} ε^a{}_{fb}.   (3.159)

The Ricci curvature tensor and the scalar curvature are

    R_{ab} = 2δ_{ab},   R = 6.   (3.160)

The scalar Laplacian is

    Δ_0 = δ^{ab} K^−_a K^−_b = δ^{ab} K^+_a K^+_b.   (3.161)

One can show that

    Δ_0 (ω / sin ω) = ω / sin ω.   (3.162)

Further,

    Δ_0 exp[T(ω)] = (exp[T(ω)]) T² = T² exp[T(ω)],   (3.163)

where T² = T^a T_a. This means that

    exp(tΔ_0) exp[T(ω)] = exp(tT²) exp[T(ω)].   (3.164)

The Killing vector fields K^±_a are divergence free, which means that they are anti-self-adjoint with respect to the invariant measure dvol(ω). Thus the Laplacian Δ_0 is self-adjoint with respect to the same measure as well.

Let ∇ be the total connection on a vector bundle V realizing the representation G. Let us define the one-form

    A = σ^a_+ G_a = G_a Y^a{}_μ dx^μ   (3.165)

and F = dA + A ∧ A. Then, by using the above properties of the right-invariant one-forms, we compute

    F = (1/2) F_{ab} σ^a_+ ∧ σ^b_+ = (1/2) F_{ab} Y^a{}_μ Y^b{}_ν dx^μ ∧ dx^ν,   (3.166)

where

    F_{ab} = −ε^c{}_{ab} G_c.   (3.167)

Thus A is a Yang–Mills connection on the vector bundle V with covariantly constant curvature. The covariant derivative of a section of the bundle V is then (recall that ∇_a = ∇_{K^+_a})

    ∇_a φ = (K^+_a + G_a) φ,   (3.168)

and the covariant derivative of a section of the endomorphism bundle End(V) along the right-invariant basis is then

    ∇_a Q = K^+_a Q + [G_a, Q].   (3.169)

Therefore, we find

    ∇_a G_b = ε^c{}_{ba} G_c + [G_a, G_b] = 0.   (3.170)

Then the derivatives along the left-invariant vector fields are

    ∇_{K^−_a} φ = (K^−_a + B_a) φ,   (3.171)
    ∇_{K^−_a} Q = K^−_a Q + [B_a, Q],   (3.172)

where

    B_a = D_a{}^b G_b.   (3.173)

The Laplacian takes the form

    Δ = ∇^a ∇_a = (K^+_a + G_a)(K^+_a + G_a) = Δ_0 + 2G^a K^+_a + G²,   (3.174)

where G² = G^i G_i. We want to rewrite the Laplacian in terms of Casimir operators of some representations of the group SU(2). The covariant derivatives ∇_a do not form a representation of the algebra su(2). The operators that do are the covariant Lie derivatives. The covariant Lie derivative along a Killing vector ξ of sections of this vector bundle is defined by

    L_ξ = ∇_ξ − (1/2) σ^a_+(∇_b ξ) ε^{bc}{}_a G_c.   (3.175)

Denoting the covariant Lie derivatives along the Killing vectors K^±_a by 𝒦^±_a, this gives for the right-invariant and the left-invariant bases

    𝒦^+_a = L_{K^+_a} = ∇_a + G_a = K^+_a + 2G_a,   (3.176)
    𝒦^−_a = L_{K^−_a} = ∇_{K^−_a} − B_a = K^−_a.   (3.177)

It is easy to see that these operators form the same algebra su(2) ⊕ su(2),

    [𝒦^+_a, 𝒦^+_b] = 2ε^c{}_{ab} 𝒦^+_c,   (3.178)
    [𝒦^−_a, 𝒦^−_b] = −2ε^c{}_{ab} 𝒦^−_c,   (3.179)
    [𝒦^+_a, 𝒦^−_b] = 0.   (3.180)

Now, the Laplacian is given by sums of the Casimir operators,

    Δ = 𝒦² − G²,   (3.181)

where

    𝒦² = (1/2)(𝒦₊² + 𝒦₋²)   (3.182)

and 𝒦±² = 𝒦^±_a 𝒦^±_a.

4 Heat Kernel on S³

Our goal is to evaluate the heat kernel diagonal. Since it is constant we can evaluate it at any point, say, at the origin. We use the geodesic coordinates y^i defined above.

That is why we will evaluate the heat kernel with one point fixed at the origin. We will denote it simply by U(t; y). We obviously have

    exp(tΔ) = exp(−tG²) exp(t𝒦²).   (4.1)

Therefore, the heat kernel is equal to

    U(t, y) = exp(−tG²) Ψ(t, y),   (4.2)

where Ψ(t, y) = exp(t𝒦²) δ(y) is the heat kernel of the operator 𝒦².

4.1 Scalar Heat Kernel

Let

    Φ(t, ω) = (4πt)^{−3/2} e^t Σ_{n=−∞}^{∞} [(ω + 2πn)/sin ω] exp(−(ω + 2πn)²/(4t)).   (4.3)

One can show by direct calculation that this function satisfies the equation

    ∂_t Φ = Δ_0 Φ   (4.4)

with the initial condition

    Φ(0, ω) = δ_{S³}(ω).   (4.5)

Therefore, this is the scalar heat kernel on the group SU(2) and, therefore, on S³. Now, let us compute the integral

    Ψ(t) = ∫_{SU(2)} dvol(ω) Φ(t, ω) exp[T(ω)].   (4.6)

Obviously, Ψ(0) = 1. Next, we have

    ∂_t Ψ = ∫_{SU(2)} dvol(ω) exp[T(ω)] Δ_0 Φ(t, ω).   (4.7)

Now, by integrating by parts we get

    ∂_t Ψ = ∫_{SU(2)} dvol(ω) Φ(t, ω) Δ_0 exp[T(ω)],   (4.8)

which gives

    ∂_t Ψ = Ψ T².   (4.9)

Thus, we obtain a very important equation

    exp(tT²) = ∫_{SU(2)} dvol(ω) Φ(t, ω) exp[T(ω)].   (4.10)

This is true for any representation of SU(2). We derive two corollaries. First, we get

    exp(tT²) = ∫_{SU(2)} dvol(ω) Φ(t, ω) exp[T(p)] exp[T(ω)] exp[−T(p)],   (4.11)

which means that for any p

    Φ(t, ω) = Φ(t, F(−p, F(ω, p))),   (4.12)

or

    Φ(t, F(p, q)) = Φ(t, F(q, p)).   (4.13)

Also, we see that

    exp[(t + s)T²] = ∫_{S³} ∫_{S³} dvol(q) dvol(p) Φ(t, q) Φ(s, p) exp[T(q)] exp[T(p)].   (4.14)

By changing the variables z = F(q, p) and p → −p we obtain

    exp[(t + s)T²] = ∫_{SU(2)} dvol(z) ∫_{SU(2)} dvol(p) Φ(t, F(p, z)) Φ(s, p) exp[T(z)],   (4.15)

which means that

    ∫_{SU(2)} dvol(p) Φ(t, F(p, z)) Φ(s, p) = Φ(t + s, z).   (4.16)

In particular, for z = 0 and s = t this becomes

    ∫_{SU(2)} dvol(p) Φ(t, p) Φ(t, p) = Φ(2t, 0).   (4.17)
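The radial identity (3.162) and the heat equation (4.4) can be checked symbolically. The following SymPy sketch (illustrative; it uses the radial form Δ_0 f = f'' + 2 cot(ω) f' of the Laplacian on radial functions, which corresponds to the volume element (3.151)) verifies them for the n = 0 term of Φ:

    import sympy as sp

    w, t = sp.symbols('omega t', positive=True)

    # radial part of the scalar Laplacian on S^3: f'' + 2 cot(omega) f'
    lap = lambda f: sp.diff(f, w, 2) + 2*sp.cos(w)/sp.sin(w)*sp.diff(f, w)

    # eq. (3.162): Delta_0 (omega/sin omega) = omega/sin omega
    f = w/sp.sin(w)
    assert sp.simplify(lap(f) - f) == 0

    # the n = 0 term of Phi, eq. (4.3), satisfies the heat equation (4.4)
    Phi0 = (4*sp.pi*t)**sp.Rational(-3, 2)*sp.exp(t)*w/sp.sin(w)*sp.exp(-w**2/(4*t))
    assert sp.simplify(sp.diff(Phi0, t) - lap(Phi0)) == 0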

Chapter 9 Unitary Groups and SU(N)

Chapter 9 Unitary Groups and SU(N) Chapter 9 Unitary Groups and SU(N) The irreducible representations of SO(3) are appropriate for describing the degeneracies of states of quantum mechanical systems which have rotational symmetry in three

More information

1 Introduction to Matrices

1 Introduction to Matrices 1 Introduction to Matrices In this section, important definitions and results from matrix algebra that are useful in regression analysis are introduced. While all statements below regarding the columns

More information

Tensors on a vector space

Tensors on a vector space APPENDIX B Tensors on a vector space In this Appendix, we gather mathematical definitions and results pertaining to tensors. The purpose is mostly to introduce the modern, geometrical view on tensors,

More information

INVARIANT METRICS WITH NONNEGATIVE CURVATURE ON COMPACT LIE GROUPS

INVARIANT METRICS WITH NONNEGATIVE CURVATURE ON COMPACT LIE GROUPS INVARIANT METRICS WITH NONNEGATIVE CURVATURE ON COMPACT LIE GROUPS NATHAN BROWN, RACHEL FINCK, MATTHEW SPENCER, KRISTOPHER TAPP, AND ZHONGTAO WU Abstract. We classify the left-invariant metrics with nonnegative

More information

The Characteristic Polynomial

The Characteristic Polynomial Physics 116A Winter 2011 The Characteristic Polynomial 1 Coefficients of the characteristic polynomial Consider the eigenvalue problem for an n n matrix A, A v = λ v, v 0 (1) The solution to this problem

More information

Math 312 Homework 1 Solutions

Math 312 Homework 1 Solutions Math 31 Homework 1 Solutions Last modified: July 15, 01 This homework is due on Thursday, July 1th, 01 at 1:10pm Please turn it in during class, or in my mailbox in the main math office (next to 4W1) Please

More information

Mathematics Course 111: Algebra I Part IV: Vector Spaces

Mathematics Course 111: Algebra I Part IV: Vector Spaces Mathematics Course 111: Algebra I Part IV: Vector Spaces D. R. Wilkins Academic Year 1996-7 9 Vector Spaces A vector space over some field K is an algebraic structure consisting of a set V on which are

More information

Problem Set 5 Due: In class Thursday, Oct. 18 Late papers will be accepted until 1:00 PM Friday.

Problem Set 5 Due: In class Thursday, Oct. 18 Late papers will be accepted until 1:00 PM Friday. Math 312, Fall 2012 Jerry L. Kazdan Problem Set 5 Due: In class Thursday, Oct. 18 Late papers will be accepted until 1:00 PM Friday. In addition to the problems below, you should also know how to solve

More information

Invariant Metrics with Nonnegative Curvature on Compact Lie Groups

Invariant Metrics with Nonnegative Curvature on Compact Lie Groups Canad. Math. Bull. Vol. 50 (1), 2007 pp. 24 34 Invariant Metrics with Nonnegative Curvature on Compact Lie Groups Nathan Brown, Rachel Finck, Matthew Spencer, Kristopher Tapp and Zhongtao Wu Abstract.

More information

arxiv:math/0010057v1 [math.dg] 5 Oct 2000

arxiv:math/0010057v1 [math.dg] 5 Oct 2000 arxiv:math/0010057v1 [math.dg] 5 Oct 2000 HEAT KERNEL ASYMPTOTICS FOR LAPLACE TYPE OPERATORS AND MATRIX KDV HIERARCHY IOSIF POLTEROVICH Preliminary version Abstract. We study the heat kernel asymptotics

More information

Structure of the Root Spaces for Simple Lie Algebras

Structure of the Root Spaces for Simple Lie Algebras Structure of the Root Spaces for Simple Lie Algebras I. Introduction A Cartan subalgebra, H, of a Lie algebra, G, is a subalgebra, H G, such that a. H is nilpotent, i.e., there is some n such that (H)

More information

The three-dimensional rotations are defined as the linear transformations of the vector x = (x 1, x 2, x 3 ) x i = R ij x j, (2.1) x 2 = x 2. (2.

The three-dimensional rotations are defined as the linear transformations of the vector x = (x 1, x 2, x 3 ) x i = R ij x j, (2.1) x 2 = x 2. (2. 2 The rotation group In this Chapter we give a short account of the main properties of the threedimensional rotation group SO(3) and of its universal covering group SU(2). The group SO(3) is an important

More information

LINEAR ALGEBRA W W L CHEN

LINEAR ALGEBRA W W L CHEN LINEAR ALGEBRA W W L CHEN c W W L Chen, 1997, 2008 This chapter is available free to all individuals, on understanding that it is not to be used for financial gain, and may be downloaded and/or photocopied,

More information

Section 6.1 - Inner Products and Norms

Section 6.1 - Inner Products and Norms Section 6.1 - Inner Products and Norms Definition. Let V be a vector space over F {R, C}. An inner product on V is a function that assigns, to every ordered pair of vectors x and y in V, a scalar in F,

More information

15.062 Data Mining: Algorithms and Applications Matrix Math Review

15.062 Data Mining: Algorithms and Applications Matrix Math Review .6 Data Mining: Algorithms and Applications Matrix Math Review The purpose of this document is to give a brief review of selected linear algebra concepts that will be useful for the course and to develop

More information

x1 x 2 x 3 y 1 y 2 y 3 x 1 y 2 x 2 y 1 0.

x1 x 2 x 3 y 1 y 2 y 3 x 1 y 2 x 2 y 1 0. Cross product 1 Chapter 7 Cross product We are getting ready to study integration in several variables. Until now we have been doing only differential calculus. One outcome of this study will be our ability

More information

December 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B. KITCHENS

December 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B. KITCHENS December 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B KITCHENS The equation 1 Lines in two-dimensional space (1) 2x y = 3 describes a line in two-dimensional space The coefficients of x and y in the equation

More information

CONTROLLABILITY. Chapter 2. 2.1 Reachable Set and Controllability. Suppose we have a linear system described by the state equation

CONTROLLABILITY. Chapter 2. 2.1 Reachable Set and Controllability. Suppose we have a linear system described by the state equation Chapter 2 CONTROLLABILITY 2 Reachable Set and Controllability Suppose we have a linear system described by the state equation ẋ Ax + Bu (2) x() x Consider the following problem For a given vector x in

More information

The Matrix Elements of a 3 3 Orthogonal Matrix Revisited

The Matrix Elements of a 3 3 Orthogonal Matrix Revisited Physics 116A Winter 2011 The Matrix Elements of a 3 3 Orthogonal Matrix Revisited 1. Introduction In a class handout entitled, Three-Dimensional Proper and Improper Rotation Matrices, I provided a derivation

More information

13 MATH FACTS 101. 2 a = 1. 7. The elements of a vector have a graphical interpretation, which is particularly easy to see in two or three dimensions.

13 MATH FACTS 101. 2 a = 1. 7. The elements of a vector have a graphical interpretation, which is particularly easy to see in two or three dimensions. 3 MATH FACTS 0 3 MATH FACTS 3. Vectors 3.. Definition We use the overhead arrow to denote a column vector, i.e., a linear segment with a direction. For example, in three-space, we write a vector in terms

More information

Introduction to Matrix Algebra I

Introduction to Matrix Algebra I Appendix A Introduction to Matrix Algebra I Today we will begin the course with a discussion of matrix algebra. Why are we studying this? We will use matrix algebra to derive the linear regression model

More information

1 Sets and Set Notation.

1 Sets and Set Notation. LINEAR ALGEBRA MATH 27.6 SPRING 23 (COHEN) LECTURE NOTES Sets and Set Notation. Definition (Naive Definition of a Set). A set is any collection of objects, called the elements of that set. We will most

More information

DATA ANALYSIS II. Matrix Algorithms

DATA ANALYSIS II. Matrix Algorithms DATA ANALYSIS II Matrix Algorithms Similarity Matrix Given a dataset D = {x i }, i=1,..,n consisting of n points in R d, let A denote the n n symmetric similarity matrix between the points, given as where

More information

THREE DIMENSIONAL GEOMETRY

THREE DIMENSIONAL GEOMETRY Chapter 8 THREE DIMENSIONAL GEOMETRY 8.1 Introduction In this chapter we present a vector algebra approach to three dimensional geometry. The aim is to present standard properties of lines and planes,

More information

4. MATRICES Matrices

4. MATRICES Matrices 4. MATRICES 170 4. Matrices 4.1. Definitions. Definition 4.1.1. A matrix is a rectangular array of numbers. A matrix with m rows and n columns is said to have dimension m n and may be represented as follows:

More information

equations Karl Lundengård December 3, 2012 MAA704: Matrix functions and matrix equations Matrix functions Matrix equations Matrix equations, cont d

equations Karl Lundengård December 3, 2012 MAA704: Matrix functions and matrix equations Matrix functions Matrix equations Matrix equations, cont d and and Karl Lundengård December 3, 2012 Solving General, Contents of todays lecture and (Kroenecker product) Solving General, Some useful from calculus and : f (x) = x n, x C, n Z + : f (x) = n x, x R,

More information

A Primer on Index Notation

A Primer on Index Notation A Primer on John Crimaldi August 28, 2006 1. Index versus Index notation (a.k.a. Cartesian notation) is a powerful tool for manipulating multidimensional equations. However, there are times when the more

More information

Eigenvalues and Eigenvectors

Eigenvalues and Eigenvectors Chapter 6 Eigenvalues and Eigenvectors 6. Introduction to Eigenvalues Linear equations Ax D b come from steady state problems. Eigenvalues have their greatest importance in dynamic problems. The solution

More information

9 MATRICES AND TRANSFORMATIONS

9 MATRICES AND TRANSFORMATIONS 9 MATRICES AND TRANSFORMATIONS Chapter 9 Matrices and Transformations Objectives After studying this chapter you should be able to handle matrix (and vector) algebra with confidence, and understand the

More information

ISOMETRIES OF R n KEITH CONRAD

ISOMETRIES OF R n KEITH CONRAD ISOMETRIES OF R n KEITH CONRAD 1. Introduction An isometry of R n is a function h: R n R n that preserves the distance between vectors: h(v) h(w) = v w for all v and w in R n, where (x 1,..., x n ) = x

More information

Applied Linear Algebra I Review page 1

Applied Linear Algebra I Review page 1 Applied Linear Algebra Review 1 I. Determinants A. Definition of a determinant 1. Using sum a. Permutations i. Sign of a permutation ii. Cycle 2. Uniqueness of the determinant function in terms of properties

More information

Lecture 2 Matrix Operations

Lecture 2 Matrix Operations Lecture 2 Matrix Operations transpose, sum & difference, scalar multiplication matrix multiplication, matrix-vector product matrix inverse 2 1 Matrix transpose transpose of m n matrix A, denoted A T or

More information

MATH PROBLEMS, WITH SOLUTIONS

MATH PROBLEMS, WITH SOLUTIONS MATH PROBLEMS, WITH SOLUTIONS OVIDIU MUNTEANU These are free online notes that I wrote to assist students that wish to test their math skills with some problems that go beyond the usual curriculum. These

More information

The Greatest Common Factor; Factoring by Grouping

The Greatest Common Factor; Factoring by Grouping 296 CHAPTER 5 Factoring and Applications 5.1 The Greatest Common Factor; Factoring by Grouping OBJECTIVES 1 Find the greatest common factor of a list of terms. 2 Factor out the greatest common factor.

More information

The cover SU(2) SO(3) and related topics

The cover SU(2) SO(3) and related topics The cover SU(2) SO(3) and related topics Iordan Ganev December 2011 Abstract The subgroup U of unit quaternions is isomorphic to SU(2) and is a double cover of SO(3). This allows a simple computation of

More information

Notes on Symmetric Matrices

Notes on Symmetric Matrices CPSC 536N: Randomized Algorithms 2011-12 Term 2 Notes on Symmetric Matrices Prof. Nick Harvey University of British Columbia 1 Symmetric Matrices We review some basic results concerning symmetric matrices.

More information

University of Lille I PC first year list of exercises n 7. Review

University of Lille I PC first year list of exercises n 7. Review University of Lille I PC first year list of exercises n 7 Review Exercise Solve the following systems in 4 different ways (by substitution, by the Gauss method, by inverting the matrix of coefficients

More information

DEFINITION 5.1.1 A complex number is a matrix of the form. x y. , y x

DEFINITION 5.1.1 A complex number is a matrix of the form. x y. , y x Chapter 5 COMPLEX NUMBERS 5.1 Constructing the complex numbers One way of introducing the field C of complex numbers is via the arithmetic of matrices. DEFINITION 5.1.1 A complex number is a matrix of

More information

The basic unit in matrix algebra is a matrix, generally expressed as: a 11 a 12. a 13 A = a 21 a 22 a 23

The basic unit in matrix algebra is a matrix, generally expressed as: a 11 a 12. a 13 A = a 21 a 22 a 23 (copyright by Scott M Lynch, February 2003) Brief Matrix Algebra Review (Soc 504) Matrix algebra is a form of mathematics that allows compact notation for, and mathematical manipulation of, high-dimensional

More information

MAT188H1S Lec0101 Burbulla

MAT188H1S Lec0101 Burbulla Winter 206 Linear Transformations A linear transformation T : R m R n is a function that takes vectors in R m to vectors in R n such that and T (u + v) T (u) + T (v) T (k v) k T (v), for all vectors u

More information

Linear algebra and the geometry of quadratic equations. Similarity transformations and orthogonal matrices

Linear algebra and the geometry of quadratic equations. Similarity transformations and orthogonal matrices MATH 30 Differential Equations Spring 006 Linear algebra and the geometry of quadratic equations Similarity transformations and orthogonal matrices First, some things to recall from linear algebra Two

More information

Linear Algebra Done Wrong. Sergei Treil. Department of Mathematics, Brown University

Linear Algebra Done Wrong. Sergei Treil. Department of Mathematics, Brown University Linear Algebra Done Wrong Sergei Treil Department of Mathematics, Brown University Copyright c Sergei Treil, 2004, 2009, 2011, 2014 Preface The title of the book sounds a bit mysterious. Why should anyone

More information

Math 550 Notes. Chapter 7. Jesse Crawford. Department of Mathematics Tarleton State University. Fall 2010

Math 550 Notes. Chapter 7. Jesse Crawford. Department of Mathematics Tarleton State University. Fall 2010 Math 550 Notes Chapter 7 Jesse Crawford Department of Mathematics Tarleton State University Fall 2010 (Tarleton State University) Math 550 Chapter 7 Fall 2010 1 / 34 Outline 1 Self-Adjoint and Normal Operators

More information

Finite dimensional C -algebras

Finite dimensional C -algebras Finite dimensional C -algebras S. Sundar September 14, 2012 Throughout H, K stand for finite dimensional Hilbert spaces. 1 Spectral theorem for self-adjoint opertors Let A B(H) and let {ξ 1, ξ 2,, ξ n

More information

1 Introduction. 2 Matrices: Definition. Matrix Algebra. Hervé Abdi Lynne J. Williams

1 Introduction. 2 Matrices: Definition. Matrix Algebra. Hervé Abdi Lynne J. Williams In Neil Salkind (Ed.), Encyclopedia of Research Design. Thousand Oaks, CA: Sage. 00 Matrix Algebra Hervé Abdi Lynne J. Williams Introduction Sylvester developed the modern concept of matrices in the 9th

More information

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS Systems of Equations and Matrices Representation of a linear system The general system of m equations in n unknowns can be written a x + a 2 x 2 + + a n x n b a

More information

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 2. x n. a 11 a 12 a 1n b 1 a 21 a 22 a 2n b 2 a 31 a 32 a 3n b 3. a m1 a m2 a mn b m

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 2. x n. a 11 a 12 a 1n b 1 a 21 a 22 a 2n b 2 a 31 a 32 a 3n b 3. a m1 a m2 a mn b m MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS 1. SYSTEMS OF EQUATIONS AND MATRICES 1.1. Representation of a linear system. The general system of m equations in n unknowns can be written a 11 x 1 + a 12 x 2 +

More information

Eigenvalues, Eigenvectors, Matrix Factoring, and Principal Components

Eigenvalues, Eigenvectors, Matrix Factoring, and Principal Components Eigenvalues, Eigenvectors, Matrix Factoring, and Principal Components The eigenvalues and eigenvectors of a square matrix play a key role in some important operations in statistics. In particular, they

More information

Semi-Simple Lie Algebras and Their Representations

Semi-Simple Lie Algebras and Their Representations i Semi-Simple Lie Algebras and Their Representations Robert N. Cahn Lawrence Berkeley Laboratory University of California Berkeley, California 1984 THE BENJAMIN/CUMMINGS PUBLISHING COMPANY Advanced Book

More information

Introduction to Matrix Algebra

Introduction to Matrix Algebra Psychology 7291: Multivariate Statistics (Carey) 8/27/98 Matrix Algebra - 1 Introduction to Matrix Algebra Definitions: A matrix is a collection of numbers ordered by rows and columns. It is customary

More information

Chapter 7. Matrices. Definition. An m n matrix is an array of numbers set out in m rows and n columns. Examples. ( 1 1 5 2 0 6

Chapter 7. Matrices. Definition. An m n matrix is an array of numbers set out in m rows and n columns. Examples. ( 1 1 5 2 0 6 Chapter 7 Matrices Definition An m n matrix is an array of numbers set out in m rows and n columns Examples (i ( 1 1 5 2 0 6 has 2 rows and 3 columns and so it is a 2 3 matrix (ii 1 0 7 1 2 3 3 1 is a

More information

Factoring of Prime Ideals in Extensions

Factoring of Prime Ideals in Extensions Chapter 4 Factoring of Prime Ideals in Extensions 4. Lifting of Prime Ideals Recall the basic AKLB setup: A is a Dedekind domain with fraction field K, L is a finite, separable extension of K of degree

More information

2. Introduction to quantum mechanics

2. Introduction to quantum mechanics 2. Introduction to quantum mechanics 2.1 Linear algebra Dirac notation Complex conjugate Vector/ket Dual vector/bra Inner product/bracket Tensor product Complex conj. matrix Transpose of matrix Hermitian

More information

Section 1.1. Introduction to R n

Section 1.1. Introduction to R n The Calculus of Functions of Several Variables Section. Introduction to R n Calculus is the study of functional relationships and how related quantities change with each other. In your first exposure to

More information

Inner Product Spaces and Orthogonality

Inner Product Spaces and Orthogonality Inner Product Spaces and Orthogonality week 3-4 Fall 2006 Dot product of R n The inner product or dot product of R n is a function, defined by u, v a b + a 2 b 2 + + a n b n for u a, a 2,, a n T, v b,

More information

2.1: MATRIX OPERATIONS

2.1: MATRIX OPERATIONS .: MATRIX OPERATIONS What are diagonal entries and the main diagonal of a matrix? What is a diagonal matrix? When are matrices equal? Scalar Multiplication 45 Matrix Addition Theorem (pg 0) Let A, B, and

More information

MATH 304 Linear Algebra Lecture 9: Subspaces of vector spaces (continued). Span. Spanning set.

MATH 304 Linear Algebra Lecture 9: Subspaces of vector spaces (continued). Span. Spanning set. MATH 304 Linear Algebra Lecture 9: Subspaces of vector spaces (continued). Span. Spanning set. Vector space A vector space is a set V equipped with two operations, addition V V (x,y) x + y V and scalar

More information

Section 8.8. 1. The given line has equations. x = 3 + t(13 3) = 3 + 10t, y = 2 + t(3 + 2) = 2 + 5t, z = 7 + t( 8 7) = 7 15t.

Section 8.8. 1. The given line has equations. x = 3 + t(13 3) = 3 + 10t, y = 2 + t(3 + 2) = 2 + 5t, z = 7 + t( 8 7) = 7 15t. . The given line has equations Section 8.8 x + t( ) + 0t, y + t( + ) + t, z 7 + t( 8 7) 7 t. The line meets the plane y 0 in the point (x, 0, z), where 0 + t, or t /. The corresponding values for x and

More information

NOTES ON MINIMAL SURFACES

NOTES ON MINIMAL SURFACES NOTES ON MINIMAL SURFACES DANNY CALEGARI Abstract. These are notes on minimal surfaces, with an emphasis on the classical theory and its connection to complex analysis, and the topological applications

More information

Quotient Rings and Field Extensions

Quotient Rings and Field Extensions Chapter 5 Quotient Rings and Field Extensions In this chapter we describe a method for producing field extension of a given field. If F is a field, then a field extension is a field K that contains F.

More information

Solution to Homework 2

Solution to Homework 2 Solution to Homework 2 Olena Bormashenko September 23, 2011 Section 1.4: 1(a)(b)(i)(k), 4, 5, 14; Section 1.5: 1(a)(b)(c)(d)(e)(n), 2(a)(c), 13, 16, 17, 18, 27 Section 1.4 1. Compute the following, if

More information

Quantum Mechanics and Representation Theory

Quantum Mechanics and Representation Theory Quantum Mechanics and Representation Theory Peter Woit Columbia University Texas Tech, November 21 2013 Peter Woit (Columbia University) Quantum Mechanics and Representation Theory November 2013 1 / 30

More information

1.4. Removing Brackets. Introduction. Prerequisites. Learning Outcomes. Learning Style

1.4. Removing Brackets. Introduction. Prerequisites. Learning Outcomes. Learning Style Removing Brackets 1. Introduction In order to simplify an expression which contains brackets it is often necessary to rewrite the expression in an equivalent form but without any brackets. This process

More information

BILINEAR FORMS KEITH CONRAD

BILINEAR FORMS KEITH CONRAD BILINEAR FORMS KEITH CONRAD The geometry of R n is controlled algebraically by the dot product. We will abstract the dot product on R n to a bilinear form on a vector space and study algebraic and geometric

More information

Continuous Groups, Lie Groups, and Lie Algebras

Continuous Groups, Lie Groups, and Lie Algebras Chapter 7 Continuous Groups, Lie Groups, and Lie Algebras Zeno was concerned with three problems... These are the problem of the infinitesimal, the infinite, and continuity... Bertrand Russell The groups

More information

1 Short Introduction to Time Series

1 Short Introduction to Time Series ECONOMICS 7344, Spring 202 Bent E. Sørensen January 24, 202 Short Introduction to Time Series A time series is a collection of stochastic variables x,.., x t,.., x T indexed by an integer value t. The

More information

A QUICK GUIDE TO THE FORMULAS OF MULTIVARIABLE CALCULUS

A QUICK GUIDE TO THE FORMULAS OF MULTIVARIABLE CALCULUS A QUIK GUIDE TO THE FOMULAS OF MULTIVAIABLE ALULUS ontents 1. Analytic Geometry 2 1.1. Definition of a Vector 2 1.2. Scalar Product 2 1.3. Properties of the Scalar Product 2 1.4. Length and Unit Vectors

More information

Lecture 2: Essential quantum mechanics

Lecture 2: Essential quantum mechanics Department of Physical Sciences, University of Helsinki http://theory.physics.helsinki.fi/ kvanttilaskenta/ p. 1/46 Quantum information and computing Lecture 2: Essential quantum mechanics Jani-Petri Martikainen

More information

Inner Product Spaces

Inner Product Spaces Math 571 Inner Product Spaces 1. Preliminaries An inner product space is a vector space V along with a function, called an inner product which associates each pair of vectors u, v with a scalar u, v, and

More information

The Hadamard Product

The Hadamard Product The Hadamard Product Elizabeth Million April 12, 2007 1 Introduction and Basic Results As inexperienced mathematicians we may have once thought that the natural definition for matrix multiplication would

More information

Lecture 18 - Clifford Algebras and Spin groups

Lecture 18 - Clifford Algebras and Spin groups Lecture 18 - Clifford Algebras and Spin groups April 5, 2013 Reference: Lawson and Michelsohn, Spin Geometry. 1 Universal Property If V is a vector space over R or C, let q be any quadratic form, meaning

More information

Linear Algebra I. Ronald van Luijk, 2012

Linear Algebra I. Ronald van Luijk, 2012 Linear Algebra I Ronald van Luijk, 2012 With many parts from Linear Algebra I by Michael Stoll, 2007 Contents 1. Vector spaces 3 1.1. Examples 3 1.2. Fields 4 1.3. The field of complex numbers. 6 1.4.

More information

Adding vectors We can do arithmetic with vectors. We ll start with vector addition and related operations. Suppose you have two vectors

Adding vectors We can do arithmetic with vectors. We ll start with vector addition and related operations. Suppose you have two vectors 1 Chapter 13. VECTORS IN THREE DIMENSIONAL SPACE Let s begin with some names and notation for things: R is the set (collection) of real numbers. We write x R to mean that x is a real number. A real number

More information

PUTNAM TRAINING POLYNOMIALS. Exercises 1. Find a polynomial with integral coefficients whose zeros include 2 + 5.

PUTNAM TRAINING POLYNOMIALS. Exercises 1. Find a polynomial with integral coefficients whose zeros include 2 + 5. PUTNAM TRAINING POLYNOMIALS (Last updated: November 17, 2015) Remark. This is a list of exercises on polynomials. Miguel A. Lerma Exercises 1. Find a polynomial with integral coefficients whose zeros include

More information

3. INNER PRODUCT SPACES

3. INNER PRODUCT SPACES . INNER PRODUCT SPACES.. Definition So far we have studied abstract vector spaces. These are a generalisation of the geometric spaces R and R. But these have more structure than just that of a vector space.

More information

CONNECTIONS ON PRINCIPAL G-BUNDLES

CONNECTIONS ON PRINCIPAL G-BUNDLES CONNECTIONS ON PRINCIPAL G-BUNDLES RAHUL SHAH Abstract. We will describe connections on principal G-bundles via two perspectives: that of distributions and that of connection 1-forms. We will show that

More information

INTRODUCTORY LINEAR ALGEBRA WITH APPLICATIONS B. KOLMAN, D. R. HILL

INTRODUCTORY LINEAR ALGEBRA WITH APPLICATIONS B. KOLMAN, D. R. HILL SOLUTIONS OF THEORETICAL EXERCISES selected from INTRODUCTORY LINEAR ALGEBRA WITH APPLICATIONS B. KOLMAN, D. R. HILL Eighth Edition, Prentice Hall, 2005. Dr. Grigore CĂLUGĂREANU Department of Mathematics

More information

1. True/False: Circle the correct answer. No justifications are needed in this exercise. (1 point each)

1. True/False: Circle the correct answer. No justifications are needed in this exercise. (1 point each) Math 33 AH : Solution to the Final Exam Honors Linear Algebra and Applications 1. True/False: Circle the correct answer. No justifications are needed in this exercise. (1 point each) (1) If A is an invertible

More information

Chapter 17. Orthogonal Matrices and Symmetries of Space

Chapter 17. Orthogonal Matrices and Symmetries of Space Chapter 17. Orthogonal Matrices and Symmetries of Space Take a random matrix, say 1 3 A = 4 5 6, 7 8 9 and compare the lengths of e 1 and Ae 1. The vector e 1 has length 1, while Ae 1 = (1, 4, 7) has length

More information

Lecture Notes 1: Matrix Algebra Part B: Determinants and Inverses

Lecture Notes 1: Matrix Algebra Part B: Determinants and Inverses University of Warwick, EC9A0 Maths for Economists Peter J. Hammond 1 of 57 Lecture Notes 1: Matrix Algebra Part B: Determinants and Inverses Peter J. Hammond email: p.j.hammond@warwick.ac.uk Autumn 2012,

More information

Irreducible Representations of SO(2) and SO(3)

Irreducible Representations of SO(2) and SO(3) Chapter 8 Irreducible Representations of SO(2) and SO(3) The shortest path between two truths in the real domain passes through the complex domain. Jacques Hadamard 1 Some of the most useful aspects of

More information

1 2 3 1 1 2 x = + x 2 + x 4 1 0 1

1 2 3 1 1 2 x = + x 2 + x 4 1 0 1 (d) If the vector b is the sum of the four columns of A, write down the complete solution to Ax = b. 1 2 3 1 1 2 x = + x 2 + x 4 1 0 0 1 0 1 2. (11 points) This problem finds the curve y = C + D 2 t which

More information

Math 241, Exam 1 Information.

Math 241, Exam 1 Information. Math 241, Exam 1 Information. 9/24/12, LC 310, 11:15-12:05. Exam 1 will be based on: Sections 12.1-12.5, 14.1-14.3. The corresponding assigned homework problems (see http://www.math.sc.edu/ boylan/sccourses/241fa12/241.html)

More information

α = u v. In other words, Orthogonal Projection

α = u v. In other words, Orthogonal Projection Orthogonal Projection Given any nonzero vector v, it is possible to decompose an arbitrary vector u into a component that points in the direction of v and one that points in a direction orthogonal to v

More information

Factoring Quadratic Expressions

Factoring Quadratic Expressions Factoring the trinomial ax 2 + bx + c when a = 1 A trinomial in the form x 2 + bx + c can be factored to equal (x + m)(x + n) when the product of m x n equals c and the sum of m + n equals b. (Note: the

More information

The Singular Value Decomposition in Symmetric (Löwdin) Orthogonalization and Data Compression

The Singular Value Decomposition in Symmetric (Löwdin) Orthogonalization and Data Compression The Singular Value Decomposition in Symmetric (Löwdin) Orthogonalization and Data Compression The SVD is the most generally applicable of the orthogonal-diagonal-orthogonal type matrix decompositions Every

More information

Matrix Algebra. Some Basic Matrix Laws. Before reading the text or the following notes glance at the following list of basic matrix algebra laws.

Matrix Algebra. Some Basic Matrix Laws. Before reading the text or the following notes glance at the following list of basic matrix algebra laws. Matrix Algebra A. Doerr Before reading the text or the following notes glance at the following list of basic matrix algebra laws. Some Basic Matrix Laws Assume the orders of the matrices are such that

More information

Index notation in 3D. 1 Why index notation?

Index notation in 3D. 1 Why index notation? Index notation in 3D 1 Why index notation? Vectors are objects that have properties that are independent of the coordinate system that they are written in. Vector notation is advantageous since it is elegant

More information

Gauged supergravity and E 10

Gauged supergravity and E 10 Gauged supergravity and E 10 Jakob Palmkvist Albert-Einstein-Institut in collaboration with Eric Bergshoeff, Olaf Hohm, Axel Kleinschmidt, Hermann Nicolai and Teake Nutma arxiv:0810.5767 JHEP01(2009)020

More information

NOTES on LINEAR ALGEBRA 1

NOTES on LINEAR ALGEBRA 1 School of Economics, Management and Statistics University of Bologna Academic Year 205/6 NOTES on LINEAR ALGEBRA for the students of Stats and Maths This is a modified version of the notes by Prof Laura

More information

Copy in your notebook: Add an example of each term with the symbols used in algebra 2 if there are any.

Copy in your notebook: Add an example of each term with the symbols used in algebra 2 if there are any. Algebra 2 - Chapter Prerequisites Vocabulary Copy in your notebook: Add an example of each term with the symbols used in algebra 2 if there are any. P1 p. 1 1. counting(natural) numbers - {1,2,3,4,...}

More information

1D 3D 1D 3D. is called eigenstate or state function. When an operator act on a state, it can be written as

1D 3D 1D 3D. is called eigenstate or state function. When an operator act on a state, it can be written as Chapter 3 (Lecture 4-5) Postulates of Quantum Mechanics Now we turn to an application of the preceding material, and move into the foundations of quantum mechanics. Quantum mechanics is based on a series

More information

Similarity and Diagonalization. Similar Matrices

Similarity and Diagonalization. Similar Matrices MATH022 Linear Algebra Brief lecture notes 48 Similarity and Diagonalization Similar Matrices Let A and B be n n matrices. We say that A is similar to B if there is an invertible n n matrix P such that

More information

Continued Fractions and the Euclidean Algorithm

Continued Fractions and the Euclidean Algorithm Continued Fractions and the Euclidean Algorithm Lecture notes prepared for MATH 326, Spring 997 Department of Mathematics and Statistics University at Albany William F Hammond Table of Contents Introduction

More information

Exam 1 Sample Question SOLUTIONS. y = 2x

Exam 1 Sample Question SOLUTIONS. y = 2x Exam Sample Question SOLUTIONS. Eliminate the parameter to find a Cartesian equation for the curve: x e t, y e t. SOLUTION: You might look at the coordinates and notice that If you don t see it, we can

More information

Similar matrices and Jordan form

Similar matrices and Jordan form Similar matrices and Jordan form We ve nearly covered the entire heart of linear algebra once we ve finished singular value decompositions we ll have seen all the most central topics. A T A is positive

More information

Linear Algebra Notes for Marsden and Tromba Vector Calculus

Linear Algebra Notes for Marsden and Tromba Vector Calculus Linear Algebra Notes for Marsden and Tromba Vector Calculus n-dimensional Euclidean Space and Matrices Definition of n space As was learned in Math b, a point in Euclidean three space can be thought of

More information

SF2940: Probability theory Lecture 8: Multivariate Normal Distribution

SF2940: Probability theory Lecture 8: Multivariate Normal Distribution SF2940: Probability theory Lecture 8: Multivariate Normal Distribution Timo Koski 24.09.2015 Timo Koski Matematisk statistik 24.09.2015 1 / 1 Learning outcomes Random vectors, mean vector, covariance matrix,

More information

Lecture 3 SU(2) January 26, 2011. Lecture 3

Lecture 3 SU(2) January 26, 2011. Lecture 3 Lecture 3 SU(2) January 26, 2 Lecture 3 A Little Group Theory A group is a set of elements plus a compostion rule, such that:. Combining two elements under the rule gives another element of the group.

More information

Vector and Matrix Norms

Vector and Matrix Norms Chapter 1 Vector and Matrix Norms 11 Vector Spaces Let F be a field (such as the real numbers, R, or complex numbers, C) with elements called scalars A Vector Space, V, over the field F is a non-empty

More information