The Atiyah-Singer Index Theorem: what it is and why you should care

Rafe Mazzeo

October 10, 2002

Before we get started: At one time, economic conditions caused the closing of several small clothing mills in the English countryside. A man from West Germany bought the buildings and converted them into dog kennels for the convenience of German tourists who liked to have their pets with them while vacationing in England. One summer evening, a local resident called to his wife to come out of the house. "Just listen!" he urged. "The Mills Are Alive With the Hounds of Munich!"

Background: The Atiyah-Singer theorem provides a fundamental link between differential geometry, partial differential equations, differential topology, and operator algebras, and it has connections to many other fields, including number theory. The purpose of this talk is to give some idea of the general mathematical context of the index theorem and of what it actually says (particularly in special cases), to indicate some of the various proofs, to mention a few applications, and to hint at some generalizations and directions of current research in index theory.

Outline:
1. Two simple examples
2. Fredholm operators
3. The Toeplitz index theorem
4. Elliptic operators on manifolds
5. Dirac operators
6. The Gauß-Bonnet and Hirzebruch signature theorems
7. The first proofs
8. The heat equation and the McKean-Singer argument
9. Heat asymptotics and Patodi's miraculous cancellation
10. Getzler rescaling

and if there is time:
11. Application: obstruction to existence of positive scalar curvature metrics
12. Application: the dimension of moduli spaces
13. The Atiyah-Patodi-Singer index theorem

The simplest example: Suppose $A : \mathbb{R}^n \to \mathbb{R}^k$ is a linear transformation. The rank-nullity theorem states that
$$\dim \mathrm{ran}(A) + \dim \ker(A) = n.$$
To prove this, decompose $\mathbb{R}^n = K \oplus K^\perp$ and $\mathbb{R}^k = R \oplus R^\perp$, where $K = \ker A$ and $R = \mathrm{ran}\, A$. Then $A : K^\perp \to R$ is an isomorphism, and
$$n = \dim K + \dim K^\perp, \qquad k = \dim R + \dim R^\perp.$$
Hence
$$n - k = (\dim K + \dim K^\perp) - (\dim R + \dim R^\perp) = \dim K - \dim R^\perp = \dim \ker A - (\dim \mathbb{R}^k - \dim \mathrm{ran}\, A) = \mathrm{ind}(A),$$
where by definition the index of $A$ is
$$\mathrm{ind}(A) = \dim \ker A - \dim \mathrm{coker}\, A.$$
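A quick numerical sanity check of this finite-dimensional picture (my own illustration, not from the talk; it assumes numpy is available): whatever $k \times n$ matrix we pick, $\dim \ker A - \dim \mathrm{coker}\, A$ comes out to $n - k$.

```python
# Index of a linear map A : R^n -> R^k, computed two ways.
import numpy as np

rng = np.random.default_rng(0)
n, k = 7, 4
A = rng.standard_normal((k, n))      # a k x n matrix, i.e. a map R^n -> R^k

rank = np.linalg.matrix_rank(A)
dim_ker = n - rank                   # rank-nullity
dim_coker = k - rank
print(dim_ker - dim_coker, n - k)    # both print 3: ind(A) = n - k
```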

The first infinite-dimensional example: Let $\ell^2$ denote the space of all square-summable sequences $(a_0, a_1, a_2, \ldots)$. Define the left shift and right shift operators
$$L : \ell^2 \to \ell^2, \qquad L(a_0, a_1, a_2, \ldots) = (a_1, a_2, \ldots),$$
$$R : \ell^2 \to \ell^2, \qquad R(a_0, a_1, a_2, \ldots) = (0, a_0, a_1, a_2, \ldots).$$
Then $\dim \ker L = 1$ and $\dim \mathrm{coker}\, L = 0$, so $\mathrm{ind}\, L = 1$. On the other hand, $\dim \ker R = 0$ and $\dim \mathrm{coker}\, R = 1$, so $\mathrm{ind}\, R = -1$. Notice that $\mathrm{ind}\, L^N = N$ and $\mathrm{ind}\, R^N = -N$, so we can construct operators with any given index.
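The algebra behind these two indices can be seen in a few lines of code (a sketch of my own, with finite Python lists standing in for $\ell^2$): $LR = I$ exactly, while $RL$ is the identity minus the projection onto the $0$th coordinate, and this one-dimensional discrepancy is exactly the difference of the two indices.

```python
# Left and right shifts on a (truncated) sequence.
def L(a):
    return a[1:]

def R(a):
    return [0] + a

a = [3, 1, 4, 1, 5]
print(L(R(a)) == a)     # True: L R = I
print(R(L(a)))          # [0, 1, 4, 1, 5]: R L = I minus the projection onto e_0
```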

Fredholm operators: Let $H_1$ and $H_2$ be (separable) Hilbert spaces, and suppose that $A : H_1 \to H_2$ is a bounded operator.

Example: $A = \frac{1}{i}\frac{d}{d\theta}$ acting on $H_1 = H^1(S^1) = \{u : \int_{S^1} (|u|^2 + |u'|^2)\, d\theta < \infty\}$, $H_2 = L^2(S^1)$, with $A(e^{in\theta}) = n\, e^{in\theta}$.

Definition: $A$ is Fredholm if $\ker A$ is finite dimensional, $\mathrm{ran}\, A$ is closed, and $\mathrm{coker}\, A$ is finite dimensional. If $A$ is Fredholm, then
$$\mathrm{ind}\, A = \dim \ker A - \dim \mathrm{coker}\, A \in \mathbb{Z}$$
is well defined.

Intuition: If $A$ is Fredholm, then $H_1 = K \oplus K^\perp$ and $H_2 = R \oplus R^\perp$, where $A : K^\perp \to R$ is an isomorphism and both $K$ and $R^\perp$ are finite dimensional.

$A$ is Fredholm if it is "almost invertible", or more precisely, invertible up to compact errors:
$$\{\text{Fredholm operators}\} = \{\text{invertible elements in } \mathcal{B}(H_1, H_2)/\mathcal{K}(H_1, H_2)\}.$$
We can also make sense of what it means for an unbounded, closed operator to be Fredholm. Recall that $A : H_1 \to H_2$ is closed if
$$\mathrm{Graph}(A) = \{(h, Ah) : h \in \mathrm{domain}(A)\}$$
is a closed subspace of $H_1 \oplus H_2$. (This is always the case if $H_1 = H_2 = L^2$ and $A$ is a differential operator.)

Basic Fact: If $A_t$ is a one-parameter family of Fredholm operators (depending continuously on $t$), then $\mathrm{ind}(A_t)$ is independent of $t$.

Intuition: Let $A_t = tI - K$, $t > 0$, with $K$ finite rank and symmetric. Then $A_t$ gains a nullspace and a cokernel of the same dimension every time $t$ crosses an eigenvalue of $K$.
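Here is a small numerical illustration of that intuition (a toy example of my own, assuming numpy): for the self-adjoint family $A_t = tI - K$ the kernel and cokernel jump together, by the multiplicity of the eigenvalue being crossed, so the index never changes.

```python
import numpy as np

K = np.diag([1.0, 1.0, 2.0, 0.0])        # symmetric; eigenvalue 1 has multiplicity 2

def ker_dim(M, tol=1e-10):
    # dimension of the nullspace = number of (numerically) zero singular values
    return int((np.linalg.svd(M, compute_uv=False) < tol).sum())

for t in [0.5, 1.0, 1.5, 2.0, 2.5]:
    A_t = t * np.eye(4) - K
    print(t, ker_dim(A_t), ker_dim(A_t.T), ker_dim(A_t) - ker_dim(A_t.T))
    # kernel and cokernel both jump at t = 1 (by 2) and t = 2 (by 1); the index stays 0
```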

The Toeplitz index theorem: Let $H \subset L^2(S^1)$ be the Hilbert space consisting of all $L^2$ functions on $S^1$ whose Fourier coefficients vanish in negative degrees. So
$$a \in H \iff a(\theta) = \sum_{n=0}^{\infty} a_n e^{in\theta}, \qquad \sum_{n=0}^\infty |a_n|^2 < \infty.$$
There is an orthogonal projection $\Pi : L^2(S^1) \to H$.

Suppose $f(\theta)$ is a continuous (complex-valued) function on $S^1$. Define the Toeplitz operator
$$T_f : H \to H, \qquad T_f(a) = \Pi M_f a,$$
where $M_f$ is multiplication by $f$ (a bounded operator on $L^2(S^1)$).

Question: When is $T_f$ Fredholm? What is its index?

Theorem: The Toeplitz operator $T_f$ is Fredholm if and only if $f$ is nowhere $0$. In this case, its index equals minus the winding number of $f$ (as a map $S^1 \to \mathbb{C}^*$).
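Since the winding number is the whole story, it is easy to compute numerically. The sketch below (my own, assuming numpy) accumulates the change of $\arg f$ around the circle; the theorem then predicts $\mathrm{ind}(T_f) = -\,\mathrm{winding}(f)$, as the proof below makes explicit.

```python
import numpy as np

def winding_number(f, samples=20000):
    theta = np.linspace(0.0, 2 * np.pi, samples, endpoint=False)
    values = f(theta)
    # sum of the small increments of arg(f) between consecutive sample points
    ratios = values[np.r_[1:samples, 0]] / values
    return int(round(np.angle(ratios).sum() / (2 * np.pi)))

print(winding_number(lambda t: np.exp(3j * t)))                           # 3,  so ind(T_f) = -3
print(winding_number(lambda t: 2 + np.exp(1j * t)))                       # 0,  ind(T_f) = 0
print(winding_number(lambda t: np.exp(-2j * t) * (1 + 0.3 * np.cos(t))))  # -2, so ind(T_f) = 2
```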

Part of proof: We guess that $T_{1/f}$ is a good approximation to an inverse for $T_f$, so we compute:
$$T_f T_{1/f} = \Pi M_f \Pi M_{1/f} \Pi = \Pi M_f M_{1/f} \Pi + \Pi [M_f, \Pi] M_{1/f} \Pi = I_H + \Pi [M_f, \Pi] M_{1/f} \Pi.$$
The final term here is a compact operator! This is obvious when $f$ has a finite Fourier series; for the general case, approximate the continuous function $f$ by trigonometric polynomials.

Proof of Toeplitz index theorem:

First step: Let $N$ be the winding number of $f$. Then there is a continuous family $f_t$, where each $f_t : S^1 \to \mathbb{C}^*$ is continuous, such that $f_0 = f$ and $f_1 = e^{iN\theta}$. Since $\mathrm{ind}(T_{f_t})$ is independent of $t$, it suffices to compute the index of $T_{e^{iN\theta}}$:
$$\Pi M_{e^{iN\theta}} \Pi : \sum_{n=0}^\infty a_n e^{in\theta} \;\longmapsto\; \begin{cases} \sum_{n=0}^\infty a_{n-N}\, e^{in\theta} & \text{if } N < 0, \\ \sum_{n=N}^\infty a_{n-N}\, e^{in\theta} & \text{if } N \ge 0. \end{cases}$$
This corresponds to $L^{|N|}$ or $R^N$, respectively, so $\mathrm{ind}(T_{e^{iN\theta}}) = -N$ in either case. Thus
$$\mathrm{ind}(T_f) = -(\text{winding number of } f).$$

Elliptic operators: Let $M$ be a smooth compact manifold of dimension $n$. In any coordinate chart, a differential operator $P$ on $M$ has the form
$$P = \sum_{|\alpha| \le m} a_\alpha(x)\, D^\alpha, \qquad \alpha = (\alpha_1, \ldots, \alpha_n), \quad D^\alpha = \partial_{x_1}^{\alpha_1} \cdots \partial_{x_n}^{\alpha_n}.$$
The symbol of $P$ is the homogeneous polynomial
$$\sigma_m(P)(x;\xi) = \sum_{|\alpha| = m} a_\alpha(x)\, \xi^\alpha, \qquad \xi^\alpha = \xi_1^{\alpha_1} \cdots \xi_n^{\alpha_n}.$$
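For example (a standard special case, not spelled out on the slides; signs depend on conventions): the flat Laplacian $\Delta = -\sum_{j=1}^n \partial^2/\partial x_j^2$ has principal symbol $\sigma_2(\Delta)(x;\xi) = |\xi|^2 = \sum_j \xi_j^2$, which is nonzero whenever $\xi \ne 0$, so $\Delta$ is elliptic. By contrast, the wave operator $\partial_t^2 - \sum_j \partial_{x_j}^2$ has symbol $\tau^2 - |\xi|^2$, which vanishes along the light cone, so it is not elliptic.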

If $P$ is a system (i.e. it acts between sections of two different vector bundles $E$ and $F$ over $M$), then the coefficients $a_\alpha(x)$ lie in $\mathrm{Hom}(E_x, F_x)$. $P$ is called elliptic if $\sigma_m(P)(x;\xi) \ne 0$ for $\xi \ne 0$ (or, in the bundle case, if it is invertible as an element of $\mathrm{Hom}(E_x, F_x)$).

Fundamental Theorem: If $P$ is an elliptic operator of order $m$ acting between sections of two vector bundles $E$ and $F$ on a compact manifold $M$, then
$$P : L^2(M; E) \to L^2(M; F)$$
defines a(n unbounded) Fredholm operator. Hence $\mathrm{ind}(P)$ is well defined.

Question: Compute $\mathrm{ind}(P)$. One expects this to be possible because the index is invariant under such a large class of deformations.

Interesting elliptic operators: How about the Laplacian $\Delta_0 : L^2(M) \to L^2(M)$ on functions? Nope: the Laplacian is self-adjoint, $\langle \Delta_0 u, v \rangle = \langle u, \Delta_0 v \rangle$, and so $\ker \Delta_0 = \mathrm{coker}\, \Delta_0 = \mathbb{R}$, hence $\mathrm{ind}(\Delta_0) = 0$.

The same is true for $\Delta_k : L^2\Omega^k(M) \to L^2\Omega^k(M)$, since it too is self-adjoint:
$$\Delta_k = d_{k-1} d_k^* + d_{k+1}^* d_k.$$
In fact, the Hodge theorem states that $\ker \Delta_k = \mathrm{coker}\, \Delta_k = \mathcal{H}^k(M)$ (the space of harmonic $k$-forms), and this is isomorphic to the singular cohomology $H^k_{\mathrm{sing}}(M, \mathbb{R})$.

Similarly, $D = d + d^* : L^2\Omega^*(M) \to L^2\Omega^*(M)$ is still self-adjoint. However, we can break the symmetry:

The Gauss-Bonnet operator:
$$D_{GB} = d + d^* : L^2\Omega^{\mathrm{even}} \to L^2\Omega^{\mathrm{odd}},$$
$$\ker D_{GB} = \bigoplus_k \mathcal{H}^{2k}(M), \qquad \mathrm{coker}\, D_{GB} = \bigoplus_k \mathcal{H}^{2k+1}(M),$$
and so
$$\mathrm{ind}\, D_{GB} = \sum_{k=0}^n (-1)^k \dim H^k(M) = \chi(M),$$
the Euler characteristic of $M$. Note that $\mathrm{ind}(D_{GB}) = 0$ when $\dim M$ is odd. This is not the index theorem, but only a Hodge-theoretic calculation of the index of $D_{GB}$ in terms of something familiar.

The Chern-Gauss-Bonnet theorem states that
$$\chi(M) = \int_M \mathrm{Pf}(\Omega).$$
Here $\mathrm{Pf}(\Omega)$ is an explicit differential form (called the Pfaffian) which involves only the curvature tensor of $M$. One can think of this integral either as a differential geometric quantity, or else as a characteristic number (the evaluation of the Euler class of $TM$ on the fundamental homology class $[M]$).
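As a sanity check in the simplest case (a standard fact, added here for orientation): on an oriented surface $\mathrm{Pf}(\Omega) = \frac{K}{2\pi}\, dA$, with $K$ the Gauss curvature, and for the round sphere of radius $r$,
$$\int_{S^2} \frac{K}{2\pi}\, dA = \frac{1}{2\pi} \cdot \frac{1}{r^2} \cdot 4\pi r^2 = 2 = \chi(S^2),$$
independently of $r$, exactly as the theorem demands.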

The signature operator:
$$D_{\mathrm{sig}} = d + d^* : L^2\Omega^+(M) \to L^2\Omega^-(M),$$
where $\Omega^\pm(M)$ is the $\pm 1$ eigenspace of an involution $\tau : \Omega^*(M) \to \Omega^*(M)$, $\tau^2 = 1$. When $\dim M = 4k$, then $\mathrm{ind}\, D_{\mathrm{sig}}$ equals the signature of the intersection pairing $H^{2k}(M) \times H^{2k}(M) \to \mathbb{R}$, which is a basic topological invariant: the signature of $M$. In all other cases, $\mathrm{ind}\, D_{\mathrm{sig}} = 0$.

The Hirzebruch signature theorem states that
$$\mathrm{sign}(M) = \int_M L(p),$$
where $L(p)$ is the $L$-polynomial in the Pontrjagin classes. This is another curvature integral, and also a characteristic number. (It is a homotopy invariant of $M$.)

The (generalized) Dirac operator: Let $E^\pm$ be two vector bundles over $(M, g)$, and suppose that there is a (fibrewise) Clifford multiplication
$$\mathrm{cl} : T^*M \otimes E^\pm \to E^\mp$$
which satisfies
$$\mathrm{cl}(v)\,\mathrm{cl}(w) + \mathrm{cl}(w)\,\mathrm{cl}(v) = -2\langle v, w\rangle\, \mathrm{Id}.$$
Let $E = E^+ \oplus E^-$, and let $\nabla : \mathcal{C}^\infty(M; E) \to \mathcal{C}^\infty(M; E \otimes T^*M)$ be a connection.

Now define $D : L^2(M; E) \to L^2(M; E)$ by
$$D = \sum_{i=1}^n \mathrm{cl}(e_i)\, \nabla_{e_i},$$
where $\{e_1, \ldots, e_n\}$ is any local orthonormal basis of $TM$.

The Clifford relations show that
$$D^2 = \nabla^*\nabla + \mathcal{A},$$
where $\mathcal{A}$ is a bundle homomorphism. In other words, $D^2$ is the Laplacian up to a term of order zero.
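A small numerical check (my own illustration, assuming numpy) of the Clifford relation and of the last remark, using explicit $2 \times 2$ Clifford matrices for $\mathbb{R}^2$: the matrices anticommute and square to $-\mathrm{Id}$, and consequently $\big(\xi_1\,\mathrm{cl}(e_1) + \xi_2\,\mathrm{cl}(e_2)\big)^2 = -|\xi|^2\,\mathrm{Id}$, which is why $D^2$ has the Laplacian-type principal symbol $|\xi|^2\,\mathrm{Id}$.

```python
import numpy as np

# Two 2x2 matrices realizing Clifford multiplication by e_1, e_2 on R^2.
c1 = np.array([[0, 1], [-1, 0]], dtype=complex)
c2 = np.array([[0, 1j], [1j, 0]], dtype=complex)

# Clifford relation: cl(e_i) cl(e_j) + cl(e_j) cl(e_i) = -2 delta_ij Id.
for a in (c1, c2):
    for b in (c1, c2):
        expected = -2 * np.eye(2) if a is b else np.zeros((2, 2))
        assert np.allclose(a @ b + b @ a, expected)

# Hence (xi_1 cl(e_1) + xi_2 cl(e_2))^2 = -|xi|^2 Id for every covector xi.
xi = np.array([0.6, -2.0])
s = xi[0] * c1 + xi[1] * c2
print(np.allclose(s @ s, -(xi @ xi) * np.eye(2)))   # True
```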

In certain cases (when $M$ is a spin manifold: $w_2(M) = 0$), there is a distinguished Dirac operator $D$ acting between sections of the spinor bundle $S$ over $M$. One can then twist this basic Dirac operator $D : L^2(M; S) \to L^2(M; S)$ by tensoring with an arbitrary vector bundle $E$ (with connection), to get the twisted Dirac operator
$$D_E = D \otimes 1 + \mathrm{cl}(1 \otimes \nabla^E) : L^2(M; S \otimes E) \to L^2(M; S \otimes E).$$

The index theorem for twisted Dirac operators states that
$$\mathrm{ind}(D_E) = \int_M \hat{A}(TM)\, \mathrm{ch}(E).$$
The integrand is again a "curvature integral": it is the product of the $\hat{A}$ genus of $M$ and the Chern character of the bundle $E$. Both $D_{GB}$ and $D_{\mathrm{sig}}$ are twisted Dirac operators, and both $\mathrm{Pf}(\Omega)$ and $L(p)$ are the product of $\hat{A}(TM)$ and $\mathrm{ch}(E)$ for appropriate bundles $E$.
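A small worked special case (using the standard expansions $\hat{A} = 1 - \tfrac{p_1}{24} + \cdots$ and $L = 1 + \tfrac{p_1}{3} + \cdots$, which are not spelled out on the slides): on a compact spin 4-manifold the untwisted Dirac operator satisfies
$$\mathrm{ind}(D) = \int_M \hat{A}(TM) = -\frac{1}{24}\int_M p_1 = -\frac{1}{8}\,\mathrm{sign}(M)$$
by the Hirzebruch signature theorem, so integrality of the index already forces the signature of a smooth spin 4-manifold to be divisible by 8.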

There is a general index theorem for arbitrary elliptic operators $P$ (not necessarily of order 1), acting between sections of two bundles $E$ and $F$. This takes the form
$$\mathrm{ind}(P) = \int_M R(P, E, F).$$
The right hand side here is a characteristic number for some bundle associated to $E$, $F$ and the symbol of $P$. By Chern-Weil theory, this can be written as a curvature integral, but this is less natural now.

Intermission: One day at the watering hole, an elephant looked around and carefully surveyed the turtles in view. After a few seconds' thought, he walked over to one turtle, raised his foot, and KICKED the turtle as far as he could (nearly a mile). A watching hyena asked the elephant why he did it. "Well, about 30 years ago I was walking through a stream and a turtle bit my foot. Finally I found the S.O.B. and repaid him for what he had done to me." "30 years!!! And you remembered... but HOW???" "I have turtle recall."

The original proofs: The index is computable because it is invariant under many big alterations and behaves well with respect to many natural operations:

1. It is invariant under deformations of $P$ amongst Fredholm operators.
2. It has good multiplicative and excision properties.
3. It defines an additive homomorphism on $K^0(M)$. Here we regard $E \mapsto \mathrm{ind}\, D_E$ as a semigroup homomorphism $\{\text{vector bundles on } M\} \to \mathbb{Z}$ and then extend to the formal group, which is $K^0(M)$.
4. It is invariant, in a suitable sense, under cobordisms of $M$.

Hirzebruch (1957) proved the signature theorem for smooth algebraic varieties, using the cobordism invariance of the signature and the $L$-class. An elaboration of this same strategy may be used to reduce the computation of the index of a general elliptic operator $P$ to that of a twisted Dirac operator on certain very special manifolds (products of complex projective spaces).

Steps of the proof:

1. Deform the $m$th order elliptic operator $P$ to a first order twisted Dirac operator $D$. This requires the use of pseudodifferential operators (polynomials are too rigid), and was one of the big early motivations for the theory of pseudodifferential operators. It also requires K-theory.

2. Replace the twisted Dirac operator on the manifold $M$ by a twisted Dirac operator on a new manifold $M'$ cobordant to $M$ (so $M \sqcup M' = \partial W$ for some compact manifold $W$). The index stays the same!

3. Check that the two sides are equal on enough examples, namely products of complex projective spaces (these generate the cobordism ring). This requires the multiplicativity of the index.

A simplification: Instead of the final two steps, it is in many ways preferable to compute the index of twisted Dirac operators on an arbitrary manifold $M$ directly. Later proofs involve verifying directly that the two sides of the index theorem agree for Dirac operators on arbitrary manifolds; this still uses K-theory and pseudodifferential operators, but obviates the bordism step. From now on I'll focus only on the index theorem for twisted Dirac operators.

The heat equation approach: Suppose that $P$ is a nonnegative second order self-adjoint elliptic operator. The examples I have in mind are of the form $P = D^*D$ or $P = DD^*$. Note that if $D = d + d^*$ on $\Omega^{\mathrm{even}}$ or $\Omega^+$, then $D^* = d + d^*$ on $\Omega^{\mathrm{odd}}$ or $\Omega^-$, so
$$D^*D = (d + d^*)^2 = dd^* + d^*d, \qquad DD^* = dd^* + d^*d$$
on $\Omega^{\mathrm{even}}$ ($\Omega^+$) and $\Omega^{\mathrm{odd}}$ ($\Omega^-$), respectively.

The initial value problem for the heat equation is
$$\Big(\frac{\partial}{\partial t} + P\Big) u = 0 \quad \text{on } M \times [0, \infty),$$
with initial data $u(x, 0) = u_0(x)$. The solution is given by the heat kernel $H$:
$$u(x, t) = (H u_0)(x, t) = \int_M H(x, y, t)\, u_0(y)\, dV_y.$$

Let $\{\phi_j, \lambda_j\}$ be the eigendata for $P$, i.e. $P\phi_j = \lambda_j \phi_j$, with $\{\phi_j\}$ an orthonormal basis of $L^2(M)$. Then
$$H(x, y, t) = \sum_{j=0}^\infty e^{-\lambda_j t}\, \phi_j(x)\, \phi_j(y),$$
because if $u_0 = \sum_{j=0}^\infty c_j \phi_j$, then
$$\int_M \sum_{j=0}^\infty e^{-\lambda_j t}\, \phi_j(x)\, \phi_j(y) \sum_{j'=0}^\infty c_{j'}\, \phi_{j'}(y)\, dV_y = \sum_{j=0}^\infty c_j\, e^{-\lambda_j t}\, \phi_j(x).$$

The McKean-Singer argument: Let $(\phi_j, \lambda_j)$ be the eigendata for $D^*D$ and $(\psi_l, \mu_l)$ be the eigendata for $DD^*$.

Spectral cancellation: Let $E(\lambda) = \{\phi : D^*D\phi = \lambda\phi\}$ and $F(\mu) = \{\psi : DD^*\psi = \mu\psi\}$. Then for $\lambda \ne 0$,
$$D : E(\lambda) \to F(\lambda)$$
is an isomorphism.

Proof: $D^*D\phi = \lambda\phi \Rightarrow D(D^*D\phi) = (DD^*)(D\phi) = \lambda(D\phi)$, so $\lambda$ is an eigenvalue for $DD^*$.

We also need to check that $D\phi \ne 0$: if $D\phi = 0$, then $0 = \|D\phi\|^2 = \langle D^*D\phi, \phi\rangle = \lambda \|\phi\|^2$, which forces $\phi = 0$ since $\lambda \ne 0$.

Conclusion: The heat kernel traces are given by the sums
$$\mathrm{Tr}(e^{-tD^*D}) = \sum_{j=0}^\infty e^{-\lambda_j t}, \qquad \mathrm{Tr}(e^{-tDD^*}) = \sum_{l=0}^\infty e^{-\mu_l t}.$$
By the spectral cancellation, the contributions of the nonzero eigenvalues cancel in pairs, so
$$\mathrm{Tr}(e^{-tD^*D}) - \mathrm{Tr}(e^{-tDD^*}) = \sum_{\lambda_j = 0} e^{-\lambda_j t} - \sum_{\mu_l = 0} e^{-\mu_l t} = \dim \ker D^*D - \dim \ker DD^* = \mathrm{ind}(D)!$$
In particular, $\mathrm{Tr}(e^{-tD^*D}) - \mathrm{Tr}(e^{-tDD^*})$ is independent of $t$!
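The whole argument survives in a finite-dimensional caricature (my own toy example, assuming numpy and scipy): for a rectangular matrix $D$, the nonzero eigenvalues of $D^T D$ and $DD^T$ agree, so the supertrace of the heat semigroups is independent of $t$ and equals the index.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
D = rng.standard_normal((3, 5))        # a generic 3 x 5 matrix: rank 3, ind(D) = 5 - 3 = 2

for t in [0.1, 1.0, 10.0]:
    supertrace = np.trace(expm(-t * D.T @ D)) - np.trace(expm(-t * D @ D.T))
    print(t, supertrace)               # prints 2.0 (up to rounding) for every t

# dim ker(D^T D) = 5 - rank = 2 and dim ker(D D^T) = 3 - rank = 0, so ind(D) = 2.
```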

Localization as $t \to 0$: The heat kernel $H(x, y, t)$ measures the amount of heat at $x \in M$ at time $t$, if an initial unit (delta function) of heat is placed at $y \in M$ at time $0$. As $t \to 0$, if $x \ne y$, then $H(x, y, t) \to 0$ (exponentially quickly). On the other hand, along the diagonal
$$H(x, x, t) \sim \sum_{j=0}^\infty a_j\, t^{-n/2 + j}.$$
The coefficients $a_j$ are called the heat trace invariants.

They are local, i.e. they can be written as
$$a_j = \int_M p_j(x)\, dV_x,$$
where $p_j$ is a universal polynomial in the coefficients of the operator. If $P$ is a geometric differential operator, then the $p_j$ are universal functions of the metric and its covariant derivatives up to some order.
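The leading term of this expansion can be seen numerically in the simplest example (a sketch of my own, assuming numpy): on the circle of circumference $2\pi$ the Laplacian has eigenvalues $n^2$, $n \in \mathbb{Z}$, and the expansion predicts $\mathrm{Tr}(e^{-t\Delta}) \sim a_0\, t^{-1/2}$ with $a_0 = \mathrm{vol}(S^1)/\sqrt{4\pi} = \sqrt{\pi}$.

```python
import numpy as np

n = np.arange(-2000, 2001)               # enough Fourier modes for the t values below
for t in [1.0, 0.1, 0.01, 0.001]:
    trace = np.exp(-t * n**2).sum()      # Tr(e^{-t Delta}) on S^1
    print(t, trace, np.sqrt(np.pi / t))  # the last two columns agree as t -> 0
```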

134 = j=0 ind(d) = Tre td D Tre tdd a j (D D)t n/2+j j=0 = a n/2 (D D) a n/2 (DD ). a j (DD )t n/2+j

135 = j=0 ind(d) = Tre td D Tre tdd a j (D D)t n/2+j j=0 = a n/2 (D D) a n/2 (DD ). a j (DD )t n/2+j Note: this implies that the index is zero when n/2 / N!

136 = j=0 ind(d) = Tre td D Tre tdd a j (D D)t n/2+j j=0 = a n/2 (D D) a n/2 (DD ). a j (DD )t n/2+j Note: this implies that the index is zero when n/2 / N! The difficulty: The term a n/2 is high up in the asymptotic expansion, and so very hard to compute explicitly!

V.K. Patodi did the computation explicitly for a few basic examples (Gauss-Bonnet, etc.). He discovered a miraculous cancellation which made the computation tractable.

His computation was extended somewhat by Atiyah-Bott-Patodi (1973).

P. Gilkey gave another proof based on geometric invariant theory, i.e. the idea that these universal polynomials are SO(n)-invariant, in an appropriate sense, and hence must be polynomials in the Pontrjagin forms. Then you calculate the coefficients by checking on lots of examples...

The big breakthrough was accomplished by Getzler (as an undergraduate!): he defined a rescaling of the Clifford algebra and the Clifford bundles in terms of the variable $t$, which has the effect that the index term $a_{n/2}$ occurs as the leading coefficient!

The first application: (non)existence of metrics of positive scalar curvature. Suppose $(M^{2n}, g)$ is a spin manifold. Thus the Dirac operator $D$ acts between sections of the spinor bundles $S^\pm$:
$$D : L^2(M; S^+) \to L^2(M; S^-).$$
The Weitzenböck formula states that
$$D^*D = DD^* = \nabla^*\nabla + \frac{R}{4},$$
where $R$ is the scalar curvature.

Corollary (Lichnerowicz): If $M$ admits a metric $g$ of positive scalar curvature, then $\ker D = \mathrm{coker}\, D = 0$ (if $D\phi = 0$, then $0 = \|\nabla\phi\|^2 + \int_M \frac{R}{4} |\phi|^2$, forcing $\phi = 0$). Hence $\mathrm{ind}\, D = 0$, and so the $\hat{A}$-genus of $M$ vanishes.

The Gromov-Lawson conjecture states that a slight generalization of this condition (the vanishing of the corresponding integral characteristic class, suitably modified to allow odd dimensions) is necessary and sufficient for the existence of a metric of positive scalar curvature (at least when $\pi_1(M) = 0$). It was proved (when $M$ is simply connected) in dimensions $\ge 5$ by Stolz.

Another application: computing dimensions of moduli spaces. Suppose $N$ is a nonlinear elliptic operator such that the solutions of $N(u) = 0$ correspond to some interesting geometric objects, e.g.

1. Self-dual Yang-Mills connections
2. Pseudoholomorphic curves
3. Complex structures

Deformation theory: given a solution $u_0$ of this equation, find all nearby solutions $u$. Write
$$N(u) = DN_{u_0}(u - u_0) + Q(u - u_0).$$

Let $L = DN_{u_0}$ be the linearization. The implicit function theorem implies that if $L$ is surjective then, in a neighbourhood of $u_0$,
$$\{\text{solutions of } N(u) = 0\} \longleftrightarrow \{\text{solutions of } L\phi = 0\} = \ker L.$$
The best situation (unobstructed deformation theory) is when $L$ is surjective, so that $\mathrm{ind}\, L = \dim \ker L$. This is computable, and is the local dimension of the solution space.

Example: Let $M^4$ be compact and $E$ a rank 2 bundle over $M$ with charge $c_2(E) = k \in \mathbb{Z}$. Then the moduli space $\mathcal{M}_k$ of all self-dual Yang-Mills instantons of charge $k$ should be (and is, for a generic choice of background metric) a smooth manifold, with
$$\dim \mathcal{M}_k = 8\, c_2(E) - 3\big(1 - b_1(M) + b_2^+(M)\big).$$
The freedom to choose a generic metric means one can always assume one is in the unobstructed situation.
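The formula is easy to evaluate; here is a tiny calculator (my own illustration) together with the classical special case $M = S^4$, where $b_1 = 0$ and $b_2^+ = 0$ give $\dim \mathcal{M}_k = 8k - 3$.

```python
def instanton_moduli_dim(k, b1, b2_plus):
    """Expected dimension of the charge-k SU(2) instanton moduli space."""
    return 8 * k - 3 * (1 - b1 + b2_plus)

for k in [1, 2, 3]:
    print(k, instanton_moduli_dim(k, b1=0, b2_plus=0))   # 5, 13, 21 on S^4
```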

The index formula for manifolds with boundary:

An early version (Atiyah-Bott) applies to elliptic operators with local (e.g. Dirichlet or Neumann) boundary conditions. Such boundary conditions are not available for Dirac-type operators.

Atiyah-Patodi-Singer (1975): Suppose that $Y = \partial M$ has a collar neighbourhood $[0, 1) \times Y \subset M$, so that the metric, the bundles, and the connection are all of product form in this neighbourhood. Then
$$D = \mathrm{cl}(e_n)\Big(\frac{\partial}{\partial x_n} + A\Big),$$
where $A : L^2(Y; S) \to L^2(Y; S)$ is self-adjoint!

Let $\{\phi_j, \lambda_j\}$ be the eigendata for $A$: $A\phi_j = \lambda_j \phi_j$, $\lambda_j \to \pm\infty$. Decompose
$$L^2(Y; S) = H^+ \oplus H^-,$$
where $H^+$ is the sum of the eigenspaces corresponding to positive eigenvalues and $H^-$ is the sum of the eigenspaces corresponding to negative eigenvalues. Let
$$\Pi : L^2(Y; S) \to H^+$$
be the orthogonal projection.

The Atiyah-Patodi-Singer boundary problem:
$$(D, \Pi) : L^2(X; S^+) \to L^2(X; S^-) \oplus L^2(Y; S), \qquad (D, \Pi)u = (Du, \Pi\, u|_Y).$$
This is Fredholm. The nonlocal boundary condition given by $\Pi$ is known as the APS boundary condition.

To state the index formula, we need one more definition: the eta function
$$\eta(s) = \sum_{\lambda_j \ne 0} (\mathrm{sgn}\, \lambda_j)\, |\lambda_j|^{-s}.$$
Weyl eigenvalue asymptotics for $A$ implies that $\eta(s)$ is holomorphic for $\mathrm{Re}(s) > n$.
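In finite dimensions the eta function needs no regularization, and the definition is easy to play with (a caricature of my own, assuming numpy): for a symmetric matrix, $\eta(0)$ is just the count of positive minus negative eigenvalues, i.e. the spectral asymmetry of $A$.

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((6, 6))
A = (B + B.T) / 2                          # a random symmetric "toy A"
lam = np.linalg.eigvalsh(A)
lam = lam[np.abs(lam) > 1e-12]             # drop any zero eigenvalues

def eta(s):
    return float(np.sum(np.sign(lam) * np.abs(lam) ** (-s)))

print(eta(0.0), int((lam > 0).sum() - (lam < 0).sum()))   # the two numbers agree
```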

Theorem: The eta function extends meromorphically to the whole complex plane. It is regular at $s = 0$.

Definition: The eta invariant of $A$, $\eta(A)$, is by definition the value of this eta function at $s = 0$.

The APS index theorem:
$$\mathrm{ind}(D, \Pi) = \int_M \hat{A}(TM)\, \mathrm{ch}(E) - \frac{1}{2}\big(\eta(A) + h\big),$$
where $h = \dim \ker A$.

This suggests that $\eta(A)$ is in many interesting ways the odd-dimensional generalization of the index. The APS index formula is then a sort of transgression formula, connecting the index in even dimensions and the eta invariant in odd dimensions.


More information

1 if 1 x 0 1 if 0 x 1

1 if 1 x 0 1 if 0 x 1 Chapter 3 Continuity In this chapter we begin by defining the fundamental notion of continuity for real valued functions of a single real variable. When trying to decide whether a given function is or

More information

ISOMETRIES OF R n KEITH CONRAD

ISOMETRIES OF R n KEITH CONRAD ISOMETRIES OF R n KEITH CONRAD 1. Introduction An isometry of R n is a function h: R n R n that preserves the distance between vectors: h(v) h(w) = v w for all v and w in R n, where (x 1,..., x n ) = x

More information

8.1 Examples, definitions, and basic properties

8.1 Examples, definitions, and basic properties 8 De Rham cohomology Last updated: May 21, 211. 8.1 Examples, definitions, and basic properties A k-form ω Ω k (M) is closed if dω =. It is exact if there is a (k 1)-form σ Ω k 1 (M) such that dσ = ω.

More information

IRREDUCIBLE OPERATOR SEMIGROUPS SUCH THAT AB AND BA ARE PROPORTIONAL. 1. Introduction

IRREDUCIBLE OPERATOR SEMIGROUPS SUCH THAT AB AND BA ARE PROPORTIONAL. 1. Introduction IRREDUCIBLE OPERATOR SEMIGROUPS SUCH THAT AB AND BA ARE PROPORTIONAL R. DRNOVŠEK, T. KOŠIR Dedicated to Prof. Heydar Radjavi on the occasion of his seventieth birthday. Abstract. Let S be an irreducible

More information

Math 550 Notes. Chapter 7. Jesse Crawford. Department of Mathematics Tarleton State University. Fall 2010

Math 550 Notes. Chapter 7. Jesse Crawford. Department of Mathematics Tarleton State University. Fall 2010 Math 550 Notes Chapter 7 Jesse Crawford Department of Mathematics Tarleton State University Fall 2010 (Tarleton State University) Math 550 Chapter 7 Fall 2010 1 / 34 Outline 1 Self-Adjoint and Normal Operators

More information

17. Inner product spaces Definition 17.1. Let V be a real vector space. An inner product on V is a function

17. Inner product spaces Definition 17.1. Let V be a real vector space. An inner product on V is a function 17. Inner product spaces Definition 17.1. Let V be a real vector space. An inner product on V is a function, : V V R, which is symmetric, that is u, v = v, u. bilinear, that is linear (in both factors):

More information

Finite dimensional topological vector spaces

Finite dimensional topological vector spaces Chapter 3 Finite dimensional topological vector spaces 3.1 Finite dimensional Hausdorff t.v.s. Let X be a vector space over the field K of real or complex numbers. We know from linear algebra that the

More information

1. Let P be the space of all polynomials (of one real variable and with real coefficients) with the norm

1. Let P be the space of all polynomials (of one real variable and with real coefficients) with the norm Uppsala Universitet Matematiska Institutionen Andreas Strömbergsson Prov i matematik Funktionalanalys Kurs: F3B, F4Sy, NVP 005-06-15 Skrivtid: 9 14 Tillåtna hjälpmedel: Manuella skrivdon, Kreyszigs bok

More information

1 Completeness of a Set of Eigenfunctions. Lecturer: Naoki Saito Scribe: Alexander Sheynis/Allen Xue. May 3, 2007. 1.1 The Neumann Boundary Condition

1 Completeness of a Set of Eigenfunctions. Lecturer: Naoki Saito Scribe: Alexander Sheynis/Allen Xue. May 3, 2007. 1.1 The Neumann Boundary Condition MAT 280: Laplacian Eigenfunctions: Theory, Applications, and Computations Lecture 11: Laplacian Eigenvalue Problems for General Domains III. Completeness of a Set of Eigenfunctions and the Justification

More information

The cover SU(2) SO(3) and related topics

The cover SU(2) SO(3) and related topics The cover SU(2) SO(3) and related topics Iordan Ganev December 2011 Abstract The subgroup U of unit quaternions is isomorphic to SU(2) and is a double cover of SO(3). This allows a simple computation of

More information

Notes on metric spaces

Notes on metric spaces Notes on metric spaces 1 Introduction The purpose of these notes is to quickly review some of the basic concepts from Real Analysis, Metric Spaces and some related results that will be used in this course.

More information

Chapter 5. Banach Spaces

Chapter 5. Banach Spaces 9 Chapter 5 Banach Spaces Many linear equations may be formulated in terms of a suitable linear operator acting on a Banach space. In this chapter, we study Banach spaces and linear operators acting on

More information

160 CHAPTER 4. VECTOR SPACES

160 CHAPTER 4. VECTOR SPACES 160 CHAPTER 4. VECTOR SPACES 4. Rank and Nullity In this section, we look at relationships between the row space, column space, null space of a matrix and its transpose. We will derive fundamental results

More information

NOV - 30211/II. 1. Let f(z) = sin z, z C. Then f(z) : 3. Let the sequence {a n } be given. (A) is bounded in the complex plane

NOV - 30211/II. 1. Let f(z) = sin z, z C. Then f(z) : 3. Let the sequence {a n } be given. (A) is bounded in the complex plane Mathematical Sciences Paper II Time Allowed : 75 Minutes] [Maximum Marks : 100 Note : This Paper contains Fifty (50) multiple choice questions. Each question carries Two () marks. Attempt All questions.

More information

COHOMOLOGY OF GROUPS

COHOMOLOGY OF GROUPS Actes, Congrès intern. Math., 1970. Tome 2, p. 47 à 51. COHOMOLOGY OF GROUPS by Daniel QUILLEN * This is a report of research done at the Institute for Advanced Study the past year. It includes some general

More information

The Matrix Elements of a 3 3 Orthogonal Matrix Revisited

The Matrix Elements of a 3 3 Orthogonal Matrix Revisited Physics 116A Winter 2011 The Matrix Elements of a 3 3 Orthogonal Matrix Revisited 1. Introduction In a class handout entitled, Three-Dimensional Proper and Improper Rotation Matrices, I provided a derivation

More information

MATH 423 Linear Algebra II Lecture 38: Generalized eigenvectors. Jordan canonical form (continued).

MATH 423 Linear Algebra II Lecture 38: Generalized eigenvectors. Jordan canonical form (continued). MATH 423 Linear Algebra II Lecture 38: Generalized eigenvectors Jordan canonical form (continued) Jordan canonical form A Jordan block is a square matrix of the form λ 1 0 0 0 0 λ 1 0 0 0 0 λ 0 0 J = 0

More information

Lecture 2: Homogeneous Coordinates, Lines and Conics

Lecture 2: Homogeneous Coordinates, Lines and Conics Lecture 2: Homogeneous Coordinates, Lines and Conics 1 Homogeneous Coordinates In Lecture 1 we derived the camera equations λx = P X, (1) where x = (x 1, x 2, 1), X = (X 1, X 2, X 3, 1) and P is a 3 4

More information

LEARNING OBJECTIVES FOR THIS CHAPTER

LEARNING OBJECTIVES FOR THIS CHAPTER CHAPTER 2 American mathematician Paul Halmos (1916 2006), who in 1942 published the first modern linear algebra book. The title of Halmos s book was the same as the title of this chapter. Finite-Dimensional

More information

tr g φ hdvol M. 2 The Euler-Lagrange equation for the energy functional is called the harmonic map equation:

tr g φ hdvol M. 2 The Euler-Lagrange equation for the energy functional is called the harmonic map equation: Notes prepared by Andy Huang (Rice University) In this note, we will discuss some motivating examples to guide us to seek holomorphic objects when dealing with harmonic maps. This will lead us to a brief

More information

Methods for Finding Bases

Methods for Finding Bases Methods for Finding Bases Bases for the subspaces of a matrix Row-reduction methods can be used to find bases. Let us now look at an example illustrating how to obtain bases for the row space, null space,

More information

1 VECTOR SPACES AND SUBSPACES

1 VECTOR SPACES AND SUBSPACES 1 VECTOR SPACES AND SUBSPACES What is a vector? Many are familiar with the concept of a vector as: Something which has magnitude and direction. an ordered pair or triple. a description for quantities such

More information

I. GROUPS: BASIC DEFINITIONS AND EXAMPLES

I. GROUPS: BASIC DEFINITIONS AND EXAMPLES I GROUPS: BASIC DEFINITIONS AND EXAMPLES Definition 1: An operation on a set G is a function : G G G Definition 2: A group is a set G which is equipped with an operation and a special element e G, called

More information

Linear Algebra: Determinants, Inverses, Rank

Linear Algebra: Determinants, Inverses, Rank D Linear Algebra: Determinants, Inverses, Rank D 1 Appendix D: LINEAR ALGEBRA: DETERMINANTS, INVERSES, RANK TABLE OF CONTENTS Page D.1. Introduction D 3 D.2. Determinants D 3 D.2.1. Some Properties of

More information

Chapter 6. Orthogonality

Chapter 6. Orthogonality 6.3 Orthogonal Matrices 1 Chapter 6. Orthogonality 6.3 Orthogonal Matrices Definition 6.4. An n n matrix A is orthogonal if A T A = I. Note. We will see that the columns of an orthogonal matrix must be

More information

SOLUTIONS TO EXERCISES FOR. MATHEMATICS 205A Part 3. Spaces with special properties

SOLUTIONS TO EXERCISES FOR. MATHEMATICS 205A Part 3. Spaces with special properties SOLUTIONS TO EXERCISES FOR MATHEMATICS 205A Part 3 Fall 2008 III. Spaces with special properties III.1 : Compact spaces I Problems from Munkres, 26, pp. 170 172 3. Show that a finite union of compact subspaces

More information

Notes on Determinant

Notes on Determinant ENGG2012B Advanced Engineering Mathematics Notes on Determinant Lecturer: Kenneth Shum Lecture 9-18/02/2013 The determinant of a system of linear equations determines whether the solution is unique, without

More information

a 11 x 1 + a 12 x 2 + + a 1n x n = b 1 a 21 x 1 + a 22 x 2 + + a 2n x n = b 2.

a 11 x 1 + a 12 x 2 + + a 1n x n = b 1 a 21 x 1 + a 22 x 2 + + a 2n x n = b 2. Chapter 1 LINEAR EQUATIONS 1.1 Introduction to linear equations A linear equation in n unknowns x 1, x,, x n is an equation of the form a 1 x 1 + a x + + a n x n = b, where a 1, a,..., a n, b are given

More information

The Open University s repository of research publications and other research outputs

The Open University s repository of research publications and other research outputs Open Research Online The Open University s repository of research publications and other research outputs The degree-diameter problem for circulant graphs of degree 8 and 9 Journal Article How to cite:

More information

Linear Maps. Isaiah Lankham, Bruno Nachtergaele, Anne Schilling (February 5, 2007)

Linear Maps. Isaiah Lankham, Bruno Nachtergaele, Anne Schilling (February 5, 2007) MAT067 University of California, Davis Winter 2007 Linear Maps Isaiah Lankham, Bruno Nachtergaele, Anne Schilling (February 5, 2007) As we have discussed in the lecture on What is Linear Algebra? one of

More information

Section 4.4 Inner Product Spaces

Section 4.4 Inner Product Spaces Section 4.4 Inner Product Spaces In our discussion of vector spaces the specific nature of F as a field, other than the fact that it is a field, has played virtually no role. In this section we no longer

More information

Mean value theorem, Taylors Theorem, Maxima and Minima.

Mean value theorem, Taylors Theorem, Maxima and Minima. MA 001 Preparatory Mathematics I. Complex numbers as ordered pairs. Argand s diagram. Triangle inequality. De Moivre s Theorem. Algebra: Quadratic equations and express-ions. Permutations and Combinations.

More information

3. Reaction Diffusion Equations Consider the following ODE model for population growth

3. Reaction Diffusion Equations Consider the following ODE model for population growth 3. Reaction Diffusion Equations Consider the following ODE model for population growth u t a u t u t, u 0 u 0 where u t denotes the population size at time t, and a u plays the role of the population dependent

More information

Lecture 13 - Basic Number Theory.

Lecture 13 - Basic Number Theory. Lecture 13 - Basic Number Theory. Boaz Barak March 22, 2010 Divisibility and primes Unless mentioned otherwise throughout this lecture all numbers are non-negative integers. We say that A divides B, denoted

More information

88 CHAPTER 2. VECTOR FUNCTIONS. . First, we need to compute T (s). a By definition, r (s) T (s) = 1 a sin s a. sin s a, cos s a

88 CHAPTER 2. VECTOR FUNCTIONS. . First, we need to compute T (s). a By definition, r (s) T (s) = 1 a sin s a. sin s a, cos s a 88 CHAPTER. VECTOR FUNCTIONS.4 Curvature.4.1 Definitions and Examples The notion of curvature measures how sharply a curve bends. We would expect the curvature to be 0 for a straight line, to be very small

More information

Sets of Fibre Homotopy Classes and Twisted Order Parameter Spaces

Sets of Fibre Homotopy Classes and Twisted Order Parameter Spaces Communications in Mathematical Physics Manuscript-Nr. (will be inserted by hand later) Sets of Fibre Homotopy Classes and Twisted Order Parameter Spaces Stefan Bechtluft-Sachs, Marco Hien Naturwissenschaftliche

More information

Surface bundles over S 1, the Thurston norm, and the Whitehead link

Surface bundles over S 1, the Thurston norm, and the Whitehead link Surface bundles over S 1, the Thurston norm, and the Whitehead link Michael Landry August 16, 2014 The Thurston norm is a powerful tool for studying the ways a 3-manifold can fiber over the circle. In

More information

ISU Department of Mathematics. Graduate Examination Policies and Procedures

ISU Department of Mathematics. Graduate Examination Policies and Procedures ISU Department of Mathematics Graduate Examination Policies and Procedures There are four primary criteria to be used in evaluating competence on written or oral exams. 1. Knowledge Has the student demonstrated

More information

Linear Algebra Done Wrong. Sergei Treil. Department of Mathematics, Brown University

Linear Algebra Done Wrong. Sergei Treil. Department of Mathematics, Brown University Linear Algebra Done Wrong Sergei Treil Department of Mathematics, Brown University Copyright c Sergei Treil, 2004, 2009, 2011, 2014 Preface The title of the book sounds a bit mysterious. Why should anyone

More information

Revised Version of Chapter 23. We learned long ago how to solve linear congruences. ax c (mod m)

Revised Version of Chapter 23. We learned long ago how to solve linear congruences. ax c (mod m) Chapter 23 Squares Modulo p Revised Version of Chapter 23 We learned long ago how to solve linear congruences ax c (mod m) (see Chapter 8). It s now time to take the plunge and move on to quadratic equations.

More information

Ideal Class Group and Units

Ideal Class Group and Units Chapter 4 Ideal Class Group and Units We are now interested in understanding two aspects of ring of integers of number fields: how principal they are (that is, what is the proportion of principal ideals

More information

The Determinant: a Means to Calculate Volume

The Determinant: a Means to Calculate Volume The Determinant: a Means to Calculate Volume Bo Peng August 20, 2007 Abstract This paper gives a definition of the determinant and lists many of its well-known properties Volumes of parallelepipeds are

More information

Quantum Mechanics and Representation Theory

Quantum Mechanics and Representation Theory Quantum Mechanics and Representation Theory Peter Woit Columbia University Texas Tech, November 21 2013 Peter Woit (Columbia University) Quantum Mechanics and Representation Theory November 2013 1 / 30

More information

MAT 200, Midterm Exam Solution. a. (5 points) Compute the determinant of the matrix A =

MAT 200, Midterm Exam Solution. a. (5 points) Compute the determinant of the matrix A = MAT 200, Midterm Exam Solution. (0 points total) a. (5 points) Compute the determinant of the matrix 2 2 0 A = 0 3 0 3 0 Answer: det A = 3. The most efficient way is to develop the determinant along the

More information

26. Determinants I. 1. Prehistory

26. Determinants I. 1. Prehistory 26. Determinants I 26.1 Prehistory 26.2 Definitions 26.3 Uniqueness and other properties 26.4 Existence Both as a careful review of a more pedestrian viewpoint, and as a transition to a coordinate-independent

More information