REPRESENTATIONS OF sl_2(C)


JAY TAYLOR

Basic Definitions and Introduction

Definition. A Lie algebra g is a vector space over a field k together with a bilinear map [·,·] : g × g → g, called the Lie bracket of g, such that the following hold:

  [x, x] = 0 for all x ∈ g,
  [x, [y, z]] + [y, [z, x]] + [z, [x, y]] = 0 for all x, y, z ∈ g.

Note. We call the latter axiom the Jacobi identity. It serves as a replacement for associativity, since a Lie algebra is not in general an associative algebra.

Example. (a) Let g be any vector space over any field k. Then we can endow g with the trivial bracket [x, y] = 0 for all x, y ∈ g. We refer to this as an abelian Lie algebra.

(b) Let k = R and g = R^3. We define a bracket on g using the standard vector product: if x = (x_1, x_2, x_3) and y = (y_1, y_2, y_3) then

  [x, y] = (x_2 y_3 − x_3 y_2, x_3 y_1 − x_1 y_3, x_1 y_2 − x_2 y_1).

(c) Let V be any finite-dimensional vector space over a field k. We define the general linear Lie algebra gl(V) to be the vector space of all linear maps from V to V, endowed with the commutator bracket [x, y] = x ∘ y − y ∘ x for all x, y ∈ gl(V).

(d) We now give a matrix analogue of the Lie algebra in example (c). Let k be any field and let gl(n, k) be the vector space of all n × n matrices over k. Then gl(n, k) is a Lie algebra with the commutator bracket [x, y] = xy − yx for all x, y ∈ gl(n, k). A basis for gl(n, k) as a vector space is given by the n × n unit matrices e_ij, which have entry 1 in the (i, j) position and zeros elsewhere. On this basis the commutator bracket is given by

  [e_ij, e_kl] = δ_jk e_il − δ_il e_kj,

where δ_ij is the Kronecker delta.
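(Aside: the commutator formula in example (d) is easy to verify by machine. The following numpy sketch is ours, not part of the original notes; the helpers `unit` and `bracket` are illustrative names.)

    # Check [e_ij, e_kl] = d_jk e_il - d_il e_kj for the unit matrices of gl(n, k), here n = 3.
    import itertools
    import numpy as np

    n = 3

    def unit(i, j):
        """The n x n unit matrix e_ij with a 1 in position (i, j) and zeros elsewhere."""
        m = np.zeros((n, n))
        m[i, j] = 1.0
        return m

    def bracket(x, y):
        """Commutator bracket [x, y] = xy - yx."""
        return x @ y - y @ x

    delta = lambda a, b: 1.0 if a == b else 0.0

    for i, j, k, l in itertools.product(range(n), repeat=4):
        lhs = bracket(unit(i, j), unit(k, l))
        rhs = delta(j, k) * unit(i, l) - delta(i, l) * unit(k, j)
        assert np.allclose(lhs, rhs)
    print("[e_ij, e_kl] = d_jk e_il - d_il e_kj verified for n =", n)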

(e) Let k be any field and let sl(2, k) = {x ∈ gl(2, k) | tr(x) = 0} ⊆ gl(2, k) be the vector subspace of gl(2, k) whose elements have trace 0. If x, y ∈ sl(2, k) then [x, y] = xy − yx ∈ sl(2, k), hence the commutator bracket gives sl(2, k) a Lie algebra structure. As a vector space sl(2, k) has a basis given by

  e = [ 0 1 ]    f = [ 0 0 ]    h = [ 1  0 ]
      [ 0 0 ]        [ 1 0 ]        [ 0 −1 ]

These elements have Lie bracket relations

  [e, f] = h,    [h, f] = −2f,    [h, e] = 2e.

(f) Let A be an associative algebra over a field k. Clearly A is a vector space over k, and we can give it the structure of a Lie algebra by endowing it with the commutator bracket [x, y] = xy − yx for all x, y ∈ A.

Definition. Let g be a Lie algebra over a field k. A derivation D : g → g is a linear map which satisfies the Leibniz rule

  D([x, y]) = [D(x), y] + [x, D(y)] for all x, y ∈ g.

Let g be a Lie algebra over a field k. Then Der(g), the vector space of all derivations of g, is a Lie algebra whose Lie bracket is the commutator bracket [D_1, D_2] = D_1 ∘ D_2 − D_2 ∘ D_1 for all D_1, D_2 ∈ Der(g).

We define a very important derivation known as the adjoint operator. For x ∈ g we define a map ad_x : g → g by ad_x(y) = [x, y] for all y ∈ g.

Claim. For any Lie algebra g we have ad_x ∈ Der(g) for all x ∈ g.

Proof. First of all we must show that ad_x is linear. For any α, β ∈ k and y, z ∈ g we have

  ad_x(αy + βz) = [x, αy + βz] = α[x, y] + β[x, z] = α ad_x(y) + β ad_x(z).

Hence the map is linear. We now show that this map satisfies the Leibniz rule. For all y, z ∈ g we have, using the Jacobi identity,

  ad_x([y, z]) = [x, [y, z]] = −[y, [z, x]] − [z, [x, y]]
              = [y, [x, z]] + [[x, y], z]
              = [ad_x(y), z] + [y, ad_x(z)].

Definition. For any Lie algebra g we call a derivation D ∈ Der(g) an inner derivation if there exists an element x ∈ g such that D = ad_x. Any derivation of g which is not an inner derivation is called an outer derivation.

Note that the derivation ad_x is not to be confused with the adjoint homomorphism. We define the adjoint homomorphism to be the map ad : g → gl(g) given by x ↦ ad_x for all x ∈ g. However, for this to make sense we must define what we mean by a Lie algebra homomorphism.
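Before turning to homomorphisms, here is a quick numerical check (ours, not part of the notes; the helper `br` is an illustrative name) that the 2 × 2 matrices in example (e) satisfy the stated bracket relations, and that ad_h obeys the Leibniz rule on the pair (e, f).

    import numpy as np

    e = np.array([[0.0, 1.0], [0.0, 0.0]])
    f = np.array([[0.0, 0.0], [1.0, 0.0]])
    h = np.array([[1.0, 0.0], [0.0, -1.0]])

    def br(x, y):
        """Commutator bracket [x, y] = xy - yx."""
        return x @ y - y @ x

    assert np.allclose(br(e, f), h)
    assert np.allclose(br(h, f), -2 * f)
    assert np.allclose(br(h, e), 2 * e)

    # Leibniz rule for ad_h:  ad_h([e, f]) = [ad_h(e), f] + [e, ad_h(f)].
    ad_h = lambda x: br(h, x)
    assert np.allclose(ad_h(br(e, f)), br(ad_h(e), f) + br(e, ad_h(f)))
    print("sl(2) relations and the Leibniz rule for ad_h hold.")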

Definition. Let g_1, g_2 be Lie algebras defined over a common field k. A homomorphism of Lie algebras ϕ : g_1 → g_2 is a linear map of vector spaces such that ϕ([x, y]) = [ϕ(x), ϕ(y)], i.e. it preserves the Lie bracket.

Claim. The map ad : g → gl(g) is a homomorphism of Lie algebras.

Proof. Clearly this map is linear by the linearity properties of the Lie bracket. Hence to show this is a homomorphism we must show that ad_{[x,y]} = [ad_x, ad_y] = ad_x ∘ ad_y − ad_y ∘ ad_x for all x, y ∈ g. We do this by showing that both sides agree on every z ∈ g:

  ad_{[x,y]}(z) = [[x, y], z] = −[z, [x, y]]
              = [x, [y, z]] + [y, [z, x]]
              = ad_x([y, z]) − ad_y([x, z])
              = (ad_x ∘ ad_y − ad_y ∘ ad_x)(z).

Definition. A representation of a Lie algebra g is a pair (V, ρ) where V is a vector space over k and ρ : g → gl(V) is a Lie algebra homomorphism.

Example. (a) Take V to be any vector space over k and ρ = 0 to be the zero map. We call this the trivial representation of g.

(b) The adjoint homomorphism of g is a representation of g with V = g and ρ = ad. We call this the adjoint representation of g.

Alternatively, instead of thinking of representations we can also consider modules for a Lie algebra g.

Definition. Let g be a Lie algebra over a field k. A g-module is a pair (V, ·) where V is a vector space and · : g × V → V is a map satisfying the following conditions for all x, y ∈ g, v, w ∈ V and λ, µ ∈ k:

  (λx + µy)·v = λ(x·v) + µ(y·v),
  x·(λv + µw) = λ(x·v) + µ(x·w),
  [x, y]·v = x·(y·v) − y·(x·v).

The Universal Enveloping Algebra

In the beginning one of the main stumbling blocks in the representation theory of Lie algebras was that a Lie algebra is not an associative algebra over k. This is quite irritating, as we already know a lot about the representation theory of associative algebras and we would like to apply it to Lie algebras. Enter the universal enveloping algebra.

Definition. Let g be a Lie algebra over k with basis x_i and Lie bracket determined by [x_i, x_j] = Σ_k c^k_ij x_k. The universal enveloping algebra U(g) is the associative algebra generated by the x_i with the defining relations

  x_i x_j − x_j x_i = Σ_k c^k_ij x_k.

We call the elements c^k_ij the structure constants of U(g).
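For sl(2, C) in the ordered basis x_1 = e, x_2 = f, x_3 = h, the structure constants can be read off from the bracket relations in example (e). The following sketch (ours, not from the notes; the helper `coords` is an illustrative name) recovers them from the matrices.

    import numpy as np

    e = np.array([[0.0, 1.0], [0.0, 0.0]])
    f = np.array([[0.0, 0.0], [1.0, 0.0]])
    h = np.array([[1.0, 0.0], [0.0, -1.0]])
    basis = [e, f, h]

    def coords(x):
        """Coordinates of a traceless 2x2 matrix x in the basis (e, f, h)."""
        return np.array([x[0, 1], x[1, 0], x[0, 0]])

    c = np.zeros((3, 3, 3))  # c[i, j, k] stores the structure constant c^k_ij
    for i in range(3):
        for j in range(3):
            c[i, j] = coords(basis[i] @ basis[j] - basis[j] @ basis[i])

    print(c[0, 1])  # coordinates of [e, f]  ->  [0. 0. 1.],  i.e. [e, f] = h
    print(c[2, 1])  # coordinates of [h, f]  ->  [0. -2. 0.], i.e. [h, f] = -2f
    print(c[2, 0])  # coordinates of [h, e]  ->  [2. 0. 0.],  i.e. [h, e] = 2e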

The universal enveloping algebra of g is an associative algebra generated "as freely as possible" by g, (to quote [Hum78]), with respect to the bracket relations. This algebra is infinite dimensional whenever g is non-zero, and it is commutative only when g is abelian.

Example. (a) Let g = span{x_1} be a 1-dimensional Lie algebra over a field k. The only Lie bracket relation on g comes from [x_1, x_1] = 0. Therefore we have only one structure constant c^1_11 = 0, and hence U(g) is the polynomial algebra k[x_1] in one variable.

(b) Consider sl(2, k) for any field k. Then U(sl(2, k)) is the associative algebra generated by e, f, h subject to the relations

  ef − fe = h,    hf − fh = −2f,    he − eh = 2e.

We know that sl(2, k) = span{f} ⊕ span{h} ⊕ span{e}. Therefore U(sl(2, k)) will contain U(span{f}), i.e. all polynomials in f. Similarly U(sl(2, k)) will contain all polynomials in h and all polynomials in e, as well as all products of these elements.

Theorem (Poincaré–Birkhoff–Witt or PBW Theorem). Let g be a finite dimensional Lie algebra over a field k and let x_1, ..., x_n be an ordered basis for g. Then the universal enveloping algebra U(g) has a basis given by

  {x_1^{a_1} ··· x_n^{a_n} | a_1, ..., a_n ≥ 0}.

Proof. See section 17.4 of [Hum78].

We note that we have only stated the PBW Theorem for finite dimensional Lie algebras, (for ease of notation), but it works perfectly well for infinite dimensional Lie algebras as well. See section 17.3 of [Hum78] for more details.

Corollary. The elements x_1, ..., x_n in U(g) are linearly independent, and hence g is a vector subspace of U(g).

Proof. Clear from the PBW Theorem.

Corollary. Let h ⊆ g be a Lie subalgebra of g. Then U(h) is a subalgebra of U(g).

Proof. Take a basis for h and extend it to a basis for g; the result is then clear from the PBW Theorem.

Proposition. Let g be a finite dimensional Lie algebra and let U(g) be its universal enveloping algebra. Then there is a bijective correspondence between g-modules and U(g)-modules. Furthermore, under this correspondence, a g-module is irreducible if and only if its corresponding U(g)-module is irreducible.

Proof. See the corresponding lemma in [EW06].
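Concretely, under this correspondence a PBW monomial f^a h^b e^c of U(sl(2, C)) acts on a representation (V, ρ) by the matrix product ρ(f)^a ρ(h)^b ρ(e)^c. Here is a small sketch (ours, purely illustrative; `act` is a hypothetical helper name) evaluating a few such monomials on the natural 2-dimensional representation.

    import itertools
    import numpy as np
    from numpy.linalg import matrix_power

    rho = {
        "e": np.array([[0.0, 1.0], [0.0, 0.0]]),
        "f": np.array([[0.0, 0.0], [1.0, 0.0]]),
        "h": np.array([[1.0, 0.0], [0.0, -1.0]]),
    }

    def act(a, b, c):
        """Matrix by which the PBW monomial f^a h^b e^c acts on the natural representation."""
        return matrix_power(rho["f"], a) @ matrix_power(rho["h"], b) @ matrix_power(rho["e"], c)

    # List the PBW monomials of total degree at most 2 together with their action.
    for a, b, c in itertools.product(range(3), repeat=3):
        if a + b + c <= 2:
            print(f"f^{a} h^{b} e^{c} acts as\n{act(a, b, c)}")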

The above Proposition makes our life simpler, as we only have to find irreducible U(g)-modules, which we are much more adept at doing. We make a few remarks about the universal enveloping algebra before continuing.

Let g be a Lie algebra over a field k and let A be an associative algebra over k, regarded as a Lie algebra via the commutator bracket. If ϕ : g → A is a homomorphism of Lie algebras, then ϕ factors uniquely through the universal enveloping algebra: there is a unique homomorphism of associative algebras U(g) → A making the triangle

  g → U(g) → A

commute, where g → U(g) is the canonical inclusion and the composite is ϕ. Hence the universal enveloping algebra is universal in the sense that it satisfies this universal property.

The second remark we make is that the definition we have given here depends on the choice of a basis for g. This isn't very convenient, and it can be shown that the following definition is equivalent to the one given above.

Definition. Let g be a Lie algebra over a field k and let Tg be the tensor algebra of g. Consider the ideal I = ⟨x ⊗ y − y ⊗ x − [x, y] | x, y ∈ g⟩ of the tensor algebra. Then the universal enveloping algebra of g is defined to be U(g) = Tg/I.

We finish this section with two more definitions.

Definition. Let g be a Lie algebra over a field k and let (V, ρ_V) and (W, ρ_W) be two representations of g. Then the tensor product of these representations is the pair (V ⊗ W, ρ_{V⊗W}) where we define

  ρ_{V⊗W}(x) = ρ_V(x) ⊗ Id + Id ⊗ ρ_W(x) for all x ∈ g.

Definition. Let g be a Lie algebra over a field k and let (V, ρ_V) be a representation of g. Then the dual representation of g is the pair (V*, ρ_{V*}) where V* is the dual vector space to V and ρ_{V*} is given by

  ρ_{V*}(x) = −ρ_V(x)* for all x ∈ g,

where ρ_V(x)* : V* → V* denotes the dual (transpose) map.

Irreducible Representations of sl(2, C)

Let (V, ρ_V) be a finite dimensional representation, (note not necessarily irreducible), of sl(2, C) = span{f, h, e}. We consider V as a g-module by setting x·v = ρ_V(x)v for all x ∈ g. For λ ∈ C we define a vector subspace of V by

  V_λ = {v ∈ V | h·v = λv}.

If λ is not an eigenvalue of ρ_V(h) then V_λ = {0}. Whenever V_λ ≠ {0} we call λ a weight of h in V and V_λ its associated weight space.
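As a first concrete example (ours, not in the notes), the weights of h on the adjoint representation can be found by diagonalising the matrix of ad_h in the basis (e, f, h); the helper `coords` is an illustrative name.

    import numpy as np

    e = np.array([[0.0, 1.0], [0.0, 0.0]])
    f = np.array([[0.0, 0.0], [1.0, 0.0]])
    h = np.array([[1.0, 0.0], [0.0, -1.0]])
    basis = [e, f, h]

    def coords(x):
        """Coordinates of a traceless 2x2 matrix in the basis (e, f, h)."""
        return np.array([x[0, 1], x[1, 0], x[0, 0]])

    # Matrix of ad_h = [h, -]; its columns are the coordinates of [h, e], [h, f], [h, h].
    ad_h = np.column_stack([coords(h @ x - x @ h) for x in basis])
    weights = np.linalg.eigvals(ad_h)
    print(sorted(weights.real))   # [-2.0, 0.0, 2.0], each with a one-dimensional weight space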

Proposition. For any v ∈ V_λ we have:

(a) e·v = 0 or e·v is an eigenvector of h with eigenvalue λ + 2,
(b) f·v = 0 or f·v is an eigenvector of h with eigenvalue λ − 2.

Proof. Recall the relation [h, e] = he − eh = 2e in sl(2, C). Hence for any v ∈ V_λ we have

  h·(e·v) = (eh + 2e)·v = λ(e·v) + 2(e·v) = (λ + 2)e·v.

Therefore either e·v = 0 or e·v is an eigenvector of h. Similarly, recalling the relation [h, f] = hf − fh = −2f, we obtain for any v ∈ V_λ

  h·(f·v) = (fh − 2f)·v = λ(f·v) − 2(f·v) = (λ − 2)f·v.

Given this proposition we can describe the action of the basis e, f, h of sl(2, C) on the eigenspaces of h in V in a fairly simple pictorial fashion: the weight spaces form a chain

  {0} ← V_{−λ} ⇄ V_{−λ+2} ⇄ ··· ⇄ V_{λ−2} ⇄ V_λ → {0},

in which e moves each weight space one step to the right (raising the eigenvalue by 2), f moves each weight space one step to the left (lowering the eigenvalue by 2), and h maps each weight space to itself. Note that in the following discussion it will become clear why we have only included the eigenspaces from −λ to λ in this picture.

Proposition. Let V be a finite dimensional sl(2, C)-module. Then there exists an eigenvector w ∈ V for h such that e·w = 0.

Proof. We are working over an algebraically closed field, which means ρ_V(h) has at least one eigenvalue and hence at least one eigenvector, say v, of eigenvalue λ. The vectors v, e·v, e²·v, ..., if non-zero, are eigenvectors for ρ_V(h) with the distinct eigenvalues λ, λ + 2, λ + 4, ... by the above Proposition, and hence form an infinite sequence of linearly independent vectors. However V is finite dimensional, so these cannot all be non-zero; hence there exists a k ≥ 0 such that e^k·v ≠ 0 and e^{k+1}·v = 0. Setting w = e^k·v we have h·w = (λ + 2k)w and e·w = 0.

We refer to an eigenvector w ∈ V_λ with the property that V_λ ≠ {0} and V_{λ+2} = {0} as a maximal vector of weight λ.
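The proof above is effectively an algorithm: start from any h-eigenvector and keep applying e until the result vanishes. A small numpy sketch (ours, not from the notes) carrying this out on the adjoint representation:

    import numpy as np

    e = np.array([[0.0, 1.0], [0.0, 0.0]])
    f = np.array([[0.0, 0.0], [1.0, 0.0]])
    h = np.array([[1.0, 0.0], [0.0, -1.0]])

    def br(x, y):
        return x @ y - y @ x

    # Start from the h-eigenvector v = f (of weight -2) and keep applying e, i.e. ad_e.
    v = f
    weight = -2
    while not np.allclose(br(e, v), 0):
        v = br(e, v)
        weight += 2
    print("maximal vector:\n", v, "\nof weight", weight)   # a multiple of e, of weight 2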

Lemma. Let V be a finite dimensional irreducible module for sl(2, C) and let w be a maximal vector of weight λ. Let l ≥ 0 be such that f^l·w ≠ 0 and f^{l+1}·w = 0. Then the vectors w, f·w, ..., f^l·w form a basis for V.

Proof. We aim to show that span{w, f·w, ..., f^l·w} = W ⊆ V is a non-zero submodule of V. The vectors w, f·w, ..., f^l·w are linearly independent because they are eigenvectors for ρ_V(h) with distinct eigenvalues. Clearly their span is invariant under the action of f, and it is invariant under the action of h as they are eigenvectors for ρ_V(h). Hence we must check that it is invariant under the action of e.

Now e·w = 0 ∈ W, as w was chosen this way. Recall the relation [e, f] = ef − fe = h in sl(2, C); then we have

  e·(f·w) = (h + fe)·w = h·w + f·(e·w) = λw + f·0 = λw,
  e·(f²·w) = (h + fe)·(f·w) = (λ − 2)f·w + λf·w = 2(λ − 1)f·w,
  ...
  e·(f^i·w) = i(λ − i + 1) f^{i−1}·w.

So we can see that e takes every vector f^i·w to some multiple of f^{i−1}·w. We now prove the final formula by induction on i; note that we know e·w = 0 and hence the statement is true for i = 0. For the case i + 1 we have

  e·(f^{i+1}·w) = (h + fe)·(f^i·w) = (λ − 2i)f^i·w + i(λ − i + 1)f^i·w
               = (λ + iλ − i² − i)f^i·w
               = [(i + 1)λ − (i + 1)i]f^i·w
               = (i + 1)(λ − i)f^i·w.

Therefore W is a non-zero submodule of V, and as V is irreducible this means V = W.

Corollary. We have λ = l in the above Lemma. Hence dim V = l + 1, and the weight λ of a maximal vector is a non-negative integer equal to dim V − 1.

Proof. Recall that f^{l+1}·w = 0 and f^l·w ≠ 0 for some l ≥ 0. Then we have

  0 = e·(f^{l+1}·w) = (l + 1)(λ − l)f^l·w  ⟹  (l + 1)(λ − l) = 0  ⟹  λ = l.

Note that we call the weight of a maximal vector the highest weight of V; this is the weight λ such that dim V = λ + 1.

Corollary. Let µ be a weight of h and V_µ the corresponding weight space. Then as a vector space V_µ is one dimensional.

Proof. The action of h on the basis of V is h·(f^i·w) = (λ − 2i)f^i·w, so each basis vector lies in a distinct weight space and each weight space is spanned by a single basis vector.

In the Lemma we wrote down an explicit basis for V and said exactly how e, f and h act on this basis. Therefore the irreducible module V is determined explicitly by its highest weight λ.
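In fact the formulas above let us write the action down as explicit matrices. The following sketch (ours, not part of the notes; `irrep` and `br` are illustrative names) builds the matrices of e, f, h in the basis w, f·w, ..., f^λ·w and checks the defining relations of sl(2, C).

    import numpy as np

    def irrep(lam):
        """Matrices (E, F, H) of e, f, h acting in the basis w, f.w, ..., f^lam.w."""
        n = lam + 1
        E, F, H = np.zeros((n, n)), np.zeros((n, n)), np.zeros((n, n))
        for i in range(n):
            H[i, i] = lam - 2 * i                    # h.(f^i w) = (lam - 2i) f^i w
            if i + 1 < n:
                F[i + 1, i] = 1.0                    # f.(f^i w) = f^{i+1} w
            if i > 0:
                E[i - 1, i] = i * (lam - i + 1)      # e.(f^i w) = i(lam - i + 1) f^{i-1} w
        return E, F, H

    def br(x, y):
        return x @ y - y @ x

    for lam in range(6):
        E, F, H = irrep(lam)
        assert np.allclose(br(E, F), H)
        assert np.allclose(br(H, E), 2 * E)
        assert np.allclose(br(H, F), -2 * F)
    print("The relations [e,f]=h, [h,e]=2e, [h,f]=-2f hold in V(0), ..., V(5).")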

Therefore we have shown the following theorem.

Theorem. Let V be a finite dimensional irreducible module for sl(2, C).

(a) Relative to the action of h, V is the direct sum of weight spaces V_µ, where µ = −λ, −λ + 2, ..., λ − 2, λ, with dim V = λ + 1 and dim V_µ = 1 for each such µ.

(b) V has, up to non-zero scalar multiples, a unique maximal vector, whose weight, called the highest weight of V, is λ.

(c) Let w, f·w, ..., f^λ·w be the basis of V prescribed in the Lemma. Then sl(2, C) acts on this basis, for 0 ≤ i ≤ λ, by

  h·(f^i·w) = (λ − 2i)f^i·w,
  f·(f^i·w) = f^{i+1}·w,
  e·(f^i·w) = i(λ − i + 1)f^{i−1}·w,

(interpreting f^{λ+1}·w as 0). In particular there exists at most one irreducible sl(2, C)-module, up to isomorphism, of each possible dimension λ + 1 for λ ≥ 0. We refer to this module as V(λ).

We comment that there is a concrete way to construct the irreducible sl(2, C)-modules V(λ). Let V = C[X, Y] be the vector space of polynomials in two variables. For each λ ≥ 0 we let V_λ be the vector subspace of all homogeneous polynomials of degree λ. This has a basis given by the monomials X^λ, X^{λ−1}Y, ..., XY^{λ−1}, Y^λ. We turn this vector subspace into a module for sl(2, C) by defining a Lie algebra homomorphism ϕ : sl(2, C) → gl(V_λ) in the following way:

  ϕ(e) = X ∂/∂Y,    ϕ(f) = Y ∂/∂X,    ϕ(h) = X ∂/∂X − Y ∂/∂Y.

It is left as an exercise to verify that this is indeed a representation of sl(2, C) and to construct an explicit isomorphism between this description of V(λ) and the description given above.
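That exercise can at least be checked mechanically. Here is a sympy sketch (ours, purely illustrative) verifying that the differential operators above satisfy the sl(2, C) relations on homogeneous polynomials and that X^λ is a maximal vector of weight λ.

    import sympy as sp

    X, Y = sp.symbols('X Y')

    e = lambda p: X * sp.diff(p, Y)
    f = lambda p: Y * sp.diff(p, X)
    h = lambda p: X * sp.diff(p, X) - Y * sp.diff(p, Y)

    def br(a, b):
        """Commutator of two operators acting on polynomials."""
        return lambda p: a(b(p)) - b(a(p))

    lam = 4
    monomials = [X**(lam - i) * Y**i for i in range(lam + 1)]
    for p in monomials:
        assert sp.simplify(br(e, f)(p) - h(p)) == 0          # [e, f] = h
        assert sp.simplify(br(h, e)(p) - 2 * e(p)) == 0      # [h, e] = 2e
        assert sp.simplify(br(h, f)(p) + 2 * f(p)) == 0      # [h, f] = -2f

    assert sp.simplify(h(X**lam) - lam * X**lam) == 0        # X^lam has weight lam
    assert e(X**lam) == 0                                     # and is a maximal vector
    print("Polynomial model checks pass for lambda =", lam)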

Low Dimensional Irreducible Representations

Let λ = 0. Then V(0) has dimension 1 and V(0) = V_0 = {v ∈ V | h·v = 0}. We know there is only one irreducible representation of dimension 1, hence this must be the trivial representation.

Let λ = 1. Then V(1) has dimension 2. Consider the natural representation of sl(2, C) acting on C². The action of h on the natural basis x = (1, 0)^T, y = (0, 1)^T of C² is

  h x = [ 1  0 ] [ 1 ] = [ 1 ] = x,      h y = [ 1  0 ] [ 0 ] = [  0 ] = −y.
        [ 0 −1 ] [ 0 ]   [ 0 ]                 [ 0 −1 ] [ 1 ]   [ −1 ]

Therefore h has eigenvalues ±1 on the natural basis of C², and we have C² = Cy ⊕ Cx = V_{−1} ⊕ V_1, which means V(1) is isomorphic to the natural representation of sl(2, C). Note that the highest weight vector is clearly given by x, or some scalar multiple of x.

Let λ = 2. Then V(2) has dimension 3. Recall the adjoint representation of sl(2, C) given by x ↦ ad_x for all x ∈ sl(2, C). The action of ad_h on the basis of sl(2, C) is given by

  ad_h(f) = −2f,    ad_h(h) = 0,    ad_h(e) = 2e.

Clearly as a vector space we have sl(2, C) = V_{−2} ⊕ V_0 ⊕ V_2 = span{f} ⊕ span{h} ⊕ span{e}, which are the eigenspaces of ad_h. Also e is a highest weight vector for V(2), as ad_e(e) = 0, and a basis for V(2) = sl(2, C) is given by e, f·e = [f, e] = −h and f²·e = [f, −h] = −2f.

Complete Reducibility of Finite Dimensional Representations

We would like to prove the following theorem, which is attributed to Weyl.

Theorem (Weyl's Theorem). Let g be a complex semisimple Lie algebra. Then every finite-dimensional representation of g is completely reducible.

Proof. See section 6.3 of [Hum78].

We will not prove this in general, as the proof is quite long, but will instead give an easier proof for representations of sl(2, C). We first introduce the so-called Casimir operator of sl(2, C), which we define to be

  c = ef + fe + ½h².

The Casimir operator was first discovered by embedding the Lie algebra R^3 with the vector cross product into sl(2, C), (see Exercise 8.7 in [EW06]). This operator has relationships with angular momentum in quantum mechanics; see the Wikipedia entry on Pauli matrices for more details.

Lemma. Let M be a finite dimensional sl(2, C)-module. Then c : M → M is a module homomorphism, in other words c(x·m) = x·c(m) for all x ∈ g and m ∈ M.

Proof. It is clearly sufficient to check this on the basis e, f, h of sl(2, C). For e, multiplying c on the left, we have

  e(ef + fe + ½h²) = e²f + efe + ½eh²
                   = eh + 2efe + ½eh²          using the identity e²f = eh + efe
                   = 2efe + ½heh               using the identity ½eh² = ½heh − eh.

Now considering multiplication on the other side we have

  (ef + fe + ½h²)e = efe + fe² + ½h²e
                   = 2efe − he + ½h²e          using the identity fe² = efe − he
                   = 2efe + ½heh               using the identity ½h²e = ½heh + he.

Now considering the action of f we have

  f(ef + fe + ½h²) = fef + f²e + ½fh²
                   = 2fef − fh + ½fh²          using the identity f²e = fef − fh
                   = 2fef + ½hfh               using the identity ½fh² = fh + ½hfh.

Now considering multiplication on the other side we have

  (ef + fe + ½h²)f = ef² + fef + ½h²f
                   = 2fef + hf + ½h²f          using the identity ef² = fef + hf
                   = 2fef + ½hfh               using the identity ½h²f = ½hfh − hf.

To prove this for h we recall the identities he − eh = 2e and hf − fh = −2f. Multiplying the first on the right by f and the second on the left by e gives

  hef − ehf = 2ef   and   ehf − efh = −2ef,

which together give hef = efh. Similarly, multiplying the first on the left by f and the second on the right by e gives

  fhe − feh = 2fe   and   hfe − fhe = −2fe,

which together give hfe = feh. This gives us hef + hfe = efh + feh, and so it is clear that

  h(ef + fe + ½h²) = hef + hfe + ½h³ = efh + feh + ½h³ = (ef + fe + ½h²)h.
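As a numerical sanity check (ours, not part of the notes; `irrep` plays the same illustrative role as in the earlier sketch), the Casimir operator does commute with the matrices of e, f, h on each V(λ).

    import numpy as np

    def irrep(lam):
        """Matrices (E, F, H) of e, f, h on V(lam) in the basis w, f.w, ..., f^lam.w."""
        n = lam + 1
        E, F, H = np.zeros((n, n)), np.zeros((n, n)), np.zeros((n, n))
        for i in range(n):
            H[i, i] = lam - 2 * i
            if i + 1 < n:
                F[i + 1, i] = 1.0
            if i > 0:
                E[i - 1, i] = i * (lam - i + 1)
        return E, F, H

    for lam in range(6):
        E, F, H = irrep(lam)
        C = E @ F + F @ E + 0.5 * (H @ H)       # the Casimir operator acting on V(lam)
        for X in (E, F, H):
            assert np.allclose(C @ X, X @ C)    # c is a module homomorphism
        # C is in fact a scalar matrix; the scalar is identified in the next proposition.
        assert np.allclose(C, C[0, 0] * np.eye(lam + 1))
    print("c commutes with the action of e, f and h on V(0), ..., V(5).")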

We comment briefly that the Casimir operator is an element of the centre of the universal enveloping algebra of sl(2, C). By Schur's Lemma we must then have that c acts as a scalar on each irreducible module V(λ).

Proposition. Let V(λ) be an irreducible sl(2, C)-module with highest weight λ ≥ 0. Then c(v) = c_λ v for all v ∈ V(λ), where c_λ = ½λ(λ + 2).

Proof. Let w ∈ V(λ) be the highest weight vector. Then we have

  c(w) = (ef + fe + ½h²)·w = ef·w + fe·w + ½h²·w = λw + 0 + ½λ²w = ½λ(λ + 2)w.

Hence c_λ = ½λ(λ + 2).

Let M be any finite dimensional sl(2, C)-module and let µ_1, ..., µ_r be the distinct eigenvalues of c acting on M. Now, as c is a module homomorphism, c − µ_i 1_M is a module homomorphism for each 1 ≤ i ≤ r. Clearly ker((c − µ_i 1_M)^{m_i}) is a submodule of M for each 1 ≤ i ≤ r, hence for some multiplicities m_i we have a decomposition

  M = ⊕_{i=1}^{r} ker((c − µ_i 1_M)^{m_i}).

In other words, to express M as a direct sum of irreducible modules we may assume that c has only one eigenvalue on M.

Proposition. Let M be a finite dimensional sl(2, C)-module on which c has a single eigenvalue. Then any irreducible submodule of M is isomorphic to V(λ) for one specific highest weight λ.

Proof. By our comment above we can assume that c has only one eigenvalue on the module M. Let U be an irreducible submodule of M; then U is isomorphic to V(λ) for some highest weight λ. We know c acts on V(λ) as multiplication by c_λ = ½λ(λ + 2), and hence the unique eigenvalue of c on M is µ = ½λ(λ + 2). Suppose W ≅ V(λ′) is another irreducible submodule of M. Then c acts on W as multiplication by ½λ′(λ′ + 2). However, as µ is the unique eigenvalue of c on M, we must have

  ½λ(λ + 2) = ½λ′(λ′ + 2)  ⟹  λ = λ′  ⟹  U ≅ W ≅ V(λ).

Proposition. Let M be a finite dimensional sl(2, C)-module on which c has a single eigenvalue, and let N be a submodule of M. Then any irreducible submodule of M/N is isomorphic to the same V(λ).

Proof. As before we can assume M = ker((c − µ 1_M)^m) for some m ≥ 0, where µ is the unique eigenvalue of c on M. Now we can consider the action of c on the quotient space M/N, and it is clear that (c − µ 1_{M/N})^m (v + N) = 0 for all v ∈ M. Therefore c has just one eigenvalue on the quotient space M/N, namely µ, and so, as in the previous proposition, any irreducible submodule of M/N is isomorphic to V(λ).

We now put all these ingredients together to prove our main result.

Theorem. Any finite dimensional sl(2, C)-module is completely reducible.

Proof. Let M be a finite dimensional module for sl(2, C); by the discussion above we may assume that c has only one eigenvalue on M. Suppose U is a maximal completely reducible submodule of M and that U ≠ M, so that the quotient M/U is non-zero. Then M/U has an irreducible submodule, which is isomorphic to V(λ) for some highest weight λ. Looking at the largest eigenvalue of h appearing in this submodule tells us that there exists an eigenvector for h in M/U with eigenvalue λ. We now use the fact, (proved in Theorem 9.16 of [EW06]), that representations of Lie algebras preserve Jordan decompositions. As h is a diagonal matrix, h acts diagonalisably on M, and so there exists an h-eigenvector v ∈ M \ U such that h·v = λv. If e·v ≠ 0 then e·v would be an h-eigenvector with eigenvalue λ + 2, but this contradicts the maximality of λ. Hence e·v = 0 and v is a highest weight vector. Let W be the submodule of M generated by v; then W is irreducible. As v ∉ U, the irreducibility of W implies that U ∩ W = {0}, and so U ⊕ W is a larger completely reducible submodule of M, a contradiction. Hence U = M and M is completely reducible.

References

[EW06] Karin Erdmann and Mark J. Wildon. Introduction to Lie Algebras. Springer Undergraduate Mathematics Series. Springer-Verlag, 2006.

[Hum78] James E. Humphreys. Introduction to Lie Algebras and Representation Theory. Number 9 in Graduate Texts in Mathematics. Springer-Verlag, 1978.
