
some algebra prelim solutions

David Morawski
August 19, 2012

These solutions, chronologically listed, have not been checked by any prelim graders.

Problem (Spring 2008, #5). Show that $f(x) = x^p - x + a$ is irreducible over $\mathbb{F}_p$ whenever $a \in \mathbb{F}_p$ is not zero.

Proof. First, note that $f(x)$ has no roots in $\mathbb{F}_p$: since $b^p \equiv b \pmod{p}$ (Fermat's Little Theorem), $f(b) = b^p - b + a = a \neq 0$ for every $b \in \mathbb{F}_p$.

Now, let $\alpha$ be a root of $f(x)$ in the algebraic closure of $\mathbb{F}_p$. Note that $\alpha + i$ for $i = 1, \ldots, p$ is also a root of $f(x)$. This is because in a field of characteristic $p$ we have $(x+y)^p = x^p + y^p$ for every $x$ and $y$ in the field, so
$$f(\alpha + i) = (\alpha + i)^p - (\alpha + i) + a = \alpha^p + i^p - (\alpha + i) + a.$$
Again by Fermat's Little Theorem, $i^p = i$, so this becomes $\alpha^p - \alpha + a$, which is zero by the assumption that $\alpha$ is a root. Thus, $\mathbb{F}_p(\alpha)$ contains every root of $f(x)$, and so
$$f(x) = \prod_{i=1}^{p} \bigl(x - (\alpha + i)\bigr)$$
over $\mathbb{F}_p(\alpha)$.

Now suppose, toward a contradiction, that $f(x) = g(x)h(x)$ in $\mathbb{F}_p[x]$ with $0 < \deg g(x) < p$. Then, letting $d = \deg g(x)$,
$$g(x) = \prod_{j=1}^{d} \bigl(x - (\alpha + i_j)\bigr)$$
over $\mathbb{F}_p(\alpha)$. Expanding this product shows that the coefficient of $x^{d-1}$ is $-\sum_{j=1}^{d} (\alpha + i_j)$, which is equal to $-(d\alpha + k)$ for some $k \in \mathbb{F}_p$. Since $g(x)$ has coefficients in $\mathbb{F}_p$, and $d$ is not zero in $\mathbb{F}_p$ (because $0 < d < p$), this means that $\alpha$ lies in $\mathbb{F}_p$, contradicting the fact that $f(x)$ has no roots in $\mathbb{F}_p$.
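As a quick sanity check (not part of the proof), the irreducibility claim can be verified by computer for small primes. The sketch below assumes SymPy is available; it tests every nonzero $a$ for a few small $p$.

```python
# Sanity check (not part of the proof): x^p - x + a should be irreducible
# over F_p for every nonzero a in F_p. Assumes SymPy is installed.
from sympy import Poly
from sympy.abc import x

for p in (2, 3, 5, 7):
    for a in range(1, p):
        f = Poly(x**p - x + a, x, modulus=p)
        assert f.is_irreducible, (p, a)
print("x^p - x + a is irreducible over F_p for p in {2, 3, 5, 7} and all a != 0")
```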

Problem (Fall 2008, #2). Prove that if a polynomial $f$ in $k[x]$, with $k$ a field, has a repeated irreducible factor $g$ in $k[x]$, then $g$ divides the greatest common divisor of $f$ and its derivative. Be sure to explain what "derivative" can mean without limits.

Proof. First, let's define the algebraic derivative of a polynomial $f(x) = a_n x^n + \cdots + a_1 x + a_0$ in $k[x]$. We define the map $D : k[x] \to k[x]$ on the $k$-basis of $k[x]$ by $Dx^n = nx^{n-1}$ and extend $k$-linearly. To see that the product rule holds, we evaluate $D$ on the product of two basis elements:
$$D(x^n x^m) = Dx^{n+m} = (n+m)x^{n+m-1}.$$
On the other hand,
$$(Dx^n)x^m + x^n(Dx^m) = nx^{n-1}x^m + mx^n x^{m-1} = (n+m)x^{n+m-1}.$$
Since the product rule holds on the basis elements and $D$ is $k$-linear, it holds for all polynomials in $k[x]$. Note that $D1 = D(1 \cdot 1) = (D1) \cdot 1 + 1 \cdot (D1)$, so $D1 = 0$. Extending $k$-linearly, this means that the derivative of any constant polynomial is zero (which was left a bit ambiguous in our definition). Conversely, if
$$D(a_n x^n + a_{n-1}x^{n-1} + \cdots) = na_n x^{n-1} + (n-1)a_{n-1}x^{n-2} + \cdots = 0$$
with $a_n \neq 0$ and the characteristic of $k$ not dividing $n$, then $na_n = 0$ forces $n = 0$; that is, the polynomial is constant.

Now suppose $f(x) = g(x)^n h(x)$, where $n > 1$ and $g$ is irreducible. Certainly $g$ divides $f$, so we just need to show that $g$ also divides $Df$. We compute $Df$ using the product rule:
$$Df = D(g^n h) = (Dg^n)h + g^n Dh = ng^{n-1}(Dg)h + g^n Dh.$$
If the characteristic of $k$ divides $n$, then $Df = g^n Dh$, and $g$ divides both $f$ and $Df$. If the characteristic of $k$ does not divide $n$, note that $g$ is nonconstant (any constant is a unit in $k[x]$, since $k$ is a field); since $n - 1 > 0$, we can factor $g(x)$ out of $Df$, giving
$$Df = g\bigl(ng^{n-2}(Dg)h + g^{n-1}Dh\bigr).$$
So $g$ divides $Df$. Hence, in any characteristic, $g$ must divide the greatest common divisor of $f$ and $Df$.

Problem (Fall 2009, #7). Show that $f(z) = wz^4 - 4z + 1 = 0$ has multiple roots $z$ only for $w = 27$.

Proof. (I am assuming that we are working over a field, although the problem does not say so.) A multiple root $\alpha$ of $f(z)$ is also a root of the algebraic derivative $Df(z) = 4wz^3 - 4$. That is, $4w\alpha^3 - 4 = 0$, which means that $w = \alpha^{-3}$. So, plugging $\alpha$ into $f$ gives
$$f(\alpha) = \alpha^{-3}\alpha^4 - 4\alpha + 1 = -3\alpha + 1 = 0,$$
which simplifies to $\alpha = 1/3$. So $w = \alpha^{-3} = (1/3)^{-3} = 27$, as we wished to show.
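Both statements above are easy to sanity-check by computer (this is illustration, not part of the proofs). The sketch below assumes SymPy: it checks a repeated-factor example against $\gcd(f, Df)$, and computes the resultant of $wz^4 - 4z + 1$ with its $z$-derivative, which should vanish exactly at the values of $w$ admitting a multiple root.

```python
# Computational sanity checks (not part of the proofs), assuming SymPy.
from sympy import symbols, gcd, diff, resultant, factor

x, z, w = symbols('x z w')

# Fall 2008 #2: if g^2 divides f, then g divides gcd(f, Df).
g = x**2 + x + 1
f = g**2 * (x**3 - 2)
print(gcd(f, diff(f, x)))    # expect a constant multiple of g, here x**2 + x + 1

# Fall 2009 #7: multiple roots of w*z**4 - 4*z + 1 occur exactly where the
# resultant of f and df/dz (as polynomials in z) vanishes.
fz = w*z**4 - 4*z + 1
print(factor(resultant(fz, diff(fz, z), z)))    # expect 256*w**3*(w - 27)
```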

Problem (Spring 2010, #5). Let $R$ be a commutative ring of endomorphisms of a finite-dimensional complex vector space $V$. Prove that there is at least one (non-zero) common eigenvector for $R$ on $V$.

Proof. Let $W$ be the $\mathbb{C}$-linear span of $R$. Then $W$ is a subspace of $\operatorname{End}_{\mathbb{C}}(V)$, hence finite-dimensional since $V$ is. By definition, every element of $W$ is a $\mathbb{C}$-linear combination of elements of $R$; so, because the elements of $R$ commute, $W$ is also commutative (with respect to composition). Let $T_1, \ldots, T_n$ be a basis for $W$.

Let's now show that we can find a simultaneous eigenvector for this basis. We induct on $n$. For $n = 1$: since $\mathbb{C}$ is algebraically closed, the minimal polynomial of $T_1$ has a root over $\mathbb{C}$, which is an eigenvalue corresponding to a nonzero eigenvector. Now assume $T_1, \ldots, T_{n-1}$ have a simultaneous eigenvector, say $v \in V$ with $T_i v = \lambda_i v$. Letting $V_{\lambda_i}$ denote the $\lambda_i$-eigenspace of $T_i$ for $1 \le i < n$, we show that the intersection $\bigcap_{i=1}^{n-1} V_{\lambda_i}$ is $T_n$-stable. Indeed, a vector $w$ in the intersection lies in each $V_{\lambda_i}$, so $T_i w = \lambda_i w$ for $1 \le i < n$. Then
$$T_i T_n w = T_n T_i w = T_n \lambda_i w = \lambda_i T_n w,$$
since $T_i T_n = T_n T_i$. So $T_n w$ lies in each $V_{\lambda_i}$, hence lies in the intersection.

Now, since the intersection $\bigcap_{i=1}^{n-1} V_{\lambda_i}$ is $T_n$-stable (and nonzero, since it contains $v$), it makes sense to speak of the minimal polynomial of $T_n$ on this subspace. Once again, since $\mathbb{C}$ is algebraically closed, this polynomial has a root, which corresponds to a nonzero eigenvector $v'$ for $T_n$ inside the intersection. Since $v'$ lies in $\bigcap_{i=1}^{n-1} V_{\lambda_i}$, it is simultaneously an eigenvector for the entire basis $T_1, \ldots, T_n$, say with respective eigenvalues $\gamma_i$.

Now it just remains to show that $v'$ is a simultaneous eigenvector for $R$. If $r$ is any element of $R$, then $r = \sum_{i=1}^{n} a_i T_i$ for some $a_i \in \mathbb{C}$. So
$$r(v') = \Bigl(\sum_i a_i T_i\Bigr)(v') = \sum_i a_i T_i(v') = \sum_i a_i \gamma_i v' = \Bigl(\sum_i a_i \gamma_i\Bigr) v'.$$
Thus, $v'$ is an eigenvector for every element of $R$.
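A tiny illustration of the phenomenon (not the general construction, and not part of the proof), assuming SymPy: here $X$ is chosen with distinct eigenvalues, so each eigenspace is one-dimensional and every eigenvector of $X$ is automatically a common eigenvector of anything commuting with $X$.

```python
# Small illustration, assuming SymPy: if X has distinct eigenvalues, each
# eigenspace is a line, so any Y commuting with X preserves these lines, and
# every eigenvector of X is a common eigenvector of X and Y.
from sympy import Matrix

X = Matrix([[1, 2], [0, 3]])          # eigenvalues 1 and 3 (distinct)
Y = 2*X**2 - 5*X + Matrix.eye(2)      # a polynomial in X, so XY = YX
assert X*Y == Y*X

for eigval, mult, basis in X.eigenvects():
    v = basis[0]
    mu = (Y*v)[0] / v[0] if v[0] != 0 else (Y*v)[1] / v[1]
    assert Y*v == mu*v                # v is an eigenvector of Y as well
    print(f"X-eigenvalue {eigval}: common eigenvector {list(v)}, Y-eigenvalue {mu}")
```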

Problem (Fall 2010, #3). Show that $f(x) = x^{25} - 10$ factors linearly over $\mathbb{F}_{101}$, the field of 101 elements.

Proof. Note that $10^4 = 100 \cdot 100 \equiv (-1)(-1) = 1 \pmod{101}$. So $10$ is a root of $f(x)$, since $10^{25} = 10^{24} \cdot 10 \equiv 1 \cdot 10 = 10$. Now consider $\mathbb{F}_{101}^{\times}$, which is cyclic of order 100. Since 25 divides 100, there is an element $\alpha$ of order 25. For $j = 0, 1, \ldots, 24$, we have
$$(10\alpha^j)^{25} = 10^{25}(\alpha^{25})^j \equiv 10,$$
giving 25 roots of $f(x)$. And each $\alpha^j$ is distinct since $\{1, \alpha, \alpha^2, \ldots, \alpha^{24}\}$ is the cyclic group generated by $\alpha$. Combining this with the fact that 10 is invertible, it follows that these 25 roots are distinct. So $f(x)$, of degree 25, splits into linear factors.

Problem (Fall 2010, #5). Let $X, Y$ be $n$ by $n$ complex matrices such that $XY = YX$. Suppose that there are $n$ by $n$ invertible matrices $A, B$ such that $AXA^{-1}$ and $BYB^{-1}$ are diagonal. Show that there is an $n$ by $n$ invertible matrix $C$ such that $CXC^{-1}$ and $CYC^{-1}$ are diagonal.

Proof. First we prove that the existence of the matrices $A$ and $B$ is equivalent to $X$ and $Y$ each having a basis of eigenvectors. To see this, let $D = AXA^{-1}$. Then $De_i = \lambda_i e_i$ for each $i$, where $\lambda_i$ is the $i$-th diagonal entry of $D$. From $D = AXA^{-1}$ we get $XA^{-1} = A^{-1}D$, so
$$XA^{-1}e_i = A^{-1}De_i = A^{-1}\lambda_i e_i = \lambda_i A^{-1}e_i.$$
So $A^{-1}e_i$ is an eigenvector for $X$. Because $A^{-1}$ is an isomorphism, the vectors $\{A^{-1}e_i\}_{i=1,\ldots,n}$ are a basis for $\mathbb{C}^n$, hence an eigenbasis. Conversely, suppose $X$ has an eigenbasis $v_1, \ldots, v_n$ for $\mathbb{C}^n$ with $Xv_i = \lambda_i v_i$. Let $P$ be the matrix whose $i$-th column is $v_i$. Then $P$ has full rank, hence $P^{-1}$ exists. Furthermore, $P^{-1}XP$ is diagonal, since
$$P^{-1}XPe_i = P^{-1}Xv_i = P^{-1}\lambda_i v_i = \lambda_i P^{-1}v_i = \lambda_i e_i.$$
So there is a matrix, namely $P^{-1}$, such that conjugating $X$ by $P^{-1}$ yields a diagonal matrix.

Returning to the original problem, we see that $X$ and $Y$ both have an eigenbasis for $\mathbb{C}^n$ (i.e., are diagonalizable). Then $\mathbb{C}^n = \bigoplus_{\lambda} V_{\lambda}$, the direct sum of the distinct eigenspaces of $X$. Since $X$ and $Y$ commute, each $V_{\lambda}$ is $Y$-stable.

Let's show that for a diagonalizable linear operator $T$ on a finite-dimensional vector space $V$ over a field $k$, whenever $W \subseteq V$ is $T$-stable, $T$ is diagonalizable on $W$. Let $f(x)$ be the minimal polynomial for $T$ on $V$ over $k$. Since $W$ is $T$-stable, it makes sense to speak of the minimal polynomial $g(x)$ for $T$ on $W$. Since $f(T)(V) = 0$, also $f(T)(W) = 0$; thus, by definition of $g(x)$, $g(x)$ divides $f(x)$. And because $T$ is diagonalizable, $f(x)$ splits into distinct linear factors. Then so does $g(x)$, and hence $T$ is diagonalizable on $W$.

So each $V_{\lambda}$ has a basis of eigenvectors for $Y$. And because these vectors lie in $V_{\lambda}$, they are also eigenvectors for $X$. Taking the union of these bases gives a basis for $\mathbb{C}^n$ consisting of simultaneous eigenvectors for $X$ and $Y$. Then, from what we saw above, letting these vectors be the columns of a matrix $C$, we have that $C^{-1}XC$ and $C^{-1}YC$ are diagonal.
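As a quick check of the Fall 2010 #3 claim (not part of the proof), one can count the roots of $x^{25} - 10$ in $\mathbb{F}_{101}$ directly; the optional SymPy call, assumed available, should confirm the factorization into 25 distinct linear factors.

```python
# Count the roots of x^25 - 10 over F_101 directly (not part of the proof).
roots = [b for b in range(101) if pow(b, 25, 101) == 10]
print(len(roots))      # expect 25 distinct roots, so x^25 - 10 splits into linear factors
print(10 in roots)     # expect True: 10 itself is a root, as used in the proof

# Optional cross-check with SymPy's factorization over F_101.
from sympy import factor_list, degree
from sympy.abc import x
factors = factor_list(x**25 - 10, modulus=101)[1]
print(len(factors), all(degree(f, x) == 1 for f, mult in factors))   # expect 25 True
```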

Problem (Fall 2011, #1). Describe all abelian groups of order 72.

Proof. By the Fundamental Theorem of Finite Abelian Groups, any abelian group $G$ of order 72 is isomorphic to a direct product of cyclic groups of prime power order, unique up to reordering. Also, by Sun-Ze's theorem, we may collapse a direct product $\mathbb{Z}/m \times \mathbb{Z}/n$ to $\mathbb{Z}/mn$ whenever $m$ and $n$ are relatively prime. With that said, we enumerate all abelian groups of order $72 = 2^3 \cdot 3^2$:

$\mathbb{Z}/8 \times \mathbb{Z}/9 \cong \mathbb{Z}/72$
$\mathbb{Z}/8 \times \mathbb{Z}/3 \times \mathbb{Z}/3 \cong \mathbb{Z}/24 \times \mathbb{Z}/3$
$\mathbb{Z}/4 \times \mathbb{Z}/2 \times \mathbb{Z}/9 \cong \mathbb{Z}/4 \times \mathbb{Z}/18$
$\mathbb{Z}/4 \times \mathbb{Z}/2 \times \mathbb{Z}/3 \times \mathbb{Z}/3 \cong \mathbb{Z}/12 \times \mathbb{Z}/6$
$\mathbb{Z}/2 \times \mathbb{Z}/2 \times \mathbb{Z}/2 \times \mathbb{Z}/9 \cong \mathbb{Z}/2 \times \mathbb{Z}/2 \times \mathbb{Z}/18$
$\mathbb{Z}/2 \times \mathbb{Z}/2 \times \mathbb{Z}/2 \times \mathbb{Z}/3 \times \mathbb{Z}/3 \cong \mathbb{Z}/2 \times \mathbb{Z}/6 \times \mathbb{Z}/6$

Problem (Fall 2011, #5). Prove that the ideal generated by 7 and $x^2 + 1$ is maximal in $\mathbb{Z}[x]$.

Proof. Let $M$ denote the ideal generated by 7 and $x^2 + 1$. To show that $M$ is maximal, we'll show that $\mathbb{Z}[x]/M$ is a field. Note that
$$\mathbb{Z}[x]/M \cong (\mathbb{Z}[x]/7)/(x^2+1) \cong (\mathbb{Z}/7)[x]/(x^2+1).$$
So $\mathbb{Z}[x]/M$ is a field if and only if $(\mathbb{Z}/7)[x]/(x^2+1)$ is a field. Since $\mathbb{Z}/7$ is a field, $(\mathbb{Z}/7)[x]$ is a unique factorization domain. Now notice that $x^2 + 1$ has no roots in $\mathbb{Z}/7$ (a root would be an element of order 4 in $(\mathbb{Z}/7)^{\times}$, a group of order 6, which is impossible by Lagrange), so, being of degree 2, it is irreducible. But in a unique factorization domain, this means that $x^2 + 1$ is prime and generates a prime ideal. So $(\mathbb{Z}/7)[x]/(x^2+1)$ is an integral domain. But it is a finite integral domain, so it is a field.

Problem (Spring 2012, #1). Show that all groups of order 35 are cyclic.

Proof. Let $G$ be a group of order $35 = 5 \cdot 7$. By Sylow's theorem, the number of 5-Sylow subgroups is congruent to 1 modulo 5 and must divide 35. The only number satisfying these conditions is 1, so there is a unique 5-Sylow subgroup $H$. Furthermore, since $p$-Sylow subgroups are conjugate to one another, $H$ being the unique 5-Sylow subgroup means that $H$ must be normal in $G$. Similarly, the number of 7-Sylow subgroups is congruent to 1 modulo 7 and must divide 35; again, the only possibility is that there is a unique (hence normal) 7-Sylow subgroup $K$.

Note that if $x$ is an element of both $H$ and $K$, then Lagrange tells us that $x^5 = 1 = x^7$. So the order of $x$ divides both 5 and 7, and thus the order of $x$ must be 1. Hence $H \cap K = 1$. Since $H$ and $K$ are normal and $H \cap K = 1$, we have $HK \cong H \times K \cong \mathbb{Z}/5 \times \mathbb{Z}/7$. And because 5 and 7 are relatively prime, $\mathbb{Z}/5 \times \mathbb{Z}/7 \cong \mathbb{Z}/35$. So $HK$ is a subgroup of $G$ of order 35, and so $HK = G$. Therefore $G$ is isomorphic to $\mathbb{Z}/35$, hence cyclic.
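The enumeration of abelian groups of order 72 can be reproduced mechanically: the groups of order $2^3 \cdot 3^2$ correspond to pairs consisting of a partition of 3 and a partition of 2. A minimal sketch in Python (standard library only; the helper names are my own, not from the original solutions):

```python
# Enumerate abelian groups of order 72 = 2^3 * 3^2 via partitions of the
# exponents (a mechanical cross-check, not part of the proof).
def partitions(n, max_part=None):
    """Yield the partitions of n as non-increasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def abelian_groups(prime_powers):
    """prime_powers: dict {p: e}. Return lists of cyclic factor orders p^k."""
    groups = [[]]
    for p, e in prime_powers.items():
        groups = [g + [p**k for k in part]
                  for g in groups for part in partitions(e)]
    return groups

for g in abelian_groups({2: 3, 3: 2}):
    print(" x ".join(f"Z/{m}" for m in g))
# Expect 6 groups, matching the list above (before collapsing coprime factors).
```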

Problem (Spring 2012, #2). Let $G$ be a finite group and $H$ a subgroup of index 2. Show that $H$ is normal.

Proof. By definition, the index of $H$ in $G$ is the number of left cosets of $H$, which is the same as the number of right cosets of $H$. Thus there are exactly two left cosets: $H$ and $gH$, where $g$ is any element not in $H$. Because these two cosets are disjoint and their union is $G$, we have $gH = G \setminus H$. Similarly, $Hg = G \setminus H$. So $gH = Hg$ for every $g \notin H$, and trivially $gH = H = Hg$ for every $g \in H$. Hence $gH = Hg$ for all $g \in G$, which is exactly the statement that $H$ is normal.
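A concrete instance (illustration only, not part of the proof): in $S_3$ the alternating subgroup $A_3$ has index 2, and one can check by brute force that its left and right cosets coincide. A minimal sketch using only the Python standard library:

```python
# Brute-force check that an index-2 subgroup has equal left and right cosets,
# illustrated with H = A_3 inside G = S_3 (not part of the proof).
from itertools import permutations

def compose(p, q):
    """(p o q)(i) = p[q[i]] for permutations given as tuples."""
    return tuple(p[q[i]] for i in range(len(q)))

def sign(p):
    inversions = sum(1 for i in range(len(p))
                     for j in range(i + 1, len(p)) if p[i] > p[j])
    return 1 if inversions % 2 == 0 else -1

G = list(permutations(range(3)))        # S_3
H = [p for p in G if sign(p) == 1]      # A_3, a subgroup of index 2

assert all({compose(g, h) for h in H} == {compose(h, g) for h in H} for g in G)
print("gH == Hg for every g in S_3, so A_3 is normal")
```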