TOPICS IN ALGEBRAIC COMBINATORICS. Richard P. Stanley


Course notes for (Algebraic Combinatorics), M.I.T., Spring 2010. Preliminary (incomplete) version of 24 April 2010.

Acknowledgment. I am grateful to Sergey Fomin for his careful reading of the manuscript and for several helpful suggestions.

1 Walks in graphs.

Given a finite set $S$ and integer $k \ge 0$, let $\binom{S}{k}$ denote the set of $k$-element subsets of $S$, and let $\left(\!\binom{S}{k}\!\right)$ denote the set of $k$-element multisubsets (sets with repeated elements) on $S$. For instance, if $S = \{1,2,3\}$ then (using abbreviated notation)
$$\binom{S}{2} = \{12, 13, 23\}, \qquad \left(\!\binom{S}{2}\!\right) = \{11, 22, 33, 12, 13, 23\}.$$

A (finite) graph $G$ consists of a vertex set $V = \{v_1, \dots, v_p\}$ and edge set $E = \{e_1, \dots, e_q\}$, together with a function $\varphi : E \to \left(\!\binom{V}{2}\!\right)$. If $\varphi(e) = uv$ (short for $\{u,v\}$), then we think of $e$ as connecting $u$ and $v$, or equivalently $e$ is incident to $u$ and $v$. If there is at least one edge incident to $u$ and $v$ then we say that the vertices $u$ and $v$ are adjacent. If $\varphi(e) = vv$, then we call $e$ a loop at $v$. If several edges $e_1, \dots, e_j$ ($j > 1$) satisfy $\varphi(e_1) = \cdots = \varphi(e_j) = uv$, then we say that there is a multiple edge between $u$ and $v$. A graph without loops or multiple edges is called simple. In this case we can think of $E$ as just a subset of $\binom{V}{2}$ [why?].

The adjacency matrix of the graph $G$ is the $p \times p$ matrix $A = A(G)$, over the field of complex numbers, whose $(i,j)$-entry $a_{ij}$ is equal to the number of edges incident to both $v_i$ and $v_j$. Thus $A$ is a real symmetric matrix (and hence has real eigenvalues) whose trace is the number of loops in $G$.

A walk in $G$ of length $\ell$ from vertex $u$ to vertex $v$ is a sequence $v_1, e_1, v_2, e_2, \dots, v_\ell, e_\ell, v_{\ell+1}$ such that:

- each $v_i$ is a vertex of $G$,
- each $e_j$ is an edge of $G$,
- the vertices of $e_i$ are $v_i$ and $v_{i+1}$, for $1 \le i \le \ell$,
- $v_1 = u$ and $v_{\ell+1} = v$.

1.1 Theorem. For any integer $\ell \ge 1$, the $(i,j)$-entry of the matrix $A(G)^\ell$ is equal to the number of walks from $v_i$ to $v_j$ in $G$ of length $\ell$.

Proof. This is an immediate consequence of the definition of matrix multiplication. Let $A = (a_{ij})$. The $(i,j)$-entry of $A(G)^\ell$ is given by
$$(A(G)^\ell)_{ij} = \sum a_{i i_1} a_{i_1 i_2} \cdots a_{i_{\ell-1} j},$$
where the sum ranges over all sequences $(i_1, \dots, i_{\ell-1})$ with $1 \le i_k \le p$. But since $a_{rs}$ is the number of edges between $v_r$ and $v_s$, it follows that the summand $a_{i i_1} a_{i_1 i_2} \cdots a_{i_{\ell-1} j}$ in the above sum is just the number (which may be 0) of walks of length $\ell$ from $v_i$ to $v_j$ of the form $v_i, e_1, v_{i_1}, e_2, \dots, v_{i_{\ell-1}}, e_\ell, v_j$ (since there are $a_{i i_1}$ choices for $e_1$, $a_{i_1 i_2}$ choices for $e_2$, etc.). Hence summing over all $(i_1, \dots, i_{\ell-1})$ gives the total number of walks of length $\ell$ from $v_i$ to $v_j$, as desired.

We wish to use Theorem 1.1 to obtain an explicit formula for the number $(A(G)^\ell)_{ij}$ of walks of length $\ell$ in $G$ from $v_i$ to $v_j$. The formula we give will depend on the eigenvalues of $A(G)$. The eigenvalues of $A(G)$ are also called simply the eigenvalues of $G$. Recall that a real symmetric $p \times p$ matrix $M$ has $p$ linearly independent real eigenvectors, which can in fact be chosen to be orthonormal (i.e., orthogonal and of unit length). Let $u_1, \dots, u_p$ be real orthonormal eigenvectors for $M$, with corresponding eigenvalues $\lambda_1, \dots, \lambda_p$. All vectors $u$ will be regarded as $p \times 1$ column vectors. We let $t$ denote transpose, so $u^t$ is a $1 \times p$ row vector. Thus the dot (or scalar

or inner) product of the vectors $u$ and $v$ is given by $u^t v$ (ordinary matrix multiplication). In particular, $u_i^t u_j = \delta_{ij}$ (the Kronecker delta). Let $U = (u_{ij})$ be the matrix whose columns are $u_1, \dots, u_p$, denoted $U = [u_1, \dots, u_p]$. Thus $U$ is an orthogonal matrix, and $U^t = U^{-1}$ is the matrix whose rows are $u_1^t, \dots, u_p^t$. Recall from linear algebra that the matrix $U$ diagonalizes $M$, i.e.,
$$U^{-1} M U = \mathrm{diag}(\lambda_1, \dots, \lambda_p),$$
where $\mathrm{diag}(\lambda_1, \dots, \lambda_p)$ denotes the diagonal matrix with diagonal entries $\lambda_1, \dots, \lambda_p$. In fact, we have $M U = [\lambda_1 u_1, \dots, \lambda_p u_p]$, so
$$(U^{-1} M U)_{ij} = (U^t M U)_{ij} = \lambda_j u_i^t u_j = \lambda_j \delta_{ij}.$$

1.2 Corollary. Given the graph $G$ as above, fix the two vertices $v_i$ and $v_j$. Let $\lambda_1, \dots, \lambda_p$ be the eigenvalues of the adjacency matrix $A(G)$. Then there exist real numbers $c_1, \dots, c_p$ such that for all $\ell \ge 1$, we have
$$(A(G)^\ell)_{ij} = c_1 \lambda_1^\ell + \cdots + c_p \lambda_p^\ell.$$
In fact, if $U = (u_{rs})$ is a real orthogonal matrix such that $U^{-1} A U = \mathrm{diag}(\lambda_1, \dots, \lambda_p)$, then we have $c_k = u_{ik} u_{jk}$.

Proof. We have [why?]
$$U^{-1} A^\ell U = \mathrm{diag}(\lambda_1^\ell, \dots, \lambda_p^\ell).$$

Hence $A^\ell = U \,\mathrm{diag}(\lambda_1^\ell, \dots, \lambda_p^\ell)\, U^{-1}$. Taking the $(i,j)$-entry of both sides (and using $U^{-1} = U^t$) gives [why?]
$$(A^\ell)_{ij} = \sum_k u_{ik} \lambda_k^\ell u_{jk},$$
as desired.

In order for Corollary 1.2 to be of any use we must be able to compute the eigenvalues $\lambda_1, \dots, \lambda_p$ as well as the diagonalizing matrix $U$ (or the eigenvectors $u_i$). There is one interesting special situation in which it is not necessary to compute $U$. A closed walk in $G$ is a walk that ends where it begins. The number of closed walks in $G$ of length $\ell$ starting at $v_i$ is therefore given by $(A(G)^\ell)_{ii}$, so the total number $f_G(\ell)$ of closed walks of length $\ell$ is given by
$$f_G(\ell) = \sum_{i=1}^{p} (A(G)^\ell)_{ii} = \mathrm{tr}(A(G)^\ell),$$
where $\mathrm{tr}$ denotes trace (the sum of the main diagonal entries). Now recall that the trace of a square matrix is the sum of its eigenvalues. If the matrix $M$ has eigenvalues $\lambda_1, \dots, \lambda_p$, then [why?] $M^\ell$ has eigenvalues $\lambda_1^\ell, \dots, \lambda_p^\ell$. Hence we have proved the following.

1.3 Corollary. Suppose $A(G)$ has eigenvalues $\lambda_1, \dots, \lambda_p$. Then the number of closed walks in $G$ of length $\ell$ is given by
$$f_G(\ell) = \lambda_1^\ell + \cdots + \lambda_p^\ell.$$

We are now in a position to use various tricks and techniques from linear algebra to count walks in graphs. Conversely, it is sometimes possible to count the walks by combinatorial reasoning and use the resulting formula to determine the eigenvalues of $G$. As a first simple example, we consider the complete graph $K_p$ with vertex set $V = \{v_1, \dots, v_p\}$ and one edge between any two distinct vertices. Thus $K_p$ has $p$ vertices and $\binom{p}{2} = \frac{1}{2} p(p-1)$ edges.
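As a purely illustrative check (not part of the original notes; it assumes NumPy is available), the following sketch compares the entries of $A^\ell$ against a brute-force enumeration of walks (Theorem 1.1), and the trace of $A^\ell$ against the sum of $\ell$th powers of the eigenvalues (Corollary 1.3), for a small graph of my own choosing, the 4-cycle:

```python
import numpy as np
from itertools import product

# Adjacency matrix of a small simple graph: the 4-cycle v0 - v1 - v2 - v3 - v0.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])

def count_walks(i, j, l):
    """Brute force: sum over all intermediate vertex sequences, multiplying
    edge multiplicities along each step (exactly the sum in Theorem 1.1)."""
    total = 0
    for mid in product(range(len(A)), repeat=l - 1):
        seq = (i,) + mid + (j,)
        term = 1
        for a, b in zip(seq, seq[1:]):
            term *= A[a, b]
        total += term
    return total

l = 4
Al = np.linalg.matrix_power(A, l)
# Theorem 1.1: (A^l)_{ij} = number of walks of length l from v_i to v_j
assert all(Al[i, j] == count_walks(i, j, l) for i in range(4) for j in range(4))

# Corollary 1.3: tr(A^l) = lambda_1^l + ... + lambda_p^l
eigs = np.linalg.eigvalsh(A.astype(float))
for l in range(1, 8):
    assert np.isclose(np.trace(np.linalg.matrix_power(A, l)), np.sum(eigs ** l))
```

The 4-cycle has spectrum $2, 0, 0, -2$, so for example the total number of closed walks of length 4 is $2^4 + (-2)^4 = 32$, i.e., 8 per vertex.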

1.4 Lemma. Let $J$ denote the $p \times p$ matrix of all 1's. Then the eigenvalues of $J$ are $p$ (with multiplicity one) and 0 (with multiplicity $p - 1$).

Proof. Since all rows are equal and nonzero, we have $\mathrm{rank}(J) = 1$. Since a $p \times p$ matrix of rank $m$ has at least $p - m$ eigenvalues equal to 0, we conclude that $J$ has at least $p - 1$ eigenvalues equal to 0. Since $\mathrm{tr}(J) = p$ and the trace is the sum of the eigenvalues, it follows that the remaining eigenvalue of $J$ is equal to $p$.

1.5 Proposition. The eigenvalues of the complete graph $K_p$ are as follows: an eigenvalue of $-1$ with multiplicity $p - 1$, and an eigenvalue of $p - 1$ with multiplicity one.

Proof. We have $A(K_p) = J - I$, where $I$ denotes the $p \times p$ identity matrix. If the eigenvalues of a matrix $M$ are $\mu_1, \dots, \mu_p$, then the eigenvalues of $M + cI$ (where $c$ is a scalar) are $\mu_1 + c, \dots, \mu_p + c$ [why?]. The proof follows from Lemma 1.4.

1.6 Corollary. The number of closed walks of length $\ell$ in $K_p$ from some vertex $v_i$ to itself is given by
$$(A(K_p)^\ell)_{ii} = \frac{1}{p}\left((p-1)^\ell + (p-1)(-1)^\ell\right). \tag{1}$$
(Note that this is also the number of sequences $(i_1, \dots, i_\ell)$ of numbers $1, 2, \dots, p$ such that $i_1 = i$, no two consecutive terms are equal, and $i_\ell \ne i_1$ [why?].)

Proof. By Corollary 1.3 and Proposition 1.5, the total number of closed walks in $K_p$ of length $\ell$ is equal to $(p-1)^\ell + (p-1)(-1)^\ell$. By the symmetry of the graph $K_p$, the number of closed walks of length $\ell$ from $v_i$ to itself does not depend on $i$. ("All vertices look the same.") Hence we can divide the total number of closed walks by $p$ (the number of vertices) to get the desired answer.

What about non-closed walks in $K_p$? It's not hard to diagonalize the matrix $A(K_p)$ explicitly (or equivalently, to compute its eigenvectors), but

there is an even simpler special argument. We have
$$(J - I)^\ell = \sum_{k=0}^{\ell} \binom{\ell}{k} (-1)^{\ell-k} J^k, \tag{2}$$
by the binomial theorem. Now for $k > 0$ we have $J^k = p^{k-1} J$ [why?], while $J^0 = I$. (It is not clear a priori what the correct value of $J^0$ is, but in order for equation (2) to be valid we must take $J^0 = I$.) Hence
$$(J - I)^\ell = \sum_{k=1}^{\ell} \binom{\ell}{k} (-1)^{\ell-k} p^{k-1} J + (-1)^\ell I.$$
Again by the binomial theorem we have
$$(J - I)^\ell = \frac{1}{p}\left((p-1)^\ell J - (-1)^\ell J\right) + (-1)^\ell I = \frac{1}{p}(p-1)^\ell J + \frac{(-1)^\ell}{p}(pI - J). \tag{3}$$
Taking the $(i,j)$-entry of each side when $i \ne j$ yields
$$(A(K_p)^\ell)_{ij} = \frac{1}{p}\left((p-1)^\ell - (-1)^\ell\right). \tag{4}$$
If we take the $(i,i)$-entry of (3) then we recover equation (1). Note the curious fact that if $i \ne j$ then
$$(A(K_p)^\ell)_{ii} - (A(K_p)^\ell)_{ij} = (-1)^\ell.$$
We could also have deduced (4) from Corollary 1.6 using
$$\sum_{i=1}^{p} \sum_{j=1}^{p} \left(A(K_p)^\ell\right)_{ij} = p(p-1)^\ell,$$
the total number of walks of length $\ell$ in $K_p$. Details are left to the reader.

We now will show how equation (1) itself determines the eigenvalues of $A(K_p)$. Thus if (1) is proved without first computing the eigenvalues of $A(K_p)$ (which in fact is what we did two paragraphs ago), then we have

another means to compute the eigenvalues. The argument we will give can be applied to any graph $G$, not just $K_p$. We begin with a simple lemma.

1.7 Lemma. Suppose $\alpha_1, \dots, \alpha_r$ and $\beta_1, \dots, \beta_s$ are nonzero complex numbers such that for all positive integers $\ell$, we have
$$\alpha_1^\ell + \cdots + \alpha_r^\ell = \beta_1^\ell + \cdots + \beta_s^\ell. \tag{5}$$
Then $r = s$ and the $\alpha$'s are just a permutation of the $\beta$'s.

Proof. We will use the powerful method of generating functions. Let $x$ be a complex number whose absolute value is close to 0. Multiply (5) by $x^\ell$ and sum over all $\ell \ge 1$. The geometric series we obtain will converge, and we get
$$\frac{\alpha_1 x}{1 - \alpha_1 x} + \cdots + \frac{\alpha_r x}{1 - \alpha_r x} = \frac{\beta_1 x}{1 - \beta_1 x} + \cdots + \frac{\beta_s x}{1 - \beta_s x}. \tag{6}$$
This is an identity valid for sufficiently small (in modulus) complex numbers. By clearing denominators we obtain a polynomial identity. But if two polynomials in $x$ agree for infinitely many values, then they are the same polynomial [why?]. Hence equation (6) is actually valid for all complex numbers $x$ (ignoring values of $x$ which give rise to a zero denominator).

Fix a complex number $\gamma \ne 0$. Multiply (6) by $1 - \gamma x$ and let $x \to 1/\gamma$. The left-hand side becomes the number of $\alpha_i$'s which are equal to $\gamma$, while the right-hand side becomes the number of $\beta_j$'s which are equal to $\gamma$ [why?]. Hence these numbers agree for all $\gamma$, so the lemma is proved.

1.8 Example. Suppose that $G$ is a graph with 12 vertices, and that the number of closed walks of length $\ell$ in $G$ is equal to $3 \cdot 5^\ell + 4^\ell + 2(-2)^\ell + 4$. Then it follows from Corollary 1.3 and Lemma 1.7 [why?] that the eigenvalues of $A(G)$ are given by $5, 5, 5, 4, -2, -2, 1, 1, 1, 1, 0, 0$.
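The formulas of this section are easy to sanity-check numerically. The sketch below (my own illustration, assuming NumPy; not part of the notes) verifies equations (1) and (4), the "curious fact," and the consistency of the eigenvalue list claimed in Example 1.8:

```python
import numpy as np

p = 6
A = np.ones((p, p)) - np.eye(p)          # A(K_p) = J - I

for l in range(1, 9):
    Al = np.linalg.matrix_power(A, l)
    # equation (1): diagonal entries of A(K_p)^l
    assert np.isclose(Al[0, 0], ((p - 1)**l + (p - 1) * (-1)**l) / p)
    # equation (4): off-diagonal entries, i != j
    assert np.isclose(Al[0, 1], ((p - 1)**l - (-1)**l) / p)
    # the "curious fact"
    assert np.isclose(Al[0, 0] - Al[0, 1], (-1)**l)

# Example 1.8: the claimed eigenvalue multiset reproduces the walk counts
eigs = [5, 5, 5, 4, -2, -2, 1, 1, 1, 1, 0, 0]
for l in range(1, 10):
    assert sum(e**l for e in eigs) == 3 * 5**l + 4**l + 2 * (-2)**l + 4
```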

2 Cubes and the Radon transform.

Let us now consider a more interesting example of a graph $G$, one whose eigenvalues have come up in a variety of applications. Let $\mathbb{Z}_2$ denote the cyclic group of order 2, with elements 0 and 1, and group operation being addition modulo 2. Thus $0 + 0 = 0$, $0 + 1 = 1 + 0 = 1$, and $1 + 1 = 0$. Let $\mathbb{Z}_2^n$ denote the direct product of $\mathbb{Z}_2$ with itself $n$ times, so the elements of $\mathbb{Z}_2^n$ are $n$-tuples $(a_1, \dots, a_n)$ of 0's and 1's, under the operation of component-wise addition. Define a graph $C_n$, called the $n$-cube, as follows: the vertex set of $C_n$ is given by $V(C_n) = \mathbb{Z}_2^n$, and two vertices $u$ and $v$ are connected by an edge if they differ in exactly one component. Equivalently, $u + v$ has exactly one nonzero component. If we regard $\mathbb{Z}_2^n$ as consisting of real vectors, then these vectors form the set of vertices of an $n$-dimensional cube. Moreover, two vertices of the cube lie on an edge (in the usual geometric sense) if and only if they form an edge of $C_n$. This explains why $C_n$ is called the $n$-cube. We also see that walks in $C_n$ have a nice geometric interpretation: they are simply walks along the edges of an $n$-dimensional cube.

We want to determine explicitly the eigenvalues and eigenvectors of $C_n$. We will do this by a somewhat indirect but extremely useful and powerful technique, the finite Radon transform. Let $\mathcal{V}$ denote the set of all functions $f : \mathbb{Z}_2^n \to \mathbb{R}$, where $\mathbb{R}$ denotes the field of real numbers. (Note: for groups other than $\mathbb{Z}_2^n$ it is necessary to use complex numbers rather than real numbers. We could use complex numbers here, but there is no need to do so.) Note that $\mathcal{V}$ is a vector space over $\mathbb{R}$ of dimension $2^n$ [why?]. If $u = (u_1, \dots, u_n)$ and $v = (v_1, \dots, v_n)$ are elements of $\mathbb{Z}_2^n$, then define their dot product by
$$u \cdot v = u_1 v_1 + \cdots + u_n v_n,$$
where the computation is performed modulo 2. Thus we regard $u \cdot v$ as an element of $\mathbb{Z}_2$. The expression $(-1)^{u \cdot v}$ is defined to be the real number $+1$ or $-1$, depending on whether $u \cdot v = 0$ or 1, respectively.
Since for integers $k$ the value of $(-1)^k$ depends only on $k \pmod 2$, it follows that we can treat $u$ and $v$ as integer vectors without affecting the value of $(-1)^{u \cdot v}$. Thus, for instance, formulas such as
$$(-1)^{u \cdot (v+w)} = (-1)^{u \cdot v + u \cdot w} = (-1)^{u \cdot v} (-1)^{u \cdot w}$$
are well-defined and valid.

We now define two important bases of the vector space $\mathcal{V}$. There will be one basis element of each basis for each $u \in \mathbb{Z}_2^n$. The first basis, denoted $B_1$, has elements $f_u$ defined as follows:
$$f_u(v) = \delta_{uv}, \tag{7}$$
the Kronecker delta. It is easy to see that $B_1$ is a basis, since any $g \in \mathcal{V}$ satisfies
$$g = \sum_{u \in \mathbb{Z}_2^n} g(u) f_u \tag{8}$$
[why?]. Hence $B_1$ spans $\mathcal{V}$, so since $|B_1| = \dim \mathcal{V} = 2^n$, it follows that $B_1$ is a basis. The second basis, denoted $B_2$, has elements $\chi_u$ defined as follows:
$$\chi_u(v) = (-1)^{u \cdot v}.$$
In order to show that $B_2$ is a basis, we will use an inner product on $\mathcal{V}$ (denoted $\langle \cdot, \cdot \rangle$) defined by
$$\langle f, g \rangle = \sum_{u \in \mathbb{Z}_2^n} f(u) g(u).$$
Note that this inner product is just the usual dot product with respect to the basis $B_1$.

2.1 Lemma. The set $B_2 = \{\chi_u : u \in \mathbb{Z}_2^n\}$ forms a basis for $\mathcal{V}$.

Proof. Since $|B_2| = \dim \mathcal{V}$ ($= 2^n$), it suffices to show that $B_2$ is linearly independent. In fact, we will show that the elements of $B_2$ are orthogonal. We have
$$\langle \chi_u, \chi_v \rangle = \sum_{w \in \mathbb{Z}_2^n} \chi_u(w) \chi_v(w) = \sum_{w \in \mathbb{Z}_2^n} (-1)^{(u+v) \cdot w}.$$
It is left as an easy exercise to the reader to show that for any $y \in \mathbb{Z}_2^n$, we have
$$\sum_{w \in \mathbb{Z}_2^n} (-1)^{y \cdot w} = \begin{cases} 2^n, & \text{if } y = 0 \\ 0, & \text{otherwise,} \end{cases}$$

where 0 denotes the identity element of $\mathbb{Z}_2^n$ (the vector $(0, 0, \dots, 0)$). Thus $\langle \chi_u, \chi_v \rangle = 0$ if and only if $u + v \ne 0$, i.e., $u \ne v$, so the elements of $B_2$ are orthogonal (and nonzero). Hence they are linearly independent, as desired.

We now come to the key definition of the Radon transform.

2.2 Definition. Given a subset $\Gamma$ of $\mathbb{Z}_2^n$ and a function $f \in \mathcal{V}$, define a new function $\Phi_\Gamma f \in \mathcal{V}$ by
$$\Phi_\Gamma f(v) = \sum_{w \in \Gamma} f(v + w).$$
The function $\Phi_\Gamma f$ is called the (discrete or finite) Radon transform of $f$ (on the group $\mathbb{Z}_2^n$, with respect to the subset $\Gamma$).

We have defined a map $\Phi_\Gamma : \mathcal{V} \to \mathcal{V}$. It is easy to see that $\Phi_\Gamma$ is a linear transformation; we want to compute its eigenvalues and eigenvectors.

2.3 Theorem. The eigenvectors of $\Phi_\Gamma$ are the functions $\chi_u$, where $u \in \mathbb{Z}_2^n$. The eigenvalue $\lambda_u$ corresponding to $\chi_u$ (i.e., $\Phi_\Gamma \chi_u = \lambda_u \chi_u$) is given by
$$\lambda_u = \sum_{w \in \Gamma} (-1)^{u \cdot w}.$$
Proof. Let $v \in \mathbb{Z}_2^n$. Then
$$\Phi_\Gamma \chi_u(v) = \sum_{w \in \Gamma} \chi_u(v + w) = \sum_{w \in \Gamma} (-1)^{u \cdot (v+w)} = \left( \sum_{w \in \Gamma} (-1)^{u \cdot w} \right) (-1)^{u \cdot v} = \left( \sum_{w \in \Gamma} (-1)^{u \cdot w} \right) \chi_u(v).$$

Hence
$$\Phi_\Gamma \chi_u = \left( \sum_{w \in \Gamma} (-1)^{u \cdot w} \right) \chi_u,$$
as desired.

Note that because the $\chi_u$'s form a basis for $\mathcal{V}$ by Lemma 2.1, it follows that Theorem 2.3 yields a complete set of eigenvalues and eigenvectors for $\Phi_\Gamma$. Note also that the eigenvectors $\chi_u$ of $\Phi_\Gamma$ are independent of $\Gamma$; only the eigenvalues depend on $\Gamma$.

Now we come to the payoff. Let $\Delta = \{\delta_1, \dots, \delta_n\}$, where $\delta_i$ is the $i$th unit coordinate vector (i.e., $\delta_i$ has a 1 in position $i$ and 0's elsewhere). Note that the $j$th coordinate of $\delta_i$ is just $\delta_{ij}$ (the Kronecker delta), explaining our notation $\delta_i$. Let $[\Phi_\Delta]$ denote the matrix of the linear transformation $\Phi_\Delta : \mathcal{V} \to \mathcal{V}$ with respect to the basis $B_1$ of $\mathcal{V}$ given by (7).

2.4 Lemma. We have $[\Phi_\Delta] = A(C_n)$, the adjacency matrix of the $n$-cube.

Proof. Let $v \in \mathbb{Z}_2^n$. We have
$$\Phi_\Delta f_u(v) = \sum_{w \in \Delta} f_u(v + w) = \sum_{w \in \Delta} f_{u+w}(v),$$
since $u = v + w$ if and only if $u + w = v$. There follows [why?]
$$\Phi_\Delta f_u = \sum_{w \in \Delta} f_{u+w}. \tag{9}$$
Equation (9) says that the $(u, v)$-entry of the matrix $[\Phi_\Delta]$ is given by
$$(\Phi_\Delta)_{uv} = \begin{cases} 1, & \text{if } u + v \in \Delta \\ 0, & \text{otherwise.} \end{cases}$$
Now $u + v \in \Delta$ if and only if $u$ and $v$ differ in exactly one coordinate. This is just the condition for $uv$ to be an edge of $C_n$, so the proof follows.
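Lemmas 2.1 and 2.4 are both concrete enough to verify by machine for a small $n$. The following sketch (my own illustration, assuming NumPy; the helper names `chi`, `add`, and `idx` are mine) checks character orthogonality and that the matrix of $\Phi_\Delta$ in the basis $\{f_u\}$ equals the cube's adjacency matrix:

```python
import numpy as np
from itertools import product

n = 3
Z = list(product([0, 1], repeat=n))                 # elements of Z_2^n
idx = {v: i for i, v in enumerate(Z)}
delta = [tuple(int(j == i) for j in range(n)) for i in range(n)]   # the set Delta

def add(u, v):
    return tuple((a + b) % 2 for a, b in zip(u, v))

def chi(u, v):
    return (-1) ** (sum(a * b for a, b in zip(u, v)) % 2)

# Lemma 2.1: <chi_u, chi_v> = 2^n if u = v, else 0
for u in Z:
    for v in Z:
        assert sum(chi(u, w) * chi(v, w) for w in Z) == (2**n if u == v else 0)

# Lemma 2.4: Phi_Delta f_u = sum_{w in Delta} f_{u+w}, so fill in the matrix
Phi = np.zeros((2**n, 2**n), dtype=int)
for u in Z:
    for w in delta:
        Phi[idx[add(u, w)], idx[u]] += 1            # symmetric, so convention is moot

# Adjacency matrix of C_n: u ~ v iff they differ in exactly one coordinate
Acube = np.array([[int(sum(a != b for a, b in zip(u, v)) == 1) for v in Z]
                  for u in Z])
assert (Phi == Acube).all()
```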

2.5 Corollary. The eigenvectors $E_u$ ($u \in \mathbb{Z}_2^n$) of $A(C_n)$ (regarded as linear combinations of the vertices of $C_n$, i.e., of the elements of $\mathbb{Z}_2^n$) are given by
$$E_u = \sum_{v \in \mathbb{Z}_2^n} (-1)^{u \cdot v}\, v. \tag{10}$$
The eigenvalue $\lambda_u$ corresponding to the eigenvector $E_u$ is given by
$$\lambda_u = n - 2\omega(u), \tag{11}$$
where $\omega(u)$ is the number of 1's in $u$. ($\omega(u)$ is called the Hamming weight or simply the weight of $u$.) Hence $A(C_n)$ has $\binom{n}{i}$ eigenvalues equal to $n - 2i$, for each $0 \le i \le n$.

Proof. For any function $g \in \mathcal{V}$ we have by (8) that $g = \sum_v g(v) f_v$. Applying this equation to $g = \chi_u$ gives
$$\chi_u = \sum_v \chi_u(v) f_v = \sum_v (-1)^{u \cdot v} f_v. \tag{12}$$
Equation (12) expresses the eigenvector $\chi_u$ of $\Phi_\Delta$ (or even $\Phi_\Gamma$ for any $\Gamma \subseteq \mathbb{Z}_2^n$) as a linear combination of the functions $f_v$. But $\Phi_\Delta$ has the same matrix with respect to the basis of the $f_v$'s as $A(C_n)$ has with respect to the vertices $v$ of $C_n$. Hence the expansion of the eigenvectors of $\Phi_\Delta$ in terms of the $f_v$'s has the same coefficients as the expansion of the eigenvectors of $A(C_n)$ in terms of the $v$'s, so equation (10) follows.

According to Theorem 2.3 the eigenvalue $\lambda_u$ corresponding to the eigenvector $\chi_u$ of $\Phi_\Delta$ (or equivalently, the eigenvector $E_u$ of $A(C_n)$) is given by
$$\lambda_u = \sum_{w \in \Delta} (-1)^{u \cdot w}. \tag{13}$$
Now $\Delta = \{\delta_1, \dots, \delta_n\}$, and $\delta_i \cdot u$ is 1 if $u$ has a one in its $i$th coordinate and is 0 otherwise. Hence the sum in (13) has $n - \omega(u)$ terms equal to $+1$ and $\omega(u)$ terms equal to $-1$, so $\lambda_u = (n - \omega(u)) - \omega(u) = n - 2\omega(u)$, as claimed.
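The spectrum claimed in Corollary 2.5 can be checked directly for a small cube. This sketch (not from the notes; it assumes NumPy) compares the numerically computed eigenvalues of $A(C_4)$ with the multiset $\{n - 2i \text{ with multiplicity } \binom{n}{i}\}$:

```python
import numpy as np
from itertools import product
from math import comb

n = 4
Z = list(product([0, 1], repeat=n))
A = np.array([[float(sum(a != b for a, b in zip(u, v)) == 1) for v in Z]
              for u in Z])

# Corollary 2.5: spectrum of A(C_n) is n - 2i with multiplicity C(n, i)
computed = sorted(np.linalg.eigvalsh(A))
expected = sorted(float(n - 2 * i) for i in range(n + 1) for _ in range(comb(n, i)))
assert np.allclose(computed, expected)
```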

We have all the information needed to count walks in $C_n$.

2.6 Corollary. Let $u, v \in \mathbb{Z}_2^n$, and suppose that $\omega(u + v) = k$ (i.e., $u$ and $v$ disagree in exactly $k$ coordinates). Then the number of walks of length $\ell$ in $C_n$ between $u$ and $v$ is given by
$$(A^\ell)_{uv} = \frac{1}{2^n} \sum_{i=0}^{n} \sum_{j=0}^{k} (-1)^j \binom{k}{j} \binom{n-k}{i-j} (n - 2i)^\ell, \tag{14}$$
where we set $\binom{n-k}{i-j} = 0$ if $j > i$. In particular,
$$(A^\ell)_{uu} = \frac{1}{2^n} \sum_{i=0}^{n} \binom{n}{i} (n - 2i)^\ell. \tag{15}$$

Proof. Let $E_u$ and $\lambda_u$ be as in Corollary 2.5. In order to apply Corollary 1.2, we need the eigenvectors to be of unit length (where we regard the $f_v$'s as an orthonormal basis of $\mathcal{V}$). By equation (10), we have
$$\|E_u\|^2 = \sum_{v \in \mathbb{Z}_2^n} \left((-1)^{u \cdot v}\right)^2 = 2^n.$$
Hence we should replace $E_u$ by $E_u' = \frac{1}{2^{n/2}} E_u$ to get an orthonormal basis. According to Corollary 1.2, we thus have
$$(A^\ell)_{uv} = \frac{1}{2^n} \sum_{w \in \mathbb{Z}_2^n} E_{uw} E_{vw} \lambda_w^\ell.$$
Now $E_{uw}$ by definition is the coefficient of $f_w$ in the expansion (10), i.e., $E_{uw} = (-1)^{u \cdot w}$ (and similarly for $E_{vw}$), while $\lambda_w = n - 2\omega(w)$. Hence
$$(A^\ell)_{uv} = \frac{1}{2^n} \sum_{w \in \mathbb{Z}_2^n} (-1)^{(u+v) \cdot w} (n - 2\omega(w))^\ell. \tag{16}$$
The number of vectors $w$ of Hamming weight $i$ which have $j$ 1's in common with $u + v$ is $\binom{k}{j} \binom{n-k}{i-j}$, since we can choose the $j$ 1's in $u + v$ which agree with $w$ in $\binom{k}{j}$ ways, while the remaining $i - j$ 1's of $w$ can be inserted in the

$n - k$ remaining positions in $\binom{n-k}{i-j}$ ways. Since $(u + v) \cdot w \equiv j \pmod 2$, the sum (16) reduces to (14), as desired. Clearly setting $u = v$ in (14) yields (15), completing the proof.

It is possible to give a direct proof of (15) avoiding linear algebra. Thus by Corollary 1.3 and Lemma 1.7 (exactly as was done for $K_p$) we have another determination of the eigenvalues of $C_n$. With a little more work one can also obtain a direct proof of (14). In a later example, however, we will use the eigenvalues of $C_n$ to obtain a combinatorial result for which no nonalgebraic proof is known.

2.7 Example. Setting $k = 1$ in (14) yields
$$(A^\ell)_{uv} = \frac{1}{2^n} \sum_{i=0}^{n} \left[ \binom{n-1}{i} - \binom{n-1}{i-1} \right] (n - 2i)^\ell = \frac{1}{2^n} \sum_{i=0}^{n-1} \binom{n-1}{i} \frac{(n - 2i)^{\ell+1}}{n - i}.$$
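Formula (14) is intricate enough that a numerical cross-check is reassuring. The sketch below (my own, assuming NumPy; `formula14` is a name I chose) compares (14) against the actual matrix power for every target vertex $v$, taking $u$ to be the origin so that $\omega(u+v) = \omega(v)$:

```python
import numpy as np
from itertools import product
from math import comb

n, l = 4, 5
Z = list(product([0, 1], repeat=n))
A = np.array([[float(sum(a != b for a, b in zip(u, v)) == 1) for v in Z]
              for u in Z])
Al = np.linalg.matrix_power(A, l)

def formula14(k):
    # right-hand side of (14), where k = omega(u + v); math.comb(m, r)
    # returns 0 when r > m, matching the convention binom(n-k, i-j) = 0
    return sum((-1)**j * comb(k, j) * comb(n - k, i - j) * (n - 2*i)**l
               for i in range(n + 1) for j in range(min(i, k) + 1)) / 2**n

for vi, v in enumerate(Z):              # u = (0,...,0) is index 0 in Z
    assert np.isclose(Al[0, vi], formula14(sum(v)))
```

Note that with $n = 4$ and odd $\ell = 5$ both sides vanish for targets of even weight, as the bipartiteness of $C_n$ requires.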

3 Random walks.

Let $G$ be a finite graph. We consider a random walk on the vertices of $G$ of the following type. Start at a vertex $u$. (The vertex $u$ could be chosen randomly according to some probability distribution or could be specified in advance.) Among all the edges incident to $u$, choose one uniformly at random (i.e., if there are $k$ edges incident to $u$, then each of these edges is chosen with probability $1/k$). Travel to the vertex $v$ at the other end of the chosen edge, and continue as before from $v$. Readers with some familiarity with probability theory will recognize this random walk as a special case of a finite-state Markov chain. Many interesting questions may be asked about such walks; the basic one is to determine the probability of being at a given vertex after a given number $\ell$ of steps.

Suppose vertex $u$ has degree $d_u$, i.e., there are $d_u$ edges incident to $u$ (counting loops at $u$ once only). Let $M = M(G)$ be the matrix whose rows and columns are indexed by the vertex set $\{v_1, \dots, v_p\}$ of $G$, and whose $(u, v)$-entry is given by
$$M_{uv} = \frac{\mu_{uv}}{d_u},$$
where $\mu_{uv}$ is the number of edges between $u$ and $v$ (which for simple graphs will be 0 or 1). Thus $M_{uv}$ is just the probability that if one starts at $u$, then the next step will be to $v$. An elementary probability theory argument (equivalent to Theorem 1.1) shows that if $\ell$ is a positive integer, then $(M^\ell)_{uv}$ is equal to the probability that one ends up at vertex $v$ in $\ell$ steps given that one has started at $u$. Suppose now that the starting vertex is not specified, but rather we are given probabilities $\rho_u$ summing to 1, and that we start at vertex $u$ with probability $\rho_u$. Let $P$ be the row vector $P = [\rho_{v_1}, \dots, \rho_{v_p}]$. Then again an elementary argument shows that if $P M^\ell = [\sigma_{v_1}, \dots, \sigma_{v_p}]$, then $\sigma_v$ is the probability of ending up at $v$ in $\ell$ steps (with the given starting distribution).
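The construction of $M$ and the evolution $P M^\ell$ can be sketched in a few lines (my own illustration, assuming NumPy; the path graph is my choice of example):

```python
import numpy as np

# A small simple graph: the path v0 - v1 - v2 (so mu_uv is 0 or 1)
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
deg = A.sum(axis=1)                  # d_u = degree of vertex u
M = A / deg[:, None]                 # M_uv = mu_uv / d_u

assert np.allclose(M.sum(axis=1), 1.0)   # each row is a probability distribution

# Start at v0 with probability 1; the distribution after l steps is P M^l
P = np.array([1.0, 0.0, 0.0])
for l in [1, 2, 3]:
    dist = P @ np.linalg.matrix_power(M, l)
    assert np.isclose(dist.sum(), 1.0)
```

For instance, after two steps from $v_0$ the walker is at $v_0$ or $v_2$ with probability $1/2$ each, since the single step away from $v_1$ is uniform over its two neighbors.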
By reasoning as in Section 1, we see that if we know the eigenvalues and eigenvectors of $M$, then we can compute the crucial probabilities $(M^\ell)_{uv}$ and $\sigma_u$. Since the matrix $M$ is not the same as the adjacency matrix $A$, what does all this have to do with adjacency matrices? The answer is that in one important case $M$ is just a scalar multiple of $A$. We say that the graph $G$

is regular of degree $d$ if each $d_u = d$, i.e., each vertex is incident to $d$ edges. In this case it's easy to see that $M(G) = \frac{1}{d} A(G)$. Hence the eigenvectors $E_u$ of $M(G)$ and $A(G)$ are the same, and the eigenvalues are related by $\lambda_u(M) = \frac{1}{d} \lambda_u(A)$. Thus random walks on a regular graph are closely related to the adjacency matrix of the graph.

3.1 Example. Consider a random walk on the $n$-cube $C_n$ which begins at the origin (the vector $(0, \dots, 0)$). What is the probability $p_\ell$ that after $\ell$ steps one is again at the origin? Before applying any formulas, note that after an even (respectively, odd) number of steps, one must be at a vertex with an even (respectively, odd) number of 1's. Hence $p_\ell = 0$ if $\ell$ is odd. Now note that $C_n$ is regular of degree $n$. Thus by (11), we have
$$\lambda_u(M(C_n)) = \frac{1}{n}(n - 2\omega(u)).$$
By (15) we conclude that
$$p_\ell = \frac{1}{2^n n^\ell} \sum_{i=0}^{n} \binom{n}{i} (n - 2i)^\ell.$$
Note that the above expression for $p_\ell$ does indeed reduce to 0 when $\ell$ is odd.
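Example 3.1 can be checked against the transition matrix directly. The sketch below (my own, assuming NumPy) builds $M(C_3) = \frac{1}{3} A(C_3)$ and compares the return probability to the closed-form expression:

```python
import numpy as np
from itertools import product
from math import comb

n = 3
Z = list(product([0, 1], repeat=n))
A = np.array([[float(sum(a != b for a, b in zip(u, v)) == 1) for v in Z]
              for u in Z])
M = A / n                                     # C_n is regular of degree n

for l in range(7):
    pl = np.linalg.matrix_power(M, l)[0, 0]   # probability of return to the origin
    formula = sum(comb(n, i) * (n - 2*i)**l for i in range(n + 1)) / (2**n * n**l)
    assert np.isclose(pl, formula)
    if l % 2 == 1:
        assert np.isclose(pl, 0.0)            # parity: odd l gives probability 0
```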

4 The Sperner property.

In this section we consider a surprising application of certain adjacency matrices to some problems in extremal set theory. An important role will also be played by finite groups. In general, extremal set theory is concerned with finding (or estimating) the largest or smallest number of sets satisfying given set-theoretic or combinatorial conditions. For example, a typical easy problem in extremal set theory is the following: what is the largest number of subsets of an $n$-element set with the property that any two of them intersect? (Can you solve this problem?) The problems to be considered here are most conveniently formulated in terms of partially ordered sets, or posets for short. Thus we begin by discussing some basic notions concerning posets.

4.1 Definition. A poset (short for partially ordered set) $P$ is a finite set, also denoted $P$, together with a binary relation denoted $\le$ satisfying the following axioms:

(P1) (reflexivity) $x \le x$ for all $x \in P$.

(P2) (antisymmetry) If $x \le y$ and $y \le x$, then $x = y$.

(P3) (transitivity) If $x \le y$ and $y \le z$, then $x \le z$.

One easy way to obtain a poset is the following. Let $P$ be any collection of sets. If $x, y \in P$, then define $x \le y$ in $P$ if $x \subseteq y$ as sets. It is easy to see that this definition of $\le$ makes $P$ into a poset. If $P$ consists of all subsets of an $n$-element set $S$, then $P$ is called a (finite) boolean algebra of rank $n$ and is denoted by $B_S$. If $S = \{1, 2, \dots, n\}$, then we denote $B_S$ simply by $B_n$. Boolean algebras will play an important role throughout this section.

There is a simple way to represent small posets pictorially. The Hasse diagram of a poset $P$ is a planar drawing, with elements of $P$ drawn as dots. If $x < y$ in $P$ (i.e., $x \le y$ and $x \ne y$), then $y$ is drawn above $x$ (i.e., with a larger vertical coordinate). An edge is drawn between $x$ and $y$ if $y$ covers $x$, i.e., $x < y$ and no element $z$ is in between, i.e., no $z$ satisfies $x < z < y$. By the transitivity property (P3), all the relations of a finite

poset are determined by the cover relations, so the Hasse diagram determines $P$. (This is not true for infinite posets; for instance, the real numbers $\mathbb{R}$ with their usual order form a poset with no cover relations.) The Hasse diagram of the boolean algebra $B_3$ has $\emptyset$ at the bottom, the singletons $\{1\}, \{2\}, \{3\}$ at the next level, then $\{1,2\}, \{1,3\}, \{2,3\}$, and $\{1,2,3\}$ at the top, with an edge for each cover relation.

We say that two posets $P$ and $Q$ are isomorphic if there is a bijection (one-to-one and onto function) $\varphi : P \to Q$ such that $x \le y$ in $P$ if and only if $\varphi(x) \le \varphi(y)$ in $Q$. Thus one can think of two posets as isomorphic if they differ only in the names of their elements. This is exactly analogous to the notion of isomorphism of groups, rings, etc. It is an instructive exercise to draw the Hasse diagrams of the one poset of order (number of elements) one (up to isomorphism), the two posets of order two, the five posets of order three, and the sixteen posets of order four. More ambitious readers can try the 63 posets of order five, the 318 of order six, the 2045 of order seven, and so on up through order sixteen; beyond this the number is not currently known.

A chain $C$ in a poset is a totally ordered subset of $P$, i.e., if $x, y \in C$ then either $x \le y$ or $y \le x$ in $P$. A finite chain is said to have length $n$ if it has $n + 1$ elements. Such a chain thus has the form $x_0 < x_1 < \cdots < x_n$. We say that a finite poset is graded of rank $n$ if every maximal chain has length $n$. (A chain is maximal if it's contained in no larger chain.) For instance, the boolean algebra $B_n$ is graded of rank $n$ [why?]. A chain $y_0 < y_1 < \cdots < y_j$ is said to be saturated if each $y_{i+1}$ covers $y_i$. Such a chain need not be maximal, since there can be elements of $P$ smaller than $y_0$ or greater than $y_j$. If $P$ is graded of rank $n$ and $x \in P$, then we say that $x$ has rank $j$, denoted $\rho(x) = j$, if some (or equivalently, every) saturated chain of $P$ with top element $x$ has

length $j$. Thus [why?] if we let $P_j = \{x \in P : \rho(x) = j\}$, then $P$ is a disjoint union $P = P_0 \cup P_1 \cup \cdots \cup P_n$, and every maximal chain of $P$ has the form $x_0 < x_1 < \cdots < x_n$ where $\rho(x_j) = j$. We write $p_j = |P_j|$, the number of elements of $P$ of rank $j$. For example, if $P = B_n$ then $\rho(x) = |x|$ (the cardinality of $x$ as a set) and
$$p_j = \#\{x \subseteq \{1, 2, \dots, n\} : |x| = j\} = \binom{n}{j}.$$
(Note that we use both $|S|$ and $\#S$ for the cardinality of the finite set $S$.)

We say that a graded poset $P$ of rank $n$ (always assumed to be finite) is rank-symmetric if $p_i = p_{n-i}$ for $0 \le i \le n$, and rank-unimodal if
$$p_0 \le p_1 \le \cdots \le p_j \ge p_{j+1} \ge p_{j+2} \ge \cdots \ge p_n$$
for some $0 \le j \le n$. If $P$ is both rank-symmetric and rank-unimodal, then we clearly have
$$p_0 \le p_1 \le \cdots \le p_m \ge p_{m+1} \ge \cdots \ge p_n, \quad \text{if } n = 2m,$$
$$p_0 \le p_1 \le \cdots \le p_m = p_{m+1} \ge p_{m+2} \ge \cdots \ge p_n, \quad \text{if } n = 2m + 1.$$
We also say that the sequence $p_0, p_1, \dots, p_n$ itself or the polynomial $F(q) = p_0 + p_1 q + \cdots + p_n q^n$ is symmetric or unimodal, as the case may be. For instance, $B_n$ is rank-symmetric and rank-unimodal, since it is well-known (and easy to prove) that the sequence $\binom{n}{0}, \binom{n}{1}, \dots, \binom{n}{n}$ (the $n$th row of Pascal's triangle) is symmetric and unimodal. Thus the polynomial $(1 + q)^n$ is symmetric and unimodal.

A few more definitions, and then finally some results! An antichain in a poset $P$ is a subset $A$ of $P$ for which no two elements are comparable, i.e., we can never have $x, y \in A$ and $x < y$. For instance, in a graded poset $P$ the levels $P_j$ are antichains [why?]. We will be concerned with the problem of finding the largest antichain in a poset. Consider for instance the boolean algebra $B_n$. The problem of finding the largest antichain in $B_n$ is clearly equivalent to the following problem in extremal set theory: find the largest collection of subsets of an $n$-element set such that no element of the collection contains another. A good guess would be to take all the subsets of cardinality $\lfloor n/2 \rfloor$ (where $\lfloor x \rfloor$ denotes the greatest integer $\le x$), giving a total of $\binom{n}{\lfloor n/2 \rfloor}$ sets in all.
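For $n = 4$ this guess can be confirmed by sheer exhaustion. The sketch below (my own illustration, not from the notes) enumerates all $2^{16}$ subsets of $B_4$ and records the largest one that is an antichain; it takes a moment to run but needs nothing beyond the standard library:

```python
from itertools import combinations
from math import comb

n = 4
B = [frozenset(c) for j in range(n + 1) for c in combinations(range(n), j)]  # B_4

def is_antichain(sets):
    # frozenset's "<" is proper containment, exactly the order on B_n
    return all(not (a < b or b < a) for a, b in combinations(sets, 2))

best = 0
for mask in range(1 << len(B)):                   # all 2^16 subsets of B_4
    chosen = [B[i] for i in range(len(B)) if mask >> i & 1]
    if len(chosen) > best and is_antichain(chosen):
        best = len(chosen)

assert best == comb(n, n // 2)                    # = 6: the middle level wins
```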
But how can we actually prove that there is no larger collection? Such a proof was first given by Emanuel Sperner in 1927 and is known as Sperner's theorem. We will give three proofs of Sperner's theorem in this section: one proof uses linear algebra and will be applied to certain other situations; the second proof is an elegant combinatorial argument due to David Lubell in 1966; while the third proof is another combinatorial argument closely related to the linear algebra proof. We present the last two proofs for their cultural value. Our extension of Sperner's theorem to certain other situations will involve the following crucial definition.

4.2 Definition. Let $P$ be a graded poset of rank $n$. We say that $P$ has the Sperner property or is a Sperner poset if
$$\max\{|A| : A \text{ is an antichain of } P\} = \max\{|P_i| : 0 \le i \le n\}.$$
In other words, no antichain is larger than the largest level $P_i$. Thus Sperner's theorem is equivalent to saying that $B_n$ has the Sperner property. Note that if $P$ has the Sperner property, there may still be antichains of maximum cardinality other than the biggest $P_i$; there just can't be any bigger antichains.

4.3 Example. A simple example of a graded poset that fails to satisfy the Sperner property is the following: [Hasse diagram omitted in this version.]

We now will discuss a simple combinatorial condition which guarantees that certain graded posets $P$ are Sperner. We define an order-matching from $P_i$ to $P_{i+1}$ to be a one-to-one function $\mu : P_i \to P_{i+1}$ satisfying $x < \mu(x)$ for all $x \in P_i$. Clearly if such an order-matching exists then $p_i \le p_{i+1}$ (since $\mu$ is one-to-one). Easy examples show that the converse is false, i.e., if $p_i \le p_{i+1}$ then there need not exist an order-matching from $P_i$ to $P_{i+1}$. We similarly define an order-matching from $P_i$ to $P_{i-1}$ to be a one-to-one function $\mu : P_i \to P_{i-1}$ satisfying $\mu(x) < x$ for all $x \in P_i$.

4.4 Proposition. Let $P$ be a graded poset of rank $n$. Suppose there exists an integer $0 \le j \le n$ and order-matchings
$$P_0 \to P_1 \to P_2 \to \cdots \to P_j \leftarrow P_{j+1} \leftarrow P_{j+2} \leftarrow \cdots \leftarrow P_n. \tag{17}$$

Then $P$ is rank-unimodal and Sperner.

Proof. Since order-matchings are one-to-one, it is clear that
$$p_0 \le p_1 \le \cdots \le p_j \ge p_{j+1} \ge p_{j+2} \ge \cdots \ge p_n.$$
Hence $P$ is rank-unimodal.

Define a graph $G$ as follows. The vertices of $G$ are the elements of $P$. Two vertices $x, y$ are connected by an edge if one of the order-matchings $\mu$ in the statement of the proposition satisfies $\mu(x) = y$. (Thus $G$ is a subgraph of the Hasse diagram of $P$.) Drawing a picture will convince you that $G$ consists of a disjoint union of paths, including single-vertex paths not involved in any of the order-matchings. The vertices of each of these paths form a chain in $P$. Thus we have partitioned the elements of $P$ into disjoint chains. Since $P$ is rank-unimodal with biggest level $P_j$, all of these chains must pass through $P_j$ [why?]. Thus the number of chains is exactly $p_j$. Any antichain $A$ can intersect each of these chains at most once, so the cardinality $|A|$ of $A$ cannot exceed the number of chains, i.e., $|A| \le p_j$. Hence by definition $P$ is Sperner.

It is now finally time to bring some linear algebra into the picture. For any (finite) set $S$, we let $\mathbb{R}S$ denote the real vector space consisting of all formal linear combinations (with real coefficients) of elements of $S$. Thus $S$ is a basis for $\mathbb{R}S$, and in fact we could have simply defined $\mathbb{R}S$ to be the real vector space with basis $S$. The next lemma relates the combinatorics we have just discussed to linear algebra, and will allow us to prove that certain posets are Sperner by the use of linear algebra (combined with some finite group theory).

4.5 Lemma. Suppose there exists a linear transformation $U : \mathbb{R}P_i \to \mathbb{R}P_{i+1}$ ($U$ stands for "up") satisfying:

- $U$ is one-to-one.
- For all $x \in P_i$, $U(x)$ is a linear combination of elements $y \in P_{i+1}$ satisfying $x < y$. (We then call $U$ an order-raising operator.)

Then there exists an order-matching μ : P_i → P_{i+1}.

Similarly, suppose there exists a linear transformation U : RP_i → RP_{i+1} satisfying:

- U is onto.
- U is an order-raising operator.

Then there exists an order-matching μ : P_{i+1} → P_i.

Proof. Suppose U : RP_i → RP_{i+1} is a one-to-one order-raising operator. Let [U] denote the matrix of U with respect to the bases P_i of RP_i and P_{i+1} of RP_{i+1}. Thus the rows of [U] are indexed by the elements x_1, ..., x_{p_i} of P_i (in some order) and the columns by the elements y_1, ..., y_{p_{i+1}} of P_{i+1}. Since U is one-to-one, the rank of [U] is equal to p_i (the number of rows). Since the row rank of a matrix equals its column rank, [U] must have p_i linearly independent columns. Say we have labelled the elements of P_{i+1} so that the first p_i columns of [U] are linearly independent.

Let A = (a_ij) be the p_i × p_i matrix whose columns are the first p_i columns of [U]. (Thus A is a square submatrix of [U].) Since the columns of A are linearly independent, we have

det(A) = Σ_π ±a_{1π(1)} ··· a_{p_i π(p_i)} ≠ 0,

where the sum is over all permutations π of 1, ..., p_i. Thus some term ±a_{1π(1)} ··· a_{p_i π(p_i)} of the above sum is nonzero. Since U is order-raising, this means that [why?] x_k < y_{π(k)} for 1 ≤ k ≤ p_i. Hence the map μ : P_i → P_{i+1} defined by μ(x_k) = y_{π(k)} is an order-matching, as desired. The case when U is onto rather than one-to-one is proved by a completely analogous argument.

We now want to apply Proposition 4.4 and Lemma 4.5 to the boolean algebra B_n. For each 0 ≤ i < n, we need to define a linear transformation U_i : R(B_n)_i → R(B_n)_{i+1}, and then prove it has the desired properties. We

simply define U_i to be the simplest possible order-raising operator, namely, for x ∈ (B_n)_i, let

U_i(x) = Σ_{y ∈ (B_n)_{i+1}, y > x} y.   (18)

Note that since (B_n)_i is a basis for R(B_n)_i, equation (18) does indeed define a unique linear transformation U_i : R(B_n)_i → R(B_n)_{i+1}. By definition U_i is order-raising; we want to show that U_i is one-to-one for i < n/2 and onto for i ≥ n/2. There are several ways to show this using only elementary linear algebra; we will give what is perhaps the simplest proof, though it is quite tricky. The idea is to introduce operators D_i : R(B_n)_i → R(B_n)_{i−1} dual to the U_i's (D stands for "down"), defined by

D_i(y) = Σ_{x ∈ (B_n)_{i−1}, x < y} x,   (19)

for all y ∈ (B_n)_i. Let [U_i] denote the matrix of U_i with respect to the bases (B_n)_i and (B_n)_{i+1}, and similarly let [D_i] denote the matrix of D_i with respect to the bases (B_n)_i and (B_n)_{i−1}. A key observation which we will use later is that

[D_{i+1}] = [U_i]^t,   (20)

i.e., the matrix [D_{i+1}] is the transpose of the matrix [U_i] [why?]. Now let I_i : R(B_n)_i → R(B_n)_i denote the identity transformation on R(B_n)_i, i.e., I_i(u) = u for all u ∈ R(B_n)_i. The next lemma states (in linear algebraic terms) the fundamental combinatorial property of B_n which we need. For this lemma set U_n = 0 and D_0 = 0 (the 0 linear transformation between the appropriate vector spaces).

4.6 Lemma. Let 0 ≤ i ≤ n. Then

D_{i+1} U_i − U_{i−1} D_i = (n − 2i) I_i.   (21)

(Linear transformations are multiplied right-to-left, so AB(u) = A(B(u)).)

Proof. Let x ∈ (B_n)_i. We need to show that if we apply the left-hand side of (21) to x, then we obtain (n − 2i)x. We have

D_{i+1} U_i(x) = D_{i+1} ( Σ_{|y|=i+1, x⊂y} y )
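Equations (18), (19) and (20) are concrete enough to verify directly on a computer. A small sketch (helper names are illustrative) builds both matrices and checks the transpose relation (20) for several n:

```python
from itertools import combinations

def level(n, i):
    return [frozenset(c) for c in combinations(range(1, n + 1), i)]

def up_matrix(n, i):
    """[U_i]: rows indexed by (B_n)_i, columns by (B_n)_{i+1};
    entry 1 exactly when x < y, as in (18)."""
    return [[1 if x < y else 0 for y in level(n, i + 1)]
            for x in level(n, i)]

def down_matrix(n, i):
    """[D_i]: rows indexed by (B_n)_i, columns by (B_n)_{i-1};
    entry 1 exactly when z < y, as in (19)."""
    return [[1 if z < y else 0 for z in level(n, i - 1)]
            for y in level(n, i)]

def transpose(m):
    return [list(col) for col in zip(*m)]

# Equation (20): [D_{i+1}] is the transpose of [U_i].
for n in range(1, 6):
    for i in range(n):
        assert down_matrix(n, i + 1) == transpose(up_matrix(n, i))
```

The check succeeds because both matrices record the same incidence data (which i-sets lie under which (i+1)-sets), just with rows and columns swapped, which is exactly the answer to the [why?].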

= Σ_{|y|=i+1, x⊂y} Σ_{|z|=i, z⊂y} z.

If x, z ∈ (B_n)_i satisfy |x ∩ z| < i − 1, then there is no y ∈ (B_n)_{i+1} such that x ⊂ y and z ⊂ y. Hence the coefficient of z in D_{i+1} U_i(x), when it is expanded in terms of the basis (B_n)_i, is 0. If |x ∩ z| = i − 1, then there is one such y, namely, y = x ∪ z. Finally if x = z then y can be any element of (B_n)_{i+1} containing x, and there are n − i such y in all. It follows that

D_{i+1} U_i(x) = (n − i)x + Σ_{|z|=i, |x∩z|=i−1} z.   (22)

By exactly analogous reasoning (which the reader should check), we have for x ∈ (B_n)_i that

U_{i−1} D_i(x) = ix + Σ_{|z|=i, |x∩z|=i−1} z.   (23)

Subtracting (23) from (22) yields (D_{i+1} U_i − U_{i−1} D_i)(x) = (n − 2i)x, as desired.

4.7 Theorem. The operator U_i defined above is one-to-one if i < n/2 and is onto if i ≥ n/2.

Proof. Recall that [D_i] = [U_{i−1}]^t. From linear algebra we know that a (rectangular) matrix times its transpose is positive semidefinite (or just semidefinite for short) and hence has nonnegative (real) eigenvalues. By Lemma 4.6 we have

D_{i+1} U_i = U_{i−1} D_i + (n − 2i) I_i.

Thus the eigenvalues of D_{i+1} U_i are obtained from the eigenvalues of U_{i−1} D_i by adding n − 2i. Since we are assuming that n − 2i > 0, it follows that the eigenvalues of D_{i+1} U_i are strictly positive. Hence D_{i+1} U_i is invertible (since it has no 0 eigenvalues). But this implies that U_i is one-to-one [why?], as desired.

The case i ≥ n/2 is done by a dual argument (or in fact can be deduced directly from the i < n/2 case by using the fact that the poset B_n is self-dual, though we will not go into this). Namely, from the fact that

U_i D_{i+1} = D_{i+2} U_{i+1} + (2i + 2 − n) I_{i+1}
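The commutation relation (21) can also be verified numerically. The sketch below (illustrative names) builds the matrices of U_i and D_i for B_4 and checks that D_{i+1}U_i − U_{i−1}D_i is the scalar matrix (n − 2i)I_i; the boundary cases i = 0 and i = n hold trivially with the convention U_n = D_0 = 0:

```python
from itertools import combinations

def level(n, i):
    return [frozenset(c) for c in combinations(range(1, n + 1), i)]

def op_matrix(dom, cod):
    """Matrix (codomain rows, domain columns) of the map sending a basis
    element x of dom to the sum of all y in cod comparable with x by
    strict containment; this is U_i or D_i depending on the two levels."""
    return [[1 if (x < y or y < x) else 0 for x in dom] for y in cod]

def matmul(a, b):
    return [[sum(p * q for p, q in zip(row, col)) for col in zip(*b)]
            for row in a]

n = 4
for i in range(1, n):              # the ends are covered by U_n = 0, D_0 = 0
    Li, up_, dn_ = level(n, i), level(n, i + 1), level(n, i - 1)
    DU = matmul(op_matrix(up_, Li), op_matrix(Li, up_))   # D_{i+1} U_i
    UD = matmul(op_matrix(dn_, Li), op_matrix(Li, dn_))   # U_{i-1} D_i
    diff = [[DU[r][c] - UD[r][c] for c in range(len(Li))]
            for r in range(len(Li))]
    ident = [[(n - 2 * i) * (r == c) for c in range(len(Li))]
             for r in range(len(Li))]
    assert diff == ident            # equation (21)
```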

we get that U_i D_{i+1} is invertible, so now U_i is onto, completing the proof.

Combining Proposition 4.4, Lemma 4.5, and Theorem 4.7, we obtain the famous theorem of Sperner.

4.8 Corollary. The boolean algebra B_n has the Sperner property.

It is natural to ask whether there is a less indirect proof of Corollary 4.8. In fact, several nice proofs are known; we give one due to David Lubell, mentioned before Definition 4.2.

Lubell's proof of Sperner's theorem. First we count the total number of maximal chains Ø = x_0 < x_1 < ··· < x_n = {1, ..., n} in B_n. There are n choices for x_1, then n − 1 choices for x_2, etc., so there are n! maximal chains in all.

Next we count the number of maximal chains x_0 < x_1 < ··· < x_i = x < ··· < x_n which contain a given element x of rank i. There are i choices for x_1, then i − 1 choices for x_2, up to one choice for x_i. Similarly there are n − i choices for x_{i+1}, then n − i − 1 choices for x_{i+2}, etc., up to one choice for x_n. Hence the number of maximal chains containing x is i!(n − i)!.

Now let A be an antichain. If x ∈ A, then let C_x be the set of maximal chains of B_n which contain x. Since A is an antichain, the sets C_x, x ∈ A, are pairwise disjoint. Hence

#(⋃_{x∈A} C_x) = Σ_{x∈A} #C_x = Σ_{x∈A} (ρ(x))!(n − ρ(x))!.

Since the total number of maximal chains in the C_x's cannot exceed the total number n! of maximal chains in B_n, we have

Σ_{x∈A} (ρ(x))!(n − ρ(x))! ≤ n!.

Divide both sides by n! to obtain

Σ_{x∈A} 1/(n choose ρ(x)) ≤ 1.
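The chain counts in Lubell's argument are easy to confirm by enumeration for small n. The sketch below (illustrative names) lists all maximal chains of B_4 as orderings of {1, ..., 4}, checks the counts n! and i!(n − i)!, and tests the disjointness bound on a sample antichain:

```python
from itertools import permutations
from math import factorial

def maximal_chains(n):
    """Maximal chains in B_n correspond to orderings of {1, ..., n}:
    the chain inserts the entries of the permutation one at a time."""
    return [[frozenset(p[:k]) for k in range(n + 1)]
            for p in permutations(range(1, n + 1))]

n = 4
chains = maximal_chains(n)
assert len(chains) == factorial(n)          # n! maximal chains in all

x = frozenset({1, 3})                       # an element of rank i = 2
through_x = [ch for ch in chains if x in ch]
assert len(through_x) == factorial(2) * factorial(n - 2)   # i!(n-i)!

# The bound in the proof: for an antichain A the chain sets C_x are
# disjoint, so the total count cannot exceed n!.
A = [frozenset({1}), frozenset({2, 3}), frozenset({2, 4})]
total = sum(factorial(len(s)) * factorial(n - len(s)) for s in A)
assert total <= factorial(n)
```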

Since (n choose i) is maximized when i = ⌊n/2⌋, we have

1/(n choose ρ(x)) ≥ 1/(n choose ⌊n/2⌋)

for all x ∈ A (or all x ∈ B_n). Thus

Σ_{x∈A} 1/(n choose ⌊n/2⌋) ≤ 1,

or equivalently,

#A ≤ (n choose ⌊n/2⌋).

Since (n choose ⌊n/2⌋) is the size of the largest level of B_n, it follows that B_n is Sperner.

There is another nice way to show directly that B_n is Sperner, namely, by constructing an explicit order-matching μ : (B_n)_i → (B_n)_{i+1} when i < n/2. We will define μ by giving an example. Let n = 21, i = 9, and S = {3, 4, 5, 8, 12, 13, 17, 19, 20}. We want to define μ(S). Let (a_1, a_2, ..., a_21) be a sequence of ±1's, where a_j = 1 if j ∈ S, and a_j = −1 if j ∉ S. For the set S above we get the sequence (writing − for −1)

−, −, 1, 1, 1, −, −, 1, −, −, −, 1, 1, −, −, −, 1, −, 1, 1, −.

Replace any two consecutive terms 1, − with 0, 0:

−, −, 1, 1, 0, 0, −, 0, 0, −, −, 1, 0, 0, −, −, 0, 0, 1, 0, 0.

Ignore the 0's and replace any two consecutive terms 1, − with 0, 0:

−, −, 1, 0, 0, 0, 0, 0, 0, −, −, 0, 0, 0, 0, −, 0, 0, 1, 0, 0.

Continue:

−, −, 0, 0, 0, 0, 0, 0, 0, 0, −, 0, 0, 0, 0, −, 0, 0, 1, 0, 0.

At this stage no further replacement is possible. The nonzero terms consist of a sequence of −'s followed by a sequence of 1's. There is at least one − since i < n/2. Let k be the position (coordinate) of the last −; here k = 16. Define μ(S) = S ∪ {k} = S ∪ {16}. The reader can check that this procedure
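The bracketing rule just described is short to implement. The sketch below (illustrative names) reproduces the worked example, returning k = 16, and then verifies on a small level of B_6 that the map is an injective order-matching:

```python
from itertools import combinations

def mu(S, n):
    """The bracketing order-matching: a_j = 1 if j in S, else -1; repeatedly
    cancel a 1 that is immediately followed (ignoring cancelled positions)
    by a -1; then adjoin the position of the last surviving -1 to S."""
    a = [1 if j in S else -1 for j in range(1, n + 1)]
    cancelled = True
    while cancelled:
        cancelled = False
        live = [j for j in range(n) if a[j] != 0]
        for p, q in zip(live, live[1:]):
            if a[p] == 1 and a[q] == -1:
                a[p] = a[q] = 0
                cancelled = True
                break
    k = max(j + 1 for j in range(n) if a[j] == -1)
    return frozenset(S) | {k}

# The worked example from the text: n = 21, i = 9 gives k = 16.
S = {3, 4, 5, 8, 12, 13, 17, 19, 20}
assert mu(S, 21) == frozenset(S) | {16}

# Check that mu really is an order-matching (B_6)_2 -> (B_6)_3.
lvl = [frozenset(c) for c in combinations(range(1, 7), 2)]
images = [mu(s, 6) for s in lvl]
assert all(s < t for s, t in zip(lvl, images))     # x < mu(x)
assert len(set(images)) == len(images)             # injective
```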

gives an order-matching. In particular, why is μ injective (one-to-one), i.e., why can we recover S from μ(S)?

In view of the above elegant proof of Lubell and the explicit description of an order-matching μ : (B_n)_i → (B_n)_{i+1}, the reader may be wondering what was the point of giving a rather complicated and indirect proof using linear algebra. Admittedly, if all we could obtain from the linear algebra machinery we have developed was just another proof of Sperner's theorem, then it would hardly have been worth the effort. But in the next section we will show how Theorem 4.7, when combined with a little finite group theory, can be used to obtain many interesting combinatorial results for which simple, direct proofs are not known.

5 Group actions on boolean algebras.

Let us begin by reviewing some facts from group theory. Suppose that X is an n-element set and that G is a group. We say that G acts on the set X if for every element π of G we associate a permutation (also denoted π) of X, such that for all x ∈ X and π, σ ∈ G we have

π(σ(x)) = (πσ)(x).

Thus [why?] an action of G on X is the same as a homomorphism ϕ : G → S_X, where S_X denotes the symmetric group of all permutations of X. We sometimes write π · x instead of π(x).

5.1 Example. (a) Let the real number α act on the xy-plane by rotation counterclockwise around the origin by an angle of α radians. It is easy to check that this defines an action of the group R of real numbers (under addition) on the xy-plane.

(b) Now let α ∈ R act by translation by a distance α to the right (i.e., adding (α, 0)). This yields a completely different action of R on the xy-plane.

(c) Let X = {a, b, c, d} and G = Z_2 × Z_2 = {(0, 0), (0, 1), (1, 0), (1, 1)}. Let G act as follows:

(0, 1) · a = b, (0, 1) · b = a, (0, 1) · c = c, (0, 1) · d = d
(1, 0) · a = a, (1, 0) · b = b, (1, 0) · c = d, (1, 0) · d = c.

The reader should check that this does indeed define an action. In particular, since (1, 0) and (0, 1) generate G, we don't need to define the action of (0, 0) and (1, 1); they are uniquely determined.

(d) Let X and G be as in (c), but now define the action by

(0, 1) · a = b, (0, 1) · b = a, (0, 1) · c = d, (0, 1) · d = c
(1, 0) · a = c, (1, 0) · b = d, (1, 0) · c = a, (1, 0) · d = b.

Again one can check that we have an action of Z_2 × Z_2 on {a, b, c, d}.

Recall what is meant by an orbit of the action of a group G on a set X. Namely, we say that two elements x, y of X are G-equivalent if π(x) = y for some π ∈ G. The relation of G-equivalence is an equivalence relation, and the equivalence classes are called orbits. Thus x and y are in the same orbit if π(x) = y for some π ∈ G. The orbits form a partition of X, i.e., they are pairwise-disjoint, nonempty subsets of X whose union is X. The orbit containing x is denoted Gx; this is sensible notation since Gx consists of all elements π(x) where π ∈ G. Thus Gx = Gy if and only if x and y are G-equivalent (i.e., in the same G-orbit). The set of all G-orbits is denoted X/G.

5.2 Example. (a) In Example 5.1(a), the orbits are circles with center (0, 0) (including the degenerate circle whose only point is (0, 0)).

(b) In Example 5.1(b), the orbits are horizontal lines. Note that although in (a) and (b) the same group G acts on the same set X, the orbits are different.

(c) In Example 5.1(c), the orbits are {a, b} and {c, d}.

(d) In Example 5.1(d), there is only one orbit, {a, b, c, d}. Again we have a situation in which a group G acts on a set X in two different ways, with different orbits.

We wish to consider the situation where X = B_n, the boolean algebra of rank n (so #B_n = 2^n). We begin by defining an automorphism of a poset P to be an isomorphism ϕ : P → P. (This definition is exactly analogous to the definition of an automorphism of a group, ring, etc.) The set of all automorphisms of P forms a group, denoted Aut(P) and called the automorphism group of P, under the operation of composition of functions (just as is the case for groups, rings, etc.)

Now consider the case P = B_n. Any permutation π of {1, ..., n} acts on B_n as follows: if x = {i_1, i_2, ..., i_k} ∈ B_n, then

π(x) = {π(i_1), π(i_2), ..., π(i_k)}.   (24)

This action of π on B_n is an automorphism [why?]; in particular, if #x = i, then also #π(x) = i. Equation (24) defines an action of the symmetric group

S_n of all permutations of {1, ..., n} on B_n [why?]. (In fact, it is not hard to show that every automorphism of B_n is of the form (24) for π ∈ S_n.) In particular, any subgroup G of S_n acts on B_n via (24) (where we restrict π to belong to G). In what follows this action is always meant.

5.3 Example. Let n = 3, and let G be the subgroup of S_3 with elements e and (1, 2). Here e denotes the identity permutation, and (using disjoint cycle notation) (1, 2) denotes the permutation which interchanges 1 and 2, and fixes 3. There are six orbits of G (acting on B_3). Writing e.g. 13 as short for {1, 3}, the six orbits are {Ø}, {1, 2}, {3}, {12}, {13, 23}, and {123}.

We now define the class of posets which will be of interest to us here. Later we will give some special cases of particular interest.

5.4 Definition. Let G be a subgroup of S_n. Define the quotient poset B_n/G as follows: the elements of B_n/G are the orbits of G. If O and O′ are two orbits, then define O ≤ O′ in B_n/G if there exist x ∈ O and y ∈ O′ such that x ≤ y in B_n. (It's easy to check that this relation is indeed a partial order.)

5.5 Example. (a) Let n = 3 and G be the group of order two generated by the cycle (1, 2), as in Example 5.3. Then the Hasse diagram of B_3/G is shown below, where each element (orbit) is labeled by one of its elements.

[Figure: Hasse diagram of B_3/G, with orbits labeled Ø, 1, 3, 12, 13, 123.]

(b) Let n = 5 and G be the group of order five generated by the cycle (1, 2, 3, 4, 5). Then B_5/G has Hasse diagram
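Orbits of a subgroup of S_n acting on B_n are easy to compute. The sketch below (illustrative names; generators given as dicts on {1, ..., n}) recovers the six orbits of Example 5.3 and the level sizes of Example 5.5(b):

```python
from itertools import combinations

def orbits(gens, n):
    """Orbits on B_n of the group generated by gens, each generator a
    permutation of {1, ..., n} given as a dict j -> pi(j)."""
    def apply(p, s):
        return frozenset(p[j] for j in s)
    all_sets = [frozenset(c) for i in range(n + 1)
                for c in combinations(range(1, n + 1), i)]
    seen, orbs = set(), []
    for s in all_sets:
        if s in seen:
            continue
        orb, frontier = {s}, [s]
        while frontier:                  # close the orbit under the generators
            t = frontier.pop()
            for p in gens:
                u = apply(p, t)
                if u not in orb:
                    orb.add(u)
                    frontier.append(u)
        seen |= orb
        orbs.append(frozenset(orb))
    return orbs

# Example 5.3: G = {e, (1,2)} acting on B_3 has six orbits.
assert len(orbits([{1: 2, 2: 1, 3: 3}], 3)) == 6

# Example 5.5(b): G generated by the 5-cycle; B_5/G has level sizes
# 1, 1, 2, 2, 1, 1.
sizes = [0] * 6
for orb in orbits([{1: 2, 2: 3, 3: 4, 4: 5, 5: 1}], 5):
    sizes[len(next(iter(orb)))] += 1
assert sizes == [1, 1, 2, 2, 1, 1]
```

Note how the output of the second check already exhibits the rank-symmetry proved in Proposition 5.6 below.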

[Figure: Hasse diagram of B_5/G.]

One simple property of a quotient poset B_n/G is the following.

5.6 Proposition. The quotient poset B_n/G defined above is graded of rank n and rank-symmetric.

Proof. We leave as an exercise the easy proof that B_n/G is graded of rank n, and that the rank of an element O of B_n/G is just the rank in B_n of any of the elements x of O. Thus the number of elements p_i(B_n/G) of rank i is equal to the number of orbits O ∈ (B_n)_i/G. If x ∈ B_n, then let x̄ denote the set-theoretic complement of x, i.e.,

x̄ = {1, ..., n} − x = {1 ≤ i ≤ n : i ∉ x}.

Then {x_1, ..., x_j} is an orbit of i-element subsets of {1, ..., n} if and only if {x̄_1, ..., x̄_j} is an orbit of (n − i)-element subsets [why?]. Hence #((B_n)_i/G) = #((B_n)_{n−i}/G), so B_n/G is rank-symmetric.

Let π ∈ S_n. We associate with π a linear transformation (still denoted π) π : R(B_n)_i → R(B_n)_i by the rule

π( Σ_{x ∈ (B_n)_i} c_x x ) = Σ_{x ∈ (B_n)_i} c_x π(x),

where each c_x is a real number. (This defines an action of S_n, or of any subgroup G of S_n, on the vector space R(B_n)_i.) The matrix of π with

respect to the basis (B_n)_i is just a permutation matrix, i.e., a matrix with one 1 in every row and column, and 0's elsewhere. We will be interested in elements of R(B_n)_i which are fixed by every element of a subgroup G of S_n. The set of all such elements is denoted R(B_n)_i^G, so

R(B_n)_i^G = {v ∈ R(B_n)_i : π(v) = v for all π ∈ G}.

5.7 Lemma. A basis for R(B_n)_i^G consists of the elements

v_O := Σ_{x ∈ O} x,

where O ∈ (B_n)_i/G, the set of G-orbits for the action of G on (B_n)_i.

Proof. First note that if O is an orbit and x ∈ O, then by definition of orbit we have π(x) ∈ O for all π ∈ G (or all π ∈ S_n). Since π permutes the elements of (B_n)_i, it follows that π permutes the elements of O. Thus π(v_O) = v_O, so v_O ∈ R(B_n)_i^G. It is clear that the v_O's are linearly independent since any x ∈ (B_n)_i appears with nonzero coefficient in exactly one v_O. It remains to show that the v_O's span R(B_n)_i^G, i.e., any

v = Σ_{x ∈ (B_n)_i} c_x x ∈ R(B_n)_i^G

can be written as a linear combination of v_O's. Given x ∈ (B_n)_i, let G_x = {π ∈ G : π(x) = x}, the stabilizer of x. We leave as an exercise the standard fact that π(x) = σ(x) (where π, σ ∈ G) if and only if π and σ belong to the same left coset of G_x, i.e., πG_x = σG_x. It follows that in the multiset of elements π(x), where π ranges over all elements of G and x is fixed, every element y in the orbit Gx appears #G_x times, and no other elements appear. In other words,

Σ_{π ∈ G} π(x) = #G_x · v_{Gx}.

(Do not confuse the orbit Gx with the subgroup G_x!) Now apply π to v and sum on all π ∈ G. Since π(v) = v (because v ∈ R(B_n)_i^G), we get

#G · v = Σ_{π ∈ G} π(v) = Σ_{π ∈ G} Σ_{x ∈ (B_n)_i} c_x π(x)
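The multiset identity Σ_{π∈G} π(x) = #G_x · v_{Gx} used in this proof can be confirmed numerically. A small sketch (illustrative names; G is the cyclic group generated by the 5-cycle) counts how often each element of the orbit appears:

```python
from collections import Counter
from itertools import combinations

def apply(p, s):
    return frozenset(p[j] for j in s)

# G = cyclic group generated by the 5-cycle (1,2,3,4,5), listed explicitly.
c = {1: 2, 2: 3, 3: 4, 4: 5, 5: 1}
G = [{j: j for j in range(1, 6)}]
for _ in range(4):
    G.append({j: c[G[-1][j]] for j in range(1, 6)})

for x in (frozenset(t) for t in combinations(range(1, 6), 2)):
    multiset = Counter(apply(p, x) for p in G)    # all pi(x), pi in G
    stab = sum(1 for p in G if apply(p, x) == x)  # #G_x
    # Every y in the orbit Gx appears exactly #G_x times, nothing else:
    assert all(mult == stab for mult in multiset.values())
    assert len(multiset) * stab == len(G)         # orbit-stabilizer relation
```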

= Σ_{x ∈ (B_n)_i} c_x ( Σ_{π ∈ G} π(x) )

= Σ_{x ∈ (B_n)_i} c_x #G_x · v_{Gx}.

Dividing by #G expresses v as a linear combination of the elements v_{Gx} (or v_O), as desired.

Now let us consider the effect of applying the order-raising operator U_i to an element v of R(B_n)_i^G.

5.8 Lemma. If v ∈ R(B_n)_i^G, then U_i(v) ∈ R(B_n)_{i+1}^G.

Proof. Note that since π ∈ G is an automorphism of B_n, we have x < y in B_n if and only if π(x) < π(y) in B_n. It follows [why?] that if x ∈ (B_n)_i then U_i(π(x)) = π(U_i(x)). Since U_i and π are linear transformations, it follows by linearity that U_i(π(u)) = π(U_i(u)) for all u ∈ R(B_n)_i. (In other words, U_i π = π U_i.) Then

π(U_i(v)) = U_i(π(v)) = U_i(v),

so U_i(v) ∈ R(B_n)_{i+1}^G, as desired.

We come to the main result of this section, and indeed our main result on the Sperner property.

5.9 Theorem. Let G be a subgroup of S_n. Then the quotient poset B_n/G is graded of rank n, rank-symmetric, rank-unimodal, and Sperner.

Proof. Let P = B_n/G. We have already seen in Proposition 5.6 that P is graded of rank n and rank-symmetric. We want to define order-raising operators Û_i : RP_i → RP_{i+1} and order-lowering operators D̂_i : RP_i → RP_{i−1}. Let us first consider just Û_i. The idea is to identify the basis element v_O of R(B_n)_i^G with the basis element O of RP_i, and to let Û_i : RP_i → RP_{i+1} correspond to the usual order-raising operator U_i : R(B_n)_i → R(B_n)_{i+1}. More precisely,

suppose that the order-raising operator U_i for B_n given by (18) satisfies

U_i(v_O) = Σ_{O′ ∈ (B_n)_{i+1}/G} c_{O,O′} v_{O′},   (25)

where O ∈ (B_n)_i/G. (Note that by Lemma 5.8, U_i(v_O) does indeed have the form given by (25).) Then define the linear operator Û_i : R((B_n)_i/G) → R((B_n)_{i+1}/G) by

Û_i(O) = Σ_{O′ ∈ (B_n)_{i+1}/G} c_{O,O′} O′.

We claim that Û_i is order-raising. We need to show that if c_{O,O′} ≠ 0, then O′ > O in B_n/G. Since v_{O′} = Σ_{x′ ∈ O′} x′, the only way c_{O,O′} ≠ 0 in (25) is for some x′ ∈ O′ to satisfy x′ > x for some x ∈ O. But this is just what it means for O′ > O, so Û_i is order-raising.

Now comes the heart of the argument. We want to show that Û_i is one-to-one for i < n/2. Now by Theorem 4.7, U_i is one-to-one for i < n/2. Thus the restriction of U_i to the subspace R(B_n)_i^G is one-to-one. (The restriction of a one-to-one function is always one-to-one.) But U_i and Û_i are exactly the same transformation, except for the names of the basis elements on which they act. Thus Û_i is also one-to-one for i < n/2.

An exactly analogous argument can be applied to D_i instead of U_i. We obtain one-to-one order-lowering operators D̂_i : R(B_n)_i^G → R(B_n)_{i−1}^G for i > n/2. It follows from Proposition 4.4, Lemma 4.5, and (20) that B_n/G is rank-unimodal and Sperner, completing the proof.

We will consider two interesting applications of Theorem 5.9. For our first application, we let n = (m choose 2) for some m ≥ 1, and let M = {1, ..., m}. Let X = (M choose 2), the set of all two-element subsets of M. Think of the elements of X as (possible) edges of a graph with vertex set M. If B_X is the boolean algebra of all subsets of X (so B_X and B_n are isomorphic), then an element x of B_X is a collection of edges on the vertex set M, in other words, just a simple graph on M. Define a subgroup G of S_X as follows: informally, G consists of all permutations of the edges (M choose 2) that are induced from permutations of the vertices M. More precisely, if π ∈ S_m, then define π̂ ∈ S_X by

π̂({i, j}) = {π(i), π(j)}.
Thus G is isomorphic to S_m.
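The conclusion of Theorem 5.9 can be tested directly on the quotient of Example 5.5(b). The sketch below (illustrative names) builds B_5/G for the cyclic group of order five and confirms by exhaustive search that its largest antichain equals its largest level:

```python
from itertools import combinations

def apply(p, s):
    return frozenset(p[j] for j in s)

n = 5
c = {1: 2, 2: 3, 3: 4, 4: 5, 5: 1}
G, p = [], {j: j for j in range(1, n + 1)}
for _ in range(n):
    G.append(p)
    p = {j: c[p[j]] for j in p}

subsets = [frozenset(s) for i in range(n + 1)
           for s in combinations(range(1, n + 1), i)]
orbs = sorted({frozenset(apply(q, s) for q in G) for s in subsets},
              key=lambda O: len(next(iter(O))))

def comparable(O1, O2):
    """Distinct orbits are comparable in B_n/G iff some x in one lies
    strictly below some y in the other (Definition 5.4)."""
    return any(x < y or y < x for x in O1 for y in O2)

best = 0
for mask in range(1 << len(orbs)):                 # all sets of orbits
    chosen = [orbs[k] for k in range(len(orbs)) if mask >> k & 1]
    if all(not comparable(a, b) for a in chosen for b in chosen if a != b):
        best = max(best, len(chosen))

largest_level = max(sum(1 for O in orbs if len(next(iter(O))) == i)
                    for i in range(n + 1))
assert best == largest_level == 2                  # B_5/G is Sperner
```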


More information

Numerical Analysis Lecture Notes

Numerical Analysis Lecture Notes Numerical Analysis Lecture Notes Peter J. Olver 5. Inner Products and Norms The norm of a vector is a measure of its size. Besides the familiar Euclidean norm based on the dot product, there are a number

More information

FUNCTIONAL ANALYSIS LECTURE NOTES: QUOTIENT SPACES

FUNCTIONAL ANALYSIS LECTURE NOTES: QUOTIENT SPACES FUNCTIONAL ANALYSIS LECTURE NOTES: QUOTIENT SPACES CHRISTOPHER HEIL 1. Cosets and the Quotient Space Any vector space is an abelian group under the operation of vector addition. So, if you are have studied

More information

The Determinant: a Means to Calculate Volume

The Determinant: a Means to Calculate Volume The Determinant: a Means to Calculate Volume Bo Peng August 20, 2007 Abstract This paper gives a definition of the determinant and lists many of its well-known properties Volumes of parallelepipeds are

More information

ISOMETRIES OF R n KEITH CONRAD

ISOMETRIES OF R n KEITH CONRAD ISOMETRIES OF R n KEITH CONRAD 1. Introduction An isometry of R n is a function h: R n R n that preserves the distance between vectors: h(v) h(w) = v w for all v and w in R n, where (x 1,..., x n ) = x

More information

a 11 x 1 + a 12 x 2 + + a 1n x n = b 1 a 21 x 1 + a 22 x 2 + + a 2n x n = b 2.

a 11 x 1 + a 12 x 2 + + a 1n x n = b 1 a 21 x 1 + a 22 x 2 + + a 2n x n = b 2. Chapter 1 LINEAR EQUATIONS 1.1 Introduction to linear equations A linear equation in n unknowns x 1, x,, x n is an equation of the form a 1 x 1 + a x + + a n x n = b, where a 1, a,..., a n, b are given

More information

Classification of Cartan matrices

Classification of Cartan matrices Chapter 7 Classification of Cartan matrices In this chapter we describe a classification of generalised Cartan matrices This classification can be compared as the rough classification of varieties in terms

More information

LINEAR ALGEBRA W W L CHEN

LINEAR ALGEBRA W W L CHEN LINEAR ALGEBRA W W L CHEN c W W L Chen, 1997, 2008 This chapter is available free to all individuals, on understanding that it is not to be used for financial gain, and may be downloaded and/or photocopied,

More information

MATH 304 Linear Algebra Lecture 20: Inner product spaces. Orthogonal sets.

MATH 304 Linear Algebra Lecture 20: Inner product spaces. Orthogonal sets. MATH 304 Linear Algebra Lecture 20: Inner product spaces. Orthogonal sets. Norm The notion of norm generalizes the notion of length of a vector in R n. Definition. Let V be a vector space. A function α

More information

Chapter 6. Orthogonality

Chapter 6. Orthogonality 6.3 Orthogonal Matrices 1 Chapter 6. Orthogonality 6.3 Orthogonal Matrices Definition 6.4. An n n matrix A is orthogonal if A T A = I. Note. We will see that the columns of an orthogonal matrix must be

More information

Numerical Analysis Lecture Notes

Numerical Analysis Lecture Notes Numerical Analysis Lecture Notes Peter J. Olver 6. Eigenvalues and Singular Values In this section, we collect together the basic facts about eigenvalues and eigenvectors. From a geometrical viewpoint,

More information

Group Theory. Contents

Group Theory. Contents Group Theory Contents Chapter 1: Review... 2 Chapter 2: Permutation Groups and Group Actions... 3 Orbits and Transitivity... 6 Specific Actions The Right regular and coset actions... 8 The Conjugation

More information

MATH 4330/5330, Fourier Analysis Section 11, The Discrete Fourier Transform

MATH 4330/5330, Fourier Analysis Section 11, The Discrete Fourier Transform MATH 433/533, Fourier Analysis Section 11, The Discrete Fourier Transform Now, instead of considering functions defined on a continuous domain, like the interval [, 1) or the whole real line R, we wish

More information

it is easy to see that α = a

it is easy to see that α = a 21. Polynomial rings Let us now turn out attention to determining the prime elements of a polynomial ring, where the coefficient ring is a field. We already know that such a polynomial ring is a UF. Therefore

More information

4.5 Linear Dependence and Linear Independence

4.5 Linear Dependence and Linear Independence 4.5 Linear Dependence and Linear Independence 267 32. {v 1, v 2 }, where v 1, v 2 are collinear vectors in R 3. 33. Prove that if S and S are subsets of a vector space V such that S is a subset of S, then

More information

Inner Product Spaces and Orthogonality

Inner Product Spaces and Orthogonality Inner Product Spaces and Orthogonality week 3-4 Fall 2006 Dot product of R n The inner product or dot product of R n is a function, defined by u, v a b + a 2 b 2 + + a n b n for u a, a 2,, a n T, v b,

More information

MATH 551 - APPLIED MATRIX THEORY

MATH 551 - APPLIED MATRIX THEORY MATH 55 - APPLIED MATRIX THEORY FINAL TEST: SAMPLE with SOLUTIONS (25 points NAME: PROBLEM (3 points A web of 5 pages is described by a directed graph whose matrix is given by A Do the following ( points

More information

LINEAR ALGEBRA. September 23, 2010

LINEAR ALGEBRA. September 23, 2010 LINEAR ALGEBRA September 3, 00 Contents 0. LU-decomposition.................................... 0. Inverses and Transposes................................. 0.3 Column Spaces and NullSpaces.............................

More information

LEARNING OBJECTIVES FOR THIS CHAPTER

LEARNING OBJECTIVES FOR THIS CHAPTER CHAPTER 2 American mathematician Paul Halmos (1916 2006), who in 1942 published the first modern linear algebra book. The title of Halmos s book was the same as the title of this chapter. Finite-Dimensional

More information

1 Solving LPs: The Simplex Algorithm of George Dantzig

1 Solving LPs: The Simplex Algorithm of George Dantzig Solving LPs: The Simplex Algorithm of George Dantzig. Simplex Pivoting: Dictionary Format We illustrate a general solution procedure, called the simplex algorithm, by implementing it on a very simple example.

More information

Systems of Linear Equations

Systems of Linear Equations Systems of Linear Equations Beifang Chen Systems of linear equations Linear systems A linear equation in variables x, x,, x n is an equation of the form a x + a x + + a n x n = b, where a, a,, a n and

More information

Lecture 15 An Arithmetic Circuit Lowerbound and Flows in Graphs

Lecture 15 An Arithmetic Circuit Lowerbound and Flows in Graphs CSE599s: Extremal Combinatorics November 21, 2011 Lecture 15 An Arithmetic Circuit Lowerbound and Flows in Graphs Lecturer: Anup Rao 1 An Arithmetic Circuit Lower Bound An arithmetic circuit is just like

More information

The Matrix Elements of a 3 3 Orthogonal Matrix Revisited

The Matrix Elements of a 3 3 Orthogonal Matrix Revisited Physics 116A Winter 2011 The Matrix Elements of a 3 3 Orthogonal Matrix Revisited 1. Introduction In a class handout entitled, Three-Dimensional Proper and Improper Rotation Matrices, I provided a derivation

More information

SOLUTIONS TO EXERCISES FOR. MATHEMATICS 205A Part 3. Spaces with special properties

SOLUTIONS TO EXERCISES FOR. MATHEMATICS 205A Part 3. Spaces with special properties SOLUTIONS TO EXERCISES FOR MATHEMATICS 205A Part 3 Fall 2008 III. Spaces with special properties III.1 : Compact spaces I Problems from Munkres, 26, pp. 170 172 3. Show that a finite union of compact subspaces

More information

Notes on Symmetric Matrices

Notes on Symmetric Matrices CPSC 536N: Randomized Algorithms 2011-12 Term 2 Notes on Symmetric Matrices Prof. Nick Harvey University of British Columbia 1 Symmetric Matrices We review some basic results concerning symmetric matrices.

More information

Elements of Abstract Group Theory

Elements of Abstract Group Theory Chapter 2 Elements of Abstract Group Theory Mathematics is a game played according to certain simple rules with meaningless marks on paper. David Hilbert The importance of symmetry in physics, and for

More information

Linear Algebra Review. Vectors

Linear Algebra Review. Vectors Linear Algebra Review By Tim K. Marks UCSD Borrows heavily from: Jana Kosecka kosecka@cs.gmu.edu http://cs.gmu.edu/~kosecka/cs682.html Virginia de Sa Cogsci 8F Linear Algebra review UCSD Vectors The length

More information

6.3 Conditional Probability and Independence

6.3 Conditional Probability and Independence 222 CHAPTER 6. PROBABILITY 6.3 Conditional Probability and Independence Conditional Probability Two cubical dice each have a triangle painted on one side, a circle painted on two sides and a square painted

More information

BANACH AND HILBERT SPACE REVIEW

BANACH AND HILBERT SPACE REVIEW BANACH AND HILBET SPACE EVIEW CHISTOPHE HEIL These notes will briefly review some basic concepts related to the theory of Banach and Hilbert spaces. We are not trying to give a complete development, but

More information

3 Some Integer Functions

3 Some Integer Functions 3 Some Integer Functions A Pair of Fundamental Integer Functions The integer function that is the heart of this section is the modulo function. However, before getting to it, let us look at some very simple

More information

Continued Fractions and the Euclidean Algorithm

Continued Fractions and the Euclidean Algorithm Continued Fractions and the Euclidean Algorithm Lecture notes prepared for MATH 326, Spring 997 Department of Mathematics and Statistics University at Albany William F Hammond Table of Contents Introduction

More information

Determinants in the Kronecker product of matrices: The incidence matrix of a complete graph

Determinants in the Kronecker product of matrices: The incidence matrix of a complete graph FPSAC 2009 DMTCS proc (subm), by the authors, 1 10 Determinants in the Kronecker product of matrices: The incidence matrix of a complete graph Christopher R H Hanusa 1 and Thomas Zaslavsky 2 1 Department

More information

Solutions to Math 51 First Exam January 29, 2015

Solutions to Math 51 First Exam January 29, 2015 Solutions to Math 5 First Exam January 29, 25. ( points) (a) Complete the following sentence: A set of vectors {v,..., v k } is defined to be linearly dependent if (2 points) there exist c,... c k R, not

More information

Discrete Mathematics. Hans Cuypers. October 11, 2007

Discrete Mathematics. Hans Cuypers. October 11, 2007 Hans Cuypers October 11, 2007 1 Contents 1. Relations 4 1.1. Binary relations................................ 4 1.2. Equivalence relations............................. 6 1.3. Relations and Directed Graphs.......................

More information

Solution to Homework 2

Solution to Homework 2 Solution to Homework 2 Olena Bormashenko September 23, 2011 Section 1.4: 1(a)(b)(i)(k), 4, 5, 14; Section 1.5: 1(a)(b)(c)(d)(e)(n), 2(a)(c), 13, 16, 17, 18, 27 Section 1.4 1. Compute the following, if

More information

1 Introduction to Matrices

1 Introduction to Matrices 1 Introduction to Matrices In this section, important definitions and results from matrix algebra that are useful in regression analysis are introduced. While all statements below regarding the columns

More information

A linear combination is a sum of scalars times quantities. Such expressions arise quite frequently and have the form

A linear combination is a sum of scalars times quantities. Such expressions arise quite frequently and have the form Section 1.3 Matrix Products A linear combination is a sum of scalars times quantities. Such expressions arise quite frequently and have the form (scalar #1)(quantity #1) + (scalar #2)(quantity #2) +...

More information

Notes on Algebraic Structures. Peter J. Cameron

Notes on Algebraic Structures. Peter J. Cameron Notes on Algebraic Structures Peter J. Cameron ii Preface These are the notes of the second-year course Algebraic Structures I at Queen Mary, University of London, as I taught it in the second semester

More information

MATH 304 Linear Algebra Lecture 9: Subspaces of vector spaces (continued). Span. Spanning set.

MATH 304 Linear Algebra Lecture 9: Subspaces of vector spaces (continued). Span. Spanning set. MATH 304 Linear Algebra Lecture 9: Subspaces of vector spaces (continued). Span. Spanning set. Vector space A vector space is a set V equipped with two operations, addition V V (x,y) x + y V and scalar

More information

The last three chapters introduced three major proof techniques: direct,

The last three chapters introduced three major proof techniques: direct, CHAPTER 7 Proving Non-Conditional Statements The last three chapters introduced three major proof techniques: direct, contrapositive and contradiction. These three techniques are used to prove statements

More information

Lecture 1: Schur s Unitary Triangularization Theorem

Lecture 1: Schur s Unitary Triangularization Theorem Lecture 1: Schur s Unitary Triangularization Theorem This lecture introduces the notion of unitary equivalence and presents Schur s theorem and some of its consequences It roughly corresponds to Sections

More information

by the matrix A results in a vector which is a reflection of the given

by the matrix A results in a vector which is a reflection of the given Eigenvalues & Eigenvectors Example Suppose Then So, geometrically, multiplying a vector in by the matrix A results in a vector which is a reflection of the given vector about the y-axis We observe that

More information

Section 1.1. Introduction to R n

Section 1.1. Introduction to R n The Calculus of Functions of Several Variables Section. Introduction to R n Calculus is the study of functional relationships and how related quantities change with each other. In your first exposure to

More information

Recall the basic property of the transpose (for any A): v A t Aw = v w, v, w R n.

Recall the basic property of the transpose (for any A): v A t Aw = v w, v, w R n. ORTHOGONAL MATRICES Informally, an orthogonal n n matrix is the n-dimensional analogue of the rotation matrices R θ in R 2. When does a linear transformation of R 3 (or R n ) deserve to be called a rotation?

More information

Linear Algebra Done Wrong. Sergei Treil. Department of Mathematics, Brown University

Linear Algebra Done Wrong. Sergei Treil. Department of Mathematics, Brown University Linear Algebra Done Wrong Sergei Treil Department of Mathematics, Brown University Copyright c Sergei Treil, 2004, 2009, 2011, 2014 Preface The title of the book sounds a bit mysterious. Why should anyone

More information

MATH 304 Linear Algebra Lecture 18: Rank and nullity of a matrix.

MATH 304 Linear Algebra Lecture 18: Rank and nullity of a matrix. MATH 304 Linear Algebra Lecture 18: Rank and nullity of a matrix. Nullspace Let A = (a ij ) be an m n matrix. Definition. The nullspace of the matrix A, denoted N(A), is the set of all n-dimensional column

More information

On Integer Additive Set-Indexers of Graphs

On Integer Additive Set-Indexers of Graphs On Integer Additive Set-Indexers of Graphs arxiv:1312.7672v4 [math.co] 2 Mar 2014 N K Sudev and K A Germina Abstract A set-indexer of a graph G is an injective set-valued function f : V (G) 2 X such that

More information

5.3 The Cross Product in R 3

5.3 The Cross Product in R 3 53 The Cross Product in R 3 Definition 531 Let u = [u 1, u 2, u 3 ] and v = [v 1, v 2, v 3 ] Then the vector given by [u 2 v 3 u 3 v 2, u 3 v 1 u 1 v 3, u 1 v 2 u 2 v 1 ] is called the cross product (or

More information

Zachary Monaco Georgia College Olympic Coloring: Go For The Gold

Zachary Monaco Georgia College Olympic Coloring: Go For The Gold Zachary Monaco Georgia College Olympic Coloring: Go For The Gold Coloring the vertices or edges of a graph leads to a variety of interesting applications in graph theory These applications include various

More information

Matrix Algebra. Some Basic Matrix Laws. Before reading the text or the following notes glance at the following list of basic matrix algebra laws.

Matrix Algebra. Some Basic Matrix Laws. Before reading the text or the following notes glance at the following list of basic matrix algebra laws. Matrix Algebra A. Doerr Before reading the text or the following notes glance at the following list of basic matrix algebra laws. Some Basic Matrix Laws Assume the orders of the matrices are such that

More information

ABSTRACT ALGEBRA: A STUDY GUIDE FOR BEGINNERS

ABSTRACT ALGEBRA: A STUDY GUIDE FOR BEGINNERS ABSTRACT ALGEBRA: A STUDY GUIDE FOR BEGINNERS John A. Beachy Northern Illinois University 2014 ii J.A.Beachy This is a supplement to Abstract Algebra, Third Edition by John A. Beachy and William D. Blair

More information

Solving Systems of Linear Equations

Solving Systems of Linear Equations LECTURE 5 Solving Systems of Linear Equations Recall that we introduced the notion of matrices as a way of standardizing the expression of systems of linear equations In today s lecture I shall show how

More information

December 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B. KITCHENS

December 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B. KITCHENS December 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B KITCHENS The equation 1 Lines in two-dimensional space (1) 2x y = 3 describes a line in two-dimensional space The coefficients of x and y in the equation

More information

17. Inner product spaces Definition 17.1. Let V be a real vector space. An inner product on V is a function

17. Inner product spaces Definition 17.1. Let V be a real vector space. An inner product on V is a function 17. Inner product spaces Definition 17.1. Let V be a real vector space. An inner product on V is a function, : V V R, which is symmetric, that is u, v = v, u. bilinear, that is linear (in both factors):

More information

1 A duality between descents and connectivity.

1 A duality between descents and connectivity. The Descent Set and Connectivity Set of a Permutation 1 Richard P. Stanley 2 Department of Mathematics, Massachusetts Institute of Technology Cambridge, MA 02139, USA rstan@math.mit.edu version of 16 August

More information

G = G 0 > G 1 > > G k = {e}

G = G 0 > G 1 > > G k = {e} Proposition 49. 1. A group G is nilpotent if and only if G appears as an element of its upper central series. 2. If G is nilpotent, then the upper central series and the lower central series have the same

More information