# Matrix techniques for strongly regular graphs and related geometries


Supplement to the notes of W. Haemers, Intensive Course on Finite Geometry and its Applications, University of Ghent, April 3-14.

## 1 Eigenvalues of a regular graph

We give the proof of a result mentioned on page 1, line -5 in [3].

**Theorem 1.1** Let $A$ be the adjacency matrix of a regular graph of degree $k$, and let $\rho$ be an eigenvalue of $A$. Then $|\rho| \le k$. If $\rho = k$, then the corresponding eigenvector is a multiple of the all-one vector $\mathbf{1}$, or the graph is disconnected.

*Proof.* Suppose that $\{1, \dots, v\}$ is the vertex set of the graph, and let $u = (u_1, \dots, u_v)^T$ be an eigenvector corresponding to the eigenvalue $\rho$; w.l.o.g. we may assume that
$$\max_{i \in \{1, \dots, v\}} |u_i| = 1$$
(otherwise we divide the vector by an appropriate scalar), so w.l.o.g. we have $u_j = 1$ for a certain $j \in \{1, \dots, v\}$. The absolute value $|(Au)_j|$ of the $j$-th component of $Au$ is at most $\sum_{i \sim j} |u_i|$; since the absolute values of all components of $u$ are less than or equal to $1$, we have $\sum_{i \sim j} |u_i| \le k$. On the other hand $(Au)_j$ must be equal to $\rho u_j = \rho$, from which we obtain $|\rho| \le k$. If $\rho = k$, then we have $\sum_{i \sim j} u_i = k u_j = k$, so $u_i = 1$ for all vertices $i$ which are adjacent

to $j$. We can repeat the procedure for all these $i$; in the case of a connected graph we find, after a finite number of steps, that $u = \mathbf{1}$. □

**Exercise** Let $G$ be a regular graph of degree $k$ and with adjacency matrix $A$. Prove that

1. the multiplicity of the eigenvalue $k$ is exactly the number of connected components of $G$;
2. $-k$ is an eigenvalue of $A$ if and only if $G$ is bipartite.

The following lemma will be used throughout these notes.

**Lemma 1.2** The $(v \times v)$-matrix $J$ with all entries equal to $1$ has eigenvalues $0$ and $v$, with multiplicities respectively $v - 1$ and $1$. The $(v \times v)$-identity matrix $I$ has eigenvalue $1$ with multiplicity $v$. All $(v \times 1)$-vectors are eigenvectors of $I$.

## 2 The friendship property; polarities in projective planes

A special case of the property $\lambda = \mu$ in a strongly regular graph, mentioned on page 2, line -5 in [3], is $\lambda = \mu = 1$. A graph with this property is said to have the friendship property: each pair of vertices has exactly one common neighbour. We will now determine all regular graphs with this property.

**Theorem 2.1** The only regular graph having the friendship property is the triangle.

*Proof.* Suppose that $G$ is a graph with the friendship property, that $A$ is its adjacency matrix and that it is regular of degree $k$. The diagonal entries of $A^2 = AA^T$ are all equal to $k$ because of the regularity of $G$; the off-diagonal entries of $A^2$ are all equal to $1$ because of the friendship property. This yields $A^2 - (k-1)I = J$. Hence, an eigenvalue $\rho$ of $A$ different from $k$ must satisfy $\rho^2 = k - 1$ (see Lemma 1.2), so $\rho = \pm\sqrt{k-1}$. Now we recall that $k$ has multiplicity $1$, and that $A$ has a zero diagonal and hence trace equal to zero; if $f$ denotes the multiplicity of $\sqrt{k-1}$, we find

$$f = \frac{1}{2}\left(v - 1 - \frac{k}{\sqrt{k-1}}\right).$$
It is clear that $f$ can only be an integer if $k = 2$, and that this value of $k$ corresponds to the triangle. □

We can consider the adjacency matrix $A$ of a graph $G$ as the incidence matrix of a (necessarily symmetric) design. The symmetry of the matrix tells us that this design has a polarity, which has no absolute points as the diagonal of $A$ is zero. If $G$ is regular of degree $k$ and satisfies the friendship property, then two points of the design are contained in exactly one block and two blocks intersect in exactly one point; each block contains exactly $k$ points and each point is contained in exactly $k$ blocks. As a consequence, the design must be a projective plane of order $k - 1$. We just proved, however, that the only possible value for $k$ is $2$; the corresponding projective plane is the triangle and hence is degenerate (in a non-degenerate projective plane there exist four points of which no three are collinear). Thus we have proved the following

**Corollary 2.2** A polarity of a non-degenerate projective plane has at least one absolute point.

We can also determine the maximal number of absolute points of a polarity of a projective plane using matrix techniques.

**Theorem 2.3** A polarity in a projective plane of order $t \ge 2$ has at most $t\sqrt{t} + 1$ absolute points.

*Proof.* Let $A$ be the incidence matrix of a projective plane of order $t \ge 2$ with a polarity; it immediately follows that $A$ is symmetric. Since every pair of points determines a unique line, we have $A^2 = J + tI$, so the eigenvalues of $A$ are $t + 1$, $\sqrt{t}$ and $-\sqrt{t}$. The number of absolute points is precisely the number of ones on the diagonal of $A$. First we suppose that all points are absolute. If an off-diagonal element $a_{ij}$, $i \ne j$, of $A$ were equal to $1$, then we would have $a_{ii} = a_{ij} = a_{ji} = a_{jj} = 1$, which would mean that there are two different lines (labelled by $i$ and $j$) containing a pair of points (labelled by $i$ and $j$), clearly a contradiction. Hence $A$ must be the identity matrix, but this contradicts $t \ge 2$.
We conclude that not all points are absolute. An appropriate permutation yields the following form for the matrix $A$:
$$A = \begin{pmatrix} I & C \\ C^T & D \end{pmatrix},$$

where $I$ is an identity matrix of size $c$, say, and $D$ is a matrix with zero diagonal; note that $c$ is the number of absolute points of the polarity. The quotient matrix $B$ which corresponds to this partition (see page 13 in [3]) is
$$B = \begin{pmatrix} 1 & t \\ \dfrac{tc}{t^2+t+1-c} & t + 1 - \dfrac{tc}{t^2+t+1-c} \end{pmatrix}$$
($tc$ is the number of ones in the submatrix $C$ (or $C^T$), so $\frac{tc}{t^2+t+1-c}$ is the average row sum in $C^T$). One easily calculates that the eigenvalues of $B$ are $\mu_1 = t + 1$ and $\mu_2 = 1 - \frac{tc}{t^2+t+1-c}$; the theory of interlacing tells us that the smallest eigenvalue $\mu_2$ of $B$ must be greater than or equal to the smallest eigenvalue $-\sqrt{t}$ of $A$. This yields $c \le t\sqrt{t} + 1$. □

Now we deal with a situation in which this bound is met. Let $\varphi$ be a Hermitian polarity in a projective plane of square order $t$, and let $U$ be the set of absolute points of $\varphi$; then $|U| = t\sqrt{t} + 1$. We consider the incidence structure $H$ consisting of all absolute points and all non-absolute lines of $\varphi$, which corresponds to the submatrix $C$ of $A$. A closer look at the matrices $A$, $B$ and $C$ tells us that each two points of $H$ are contained in exactly one line of $H$, that each line of $H$ contains $\sqrt{t} + 1$ points of $H$ and that each point of $H$ is contained in $t$ lines of $H$. Consequently $H$ is a $2$-$(t\sqrt{t} + 1, \sqrt{t} + 1, 1)$-design, i.e. a unital.

## 3 The line graph of a graph

Let $G$ be a regular connected graph. The line graph of $G$ is by definition the graph $G'$ whose vertices are the edges of $G$; two edges of $G$ are adjacent if they have a vertex in common. We will investigate whether $G'$ can be a strongly regular graph.

**Theorem 3.1** Let $G$ be a regular connected graph of degree $k$. Then the line graph of $G$ is strongly regular if and only if one of the following holds:

1. $G$ is the pentagon;
2. $G$ is the complete bipartite graph $K_{k,k}$;
3. $G$ is the complete graph $K_{k+1}$.

*Proof.* Let $A$ be the adjacency matrix of $G$, let $C$ be the adjacency matrix of the line graph $G'$ of $G$ and let $N$ be the vertex-edge incidence matrix of $G$. It is clear that
$$NN^T = A + kI, \qquad N^TN = C + 2I;$$
as a consequence, an eigenvalue $\lambda$ of $A$ corresponds to an eigenvalue $\lambda + k - 2$ of $C$. Furthermore an easy counting argument shows that, if $k > 2$, the number of edges of $G$ is greater than the number of vertices of $G$. If $k = 1$, we see that $G$ is the union of disjoint edges, so the line graph $G'$ has no edges (and hence is not strongly regular). If $k = 2$, $G$ is a polygon and $G'$ is isomorphic to $G$; theorem 1.3 in [3] tells us that the pentagon is the only polygon which is a non-trivial strongly regular graph. From now on we may assume $k > 2$, so the matrix $N$ has more columns than rows. As a consequence the matrix $N^TN$ has an eigenvalue $0$, and $C$ has an eigenvalue $-2$. Since $k$ is an eigenvalue of $A$, $2k - 2$ is an eigenvalue of $C$; an eigenvalue $\lambda \ne k$ of $A$ yields an eigenvalue $\lambda + k - 2 \ne 2k - 2$ of $C$. Now we have to consider two cases:

1. If $-k$ is an eigenvalue of $A$, then $G$ is bipartite; it is easily proved that the multiplicity of $-k$ is $1$. That means that $A$ has an eigenvalue $\lambda \ne \pm k$ with multiplicity $v - 2$, where $v$ is the number of vertices of $G$. By the fact that every vector orthogonal to the eigenvectors corresponding to the eigenvalues $k$ and $-k$ must be an eigenvector corresponding to the eigenvalue $\lambda$, it is easily seen that $\lambda$ must be $0$. This implies that $G$ is the complete bipartite graph $K_{k,k}$. So we see that $C$ has eigenvalues $2k - 2$, $k - 2$ and $-2$, and that $G'$ is isomorphic to the $k \times k$ lattice graph $L_k$, an $\mathrm{srg}(k^2, 2k - 2, k - 2, 2)$ which is uniquely determined by its parameters if $k \ne 4$ (see [2]).

2. If $-k$ is not an eigenvalue of $A$, then $G$ is not bipartite and $A$ has precisely two distinct eigenvalues $k$ and $\lambda$, where $\lambda \ne -k$. It can easily be proved (by considering the trace of $A$) that $\lambda$ must be equal to $-1$; this implies that $G$ is the complete graph $K_{k+1}$.
Therefore $C$ has eigenvalues $2k - 2$, $k - 3$ and $-2$, and $G'$ is the triangular graph $T(k+1)$, an $\mathrm{srg}\left(\frac{k(k+1)}{2}, 2k - 2, k - 1, 4\right)$, which is uniquely determined by its parameters if $k \ne 7$ (see [2]). □
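As a small numerical sanity check (our sketch with numpy, not part of the original notes), one can verify the relations $NN^T = A + kI$ and $N^TN = C + 2I$ and the spectrum of the triangular graph for $G = K_5$, so $k = 4$ and $G' = T(5)$:

```python
import numpy as np
from itertools import combinations

k = 4
verts = list(range(k + 1))                       # K5: regular of degree k = 4
edges = list(combinations(verts, 2))             # the 10 vertices of the line graph T(5)
N = np.array([[1 if x in e else 0 for e in edges] for x in verts])  # vertex-edge incidence

A = N @ N.T - k * np.eye(k + 1, dtype=int)       # NN^T = A + kI
C = N.T @ N - 2 * np.eye(len(edges), dtype=int)  # N^T N = C + 2I
assert (A == 1 - np.eye(k + 1, dtype=int)).all() # A is indeed the adjacency matrix of K5

# eigenvalues 2k-2, k-3 and -2 of the triangular graph T(k+1)
eig = np.round(np.linalg.eigvalsh(C)).astype(int)
assert set(eig) == {2 * k - 2, k - 3, -2}
assert len(edges) == k * (k + 1) // 2            # T(k+1) has k(k+1)/2 vertices
```

The eigenvalue $-2$ of $C$ comes from the nullspace of $N$: since $k > 2$, the graph has more edges than vertices.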

## 4 (Partial) linear spaces and their point and line graphs

**Definition** An incidence structure $S = (P, L, I)$, with $P$ a set of points, $L$ a set of lines and $I$ an incidence relation, is a partial linear space of order $(s, t)$ if the following axioms are satisfied:

1. each line is incident with $s + 1$ points;
2. each point is incident with $t + 1$ lines;
3. every two different points are incident with at most one line.

A partial linear space is called a linear space if every two different points of $P$ are incident with exactly one line of $L$.

**Notation** The number of points of a partial linear space is denoted by $v$, the number of lines is denoted by $b$.

**Definition** A partial geometry $\mathrm{pg}(s, t, \alpha)$ is a partial linear space $S = (P, L, I)$ that satisfies the following axiom:

4. if $p \in P$ and $L \in L$ are not incident, then there are $\alpha$ elements $L_1, \dots, L_\alpha$ of $L$ and $\alpha$ elements $p_1, \dots, p_\alpha$ of $P$ such that $p \, I \, L_i$, $p_i \, I \, L_i$ and $p_i \, I \, L$ (for $1 \le i \le \alpha$).

A generalized quadrangle is a $\mathrm{pg}(s, t, 1)$.

**Lemma 4.1 (De Bruijn-Erdős)** Let $S$ be a linear space. Assume that there exists no line of $S$ that contains all points of $S$. Then $b \ge v$.

*Proof.* Let $N$ be the incidence matrix of the linear space $S$. Consider the matrix $NN^T$. The non-diagonal elements of this matrix are all equal to $1$, because of the definition of a linear space. Hence
$$NN^T = J + D \tag{1}$$
with $D$ a diagonal matrix. If there is a point of $S$ that lies on exactly one line of $S$, then we easily see that $S$ contains only that line and that all the points of $S$ are

incident with this line. This is in contradiction with the assumptions. So we know that through every point of $S$ there are at least two lines. From (1) it follows that all the diagonal entries of the matrix $D$ are greater than $0$. So the matrix $D$ is positive definite. The matrix $J$ is a positive semi-definite matrix. Furthermore the sum of a positive definite and a positive semi-definite matrix is positive definite. This proves that $J + D$ is positive definite. But then $J + D$ is non-singular and so $J + D$ has rank $v$. A theorem of linear algebra states that if $C$ is an $(f \times g)$-matrix over the real numbers and $D$ is a $(g \times h)$-matrix over the real numbers, then $\mathrm{rank}(CD) \le \mathrm{rank}(C)$. It follows that $b \ge \mathrm{rank}(N) \ge \mathrm{rank}(NN^T) = v$. This proves the lemma. □

**Definition** Let $S$ be a partial linear space. The point graph of $S$ is the graph whose vertices are the points of $S$; two different vertices are adjacent whenever they are collinear. The line graph of $S$ is the graph whose vertices are the lines of $S$; two different vertices are adjacent whenever they are concurrent.

Let $N$ be the incidence matrix of a partial linear space $S$ of order $(s, t)$, let $A$ be the adjacency matrix of the point graph of $S$ and let $C$ be the adjacency matrix of the line graph of $S$. Then the following relations hold:
$$NN^T = A + (t+1)I \tag{2}$$
$$N^TN = C + (s+1)I \tag{3}$$

**Lemma 4.2** The matrices $NN^T$ and $N^TN$ have the same non-zero eigenvalues (with the same multiplicities).

*Proof.* Let $u$ be an eigenvector of $N^TN$ corresponding to the eigenvalue $\lambda \ne 0$. Then $N^TNu = \lambda u$. Multiplying both sides of this equation on the left by $N$ yields $NN^T(Nu) = \lambda(Nu)$. Suppose that $Nu = 0$; if we multiply this equation on the left by $N^T$ we obtain $N^TNu = \lambda u = 0$, a contradiction since $\lambda \ne 0$. So $Nu$ is a (non-zero) eigenvector of the matrix $NN^T$ corresponding to the eigenvalue $\lambda$. Analogously we find for each eigenvector of $NN^T$ an eigenvector of $N^TN$ corresponding to the same non-zero eigenvalue.
Thus we have proved that $NN^T$ and $N^TN$ have the same non-zero eigenvalues. Now let $\lambda$ be a non-zero eigenvalue of $N^TN$ with multiplicity $m$, and let $\{u_1, \dots, u_m\}$ be a basis for the space of eigenvectors corresponding to $\lambda$. Then

$\{Nu_1, \dots, Nu_m\}$ is a set of eigenvectors of $NN^T$ corresponding to $\lambda$; it is easily seen that this set must be linearly independent (otherwise $\lambda = 0$), and that every eigenvector of $NN^T$ corresponding to $\lambda$ can be represented as a linear combination of elements of $\{Nu_1, \dots, Nu_m\}$. As a consequence the non-zero eigenvalues of $NN^T$ and $N^TN$ have the same multiplicities. □

We consider a partial linear space $S$ with point graph $G$ and line graph $G'$; we will investigate whether $G$ and $G'$ can both be strongly regular.

**Theorem 4.3** Let $S$ be a partial linear space with incidence matrix $N$, point graph $G$ and line graph $G'$; suppose that $G$ and $G'$ are strongly regular graphs. Then one of the following holds:

1. $N$ is a square matrix of full rank, $s = t$, and $G$ and $G'$ have the same parameters;
2. $S$ is a partial geometry.

*Proof.* Let $A$ (respectively $C$) be the adjacency matrix of $G$ (respectively $G'$). By relations (2) and (3) and Lemma 4.2 an eigenvalue $\lambda$ of $A$ corresponds to an eigenvalue $\lambda + t - s$ of $C$. A necessary and sufficient condition for $G$ and $G'$ to be strongly regular is that $A$ and $C$ have exactly two restricted eigenvalues. Two distinct situations occur:

1. Neither $NN^T$ nor $N^TN$ has an eigenvalue $0$. Then $N$ must be a square matrix of full rank, and consequently $s = t$. This also means that $A$ and $C$ have the same eigenvalues (with the same multiplicities), so $G$ and $G'$ are strongly regular graphs with the same parameters.

2. $NN^T$ or $N^TN$ has an eigenvalue $0$. Then both $NN^T$ and $N^TN$ must have an eigenvalue $0$, otherwise either $A$ or $C$ would have an extra eigenvalue. As a consequence $N$ cannot have full rank. As $NN^T$ has an eigenvalue $0$, $A$ has an eigenvalue $-t-1$; furthermore we know that $G$ is regular of degree $k := (t+1)s$. If $l$ denotes the smallest eigenvalue of $A$, we have $l \le -t-1$; from subsection 3.3 in [3] it follows that the maximal number of vertices in a clique of $G$ is
$$1 - \frac{k}{l} = 1 + \frac{k}{-l} \le 1 + \frac{k}{t+1} = s + 1.$$

As there obviously exist cliques of size $s + 1$ in $G$, namely the lines of the partial linear space, we find that $l = -t-1$ and that the lines of the partial linear space are maximal cliques. Subsection 3.3 in [3] implies that there exists a constant $\alpha$ such that each vertex outside a maximal clique in $G$ is adjacent to exactly $\alpha$ vertices of the clique, i.e. our partial linear space is a partial geometry. □

If $S$ is a partial geometry $\mathrm{pg}(s, t, \alpha)$, it is clear that (in the notation of Theorem 4.3) $G$ and $G'$ are strongly regular, that $A$ has eigenvalue $-t-1$ and that $C$ has eigenvalue $-s-1$. Consequently $NN^T$ and $N^TN$ have an eigenvalue $0$, so $N$ does not have full rank. Thus we have proved the following

**Corollary 4.4** Let $S$ be a partial linear space with incidence matrix $N$, point graph $G$ and line graph $G'$; suppose that $G$ and $G'$ are strongly regular graphs. Then $S$ is a partial geometry if and only if $N$ does not have full rank.

**Example** An example of case 1 in Theorem 4.3 is obtained by considering the adjacency matrix of an $\mathrm{srg}(v, k, 0, 1)$ (see theorem 1.3 in [3]) as the incidence matrix of an incidence structure. One easily sees that this incidence structure must be a partial linear space, that the matrix is square and that it has full rank (because it has no eigenvalue $0$).

**Remark** We again use the notation of Theorem 4.3. The $\alpha$-property of a partial geometry $\mathrm{pg}(s, t, \alpha)$ can be expressed in matrix form by considering the product $AN$: if the point labelled by $i$ is incident with the line labelled by $j$, then $(AN)_{ij}$ is equal to $s$; if not, then $(AN)_{ij}$ is equal to $\alpha$. Thus we find $AN = sN + \alpha(J - N)$; using relation (2) we obtain
$$NN^TN = (t + s + 1 - \alpha)N + \alpha J. \tag{4}$$
Equation (4) can be multiplied on the right (respectively left) by $N^T$ to prove that $G$ (respectively $G'$) is a strongly regular graph. In a partial geometry $\mathrm{pg}(s, t, \alpha)$ with $\alpha < s + 1$ a pair of points does not necessarily determine a line.
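Equation (4) can be checked directly on a small partial geometry. The following sketch (ours, using numpy; not part of the original notes) uses the $3 \times 3$ grid, a $\mathrm{pg}(2, 1, 1)$, i.e. the generalized quadrangle $\mathrm{GQ}(2, 1)$:

```python
import numpy as np

s, t, alpha = 2, 1, 1                            # the 3x3 grid is a pg(2, 1, 1)
points = [(r, c) for r in range(3) for c in range(3)]
lines = [[(r, c) for c in range(3)] for r in range(3)] + \
        [[(r, c) for r in range(3)] for c in range(3)]   # 3 rows and 3 columns
N = np.array([[1 if p in L else 0 for L in lines] for p in points])

J = np.ones(N.shape, dtype=int)
# the alpha-property in matrix form:  N N^T N = (t + s + 1 - alpha) N + alpha J
assert (N @ N.T @ N == (t + s + 1 - alpha) * N + alpha * J).all()
# N does not have full rank, in accordance with Corollary 4.4
assert np.linalg.matrix_rank(N) < min(N.shape)
```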
We will try to add lines to the partial geometry such that every two points become collinear, or equivalently, such that we obtain a 2-design. If it is possible to find such lines, then we say that the partial geometry is embeddable in a 2-design.
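As a concrete illustration (our example, not from the original notes): the $3 \times 3$ grid, a $\mathrm{pg}(2, 1, 1)$, is embeddable in the affine plane $AG(2, 3)$, a $2$-$(9, 3, 1)$ design, by adding the six "diagonal" lines $y = mx + c$ over $GF(3)$ with $m \in \{1, 2\}$:

```python
from itertools import combinations

# points of AG(2,3)
points = [(x, y) for x in range(3) for y in range(3)]
rows = [[(x, c) for x in range(3)] for c in range(3)]    # lines y = c
cols = [[(c, y) for y in range(3)] for c in range(3)]    # lines x = c
diags = [[(x, (m * x + c) % 3) for x in range(3)]        # lines y = mx + c, m = 1, 2
         for m in (1, 2) for c in range(3)]
grid_lines = rows + cols             # the 6 lines of the pg(2,1,1)
design = grid_lines + diags          # all 12 lines of AG(2,3)

# every pair of points lies on exactly one line: a 2-(9,3,1) design
for p, q in combinations(points, 2):
    assert sum(p in L and q in L for L in design) == 1
assert len(design) == 12
```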

The following theorem (see [1]) gives a necessary condition for the parameters of a partial geometry to be embeddable in a 2-design. To prove this theorem, we need the following lemma:

**Lemma 4.5** Let $S$ be a partial linear space with incidence matrix $N$, whose point graph $G$ is a strongly regular graph with adjacency matrix $A$. Then one of the following holds:

1. $S$ is a partial geometry;
2. $b \ge v$. If $b = v$ then $\det(A + (t+1)I)$ is a square.

*Proof.* Suppose first that $b = v$. In this case the matrix $N$ is square. From (2) we know that $NN^T = A + (t+1)I$, so $\det(A + (t+1)I) = \det(N)\det(N^T) = (\det N)^2$. This proves that $\det(A + (t+1)I)$ is a square. Next, let $b < v$. In the same way as in the proof of Lemma 4.1, we find that $v > b \ge \mathrm{rank}(N) \ge \mathrm{rank}(NN^T)$. So $NN^T$ has rank less than $v$. From (2) it follows in this case that $A$ must have an eigenvalue $-t-1$. But then $-t-1$ must be a solution of the eigenvalue equation of $A$, which is
$$x^2 + (\mu - \lambda)x + \mu - k = 0,$$
since by assumption $G$ is a strongly regular graph. So we have proved that
$$(t+1)^2 - (\mu - \lambda)(t+1) + \mu - k = 0. \tag{5}$$
From (5) we see that $t+1 \mid \mu$ (since $k = (t+1)s$). So $\mu = (t+1)\alpha$ for some non-negative integer $\alpha$. Substituting this value in (5), we get $\lambda = s - 1 + t(\alpha - 1)$. So we see that $G$ has the parameters of the point graph of a $\mathrm{pg}(s, t, \alpha)$. Since the lines are maximal cliques, our partial linear space was in fact a partial geometry. □

**Theorem 4.6** Suppose the partial geometry $S$ is embeddable in a 2-design $D$ and suppose that $S$ is not itself a 2-design (i.e. $\alpha \ne s + 1$). Let $G$ be the point graph of $S$. Then either $\alpha = t$ and the maximal coclique size of $G$ is equal to the maximal clique size of $G$, or
$$\alpha \le \frac{t(s+1)}{s+t+1}.$$

*Proof.* We have to add new lines to the partial geometry $S$ until every two points of $S$ are on exactly one line. Suppose that the number of lines of $S$ is equal to $b$ and the number of lines of $D$ is $b'$. The number of points of $S$ (and so also of $D$) is $v$.

We now apply the previous lemma to the non-collinearity graph $\overline{G}$ of $S$ (this is the graph whose vertices are the points of $S$, two different vertices being adjacent whenever they are not collinear in $S$). It is clear that this graph is strongly regular, since it is the complement of the point graph of $S$, which is a strongly regular graph. So the lemma tells us that either $b' - b \ge v$ or $\overline{G}$ is the point graph of a partial geometry. In the first case, after some calculation the inequality reduces to $\alpha(s + t + 1) \le t(s + 1)$ (it is easy to check that the number of lines $b'$ of $D$ is equal to $\frac{(st+\alpha)(st+t+\alpha)}{\alpha^2}$). In the second case, remark first that the lines we want to add should be cocliques of the point graph of $S$, since otherwise there would exist points of $S$ that are on a line of $S$ and on a new line, so they would be on two different lines of the 2-design $D$, a contradiction. Since $\overline{G}$ is the point graph of a partial geometry, the new lines should be maximal cliques of $\overline{G}$. This means that the new lines are maximal cocliques of $G$. So the maximal size of a coclique of $G$ must be equal to the maximal size of a clique of $G$. It is easy to prove that in this case $t = \alpha$ (see [3]). □

## 5 Pseudo-geometric graphs

**Definition** A strongly regular graph $\Gamma$ is called pseudo-geometric if its parameters have the following form:
$$v = (s+1)\frac{st+\alpha}{\alpha}, \qquad k = s(t+1), \qquad \lambda = t(\alpha - 1) + s - 1, \qquad \mu = \alpha(t+1).$$
These are exactly the parameters of the point graph of a partial geometry $\mathrm{pg}(s, t, \alpha)$. We call this partial geometry (whether it exists or not) the partial geometry corresponding to the pseudo-geometric graph. If $\Gamma$ is the point graph of at least one partial geometry, then we call $\Gamma$ geometric.

**Problem** From the definition of a pseudo-geometric graph we know that it is strongly regular. So any pseudo-geometric graph has three eigenvalues, namely $k = s(t+1)$, $-t-1$ and $s - \alpha$. The interesting problem is to find out whether a pseudo-geometric graph indeed comes from a partial geometry or not.
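The parameter and eigenvalue formulas can be packaged in a small helper (an illustrative sketch, not from the original notes; the function name is ours):

```python
from fractions import Fraction

def pg_parameters(s, t, alpha):
    """(v, k, lambda, mu) of a pseudo-geometric graph with parameters s, t, alpha."""
    v = Fraction((s + 1) * (s * t + alpha), alpha)
    return v, s * (t + 1), t * (alpha - 1) + s - 1, alpha * (t + 1)

# GQ(2,2), i.e. alpha = 1: the srg(15, 6, 1, 3)
assert pg_parameters(2, 2, 1) == (15, 6, 1, 3)

# the restricted eigenvalues -t-1 and s-alpha satisfy x^2 + (mu-lambda)x + mu-k = 0
s, t, alpha = 2, 2, 1
v, k, lam, mu = pg_parameters(s, t, alpha)
for x in (-t - 1, s - alpha):
    assert x * x + (mu - lam) * x + mu - k == 0
```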
To solve this problem, one has to recover the lines of the geometry from the pseudo-geometric graph. In the graph the lines are cliques of size $s + 1$, such that any two adjacent points of the geometry are in exactly one of the chosen cliques and such that through every point there are $t + 1$ cliques. It

is not necessary to check the last condition of a partial geometry, because the cliques we choose are maximal, so we know that every point outside a clique is adjacent to a constant number of points of the clique (see theorem 3.3 in [3]). However, it is not always obvious how to find the right set of cliques to form the lines of the partial geometry.

**Lemma 5.1** Let $A$ be the adjacency matrix of a pseudo-geometric graph $G$ (so $G$ is regular of degree $s(t+1)$). Then the eigenvectors of $A$ different from $r\mathbf{1} = r(1, 1, \dots, 1)^T$, $r$ a scalar, are the same as the eigenvectors of $J$ corresponding to the eigenvalue $0$, with $J$ the matrix with all entries equal to $1$.

*Proof.* It is easy to see that $AJ = s(t+1)J$ and $JA = s(t+1)J$. Let $u$ be an eigenvector of $A$ corresponding to the eigenvalue $\lambda_u$, so $Au = \lambda_u u$. Multiplying both sides of this equality by $J$ we get $JAu = \lambda_u Ju$, and thus $s(t+1)Ju = \lambda_u Ju$. This means that $\lambda_u = s(t+1)$ or $Ju = 0$. However, $\lambda_u \ne s(t+1)$ because $u$ is not a multiple of $\mathbf{1}$. So $Ju = 0$ and we conclude that $u$ is an eigenvector of $J$ corresponding to the eigenvalue $0$. □

**Theorem 5.2** Let $G$ be a pseudo-geometric graph. Assume that the partial geometry corresponding to $G$ is a generalized quadrangle $\mathrm{GQ}(s, t)$. Then we have
$$t \le s^2,$$
and if equality holds, then $G$ is geometric.

*Proof.* Since $G$ is pseudo-geometric, it has exactly three eigenvalues $s(t+1)$, $-t-1$ and $s-1$, with multiplicities respectively $1$, $\frac{s^2(st+1)}{s+t}$ and $\frac{st(s+1)(t+1)}{s+t}$ (see theorem 1.2 in [3]). Let $J$ be the $(v \times v)$-matrix with all entries equal to $1$ and let $I$ be the $(v \times v)$-identity matrix. Define the matrix $E$ by
$$E = J + (s^2 - 1)I - (s+1)A.$$
From the eigenvalues of $A$ we can calculate the eigenvalues of $E$, and their multiplicities follow from the multiplicities of the eigenvalues of $A$. Namely, we know that $A$ has eigenvalue $s(t+1)$ with multiplicity $1$. The corresponding eigenvector is the all-one vector $\mathbf{1} = (1, 1, \dots, 1)^T$. From $J\mathbf{1} = v\mathbf{1}$ we see that $\mathbf{1}$ is an eigenvector of the matrix $J$ with corresponding eigenvalue $v$.
From Lemma 1.2 it follows that $\mathbf{1}$ is an eigenvector of the matrix $I$ with corresponding eigenvalue $1$. Now from the definition of the matrix $E$ it follows that
$$E\mathbf{1} = J\mathbf{1} + (s^2 - 1)I\mathbf{1} - (s+1)A\mathbf{1} = \bigl(v + s^2 - 1 - (s+1)s(t+1)\bigr)\mathbf{1} = 0,$$
since $v = (s+1)(st+1)$. This means that $\mathbf{1}$ is an eigenvector of the matrix $E$ with corresponding eigenvalue $0$.

Next, let $a$ be an eigenvector of $A$ corresponding to the eigenvalue $s - 1$. Then $a$ is an eigenvector of $I$ with eigenvalue $1$ and an eigenvector of $J$ with eigenvalue $0$ (see Lemma 1.2 and Lemma 5.1). It follows that
$$Ea = Ja + (s^2 - 1)Ia - (s+1)Aa = \bigl(0 + s^2 - 1 - (s+1)(s-1)\bigr)a = 0.$$
So each eigenvector of $A$ corresponding to the eigenvalue $s - 1$ is also an eigenvector of $E$ corresponding to the eigenvalue $0$.

For the eigenvalue $-t-1$ of $A$ one uses the same argument as in the previous cases. We find that each eigenvector of $A$ corresponding to the eigenvalue $-t-1$ is an eigenvector of $E$ corresponding to the eigenvalue $(s+1)(s+t)$.

So the eigenvalues of the matrix $E$ are $0$ and $(s+1)(s+t)$, with multiplicities respectively $1 + \frac{st(s+1)(t+1)}{s+t}$ and $\frac{s^2(st+1)}{s+t}$.

Now assume that the matrix $A$ has the block form
$$A = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix},$$
where the first $s(t+1)$ rows and columns are indexed by the neighbours of one given vertex

(if necessary we change the order of the rows and columns of $A$ to bring it into this form), and define $A_{11}$ to be the $(s(t+1) \times s(t+1))$ principal submatrix of $A$ on these first $s(t+1)$ rows and columns. To the matrix $A_{11}$ there corresponds a subgraph of $G$, which we call $G_{11}$. From the choice of $A_{11}$ we see that $G_{11}$ is the subgraph of $G$ induced on all the vertices of $G$ adjacent to one given vertex, where two vertices are adjacent in $G_{11}$ if they are adjacent in $G$. Since the graph $G$ is strongly regular, the graph $G_{11}$ is regular of degree $\lambda = s - 1$. So every vertex of $G_{11}$ has $s - 1$ neighbours, which implies that every (connected) component of $G_{11}$ has at least $s$ vertices. Furthermore, as $G_{11}$ contains $s(t+1)$ vertices, we see that $G_{11}$ has at most $t + 1$ components. Let
$$E_{11} := J_{s(t+1)} + (s^2 - 1)I_{s(t+1)} - (s+1)A_{11},$$
with $I_{s(t+1)}$ the $(s(t+1) \times s(t+1))$-identity matrix and $J_{s(t+1)}$ the $(s(t+1) \times s(t+1))$-matrix with all entries equal to $1$. Since the multiplicity of the eigenvalue $(s+1)(s+t)$ of the matrix $E$ is $\frac{s^2(st+1)}{s+t}$, the matrix $E_{11}$ has eigenvalue $0$ at least $s(t+1) - \frac{s^2(st+1)}{s+t}$ times. So the matrix $A_{11}$ has eigenvalue $s - 1$ at least $1 + s(t+1) - \frac{s^2(st+1)}{s+t}$ times (note that the eigenvalue of $A_{11}$ corresponding to $\mathbf{1}$ also equals $s - 1$ and was not yet included). We already mentioned that the graph $G_{11}$ is regular of degree $\lambda = s - 1$. So from part 1 of the exercise in section 1 we know that $G_{11}$ has at least $1 + s(t+1) - \frac{s^2(st+1)}{s+t}$ components. However, we proved above that $G_{11}$ has at most $t + 1$ components. So we get the inequality
$$1 + s(t+1) - \frac{s^2(st+1)}{s+t} \le t + 1.$$
After some calculation one finds that $t \le s^2$. If $t = s^2$, we see from the proof of the inequality that $G_{11}$ can be partitioned into cliques of size $s$. So we have found the maximal cliques in the graph $G$ that represent the lines of the generalized quadrangle. This implies that the graph $G$ is geometric. □
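The proof can be followed on a concrete geometric graph (our sketch with numpy, not from the original notes): the point graph of $\mathrm{GQ}(2, 2)$ is the graph on the fifteen $2$-subsets of a $6$-set, adjacent when disjoint, an $\mathrm{srg}(15, 6, 1, 3)$; here $t = 2 \le s^2 = 4$, so the bound of Theorem 5.2 is strict.

```python
import numpy as np
from itertools import combinations

s, t = 2, 2
pairs = list(combinations(range(6), 2))          # the 15 points of GQ(2,2)
v = len(pairs)
A = np.array([[1 if not set(p) & set(q) else 0 for q in pairs] for p in pairs])

# E = J + (s^2-1)I - (s+1)A has eigenvalues 0 and (s+1)(s+t)
E = np.ones((v, v)) + (s * s - 1) * np.eye(v) - (s + 1) * A
eig = sorted(np.round(np.linalg.eigvalsh(E)).astype(int))
assert set(eig) == {0, (s + 1) * (s + t)}
assert eig.count(0) == 1 + s * t * (s + 1) * (t + 1) // (s + t)          # 10
assert eig.count((s + 1) * (s + t)) == s * s * (s * t + 1) // (s + t)    # 5

# G11: subgraph on the s(t+1) = 6 neighbours of a vertex; it is (s-1)-regular
# with exactly t+1 = 3 components, one for each line through that vertex
nbrs = [j for j in range(v) if A[0, j]]
A11 = A[np.ix_(nbrs, nbrs)]
assert np.all(A11.sum(axis=0) == s - 1)
comps = int(np.sum(np.isclose(np.linalg.eigvalsh(A11), s - 1)))
assert comps == t + 1
```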
## 6 Spreads

If $G$ is a strongly regular graph on $v$ vertices, regular of degree $k$ and with smallest eigenvalue $l$, we know from subsection 3.3 in [3] that the clique bound is
$$K := 1 - \frac{k}{l}$$
and that the coclique bound is
$$\overline{K} := \frac{-vl}{k-l}.$$
We note that $K\overline{K} = v$. A spread in $G$ is defined as a partition of the vertex set of $G$ into Delsarte cliques, i.e. cliques whose size meets the clique bound $K$. This, of course, corresponds to a partition of the complement $\overline{G}$ of $G$ into Delsarte cocliques, i.e. to a colouring of $\overline{G}$ with $v/K = \overline{K}$ colours. Subsection 3.4 in [3] tells us that this number $\overline{K}$ of colours meets the Hoffman bound. Thus we conclude that a Hoffman colouring of $\overline{G}$ corresponds to a spread in $G$.

Now we assume that $G$ is the point graph of a partial geometry. If the partial geometry has a spread, then it is clear that $G$ has a spread as well, but the converse is not necessarily true: for instance, the partial geometry $\mathrm{pg}(2, 1, 2)$ (which is indeed unique for these parameters) has no spread, while its point graph has several spreads. A spread of a partial geometry $\mathrm{pg}(s, t, \alpha)$ corresponds to a coclique of size $\frac{st+\alpha}{\alpha}$ in the line graph; if such a coclique in the line graph exists, then we obtain a spread of the partial geometry.

## References

[1] Andries E. Brouwer, Willem H. Haemers, and Vladimir D. Tonchev. Embedding partial geometries in Steiner designs. In *Geometry, combinatorial designs and related structures (Spetses, 1996)*. Cambridge Univ. Press, Cambridge.

[2] F. De Clerck. *Finite geometry: an introduction to the theory of designs and the theory of strongly regular graphs*. Lecture notes.

[3] W. H. Haemers. *Matrix techniques for strongly regular graphs and related geometries*. Lecture notes for the Intensive Course on Finite Geometry and its Applications.

These notes were composed by Sara Cauchie and Elisabeth Kuijken, Universiteit Gent, Vakgroep Zuivere Wiskunde en Computeralgebra, Galglaan 2, B-9000 Gent.


### Lecture 3: Linear Programming Relaxations and Rounding

Lecture 3: Linear Programming Relaxations and Rounding 1 Approximation Algorithms and Linear Relaxations For the time being, suppose we have a minimization problem. Many times, the problem at hand can

### Classification of Cartan matrices

Chapter 7 Classification of Cartan matrices In this chapter we describe a classification of generalised Cartan matrices This classification can be compared as the rough classification of varieties in terms

### Homework One Solutions. Keith Fratus

Homework One Solutions Keith Fratus June 8, 011 1 Problem One 1.1 Part a In this problem, we ll assume the fact that the sum of two complex numbers is another complex number, and also that the product

### Au = = = 3u. Aw = = = 2w. so the action of A on u and w is very easy to picture: it simply amounts to a stretching by 3 and 2, respectively.

Chapter 7 Eigenvalues and Eigenvectors In this last chapter of our exploration of Linear Algebra we will revisit eigenvalues and eigenvectors of matrices, concepts that were already introduced in Geometry

### Math 115A HW4 Solutions University of California, Los Angeles. 5 2i 6 + 4i. (5 2i)7i (6 + 4i)( 3 + i) = 35i + 14 ( 22 6i) = 36 + 41i.

Math 5A HW4 Solutions September 5, 202 University of California, Los Angeles Problem 4..3b Calculate the determinant, 5 2i 6 + 4i 3 + i 7i Solution: The textbook s instructions give us, (5 2i)7i (6 + 4i)(

### Lecture 5 - Triangular Factorizations & Operation Counts

LU Factorization Lecture 5 - Triangular Factorizations & Operation Counts We have seen that the process of GE essentially factors a matrix A into LU Now we want to see how this factorization allows us

### (a) The transpose of a lower triangular matrix is upper triangular, and the transpose of an upper triangular matrix is lower triangular.

Theorem.7.: (Properties of Triangular Matrices) (a) The transpose of a lower triangular matrix is upper triangular, and the transpose of an upper triangular matrix is lower triangular. (b) The product

### Sign pattern matrices that admit M, N, P or inverse M matrices

Sign pattern matrices that admit M, N, P or inverse M matrices C Mendes Araújo, Juan R Torregrosa CMAT - Centro de Matemática / Dpto de Matemática Aplicada Universidade do Minho / Universidad Politécnica

### Lecture Notes: Matrix Inverse. 1 Inverse Definition

Lecture Notes: Matrix Inverse Yufei Tao Department of Computer Science and Engineering Chinese University of Hong Kong taoyf@cse.cuhk.edu.hk Inverse Definition We use I to represent identity matrices,

### Geometry of Linear Programming

Chapter 2 Geometry of Linear Programming The intent of this chapter is to provide a geometric interpretation of linear programming problems. To conceive fundamental concepts and validity of different algorithms

### 2.5 Gaussian Elimination

page 150 150 CHAPTER 2 Matrices and Systems of Linear Equations 37 10 the linear algebra package of Maple, the three elementary 20 23 1 row operations are 12 1 swaprow(a,i,j): permute rows i and j 3 3

### Bindel, Fall 2012 Matrix Computations (CS 6210) Week 8: Friday, Oct 12

Why eigenvalues? Week 8: Friday, Oct 12 I spend a lot of time thinking about eigenvalue problems. In part, this is because I look for problems that can be solved via eigenvalues. But I might have fewer

### Diagonal, Symmetric and Triangular Matrices

Contents 1 Diagonal, Symmetric Triangular Matrices 2 Diagonal Matrices 2.1 Products, Powers Inverses of Diagonal Matrices 2.1.1 Theorem (Powers of Matrices) 2.2 Multiplying Matrices on the Left Right by

### MATH10212 Linear Algebra. Systems of Linear Equations. Definition. An n-dimensional vector is a row or a column of n numbers (or letters): a 1.

MATH10212 Linear Algebra Textbook: D. Poole, Linear Algebra: A Modern Introduction. Thompson, 2006. ISBN 0-534-40596-7. Systems of Linear Equations Definition. An n-dimensional vector is a row or a column

The Hadamard Product Elizabeth Million April 12, 2007 1 Introduction and Basic Results As inexperienced mathematicians we may have once thought that the natural definition for matrix multiplication would

### Numerical Analysis Lecture Notes

Numerical Analysis Lecture Notes Peter J. Olver 5. Inner Products and Norms The norm of a vector is a measure of its size. Besides the familiar Euclidean norm based on the dot product, there are a number

### Introduction to Flocking {Stochastic Matrices}

Supelec EECI Graduate School in Control Introduction to Flocking {Stochastic Matrices} A. S. Morse Yale University Gif sur - Yvette May 21, 2012 CRAIG REYNOLDS - 1987 BOIDS The Lion King CRAIG REYNOLDS

### Math 315: Linear Algebra Solutions to Midterm Exam I

Math 35: Linear Algebra s to Midterm Exam I # Consider the following two systems of linear equations (I) ax + by = k cx + dy = l (II) ax + by = 0 cx + dy = 0 (a) Prove: If x = x, y = y and x = x 2, y =

### Lecture 15 An Arithmetic Circuit Lowerbound and Flows in Graphs

CSE599s: Extremal Combinatorics November 21, 2011 Lecture 15 An Arithmetic Circuit Lowerbound and Flows in Graphs Lecturer: Anup Rao 1 An Arithmetic Circuit Lower Bound An arithmetic circuit is just like

### 1 Limiting distribution for a Markov chain

Copyright c 2009 by Karl Sigman Limiting distribution for a Markov chain In these Lecture Notes, we shall study the limiting behavior of Markov chains as time n In particular, under suitable easy-to-check

### 1 Polyhedra and Linear Programming

CS 598CSC: Combinatorial Optimization Lecture date: January 21, 2009 Instructor: Chandra Chekuri Scribe: Sungjin Im 1 Polyhedra and Linear Programming In this lecture, we will cover some basic material

### AN UPPER BOUND ON THE LAPLACIAN SPECTRAL RADIUS OF THE SIGNED GRAPHS

Discussiones Mathematicae Graph Theory 28 (2008 ) 345 359 AN UPPER BOUND ON THE LAPLACIAN SPECTRAL RADIUS OF THE SIGNED GRAPHS Hong-Hai Li College of Mathematic and Information Science Jiangxi Normal University

### The Characteristic Polynomial

Physics 116A Winter 2011 The Characteristic Polynomial 1 Coefficients of the characteristic polynomial Consider the eigenvalue problem for an n n matrix A, A v = λ v, v 0 (1) The solution to this problem

### 1 Determinants. Definition 1

Determinants The determinant of a square matrix is a value in R assigned to the matrix, it characterizes matrices which are invertible (det 0) and is related to the volume of a parallelpiped described

### by the matrix A results in a vector which is a reflection of the given

Eigenvalues & Eigenvectors Example Suppose Then So, geometrically, multiplying a vector in by the matrix A results in a vector which is a reflection of the given vector about the y-axis We observe that

### MATH36001 Background Material 2015

MATH3600 Background Material 205 Matrix Algebra Matrices and Vectors An ordered array of mn elements a ij (i =,, m; j =,, n) written in the form a a 2 a n A = a 2 a 22 a 2n a m a m2 a mn is said to be

### Split Nonthreshold Laplacian Integral Graphs

Split Nonthreshold Laplacian Integral Graphs Stephen Kirkland University of Regina, Canada kirkland@math.uregina.ca Maria Aguieiras Alvarez de Freitas Federal University of Rio de Janeiro, Brazil maguieiras@im.ufrj.br

### Partitioned Matrices and the Schur Complement

P Partitioned Matrices and the Schur Complement P 1 Appendix P: PARTITIONED MATRICES AND THE SCHUR COMPLEMENT TABLE OF CONTENTS Page P1 Partitioned Matrix P 3 P2 Schur Complements P 3 P3 Block Diagonalization

### Solutions to Exercises 8

Discrete Mathematics Lent 2009 MA210 Solutions to Exercises 8 (1) Suppose that G is a graph in which every vertex has degree at least k, where k 1, and in which every cycle contains at least 4 vertices.

### UNCOUPLING THE PERRON EIGENVECTOR PROBLEM

UNCOUPLING THE PERRON EIGENVECTOR PROBLEM Carl D Meyer INTRODUCTION Foranonnegative irreducible matrix m m with spectral radius ρ,afundamental problem concerns the determination of the unique normalized

### Matrix Inverse and Determinants

DM554 Linear and Integer Programming Lecture 5 and Marco Chiarandini Department of Mathematics & Computer Science University of Southern Denmark Outline 1 2 3 4 and Cramer s rule 2 Outline 1 2 3 4 and

### Inner Product Spaces and Orthogonality

Inner Product Spaces and Orthogonality week 3-4 Fall 2006 Dot product of R n The inner product or dot product of R n is a function, defined by u, v a b + a 2 b 2 + + a n b n for u a, a 2,, a n T, v b,

### Euclidean Geometry. We start with the idea of an axiomatic system. An axiomatic system has four parts:

Euclidean Geometry Students are often so challenged by the details of Euclidean geometry that they miss the rich structure of the subject. We give an overview of a piece of this structure below. We start

### Homework MA 725 Spring, 2012 C. Huneke SELECTED ANSWERS

Homework MA 725 Spring, 2012 C. Huneke SELECTED ANSWERS 1.1.25 Prove that the Petersen graph has no cycle of length 7. Solution: There are 10 vertices in the Petersen graph G. Assume there is a cycle C

### Block designs/1. 1 Background

Block designs 1 Background In a typical experiment, we have a set Ω of experimental units or plots, and (after some preparation) we make a measurement on each plot (for example, the yield of the plot).

### Section 6.1 - Inner Products and Norms

Section 6.1 - Inner Products and Norms Definition. Let V be a vector space over F {R, C}. An inner product on V is a function that assigns, to every ordered pair of vectors x and y in V, a scalar in F,

### Lecture 9. 1 Introduction. 2 Random Walks in Graphs. 1.1 How To Explore a Graph? CS-621 Theory Gems October 17, 2012

CS-62 Theory Gems October 7, 202 Lecture 9 Lecturer: Aleksander Mądry Scribes: Dorina Thanou, Xiaowen Dong Introduction Over the next couple of lectures, our focus will be on graphs. Graphs are one of

### Inner Product Spaces

Math 571 Inner Product Spaces 1. Preliminaries An inner product space is a vector space V along with a function, called an inner product which associates each pair of vectors u, v with a scalar u, v, and

### 2.1: Determinants by Cofactor Expansion. Math 214 Chapter 2 Notes and Homework. Evaluate a Determinant by Expanding by Cofactors

2.1: Determinants by Cofactor Expansion Math 214 Chapter 2 Notes and Homework Determinants The minor M ij of the entry a ij is the determinant of the submatrix obtained from deleting the i th row and the

### Chapter 15 Introduction to Linear Programming

Chapter 15 Introduction to Linear Programming An Introduction to Optimization Spring, 2014 Wei-Ta Chu 1 Brief History of Linear Programming The goal of linear programming is to determine the values of

### Basic Notions on Graphs. Planar Graphs and Vertex Colourings. Joe Ryan. Presented by

Basic Notions on Graphs Planar Graphs and Vertex Colourings Presented by Joe Ryan School of Electrical Engineering and Computer Science University of Newcastle, Australia Planar graphs Graphs may be drawn

### 3. Linear Programming and Polyhedral Combinatorics

Massachusetts Institute of Technology Handout 6 18.433: Combinatorial Optimization February 20th, 2009 Michel X. Goemans 3. Linear Programming and Polyhedral Combinatorics Summary of what was seen in the

### POWER SETS AND RELATIONS

POWER SETS AND RELATIONS L. MARIZZA A. BAILEY 1. The Power Set Now that we have defined sets as best we can, we can consider a sets of sets. If we were to assume nothing, except the existence of the empty

### 4: EIGENVALUES, EIGENVECTORS, DIAGONALIZATION

4: EIGENVALUES, EIGENVECTORS, DIAGONALIZATION STEVEN HEILMAN Contents 1. Review 1 2. Diagonal Matrices 1 3. Eigenvectors and Eigenvalues 2 4. Characteristic Polynomial 4 5. Diagonalizability 6 6. Appendix:

### Linear Codes. In the V[n,q] setting, the terms word and vector are interchangeable.

Linear Codes Linear Codes In the V[n,q] setting, an important class of codes are the linear codes, these codes are the ones whose code words form a sub-vector space of V[n,q]. If the subspace of V[n,q]

### Solution based on matrix technique Rewrite. ) = 8x 2 1 4x 1x 2 + 5x x1 2x 2 2x 1 + 5x 2

8.2 Quadratic Forms Example 1 Consider the function q(x 1, x 2 ) = 8x 2 1 4x 1x 2 + 5x 2 2 Determine whether q(0, 0) is the global minimum. Solution based on matrix technique Rewrite q( x1 x 2 = x1 ) =

### Solving Systems of Linear Equations

LECTURE 5 Solving Systems of Linear Equations Recall that we introduced the notion of matrices as a way of standardizing the expression of systems of linear equations In today s lecture I shall show how

### DETERMINANTS IN THE KRONECKER PRODUCT OF MATRICES: THE INCIDENCE MATRIX OF A COMPLETE GRAPH

DETERMINANTS IN THE KRONECKER PRODUCT OF MATRICES: THE INCIDENCE MATRIX OF A COMPLETE GRAPH CHRISTOPHER RH HANUSA AND THOMAS ZASLAVSKY Abstract We investigate the least common multiple of all subdeterminants,

### Topic 1: Matrices and Systems of Linear Equations.

Topic 1: Matrices and Systems of Linear Equations Let us start with a review of some linear algebra concepts we have already learned, such as matrices, determinants, etc Also, we shall review the method

### October 3rd, 2012. Linear Algebra & Properties of the Covariance Matrix

Linear Algebra & Properties of the Covariance Matrix October 3rd, 2012 Estimation of r and C Let rn 1, rn, t..., rn T be the historical return rates on the n th asset. rn 1 rṇ 2 r n =. r T n n = 1, 2,...,

### 1.5 Elementary Matrices and a Method for Finding the Inverse

.5 Elementary Matrices and a Method for Finding the Inverse Definition A n n matrix is called an elementary matrix if it can be obtained from I n by performing a single elementary row operation Reminder:

### Vector Spaces II: Finite Dimensional Linear Algebra 1

John Nachbar September 2, 2014 Vector Spaces II: Finite Dimensional Linear Algebra 1 1 Definitions and Basic Theorems. For basic properties and notation for R N, see the notes Vector Spaces I. Definition

### Matrix Norms. Tom Lyche. September 28, Centre of Mathematics for Applications, Department of Informatics, University of Oslo

Matrix Norms Tom Lyche Centre of Mathematics for Applications, Department of Informatics, University of Oslo September 28, 2009 Matrix Norms We consider matrix norms on (C m,n, C). All results holds for

### Characterizations of generalized polygons and opposition in rank 2 twin buildings

J. geom. 00 (2001) 000 000 0047 2468/01/000000 00 \$ 1.50 + 0.20/0 Birkhäuser Verlag, Basel, 2001 Characterizations of generalized polygons and opposition in rank 2 twin buildings Peter Abramenko and Hendrik

### DO NOT RE-DISTRIBUTE THIS SOLUTION FILE

Professor Kindred Math 04 Graph Theory Homework 7 Solutions April 3, 03 Introduction to Graph Theory, West Section 5. 0, variation of 5, 39 Section 5. 9 Section 5.3 3, 8, 3 Section 7. Problems you should

### max cx s.t. Ax c where the matrix A, cost vector c and right hand side b are given and x is a vector of variables. For this example we have x

Linear Programming Linear programming refers to problems stated as maximization or minimization of a linear function subject to constraints that are linear equalities and inequalities. Although the study

### Lattice Point Geometry: Pick s Theorem and Minkowski s Theorem. Senior Exercise in Mathematics. Jennifer Garbett Kenyon College

Lattice Point Geometry: Pick s Theorem and Minkowski s Theorem Senior Exercise in Mathematics Jennifer Garbett Kenyon College November 18, 010 Contents 1 Introduction 1 Primitive Lattice Triangles 5.1

### UNIT 2 MATRICES - I 2.0 INTRODUCTION. Structure

UNIT 2 MATRICES - I Matrices - I Structure 2.0 Introduction 2.1 Objectives 2.2 Matrices 2.3 Operation on Matrices 2.4 Invertible Matrices 2.5 Systems of Linear Equations 2.6 Answers to Check Your Progress

### CS268: Geometric Algorithms Handout #5 Design and Analysis Original Handout #15 Stanford University Tuesday, 25 February 1992

CS268: Geometric Algorithms Handout #5 Design and Analysis Original Handout #15 Stanford University Tuesday, 25 February 1992 Original Lecture #6: 28 January 1991 Topics: Triangulating Simple Polygons

### Matrix Algebra 2.3 CHARACTERIZATIONS OF INVERTIBLE MATRICES Pearson Education, Inc.

2 Matrix Algebra 2.3 CHARACTERIZATIONS OF INVERTIBLE MATRICES Theorem 8: Let A be a square matrix. Then the following statements are equivalent. That is, for a given A, the statements are either all true

### Linear Systems. Singular and Nonsingular Matrices. Find x 1, x 2, x 3 such that the following three equations hold:

Linear Systems Example: Find x, x, x such that the following three equations hold: x + x + x = 4x + x + x = x + x + x = 6 We can write this using matrix-vector notation as 4 {{ A x x x {{ x = 6 {{ b General

### Markov Chains, Stochastic Processes, and Advanced Matrix Decomposition

Markov Chains, Stochastic Processes, and Advanced Matrix Decomposition Jack Gilbert Copyright (c) 2014 Jack Gilbert. Permission is granted to copy, distribute and/or modify this document under the terms

### On the general equation of the second degree

On the general equation of the second degree S Kesavan The Institute of Mathematical Sciences, CIT Campus, Taramani, Chennai - 600 113 e-mail:kesh@imscresin Abstract We give a unified treatment of the

### Zachary Monaco Georgia College Olympic Coloring: Go For The Gold

Zachary Monaco Georgia College Olympic Coloring: Go For The Gold Coloring the vertices or edges of a graph leads to a variety of interesting applications in graph theory These applications include various

### 2.3 Scheduling jobs on identical parallel machines

2.3 Scheduling jobs on identical parallel machines There are jobs to be processed, and there are identical machines (running in parallel) to which each job may be assigned Each job = 1,,, must be processed

### Definition 12 An alternating bilinear form on a vector space V is a map B : V V F such that

4 Exterior algebra 4.1 Lines and 2-vectors The time has come now to develop some new linear algebra in order to handle the space of lines in a projective space P (V ). In the projective plane we have seen

### LINEAR ALGEBRA. September 23, 2010

LINEAR ALGEBRA September 3, 00 Contents 0. LU-decomposition.................................... 0. Inverses and Transposes................................. 0.3 Column Spaces and NullSpaces.............................

### Math 330A Class Drills All content copyright October 2010 by Mark Barsamian

Math 330A Class Drills All content copyright October 2010 by Mark Barsamian When viewing the PDF version of this document, click on a title to go to the Class Drill. Drill for Section 1.3.1: Theorems about

### 6. GRAPH AND MAP COLOURING

6. GRPH ND MP COLOURING 6.1. Graph Colouring Imagine the task of designing a school timetable. If we ignore the complications of having to find rooms and teachers for the classes we could propose the following

### Presentation 3: Eigenvalues and Eigenvectors of a Matrix

Colleen Kirksey, Beth Van Schoyck, Dennis Bowers MATH 280: Problem Solving November 18, 2011 Presentation 3: Eigenvalues and Eigenvectors of a Matrix Order of Presentation: 1. Definitions of Eigenvalues

### December 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B. KITCHENS

December 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B KITCHENS The equation 1 Lines in two-dimensional space (1) 2x y = 3 describes a line in two-dimensional space The coefficients of x and y in the equation

### Solution to Homework 2

Solution to Homework 2 Olena Bormashenko September 23, 2011 Section 1.4: 1(a)(b)(i)(k), 4, 5, 14; Section 1.5: 1(a)(b)(c)(d)(e)(n), 2(a)(c), 13, 16, 17, 18, 27 Section 1.4 1. Compute the following, if

### Iterative Methods for Computing Eigenvalues and Eigenvectors

The Waterloo Mathematics Review 9 Iterative Methods for Computing Eigenvalues and Eigenvectors Maysum Panju University of Waterloo mhpanju@math.uwaterloo.ca Abstract: We examine some numerical iterative

### Determinants LECTURE Calculating the Area of a Parallelogram. Definition Let A be a 2 2 matrix. A = The determinant of A is the number

LECTURE 13 Determinants 1. Calculating the Area of a Parallelogram Definition 13.1. Let A be a matrix. [ a c b d ] The determinant of A is the number det A) = ad bc Now consider the parallelogram formed