QUADRATIC ALGEBRAS WITH EXT ALGEBRAS GENERATED IN TWO DEGREES

Thomas Cassidy
Department of Mathematics, Bucknell University, Lewisburg, Pennsylvania
Abstract. Green and Marcos [3] call a graded k-algebra δ-Koszul if the corresponding Yoneda algebra Ext(k, k) is finitely generated and Ext^{i,j}(k, k) is zero unless j = δ(i) for some function δ : ℕ → ℕ. For any integer m ≥ 3 we exhibit a non-commutative quadratic δ-Koszul algebra for which the Yoneda algebra is generated in degrees (1, 1) and (m, m + 1). These examples answer a question of Green and Marcos. These algebras are not Koszul but are m-Koszul (in the sense of Backelin).

1. Introduction

A connected graded algebra A over a field k with generators in degree one is called Koszul [5] if its associated bigraded Yoneda (or Ext) algebra E(A) = ⊕_{i,j} Ext^{i,j}_A(k, k) is generated as an algebra by Ext^{1,1}_A(k, k). Koszul algebras are always quadratic, i.e. the elements in a minimal collection of defining relations will always be of degree two; however, not every quadratic algebra is Koszul. A quadratic algebra A will fail to be Koszul if and only if Ext^{i,j}_A(k, k) ≠ 0 for some i < j. In the Yoneda algebra of a Koszul algebra, each cohomological degree is paired with only one internal degree. Green and Marcos [3] use the following definition to study algebras which have this property.

1991 Mathematics Subject Classification: 16S37. Key words and phrases: graded algebra, Koszul algebra, Yoneda algebra.
Definition 1.1. A graded k-algebra A is δ-Koszul if it satisfies the following two conditions:
(1) Ext^{i,j}_A(k, k) = 0 unless j = δ(i) for a function δ : ℕ → ℕ;
(2) E(A) is finitely generated as an algebra.

A Koszul algebra is then a δ-Koszul algebra for which δ is the identity function. Green and Marcos ask if there is a bound N such that if A is a δ-Koszul algebra, then E(A) is generated in degrees 0 to N.

The notion of m-Koszul described in [4] and credited to Backelin [1] serves as another measure of how close a graded k-algebra comes to being Koszul. The following definition of m-Koszul should not be confused with Berger's N-Koszul [2], which refers to N-homogeneous algebras with Yoneda algebras generated in degrees one and two.

Definition 1.2. A graded algebra A is called m-Koszul if Ext^{i,j}_A(k, k) = 0 for all i < j ≤ m.

While any quadratic algebra is 3-Koszul, A is m-Koszul for every m ≥ 1 if and only if A is Koszul. It is natural to ask whether a quadratic algebra could be m-Koszul for large values of m and yet still fail to be Koszul. This question was answered by Roos [8], who showed that for any integer m ≥ 3 there exists a commutative quadratic algebra S which is m-Koszul but not (m+1)-Koszul. More precisely, E(S) is finitely generated in bi-degrees (1, 1) and (m, m+1), but S is not a δ-Koszul algebra because for each i ≥ m there is more than one value of j for which Ext^{i,j}(k, k) is nonzero.

The purpose of this paper is to show that for any integer m ≥ 3 there exists a δ-Koszul algebra that appears to be Koszul in all cohomological degrees except for m. Like Roos' algebras S, these algebras are m-Koszul but not (m+1)-Koszul. Specifically, we show that there exists a δ-Koszul algebra C of global dimension m for which the corresponding Yoneda algebra E(C) is generated as an algebra in degrees (1, 1) and (m, m+1). For these algebras δ is the function

    δ(i) = i        if i < m,
    δ(i) = i + 1    if i = m,
    δ(i) = 0        if i > m.

Our examples answer the question posed by Green and Marcos in [3].
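The condition in Definition 1.1 is easy to state mechanically. The short Python sketch below is not part of the original paper; it encodes the function δ displayed above and checks condition (1) against a table of nonzero Ext bidegrees. The sample table is hypothetical and only mimics the pattern (1, 1), ..., (m−1, m−1), (m, m+1) claimed for the algebras C.

    # Illustrative sketch (not from the paper): the function delta displayed above,
    # and a check of condition (1) of Definition 1.1 against a table of nonzero
    # Ext bidegrees.  The sample table is hypothetical; it mimics the bidegrees
    # claimed for the algebras C with m = 5.

    def delta(i, m):
        """The function delta attached to the algebra C of global dimension m."""
        if i < m:
            return i
        if i == m:
            return i + 1
        return 0

    def is_delta_koszul_pattern(nonzero_bidegrees, m):
        """Condition (1): every nonzero Ext^{i,j} satisfies j = delta(i)."""
        return all(j == delta(i, m) for (i, j) in nonzero_bidegrees)

    # Hypothetical bidegree table mimicking E(C) for m = 5:
    ext_bidegrees_C = [(0, 0), (1, 1), (2, 2), (3, 3), (4, 4), (5, 6)]
    print(is_delta_koszul_pattern(ext_bidegrees_C, m=5))        # True

    # A Koszul algebra corresponds to delta(i) = i, i.e. purely diagonal bidegrees:
    ext_bidegrees_koszul = [(i, i) for i in range(6)]
    print(is_delta_koszul_pattern(ext_bidegrees_koszul, m=10))  # True, since delta(i) = i for i < 10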
The algebras C illustrate that there is no bound N such that the Yoneda algebra of a δ-Koszul algebra must be generated in degrees 0 to N. Moreover, the bound does not exist even if we restrict ourselves to quadratic algebras.

A quadratic algebra A is determined by a vector space of generators V = A_1 and an arbitrary subspace of quadratic relations I ⊂ V ⊗ V. The free algebra k⟨V⟩ carries a standard grading and A inherits a grading from this free algebra. We denote by A_n the component of A in degree n. For any graded algebra A = ⊕_k A_k, let A[j] be the same vector space with the shifted grading A[j]_k = A_{j+k}. Throughout we assume all graded algebras A are locally finite-dimensional with A_i = 0 for i < 0 and A_0 = k.

2. The algebra C

Let m be an integer greater than 2. If m = 3 the algebra C has 10 generators and 8 relations. If m = 4 then C has 12 generators and 14 relations. For m ≥ 5, C has 3m generators and 4 + 3m relations. For m = 3 the homological properties of the algebra C are not new. In the proof of Lemma 2.6 we will introduce an algebra B which shares homological properties with C for m = 4. Consequently, we will henceforth assume that m ≥ 5.

The algebra C is defined as follows. The generating vector space V has the basis ∪_{i=1}^{m+1} S_i with sets S_1 = {n}, S_2 = {p, q, r}, S_3 = {s, t, u}, S_4 = {v, w, x_1, y_1, z_1}, S_5 = {x_2, y_2, z_2}, ..., S_{m−1} = {x_{m−4}, y_{m−4}, z_{m−4}}, S_m = {x_{m−3}, y_{m−3}}, and S_{m+1} = {x_{m−2}}. For all m ≥ 5 the space of relations I contains the generators

    {np − nq, np − nr, ps − pt, qt − qu, rs − ru, sv − sw, tw − tx_1, uv − ux_1, vx_2, wx_2, x_i x_{i+1}, sv − sy_1, tw − ty_1, ux_1 − uy_1, sz_1, tz_1, uz_1, y_{i−1} x_i + z_{i−1} y_i}

where i ≤ m − 3. In addition, if m ≥ 6 then I also contains {z_i z_{i+1}} where i ≤ m − 5.

Remark 2.1. We have chosen this large set of generators to clarify the exactness of the resolution below; it may be possible to construct examples with fewer generators. Notice that any basis for I is formed from certain sums of elements of S_i right multiplied by elements of S_{i+1}. This ordering on a basis of I makes C highly noncommutative: indeed the center of C is just the field k = C_0. Moreover, this ordering tells us that the left annihilator of n is zero and, more generally, that the left annihilator of an element of S_i is generated by sums of elements from ∏_{k=j}^{i−1} S_k for 1 ≤ j ≤ i − 1.
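The generator and relation counts stated above (3m generators and 3m + 4 relations for m ≥ 5) can be checked mechanically. The Python sketch below is not from the paper; it enumerates the sets S_i and the listed relations as formal labels. The lower ends of the index ranges (i ≥ 1 for x_i x_{i+1} and z_i z_{i+1}, and i ≥ 2 for y_{i−1} x_i + z_{i−1} y_i) are assumptions inferred from which generators x_i, y_i, z_i actually exist.

    # Sketch (not from the paper): enumerate the generating sets S_i and the listed
    # defining relations of C for a given m >= 5, as formal labels only, and check
    # the stated counts: 3m generators and 3m + 4 relations.

    def generators(m):
        S = {1: ['n'], 2: ['p', 'q', 'r'], 3: ['s', 't', 'u'],
             4: ['v', 'w', 'x1', 'y1', 'z1']}
        for i in range(5, m):                      # S_5, ..., S_{m-1}
            k = i - 3
            S[i] = [f'x{k}', f'y{k}', f'z{k}']
        S[m] = [f'x{m-3}', f'y{m-3}']
        S[m + 1] = [f'x{m-2}']
        return S

    def relations(m):
        R = ['np-nq', 'np-nr', 'ps-pt', 'qt-qu', 'rs-ru',
             'sv-sw', 'tw-tx1', 'uv-ux1', 'v.x2', 'w.x2',
             'sv-sy1', 'tw-ty1', 'ux1-uy1', 's.z1', 't.z1', 'u.z1']
        R += [f'x{i}.x{i+1}' for i in range(1, m - 2)]              # i <= m-3
        R += [f'y{i-1}.x{i}+z{i-1}.y{i}' for i in range(2, m - 2)]  # i <= m-3
        if m >= 6:
            R += [f'z{i}.z{i+1}' for i in range(1, m - 4)]          # i <= m-5
        return R

    for m in (5, 6, 7, 10):
        gens = sum(len(v) for v in generators(m).values())
        rels = len(relations(m))
        assert gens == 3 * m and rels == 3 * m + 4, (m, gens, rels)
        print(m, gens, rels)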
The structure of I will be exploited in the proofs of Lemmas 2.2, 2.4, 2.5 and 2.6.

Our proof relies on constructing an explicit projective resolution for k as a left C-module. Let (P, λ) be the sequence of projective C-modules

    P_m →^{λ_m} P_{m−1} →^{λ_{m−1}} P_{m−2} → ⋯ →^{λ_2} P_1 →^{λ_1} C → k,

where P_m = C[−m−1], P_{m−1} = (C[1−m])^7, P_{m−2} = (C[2−m])^{16}, P_2 = (C[−2])^{3m+4}, P_1 = (C[−1])^{3m}, and for 3 ≤ i ≤ m−3, P_i = (C[−i])^{3m+12−3i}. The map from C to k is the usual augmentation. For convenience we will use λ_i to denote both the map from P_i to P_{i−1} and the matrix which gives this map via right multiplication. The map λ_1 is right multiplication by the transpose of the matrix

    (n p q r s t u z_1 ⋯ z_{m−4} v w x_1 y_1 x_2 y_2 ⋯ x_{m−3} y_{m−3} x_{m−2})

and the map λ_m is right multiplication by the matrix (np np np 0 ⋯ 0). The remaining maps λ_i will be defined as right multiplication by matrices given in block form, for which we will need the following components. Let

    α = p 0 0 p p q 0 0 q q,   α = p 0 0 p p 0 0 q 0 0 q q,   r r 0 r 0 0 r r 0 r,
    β = γ = s s t t 0 u 0 u 0 s 0 0 s 0 t 0 t 0 0 u u v w x_1,
    ζ_j = ,   β = (x_j 0),   χ_j = ,   δ = y_j z_j z z z z_j,
    η = s s 0 0 t t u 0 u,   γ = p p 0 0 q q r 0 r,   ε = v 0 w 0 x_1 0 y_1 z_1,
    η′ = (0, n, n, n),   and   (0 n n 0 0 n 0 n) s t u.
The matrix defining the map λ_2 has a block form built from the blocks

    η, δ, ε, ζ_{m−5}, β, γ, χ_2, …, χ_{m−4}, x_{m−3}.

The matrix λ_3 has a block form built from

    0, η, δ, ε, ζ_{m−6}, α, β, γ, χ_2, …, χ_{m−5}, x_{m−4}.

For 4 ≤ j ≤ m−3 the matrix λ_j has a block form built from

    η, δ, ε, ζ_{m−j−3}, α, β, γ, χ_2, …, χ_{m−j−2}, x_{m−j−1}.

Note that χ_2 is the only χ block in λ_{m−4} and that the matrix λ_{m−3} has no ζ or χ blocks. The matrix λ_{m−2} has a block form built from

    η, δ, α, β, γ,
and the matrix λ_{m−1} has a block form built from η, α, β.

Lemma 2.2. Let (Q, φ) be a minimal projective resolution of _C k where the map Q_i → Q_{i−1} is given as right multiplication by a matrix φ_i. Then the matrices φ can be chosen to have block form such that all the entries in φ_i are elements from the subalgebra generated by the set ∪_{j=1}^{m+2−i} S_j.

Proof. We prove this by induction on i. φ_1 can be chosen to be λ_1, which has entries from V = ∪_{j=1}^{m+1} S_j. Since the set φ_2 φ_1 must span the space I, φ_2 can be chosen to be λ_2, where the blocks have entries of the appropriate form. Now suppose φ_i has block form with entries from the subalgebra generated by ∪_{j=1}^{m+2−i} S_j. Since the rows of φ_{i+1} must annihilate the columns of φ_i, φ_{i+1} can be chosen to have block form corresponding to the blocks of φ_i. Recall that any basis for I is ordered so that only elements of S_j appear on the left of elements of S_{j+1}. Since the entries in φ_i contain no elements from ∪_{j=m+3−i}^{m+1} S_j, no elements from ∪_{j=m+2−i}^{m+1} S_j will appear in entries of φ_{i+1}. Thus the entries in φ_{i+1} are from the subalgebra generated by the set ∪_{j=1}^{m+1−i} S_j.

Remark 2.3. The lemma implies that φ_{m+1}, if it exists, can only contain elements of S_1, and that there can be no map φ_{m+2}. Therefore a minimal resolution of _C k would have length no more than m+1, and so the global dimension of C is at most m+1. We will see later that the global dimension of C is exactly m.

Lemma 2.4. The left annihilators of η, η′, λ_m and α are zero.

Proof. The relations for C make it clear that nothing annihilates n from the left, and consequently η, η′ and λ_m cannot be annihilated from the left. Since the entries in α are all from S_2, the annihilator would have to be made from left multiples of n. However p, q and r each appear alone in the first columns of α, and n does not annihilate these individually.

Lemma 2.5. The rows of γ generate the left annihilator of x_2.
Proof. The left annihilator of x_2 can have generators made from sums of elements in S_4, S_3 S_4, S_2 S_3 S_4 or S_1 S_2 S_3 S_4. It is easy to check that linear combinations of v, w and x_1 are the only elements in the span of S_4 that annihilate x_2. The elements of S_3 S_4 span a six-dimensional subspace of C_2 with basis {sv, sx_1, tv, tw, uv, uw}, and these are all left multiples of v, w or x_1. It follows that in C the elements of S_2 S_3 S_4 and S_1 S_2 S_3 S_4 are also all left multiples of v, w or x_1.

Lemma 2.6. The left annihilator of β is generated by the rows of α, the left annihilator of δ is generated by η, the left annihilator of β is generated by λ_{m+1}, and the left annihilator of γ is generated by the rows of β.

Proof. We consider an algebra B closely related to C. Let B be the algebra with generators {n, p, q, r, s, t, u, v, w, x_1, y_1, a, b} and defining relations

    {np − nq, np − nr, ps − pt, qt − qu, rs − ru, sv − sw, tw − tx_1, uv − ux_1, sv − sy_1, tw − ty_1, ux_1 − uy_1, va − vb, wa − wb, x_1 a − x_1 b}.

We will resolve k as a B-module using maps made from the same blocks as those which appear in the resolution of k as a C-module. Let (R, ψ) be the sequence of projective B-modules

    0 → R_4 →^{ψ_4} R_3 →^{ψ_3} R_2 →^{ψ_2} R_1 →^{ψ_1} R_0 → k

with R_4 = B[−5], R_3 = (B[−3])^7, R_2 = (B[−2])^{14}, R_1 = (B[−1])^{13} and R_0 = B. The map from B to k is the usual augmentation, the map ψ_1 is right multiplication by the transpose of the matrix (n p q r s t u v w x_1 y_1 a b), the map ψ_2 is right multiplication by a matrix with block form built from η, δ, β, γ, γ, the map ψ_3 is right multiplication by a matrix with block form built from 0, η, α, β, and the map ψ_4 is right multiplication by the matrix (np np np).
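As a quick aside (not part of the paper's argument), the ranks of R_1 and R_2 can be read off from the presentation of B just given: B has 13 generators and 14 defining relations, matching (B[−1])^13 and (B[−2])^14. A minimal Python check, using the relations only as formal labels:

    # Sanity check (not from the paper): B has 13 generators and 14 defining
    # relations, matching the ranks of R_1 = (B[-1])^13 and R_2 = (B[-2])^14.

    gens_B = ['n', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x1', 'y1', 'a', 'b']
    rels_B = ['np-nq', 'np-nr', 'ps-pt', 'qt-qu', 'rs-ru',
              'sv-sw', 'tw-tx1', 'uv-ux1', 'sv-sy1', 'tw-ty1', 'ux1-uy1',
              'va-vb', 'wa-wb', 'x1a-x1b']
    assert len(gens_B) == 13 and len(rels_B) == 14
    print(len(gens_B), len(rels_B))   # 13 14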
Since the product ψ_i ψ_{i−1} is zero in B, R is a complex. The list of generators and relations for B ensures that this complex is exact at R_1 and R_0. Our goal is to show that (R, ψ) is a minimal projective resolution of _B k.

We observe that B is the associated graded algebra for the splitting algebra (see [7]) corresponding to the layered graph below. The methods of [7] (see also [6]) show that B has Hilbert series H_B(g) = (1 − 13g + 14g^2 − 7g^3 + g^5)^{−1}. Let P_B(f, g) = Σ_{i,j=0}^{∞} dim(Ext^{i,j}_B(k, k)) f^i g^j be the double Poincaré series for B. From the formula

    P_B(−1, g) = H_B(g)^{−1} = 1 − 13g + 14g^2 − 7g^3 + g^5

we can deduce something about the shape of a minimal projective resolution of the B-module k. Moreover, the entries in the maps of a minimal projective resolution come from certain sets as in Lemma 2.2. It follows that the resolution must have the form

    0 → B[−5]^{d_3−d_2} → R_4 ⊕ B[−5]^{d_3} ⊕ B[−4]^{d_1} → R_3 ⊕ B[−4]^{d_1} ⊕ B[−5]^{d_2} → R_2 →^{ψ_2} R_1 →^{ψ_1} R_0 → k.

We will show that d_1 = d_2 = d_3 = 0, so that R is in fact a resolution of _B k.
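The numerical bookkeeping in the preceding paragraph can be checked with a computer algebra system. The sympy sketch below is not part of the paper; it takes the stated Hilbert series at face value, confirms that the graded ranks of R have the Euler characteristic required by P_B(−1, g) = H_B(g)^{−1}, and confirms that the unknown ranks d_1, d_2, d_3 in the general shape displayed above cancel out of that alternating sum, which is why the argument that follows is needed to show they vanish.

    # Sketch (not from the paper), assuming H_B(g) = (1 - 13g + 14g^2 - 7g^3 + g^5)^{-1}
    # and the general resolution shape displayed above.

    import sympy as sp

    g, d1, d2, d3 = sp.symbols('g d1 d2 d3')

    H_inv = 1 - 13*g + 14*g**2 - 7*g**3 + g**5      # = H_B(g)^{-1} from the text

    # (homological degree i, internal degree j, rank) for R_0, ..., R_4
    ranks_R = [(0, 0, 1), (1, 1, 13), (2, 2, 14), (3, 3, 7), (4, 5, 1)]
    euler_R = sum((-1)**i * r * g**j for (i, j, r) in ranks_R)
    print(sp.expand(euler_R - H_inv) == 0)          # True: R has the required Euler characteristic

    # Extra summands allowed by the displayed general shape; their signed
    # contributions cancel, so the Poincare series alone cannot rule them out.
    extra = (-1)**3 * (d1*g**4 + d2*g**5) \
          + (-1)**4 * (d1*g**4 + d3*g**5) \
          + (-1)**5 * (d3 - d2) * g**5
    print(sp.expand(extra) == 0)                    # True

    # First few coefficients of H_B(g) itself, i.e. dim B_n:
    print(sp.series(1/H_inv, g, 0, 4))              # 1 + 13*g + 155*g**2 + 1840*g**3 + O(g**4)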
Consider first the possibility that d_1 > 0. This can only happen if η, α or β is annihilated from the left by a linear vector. Clearly nothing annihilates η from the left, and by Lemma 2.4 nothing annihilates α from the left. Suppose the vector (e_1 e_2 e_3) left annihilates β, where each e_i is a sum of elements from S_2. Then e = (e_1 e_2 e_3) would left annihilate the β which appears in ψ_2. However, the Poincaré series for B assures us that the dimension of Ext^{3,3}_B(k, k) is seven, which means that e would be in the span of the rows of α. Now observe that no nonzero combination of the rows of α could produce e. We conclude that d_1 = 0.

Since d_1 is zero, Ext^{3,4}(k, k) = Ext^{4,4}(k, k) = 0. Now observe that the absence of Ext^{4,4}(k, k) means that Ext^{5,5}(k, k) must also be zero, so that d_2 = d_3. For Ext^{3,5}(k, k) to be nonzero, the matrix defining the map R_3 ⊕ B[−5]^{d_2} → R_2 must contain sums of elements from S_1 S_2 S_3 which annihilate γ. The elements of S_1 S_2 S_3 span a one-dimensional subspace of C_3 with basis {nps}. Suppose nps(h_1 h_2 h_3)γ = 0 for some h_i ∈ k. Since in C_4 npsv = npsw = npsx_1, we get h_1 + h_2 + h_3 = 0, and thus nps(h_1 h_2 h_3) is just a row of β multiplied on the left by np. Therefore the rows of β generate the left annihilator of γ and d_3 = d_2 = 0, which means (R, ψ) is exact.

In general one would not expect information from resolutions over other algebras to be useful in resolving _C k; however, the relations of C and B both follow the pattern that the left annihilator of an element of S_i is generated by elements of S_{i−1}, so that information from B is applicable to C. Thus from the fact that (R, ψ) is exact, we see that the left annihilator of β is generated by the rows of α, the left annihilator of δ is generated by η, the left annihilator of β is generated by λ_{m+1}, and the left annihilator of γ is generated by the rows of β.

Theorem 2.7. For any integer m ≥ 3 the complex P is a minimal projective resolution of the left C-module k. It follows that the algebra C has global dimension m and Ext^{i,j}_C(k, k) = 0 for all i < j ≤ m. Moreover, C is not a Koszul algebra because Ext^{m,m+1}_C(k, k) ≠ 0.
Proof. Direct calculation shows that λ_i λ_{i−1} = 0 for all i, so that P is a complex. It is clear from the block form of the matrices λ_i that their rows are linearly independent. Since nothing annihilates n from the left, it is clear that P is exact at P_m and that none of the λ_i needs an additional row to annihilate η or η′. The complex is exact at P_1 since the product λ_2 λ_1 gives the defining relations for C. We will show that P is exact elsewhere by examining the component blocks in the matrices λ_i.

The block ε appears in λ_1 and is annihilated on the left by the δ which appears in λ_2. For i > 1, the column of λ_i containing ε has no other nonzero entries. Suppose that for some d_i ∈ C we have (d_1 d_2 d_3) ε = 0. Since λ_2 λ_1 gives a basis for I, it follows that (d_1 d_2 d_3) is a sum of left multiples of the rows of λ_2. By the block form of λ_2, this means that (d_1 d_2 d_3) is a sum of left multiples of the rows of δ. Therefore the rows of δ generate the left annihilator of ε. In the same manner, one sees that the rows of ε generate the left annihilator of z_1 and that z_i generates the left annihilator of z_{i+1}.

While γ does not appear in λ_1, the matrix (v, w, x_1, y_1)^t does, and is annihilated by β. Any row annihilating γ must also annihilate (v, w, x_1, y_1)^t, and hence the rows of β generate the left annihilator of γ. Likewise, since (x_i, y_i)^t appears in λ_1 we see that the rows of γ generate the left annihilator of χ_2 and that the rows of χ_i generate the left annihilator of χ_{i+1}. For i > 2, when x_i appears as the only nonzero entry in a column of λ_{m−1−i}, it can be annihilated by at most the rows of χ_{i−1}, since χ_{i−1} annihilates the (x_i, y_i)^t in λ_1. Since the second row of χ_{i−1} does not annihilate x_i, we conclude that x_{i−1} generates the left annihilator of x_i.

Lemmas 2.4, 2.5 and 2.6 complete the proof that P is exact. Since C is a graded algebra and the maps λ_i are all of degree at least one, this resolution is minimal.
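For concreteness, the bigraded ranks produced by Theorem 2.7 can be tabulated for small m. The Python sketch below is not from the paper; it takes the ranks and internal shifts of the modules P_i listed in Section 2 at face value (since P is minimal, these are the dimensions of Ext^{i,j}_C(k, k)) and checks that every nonzero bidegree (i, j) satisfies j = δ(i), with δ as in the introduction. The helper names are hypothetical.

    # Sketch (not from the paper): read off dim Ext^{i,j}_C(k,k) from the ranks and
    # internal shifts of the modules P_i in Section 2, and check j = delta(i).

    def betti_table(m):
        """(i, j, rank) for the minimal resolution P of k over C, assuming m >= 5."""
        table = [(0, 0, 1), (1, 1, 3 * m), (2, 2, 3 * m + 4)]
        table += [(i, i, 3 * m + 12 - 3 * i) for i in range(3, m - 2)]   # 3 <= i <= m-3
        table += [(m - 2, m - 2, 16), (m - 1, m - 1, 7), (m, m + 1, 1)]
        return table

    def delta(i, m):
        return i if i < m else (i + 1 if i == m else 0)

    for m in (5, 6, 9):
        table = betti_table(m)
        assert all(j == delta(i, m) for (i, j, _) in table)
        print(m, table)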
Acknowledgments. The author thanks Jan-Erik Roos for several helpful suggestions.

References

1. J. Backelin, A distributiveness property of augmented algebras and some related homological results, Ph.D. Thesis, Stockholm. Available at joeb/avh/
2. Roland Berger, Koszulity for nonquadratic algebras, J. Algebra 239 (2001), no. 2. MR 2002d:16034
3. Edward L. Green and Eduardo N. Marcos, δ-Koszul algebras, Comm. Algebra 33 (2005), no. 6. MR 2006c:16015
4. Alexander Polishchuk and Leonid Positselski, Quadratic algebras, University Lecture Series, vol. 37, American Mathematical Society, Providence, RI, 2005.
5. Stewart B. Priddy, Koszul resolutions, Trans. Amer. Math. Soc. 152 (1970). MR 42 #346
6. Vladimir Retakh, Shirlei Serconek, and Robert Lee Wilson, Construction of some algebras associated to directed graphs and related to factorizations of noncommutative polynomials, Lie algebras, vertex operator algebras and their applications, Contemp. Math., vol. 442, Amer. Math. Soc., Providence, RI, 2007.
7. ———, Koszulity of splitting algebras associated with cell complexes, arXiv: v1.
8. Jan-Erik Roos, Commutative non-Koszul algebras having a linear resolution of arbitrarily high order. Applications to torsion in loop space homology, C. R. Acad. Sci. Paris Sér. I Math. 316 (1993), no. 11. MR 94b:16040