THE NUMBER OF GRAPHS AND A RANDOM GRAPH WITH A GIVEN DEGREE SEQUENCE. Alexander Barvinok and J.A. Hartigan. November 2011


Abstract. We consider the set of all graphs on n labeled vertices with prescribed degrees D = (d_1, ..., d_n). For a wide class of tame degree sequences D we obtain a computationally efficient asymptotic formula approximating the number of graphs within a relative error which approaches 0 as n grows. As a corollary, we prove that the structure of a random graph with a given tame degree sequence D is well described by a certain maximum entropy matrix computed from D. We also establish an asymptotic formula for the number of bipartite graphs with prescribed degrees of vertices, or, equivalently, for the number of 0-1 matrices with prescribed row and column sums.

1. Introduction and main results

(1.1) Graphs and their degree sequences. Let D = (d_1, ..., d_n) be a vector of positive integers and let G(D) be the set of all graphs (undirected, with no loops or multiple edges) on the set {1, ..., n} of vertices such that the degree of the k-th vertex is d_k for k = 1, ..., n. Equivalently, G(D) is the set of all n x n symmetric matrices with 0-1 entries, zero trace and row (column) sums d_1, ..., d_n. We assume that

(1.1.1)   d_1 + ... + d_n ≡ 0 mod 2,

since otherwise the set G(D) is empty. The theorem of Erdős and Gallai (see, for example, [BR91]) states the necessary and sufficient conditions for the existence of a graph with a given degree sequence. Without loss of generality, we assume that d_1 ≥ d_2 ≥ ... ≥ d_n.

1991 Mathematics Subject Classification. 05A16, 05C07, 05C30, 52B55, 60F05. Key words and phrases. graphs, degree sequences, asymptotic formulas. The research of the first author was partially supported by NSF Grant DMS and a United States - Israel BSF grant.

Then the necessary and sufficient condition for G(D) to be non-empty is that (1.1.1) holds and

(1.1.2)   Σ_{i=1}^k d_i ≤ k(k-1) + Σ_{i=k+1}^n min(k, d_i)   for k = 1, ..., n.

Our main goal is to estimate the cardinality |G(D)| of G(D). Using the obtained estimate, we deduce a concentration result for a random graph G ∈ G(D) sampled from the uniform probability measure on G(D).

(1.2) The maximum entropy matrix and tame degree sequences. The following matrix plays the crucial role in our construction. Let us consider the space R^{(n choose 2)} of vectors x = (ξ_{j,k}), where {j,k} is an unordered pair of indices 1 ≤ j < k ≤ n. We consider the polytope P ⊂ R^{(n choose 2)}, P = P(D), defined by the equations

Σ_{j: j ≠ k} ξ_{j,k} = d_k   for k = 1, ..., n

and inequalities

0 ≤ ξ_{j,k} ≤ 1.

The integer points in P(D) correspond to the labeled graphs with degree sequence D, which we write as G(D) = P(D) ∩ Z^{(n choose 2)}. We assume that P(D) is non-empty. We consider the following entropy function on P(D):

H(x) = Σ_{{j,k}} ( ξ_{j,k} ln (1/ξ_{j,k}) + (1 - ξ_{j,k}) ln (1/(1 - ξ_{j,k})) )   for x = (ξ_{j,k}).

Since H is a strictly concave function, it attains its maximum on P at a unique point z = (ζ_{j,k}), z = z(D), which we call the maximum entropy matrix associated with the degree sequence D. Matrix z can be easily calculated by interior point methods, see [NN94]. For 0 < δ ≤ 1/2 we say that the degree sequence D is δ-tame if the polytope P(D) is non-empty and if

δ ≤ ζ_{j,k} ≤ 1 - δ   for all j ≠ k,

where z = (ζ_{j,k}) is the maximum entropy matrix associated with the degree sequence D. In Theorem 2.1 we state some sufficient conditions for a degree sequence D to be tame.
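The maximum entropy matrix itself can be computed by the interior point methods cited above. As a rough illustration only (the function and its parameters are ours, not the paper's algorithm), the first-order optimality conditions for maximizing H over P(D) give ζ_{j,k} = 1/(1 + e^{λ_j + λ_k}) for some vertex potentials λ_1, ..., λ_n, which the following stdlib sketch fits by coordinate-wise bisection:

```python
import math

def max_entropy_matrix(d, sweeps=200):
    """Sketch: maximize H over P(D).  Stationarity gives
    zeta[j][k] = 1/(1 + exp(lam[j] + lam[k])) for vertex potentials lam;
    we fit each lam[k] in turn by bisection so that the k-th row of zeta
    sums to the prescribed degree d[k]."""
    n = len(d)
    lam = [0.0] * n

    def row_sum(k, lk):
        return sum(1.0 / (1.0 + math.exp(lk + lam[j]))
                   for j in range(n) if j != k)

    for _ in range(sweeps):
        for k in range(n):
            lo, hi = -50.0, 50.0            # row_sum is decreasing in lk
            for _ in range(80):             # bisection
                mid = 0.5 * (lo + hi)
                if row_sum(k, mid) > d[k]:
                    lo = mid
                else:
                    hi = mid
            lam[k] = 0.5 * (lo + hi)

    return [[0.0 if j == k else 1.0 / (1.0 + math.exp(lam[j] + lam[k]))
             for k in range(n)] for j in range(n)]
```

For a d-regular sequence the symmetric point ζ_{j,k} = d/(n-1) is the maximizer (cf. Section 2.3), which gives a convenient check of the sketch.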

(1.3) Quadratic form q and related quantities. Let z = (ζ_{j,k}) be the maximum entropy matrix associated with a tame degree sequence D. We consider the following quadratic form q: R^n -> R,

(1.3.1)   q(t) = (1/2) Σ_{{j,k}} ζ_{j,k}(1 - ζ_{j,k}) (τ_j + τ_k)^2   for t = (τ_1, ..., τ_n).

It is easy to see that q is positive definite for n > 2. Let us consider the Gaussian probability measure on R^n with density proportional to e^{-q}. We define the following random variables f, h: R^n -> R,

(1.3.2)
f(t) = -(1/6) Σ_{{j,k}} ζ_{j,k}(1 - ζ_{j,k})(1 - 2ζ_{j,k}) (τ_j + τ_k)^3   and
h(t) = (1/24) Σ_{{j,k}} ζ_{j,k}(1 - ζ_{j,k})(6ζ_{j,k}^2 - 6ζ_{j,k} + 1) (τ_j + τ_k)^4

for t = (τ_1, ..., τ_n). Let

μ = E f^2   and   ν = E h.

Our main result is as follows.

(1.4) Theorem. Let us fix 0 < δ < 1/2. Let D = (d_1, ..., d_n) be a δ-tame degree sequence such that d_1 + ... + d_n ≡ 0 mod 2, let z = (ζ_{j,k}) be the maximum entropy matrix as defined in Section 1.2 and let the quadratic form q and the values of μ and ν be as defined in Section 1.3. Let us define an n x n symmetric matrix Q = (ω_{jk}) by

ω_{jk} = ζ_{j,k}(1 - ζ_{j,k})   for j ≠ k   and
ω_{jj} = d_j - Σ_{k: k ≠ j} ζ_{j,k}^2   for j = 1, ..., n.

Then Q is positive definite and the value of

(1.4.1)   ( 2 e^{H(z)} / ( (2π)^{n/2} √(det Q) ) ) exp( -μ/2 + ν )

approximates the number |G(D)| of graphs with degree sequence D within a relative error which approaches 0 as n grows. More precisely, for any 0 < ǫ ≤ 1/2 the value of (1.4.1) approximates |G(D)| within relative error ǫ provided n ≥ γ,

where γ = γ(δ, ǫ) is a positive constant. The main term

(1.4.2)   2 e^{H(z)} / ( (2π)^{n/2} √(det Q) )

of formula (1.4.1) is the Gaussian approximation formula of [BH10], whose appearance, as is discussed in [BH10], is explained by the Local Central Limit Theorem; see also the discussion below. The factor exp(-μ/2 + ν) is the Edgeworth correction factor, see [BH09b]. In the course of the proof of Theorem 1.4, we establish a two-sided bound

γ_1(δ) ≤ exp( -μ/2 + ν ) ≤ γ_2(δ)

for some constants γ_1(δ), γ_2(δ) > 0, as long as the degree sequence D remains δ-tame. We note that computing the expectation of a polynomial with respect to the Gaussian probability measure is a linear algebra problem, cf. also Section 5.2. Hence apart from computing the maximum entropy matrix z, which can be done by interior point methods, computing the value of (1.4.1) is a linear algebra problem which can be solved in O(n^4) time in the unit cost model.

(1.5) Random graphs with prescribed degree sequences. Let us consider the set G(D) of all labeled graphs with degree sequence D as a finite probability space with the uniform measure. It is convenient to think of G ∈ G(D) as of a subgraph of the complete graph K_n with the set V = {1, ..., n} of vertices and the set E = { {j,k}: 1 ≤ j < k ≤ n } of edges. Let us sample a graph G ∈ G(D) at random. What is G likely to look like? As a corollary of Theorem 1.4, we prove that with overwhelming probability, for a random graph G ∈ G(D) the number of edges of G in a given set S ⊂ E with |S| = Ω(n^2) is very close to the sum of the entries of the maximum entropy matrix indexed by the elements of S.
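For very small n one can compare the main term (1.4.2) against a brute-force count. The sketch below (illustrative, not the paper's algorithm) uses the fact, noted in Section 2.3, that for a d-regular sequence ζ_{j,k} = d/(n-1), so Q has a closed-form determinant; at n = 4 the Gaussian term is of course only a crude approximation, since the formula is asymptotic in n.

```python
import math, itertools

def count_graphs(d):
    """Brute-force |G(D)|: enumerate all graphs on n labeled vertices."""
    n = len(d)
    pairs = list(itertools.combinations(range(n), 2))
    count = 0
    for bits in itertools.product((0, 1), repeat=len(pairs)):
        deg = [0] * n
        for (j, k), b in zip(pairs, bits):
            deg[j] += b
            deg[k] += b
        if deg == list(d):
            count += 1
    return count

def gaussian_term_regular(n, d):
    """Main term (1.4.2) for a d-regular sequence, where zeta = d/(n-1)."""
    z = d / (n - 1)
    npairs = n * (n - 1) // 2
    H = npairs * (z * math.log(1 / z) + (1 - z) * math.log(1 / (1 - z)))
    # Q: off-diagonal entries z(1-z), diagonal entries (n-1)z(1-z)
    off, diag = z * (1 - z), (n - 1) * z * (1 - z)
    # Q = (diag - off) I + off J, so its eigenvalues are diag - off + n*off
    # (once, on the all-ones vector) and diag - off (n - 1 times)
    det_q = (diag - off + n * off) * (diag - off) ** (n - 1)
    return 2 * math.exp(H) / ((2 * math.pi) ** (n / 2) * math.sqrt(det_q))
```

For D = (2, 2, 2, 2) the exact count is 3 (the three 4-cycles), while the main term alone gives roughly 6.7; the Edgeworth factor and larger n close the gap.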

(1.6) Theorem. Let us fix numbers κ > 0 and 0 < δ ≤ 1/2. Then there exists a number γ(κ, δ) > 0 such that the following holds. Suppose that n ≥ γ(κ, δ) and that D = (d_1, ..., d_n) is a δ-tame degree sequence such that d_1 + ... + d_n ≡ 0 mod 2. For a set S ⊂ E, let σ_S(G) be the number of edges of a graph G ∈ G(D) that belong to the set S and let

σ_S(z) = Σ_{{j,k} ∈ S} ζ_{j,k},

where z = (ζ_{j,k}) is the maximum entropy matrix. Suppose that |S| ≥ δn^2 and let

ǫ = (1/δ) √( (ln n)/n ).

If ǫ ≤ 1 then for a uniformly chosen random graph G ∈ G(D), we have

P { G ∈ G(D): (1 - ǫ) σ_S(z) ≤ σ_S(G) ≤ (1 + ǫ) σ_S(z) } ≥ 1 - 2n^{-κn}.

The idea of the proof is as follows. For 1 ≤ j < k ≤ n, let x_{j,k} be independent Bernoulli random variables such that

P( x_{j,k} = 1 ) = ζ_{j,k}   and   P( x_{j,k} = 0 ) = 1 - ζ_{j,k}.

As is shown in [BH10], the probability mass function of the random vector X = (x_{j,k}) is constant on the integer points of P(D) and is equal to e^{-H(z)} at each G ∈ G(D), so that, conditioned on the event X ∈ G(D), the vector X is uniform on G(D). Theorem 1.4 then implies that the probability that X ∈ G(D) is not too small. On the other hand, standard large deviation inequalities imply that the sum Σ_{{j,k} ∈ S} x_{j,k} concentrates about the value of σ_S(z) = Σ_{{j,k} ∈ S} ζ_{j,k}. We supply the details of the proof in Section 10.

In many respects random graphs G ∈ G(D) behave like random graphs on the set {1, ..., n} of vertices, with pairs {j,k} chosen as the edges of G independently with probabilities ζ_{j,k}, where z = (ζ_{j,k}) is the maximum entropy matrix. As is discussed in [BH10], the distribution of the multivariate Bernoulli random vector X = (x_{j,k}) is the distribution of the largest entropy among all multivariate Bernoulli random vectors constrained by the conditions

E y_k = d_k   for k = 1, ..., n,   where   y_k = Σ_{j: j ≠ k} x_{j,k}.

We remark that we obtain the Gaussian approximation term (1.4.2) if we assume that the vector of random variables Y = (y_1, ..., y_n) is asymptotically

Gaussian around its expectation (d_1, ..., d_n). As it turns out, Y is not exactly Gaussian but is not very far from it.

It looks plausible that both Theorem 1.4 and Theorem 1.6 can be extended to degree sequences D allowing a moderate number of entries ζ_{j,k} of the maximum entropy matrix to be arbitrarily close to 1 or 0. Our proofs, however, do not seem to allow such an extension, with Theorem 4.1 being the main obstacle. Some of our proofs (mostly in Sections 5 and 6) are similar to those of [BH09a], where we applied the maximum entropy approach of [BH10] to count non-negative integer matrices with prescribed row and column sums.

The paper is organized as follows. In Section 2, we give several examples and extensions concerning our main result, Theorem 1.4, and also discuss related work in the literature. In Section 3, we present an integral representation for the number |G(D)| of graphs and also describe the plan of the proof of Theorem 1.4. The rest of the paper deals with the proofs.

2. Examples and extensions

Sometimes one can tell that a degree sequence is tame without computing the maximum entropy matrix.

(2.1) Theorem. Let us fix real numbers 0 < α < β < 1 such that

β < 2√α - α,   or, equivalently,   (α + β)^2 < 4α.

Then there exists a real number δ = δ(α, β) > 0 and a positive integer n_0 = n_0(α, β) such that any degree sequence D = (d_1, ..., d_n) satisfying

α < d_i/n < β   for i = 1, ..., n

is δ-tame provided n > n_0. One can choose

δ = ǫ^6 / (1 + ǫ)^6,

where ǫ = ǫ(α, β) > 0 is an explicit quantity determined by α and β alone, together with an explicit threshold n_0 = n_0(α, β).

For example, degree sequences D = (d_1, ..., d_n) satisfying

0.25 < d_i/n < 0.74   for i = 1, ..., n

or

0.01 < d_i/n < 0.18   for i = 1, ..., n

or

0.81 < d_i/n < 0.89   for i = 1, ..., n

are δ-tame for some δ > 0 and all sufficiently large n. We prove Theorem 2.1 in Section 11.

(2.2) On the boundary of δ-tameness. Let us choose rational numbers 0 < α < β < 1 such that

(2.2.1)   β = 2√α - α.

Clearly, β > α. Let us choose a positive integer n such that αn and βn are even integers and let us consider the degree sequence

d_1 = ... = d_k = βn   and   d_{k+1} = ... = d_n = αn   for k = √α n

(note that k is necessarily integral). The Erdős-Gallai condition (1.1.2) for k = √α n reduces, up to lower order terms, to β ≤ 2√α - α. In particular, (2.2.1) does not even guarantee that the polytope P(D) is non-empty.

In [JSM92] Jerrum, Sinclair and McKay discuss under what conditions an approximation formula for |G(D)| which depends smoothly on D may exist. They describe the phenomenon of the number of graphs |G(D)| changing sharply when the degree sequence D is varying only slightly around some special values of D. This phenomenon is apparently explained by the fact that the dimension of the polytope P(D) may change abruptly, or the polytope may disappear altogether, when D lies on the boundary of the Erdős-Gallai conditions (1.1.2). Theorem 8.1 of [JSM92] states that as long as

( d^+ - d^- + 1 )^2 ≤ 4 d^- ( n - d^+ - 1 ),   where   d^+ = max { d_i: i = 1, ..., n }   and   d^- = min { d_i: i = 1, ..., n },

the degree sequence D is P-stable, meaning that increasing one of the degrees d_i and decreasing another by 1 does not change |G(D)| by more than a factor of n^{10} (this, in turn, implies that there are polynomial time randomized approximation algorithms for computing |G(D)| and sampling a random graph G ∈ G(D)). The condition of our Theorem 2.1 is only marginally stronger than this condition. As Sourav Chatterjee pointed out to us, Lemma 4.1 of the recent paper [CDS11] shows that a sequence D is δ-tame provided it lies sufficiently deep inside the polyhedron defined by the Erdős-Gallai conditions (1.1.2). Our example shows that the bounds of Theorem 2.1 are essentially the best possible if we take into account only the largest and the smallest degree of a vertex of the graph.
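Two elementary checks from the discussion above can be sketched directly (function names are ours): the Erdős-Gallai feasibility test (1.1.1)-(1.1.2), and the equivalence of the two forms of the hypothesis of Theorem 2.1.

```python
import math

def erdos_gallai_feasible(degrees):
    """G(D) is non-empty iff the degree sum is even (1.1.1) and, with
    d_1 >= ... >= d_n,
        sum_{i<=k} d_i <= k(k-1) + sum_{i>k} min(k, d_i)
    holds for every k (1.1.2)."""
    d = sorted(degrees, reverse=True)
    if sum(d) % 2:
        return False
    n = len(d)
    return all(sum(d[:k]) <= k * (k - 1) + sum(min(k, x) for x in d[k:])
               for k in range(1, n + 1))

def tame_window(alpha, beta):
    """Hypothesis of Theorem 2.1: beta < 2*sqrt(alpha) - alpha,
    equivalently (alpha + beta)**2 < 4*alpha."""
    assert 0 < alpha < beta < 1
    return (alpha + beta) ** 2 < 4 * alpha
```

For instance, (3, 3, 1, 1) fails (1.1.2) at k = 2, while (3, 1, 1, 1) is realized by a star.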

(2.3) Regular graphs. In [MW90] McKay and Wormald compute the asymptotics of |G(D)| for regular graphs, where

d_1 = ... = d_n = d,

and almost regular graphs, where

| d_i - d | < n^{1/2 + ǫ}   for i = 1, ..., n

for a sufficiently small ǫ > 0; see also [McK11] for recent developments and [McK10] for a survey. One can show that the formula of Theorem 1.4 is equivalent to the asymptotic formula of [MW90] for regular or almost regular graphs. In the case of regular graphs, symmetry requires that

ζ_{j,k} = d/(n - 1)   for all j ≠ k

for the maximum entropy matrix z = (ζ_{j,k}).

(2.4) Approximations in the cut norm. The cut norm (sometimes called the normalized cut norm) of a real m x n matrix A = (a_{jk}) is defined by

||A||_cut = (1/(mn)) max_{J,K} | Σ_{j ∈ J, k ∈ K} a_{jk} |,

where the maximum is taken over all non-empty subsets J ⊂ {1, ..., m} and K ⊂ {1, ..., n}. Let us choose a set S in Theorem 1.6 of the form

S = { {j,k}: j ∈ J, k ∈ K, j ≠ k }

for some J, K ⊂ {1, ..., n}. We note that there are not more than 2^{2n} distinct sets S of this form. Theorem 1.6 implies that as n grows, the maximum entropy matrix z(D) approximates the adjacency matrix of the overwhelming majority of graphs G ∈ G(D) within an error of O( n^{-1/2} ln n ) in the cut norm.

Shortly after the first version of this paper appeared, using a different approach, Chatterjee, Diaconis and Sly [CDS11] described graph limits of graphs from G(D) as n grows. A graph limit is a certain function on [0,1] x [0,1], viewed as an infinite matrix, which naturally arises as a limit object for a Cauchy sequence, in the cut norm, of adjacency matrices of graphs [LS06]. Graph limits constructed in [CDS11] can indeed be viewed as infinite maximum entropy matrices.
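For small matrices the cut norm just defined can be computed by brute force over pairs of subsets; a minimal sketch (ours):

```python
from itertools import chain, combinations

def cut_norm(a):
    """Normalized cut norm of a small m x n matrix:
    (1/mn) * max over non-empty J, K of |sum_{j in J, k in K} a[j][k]|."""
    m, n = len(a), len(a[0])

    def subsets(r):
        return chain.from_iterable(combinations(range(r), s)
                                   for s in range(1, r + 1))

    best = 0.0
    for J in subsets(m):
        for K in subsets(n):
            best = max(best, abs(sum(a[j][k] for j in J for k in K)))
    return best / (m * n)
```

The exhaustive search is exponential in m + n, which is consistent with the fact that in general the cut norm is only approximated efficiently.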

(2.5) Enumeration of bipartite graphs. A natural version of the problem concerns enumeration of labeled bipartite graphs with a given degree sequence or, equivalently, m x n matrices with 0-1 entries and prescribed row sums R = (r_1, ..., r_m) and column sums C = (c_1, ..., c_n). We assume that

r_1 + ... + r_m = c_1 + ... + c_n.

A simple necessary and sufficient condition for a 0-1 matrix with prescribed row and column sums to exist is given by the Gale-Ryser Theorem, see, for example, [BR91]. Let us consider the polytope P(R, C) of m x n matrices x = (ξ_{jk}) defined by the equations

Σ_{k=1}^n ξ_{jk} = r_j   for j = 1, ..., m   and   Σ_{j=1}^m ξ_{jk} = c_k   for k = 1, ..., n

and inequalities

0 ≤ ξ_{jk} ≤ 1   for all j, k.

Let us compute the maximum entropy matrix z = (ζ_{jk}) as the (necessarily unique) matrix z ∈ P(R, C) that maximizes

H(x) = Σ_{jk} ( ξ_{jk} ln (1/ξ_{jk}) + (1 - ξ_{jk}) ln (1/(1 - ξ_{jk})) )   for x = (ξ_{jk})

on P(R, C). For 0 < δ ≤ 1/2, we say that the margins (R, C) are δ-tame if

δm ≤ n   and   δn ≤ m

and

δ ≤ ζ_{jk} ≤ 1 - δ   for all j, k.

Suppose that the margins (R, C) are indeed δ-tame for some δ > 0. Let us define a quadratic form q: R^{m+n} -> R by

(2.5.1)   q(s, t) = (1/2) Σ_{j,k} ζ_{jk}(1 - ζ_{jk}) (σ_j + τ_k)^2   for (s, t) = (σ_1, ..., σ_m; τ_1, ..., τ_n).

Let

u = ( 1, ..., 1 (m times); -1, ..., -1 (n times) )

and let L = u^⊥ be the orthogonal complement to u in R^{m+n}. Then the restriction q|_L of q onto L is strictly positive definite and we define det( q|_L ) as the product of the non-zero eigenvalues of q. We consider the Gaussian probability measure on L with density proportional to e^{-q} and define random variables f, h: L -> R by

f(s, t) = -(1/6) Σ_{j,k} ζ_{jk}(1 - ζ_{jk})(1 - 2ζ_{jk}) (σ_j + τ_k)^3   and
h(s, t) = (1/24) Σ_{j,k} ζ_{jk}(1 - ζ_{jk})(6ζ_{jk}^2 - 6ζ_{jk} + 1) (σ_j + τ_k)^4

for (s, t) = (σ_1, ..., σ_m; τ_1, ..., τ_n). We define

μ = E f^2   and   ν = E h.

Then the number #(R, C) of 0-1 matrices with row sums R and column sums C is

(2.5.2)   #(R, C) = ( 1 + o(1) ) ( √(m+n) e^{H(z)} / ( (4π)^{(m+n-1)/2} √(det q|_L) ) ) exp( -μ/2 + ν ),

provided m, n -> +∞ in such a way that the margins (R, C) remain δ-tame for some δ > 0. We sketch the proof of (2.5.2) in Section 12.

Canfield and McKay [CM05] obtained an asymptotic formula for #(R, C) when all row sums are equal, r_1 = ... = r_m, and all column sums are equal, c_1 = ... = c_n, which was later extended to the case of almost equal row sums and almost equal column sums [CGM08], see also [GM09]. The maximum entropy matrix z was introduced in [Ba10], where a cruder asymptotic formula

ln #(R, C) ≈ H(z)

was established without the δ-tameness assumption and for a wider class of enumeration problems, including enumeration of 0-1 matrices with prescribed row and column sums and zeros in prescribed positions. It was also shown in [Ba10] that a random 0-1 matrix with prescribed row and column sums concentrates about the maximum entropy matrix z.

3. An integral representation for the number of graphs

In [BH10] we proved the following general result; see Theorem 5, Lemma 1 and formula (6) there.

(3.1) Theorem. Let P ⊂ R^p be a polyhedron defined by the system of linear equations Ax = b, where A is an n x p matrix with columns a_1, ..., a_p ∈ Z^n and b ∈ Z^n is an integer vector, and inequalities 0 ≤ x ≤ 1 (the inequalities are understood coordinate-wise). Suppose that P has a non-empty interior, that is, contains a point x = (ξ_1, ..., ξ_p) such that 0 < ξ_j < 1 for j = 1, ..., p.

Then the function

H(x) = Σ_{j=1}^p ( ξ_j ln (1/ξ_j) + (1 - ξ_j) ln (1/(1 - ξ_j)) )   for x = (ξ_1, ..., ξ_p)

attains its maximum on P at a unique point z = (ζ_1, ..., ζ_p) such that 0 < ζ_j < 1 for j = 1, ..., p. Let us consider the parallelepiped Π = [-π, π]^n, Π ⊂ R^n. Then the number |P ∩ {0,1}^p| of 0-1 points in P can be written as

|P ∩ {0,1}^p| = ( e^{H(z)} / (2π)^n ) ∫_Π e^{-i⟨t,b⟩} Π_{j=1}^p ( 1 - ζ_j + ζ_j e^{i⟨a_j,t⟩} ) dt,

where ⟨ , ⟩ is the standard scalar product in R^n, dt is the standard Lebesgue measure in R^n and i = √(-1).

The idea of the proof is as follows. Let X = (x_1, ..., x_p) be a random vector of independent Bernoulli random variables such that

P( x_j = 1 ) = ζ_j   and   P( x_j = 0 ) = 1 - ζ_j   for j = 1, ..., p.

It turns out that the probability mass function of X is constant on the set P ∩ {0,1}^p and equals e^{-H(z)} at every 0-1 point in P. Letting Y = AX, we obtain

|P ∩ {0,1}^p| = e^{H(z)} P( X ∈ P ) = e^{H(z)} P( Y = b ),

and the probability in question is written as the integral of the characteristic function of Y. Since

Σ_{j=1}^p ζ_j a_j = b,

in a neighborhood of the origin t = 0 the integrand can be written as

(3.2)
e^{-i⟨t,b⟩} Π_{j=1}^p ( 1 - ζ_j + ζ_j e^{i⟨a_j,t⟩} )
  = exp { -(1/2) Σ_{j=1}^p ζ_j(1 - ζ_j) ⟨a_j,t⟩^2
          - (i/6) Σ_{j=1}^p ζ_j(1 - ζ_j)(1 - 2ζ_j) ⟨a_j,t⟩^3
          + (1/24) Σ_{j=1}^p ζ_j(1 - ζ_j)(6ζ_j^2 - 6ζ_j + 1) ⟨a_j,t⟩^4
          + O( Σ_{j=1}^p ζ_j(1 - ζ_j) |⟨a_j,t⟩|^5 ) }.

Note that the linear term is absent in the expansion. We obtain the following corollary.

(3.3) Corollary. Let D = (d_1, ..., d_n) be a degree sequence such that the polytope P(D) defined in Section 1.2 has a non-empty interior and let z = (ζ_{j,k}) be the maximum entropy matrix. Let

F(t) = exp( -i Σ_{m=1}^n d_m τ_m ) Π_{{j,k}} ( 1 - ζ_{j,k} + ζ_{j,k} e^{i(τ_j + τ_k)} )   for t = (τ_1, ..., τ_n).

Then for the parallelepiped Π = [-π, π]^n, we have

|G(D)| = ( e^{H(z)} / (2π)^n ) ∫_Π F(t) dt.

Proof. Follows by Theorem 3.1.

We note that in the case of regular and almost regular graphs (see Section 2.3) the integral of Corollary 3.3 is the same as the one evaluated by McKay and Wormald [MW90].

(3.4) Plan of the proof of Theorem 1.4. We use the integral representation of Corollary 3.3. Let us define subsets U, W ⊂ Π by

U = { (τ_1, ..., τ_n): |τ_j| ≤ (ln n)/√n for j = 1, ..., n }

and

W = { (τ_1, ..., τ_n): |τ_j - σ_j π| ≤ (ln n)/√n for some σ_j = ±1 and all j = 1, ..., n }.

We show that the integral of F(t) over Π \ (U ∪ W) is asymptotically negligible. Namely, in Section 8 we prove that the integral

∫_{Π \ (U ∪ W)} |F(t)| dt

is asymptotically negligible compared to the integral

(3.4.1)   ∫_U F(t) dt.
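Corollary 3.3 can be sanity-checked numerically in a tiny case (this check is ours). For D = (1, 1, 1, 1) on n = 4 vertices, G(D) consists of the three perfect matchings of K_4, and by symmetry ζ_{j,k} = 1/3. Since the integrand is a trigonometric polynomial of small degree in each τ_j, a uniform 8-point grid per coordinate integrates it exactly:

```python
import cmath, math
from itertools import combinations, product

n, zeta, N = 4, 1.0 / 3.0, 8
pairs = list(combinations(range(n), 2))
H = len(pairs) * (zeta * math.log(1 / zeta)
                  + (1 - zeta) * math.log(1 / (1 - zeta)))

total = 0.0
grid = [2 * math.pi * r / N for r in range(N)]   # periodic, same as [-pi, pi]
for taus in product(grid, repeat=n):
    val = cmath.exp(-1j * sum(taus))             # e^{-i<t,b>} with b = (1,1,1,1)
    for j, k in pairs:
        val *= (1 - zeta) + zeta * cmath.exp(1j * (taus[j] + taus[k]))
    total += val.real
count = math.exp(H) * total / N ** n             # (2*pi)^{-n} times the integral
```

The grid average recovers |G(D)| = 3 up to floating-point error.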

It is easy to show that

(3.4.2)   ∫_U F(t) dt = ∫_W F(t) dt,

provided d_1 + ... + d_n is even. In Section 7, we evaluate the integral (3.4.1). In particular, we show that the integrals (3.4.1) and (3.4.2) have the same order of magnitude, and so the integral of F(t) outside of U ∪ W is indeed asymptotically irrelevant.

From (3.2) one can deduce that asymptotically, as n -> +∞,

F(t) ≈ exp( -q(t) + i f(t) + h(t) )   for t ∈ U,

where q is defined by (1.3.1) and f and h are defined by (1.3.2).

Let us consider the Gaussian probability measure in R^n with density proportional to e^{-q}. In Section 6, we prove that with respect to that measure

h(t) ≈ E h = ν   almost everywhere in U.

This allows us to conclude that

∫_U exp( -q(t) + i f(t) + h(t) ) dt ≈ e^ν ∫_U exp( -q(t) + i f(t) ) dt.

In Section 5, we prove that asymptotically, as n -> +∞, the function f behaves as a Gaussian random variable, so

∫_U exp( -q(t) + i f(t) ) dt ≈ ∫_{R^n} exp( -q(t) + i f(t) ) dt ≈ exp( -(1/2) E f^2 ) ∫_{R^n} e^{-q(t)} dt,

which concludes the evaluation of (3.4.1).

The crucial consideration used in proving these asymptotics is that with respect to the Gaussian probability measure in R^n with density proportional to e^{-q}, the coordinate functions τ_1, ..., τ_n are weakly correlated, that is,

E τ_j τ_k = O( 1/n^2 )   for j ≠ k   and   E τ_j^2 = O( 1/n )   for j = 1, ..., n.

We prove this in Section 4, where we essentially use the δ-tameness assumption.

(3.5) Notation. By γ, sometimes with an index or a list of parameters, we denote a positive constant depending only on the listed parameters. The most common appearance will be γ(δ), a positive constant depending only on the δ-tameness constant δ. As usual, for two functions g_1 and g_2, where g_2 is non-negative, we write g_1 = O(g_2) if |g_1| ≤ γ g_2 and g_1 = Ω(g_2) if g_1 ≥ γ g_2 for some γ > 0.
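The weak correlation just described can be made fully explicit in the regular case (our computation, assuming ζ_{j,k} = d/(n-1) as in Section 2.3). Then q(t) = (1/2) t' A t with A = c((n-2)I + J), c = ζ(1-ζ) and J the all-ones matrix, and the covariance matrix of the measure with density proportional to e^{-q} is A^{-1} = (1/(c(n-2)))(I - J/(2n-2)), exhibiting the O(1/n) variances and O(1/n^2) covariances:

```python
from itertools import product

def covariance_regular(n, zeta):
    """Variance and covariance of the tau's for constant zeta:
    E tau_j^2     = (1 - 1/(2n-2)) / (c(n-2))  = O(1/n),
    E tau_j tau_k = -1 / (c(n-2)(2n-2))        = O(1/n^2), j != k,
    where c = zeta*(1 - zeta)."""
    c = zeta * (1 - zeta)
    var = (1 - 1 / (2 * n - 2)) / (c * (n - 2))
    cov = -1 / (c * (n - 2) * (2 * n - 2))
    return var, cov

def check_inverse(n):
    """Verify ((n-2)I + J) * (I - J/(2n-2)) == (n-2) I."""
    M = [[(n - 2) * (j == k) + 1 for k in range(n)] for j in range(n)]
    Binv = [[(j == k) - 1 / (2 * n - 2) for k in range(n)] for j in range(n)]
    P = [[sum(M[j][m] * Binv[m][k] for m in range(n)) for k in range(n)]
         for j in range(n)]
    return all(abs(P[j][k] - (n - 2) * (j == k)) < 1e-9
               for j, k in product(range(n), repeat=2))
```

Doubling n roughly halves the variance and divides the covariance by about four, as Theorem 4.1 predicts.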

4. Correlations

Let z = (ζ_{j,k}) be the maximum entropy matrix as defined in Section 1.2. We assume that 0 < ζ_{j,k} < 1 for all j ≠ k. We define the quadratic form q: R^n -> R by

q(t) = (1/2) Σ_{{j,k}} ζ_{j,k}(1 - ζ_{j,k}) (τ_j + τ_k)^2   for t = (τ_1, ..., τ_n).

For n > 2 the quadratic form q is strictly positive definite. We consider the Gaussian probability measure on R^n with density proportional to e^{-q}. We consider a point t = (τ_1, ..., τ_n) as a random vector and τ_1, ..., τ_n as random variables. The main result of this section is as follows.

(4.1) Theorem. For any 0 < δ ≤ 1/2 there exists γ(δ) > 0 such that the following holds. Suppose that

δ ≤ ζ_{j,k} ≤ 1 - δ   for all j ≠ k.

Then

| E τ_j τ_k | ≤ γ(δ)/n^2   provided j ≠ k   and   E τ_j^2 ≤ γ(δ)/n   for j = 1, ..., n.

We will often consider the following situation. Let ψ: R^n -> R be a positive definite quadratic form. We consider the Gaussian probability measure in R^n with density proportional to e^{-ψ}. For a polynomial random variable f: R^n -> R we denote by E( f; ψ ) its expectation with respect to that measure. For a subspace L ⊂ R^n, we consider the restriction ψ|_L of ψ onto L and the Gaussian probability measure on L with density proportional to e^{-ψ|_L}. For a polynomial f: R^n -> R, we denote by E( f; ψ|_L ) the expectation of the restriction f: L -> R with respect to that Gaussian probability measure on L.

We will use the following standard fact: suppose that R^n = L_1 ⊕ L_2 is a decomposition of R^n into the direct sum of orthogonal subspaces such that

ψ( t_1 + t_2 ) = ψ( t_1 ) + ψ( t_2 )   for all t_1 ∈ L_1 and t_2 ∈ L_2,

so that the coordinates t_1 ∈ L_1 and t_2 ∈ L_2 of the point t = t_1 + t_2, t ∈ R^n, are independent. Let l_1, l_2: R^n -> R be linear functions. Then

E( l_1 l_2; ψ ) = E( l_1 l_2; ψ|_{L_1} ) + E( l_1 l_2; ψ|_{L_2} ).

Indeed, writing t = t_1 + t_2 with t_1 ∈ L_1 and t_2 ∈ L_2 and noting that l_{1,2}(t) = l_{1,2}(t_1) + l_{1,2}(t_2), we obtain

E( l_1(t) l_2(t); ψ ) = E( l_1(t_1) l_2(t_1); ψ ) + E( l_1(t_1) l_2(t_2); ψ ) + E( l_1(t_2) l_2(t_1); ψ ) + E( l_1(t_2) l_2(t_2); ψ )
  = E( l_1 l_2; ψ|_{L_1} ) + E( l_1 l_2; ψ|_{L_2} ),

since the two cross terms factor into products of expectations of independent centered Gaussian random variables and hence vanish.

We deduce Theorem 4.1 from the following result.

(4.2) Proposition. Let n > 2 and let ξ_{j,k}, 1 ≤ j < k ≤ n, be a set of numbers such that

α ≤ ξ_{j,k} ≤ β   for all j, k   and some β > α > 0.

Let

σ_k = Σ_{j: j ≠ k} ξ_{j,k}   for k = 1, ..., n.

Let us consider the quadratic form ψ: R^n -> R defined by

ψ(t) = (1/2) Σ_{{j,k}} ξ_{j,k} ( τ_j/√σ_j + τ_k/√σ_k )^2   for t = (τ_1, ..., τ_n),

where the sum is taken over all unordered pairs of indices 1 ≤ j < k ≤ n. Then ψ is a positive definite quadratic form and we consider the Gaussian probability measure in R^n with density proportional to e^{-ψ}. Let ǫ = α/β. Then for n > 2/ǫ we have

| E τ_j τ_k | ≤ 2n / ( ǫ^{5/2} (n-1)(ǫn - 2) ) + 3/(2ǫn)   provided j ≠ k

and

| E τ_j^2 - 1 | ≤ 2n / ( ǫ^{5/2} (n-1)(ǫn - 2) ) + 3/(2ǫn)   for j = 1, ..., n.

Proof. Clearly, ψ is positive definite. Let

v = ( √σ_1, ..., √σ_n ).

Then v is an eigenvector of ψ with eigenvalue 1. Indeed, the gradient of ψ at t = v is 2v:

( ∂ψ/∂τ_j )|_{t=v} = (1/√σ_j) Σ_{k: k ≠ j} 2 ξ_{j,k} = 2 σ_j / √σ_j = 2√σ_j   for j = 1, ..., n.

Let L = v^⊥ ⊂ R^n be the orthogonal complement to v. Hence L is defined in R^n by the equation

Σ_{j=1}^n τ_j √σ_j = 0.

We write

ψ(t) = (1/2) Σ_{j=1}^n τ_j^2 + Σ_{{j,k}} ξ_{j,k} τ_j τ_k / √(σ_j σ_k).

We remark that

(4.2.1)   α(n-1) ≤ σ_j ≤ β(n-1)   for j = 1, ..., n.

Let

ω = Σ_{j=1}^n σ_j,   so that   ω ≥ αn(n-1),

and let us define the quadratic form φ: R^n -> R by

φ(t) = (1/ω) ( Σ_{j=1}^n τ_j √σ_j )^2   for t = (τ_1, ..., τ_n).

Hence φ is a form of rank 1 and v is an eigenvector of φ with eigenvalue 1. We define a perturbation

ψ̃ = ψ - (ǫ^2/2) φ   for ǫ = α/β.

Hence ψ̃ is a positive definite quadratic form such that ψ̃(t) = ψ(t) for all t ∈ L, and v is an eigenvector of ψ̃ with eigenvalue 1 - ǫ^2/2.

Let us consider the Gaussian probability measure on R^n with density proportional to e^{-ψ̃}. Our immediate goal is to estimate the covariances E( τ_j τ_k; ψ̃ ) with respect to that measure. Denoting by ⟨ , ⟩ the standard scalar product in R^n, we can write

ψ̃(t) = (1/2) ⟨ (I + Q) t, t ⟩,

where I is the n x n identity matrix and Q = (q_{jk}) is an n x n symmetric matrix such that v is an eigenvector of Q with eigenvalue 1 - ǫ^2. We have

q_{jk} = ξ_{j,k}/√(σ_j σ_k) - ǫ^2 √(σ_j σ_k)/ω   for j ≠ k   and
q_{jj} = -ǫ^2 σ_j/ω   for j = 1, ..., n.

It follows by (4.2.1) that

(4.2.2)   0 ≤ q_{jk} ≤ 1/(ǫ(n-1))   for j ≠ k   and   -ǫ/n ≤ q_{jj} ≤ 0   for j = 1, ..., n.

The covariance matrix R = ( E( τ_j τ_k; ψ̃ ) ) of the Gaussian measure with density proportional to e^{-ψ̃} is

R = (I + Q)^{-1}.

We write

I + Q = (1 - ǫ/n)(I + P),   where   P = (1 - ǫ/n)^{-1} ( Q + (ǫ/n) I ).

Hence P = (p_{jk}) is a symmetric matrix such that

0 ≤ p_{jk} ≤ 2/(ǫ(n-1))   for all j, k.

Furthermore, v is an eigenvector of P, Pv = λv, with eigenvalue

(4.2.3)   λ = ( 1 - ǫ^2 + ǫ/n ) / ( 1 - ǫ/n ).

Let us bound the entries of a positive integer power P^d = ( p^{(d)}_{jk} ) of P. Let

κ = 2β / ( α^{3/2} (n-1)^{3/2} )

and let y = κv, y = (η_1, ..., η_n). By (4.2.1) and (4.2.2), we have

(4.2.4)   p_{jk} ≤ η_j   for all j, k.

Also, by (4.2.1) we have

(4.2.5)   η_j ≤ 2 / ( ǫ^{3/2} (n-1) )   for all j.

Furthermore, y is an eigenvector of P with eigenvalue λ defined by (4.2.3), and hence y is an eigenvector of the d-th power P^d = ( p^{(d)}_{jk} ) with eigenvalue λ^d. Combining this with (4.2.4) and (4.2.5), for d ≥ 0 we obtain

p^{(d+1)}_{jk} = Σ_{m=1}^n p^{(d)}_{jm} p_{mk} ≤ Σ_{m=1}^n p^{(d)}_{jm} η_m = λ^d η_j ≤ λ^d 2/( ǫ^{3/2} (n-1) ).

We note that for n > 2/ǫ we have 0 < λ < 1. Consequently, the series

(I + P)^{-1} = I + Σ_{d=1}^∞ (-1)^d P^d

converges absolutely and we can bound the entries of the matrix

R = (I + Q)^{-1} = (1 - ǫ/n)^{-1} (I + P)^{-1},   R = ( r_{jk} ),

by

| r_{jk} | ≤ (1 - ǫ/n)^{-1} ( 2/(ǫ^{3/2}(n-1)) ) (1 - λ)^{-1}   if j ≠ k   and
| r_{jj} - 1 | ≤ (1 - ǫ/n)^{-1} ( 2/(ǫ^{3/2}(n-1)) ) (1 - λ)^{-1}   for j = 1, ..., n,

where

1 - λ = ( ǫ^2 - 2ǫ/n ) / ( 1 - ǫ/n ).

Since R is the covariance matrix of the Gaussian probability measure with density proportional to e^{-ψ̃}, we obtain

| E( τ_j τ_k; ψ̃ ) | ≤ 2n / ( ǫ^{5/2} (n-1)(ǫn - 2) )   provided j ≠ k   and
| E( τ_j^2; ψ̃ ) - 1 | ≤ 2n / ( ǫ^{5/2} (n-1)(ǫn - 2) )   for j = 1, ..., n.

Now we go back to the form ψ and the Gaussian probability measure with density proportional to e^{-ψ}. Since v is an eigenvector of both ψ and ψ̃, since L = v^⊥ and since ψ and ψ̃ coincide on L, for any linear functions l_1, l_2: R^n -> R, we have

E( l_1 l_2; ψ ) = E( l_1 l_2; ψ|_L ) + E( l_1 l_2; ψ|_{span v} )
  = E( l_1 l_2; ψ̃|_L ) + E( l_1 l_2; ψ|_{span v} )
  = E( l_1 l_2; ψ̃ ) - E( l_1 l_2; ψ̃|_{span v} ) + E( l_1 l_2; ψ|_{span v} ).

We note that the gradient of the coordinate function τ_j restricted to span(v) is √(σ_j/ω). Since v is an eigenvector of ψ with eigenvalue 1 and an eigenvector of ψ̃ with eigenvalue 1 - ǫ^2/2, we have

E( τ_j τ_k; ψ|_{span v} ) = √(σ_j σ_k) / (2ω)   and   E( τ_j τ_k; ψ̃|_{span v} ) = √(σ_j σ_k) / ( (2 - ǫ^2) ω )   for all j, k.

By (4.2.1) we have

E( τ_j τ_k; ψ|_{span v} ) ≤ 1/(2ǫn)   and   E( τ_j τ_k; ψ̃|_{span v} ) ≤ 1/(ǫn).

The proof now follows by combining the bounds above.

Now we are ready to prove Theorem 4.1.

Proof of Theorem 4.1. Let us define

ξ_{j,k} = ζ_{j,k} - ζ_{j,k}^2   for all j ≠ k

and let us choose α = δ - δ^2 and β = 1/4 in Proposition 4.2. We define σ_j and ψ as in Proposition 4.2 and consider the linear transformation

( τ_1, ..., τ_n ) -> ( τ_1 √σ_1, ..., τ_n √σ_n ).

Then the push-forward of the Gaussian probability measure with density proportional to e^{-q} is the Gaussian probability measure with density proportional to e^{-ψ}. Therefore,

E( τ_j τ_k; q ) = E( τ_j τ_k; ψ ) / √(σ_j σ_k).

Since

σ_j ≥ α(n-1)   for j = 1, ..., n,

the proof follows by Proposition 4.2.

We will need the following lemma.

(4.3) Lemma. Let q_0: R^n -> R, n ≥ 2, be the quadratic form defined by the formula

q_0(t) = (1/2) Σ_{{j,k}} ( τ_j + τ_k )^2   for t = (τ_1, ..., τ_n).

Then the eigenspaces of q_0 are as follows: the 1-dimensional eigenspace E_1 with eigenvalue n - 1 spanned by the vector u = (1, ..., 1), and the (n-1)-dimensional eigenspace E_2 = u^⊥ with eigenvalue (n - 2)/2.
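The two eigenvalue statements of Lemma 4.3 can be verified numerically before reading the proof (a check of ours: for an eigenvector w of the form with eigenvalue c one has q_0(w) = c |w|^2):

```python
from itertools import combinations

def q0(t):
    """q0(t) = (1/2) * sum over unordered pairs {j,k} of (t_j + t_k)^2."""
    return 0.5 * sum((t[j] + t[k]) ** 2
                     for j, k in combinations(range(len(t)), 2))

def norm2(t):
    return sum(x * x for x in t)

n = 6
u = [1.0] * n                        # spans E_1, eigenvalue n - 1
w = [1.0, -1.0] + [0.0] * (n - 2)    # lies in u-perp = E_2, eigenvalue (n-2)/2
```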

Proof. We have

( ∂q_0/∂τ_k )|_{t=u} = 2(n - 1).

Hence the gradient of q_0 at t = u is 2(n-1)u, so u is an eigenvector with eigenvalue n - 1. For t ⊥ u we have τ_1 + ... + τ_n = 0 and hence

( ∂q_0/∂τ_k )(t) = Σ_{j: j ≠ k} ( τ_j + τ_k ) = (n - 2) τ_k.

Therefore, the gradient of q_0 at t ⊥ u is (n-2)t, and so t is an eigenvector with eigenvalue (n - 2)/2.

5. The third degree term

The main result of this section is the following theorem.

(5.1) Theorem. For unordered pairs {j,k}, 1 ≤ j < k ≤ n, let u_{j,k} be Gaussian random variables such that E u_{j,k} = 0 for all j, k. Suppose further that for some θ > 0 we have

E u_{j,k}^2 ≤ θ/n   for all j, k

and that

| E u_{j_1,k_1} u_{j_2,k_2} | ≤ θ/n^2   provided {j_1,k_1} ∩ {j_2,k_2} = ∅.

Let

U = Σ_{{j,k}} u_{j,k}^3.

Then for some constant γ(θ) > 0 and any 0 < ǫ < 1/2 we have

| E exp( iU ) - exp( -(1/2) E U^2 ) | ≤ ǫ   provided   n ≥ γ(θ)/ǫ.

Furthermore,

E U^2 ≤ γ(θ)

for some γ(θ) > 0. Here i = √(-1).

We apply Theorem 5.1 in the following situation. Let q: R^n -> R be the quadratic form defined by (1.3.1). Let us consider the Gaussian probability measure on R^n with density proportional to e^{-q}. We define random variables u_{j,k} by

u_{j,k}(t) = ( -(1/6) ζ_{j,k}(1 - ζ_{j,k})(1 - 2ζ_{j,k}) )^{1/3} ( τ_j + τ_k )   for t = (τ_1, ..., τ_n),

where the real cube root is used. Then for the function f(t) defined by (1.3.2) we have

f = Σ_{{j,k}} u_{j,k}^3.

In this section, all implied constants in the O( ) notation are absolute. Our main tool is Wick's formula for the expectation of the product of Gaussian random variables.

(5.2) Wick's formula. Let w_1, ..., w_l be Gaussian random variables such that E w_1 = ... = E w_l = 0. Then

E( w_1 ... w_l ) = 0   if l is odd

and

E( w_1 ... w_l ) = Σ E( w_{i_1} w_{i_2} ) ... E( w_{i_{l-1}} w_{i_l} )   if l = 2r is even,

where the sum is taken over all (2r)!/(r! 2^r) unordered partitions of the set of indices {1, ..., l} into r = l/2 pairwise disjoint unordered pairs {i_1, i_2}, ..., {i_{l-1}, i_l}; see, for example, [Zv97]. Such a partition is called a matching of the random variables w_1, ..., w_l and we say that w_i and w_j are matched if they form a pair in the matching. In particular,

(5.2.1)   E w^{2r} = ( (2r)! / (r! 2^r) ) ( E w^2 )^r

for a centered Gaussian random variable w. We will also use that

(5.2.2)   E( w_1^3 w_2^3 ) = 9 E(w_1^2) E(w_2^2) E(w_1 w_2) + 6 ( E(w_1 w_2) )^3

and, later in Section 6, that

(5.2.3)   cov( w_1^4, w_2^4 ) = E( w_1^4 w_2^4 ) - E(w_1^4) E(w_2^4) = 72 ( E(w_1 w_2) )^2 E(w_1^2) E(w_2^2) + 24 ( E(w_1 w_2) )^4.
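Wick's formula can be implemented directly by recursing over matchings, which also previews the matching arguments of Section 5.3; a small sketch of ours, where cov[i][j] holds the pair covariances E(w_i w_j):

```python
def wick(cov, idx):
    """Wick's formula: E[w_{i1} ... w_{il}] as a sum, over perfect matchings
    of the index list idx, of products of pair covariances cov[i][j].
    The first index is matched with each remaining one in turn, so every
    matching is enumerated exactly once."""
    if len(idx) % 2 == 1:
        return 0.0
    if not idx:
        return 1.0
    first, rest = idx[0], idx[1:]
    total = 0.0
    for pos in range(len(rest)):
        total += cov[first][rest[pos]] * wick(cov, rest[:pos] + rest[pos + 1:])
    return total
```

With unit variances and correlation 1/2, (5.2.2) predicts E(w_1^3 w_2^3) = 9/2 + 6/8 = 5.25, and (5.2.1) gives E w^4 = 3, E w^6 = 15 for a standard Gaussian.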

(5.3) Representing monomials by graphs. Let x_{j,k}: 1 ≤ j < k ≤ n, be formal commuting variables. We interpret a monomial in the x_{j,k} as a weighted graph as follows. Let K_n be the complete graph with vertices 1, ..., n and edges {j,k} for 1 ≤ j < k ≤ n. A weighted graph G is a set of edges {j,k} of K_n with positive integer weights α_{j,k} on them. The set of vertices of G consists of all vertices of the edges of G. With G, we associate a monomial

m_G(x) = Π_{{j,k} ∈ G} x_{j,k}^{α_{j,k}}.

The weight of G is the degree of m_G(x), that is, Σ_{{j,k} ∈ G} α_{j,k}. We observe that for any p there are not more than r^{O(r)} n^p distinct weighted graphs G of weight 2r on p vertices.

In what follows, given a set of random variables, we construct auxiliary Gaussian random variables with the same matrix of covariances. This is always possible since the matrix of covariances is positive semi-definite. Our proof of Theorem 5.1 is based on the following combinatorial lemma.

(5.4) Lemma. For the Gaussian random variables u_{j,k} of Theorem 5.1, let us introduce auxiliary Gaussian random variables v_{j,k} such that

E v_{j,k} = 0   for all 1 ≤ j < k ≤ n

and

E( v_{j_1,k_1} v_{j_2,k_2} ) = E( u_{j_1,k_1}^3 u_{j_2,k_2}^3 )   for all 1 ≤ j_1 < k_1 ≤ n and 1 ≤ j_2 < k_2 ≤ n.

Given a weighted graph G of weight 2r, r > 1, let us represent it as a vertex-disjoint union G = G_0 ∪ G_1, where G_0 consists of s isolated edges of weight 1 each and G_1 is a graph with no isolated edges of weight 1 (we may have s = 0 and G_0 empty). Then

(1) We have

| E m_G( u_{j,k}^3: 1 ≤ j < k ≤ n ) |, | E m_G( v_{j,k}: 1 ≤ j < k ≤ n ) | ≤ r^{O(r)} θ^{3r} n^{-(3r + s/2)}   if s is even

and

| E m_G( u_{j,k}^3: 1 ≤ j < k ≤ n ) |, | E m_G( v_{j,k}: 1 ≤ j < k ≤ n ) | ≤ r^{O(r)} θ^{3r} n^{-(3r + (s+1)/2)}   if s is odd.

(2) If s is even and G_1 is a vertex-disjoint union of r - s/2 connected components, each consisting of a pair of edges of weight 1 sharing exactly one common vertex, then

| E m_G( u_{j,k}^3: 1 ≤ j < k ≤ n ) - E m_G( v_{j,k}: 1 ≤ j < k ≤ n ) | ≤ r^{O(r)} θ^{3r} n^{-(3r + s/2) - 1}.

(3) If s is even and G_1 is a vertex-disjoint union of r - s/2 connected components, each consisting of a pair of edges of weight 1 sharing exactly one common vertex, then G has precisely 3r + s/2 vertices. In all other cases, G has strictly fewer than 3r + s/2 vertices.

Proof. If {j_1,k_1} ∩ {j_2,k_2} = ∅ we say that the pair of variables u_{j_1,k_1}, u_{j_2,k_2} and the pair of variables v_{j_1,k_1}, v_{j_2,k_2} are weakly correlated. If {j_1,k_1} ∩ {j_2,k_2} ≠ ∅ we say that the pairs of variables are strongly correlated. Pairs of variables indexed by edges in different connected components of G are necessarily weakly correlated.

To prove Part (1), we use Wick's formula of Section 5.2. By (5.2.2), we obtain

(5.4.1)
| E( v_{j_1,k_1} v_{j_2,k_2} ) | = O( θ^3/n^4 )   if the pair v_{j_1,k_1}, v_{j_2,k_2} is weakly correlated,
| E( v_{j_1,k_1} v_{j_2,k_2} ) | = O( θ^3/n^3 )   if the pair v_{j_1,k_1}, v_{j_2,k_2} is strongly correlated.

Since for each isolated edge {j_1,k_1} ∈ G_0 the variable v_{j_1,k_1} has to be matched with a variable v_{j_2,k_2} indexed by an edge {j_2,k_2} in a different connected component, we conclude that every matching of the set { v_{j,k}: {j,k} ∈ G } contains at least s/2 weakly correlated pairs and hence

| E m_G( v_{j,k}: 1 ≤ j < k ≤ n ) | ≤ r^{O(r)} ( θ^3/n^4 )^{s/2} ( θ^3/n^3 )^{r - s/2}.

Moreover, if s is odd, then the number of weakly correlated pairs is at least (s+1)/2 and hence

| E m_G( v_{j,k}: 1 ≤ j < k ≤ n ) | ≤ r^{O(r)} ( θ^3/n^4 )^{(s+1)/2} ( θ^3/n^3 )^{r - (s+1)/2}.

Similarly, since for each isolated edge {j_1,k_1} ∈ G_0 at least one copy of the variable u_{j_1,k_1} has to be matched with a copy of a variable u_{j_2,k_2} indexed by an edge in a different connected component, we conclude that every matching of the multiset { u_{j,k}, u_{j,k}, u_{j,k}: {j,k} ∈ G } contains at least s/2 weakly correlated pairs, and hence

| E m_G( u_{j,k}^3: 1 ≤ j < k ≤ n ) | ≤ r^{O(r)} ( θ/n^2 )^{s/2} ( θ/n )^{3r - s/2}.

Moreover, if $s$ is odd, then the number of weakly correlated pairs is at least $(s+1)/2$ and hence

$$\left| \mathbf{E}\, m_G\left(u^3_{j,k} :\ 1 \le j < k \le n\right) \right| \le r^{O(r)} \left(\frac{\theta}{n^2}\right)^{(s+1)/2} \left(\frac{\theta}{n}\right)^{3r - (s+1)/2}.$$

This concludes the proof of Part (1).

To prove Part (2), let us define $\Sigma_v(G)$ as the sum in Wick's formula for $\mathbf{E}\, m_G\left(v_{j,k} :\ 1 \le j < k \le n\right)$ taken over all matchings of the set of the following structure: we split the edges of $G$ into $r$ pairs, pairing each isolated edge with another isolated edge and pairing each edge in a connected component consisting of two edges with the remaining edge in the same connected component. Then we match every variable $v_{j_1,k_1}$ with the variable $v_{j_2,k_2}$ such that $\{j_1,k_1\}$ is paired with $\{j_2,k_2\}$. Reasoning as in the proof of Part (1), we conclude that

$$\left| \mathbf{E}\, m_G\left(v_{j,k} :\ 1 \le j < k \le n\right) - \Sigma_v(G) \right| \le r^{O(r)} \theta^{3r} n^{-3r - s/2 - 1},$$

since every matching of the set which is not included in $\Sigma_v(G)$ contains at least $s/2 + 1$ weakly correlated pairs.

Similarly, let us define $\Sigma_u(G)$ as the sum in Wick's formula for $\mathbf{E}\, m_G\left(u^3_{j,k} :\ 1 \le j < k \le n\right)$ taken over all matchings of the multiset of the following structure: we split the edges of $G$ into $r$ pairs as above and match every copy of a variable $u_{j_1,k_1}$ with a copy of a variable $u_{j_2,k_2}$ indexed by an edge in the same pair (in particular, we may match copies of the same variable). Reasoning as in the proof of Part (1), we conclude that

$$\left| \mathbf{E}\, m_G\left(u^3_{j,k} :\ 1 \le j < k \le n\right) - \Sigma_u(G) \right| \le r^{O(r)} \theta^{3r} n^{-3r - s/2 - 1},$$

since every matching of the multiset which is not included in $\Sigma_u(G)$ contains at least $s/2 + 1$ weakly correlated pairs. The proof of Part (2) follows since $\Sigma_u(G) = \Sigma_v(G)$ by Wick's formula.

To prove Part (3), we note that a connected weighted graph $G$ of weight $e$ contains a spanning tree with at most $e$ edges and hence has at most $e + 1$ vertices. In particular, a connected graph $G$ of weight $e$ contains fewer than $3e/2$ vertices unless $G$ is an isolated edge of weight $1$ or a pair of edges of weight $1$ each, sharing one common vertex.
Therefore, $G$ has at most

$$2s + \frac{3}{2}(2r - s) = 3r + \frac{s}{2}$$

vertices, and strictly fewer vertices unless $s$ is even and the connected components of $G_1$ are pairs of edges of weight $1$ each sharing one common vertex. $\square$
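The vertex counts in Parts (1)-(3) come down to two pieces of arithmetic: a connected weighted graph of weight $e$ has at most $e + 1$ vertices, which is below $3e/2$ once $e > 2$, and the extremal graphs ($s$ isolated edges plus $r - s/2$ "cherries") have exactly $2s + 3(r - s/2) = 3r + s/2$ vertices. A mechanical check (ours, purely illustrative):

```python
# A connected weighted graph of weight e has at most e + 1 vertices (spanning
# tree), and e + 1 < 3e/2 exactly when e > 2; the ratio 3e/2 is attained only
# at e = 2, by the "cherry" of two weight-1 edges sharing a vertex.
for e in range(1, 20):
    assert 2 * (e + 1) < 3 * e or e <= 2

# Extremal graphs in Part (3): s isolated edges (2 vertices each) plus
# r - s/2 cherries (3 vertices each) have exactly 3r + s/2 vertices.
for r in range(2, 12):
    for s in range(0, 2 * r + 1, 2):
        if s // 2 <= r:
            assert 2 * s + 3 * (r - s // 2) == 3 * r + s // 2
```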

5.5 Proof of Theorem 5.1. Let $v_{j,k}$ be the Gaussian random variables defined in Lemma 5.4 and let $V = \sum_{\{j,k\}} v_{j,k}$. Since there are $O\left(n^3\right)$ strongly correlated pairs $v_{j_1,k_1}, v_{j_2,k_2}$ and there are $O\left(n^4\right)$ weakly correlated pairs, by (5.4.1) we have

(5.5.1) $\mathbf{E}\, V^2 = \mathbf{E}\, U^2 = O\left(\theta^3\right)$.

Since $V$ is a Gaussian random variable, we have

$$\mathbf{E}\, e^{iV} = \exp\left\{ -\frac{1}{2} \mathbf{E}\, V^2 \right\} = \exp\left\{ -\frac{1}{2} \mathbf{E}\, U^2 \right\}.$$

Our goal is to show that $\mathbf{E}\, e^{iV}$ and $\mathbf{E}\, e^{iU}$ are asymptotically equal as $n \to +\infty$.

By symmetry, the odd moments of $U$ and $V$ are $0$:

$$\mathbf{E}\, U^k = \mathbf{E}\, V^k = 0 \quad \text{if } k > 0 \text{ is odd.}$$

The even moments of $U$ and $V$ can be expressed as

$$\mathbf{E}\, U^{2r} = \sum_G a_G\, \mathbf{E}\, m_G\left(u^3_{j,k} :\ 1 \le j < k \le n\right) \quad \text{and} \quad \mathbf{E}\, V^{2r} = \sum_G a_G\, \mathbf{E}\, m_G\left(v_{j,k} :\ 1 \le j < k \le n\right),$$

where the sum is taken over all weighted graphs $G$ of weight $2r$ and $0 \le a_G \le (2r)!$.

Let $\mathcal{G}_{2r}$ be the set of weighted graphs $G$ of weight $2r$ whose connected components are an even number $s$ of isolated edges of weight $1$ and $r - s/2$ pairs of edges of weight $1$ sharing one common vertex. Since there are no more than $r^{O(r)} n^p$ distinct weighted graphs of weight $2r$ with $p$ vertices, by Parts (1) and (3) of Lemma 5.4, we have

$$\left| \mathbf{E}\, U^{2r} - \sum_{G \in \mathcal{G}_{2r}} a_G\, \mathbf{E}\, m_G\left(u^3_{j,k} :\ 1 \le j < k \le n\right) \right| \le \frac{r^{O(r)} \theta^{3r}}{n}$$

and

$$\left| \mathbf{E}\, V^{2r} - \sum_{G \in \mathcal{G}_{2r}} a_G\, \mathbf{E}\, m_G\left(v_{j,k} :\ 1 \le j < k \le n\right) \right| \le \frac{r^{O(r)} \theta^{3r}}{n}.$$

Therefore, by Part (2) of Lemma 5.4,

(5.5.2) $\left| \mathbf{E}\, U^{2r} - \mathbf{E}\, V^{2r} \right| \le \dfrac{r^{O(r)} \theta^{3r}}{n}$.

From Taylor's Theorem,

$$\left| e^{ix} - \sum_{s=0}^{2r-1} \frac{(ix)^s}{s!} \right| \le \frac{|x|^{2r}}{(2r)!} \quad \text{for } x \in \mathbb{R},$$

it follows that

$$\left| \mathbf{E}\, e^{iU} - \mathbf{E}\, e^{iV} \right| \le \frac{\mathbf{E}\, U^{2r}}{(2r)!} + \frac{\mathbf{E}\, V^{2r}}{(2r)!} + \sum_{s=0}^{2r-1} \frac{\left| \mathbf{E}\, U^s - \mathbf{E}\, V^s \right|}{s!}.$$

From (5.5.1) and (5.2.1) we deduce that for a positive integer $r$ we have

$$\mathbf{E}\, V^{2r} \le \frac{(2r)!\, 2^{O(r)} \theta^{3r}}{r!}.$$

Therefore, by (5.5.2),

(5.5.3)
$$\left| \mathbf{E}\, e^{iU} - \mathbf{E}\, e^{iV} \right| \le \frac{2^{O(r)} \theta^{3r}}{r!} + \frac{r^{O(r)} \theta^{3r}}{n}.$$

Given $0 < \epsilon \le 1/2$, one can choose a positive integer $r$ with $r \ln r = O\left(\theta^2 \ln \epsilon^{-1}\right)$ so that the first term in the right hand side of (5.5.3) does not exceed $\epsilon/2$. It follows then that for all $n \ge \gamma(\theta, \epsilon)$ we have

$$\left| \mathbf{E}\, e^{iU} - \mathbf{E}\, e^{iV} \right| \le \epsilon,$$

and the proof follows by (5.5.1). $\square$

6. The fourth degree term

The main result of this section is the following theorem.

6.1 Theorem. For unordered pairs $\{j,k\}$, $1 \le j < k \le n$, let $w_{j,k}$ be Gaussian random variables such that $\mathbf{E}\, w_{j,k} = 0$ for all $\{j,k\}$, and let $\sigma_{j,k} = \pm 1$ be numbers.
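The Taylor bound used above, $\left|e^{ix} - \sum_{s=0}^{2r-1} (ix)^s/s!\right| \le |x|^{2r}/(2r)!$ for real $x$, is easy to test numerically; the following sketch is ours and purely illustrative:

```python
import cmath
import math

def taylor_remainder(x, terms):
    """|e^{ix} - sum_{s < terms} (ix)^s / s!| for real x."""
    partial = sum((1j * x) ** s / math.factorial(s) for s in range(terms))
    return abs(cmath.exp(1j * x) - partial)

# the remainder after 2r terms is bounded by |x|^{2r} / (2r)!
for x in [0.3, 1.0, -1.7, 2.5]:
    for two_r in [2, 4, 6, 8]:
        assert taylor_remainder(x, two_r) <= abs(x) ** two_r / math.factorial(two_r) + 1e-12
```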

Suppose further that for some $\theta > 0$ we have

$$\mathbf{E}\, w^2_{j,k} \le \frac{\theta}{n} \quad \text{for all } \{j,k\}$$

and that

$$\left| \mathbf{E}\, w_{j_1,k_1} w_{j_2,k_2} \right| \le \frac{\theta}{n^2} \quad \text{provided } \{j_1,k_1\} \cap \{j_2,k_2\} = \emptyset.$$

Let

$$W = \sum_{\{j,k\}} \sigma_{j,k} w^4_{j,k}.$$

Then for some constant $\gamma(\theta) > 0$ we have

(1) $\mathbf{E}\, |W| \le \gamma(\theta)$;

(2) $\operatorname{var} W \le \dfrac{\gamma(\theta)}{n}$;

(3) $\mathbf{P}\left\{ |W| > \gamma(\theta) \right\} \le \exp\left\{ -n^{1/5} \right\}$

provided $n \ge \gamma_1(\theta)$ for some constant $\gamma_1(\theta) > 0$.

We apply Theorem 6.1 in the following situation. Let $q: \mathbb{R}^n \to \mathbb{R}$ be the quadratic form defined by (1.3.1). Let us consider the Gaussian probability measure on $\mathbb{R}^n$ with density proportional to $e^{-q}$. We define random variables $w_{j,k}$ by

$$w_{j,k}(t) = \frac{1}{\sqrt[4]{24}} \left( \zeta_{j,k}\left(1 - \zeta_{j,k}\right) \left| 6\zeta^2_{j,k} - 6\zeta_{j,k} + 1 \right| \right)^{1/4} \left(\tau_j + \tau_k\right) \quad \text{for } t = (\tau_1, \ldots, \tau_n)$$

and let

$$\sigma_{j,k} = \operatorname{sign}\left( 6\zeta^2_{j,k} - 6\zeta_{j,k} + 1 \right).$$

Then for the function $h$ defined by (1.3.2), we have

$$h = \sum_{\{j,k\}} \sigma_{j,k} w^4_{j,k}.$$

While the proof of Parts (1) and (2) is done by a straightforward computation, to prove Part (3) we need reverse Hölder inequalities for polynomials with respect to the Gaussian measure.

6.2 Lemma. Let $p$ be a polynomial of degree $d$ in random Gaussian variables $w_1, \ldots, w_l$. Then for $r > 2$ we have

$$\left( \mathbf{E}\, |p|^r \right)^{1/r} \le r^{d/2} \left( \mathbf{E}\, p^2 \right)^{1/2}.$$

Proof. This is Corollary 5 of [Du87]. $\square$

6.3 Proof of Theorem 6.1. All implied constants in the $O$ notation below are absolute. By formula (5.2.1),

$$\mathbf{E}\, w^4_{j,k} = 3\left( \mathbf{E}\, w^2_{j,k} \right)^2 = O\left(\frac{\theta^2}{n^2}\right)$$

and hence $\mathbf{E}\, |W| = O\left(\theta^2\right)$, and Part (1) follows.

Furthermore,

$$\operatorname{var} W = \sum_{\{j_1,k_1\},\, \{j_2,k_2\}} \sigma_{j_1,k_1} \sigma_{j_2,k_2} \operatorname{cov}\left( w^4_{j_1,k_1},\ w^4_{j_2,k_2} \right).$$

By (5.2.1) we have

$$\operatorname{cov}\left( w^4_{j_1,k_1},\ w^4_{j_2,k_2} \right) = O\left(\frac{\theta^4}{n^4}\right)$$

and, additionally,

$$\operatorname{cov}\left( w^4_{j_1,k_1},\ w^4_{j_2,k_2} \right) = O\left(\frac{\theta^4}{n^6}\right) \quad \text{provided } \{j_1,k_1\} \cap \{j_2,k_2\} = \emptyset.$$

Therefore,

(6.3.1) $\operatorname{var} W = O\left(\dfrac{\theta^4}{n}\right)$,

which proves Part (2).

Finally, applying Lemma 6.2 with $d = 4$ we deduce from (6.3.1) that for any $r > 2$

$$\mathbf{E}\, \left| W - \mathbf{E}\, W \right|^r \le r^{2r} n^{-r/2}\, 2^{O(r)} \theta^{2r}.$$

Choosing $r = n^{1/5}$, we conclude that for all sufficiently large $n \ge n_0(\theta)$ we have

$$\mathbf{E}\, \left| W - \mathbf{E}\, W \right|^r \le \exp\left\{ -n^{1/5} \right\}.$$

By Markov's inequality, we obtain

$$\mathbf{P}\left\{ \left| W - \mathbf{E}\, W \right| > 1 \right\} \le \exp\left\{ -n^{1/5} \right\}$$

for all sufficiently large $n$, and the proof follows from Part (1). $\square$
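Lemma 6.2 can be sanity-checked on the simplest polynomials $p(z) = z^d$ of a single standard Gaussian $z$, for which all even moments are double factorials, $\mathbf{E}\, z^{2m} = (2m - 1)!!$. The following check (ours, not a proof of the lemma) confirms the inequality for small $d$ and even $r$:

```python
import math

def double_factorial(n):
    """n!! for odd positive n, with (-1)!! = 1."""
    return math.prod(range(n, 0, -2)) if n > 0 else 1

def gaussian_even_moment(k):
    """E z^k = (k-1)!! for a standard Gaussian z and even k."""
    return double_factorial(k - 1)

# (E |p|^r)^{1/r} <= r^{d/2} (E p^2)^{1/2} for p(z) = z^d and even r
for d in range(1, 5):
    for r in [2, 4, 6, 8]:
        lhs = gaussian_even_moment(d * r) ** (1.0 / r)
        rhs = r ** (d / 2.0) * gaussian_even_moment(2 * d) ** 0.5
        assert lhs <= rhs + 1e-9
```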

7. Computing the integral over a neighborhood of the origin

We consider the integral of Corollary 3.3. Here

$$F(t) = \exp\left\{ -i \sum_{m=1}^n d_m \tau_m \right\} \prod_{\{j,k\}} \left( 1 - \zeta_{j,k} + \zeta_{j,k} e^{i(\tau_j + \tau_k)} \right) \quad \text{for } t = (\tau_1, \ldots, \tau_n),$$

where $D = (d_1, \ldots, d_n)$ is a given degree sequence, $Z = (\zeta_{j,k})$ is the maximum entropy matrix and $\Pi$ is the parallelepiped $[-\pi, \pi]^n$. We recall that the quadratic form $q: \mathbb{R}^n \to \mathbb{R}$ is defined by

$$q(t) = \frac{1}{2} \sum_{\{j,k\}} \zeta_{j,k}\left(1 - \zeta_{j,k}\right) \left(\tau_j + \tau_k\right)^2 \quad \text{for } t = (\tau_1, \ldots, \tau_n).$$

In this section, we prove the following main result.

7.1 Theorem. Let us fix a number $0 < \delta \le 1/2$ and suppose that $\delta \le \zeta_{j,k} \le 1 - \delta$ for all $j \ne k$. Let $f, h: \mathbb{R}^n \to \mathbb{R}$ be the polynomials defined by (1.3.2). Let us define a neighborhood $U \subset \Pi$ of the origin by

$$U = \left\{ (\tau_1, \ldots, \tau_n) : \ |\tau_k| \le \frac{\ln n}{\sqrt{n}} \ \text{ for } k = 1, \ldots, n \right\}.$$

Let

$$\Xi = \int_{\mathbb{R}^n} e^{-q(t)}\, dt$$

and let us consider the Gaussian probability measure in $\mathbb{R}^n$ with density $\Xi^{-1} e^{-q}$. Let $\mu = \mathbf{E}\, f^2$ and $\nu = \mathbf{E}\, h$. Then

(1) $\Xi \le \gamma^n \left(\dfrac{4\pi}{n}\right)^{n/2}$ for some constant $\gamma = \gamma(\delta) > 0$;

(2) $\mu \le \gamma(\delta)$, $|\nu| \le \gamma(\delta)$ and $\mathbf{E}\, |h| \le \gamma(\delta)$
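The comparison of $q$ with the model quadratic form $q_0(t) = \frac{1}{2}\sum_{\{j,k\}} (\tau_j + \tau_k)^2$, used repeatedly below (Lemma 4.3, and again in Section 8.9), rests on the identity $\sum_{j<k} (\tau_j + \tau_k)^2 = (n-2)\|t\|^2 + (\tau_1 + \cdots + \tau_n)^2$, which shows in particular that every eigenvalue of the form is at least $n - 2$. A quick numerical check (ours, illustrative):

```python
import random

def pair_sum_squares(t):
    """sum over j < k of (t_j + t_k)^2."""
    n = len(t)
    return sum((t[j] + t[k]) ** 2 for j in range(n) for k in range(j + 1, n))

random.seed(0)
for n in [3, 5, 8, 20]:
    t = [random.uniform(-1.0, 1.0) for _ in range(n)]
    # identity behind the eigenvalue bounds:
    # the form equals (n-2)|t|^2 + (sum of coordinates)^2
    rhs = (n - 2) * sum(x * x for x in t) + sum(t) ** 2
    assert abs(pair_sum_squares(t) - rhs) < 1e-9
```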

for some constant $\gamma(\delta) > 0$;

(3) For any $0 < \epsilon \le 1/2$,

$$\left| \int_U |F(t)|\, dt - e^{\nu}\, \Xi \right| \le \epsilon\, \Xi$$

provided $n \ge \gamma(\delta) \epsilon^{-1}$ for some constant $\gamma(\delta) > 0$;

(4) For any $0 < \epsilon \le 1/2$,

$$\left| \int_U F(t)\, dt - \exp\left\{ -\frac{\mu}{2} + \nu \right\} \Xi \right| \le \epsilon\, \Xi$$

provided $n \ge \gamma(\delta) \epsilon^{-1}$ for some $\gamma(\delta) > 0$.

Proof. In what follows, all constants implied in the $O$ and $\Omega$ notation depend only on the parameter $\delta$.

Let

$$q_0(t) = \frac{1}{2} \sum_{\{j,k\}} \left(\tau_j + \tau_k\right)^2 \quad \text{for } t = (\tau_1, \ldots, \tau_n)$$

be the quadratic form of Lemma 4.3. Since $\zeta_{j,k}\left(1 - \zeta_{j,k}\right) \ge \delta(1 - \delta) \ge \delta/2$, we have $q(t) \ge \frac{\delta}{2} q_0(t)$ and hence

$$\Xi = \int_{\mathbb{R}^n} e^{-q(t)}\, dt \le \int_{\mathbb{R}^n} e^{-(\delta/2) q_0(t)}\, dt = \pi^{n/2} \left(\frac{4}{\delta}\right)^{n/2} (n-2)^{-(n-1)/2} (2n-2)^{-1/2} \le \gamma^n \left(\frac{4\pi}{n}\right)^{n/2},$$

which proves Part (1).

Let us think of the coordinate functions $\tau_j$ as random variables with respect to the Gaussian probability measure with density proportional to $e^{-q}$. By Theorem 4.1, we have

(7.1)
$$\mathbf{E}\, \tau_j \tau_k = O\left(n^{-2}\right) \ \text{ provided } j \ne k \qquad \text{and} \qquad \mathbf{E}\, \tau_j^2 = O\left(n^{-1}\right) \ \text{ for } j = 1, \ldots, n.$$

For an unordered pair $\{j,k\}$, $1 \le j < k \le n$, let us define

$$u_{j,k} = \frac{1}{\sqrt[3]{6}} \left( \zeta_{j,k}\left(1 - \zeta_{j,k}\right)\left(1 - 2\zeta_{j,k}\right) \right)^{1/3} \left(\tau_j + \tau_k\right).$$

Then

$$f = \sum_{\{j,k\}} u^3_{j,k}.$$

Similarly, let us define

$$w_{j,k} = \frac{1}{\sqrt[4]{24}} \left( \zeta_{j,k}\left(1 - \zeta_{j,k}\right) \left| 6\zeta^2_{j,k} - 6\zeta_{j,k} + 1 \right| \right)^{1/4} \left(\tau_j + \tau_k\right) \quad \text{and} \quad \sigma_{j,k} = \operatorname{sign}\left( 6\zeta^2_{j,k} - 6\zeta_{j,k} + 1 \right).$$

Then

$$h = \sum_{\{j,k\}} \sigma_{j,k} w^4_{j,k}.$$

By (7.1) the random variables $u_{j,k}$ satisfy the conditions of Theorem 5.1 and hence the upper bound for $\mu = \mathbf{E}\, f^2$ follows by Theorem 5.1. Similarly, by (7.1) the random variables $w_{j,k}$ satisfy the conditions of Theorem 6.1 and hence the upper bounds for $|\nu| = |\mathbf{E}\, h|$ and $\mathbf{E}\, |h|$ follow by Part (1) of Theorem 6.1. This concludes the proof of Part (2) of the theorem.

By Lemma 4.3, the eigenvalues of $q$ are $\Omega(n)$, from which it follows that

(7.2)
$$\int_{\mathbb{R}^n \setminus U} e^{-q(t)}\, dt \le \exp\left\{ -\Omega\left(\ln^2 n\right) \right\} \Xi.$$

From Theorem 5.1, we have

$$\left| \int_{\mathbb{R}^n} e^{-q(t) + if(t)}\, dt - \exp\left\{ -\frac{\mu}{2} \right\} \Xi \right| \le \epsilon\, \Xi \quad \text{provided } n \ge O\left(\epsilon^{-1}\right),$$

which, combined with (7.2), results in

(7.3)
$$\left| \int_U e^{-q(t) + if(t)}\, dt - \exp\left\{ -\frac{\mu}{2} \right\} \Xi \right| \le \epsilon\, \Xi \quad \text{provided } n \ge O\left(\epsilon^{-1}\right).$$

By Part (2) of Theorem 6.1 and Chebyshev's inequality, we have

(7.4)
$$\mathbf{P}\left\{ |h - \nu| > \epsilon \right\} = O\left(\frac{1}{\epsilon^2 n}\right),$$

while by Part (3) of Theorem 6.1, we have

(7.5)
$$\mathbf{P}\left\{ |h| > \gamma(\delta) \right\} = O\left( \exp\left\{ -n^{1/5} \right\} \right)$$

for some constant $\gamma(\delta) > 0$. In addition,

(7.6)
$$h(t) = O\left(\ln^4 n\right) \quad \text{for } t \in U.$$

Combining (7.4)-(7.6) and Part (2) of the theorem, we deduce from (7.2) and (7.3) that

(7.7)
$$\left| \int_U e^{-q(t) + h(t)}\, dt - e^{\nu}\, \Xi \right| \le \epsilon\, \Xi \quad \text{and} \quad \left| \int_U e^{-q(t) + if(t) + h(t)}\, dt - \exp\left\{ -\frac{\mu}{2} + \nu \right\} \Xi \right| \le \epsilon\, \Xi$$

provided $n \ge O\left(\epsilon^{-1}\right)$.

From the Taylor series expansion, cf. Section 3.2, we obtain

$$F(t) = \exp\left\{ -q(t) + if(t) + h(t) + \rho(t) \right\}, \quad \text{where} \quad \rho(t) = O\left(\frac{\ln^5 n}{\sqrt{n}}\right) \ \text{ for } t \in U.$$

Therefore, for any $\epsilon > 0$ we have

$$\left| |F(t)| - e^{-q(t) + h(t)} \right| \le \epsilon\, e^{-q(t) + h(t)} \quad \text{and} \quad \left| F(t) - e^{-q(t) + if(t) + h(t)} \right| \le \epsilon\, e^{-q(t) + h(t)}$$

for all $t \in U$ provided $\sqrt{n} \ge O\left(\ln^5 n / \epsilon\right)$.

The proof of Parts (3) and (4) now follows from (7.7) and Part (2). $\square$

8. Bounding the integral outside of the special points

We consider the integral representation of Corollary 3.3. Our goal is to show that the integral of $F(t)$ for $t$ outside of the neighborhoods of the special points $(0, \ldots, 0)$ and $(\pm\pi, \ldots, \pm\pi)$ is asymptotically negligible. In this section, we prove the following main result.

8.1 Theorem. Let us fix a number $0 < \delta \le 1/2$ and let $D = (d_1, \ldots, d_n)$ be a $\delta$-tame degree sequence. Let us define subsets $U, W \subset \Pi$ by

$$U = \left\{ (\tau_1, \ldots, \tau_n) : \ |\tau_j| \le \frac{\ln n}{\sqrt{n}} \ \text{ for } j = 1, \ldots, n \right\}$$

and

$$W = \left\{ (\tau_1, \ldots, \tau_n) : \ |\tau_j - \sigma_j \pi| \le \frac{\ln n}{\sqrt{n}} \ \text{ for some } \sigma_j = \pm 1 \text{ and all } j = 1, \ldots, n \right\}.$$

Then for any $\kappa > 0$

$$\int_{\Pi \setminus (U \cup W)} |F(t)|\, dt \le n^{-\kappa} \int_U |F(t)|\, dt$$

provided $n > \gamma(\delta, \kappa)$.

The plan of the proof of Theorem 8.1 is as follows: first, using some combinatorial arguments, we show that for any positive constant $\epsilon > 0$ the integral is asymptotically negligible outside of the regions where $|\tau_j| \le \epsilon$ for all $j$ or where $|\tau_j - \sigma_j \pi| \le \epsilon$ for some $\sigma_j = \pm 1$ and all $j$. Then we note that $|F(t)|$ is log-concave for $t$ in a neighborhood of the origin and use a concentration inequality for log-concave measures.

We introduce the following metric $\rho$.

8.2 Metric $\rho$. Let us define a function $\rho: \mathbb{R} \to [0, \pi]$ as follows:

$$\rho(x) = \min_{k \in \mathbb{Z}} |x - 2\pi k|.$$

In words: $\rho(x)$ is the distance from $x$ to the nearest integer multiple of $2\pi$. Clearly,

(8.2.1) $\rho(-x) = \rho(x)$ and $\rho(x + y) \le \rho(x) + \rho(y)$ for all $x, y \in \mathbb{R}$.

We will use that

(8.2.2) $\rho^2(x) \ge 1 - \cos x \ge \dfrac{1}{5}\, \rho^2(x)$.
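The properties of $\rho$ recorded in (8.2.1) and (8.2.2) are elementary and can be verified numerically; the sketch below (ours, illustrative) checks the range, symmetry, triangle inequality, and both cosine bounds on a grid of points:

```python
import math

def rho(x):
    """Distance from x to the nearest integer multiple of 2*pi."""
    return abs(x - 2.0 * math.pi * round(x / (2.0 * math.pi)))

points = [0.1 + 0.37 * i for i in range(-40, 40)]
for x in points:
    assert 0.0 <= rho(x) <= math.pi + 1e-12                   # range of rho
    assert abs(rho(-x) - rho(x)) < 1e-12                      # symmetry
    assert rho(x) ** 2 >= 1.0 - math.cos(x) - 1e-12           # upper cosine bound
    assert 1.0 - math.cos(x) >= rho(x) ** 2 / 5.0 - 1e-12     # lower cosine bound
    for y in points[:10]:
        assert rho(x + y) <= rho(x) + rho(y) + 1e-12          # triangle inequality
```

The lower cosine bound holds with room to spare: $1 - \cos t \ge (2/\pi^2)\, t^2$ on $[-\pi, \pi]$ and $2/\pi^2 > 1/5$.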

8.3 The absolute value of $F(t)$. Let

$$\alpha_{j,k} = 2\, \zeta_{j,k}\left(1 - \zeta_{j,k}\right).$$

If $D$ is $\delta$-tame, we have

$$\frac{\delta}{2} \le \alpha_{j,k} \le \frac{1}{2} \quad \text{for all } j < k.$$

We have

$$|F(t)| = \prod_{\{j,k\}} \left( 1 - \alpha_{j,k} + \alpha_{j,k} \cos\left(\tau_j + \tau_k\right) \right)^{1/2}.$$

For $1 \le j < k \le n$ let us define a function of $\tau \in \mathbb{R}$,

$$f_{j,k}(\tau) = 1 - \alpha_{j,k} + \alpha_{j,k} \cos \tau, \quad \text{so} \quad |F(t)| = \prod_{\{j,k\}} f^{1/2}_{j,k}\left(\tau_j + \tau_k\right).$$

We note that

(8.3.1) $f_{j,k}(0) = 1$.

It follows by (8.2.2) and (8.3.1) that for $\epsilon > 0$

(8.3.2) $f_{j,k}(x) \le \exp\left\{ -\gamma(\delta, \epsilon) \right\} f_{j,k}(y)$ provided $\rho(x) \ge 2\epsilon$ and $\rho(y) \le \epsilon$,

for some $\gamma(\delta, \epsilon) > 0$. Furthermore,

$$\frac{d^2}{d\tau^2} \ln f_{j,k}(\tau) = -\frac{\alpha_{j,k} \left( \left(1 - \alpha_{j,k}\right) \cos \tau + \alpha_{j,k} \right)}{\left( 1 - \alpha_{j,k} + \alpha_{j,k} \cos \tau \right)^2}.$$

In particular,

$$\frac{d^2}{d\tau^2} \ln f_{j,k}(\tau) \le -\frac{\delta}{4} \quad \text{for } -\frac{\pi}{3} \le \tau \le \frac{\pi}{3}$$

and hence $\ln f_{j,k}$ is strictly concave on the interval $[-\pi/3, \pi/3]$:

(8.3.3) $\ln f_{j,k}(x) + \ln f_{j,k}(y) \le 2 \ln f_{j,k}\left(\dfrac{x + y}{2}\right) - \dfrac{\delta}{16} (x - y)^2$ for all $x, y \in [-\pi/3, \pi/3]$.

In what follows, we fix a particular parameter $\epsilon > 0$. All implied constants in the $O$ and $\Omega$ notation below may depend only on the parameters $\delta$ and $\epsilon$. We say that $n$ is sufficiently large if $n \ge \gamma(\delta, \epsilon)$ for some constant $\gamma(\delta, \epsilon) > 0$.

Our first goal is to show that only the points $t \in \Pi$ for which the inequality $\rho\left(\tau_j + \tau_k\right) \le \epsilon$ holds for an overwhelming majority of pairs $\{j,k\}$ contribute significantly to the integral of $F(t)$ on $\Pi$.

8.4 Lemma. For $t \in \Pi$, $t = (\tau_1, \ldots, \tau_n)$, and $\epsilon > 0$ let us define a set $K(t, \epsilon) \subset \{1, \ldots, n\}$ consisting of the indices $k$ such that

$$\rho\left(\tau_j + \tau_k\right) \le \epsilon \quad \text{for more than } n/2 \text{ distinct indices } j.$$

Let $\overline{K}(t, \epsilon) = \{1, \ldots, n\} \setminus K(t, \epsilon)$. Then

(1) $|F(t)| \le \exp\left\{ -\gamma(\delta)\, \epsilon^2\, n \left| \overline{K}(t, \epsilon) \right| \right\}$ for some $\gamma(\delta) > 0$;

(2) $\rho\left(\tau_{k_1} - \tau_{k_2}\right) \le 2\epsilon$ for all $k_1, k_2 \in K(t, \epsilon)$;

(3) Suppose that $|K(t, \epsilon)| > n/2$. Then $\rho\left(\tau_{k_1} + \tau_{k_2}\right) \le 3\epsilon$ for all $k_1, k_2 \in K(t, \epsilon)$.

Proof. For every $k \in \overline{K}(t, \epsilon)$ there are at least $(n-2)/2$ distinct $j \ne k$ for which

(8.4.1) $\rho\left(\tau_j + \tau_k\right) > \epsilon$

and so by (8.2.2) we have

$$\cos\left(\tau_j + \tau_k\right) \le 1 - \frac{\epsilon^2}{5}.$$

Since there are at least $\left| \overline{K}(t, \epsilon) \right| (n-2)/4$ pairs $\{j,k\}$ for which (8.4.1) holds, the proof of Part (1) follows from the product formula of Section 8.3 for $|F(t)|$.

For any $k_1, k_2 \in K(t, \epsilon)$ there is $j \in \{1, \ldots, n\}$ such that

$$\rho\left(\tau_j + \tau_{k_1}\right),\ \rho\left(\tau_j + \tau_{k_2}\right) \le \epsilon.$$

Therefore,

$$\rho\left(\tau_{k_1} - \tau_{k_2}\right) = \rho\left( \left(\tau_j + \tau_{k_1}\right) - \left(\tau_j + \tau_{k_2}\right) \right) \le \rho\left(\tau_j + \tau_{k_1}\right) + \rho\left(\tau_j + \tau_{k_2}\right) \le 2\epsilon$$

and Part (2) follows.

Let us choose a $k_1 \in K(t, \epsilon)$. Since $|K(t, \epsilon)| > n/2$, there is a $j \in K(t, \epsilon)$ such that $\rho\left(\tau_j + \tau_{k_1}\right) \le \epsilon$. By Part (2), for any $k_2 \in K(t, \epsilon)$ we have

$$\rho\left(\tau_{k_1} + \tau_{k_2}\right) = \rho\left( \left(\tau_j + \tau_{k_1}\right) - \left(\tau_j - \tau_{k_2}\right) \right) \le \rho\left(\tau_j + \tau_{k_1}\right) + \rho\left(\tau_j - \tau_{k_2}\right) \le 3\epsilon$$

and Part (3) follows. $\square$
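As a sanity check (ours, not part of the paper), the definition of $K(t, \epsilon)$ and the conclusions of Parts (2) and (3) can be tested on random points and on points clustered near the origin:

```python
import math
import random

def rho(x):
    """Distance from x to the nearest integer multiple of 2*pi."""
    return abs(x - 2.0 * math.pi * round(x / (2.0 * math.pi)))

def big_k(tau, eps):
    """Indices k with rho(tau_j + tau_k) <= eps for more than n/2 indices j."""
    n = len(tau)
    return [k for k in range(n)
            if sum(1 for j in range(n)
                   if j != k and rho(tau[j] + tau[k]) <= eps) > n / 2]

random.seed(1)
eps, n = 0.2, 30
samples = [[random.uniform(-math.pi, math.pi) for _ in range(n)] for _ in range(20)]
samples.append([random.uniform(-0.05, 0.05) for _ in range(n)])  # clustered near 0
for tau in samples:
    K = big_k(tau, eps)
    for k1 in K:
        for k2 in K:
            assert rho(tau[k1] - tau[k2]) <= 2 * eps + 1e-9      # Part (2)
            if len(K) > n / 2:
                assert rho(tau[k1] + tau[k2]) <= 3 * eps + 1e-9  # Part (3)
```

For the clustered sample every index lands in $K(t, \epsilon)$ and both conclusions are exercised non-vacuously.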

8.5 Corollary. For an $\epsilon > 0$ let us define a set $V(\epsilon) \subset \Pi$ consisting of the points $t \in \Pi$ such that $\left| \overline{K}(t, \epsilon) \right| \le \ln^2 n$, where $K(t, \epsilon)$ is the set defined in Lemma 8.4. Then

$$\int_{\Pi \setminus V(\epsilon)} |F(t)|\, dt \le n^{-n} \int_{\Pi} |F(t)|\, dt$$

provided $n \ge \gamma(\delta, \epsilon)$ for some constant $\gamma(\delta, \epsilon) > 0$.

Proof. By Part (3) of Theorem 7.1 we have

$$\int_{\Pi} |F(t)|\, dt \ge \int_U |F(t)|\, dt \ge \left( \frac{\Omega(1)}{n} \right)^{n/2}.$$

The proof now follows by Part (1) of Lemma 8.4, since $|F(t)| \le \exp\left\{ -\gamma(\delta) \epsilon^2 n \ln^2 n \right\}$ for all $t \in \Pi \setminus V(\epsilon)$ while the volume of $\Pi$ is $(2\pi)^n$. $\square$

Next, we show that only the points $t \in \Pi$ such that $\rho\left(\tau_j + \tau_k\right) \le \epsilon$ for all $1 \le j < k \le n$ contribute substantially to the integral of $F(t)$ on $\Pi$.

8.6 Lemma. For $\epsilon > 0$ let us define a set $X(\epsilon) \subset \Pi$,

$$X(\epsilon) = \left\{ t \in \Pi,\ t = (\tau_1, \ldots, \tau_n) : \ \rho\left(\tau_j + \tau_k\right) \le \epsilon \ \text{ for all } j, k \right\}.$$

Then

$$\int_{\Pi \setminus X(\epsilon)} |F(t)|\, dt \le \exp\left\{ -\gamma_1(\delta, \epsilon)\, n \right\} \int_{\Pi} |F(t)|\, dt$$

for all $n \ge \gamma_2(\delta, \epsilon)$, for some constants $\gamma_1(\delta, \epsilon), \gamma_2(\delta, \epsilon) > 0$.

Proof. Let us consider the set $V(\epsilon/60) \subset \Pi$ and $n$ large enough so that the conclusion of Corollary 8.5 holds, that is, the integral of $F(t)$ over $\Pi \setminus V(\epsilon/60)$ is asymptotically negligible. For a set $A \subset \{1, \ldots, n\}$ such that $|\{1, \ldots, n\} \setminus A| \le \ln^2 n$, let us define a set $P_A \subset \Pi$ (we call it a piece) by

$$P_A = \left\{ t \in \Pi : \ \rho\left(\tau_j - \tau_k\right) \le \frac{\epsilon}{30} \ \text{ and } \ \rho\left(\tau_j + \tau_k\right) \le \frac{\epsilon}{20} \ \text{ for all } j, k \in A \right\}.$$

If $n$ is large enough, by Lemma 8.4 for every $t \in V(\epsilon/60)$ we can choose $A = K(t, \epsilon/60)$, so we have

(8.6.1)
$$V(\epsilon/60) \subset \bigcup_{A:\ |\{1, \ldots, n\} \setminus A| \le \ln^2 n} P_A.$$

Our next goal is to show that the integral of $|F(t)|$ over $P_A \setminus X(\epsilon)$ is negligible compared to the integral of $|F(t)|$ over $P_A$. Let us choose a point $t \in P_A \setminus X(\epsilon)$. Thus we have

$$\rho\left(\tau_{i_0} + \tau_{j_0}\right) > \epsilon \quad \text{for some } i_0, j_0.$$

Let us choose any $k_0 \in A$. Then

$$\rho\left(\tau_{i_0} + \tau_{j_0}\right) = \rho\left( \left(\tau_{i_0} + \tau_{k_0}\right) + \left(\tau_{j_0} - \tau_{k_0}\right) \right) \le \rho\left(\tau_{i_0} + \tau_{k_0}\right) + \rho\left(\tau_{j_0} - \tau_{k_0}\right).$$

Hence we have either $\rho\left(\tau_{i_0} + \tau_{k_0}\right) > \epsilon/2$ or $\rho\left(\tau_{j_0} - \tau_{k_0}\right) > \epsilon/2$.

In the first case, for every $k \in A$ we have

$$\rho\left(\tau_{i_0} + \tau_{k_0}\right) = \rho\left( \left(\tau_{i_0} + \tau_k\right) + \left(\tau_{k_0} - \tau_k\right) \right) \le \rho\left(\tau_{i_0} + \tau_k\right) + \rho\left(\tau_{k_0} - \tau_k\right) \le \rho\left(\tau_{i_0} + \tau_k\right) + \frac{\epsilon}{30},$$

from which

$$\rho\left(\tau_{i_0} + \tau_k\right) > \frac{\epsilon}{2} - \frac{\epsilon}{30} = \frac{7\epsilon}{15}.$$

In the second case, for every $k \in A$, we have

$$\rho\left(\tau_{j_0} - \tau_{k_0}\right) = \rho\left( \left(\tau_{j_0} + \tau_k\right) - \left(\tau_{k_0} + \tau_k\right) \right) \le \rho\left(\tau_{j_0} + \tau_k\right) + \rho\left(\tau_{k_0} + \tau_k\right) \le \rho\left(\tau_{j_0} + \tau_k\right) + \frac{\epsilon}{20},$$

from which

$$\rho\left(\tau_{j_0} + \tau_k\right) > \frac{\epsilon}{2} - \frac{\epsilon}{20} = \frac{9\epsilon}{20}.$$

In either case, for any $t \in P_A \setminus X(\epsilon)$, $t = (\tau_1, \ldots, \tau_n)$, there exists an index $i \notin A$ such that

$$\rho\left(\tau_i + \tau_k\right) > 0.45\, \epsilon \quad \text{for all } k \in A.$$

For $i \notin A$, we define

$$Q_{A,i} = \left\{ t \in P_A : \ \rho\left(\tau_i + \tau_k\right) > 0.45\, \epsilon \ \text{ for all } k \in A \right\}.$$

Hence

$$P_A \setminus X(\epsilon) \subset \bigcup_{i \notin A} Q_{A,i}.$$

Given a point $t \in P_A$, we obtain another point in $P_A$ if we arbitrarily change the coordinate $\tau_i \in [-\pi, \pi]$ for $i \notin A$. We obtain a fiber $E \subset P_A$ if we fix all other coordinates and let $\tau_i \in [-\pi, \pi]$ vary. Geometrically, each fiber $E$ is an interval of length $2\pi$. Let us construct a set $I \subset E$ as follows. We choose an arbitrary

$k_0 \in A$ and let $\tau_i$ vary in such a way that $\rho\left(\tau_{k_0} + \tau_i\right) \le 0.05\, \epsilon$. Geometrically, $I$ is an interval of length $0.1\, \epsilon$ or a union of two non-overlapping intervals of total length $0.1\, \epsilon$. Moreover,

$$\rho\left(\tau_k + \tau_i\right) \le \rho\left(\tau_{k_0} + \tau_i\right) + \rho\left(\tau_k - \tau_{k_0}\right) \le 0.05\, \epsilon + 0.05\, \epsilon = 0.1\, \epsilon \quad \text{for all } k \in A \text{ and all } \tau_i \in I.$$

Using (8.3.2) and (8.3.3), we conclude that for any $t \in Q_{A,i} \cap E$ and for any $s \in E$ with the $i$-th coordinate in $I$ we have

$$|F(t)| \le \exp\left\{ -\Omega(n) \right\} |F(s)|$$

provided $n$ is large enough. Therefore,

$$\int_{Q_{A,i} \cap E} |F(t)|\, d\tau_i \le \exp\left\{ -\Omega(n) \right\} \int_E |F(t)|\, d\tau_i$$

for all sufficiently large $n$. Integrating over all fibers $E$, we establish that

$$\int_{Q_{A,i}} |F(t)|\, dt \le \exp\left\{ -\Omega(n) \right\} \int_{P_A} |F(t)|\, dt$$

for all sufficiently large $n$. Since the number of different subsets $A \subset \{1, \ldots, n\}$ with $|\{1, \ldots, n\} \setminus A| \le \ln^2 n$ in (8.6.1) does not exceed $\exp\left\{ O\left(\ln^3 n\right) \right\}$, the proof follows. $\square$

Next, we prove that only the points in the neighborhoods of the origin or of the corners of $\Pi$ contribute significantly to the integral of $F(t)$ over $\Pi$.

8.7 Lemma. For $0 < \epsilon < 1$, let $X(\epsilon)$ be the set defined in Lemma 8.6. Let us define $Y(\epsilon), Z(\epsilon) \subset \Pi$ by

$$Y(\epsilon) = \left\{ t \in \Pi : \ t = (\tau_1, \ldots, \tau_n), \ |\tau_i| \le \epsilon/2 \ \text{ for } i = 1, \ldots, n \right\}$$

and

$$Z(\epsilon) = \left\{ t \in \Pi : \ t = (\tau_1, \ldots, \tau_n), \ |\tau_i - \sigma_i \pi| \le \epsilon/2 \ \text{ for some } \sigma_i = \pm 1 \text{ and all } i = 1, \ldots, n \right\}.$$

Then

$$X(\epsilon) = Y(\epsilon) \cup Z(\epsilon) \quad \text{and} \quad Y(\epsilon) \cap Z(\epsilon) = \emptyset.$$

Moreover,

$$\int_{Y(\epsilon)} F(t)\, dt = \int_{Z(\epsilon)} F(t)\, dt.$$

Proof. Let us pick a point $t \in X(\epsilon)$. Then for each $k$ we have $\rho\left(2\tau_k\right) \le \epsilon$ and hence either $|\tau_k| \le \epsilon/2$ or $|\tau_k - \pi| \le \epsilon/2$ or $|\tau_k + \pi| \le \epsilon/2$. Since $\rho\left(\tau_k + \tau_j\right) \le \epsilon$ for all $k, j$, we conclude that if $|\tau_k| \le \epsilon/2$ for some $k$ then $|\tau_j| \le \epsilon/2$ for all $j$. Hence $X(\epsilon) \subset Y(\epsilon) \cup Z(\epsilon)$. The inclusion $Y(\epsilon) \cup Z(\epsilon) \subset X(\epsilon)$ is obvious. Since $\epsilon < 1$, we have $Y(\epsilon) \cap Z(\epsilon) = \emptyset$.

The set $Z(\epsilon)$ is a union of $2^n$ pairwise disjoint corners, where each corner is determined by a choice of the interval $[-\pi, -\pi + \epsilon/2]$ or $[\pi - \epsilon/2, \pi]$ for each coordinate $\tau_i$. The transformation

$$\tau_k \mapsto \begin{cases} \tau_k + \pi & \text{if } \tau_k \in [-\pi, -\pi + \epsilon/2] \\ \tau_k - \pi & \text{if } \tau_k \in [\pi - \epsilon/2, \pi] \end{cases}$$

is a volume-preserving transformation which maps $Z(\epsilon)$ onto $Y(\epsilon)$ and does not change the value of $F(t)$. $\square$

Finally, we will use that $|F(t)|$ is strictly log-concave on the set $Y$. For a Euclidean space $V$ with norm $\|\cdot\|$, a point $x \in V$ and a closed set $A \subset V$ we define the distance

$$\operatorname{dist}(x, A) = \min_{y \in A} \|x - y\|.$$

We will need the following concentration inequality for strictly log-concave measures.

8.8 Theorem. Let $V$ be a Euclidean space with norm $\|\cdot\|$, let $B \subset V$ be a convex body and let us consider a probability measure supported on $B$ with density $e^{-u}$, where $u: B \to \mathbb{R}$ is a function satisfying

$$u(x) + u(y) - 2u\left(\frac{x + y}{2}\right) \ge c\, \|x - y\|^2 \quad \text{for all } x, y \in B$$

and some constant $c > 0$. Let $A \subset B$ be a closed subset such that $\mathbf{P}(A) \ge 1/2$. Then, for $r \ge 0$, we have

$$\mathbf{P}\left\{ x \in B : \ \operatorname{dist}(x, A) \ge r \right\} \le 2 e^{-c r^2}.$$

Proof. See Section 2.2 of [Le01] and Theorem 8.1 and its proof in [Ba97]. $\square$

8.9 Lemma. Let $Y \subset \Pi$ be the set defined by

$$Y = \left\{ t \in \Pi,\ t = (\tau_1, \ldots, \tau_n) : \ |\tau_i| \le \frac{1}{2} \ \text{ for } i = 1, \ldots, n \right\}.$$

Let $U \subset \Pi$ be the set

$$U = \left\{ t \in \Pi,\ t = (\tau_1, \ldots, \tau_n) : \ |\tau_i| \le \frac{\ln n}{\sqrt{n}} \ \text{ for } i = 1, \ldots, n \right\}.$$

Then for any $\kappa > 0$ we have

$$\int_{Y \setminus U} |F(t)|\, dt \le n^{-\kappa} \int_Y |F(t)|\, dt,$$

provided $n \ge \gamma(\delta, \kappa)$ is big enough.

Proof. Let us consider the probability measure on $Y$ with density proportional to $|F(t)|$. Let us consider the map $M: \mathbb{R}^n \to \mathbb{R}^{\binom{n}{2}}$, where the coordinates of $\mathbb{R}^{\binom{n}{2}}$ are indexed by unordered pairs $\{j,k\}$, defined by

$$M_{j,k}\left(\tau_1, \ldots, \tau_n\right) = \tau_j + \tau_k.$$

By Lemma 4.3,

$$\|M(t)\|^2 \ge (n - 2)\, \|t\|^2 \quad \text{for all } t \in \mathbb{R}^n.$$

Since, from Section 8.3,

$$\ln |F(t)| = \frac{1}{2} \sum_{\{j,k\}} \ln f_{j,k}\left(\tau_j + \tau_k\right),$$

it follows by (8.3.3) that for any constant $a$ and $u(t) = -\ln |F(t)| + a$, we have

$$u(t_1) + u(t_2) - 2u\left(\frac{t_1 + t_2}{2}\right) \ge \gamma(\delta)\, n\, \|t_1 - t_2\|^2 \quad \text{for all } t_1, t_2 \in Y$$

and some constant $\gamma(\delta) > 0$. We choose $a$ so that $e^{-u}$ is a probability density on $Y$. We apply Theorem 8.8 with $c = \gamma(\delta)\, n$.

For $k = 1, \ldots, n$, let $A_k^- \subset Y$ be the set of points with $\tau_k \le 0$ and let $A_k^+ \subset Y$ be the set of points with $\tau_k \ge 0$. Since both $Y$ and the probability measure are invariant under the symmetry $t \mapsto -t$, we have

$$\mathbf{P}\left(A_k^-\right) = \mathbf{P}\left(A_k^+\right) \ge \frac{1}{2} \quad \text{for } k = 1, \ldots, n.$$

Therefore, by Theorem 8.8, all but an $n^{-\kappa}$ fraction of all points in $Y$ lie within a distance of $\ln n / \sqrt{n}$ from each of the sets $A_k^-$ and $A_k^+$, provided $n$ is large enough.

More information

LINEAR ALGEBRA W W L CHEN

LINEAR ALGEBRA W W L CHEN LINEAR ALGEBRA W W L CHEN c W W L Chen, 1997, 2008 This chapter is available free to all individuals, on understanding that it is not to be used for financial gain, and may be downloaded and/or photocopied,

More information

Undergraduate Notes in Mathematics. Arkansas Tech University Department of Mathematics

Undergraduate Notes in Mathematics. Arkansas Tech University Department of Mathematics Undergraduate Notes in Mathematics Arkansas Tech University Department of Mathematics An Introductory Single Variable Real Analysis: A Learning Approach through Problem Solving Marcel B. Finan c All Rights

More information

COMBINATORIAL PROPERTIES OF THE HIGMAN-SIMS GRAPH. 1. Introduction

COMBINATORIAL PROPERTIES OF THE HIGMAN-SIMS GRAPH. 1. Introduction COMBINATORIAL PROPERTIES OF THE HIGMAN-SIMS GRAPH ZACHARY ABEL 1. Introduction In this survey we discuss properties of the Higman-Sims graph, which has 100 vertices, 1100 edges, and is 22 regular. In fact

More information

3. INNER PRODUCT SPACES

3. INNER PRODUCT SPACES . INNER PRODUCT SPACES.. Definition So far we have studied abstract vector spaces. These are a generalisation of the geometric spaces R and R. But these have more structure than just that of a vector space.

More information

24. The Branch and Bound Method

24. The Branch and Bound Method 24. The Branch and Bound Method It has serious practical consequences if it is known that a combinatorial problem is NP-complete. Then one can conclude according to the present state of science that no

More information

1 Norms and Vector Spaces

1 Norms and Vector Spaces 008.10.07.01 1 Norms and Vector Spaces Suppose we have a complex vector space V. A norm is a function f : V R which satisfies (i) f(x) 0 for all x V (ii) f(x + y) f(x) + f(y) for all x,y V (iii) f(λx)

More information

Lecture 4: BK inequality 27th August and 6th September, 2007

Lecture 4: BK inequality 27th August and 6th September, 2007 CSL866: Percolation and Random Graphs IIT Delhi Amitabha Bagchi Scribe: Arindam Pal Lecture 4: BK inequality 27th August and 6th September, 2007 4. Preliminaries The FKG inequality allows us to lower bound

More information

MATH 304 Linear Algebra Lecture 9: Subspaces of vector spaces (continued). Span. Spanning set.

MATH 304 Linear Algebra Lecture 9: Subspaces of vector spaces (continued). Span. Spanning set. MATH 304 Linear Algebra Lecture 9: Subspaces of vector spaces (continued). Span. Spanning set. Vector space A vector space is a set V equipped with two operations, addition V V (x,y) x + y V and scalar

More information

CHAPTER IV - BROWNIAN MOTION

CHAPTER IV - BROWNIAN MOTION CHAPTER IV - BROWNIAN MOTION JOSEPH G. CONLON 1. Construction of Brownian Motion There are two ways in which the idea of a Markov chain on a discrete state space can be generalized: (1) The discrete time

More information

Graphs without proper subgraphs of minimum degree 3 and short cycles

Graphs without proper subgraphs of minimum degree 3 and short cycles Graphs without proper subgraphs of minimum degree 3 and short cycles Lothar Narins, Alexey Pokrovskiy, Tibor Szabó Department of Mathematics, Freie Universität, Berlin, Germany. August 22, 2014 Abstract

More information

4. How many integers between 2004 and 4002 are perfect squares?

4. How many integers between 2004 and 4002 are perfect squares? 5 is 0% of what number? What is the value of + 3 4 + 99 00? (alternating signs) 3 A frog is at the bottom of a well 0 feet deep It climbs up 3 feet every day, but slides back feet each night If it started

More information

Split Nonthreshold Laplacian Integral Graphs

Split Nonthreshold Laplacian Integral Graphs Split Nonthreshold Laplacian Integral Graphs Stephen Kirkland University of Regina, Canada kirkland@math.uregina.ca Maria Aguieiras Alvarez de Freitas Federal University of Rio de Janeiro, Brazil maguieiras@im.ufrj.br

More information

Degree Hypergroupoids Associated with Hypergraphs

Degree Hypergroupoids Associated with Hypergraphs Filomat 8:1 (014), 119 19 DOI 10.98/FIL1401119F Published by Faculty of Sciences and Mathematics, University of Niš, Serbia Available at: http://www.pmf.ni.ac.rs/filomat Degree Hypergroupoids Associated

More information

Lecture 13: Martingales

Lecture 13: Martingales Lecture 13: Martingales 1. Definition of a Martingale 1.1 Filtrations 1.2 Definition of a martingale and its basic properties 1.3 Sums of independent random variables and related models 1.4 Products of

More information

Lecture 3: Linear methods for classification

Lecture 3: Linear methods for classification Lecture 3: Linear methods for classification Rafael A. Irizarry and Hector Corrada Bravo February, 2010 Today we describe four specific algorithms useful for classification problems: linear regression,

More information

Duality of linear conic problems

Duality of linear conic problems Duality of linear conic problems Alexander Shapiro and Arkadi Nemirovski Abstract It is well known that the optimal values of a linear programming problem and its dual are equal to each other if at least

More information

THREE DIMENSIONAL GEOMETRY

THREE DIMENSIONAL GEOMETRY Chapter 8 THREE DIMENSIONAL GEOMETRY 8.1 Introduction In this chapter we present a vector algebra approach to three dimensional geometry. The aim is to present standard properties of lines and planes,

More information

Collinear Points in Permutations

Collinear Points in Permutations Collinear Points in Permutations Joshua N. Cooper Courant Institute of Mathematics New York University, New York, NY József Solymosi Department of Mathematics University of British Columbia, Vancouver,

More information

Chapter 17. Orthogonal Matrices and Symmetries of Space

Chapter 17. Orthogonal Matrices and Symmetries of Space Chapter 17. Orthogonal Matrices and Symmetries of Space Take a random matrix, say 1 3 A = 4 5 6, 7 8 9 and compare the lengths of e 1 and Ae 1. The vector e 1 has length 1, while Ae 1 = (1, 4, 7) has length

More information

Recall that two vectors in are perpendicular or orthogonal provided that their dot

Recall that two vectors in are perpendicular or orthogonal provided that their dot Orthogonal Complements and Projections Recall that two vectors in are perpendicular or orthogonal provided that their dot product vanishes That is, if and only if Example 1 The vectors in are orthogonal

More information

1 The Brownian bridge construction

1 The Brownian bridge construction The Brownian bridge construction The Brownian bridge construction is a way to build a Brownian motion path by successively adding finer scale detail. This construction leads to a relatively easy proof

More information

Separation Properties for Locally Convex Cones

Separation Properties for Locally Convex Cones Journal of Convex Analysis Volume 9 (2002), No. 1, 301 307 Separation Properties for Locally Convex Cones Walter Roth Department of Mathematics, Universiti Brunei Darussalam, Gadong BE1410, Brunei Darussalam

More information

Notes on Symmetric Matrices

Notes on Symmetric Matrices CPSC 536N: Randomized Algorithms 2011-12 Term 2 Notes on Symmetric Matrices Prof. Nick Harvey University of British Columbia 1 Symmetric Matrices We review some basic results concerning symmetric matrices.

More information

DETERMINANTS IN THE KRONECKER PRODUCT OF MATRICES: THE INCIDENCE MATRIX OF A COMPLETE GRAPH

DETERMINANTS IN THE KRONECKER PRODUCT OF MATRICES: THE INCIDENCE MATRIX OF A COMPLETE GRAPH DETERMINANTS IN THE KRONECKER PRODUCT OF MATRICES: THE INCIDENCE MATRIX OF A COMPLETE GRAPH CHRISTOPHER RH HANUSA AND THOMAS ZASLAVSKY Abstract We investigate the least common multiple of all subdeterminants,

More information

NOTES ON LINEAR TRANSFORMATIONS

NOTES ON LINEAR TRANSFORMATIONS NOTES ON LINEAR TRANSFORMATIONS Definition 1. Let V and W be vector spaces. A function T : V W is a linear transformation from V to W if the following two properties hold. i T v + v = T v + T v for all

More information

Zachary Monaco Georgia College Olympic Coloring: Go For The Gold

Zachary Monaco Georgia College Olympic Coloring: Go For The Gold Zachary Monaco Georgia College Olympic Coloring: Go For The Gold Coloring the vertices or edges of a graph leads to a variety of interesting applications in graph theory These applications include various

More information

Solving polynomial least squares problems via semidefinite programming relaxations

Solving polynomial least squares problems via semidefinite programming relaxations Solving polynomial least squares problems via semidefinite programming relaxations Sunyoung Kim and Masakazu Kojima August 2007, revised in November, 2007 Abstract. A polynomial optimization problem whose

More information

Orthogonal Diagonalization of Symmetric Matrices

Orthogonal Diagonalization of Symmetric Matrices MATH10212 Linear Algebra Brief lecture notes 57 Gram Schmidt Process enables us to find an orthogonal basis of a subspace. Let u 1,..., u k be a basis of a subspace V of R n. We begin the process of finding

More information

Metric Spaces. Chapter 7. 7.1. Metrics

Metric Spaces. Chapter 7. 7.1. Metrics Chapter 7 Metric Spaces A metric space is a set X that has a notion of the distance d(x, y) between every pair of points x, y X. The purpose of this chapter is to introduce metric spaces and give some

More information

October 3rd, 2012. Linear Algebra & Properties of the Covariance Matrix

October 3rd, 2012. Linear Algebra & Properties of the Covariance Matrix Linear Algebra & Properties of the Covariance Matrix October 3rd, 2012 Estimation of r and C Let rn 1, rn, t..., rn T be the historical return rates on the n th asset. rn 1 rṇ 2 r n =. r T n n = 1, 2,...,

More information

VERTICES OF GIVEN DEGREE IN SERIES-PARALLEL GRAPHS

VERTICES OF GIVEN DEGREE IN SERIES-PARALLEL GRAPHS VERTICES OF GIVEN DEGREE IN SERIES-PARALLEL GRAPHS MICHAEL DRMOTA, OMER GIMENEZ, AND MARC NOY Abstract. We show that the number of vertices of a given degree k in several kinds of series-parallel labelled

More information

MATH 304 Linear Algebra Lecture 20: Inner product spaces. Orthogonal sets.

MATH 304 Linear Algebra Lecture 20: Inner product spaces. Orthogonal sets. MATH 304 Linear Algebra Lecture 20: Inner product spaces. Orthogonal sets. Norm The notion of norm generalizes the notion of length of a vector in R n. Definition. Let V be a vector space. A function α

More information

POLYNOMIAL HISTOPOLATION, SUPERCONVERGENT DEGREES OF FREEDOM, AND PSEUDOSPECTRAL DISCRETE HODGE OPERATORS

POLYNOMIAL HISTOPOLATION, SUPERCONVERGENT DEGREES OF FREEDOM, AND PSEUDOSPECTRAL DISCRETE HODGE OPERATORS POLYNOMIAL HISTOPOLATION, SUPERCONVERGENT DEGREES OF FREEDOM, AND PSEUDOSPECTRAL DISCRETE HODGE OPERATORS N. ROBIDOUX Abstract. We show that, given a histogram with n bins possibly non-contiguous or consisting

More information

An Introduction to Machine Learning

An Introduction to Machine Learning An Introduction to Machine Learning L5: Novelty Detection and Regression Alexander J. Smola Statistical Machine Learning Program Canberra, ACT 0200 Australia Alex.Smola@nicta.com.au Tata Institute, Pune,

More information

Gambling Systems and Multiplication-Invariant Measures

Gambling Systems and Multiplication-Invariant Measures Gambling Systems and Multiplication-Invariant Measures by Jeffrey S. Rosenthal* and Peter O. Schwartz** (May 28, 997.. Introduction. This short paper describes a surprising connection between two previously

More information

15.062 Data Mining: Algorithms and Applications Matrix Math Review

15.062 Data Mining: Algorithms and Applications Matrix Math Review .6 Data Mining: Algorithms and Applications Matrix Math Review The purpose of this document is to give a brief review of selected linear algebra concepts that will be useful for the course and to develop

More information

MATH 423 Linear Algebra II Lecture 38: Generalized eigenvectors. Jordan canonical form (continued).

MATH 423 Linear Algebra II Lecture 38: Generalized eigenvectors. Jordan canonical form (continued). MATH 423 Linear Algebra II Lecture 38: Generalized eigenvectors Jordan canonical form (continued) Jordan canonical form A Jordan block is a square matrix of the form λ 1 0 0 0 0 λ 1 0 0 0 0 λ 0 0 J = 0

More information

Principle of Data Reduction

Principle of Data Reduction Chapter 6 Principle of Data Reduction 6.1 Introduction An experimenter uses the information in a sample X 1,..., X n to make inferences about an unknown parameter θ. If the sample size n is large, then

More information

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 2. x n. a 11 a 12 a 1n b 1 a 21 a 22 a 2n b 2 a 31 a 32 a 3n b 3. a m1 a m2 a mn b m

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 2. x n. a 11 a 12 a 1n b 1 a 21 a 22 a 2n b 2 a 31 a 32 a 3n b 3. a m1 a m2 a mn b m MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS 1. SYSTEMS OF EQUATIONS AND MATRICES 1.1. Representation of a linear system. The general system of m equations in n unknowns can be written a 11 x 1 + a 12 x 2 +

More information

Notes on Factoring. MA 206 Kurt Bryan

Notes on Factoring. MA 206 Kurt Bryan The General Approach Notes on Factoring MA 26 Kurt Bryan Suppose I hand you n, a 2 digit integer and tell you that n is composite, with smallest prime factor around 5 digits. Finding a nontrivial factor

More information

a 11 x 1 + a 12 x 2 + + a 1n x n = b 1 a 21 x 1 + a 22 x 2 + + a 2n x n = b 2.

a 11 x 1 + a 12 x 2 + + a 1n x n = b 1 a 21 x 1 + a 22 x 2 + + a 2n x n = b 2. Chapter 1 LINEAR EQUATIONS 1.1 Introduction to linear equations A linear equation in n unknowns x 1, x,, x n is an equation of the form a 1 x 1 + a x + + a n x n = b, where a 1, a,..., a n, b are given

More information

On the representability of the bi-uniform matroid

On the representability of the bi-uniform matroid On the representability of the bi-uniform matroid Simeon Ball, Carles Padró, Zsuzsa Weiner and Chaoping Xing August 3, 2012 Abstract Every bi-uniform matroid is representable over all sufficiently large

More information

Minimally Infeasible Set Partitioning Problems with Balanced Constraints

Minimally Infeasible Set Partitioning Problems with Balanced Constraints Minimally Infeasible Set Partitioning Problems with alanced Constraints Michele Conforti, Marco Di Summa, Giacomo Zambelli January, 2005 Revised February, 2006 Abstract We study properties of systems of

More information

The Advantages and Disadvantages of Online Linear Optimization

The Advantages and Disadvantages of Online Linear Optimization LINEAR PROGRAMMING WITH ONLINE LEARNING TATSIANA LEVINA, YURI LEVIN, JEFF MCGILL, AND MIKHAIL NEDIAK SCHOOL OF BUSINESS, QUEEN S UNIVERSITY, 143 UNION ST., KINGSTON, ON, K7L 3N6, CANADA E-MAIL:{TLEVIN,YLEVIN,JMCGILL,MNEDIAK}@BUSINESS.QUEENSU.CA

More information

1 Introduction to Matrices

1 Introduction to Matrices 1 Introduction to Matrices In this section, important definitions and results from matrix algebra that are useful in regression analysis are introduced. While all statements below regarding the columns

More information

Large induced subgraphs with all degrees odd

Large induced subgraphs with all degrees odd Large induced subgraphs with all degrees odd A.D. Scott Department of Pure Mathematics and Mathematical Statistics, University of Cambridge, England Abstract: We prove that every connected graph of order

More information

1.3. DOT PRODUCT 19. 6. If θ is the angle (between 0 and π) between two non-zero vectors u and v,

1.3. DOT PRODUCT 19. 6. If θ is the angle (between 0 and π) between two non-zero vectors u and v, 1.3. DOT PRODUCT 19 1.3 Dot Product 1.3.1 Definitions and Properties The dot product is the first way to multiply two vectors. The definition we will give below may appear arbitrary. But it is not. It

More information

Numerical Analysis Lecture Notes

Numerical Analysis Lecture Notes Numerical Analysis Lecture Notes Peter J. Olver 5. Inner Products and Norms The norm of a vector is a measure of its size. Besides the familiar Euclidean norm based on the dot product, there are a number

More information

Let H and J be as in the above lemma. The result of the lemma shows that the integral

Let H and J be as in the above lemma. The result of the lemma shows that the integral Let and be as in the above lemma. The result of the lemma shows that the integral ( f(x, y)dy) dx is well defined; we denote it by f(x, y)dydx. By symmetry, also the integral ( f(x, y)dx) dy is well defined;

More information

I. GROUPS: BASIC DEFINITIONS AND EXAMPLES

I. GROUPS: BASIC DEFINITIONS AND EXAMPLES I GROUPS: BASIC DEFINITIONS AND EXAMPLES Definition 1: An operation on a set G is a function : G G G Definition 2: A group is a set G which is equipped with an operation and a special element e G, called

More information

Lecture 15 An Arithmetic Circuit Lowerbound and Flows in Graphs

Lecture 15 An Arithmetic Circuit Lowerbound and Flows in Graphs CSE599s: Extremal Combinatorics November 21, 2011 Lecture 15 An Arithmetic Circuit Lowerbound and Flows in Graphs Lecturer: Anup Rao 1 An Arithmetic Circuit Lower Bound An arithmetic circuit is just like

More information

Classification of Cartan matrices

Classification of Cartan matrices Chapter 7 Classification of Cartan matrices In this chapter we describe a classification of generalised Cartan matrices This classification can be compared as the rough classification of varieties in terms

More information