Coupling on-line and off-line analyses for random power law graphs


Fan Chung (University of California, San Diego, fan@ucsd.edu; research supported in part by NSF Grants DMS and ITR)
Linyuan Lu (University of California, San Diego, llu@math.ucsd.edu)

Abstract. We develop a coupling technique for analyzing on-line models by using off-line models. This method is especially effective for a growth-deletion model which generalizes and includes the preferential attachment model for generating large complex networks that simulate numerous realistic networks. By coupling the on-line model with the off-line model for random power law graphs, we derive strong bounds for a number of graph properties including diameter, average distances, connected components and spectral bounds. For example, we prove that a power law graph generated by the growth-deletion model almost surely has diameter O(log n) and average distance O(log log n).

1 Introduction

In the past few years, it has been observed that a variety of information networks, including Internet graphs, social networks and biological networks among others [3, 4, 5, 8, 9], have the so-called power law degree distribution. A graph is called a power law graph if the fraction of vertices with degree k is proportional to k^{−β} for some constant β > 0. There are basically two different models for random power law graphs. The first model is an on-line model that mimics the growth of a network. Starting from a vertex (or some small initial graph), a new node and/or new edge is added at each unit of time following the so-called preferential attachment scheme [3, 4, 9]. The endpoints of a new edge are chosen with probability proportional to the (current) degrees. By using a combination of adding new nodes and new edges with given respective probabilities, one can generate large power law graphs with exponents β greater than 2 (see [3, 7] for rigorous proofs). Since realistic networks encounter both growth and deletion of vertices and

edges, here we consider a growth-deletion on-line model that generalizes and includes the preferential attachment model. Detailed definitions will be given in Section 3. The second model is an off-line model of random graphs with given expected degrees. For a given sequence w of weights w_v, a random graph in G(w) is formed by choosing the edge between u and v with probability proportional to the product w_u w_v. The Erdős-Rényi model G(n, p) can be viewed as a special case of G(w) with all w_i's equal. Because of the independence in the choices of edges, the model G(w) is amenable to a rigorous analysis of various graph properties and structures. In a series of papers [10, 11, 12, 13], various graph invariants have been examined and sharp bounds have been derived for the diameter, average distance, connected components and spectra of random power law graphs and, in general, random graphs with given expected degrees.

The on-line model is obviously much harder to analyze than the off-line model. There has been some recent work on the on-line model beyond showing that the generated graph has a power law degree distribution. Bollobás and Riordan [7] have derived a number of graph properties for the on-line model by coupling with G(n, p), namely, identifying (almost regular) subgraphs whose behavior can be captured in a similar way as graphs from G(n, p) for some appropriate p. In this paper, our goal is to couple the on-line model with the off-line model of random graphs with a similar power law degree distribution so that we can apply the techniques from the off-line model to the on-line model. The basic idea is similar to the martingale method but with substantial differences. While a martingale involves a sequence of functions with consecutive functions having small bounded differences, each such function is defined on a fixed probability space Ω. For the on-line model, the probability space of the random graph generated at each time instance is, in general, different.
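To make the off-line model concrete, here is a minimal sketch (with hypothetical function and variable names) of sampling from G(w), taking the edge probability between u and v to be w_u w_v / ∑_k w_k, assumed to be at most 1:

```python
import random

def sample_gw(w, seed=None):
    """Sample a graph from G(w): the edge {u, v} appears independently
    with probability w[u] * w[v] / sum(w) (assumed to be at most 1)."""
    rng = random.Random(seed)
    n = len(w)
    total = sum(w)
    edges = set()
    for u in range(n):
        for v in range(u + 1, n):
            if total > 0 and rng.random() < w[u] * w[v] / total:
                edges.add((u, v))
    return edges
```

With all weights equal to pn, the edge probability reduces to p, recovering the Erdős-Rényi model G(n, p) as a special case.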
We have a sequence of probability spaces where two consecutive ones have small differences. To analyze this, we need to examine the relationship of two distinct random graph models, each of which can be viewed as a probability space. In order to do so, we shall describe two basic methods that are not only useful for our proofs here but are also interesting in their own right.

Comparing two random graph models: We define the dominance of one random graph model over another in Section 4. Several key lemmas for controlling the differences are also given there.

A general Azuma inequality: A concentration inequality is derived for martingales that are almost Lipschitz. A complete proof is given in Section

5.

The main result of this paper is to show the following results for the random graph G generated by the on-line model G(p_1, p_2, p_3, p_4, m) with p_1 > p_3, p_2 > p_4, as defined in Section 3:

1. Almost surely the degree sequence of the random graph generated by the growth-deletion model G(p_1, p_2, p_3, p_4, m) follows the power law distribution with exponent β = 2 + (p_1 + p_3)/(p_1 + 2p_2 − p_3 − 2p_4).

2. Suppose m > log^{1+ɛ} n. For p_2 > p_3 + p_4, we have 2 < β < 3. Almost surely a random graph in G(p_1, p_2, p_3, p_4, m) has diameter Θ(log n) and average distance O(log log n / log(1/(β − 2))). We note that the average distance is defined to be the average over all distances among pairs of vertices in the same connected component.

3. Suppose m > log^{1+ɛ} n. For p_2 < p_3 + p_4, we have β > 3. Almost surely a random graph in G(p_1, p_2, p_3, p_4, m) has diameter Θ(log n) and average distance O(log n / log d), where d is the average degree.

4. Suppose m > log^{1+ɛ} n. Almost surely a random graph in G(p_1, p_2, p_3, p_4, m) has Cheeger constant at least 1/2 + o(1).

5. Suppose m > log^{1+ɛ} n. Almost surely a random graph in G(p_1, p_2, p_3, p_4, m) has spectral gap λ_1 at least 1/8 + o(1).

We note that the Cheeger constant h_G of a graph G, which is sometimes called the conductance, is defined by

h_G = min_A |E(A, Ā)| / min{vol(A), vol(Ā)},

where vol(A) = ∑_{x ∈ A} deg(x). The Cheeger constant is closely related to the spectral gap λ_1 of the Laplacian of the graph by the Cheeger inequality 2h_G ≥ λ_1 ≥ h_G²/2. Thus both h_G and λ_1 are key invariants for controlling the rate of convergence of random walks on G.

2 Strong properties of off-line random power law graphs

For random graphs with given expected degree sequences satisfying a power law distribution with exponent β, we may assume that the expected degrees are

w_i = c i^{−1/(β−1)} for i satisfying i_0 ≤ i < n + i_0. Here c depends on the average degree d and i_0 depends on the maximum degree m, namely,

c = ((β−2)/(β−1)) d n^{1/(β−1)},   i_0 = n (d(β−2)/(m(β−1)))^{β−1}.

Average distance and diameter

Fact 1 ([12]) For a power law random graph with exponent β > 3 and average degree d strictly greater than 1, almost surely the average distance is (1 + o(1)) log n / log d and the diameter is Θ(log n).

Fact 2 ([12]) Suppose a power law random graph with exponent β has average degree d strictly greater than 1 and maximum degree m satisfying log m ≫ log n / log log n. If 2 < β < 3, almost surely the diameter is Θ(log n) and the average distance is at most (2 + o(1)) log log n / log(1/(β−2)). For the case of β = 3, the power law random graph almost surely has diameter Θ(log n) and average distance Θ(log n / log log n).

Connected components

Fact 3 ([10]) Suppose that G is a random graph in G(w) with given expected degree sequence w. If the expected average degree d is strictly greater than 1, then the following holds:

(1) Almost surely G has a unique giant component. Furthermore, the volume of the giant component is at least (1 − 2/√(de) + o(1)) vol(G) if d ≥ 4/e, and is at least (1 − (1 + log d)/d + o(1)) vol(G) if d < 2.

(2) The second largest component almost surely has size O(log n / log d).

Spectra of the adjacency matrix and the Laplacian

The spectra of the adjacency matrix and the Laplacian of a non-regular graph can have quite different distributions. The definition of the Laplacian can be found in [8].

Fact 4 ([11]) 1. The largest eigenvalue of the adjacency matrix of a random graph with a given expected degree sequence is determined by m, the maximum degree, and d̃, the weighted average of the squares of the expected degrees. We show that the largest eigenvalue of the adjacency matrix is almost surely (1 + o(1)) max{d̃, √m} provided some minor conditions are satisfied. In addition, suppose that the k-th largest expected degree m_k is significantly larger than d̃.
Then the k-th largest eigenvalue of the adjacency matrix is almost surely (1 + o(1))√(m_k).

2. For a random power law graph with exponent β > 2.5, the largest eigenvalue is almost surely (1 + o(1))√m, where m

is the maximum degree. Moreover, the k largest eigenvalues of a random power law graph with exponent β have power law distribution with exponent 2β − 1 if the maximum degree is sufficiently large and k is bounded above by a function depending on β, m and d, the average degree. When 2 < β < 2.5, the largest eigenvalue is heavily concentrated at cm^{3−β} for some constant c depending on β and the average degree.

3. We will show that the eigenvalues of the Laplacian satisfy the semi-circle law under the condition that the minimum expected degree is relatively large (≫ the square root of the expected average degree). This condition contains the basic case when all degrees are equal (the Erdős-Rényi model). If we weaken the condition on the minimum expected degree, we still have the following strong bound for the eigenvalues of the Laplacian, which implies strong expansion rates for rapid mixing:

max_{i ≠ 0} |1 − λ_i| ≤ (1 + o(1)) 4/√w̄ + g(n) log² n / w_min,

where w̄ is the expected average degree, w_min is the minimum expected degree and g(n) is any slowly growing function of n.

3 A growth-deletion model for generating random power law graphs

One explanation for the ubiquitous occurrence of power laws is the simple growth rules that can result in a power law distribution (see [3, 4]). Nevertheless, realistic networks usually encounter both the growth and deletion of vertices and edges. Here we consider a general on-line model that combines deletion steps with the preferential attachment model.

Vertex-growth-step: Add a new vertex v and form a new edge from v to an existing vertex u chosen with probability proportional to d_u.

Edge-growth-step: Add a new edge with both endpoints chosen among existing vertices with probability proportional to the degrees. If the generated edge already exists in the current graph, it is discarded, and the edge-growth-step is repeated until a new edge is successfully added.

Vertex-deletion-step: Delete a vertex chosen uniformly at random.

Edge-deletion-step: Delete an edge chosen uniformly at random.
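The four step types above can be sketched as follows. This is a minimal simulation sketch with hypothetical class and method names; degree-proportional sampling is implemented by picking a uniform endpoint of a uniform edge, and the possibility of the edge-growth-step looping on a complete graph is handled with a bounded retry loop:

```python
import random

class GrowthDeletionGraph:
    """Minimal sketch of the four growth/deletion step types."""

    def __init__(self, seed=None):
        self.rng = random.Random(seed)
        self.vertices = {0, 1}
        self.edges = {(0, 1)}      # initial graph: a single edge
        self.next_id = 2

    def _preferential(self):
        # A vertex chosen proportionally to its degree is a uniformly
        # random endpoint of a uniformly random edge.
        e = self.rng.choice(sorted(self.edges))
        return self.rng.choice(e)

    def vertex_growth(self):
        u = self._preferential()
        v, self.next_id = self.next_id, self.next_id + 1
        self.vertices.add(v)
        self.edges.add((u, v))     # v is new, so u < v always holds

    def edge_growth(self, max_tries=1000):
        # Repeat until a new (non-loop, non-existing) edge is added;
        # on a complete graph this would loop forever, hence the bound.
        for _ in range(max_tries):
            u, v = self._preferential(), self._preferential()
            e = (min(u, v), max(u, v))
            if u != v and e not in self.edges:
                self.edges.add(e)
                return

    def vertex_deletion(self):
        v = self.rng.choice(sorted(self.vertices))
        self.vertices.discard(v)
        self.edges = {e for e in self.edges if v not in e}

    def edge_deletion(self):
        self.edges.discard(self.rng.choice(sorted(self.edges)))
```

A driver that picks among the four methods with probabilities p_1, p_2, p_3, p_4 at each step then realizes the model defined next.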

For non-negative values p_1, p_2, p_3, p_4 summing to 1, we consider the following growth-deletion model G(p_1, p_2, p_3, p_4): At each step, with probability p_1, take a vertex-growth step; with probability p_2, take an edge-growth step; with probability p_3, take a vertex-deletion step; with probability p_4 = 1 − p_1 − p_2 − p_3, take an edge-deletion step. Here we assume p_3 < p_1 and p_4 < p_2 so that the number of vertices and edges grows as t goes to infinity. If p_3 = p_4 = 0, the model is just the usual preferential attachment model, which generates power law graphs with exponent β = 2 + p_1/(p_1 + 2p_2). An extensive survey on the preferential attachment model is given in [5] and rigorous proofs can be found in [3, 13]. This growth-deletion model generates only simple graphs because multiple edges are disallowed at the edge-growth-step. The drawback is that the edge-growth-step could run in a loop, which happens only if the current graph is a complete graph. If this happens, we simply restart the whole procedure from the same initial graph. With high probability, the model generates sparse graphs, so we can omit the analysis of this extreme case. Previously, Bollobás considered edge deletion after the power law graph is generated [7]. Very recently, Cooper, Frieze and Vera [14] independently considered the growth-deletion model with vertex deletion only. We will show (see Section 6) the following: Suppose p_3 < p_1 and p_4 < p_2. Then almost surely the degree sequence of the growth-deletion model G(p_1, p_2, p_3, p_4) follows the power law distribution with the exponent

β = 2 + (p_1 + p_3)/(p_1 + 2p_2 − p_3 − 2p_4).

We note that a random graph in G(p_1, p_2, p_3, p_4) almost surely has expected average degree 2(p_1 + p_2 − p_4)/(p_1 + p_3). For p_i's in certain ranges, this value can be below 1 and the random graph is then not connected. To simulate graphs with specified degrees, we consider the following modified model G(p_1, p_2, p_3, p_4, m), for some integer m, which generates random graphs with expected average degree 2m(p_1 + p_2 − p_4)/(p_1 + p_3).
At each step, with probability p_1, add a new vertex v and form m new edges from v to existing vertices u chosen with probability proportional to d_u; with probability p_2, take m edge-growth steps; with probability p_3, take a vertex-deletion step; with probability p_4 = 1 − p_1 − p_2 − p_3, take m edge-deletion steps. Suppose p_3 < p_1 and p_4 < p_2. Then almost surely the degree sequence of the growth-deletion model G(p_1, p_2, p_3, p_4, m) follows the power law distribution

with the same exponent β as for the model G(p_1, p_2, p_3, p_4):

β = 2 + (p_1 + p_3)/(p_1 + 2p_2 − p_3 − 2p_4).

Many results for G(p_1, p_2, p_3, p_4, m) can be derived in the same fashion as for G(p_1, p_2, p_3, p_4). Indeed, G(p_1, p_2, p_3, p_4) = G(p_1, p_2, p_3, p_4, 1) is usually the hardest case because of the sparseness of the graphs.

4 Comparing random graphs

In their early work on random graphs, Erdős and Rényi first used the model F(n, m), in which each graph on n vertices and m edges is chosen with equal probability, where n and m are given fixed numbers. This model is apparently different from the later model G(n, p), for which a random graph is formed by choosing independently each of the n(n−1)/2 pairs of vertices to be an edge with probability p. Because of its simplicity and ease of use, G(n, p) was the model for the seminal work of Erdős and Rényi. Since then, G(n, p) has been widely used and is often referred to as the Erdős-Rényi model. For m = p n(n−1)/2, the two models are closely correlated in the sense that many graph properties are satisfied by both random graph models.

To precisely define the relationship of two random graph models, we need some definitions. A graph property P can be viewed as a set of graphs. We say a graph G satisfies property P if G is a member of P. A graph property A is said to be monotone if whenever a graph H satisfies A, then any graph containing H also satisfies A. For example, the property of containing a specified subgraph, say the Petersen graph, is a monotone property. A random graph G is a probability distribution Pr(G = ·). Given two random graphs G_1 and G_2 on n vertices, we say G_1 dominates G_2 if for any monotone graph property A, the probability that a random graph from G_1 satisfies A is greater than or equal to the probability that a random graph from G_2 satisfies A, i.e.,

Pr(G_1 satisfies A) ≥ Pr(G_2 satisfies A).

In this case, we write G_1 ⪰ G_2 and G_2 ⪯ G_1. For example, for any p_1 ≤ p_2, we have G(n, p_1) ⪯ G(n, p_2).
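The claim that G(n, p_1) ⪯ G(n, p_2) for p_1 ≤ p_2 can be seen by a direct coupling: draw one uniform variable per vertex pair and threshold it at p_1 and at p_2, so the first graph is always a subgraph of the second and every monotone property transfers. A minimal sketch with a hypothetical function name:

```python
import random

def coupled_gnp(n, p1, p2, seed=None):
    """Couple G(n, p1) and G(n, p2) (p1 <= p2) on one probability space
    so that the first graph is always a subgraph of the second."""
    rng = random.Random(seed)
    g1, g2 = set(), set()
    for u in range(n):
        for v in range(u + 1, n):
            x = rng.random()          # one uniform draw per pair
            if x < p1:
                g1.add((u, v))
            if x < p2:
                g2.add((u, v))
    return g1, g2
```

Since the subgraph relation holds pointwise on the coupled space, any monotone property satisfied by the first graph is satisfied by the second, which is exactly the domination inequality.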
For any ɛ > 0, we say G_1 dominates G_2 with an error estimate ɛ if for any monotone graph property A, the probability that a random graph from G_1 satisfies A is greater than or equal to the probability that a random graph from G_2 satisfies A up to an ɛ error term, i.e.,

Pr(G_1 satisfies A) + ɛ ≥ Pr(G_2 satisfies A).

If G_1 dominates G_2 with an error estimate ɛ = ɛ_n which goes to zero as n approaches infinity, we say G_1 almost surely dominates G_2. In this case, we write almost surely G_1 ⪰ G_2 and G_2 ⪯ G_1. For example, for any δ > 0, we have almost surely

G(n, (1−δ)m/(n(n−1)/2)) ⪯ F(n, m) ⪯ G(n, (1+δ)m/(n(n−1)/2)).

We can extend the definition of domination to graphs of different sizes in the following sense. Suppose that random graph G_i has n_i vertices for i = 1, 2, and n_1 < n_2. By adding n_2 − n_1 isolated vertices, the random graph G_1 is extended to a random graph G_1' with the same size as G_2. We say G_2 dominates G_1 if G_2 dominates G_1'.

We consider random graphs that are constructed inductively by pivoting at one edge at a time. Here we assume the number of vertices is n.

Edge-pivoting: For an edge e ∈ K_n, a probability q (0 ≤ q ≤ 1), and a random graph G_1, a new random graph G_2 can be constructed in the following way. For any graph H, we define

Pr(G_2 = H) = (1 − q) Pr(G_1 = H)                    if e ∉ E(H),
Pr(G_2 = H) = Pr(G_1 = H) + q Pr(G_1 = H \ {e})     if e ∈ E(H).

It is easy to check that Pr(G_2 = ·) is a probability distribution. We say G_2 is constructed from G_1 by pivoting at the edge e with probability q. For any graph property A, we define the set A_e to be

A_e = {H ∪ {e} : H ∈ A}.

Further, we define the set A_ē to be

A_ē = {H \ {e} : H ∈ A}.

In other words, A_e consists of the graphs obtained by adding the edge e to the graphs in A; A_ē consists of the graphs obtained by deleting the edge e from the graphs in A. We have the following useful lemma.

Lemma 1 Suppose G_2 is constructed from G_1 by pivoting at the edge e with probability q. Then for any property A, we have

Pr(G_2 ∈ A) = Pr(G_1 ∈ A) + q [Pr(G_1 ∈ (A ∩ A_e)_ē) − Pr(G_1 ∈ A ∩ A_ē)].

In particular, if A is a monotone property, we have

Pr(G_2 ∈ A) ≥ Pr(G_1 ∈ A).

Thus, G_2 dominates G_1.

Proof: The set associated with a property A can be partitioned into the following subsets. Let A_1 = A ∩ A_e be the graphs of A containing the edge e, and A_2 = A ∩ A_ē be the graphs of A not containing the edge e. We have

Pr(G_2 ∈ A) = Pr(G_2 ∈ A_1) + Pr(G_2 ∈ A_2)
= ∑_{H ∈ A_1} Pr(G_2 = H) + ∑_{H ∈ A_2} Pr(G_2 = H)
= ∑_{H ∈ A_1} (Pr(G_1 = H) + q Pr(G_1 = H \ {e})) + (1 − q) ∑_{H ∈ A_2} Pr(G_1 = H)
= Pr(G_1 ∈ A_1) + Pr(G_1 ∈ A_2) + q Pr(G_1 ∈ (A_1)_ē) − q Pr(G_1 ∈ A_2)
= Pr(G_1 ∈ A) + q [Pr(G_1 ∈ (A ∩ A_e)_ē) − Pr(G_1 ∈ A ∩ A_ē)].

If A is monotone, we have A_2 ⊆ (A_1)_ē. Thus, Pr(G_2 ∈ A) ≥ Pr(G_1 ∈ A). Lemma 1 is proved.

Lemma 2 Suppose G_i' is constructed from G_i by pivoting at the edge e with probability q_i, for i = 1, 2. If q_1 ≥ q_2 and G_1 dominates G_2, then G_1' dominates G_2'.

Proof: Let A be any monotone property, and let A_1 and A_2 be as in the proof of Lemma 1. We have

Pr(G_1' ∈ A) = Pr(G_1 ∈ A) + q_1 [Pr(G_1 ∈ (A_1)_ē) − Pr(G_1 ∈ A_2)]
= Pr(G_1 ∈ A) + q_1 Pr(G_1 ∈ (A_1)_ē \ A_2)
≥ Pr(G_2 ∈ A) + q_2 Pr(G_2 ∈ (A_1)_ē \ A_2)
= Pr(G_2 ∈ A) + q_2 [Pr(G_2 ∈ (A_1)_ē) − Pr(G_2 ∈ A_2)]
= Pr(G_2' ∈ A).

The proof of Lemma 2 is complete.
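Edge-pivoting is simple enough to check numerically on small distributions. The sketch below (hypothetical names; graphs are frozensets of edges, distributions are dicts mapping graphs to probabilities) constructs Pr(G_2 = ·) exactly as defined above:

```python
def pivot(dist, e, q):
    """Pivot the random graph `dist` at edge e with probability q:
    Pr(G2 = H) = (1 - q) Pr(G1 = H)                 if e not in H,
    Pr(G2 = H) = Pr(G1 = H) + q Pr(G1 = H - {e})    if e in H."""
    new = {}
    for H, p in dist.items():
        if e in H:
            # graphs already containing e keep their full mass
            new[H] = new.get(H, 0.0) + p
        else:
            # mass q * p migrates from H to H + {e}
            new[H] = new.get(H, 0.0) + (1.0 - q) * p
            He = H | {e}
            new[He] = new.get(He, 0.0) + q * p
    return new
```

Starting from the point mass on the empty graph and pivoting repeatedly at distinct edges yields an edge-independent random graph, which is how pivoting is used to build couplings one edge at a time.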

Let G_1 and G_2 be random graphs on n vertices. We define G_1 ∪ G_2 to be the random graph with distribution

Pr(G_1 ∪ G_2 = H) = ∑_{H_1 ∪ H_2 = H} Pr(G_1 = H_1) Pr(G_2 = H_2),

where H_1, H_2 range over all possible pairs of subgraphs (not necessarily disjoint) whose union is H. The following lemma is a generalization of Lemma 2.

Lemma 3 If G_1 dominates G_3 with an error estimate ɛ_1 and G_2 dominates G_4 with an error estimate ɛ_2, then G_1 ∪ G_2 dominates G_3 ∪ G_4 with an error estimate ɛ_1 + ɛ_2.

Proof: For any monotone property A and any graph H, we define the set f(A, H) to be

f(A, H) = {G : G ∪ H ∈ A}.

We observe that f(A, H) is also a monotone property. Therefore,

Pr(G_1 ∪ G_2 ∈ A) = ∑_{H ∈ A} ∑_{H_1 ∪ H_2 = H} Pr(G_1 = H_1) Pr(G_2 = H_2)
= ∑_{H_1} Pr(G_1 = H_1) Pr(G_2 ∈ f(A, H_1))
≥ ∑_{H_1} Pr(G_1 = H_1) (Pr(G_4 ∈ f(A, H_1)) − ɛ_2)
≥ Pr(G_1 ∪ G_4 ∈ A) − ɛ_2.

Similarly, we have Pr(G_1 ∪ G_4 ∈ A) ≥ Pr(G_3 ∪ G_4 ∈ A) − ɛ_1. Thus, we get

Pr(G_1 ∪ G_2 ∈ A) ≥ Pr(G_3 ∪ G_4 ∈ A) − (ɛ_1 + ɛ_2),

as desired.

Suppose φ is a sequence of random graphs φ(G_1), φ(G_2), ..., where the indices of φ range over all graphs on n vertices. Recall that a random graph G is a probability distribution Pr(G = ·) over the space of all graphs on n vertices. For any random graph G, we define φ(G) to be the random graph with distribution

Pr(φ(G) = H) = ∑_{H'} Pr(G = H') Pr(φ(H') = H).

We have

Lemma 4 Let φ_1 and φ_2 be two sequences of random graphs where the indices of φ_1 and φ_2 range over all graphs on n vertices. Let G be any random graph. If

Pr(G ∈ {H : φ_1(H) dominates φ_2(H) with an error estimate ɛ_1}) ≥ 1 − ɛ_2,

then φ_1(G) dominates φ_2(G) with an error estimate ɛ_1 + ɛ_2.

Proof: Let D = {H : φ_1(H) dominates φ_2(H) with an error estimate ɛ_1}, so Pr(G ∉ D) ≤ ɛ_2. For any monotone property A, we have

Pr(φ_1(G) ∈ A) = ∑_{H'} Pr(G = H') Pr(φ_1(H') ∈ A)
≥ ∑_{H' ∈ D} Pr(G = H') Pr(φ_1(H') ∈ A)
≥ ∑_{H' ∈ D} Pr(G = H') (Pr(φ_2(H') ∈ A) − ɛ_1)
≥ Pr(φ_2(G) ∈ A) − (ɛ_1 + ɛ_2),

as desired.

Let G_1 and G_2 be random graphs on n vertices. We define G_1 \ G_2 to be the random graph with distribution

Pr(G_1 \ G_2 = H) = ∑_{H_1 \ H_2 = H} Pr(G_1 = H_1) Pr(G_2 = H_2),

where H_1, H_2 range over all pairs of graphs.

Lemma 5 If G_1 dominates G_3 with an error estimate ɛ_1 and G_2 is dominated by G_4 with an error estimate ɛ_2, then G_1 \ G_2 dominates G_3 \ G_4 with an error estimate ɛ_1 + ɛ_2.

Proof: For any monotone property A and any graph H, we define the set ψ(A, H) to be

ψ(A, H) = {G : G \ H ∈ A}.

We observe that ψ(A, H) is also a monotone property. Therefore,

Pr(G_1 \ G_2 ∈ A) = ∑_{H ∈ A} ∑_{H_1 \ H_2 = H} Pr(G_1 = H_1) Pr(G_2 = H_2)
= ∑_{H_2} Pr(G_2 = H_2) Pr(G_1 ∈ ψ(A, H_2))
≥ ∑_{H_2} Pr(G_2 = H_2) (Pr(G_3 ∈ ψ(A, H_2)) − ɛ_1)
≥ Pr(G_3 \ G_2 ∈ A) − ɛ_1.

Similarly, we define the set θ(A, H) to be

θ(A, H) = {G : H \ G ∈ A}.

We observe that the complement of the set θ(A, H) is a monotone property. We have

Pr(G_3 \ G_2 ∈ A) = ∑_{H_1} Pr(G_3 = H_1) Pr(G_2 ∈ θ(A, H_1))
≥ ∑_{H_1} Pr(G_3 = H_1) (Pr(G_4 ∈ θ(A, H_1)) − ɛ_2)
≥ Pr(G_3 \ G_4 ∈ A) − ɛ_2.

Thus, we get

Pr(G_1 \ G_2 ∈ A) ≥ Pr(G_3 \ G_4 ∈ A) − (ɛ_1 + ɛ_2),

as desired.

A random graph G is called edge-independent (or independent, for short) if there is an edge-weight function p : E(K_n) → [0, 1] satisfying

Pr(G = H) = ∏_{e ∈ H} p_e ∏_{e ∉ H} (1 − p_e).

For example, a random graph with a given expected degree sequence is edge-independent. Edge-independent random graphs have many nice properties, several of which we derive here.
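The union operation interacts simply with edge-independence (Lemma 6 below): the combined edge weight is p_e^1 + p_e^2 − p_e^1 p_e^2, since an edge is absent from the union only if it is absent from both graphs. This can be verified by brute force on a tiny edge set (hypothetical function names; graphs are frozensets of abstract edge labels):

```python
from itertools import product

def graph_dist(p):
    """Exact distribution of an edge-independent random graph:
    Pr(H) = prod_{e in H} p_e * prod_{e not in H} (1 - p_e)."""
    edges = sorted(p)
    dist = {}
    for bits in product([0, 1], repeat=len(edges)):
        H = frozenset(e for e, b in zip(edges, bits) if b)
        pr = 1.0
        for e, b in zip(edges, bits):
            pr *= p[e] if b else 1.0 - p[e]
        dist[H] = pr
    return dist

def union_dist(d1, d2):
    """Distribution of G1 union G2 by direct convolution over unions."""
    out = {}
    for H1, q1 in d1.items():
        for H2, q2 in d2.items():
            H = H1 | H2
            out[H] = out.get(H, 0.0) + q1 * q2
    return out
```

Comparing union_dist(graph_dist(p1), graph_dist(p2)) against graph_dist of the combined weights agrees term by term, matching Lemma 6.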

Lemma 6 Suppose that G_1 and G_2 are independent random graphs with edge-weight functions p^1 and p^2. Then G_1 ∪ G_2 is edge-independent with edge-weight function p satisfying p_e = p_e^1 + p_e^2 − p_e^1 p_e^2.

Proof: For any graph H, we have

Pr(G_1 ∪ G_2 = H) = ∑_{H_1 ∪ H_2 = H} Pr(G_1 = H_1) Pr(G_2 = H_2)
= ∑_{H_1 ∪ H_2 = H} ∏_{e_1 ∈ H_1} p_{e_1}^1 ∏_{e_2 ∈ H_2} p_{e_2}^2 ∏_{e_3 ∉ H_1} (1 − p_{e_3}^1) ∏_{e_4 ∉ H_2} (1 − p_{e_4}^2)
= ∏_{e ∉ H} (1 − p_e^1)(1 − p_e^2) ∏_{e ∈ H} (p_e^1(1 − p_e^2) + (1 − p_e^1)p_e^2 + p_e^1 p_e^2)
= ∏_{e ∈ H} p_e ∏_{e ∉ H} (1 − p_e).

Lemma 7 Suppose that G_1 and G_2 are independent random graphs with edge-weight functions p^1 and p^2. Then G_1 \ G_2 is independent with edge-weight function p satisfying p_e = p_e^1(1 − p_e^2).

Proof: For any graph H, we have

Pr(G_1 \ G_2 = H) = ∑_{H_1 \ H_2 = H} Pr(G_1 = H_1) Pr(G_2 = H_2)
= ∑_{H_1 \ H_2 = H} ∏_{e_1 ∈ H_1} p_{e_1}^1 ∏_{e_2 ∈ H_2} p_{e_2}^2 ∏_{e_3 ∉ H_1} (1 − p_{e_3}^1) ∏_{e_4 ∉ H_2} (1 − p_{e_4}^2)
= ∏_{e ∈ H} p_e^1(1 − p_e^2) ∏_{e ∉ H} (1 − p_e^1(1 − p_e^2))
= ∏_{e ∈ H} p_e ∏_{e ∉ H} (1 − p_e).

Let {p_e}_{e ∈ E(K_n)} be a probability distribution over all pairs of vertices. Let G^1 be the random graph with one edge, where a pair e of vertices is chosen with probability p_e. Inductively, we define the random graph G^m by adding one more random edge to G^{m−1}, where a pair e of vertices is chosen (as the new edge) with probability p_e. (There is a small probability of having the same edge chosen more than once. In such a case, we keep on sampling until we have

exactly m different edges.) Hence, G^m has exactly m edges. The probability that G^m has edges e_1, ..., e_m is proportional to p_{e_1} p_{e_2} ··· p_{e_m}. The following lemma states that G^m can be sandwiched between two independent random graphs with exponentially small errors if m is large enough.

Lemma 8 Assume p_e = o(1/m) for all e ∈ E(K_n). Let G_1 be the independent random graph with edge-weight function p_e^1 = (1 − δ)m p_e. Let G_2 be the independent random graph with edge-weight function p_e^2 = (1 + δ)m p_e. Then G^m dominates G_1 with an error estimate e^{−δ²m/4}, and G^m is dominated by G_2 with an error estimate e^{−δ²m/4}.

Proof: For any graph H, we define

f(H) = ∏_{e ∈ H} p_e.

For any graph property B, we define f(B) = ∑_{H ∈ B} f(H). Let C_k be the set of all graphs with exactly k edges.

Claim 1: For a monotone graph property A and an integer k, we have

f(A ∩ C_k)/f(C_k) ≤ f(A ∩ C_{k+1})/f(C_{k+1}).

Proof: Both f(A ∩ C_k)f(C_{k+1}) and f(A ∩ C_{k+1})f(C_k) are homogeneous polynomials in {p_e} of degree 2k + 1. We compare the coefficients of a general monomial p_{e_1}² ··· p_{e_r}² p_{e_{r+1}} ··· p_{e_{2k−r+1}} in f(A ∩ C_k)f(C_{k+1}) and in f(A ∩ C_{k+1})f(C_k). The coefficient c_1 of the monomial in f(A ∩ C_k)f(C_{k+1}) is the number of (k−r)-subsets {e_{i_1}, ..., e_{i_{k−r}}} of {e_{r+1}, ..., e_{2k−r+1}} such that the graph with edges {e_1, ..., e_r, e_{i_1}, ..., e_{i_{k−r}}} belongs to A ∩ C_k. The coefficient c_2 of the monomial in f(A ∩ C_{k+1})f(C_k) is the number of (k−r+1)-subsets {e_{i_1}, ..., e_{i_{k−r+1}}} of {e_{r+1}, ..., e_{2k−r+1}} such that the graph with edges {e_1, ..., e_r, e_{i_1}, ..., e_{i_{k−r+1}}} belongs to A ∩ C_{k+1}. Since A is monotone, if the graph with edges {e_1, ..., e_r, e_{i_1}, ..., e_{i_{k−r}}} belongs to A ∩ C_k, then the graph with edges {e_1, ..., e_r, e_{i_1}, ..., e_{i_{k−r+1}}} belongs to A ∩ C_{k+1}. Hence c_1 is always less than or equal to c_2. Thus, we have

f(A ∩ C_k)f(C_{k+1}) ≤ f(A ∩ C_{k+1})f(C_k).

The claim is proved.

Now let p_e^1/(1 − p_e^1) = (1 − δ)m p_e, or equivalently,

p_e^1 = (1 − δ)m p_e / (1 + (1 − δ)m p_e) = (1 + o(1))(1 − δ)m p_e.

Let N = |E(K_n)|. Then

Pr(G_1 ∈ A) = ∑_{k=0}^{N} Pr(G_1 ∈ A ∩ C_k)
≤ ∑_{k=0}^{m} Pr(G_1 ∈ A ∩ C_k) + ∑_{k=m+1}^{N} Pr(G_1 ∈ C_k)
= ∑_{k=0}^{m} ∏_{e ∈ E(K_n)} (1 − p_e^1) ((1 − δ)m)^k f(A ∩ C_k) + Pr(G_1 has more than m edges)
≤ (f(A ∩ C_m)/f(C_m)) ∑_{k=0}^{m} ∏_{e ∈ E(K_n)} (1 − p_e^1) ((1 − δ)m)^k f(C_k) + Pr(G_1 has more than m edges)
= (f(A ∩ C_m)/f(C_m)) ∑_{k=0}^{m} Pr(G_1 ∈ C_k) + Pr(G_1 has more than m edges)
≤ f(A ∩ C_m)/f(C_m) + Pr(G_1 has more than m edges)
= Pr(G^m ∈ A) + Pr(G_1 has more than m edges),

where we used Claim 1 and the fact that Pr(G^m ∈ A) = f(A ∩ C_m)/f(C_m).

Now we estimate the probability that G_1 has more than m edges. Let X_e be the 0-1 random variable with Pr(X_e = 1) = p_e^1, and let X = ∑_e X_e. Then E(X) = (1 + o(1))(1 − δ)m. We apply the following large deviation inequality:

Pr(X − E(X) > a) ≤ e^{−a²/(2(E(X) + a/3))}.

We have

Pr(X > m) = Pr(X − E(X) > (1 + o(1))δm) ≤ e^{−(1+o(1)) δ²m²/(2((1−δ)m + δm/3))} ≤ e^{−δ²m/2}.

For the other direction, let p_e^2/(1 − p_e^2) = (1 + δ)m p_e, which implies

p_e^2 = (1 + δ)m p_e / (1 + (1 + δ)m p_e) = (1 + o(1))(1 + δ)m p_e.

Pr(G_2 ∈ A) = ∑_{k=0}^{N} Pr(G_2 ∈ A ∩ C_k)
≥ ∑_{k=m}^{N} Pr(G_2 ∈ A ∩ C_k)
= ∑_{k=m}^{N} ∏_{e ∈ E(K_n)} (1 − p_e^2) ((1 + δ)m)^k f(A ∩ C_k)
≥ (f(A ∩ C_m)/f(C_m)) ∑_{k=m}^{N} ∏_{e ∈ E(K_n)} (1 − p_e^2) ((1 + δ)m)^k f(C_k)
= (f(A ∩ C_m)/f(C_m)) (1 − ∑_{k=0}^{m−1} Pr(G_2 ∈ C_k))
≥ f(A ∩ C_m)/f(C_m) − Pr(G_2 has fewer than m edges)
= Pr(G^m ∈ A) − Pr(G_2 has fewer than m edges).

Now we estimate the probability that G_2 has fewer than m edges. Let X_e be the 0-1 random variable with Pr(X_e = 1) = p_e^2, and let X = ∑_e X_e. Then E(X) = (1 + o(1))(1 + δ)m. We apply the following large deviation inequality:

Pr(X − E(X) < −a) ≤ e^{−a²/(2E(X))}.

We have

Pr(X < m) = Pr(X − E(X) < −(1 + o(1))δm) ≤ e^{−(1+o(1)) δ²m²/(2(1+δ)m)} ≤ e^{−δ²m/3}.

The proof of Lemma 8 is complete.

5 General martingale inequalities

In this section, we extend and generalize the Azuma inequality to martingales which are not strictly Lipschitz but are nearly Lipschitz. Similar techniques were introduced by Kim and Vu [20] in their important work on deriving concentration inequalities for multivariate polynomials. Here we use a rather general setting and we shall give a complete proof.

Suppose that Ω is a probability space, F is a σ-field, and X is a random variable that is F-measurable. (The reader is referred to [7] for the terminology on martingales.) A filter F is an increasing chain of σ-subfields

{∅, Ω} = F_0 ⊂ F_1 ⊂ ··· ⊂ F_n = F.

A martingale (obtained from) X associated with a filter F is a sequence of random variables X_0, X_1, ..., X_n with X_i = E(X | F_i); in particular, X_0 = E(X) and X_n = X. For a positive vector c = (c_1, c_2, ..., c_n), the martingale X is said to be c-Lipschitz if

|X_i − X_{i−1}| ≤ c_i for i = 1, 2, ..., n.

A powerful tool for controlling martingales is the following.

Azuma's inequality: If a martingale X is c-Lipschitz, then

Pr(|X − E(X)| ≥ a) ≤ 2 e^{−a²/(2 ∑_{i=1}^n c_i²)},

where c = (c_1, ..., c_n).

Here we are only interested in finite probability spaces and we use the following computational model. The random variable X can be evaluated by a sequence of decisions Y_1, Y_2, ..., Y_n. Each decision has no more than r outcomes, and the probability that an outcome is chosen depends on the previous history. We can describe the process by a decision tree T, a complete rooted r-ary tree of depth n. Each edge uv of T is associated with a probability p_{uv} depending on the decision made from u to v. (We allow p_{uv} to be zero, which includes the case of fewer than r outcomes.) Let Ω_i denote the probability space obtained after the first i decisions. Suppose Ω = Ω_n and X is the random variable on Ω. Let π_i : Ω → Ω_i be the projection mapping each point to its first i coordinates, and let F_i be the σ-field generated by Y_1, Y_2, ..., Y_i. (In fact, F_i = π_i^{−1}(2^{Ω_i}) is the full σ-field via the projection π_i.) The F_i form a natural filter:

{∅, Ω} = F_0 ⊂ F_1 ⊂ ··· ⊂ F_n = F.

Each vertex u of T is associated with a real value f(u). If u is a leaf, we define f(u) = X(u). For a general u, here are several equivalent definitions of f(u).

1. For any non-leaf node u, f(u) is the weighted average of the f-values of the children of u:

f(u) = ∑_{i=1}^{r} p_{uv_i} f(v_i),

where v_1, v_2, ..., v_r are the children of u.
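As a quick numerical sanity check of Azuma's inequality, the sketch below (hypothetical function names) compares the bound with the empirical tail of a sum of independent ±1 steps, a martingale that is c-Lipschitz with every c_i = 1:

```python
import math
import random

def azuma_bound(a, c):
    """Right-hand side of Azuma's inequality for a c-Lipschitz martingale:
    Pr(|X - E[X]| >= a) <= 2 exp(-a^2 / (2 * sum c_i^2))."""
    return 2.0 * math.exp(-a * a / (2.0 * sum(ci * ci for ci in c)))

def empirical_tail(n, a, trials, seed=0):
    """Empirical Pr(|S_n| >= a) for S_n a sum of n independent +/-1 steps."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        s = sum(rng.choice((-1, 1)) for _ in range(n))
        if abs(s) >= a:
            hits += 1
    return hits / trials
```

For n = 100 and a = 30, the bound is 2e^{−4.5} ≈ 0.022, comfortably above the true tail probability (roughly 0.003 by the normal approximation), illustrating that Azuma's inequality is valid though not tight.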

2. For a non-leaf node u, f(u) is the weighted average over all leaves in the subtree T_u rooted at u:

f(u) = ∑_{v leaf in T_u} p_u(v) f(v),

where p_u(v) denotes the product of edge-weights over the edges in the unique path from u to v.

3. Let X_i be the random variable on Ω which, for each node u of depth i, assumes the value f(u) on every leaf in the subtree T_u. Then X_0, X_1, ..., X_n form a martingale, i.e., X_i = E(X_n | F_i). In particular, X = X_n is the restriction of f to the leaves of T.

We note that the Lipschitz condition |X_i − X_{i−1}| ≤ c_i is equivalent to |f(u) − f(v)| ≤ c_i for any edge uv from a vertex u of depth i − 1 to a vertex v of depth i. We say an edge uv is bad if |f(u) − f(v)| > c_i. We say a node u is good if the path from the root to u does not contain any node of a bad edge. The following theorem further generalizes Azuma's inequality. A similar but more restricted version can be found in [20].

Theorem 1 For any c_1, c_2, ..., c_n, a martingale X satisfies

Pr(|X − E(X)| ≥ a) ≤ 2 e^{−a²/(2 ∑_{i=1}^n c_i²)} + Pr(B),

where B is the set of all bad leaves of the decision tree associated with X.

Proof: We define a modified labeling f' on T so that f'(u) = f(u) if u is a good node in T. For each bad node u, let xy be the first bad edge which intersects the path from the root to u at x. We define f'(u) = f(x).

Claim A: f'(u) = ∑_{i=1}^{r} p_{uv_i} f'(v_i), for any u with children v_1, ..., v_r.

If u is a good vertex, we always have f'(v_i) = f(v_i), no matter whether v_i is good or not. Since f(u) = ∑_{i=1}^{r} p_{uv_i} f(v_i), we have f'(u) = ∑_{i=1}^{r} p_{uv_i} f'(v_i). If u is a bad vertex, then v_1, ..., v_r are all bad by the definition, and we have f'(u) = f'(v_1) = ··· = f'(v_r). Hence,

∑_{i=1}^{r} p_{uv_i} f'(v_i) = f'(u) ∑_{i=1}^{r} p_{uv_i} = f'(u).

Claim B: f' is c-Lipschitz.

For any edge uv with u of depth i − 1 and v of depth i: if u is a good vertex, then uv is a good edge, and

|f'(u) − f'(v)| = |f(u) − f(v)| ≤ c_i.

If u is a bad vertex, we have f'(u) = f'(v) and thus |f'(u) − f'(v)| = 0 ≤ c_i.

Let X' be the random variable which is the restriction of f' to the leaves. Then X' is c-Lipschitz, and we can apply Azuma's inequality to X', namely,

Pr(|X' − E(X')| ≥ a) ≤ 2 e^{−a²/(2 ∑_{i=1}^n c_i²)}.

From the definition of f', we have E(X') = E(X). Let B denote the set of bad leaves in the decision tree T of X. Clearly, we have Pr(X ≠ X') ≤ Pr(B). Therefore,

Pr(|X − E(X)| ≥ a) ≤ Pr(X ≠ X') + Pr(|X' − E(X')| ≥ a) ≤ 2 e^{−a²/(2 ∑_{i=1}^n c_i²)} + Pr(B).

The proof of the theorem is complete.

For some applications, even the nearly-Lipschitz condition is still not feasible. Here we consider a further extension of Azuma's inequality. Our starting point is the following well-known concentration inequality (see [23]):

Theorem Let X be the martingale associated with a filter F satisfying

1. Var(X_i | F_{i−1}) ≤ σ_i², for 1 ≤ i ≤ n;
2. |X_i − X_{i−1}| ≤ M, for 1 ≤ i ≤ n.

Then we have

Pr(X − E(X) ≥ a) ≤ e^{−a²/(2(∑_{i=1}^n σ_i² + Ma/3))}.

In this paper, we consider a strengthened version of the above inequality where the variance Var(X_i | F_{i−1}) is instead upper bounded by a constant factor times X_{i−1}. We first need some terminology. For a filter F:

{∅, Ω} = F_0 ⊂ F_1 ⊂ ··· ⊂ F_n = F,

a sequence of random variables X_0, X_1, ..., X_n is called a submartingale if X_i is F_i-measurable and E(X_i | F_{i−1}) ≤ X_{i−1}, for 1 ≤ i ≤ n. A sequence of random variables X_0, X_1, ..., X_n is said to be a supermartingale if X_i is F_i-measurable and E(X_i | F_{i−1}) ≥ X_{i−1}, for 1 ≤ i ≤ n. We have the following theorem.

Theorem 2 Suppose that a submartingale X, associated with a filter F, satisfies

Var(X_i | F_{i−1}) ≤ φ_i X_{i−1}

and

|X_i − E(X_i | F_{i−1})| ≤ M

for 1 ≤ i ≤ n. Then we have

Pr(X_n > X_0 + a) ≤ e^{−a²/(2((X_0 + a)(∑_{i=1}^n φ_i) + Ma/3))}.

Proof: For a positive λ (to be chosen later), we consider

E(e^{λX_i} | F_{i−1}) = e^{λE(X_i|F_{i−1})} E(e^{λ(X_i − E(X_i|F_{i−1}))} | F_{i−1})
= e^{λE(X_i|F_{i−1})} ∑_{k=0}^{∞} (λ^k/k!) E((X_i − E(X_i|F_{i−1}))^k | F_{i−1})
≤ e^{λE(X_i|F_{i−1}) + ∑_{k=2}^{∞} (λ^k/k!) E((X_i − E(X_i|F_{i−1}))^k | F_{i−1})}.

We use the following facts about the function g(y) = 2(e^y − 1 − y)/y²: we have lim_{y→0} g(y) = 1; g(y) is monotone increasing for y ≥ 0; and when b < 3, we have

g(b) = ∑_{k=2}^{∞} 2b^{k−2}/k! ≤ ∑_{k=2}^{∞} b^{k−2}/3^{k−2} = 1/(1 − b/3). (1)

Since |X_i − E(X_i | F_{i−1})| ≤ M, we have

∑_{k=2}^{∞} (λ^k/k!) E((X_i − E(X_i|F_{i−1}))^k | F_{i−1}) ≤ (g(λM)/2) λ² Var(X_i | F_{i−1}).

We define $\lambda_i \ge 0$ for $0 < i \le n$, satisfying
\[ \lambda_{i-1} = \lambda_i + \frac{g(\lambda_0 M)}{2}\,\phi_i \lambda_i^2, \]
while $\lambda_0$ will be chosen later. Then $\lambda_n \le \lambda_{n-1} \le \cdots \le \lambda_0$, and
\[ E(e^{\lambda_i X_i} \mid \mathcal{F}_{i-1}) \le e^{\lambda_i E(X_i\mid\mathcal{F}_{i-1}) + \frac{g(\lambda_i M)}{2}\lambda_i^2 \mathrm{Var}(X_i\mid\mathcal{F}_{i-1})} \le e^{\lambda_i X_{i-1} + \frac{g(\lambda_0 M)}{2}\lambda_i^2 \phi_i X_{i-1}} = e^{\lambda_{i-1} X_{i-1}}, \]
since $g(y)$ is increasing for $y \ge 0$. By Markov's inequality, we have
\begin{align*}
\Pr(X_n \ge X_0 + a) &\le e^{-\lambda_n(X_0+a)}\, E(e^{\lambda_n X_n}) \\
&= e^{-\lambda_n(X_0+a)}\, E\big(E(e^{\lambda_n X_n} \mid \mathcal{F}_{n-1})\big) \\
&\le e^{-\lambda_n(X_0+a)}\, E(e^{\lambda_{n-1} X_{n-1}}) \\
&\le \cdots \le e^{-\lambda_n(X_0+a)}\, E(e^{\lambda_0 X_0}) = e^{-\lambda_n(X_0+a) + \lambda_0 X_0}.
\end{align*}
Note that
\[ \lambda_n = \lambda_0 - \sum_{i=1}^n (\lambda_{i-1} - \lambda_i) = \lambda_0 - \frac{g(\lambda_0 M)}{2} \sum_{i=1}^n \phi_i \lambda_i^2 \ge \lambda_0 - \frac{g(\lambda_0 M)}{2}\,\lambda_0^2 \sum_{i=1}^n \phi_i. \]
Hence
\[ \Pr(X_n \ge X_0 + a) \le e^{-\lambda_n(X_0+a) + \lambda_0 X_0} \le e^{-\big(\lambda_0 - \frac{g(\lambda_0 M)}{2}\lambda_0^2 \sum_{i=1}^n \phi_i\big)(X_0+a) + \lambda_0 X_0} = e^{-\lambda_0 a + \frac{g(\lambda_0 M)}{2}\lambda_0^2 (X_0+a) \sum_{i=1}^n \phi_i}. \]
Now we choose $\lambda_0 = \frac{a}{(X_0+a)(\sum_{i=1}^n \phi_i) + Ma/3}$. Using the fact that $\lambda_0 M < 3$ and inequality (1), we have
\[ \Pr(X_n \ge X_0 + a) \le e^{-\lambda_0 a + \frac{\lambda_0^2 (X_0+a)\sum_{i=1}^n \phi_i}{2(1 - \lambda_0 M/3)}} \le e^{-a^2/(2((X_0+a)(\sum_{i=1}^n \phi_i) + Ma/3))}. \]

The proof of the theorem is finished.

The condition of Theorem 2 can be further relaxed using the same decision-tree technique as above, and we have the following theorem. The proof will be omitted.

Theorem 3. For a filter $\mathcal{F}$:
\[ \{\emptyset, \Omega\} = \mathcal{F}_0 \subset \mathcal{F}_1 \subset \cdots \subset \mathcal{F}_n = \mathcal{F}, \]
suppose a random variable $X_i$ is $\mathcal{F}_i$-measurable, for $0 \le i \le n$. Let $B$ be the bad set in the decision tree associated with the $X_i$'s where at least one of the following conditions is violated:
\[ E(X_i \mid \mathcal{F}_{i-1}) \le X_{i-1}, \qquad \mathrm{Var}(X_i \mid \mathcal{F}_{i-1}) \le \phi_i X_{i-1}, \qquad X_i - E(X_i \mid \mathcal{F}_{i-1}) \le M. \]
Then we have
\[ \Pr(X_n \ge X_0 + a) \le e^{-a^2/(2((X_0+a)(\sum_{i=1}^n \phi_i) + Ma/3))} + \Pr(B). \]

The theorem for supermartingales is slightly different, due to the asymmetry of the condition on the variance.

Theorem 4. Suppose a supermartingale $X$, associated with a filter $\mathcal{F}$, satisfies, for $1 \le i \le n$,
\[ \mathrm{Var}(X_i \mid \mathcal{F}_{i-1}) \le \phi_i X_{i-1} \quad\text{and}\quad E(X_i \mid \mathcal{F}_{i-1}) - X_i \le M. \]
Then we have
\[ \Pr(X_n \le X_0 - a) \le e^{-a^2/(2(X_0(\sum_{i=1}^n \phi_i) + Ma/3))}, \]
for any $a \le X_0$.

Proof: The proof is similar to that of Theorem 2. The following inequality still holds:
\begin{align*}
E(e^{-\lambda X_i} \mid \mathcal{F}_{i-1}) &= e^{-\lambda E(X_i\mid\mathcal{F}_{i-1})} E\big(e^{-\lambda(X_i - E(X_i\mid\mathcal{F}_{i-1}))} \mid \mathcal{F}_{i-1}\big) \\
&= e^{-\lambda E(X_i\mid\mathcal{F}_{i-1})} \sum_{k=0}^\infty \frac{\lambda^k}{k!} E\big((E(X_i\mid\mathcal{F}_{i-1}) - X_i)^k \mid \mathcal{F}_{i-1}\big) \\
&\le e^{-\lambda E(X_i\mid\mathcal{F}_{i-1}) + \sum_{k=2}^\infty \frac{\lambda^k}{k!} E((E(X_i\mid\mathcal{F}_{i-1}) - X_i)^k \mid \mathcal{F}_{i-1})} \\
&\le e^{-\lambda E(X_i\mid\mathcal{F}_{i-1}) + \frac{g(\lambda M)}{2}\lambda^2 \mathrm{Var}(X_i\mid\mathcal{F}_{i-1})} \\
&\le e^{-\lambda X_{i-1} + \frac{g(\lambda M)}{2}\lambda^2 \phi_i X_{i-1}}.
\end{align*}

We now define $\lambda_i \ge 0$, for $0 \le i < n$, satisfying
\[ \lambda_{i-1} = \lambda_i - \frac{g(\lambda_n M)}{2}\,\phi_i \lambda_i^2, \]
where $\lambda_n$ will be chosen later. Then we have $\lambda_0 \le \lambda_1 \le \cdots \le \lambda_n$, and
\[ E(e^{-\lambda_i X_i} \mid \mathcal{F}_{i-1}) \le e^{-\lambda_i E(X_i\mid\mathcal{F}_{i-1}) + \frac{g(\lambda_i M)}{2}\lambda_i^2 \mathrm{Var}(X_i\mid\mathcal{F}_{i-1})} \le e^{-\lambda_i X_{i-1} + \frac{g(\lambda_n M)}{2}\lambda_i^2 \phi_i X_{i-1}} = e^{-\lambda_{i-1} X_{i-1}}. \]
By Markov's inequality, we have
\begin{align*}
\Pr(X_n \le X_0 - a) &= \Pr\big(-\lambda_n X_n \ge -\lambda_n(X_0 - a)\big) \\
&\le e^{\lambda_n(X_0-a)}\, E(e^{-\lambda_n X_n}) \\
&= e^{\lambda_n(X_0-a)}\, E\big(E(e^{-\lambda_n X_n} \mid \mathcal{F}_{n-1})\big) \\
&\le e^{\lambda_n(X_0-a)}\, E(e^{-\lambda_{n-1} X_{n-1}}) \\
&\le \cdots \le e^{\lambda_n(X_0-a)}\, E(e^{-\lambda_0 X_0}) = e^{\lambda_n(X_0-a) - \lambda_0 X_0}.
\end{align*}
We note that
\[ \lambda_0 = \lambda_n - \sum_{i=1}^n (\lambda_i - \lambda_{i-1}) = \lambda_n - \frac{g(\lambda_n M)}{2}\sum_{i=1}^n \phi_i \lambda_i^2 \ge \lambda_n - \frac{g(\lambda_n M)}{2}\,\lambda_n^2 \sum_{i=1}^n \phi_i. \]
Thus, we have
\[ \Pr(X_n \le X_0 - a) \le e^{\lambda_n(X_0-a) - \lambda_0 X_0} \le e^{\lambda_n(X_0-a) - \big(\lambda_n - \frac{g(\lambda_n M)}{2}\lambda_n^2 \sum_{i=1}^n \phi_i\big) X_0} = e^{-\lambda_n a + \frac{g(\lambda_n M)}{2}\lambda_n^2 X_0 \sum_{i=1}^n \phi_i}. \]
We choose $\lambda_n = \frac{a}{X_0(\sum_{i=1}^n \phi_i) + Ma/3}$. We have $\lambda_n M < 3$ and
\[ \Pr(X_n \le X_0 - a) \le e^{-\lambda_n a + \frac{\lambda_n^2 X_0 \sum_{i=1}^n \phi_i}{2(1 - \lambda_n M/3)}} \le e^{-a^2/(2(X_0(\sum_{i=1}^n \phi_i) + Ma/3))}. \]

It remains to verify that all $\lambda_i$'s are non-negative. Indeed,
\[ \lambda_i \ge \lambda_0 \ge \lambda_n - \frac{g(\lambda_n M)}{2}\,\lambda_n^2 \sum_{i=1}^n \phi_i \ge \lambda_n\left(1 - \frac{\lambda_n}{2(1 - \lambda_n M/3)} \sum_{i=1}^n \phi_i\right) = \lambda_n\left(1 - \frac{a}{2X_0}\right) \ge 0. \]
The proof of the theorem is complete.

Again, the above theorem can be further relaxed as follows:

Theorem 5. For a filter $\mathcal{F}$:
\[ \{\emptyset, \Omega\} = \mathcal{F}_0 \subset \mathcal{F}_1 \subset \cdots \subset \mathcal{F}_n = \mathcal{F}, \]
suppose a random variable $X_i$ is $\mathcal{F}_i$-measurable, for $0 \le i \le n$. Let $B'$ be the bad set where at least one of the following conditions is violated:
\[ E(X_i \mid \mathcal{F}_{i-1}) \ge X_{i-1}, \qquad \mathrm{Var}(X_i \mid \mathcal{F}_{i-1}) \le \phi_i X_{i-1}, \qquad E(X_i \mid \mathcal{F}_{i-1}) - X_i \le M. \]
Then we have
\[ \Pr(X_n \le X_0 - a) \le e^{-a^2/(2(X_0(\sum_{i=1}^n \phi_i) + Ma/3))} + \Pr(B'), \]
for any $a \le X_0$.

6 Main theorems for the growth-deletion model

We say a random graph $G$ is almost surely edge-independent if there are two edge-independent random graphs $G_1$ and $G_2$ on the same vertex set satisfying:
1. $G$ dominates $G_1$.
2. $G$ is dominated by $G_2$.
3. For any two vertices $u$ and $v$, let $p^{(i)}_{uv}$ be the probability of the edge $uv$ in $G_i$, for $i = 1, 2$. We have $p^{(1)}_{uv} = (1 - o(1))\, p^{(2)}_{uv}$.
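The domination relations in this definition admit a standard monotone coupling. The sketch below is illustrative (not from the paper): for edge-independent graphs with per-pair probabilities $q_1(uv) \le q_2(uv)$, sampling the denser graph first and thinning each edge independently produces a graph distributed with edge probabilities $q_1$ that is contained in the first sample, which is the sense of "dominates" used above.

```python
import random

random.seed(3)
n = 30
pairs = [(u, v) for u in range(n) for v in range(u + 1, n)]
q2 = {uv: random.uniform(0.1, 0.9) for uv in pairs}   # denser edge probabilities
q1 = {uv: 0.9 * q2[uv] for uv in pairs}               # q1 = (1 - o(1)) q2; factor 0.9 here

# Sample G2 first, then keep each edge uv with probability q1[uv]/q2[uv].
G2 = {uv for uv in pairs if random.random() < q2[uv]}
G1 = {uv for uv in G2 if random.random() < q1[uv] / q2[uv]}

# Pr(uv in G1) = q2[uv] * (q1[uv] / q2[uv]) = q1[uv], and G1 is a subgraph of G2
# by construction, so G2 dominates G1.
contained = G1 <= G2
```

The containment holds deterministically under this coupling, while each marginal edge probability is exactly the prescribed one.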

We will prove the following:

Theorem 6. Suppose $p_3 < p_1$, $p_4 < p_2$, and $\log n \ll m < t^{1/(2+p_1)}$. Then:
1. Almost surely the degree sequence of the growth-deletion model $G(p_1,p_2,p_3,p_4,m)$ follows the power law distribution with the exponent
\[ \beta = 2 + \frac{p_1 + 2p_3}{p_1 + 2p_2 - p_3 - 2p_4}. \]
2. $G(p_1,p_2,p_3,p_4,m)$ is almost surely edge-independent. It dominates and is dominated by an edge-independent graph with probability $p^{(t)}_{ij}$ of having an edge between vertices $i$ and $j$, $i < j$, at time $t$, satisfying
\[ p^{(t)}_{ij} = (1 + o(1))\,\frac{p_1^2 m\, t^{2\alpha - 1}}{4\tau\, i^\alpha j^\alpha} \quad \text{if } i^\alpha j^\alpha \ge \frac{p_1 m\, t^{2\alpha-1}}{4\tau p_4}, \]
and $p^{(t)}_{ij} = 1 - o(1)$ if $i^\alpha j^\alpha \le \frac{p_1 m\, t^{2\alpha-1}}{4\tau p_4}$, where
\[ \alpha = \frac{p_1(p_1 + 2p_2 - p_3 - 2p_4)}{2(p_1 + p_2 - p_4)(p_1 - p_3)} \quad\text{and}\quad \tau = \frac{(p_1 + p_2 - p_4)(p_1 - p_3)}{p_1 + p_3}. \]

Without the assumption on $m$, we have the following general but weaker result:

Theorem 7. In $G(p_1,p_2,p_3,p_4,m)$ with $p_3 < p_1$, $p_4 < p_2$, let $S$ be the set of vertices with index $i$ satisfying $i \ge m^{1/(2\alpha)} t^{1 - 1/(2\alpha)}$. Let $G_S$ be the induced subgraph of $G(p_1,p_2,p_3,p_4,m)$ on $S$. Then we have:
1. $G_S$ dominates a random power law graph $G_1$, in which the expected degrees satisfy $d_i \ge c_1\, m\, (t/i)^\alpha$;
2. $G_S$ is dominated by a random power law graph $G_2$, in which the expected degrees satisfy $d_i \le c_2\, m\, (t/i)^\alpha$;
where $c_1$ and $c_2$ are explicit positive constants depending only on $p_1, p_2, p_3, p_4$.

Theorem 8. In $G(p_1,p_2,p_3,p_4,m)$ with $p_3 < p_1$, $p_4 < p_2$, let $T$ be the set of vertices with index $i$ satisfying $i \le m^{1/(2\alpha)} t^{1 - 1/(2\alpha)}$. Then the induced subgraph $G_T$ of $G(p_1,p_2,p_3,p_4,m)$ is almost a complete graph. Namely, $G_T$ dominates an edge-independent graph with $p_{ij} = 1 - o(1)$.
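The exponents in Theorem 6 can be evaluated for concrete parameters. The helper below is an exploratory sketch (not from the paper), assuming the formulas $\beta = 2 + (p_1+2p_3)/(p_1+2p_2-p_3-2p_4)$, $\alpha = p_1(p_1+2p_2-p_3-2p_4)/(2(p_1+p_2-p_4)(p_1-p_3))$, and $\tau = (p_1+p_2-p_4)(p_1-p_3)/(p_1+p_3)$. A useful consistency check: with no deletion ($p_3 = p_4 = 0$, $p_1 + p_2 = 1$) it recovers the classical preferential-attachment exponent $\beta = 2 + p_1/(2-p_1)$, together with $\alpha = 1 - p_1/2$ and $\tau = 1$.

```python
def exponents(p1: float, p2: float, p3: float, p4: float):
    """Power-law exponent beta, degree-growth exponent alpha, and edge
    density tau for the growth-deletion model G(p1, p2, p3, p4, m)."""
    assert p3 < p1 and p4 < p2 and p1 + p2 + p3 + p4 <= 1
    beta = 2 + (p1 + 2 * p3) / (p1 + 2 * p2 - p3 - 2 * p4)
    alpha = p1 * (p1 + 2 * p2 - p3 - 2 * p4) / (2 * (p1 + p2 - p4) * (p1 - p3))
    tau = (p1 + p2 - p4) * (p1 - p3) / (p1 + p3)
    return beta, alpha, tau

# Deletion-free special case: p1 = 0.6, p2 = 0.4.
beta, alpha, tau = exponents(0.6, 0.4, 0.0, 0.0)
```

Without deletion the identity $\beta = 1 + 1/\alpha$ also holds, which ties the off-line degree exponent to the on-line growth exponent.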

Let $n_t$ (resp. $\tau_t$) be the number of vertices (resp. edges) at time $t$. We assume the initial graph has $n_0$ vertices and $\tau_0$ edges. When $t$ is large enough, the graph at time $t$ depends on the initial graph only in a mild manner: the numbers $n_0$ and $\tau_0$ of vertices and edges in the initial graph affect only lower-order terms of the random variables under consideration. We first establish the following lemmas on the number of vertices and the number of edges.

Lemma 9. For any $t$ and $k > 1$, in $G(p_1,p_2,p_3,p_4)$ with an initial graph on $n_0$ vertices, the number $n_t$ of vertices at time $t$ satisfies
\[ (p_1 - p_3)t - \sqrt{2kt\log t} \le n_t - n_0 \le (p_1 - p_3)t + \sqrt{2kt\log t} \tag{2} \]
with probability at least $1 - 2t^{-k}$.

Proof: The expected number of vertices satisfies the recurrence relation $E(n_{t+1}) = E(n_t) + p_1 - p_3$. Hence, $E(n_t) = n_0 + (p_1 - p_3)t$. Since we assume $p_3 < p_1$, the graph grows as time $t$ increases. By Azuma's martingale inequality, we have
\[ \Pr(|n_t - E(n_t)| > a) \le 2 e^{-a^2/(2t)}. \]
By choosing $a = \sqrt{2kt\log t}$, with probability at least $1 - 2t^{-k}$, we have
\[ (p_1 - p_3)t - \sqrt{2kt\log t} \le n_t - n_0 \le (p_1 - p_3)t + \sqrt{2kt\log t}. \tag{3} \]

Lemma 10. The number $\tau_t$ of edges in $G(p_1,p_2,p_3,p_4,m)$, with an initial graph on $n_0$ vertices and $\tau_0$ edges, satisfies at time $t$
\[ |E(\tau_t) - \tau_0 - \tau m t| = O(m\sqrt{t\log t}), \]
where $\tau = \frac{(p_1+p_2-p_4)(p_1-p_3)}{p_1+p_3}$.

Proof: The expected number of edges satisfies
\[ E(\tau_{t+1}) = E(\tau_t) + mp_1 + mp_2 - p_3 E\!\left(\frac{2\tau_t}{n_t}\right) - mp_4. \tag{4} \]
Let $C$ denote a large constant satisfying the following:
1. $C > \frac{8p_3}{(p_1 - p_3)^2}$;

2. $C > 4\sqrt{s/\log s}$ for some large constant $s$.

We shall inductively prove the following inequality:
\[ |E(\tau_t) - \tau_0 - m\tau t| < Cm\sqrt{t\log t} \quad\text{for } t \ge s. \tag{5} \]
When $t = s$, we have $|E(\tau_s) - \tau_0 - m\tau s| \le 2ms \le Cm\sqrt{s\log s}$, by the definition of $C$. By the induction assumption, we assume that $|E(\tau_t) - \tau_0 - \tau m t| \le Cm\sqrt{t\log t}$ holds. Then we consider
\begin{align*}
E(\tau_{t+1}) - \tau_0 - \tau m(t+1) &= E(\tau_t) + mp_1 + mp_2 - p_3 E\!\left(\frac{2\tau_t}{n_t}\right) - mp_4 - \tau_0 - \tau m(t+1) \\
&= \left(1 - \frac{2p_3}{(p_1-p_3)t}\right)\big(E(\tau_t) - m\tau t - \tau_0\big) - 2p_3\left(E\!\left(\frac{\tau_t}{n_t}\right) - \frac{E(\tau_t)}{(p_1-p_3)t}\right) - \frac{2p_3}{(p_1-p_3)t}\,\tau_0,
\end{align*}
where we used $m(p_1 + p_2 - p_4 - \tau) = \frac{2p_3 m\tau}{p_1 - p_3}$, which follows from the definition of $\tau$. Hence
\begin{align*}
|E(\tau_{t+1}) - \tau_0 - \tau m(t+1)| &\le \left(1 - \frac{2p_3}{(p_1-p_3)t}\right)|E(\tau_t) - m\tau t - \tau_0| + 2p_3\left|E\!\left(\frac{\tau_t}{n_t}\right) - \frac{E(\tau_t)}{(p_1-p_3)t}\right| + \frac{2p_3}{(p_1-p_3)t}\,\tau_0 \\
&\le |E(\tau_t) - \tau_0 - \tau m t| + 2p_3\, E\!\left(\tau_t\left|\frac{1}{n_t} - \frac{1}{(p_1-p_3)t}\right|\right) + O\!\left(\frac{1}{t}\right).
\end{align*}
We wish to substitute $n_t$ by $n_0 + (p_1-p_3)t + O(\sqrt{kt\log t})$ if possible. However, $E(\tau_t\,|\frac{1}{n_t} - \frac{1}{(p_1-p_3)t}|)$ can be large when $n_t$ is atypically small. We consider $S$, the event that $|n_t - n_0 - (p_1-p_3)t| < 4\sqrt{t\log t}$. We have $\Pr(S) > 1 - 2t^{-8}$ from Lemma 9. Let $1_S$ be the indicator random variable for the event $S$, and let $\bar S$ denote the complement of $S$. We can derive an upper bound for $E(|\tau_{t+1} - \tau_0 - \tau m(t+1)|\,1_S)$ by a similar argument as above and obtain
\[ E(|\tau_{t+1} - \tau_0 - \tau m(t+1)|\,1_S) \le E(|\tau_t - \tau_0 - \tau m t|\,1_S) + 2p_3\, E\!\left(\tau_t\left|\frac{1}{n_t} - \frac{1}{(p_1-p_3)t}\right| 1_S\right) + O\!\left(\frac{1}{t}\right). \tag{6} \]
We consider each term in the last inequality separately. On the event $S$,
\[ 2p_3\, E\!\left(\tau_t\left|\frac{1}{n_t} - \frac{1}{(p_1-p_3)t}\right| 1_S\right) \le 2p_3\,\frac{mt}{(p_1-p_3)t}\cdot\frac{4\sqrt{t\log t}}{(p_1-p_3)t} \le \frac{8p_3\, m}{(p_1-p_3)^2}\sqrt{\frac{\log t}{t}} + O\!\left(\frac{\log t}{t}\right). \tag{7} \]

Since $\Pr(\bar S) \le 2t^{-8}$ and $\tau_t \le \tau_0 + mt$, we have
\begin{align*}
E(|\tau_{t+1} - \tau_0 - \tau m(t+1)|) &= E(|\tau_{t+1} - \tau_0 - \tau m(t+1)|\,1_S) + E(|\tau_{t+1} - \tau_0 - \tau m(t+1)|\,1_{\bar S}) \\
&\le E(|\tau_{t+1} - \tau_0 - \tau m(t+1)|\,1_S) + 2m(t+1)\Pr(\bar S) \\
&\le E(|\tau_{t+1} - \tau_0 - \tau m(t+1)|\,1_S) + \frac{4m(t+1)}{t^8}.
\end{align*}
By inequalities (6) and (7), we have
\begin{align*}
E(|\tau_{t+1} - \tau_0 - \tau m(t+1)|) &\le E(|\tau_t - \tau_0 - \tau m t|\,1_S) + \frac{8p_3 m}{(p_1-p_3)^2}\sqrt{\frac{\log t}{t}} + O\!\left(\frac{\log t}{t}\right) + 4m t^{-7} \\
&\le Cm\sqrt{t\log t} + \frac{8p_3 m}{(p_1-p_3)^2}\sqrt{\frac{\log t}{t}} + O\!\left(\frac{\log t}{t}\right) + 4m t^{-7} \\
&\le Cm\sqrt{(t+1)\log(t+1)}.
\end{align*}
The proof of Lemma 10 is complete.

To derive the concentration result on $\tau_t$, we will need to bound how $E(\tau_t)$ changes as the initial number of edges $\tau_0$ changes.

Lemma 11. We consider two random graphs $G_t$ and $G'_t$ in $G(p_1,p_2,p_3,p_4,m)$. Suppose that $G_t$ initially has $\tau_0$ edges and $n_0$ vertices, and $G'_t$ initially has $\tau'_0$ edges and $n'_0$ vertices. Let $\tau_t$ and $\tau'_t$ denote the number of edges in $G_t$ and $G'_t$, respectively. If $n_0 - n'_0 = O(1)$, then we have
\[ |E(\tau_t) - E(\tau'_t)| \le |\tau_0 - \tau'_0| + O(\log t). \]

Proof: From equation (4), we have
\begin{align*}
E(\tau_{t+1}) &= E(\tau_t) + mp_1 + mp_2 - p_3 E\!\left(\frac{2\tau_t}{n_t}\right) - mp_4, \\
E(\tau'_{t+1}) &= E(\tau'_t) + mp_1 + mp_2 - p_3 E\!\left(\frac{2\tau'_t}{n'_t}\right) - mp_4.
\end{align*}
Then
\[ E(\tau_{t+1} - \tau'_{t+1}) = E(\tau_t - \tau'_t) - 2p_3\, E\!\left(\frac{\tau_t}{n_t} - \frac{\tau'_t}{n'_t}\right). \tag{8} \]
Since both $n_t - n_0$ and $n'_t - n'_0$ follow the same distribution, we have $\Pr(n_t = x) = \Pr(n'_t = x + n'_0 - n_0)$ for any $x$.

We can rewrite $E(\frac{\tau_t}{n_t} - \frac{\tau'_t}{n'_t})$ as follows:
\begin{align*}
E\!\left(\frac{\tau_t}{n_t} - \frac{\tau'_t}{n'_t}\right) &= \sum_x \frac{1}{x}\, E(\tau_t \mid n_t = x)\Pr(n_t = x) - \sum_y \frac{1}{y}\, E(\tau'_t \mid n'_t = y)\Pr(n'_t = y) \\
&= \sum_x \left(\frac{1}{x}\, E(\tau_t \mid n_t = x) - \frac{1}{x + n'_0 - n_0}\, E(\tau'_t \mid n'_t = x + n'_0 - n_0)\right)\Pr(n_t = x) \\
&= \sum_x \Pr(n_t = x)\Big(\frac{1}{x}\big(E(\tau_t \mid n_t = x) - E(\tau'_t \mid n'_t = x + n'_0 - n_0)\big) \\
&\qquad\qquad + \Big(\frac{1}{x} - \frac{1}{x + n'_0 - n_0}\Big)\, E(\tau'_t \mid n'_t = x + n'_0 - n_0)\Big).
\end{align*}
From Lemma 9, with probability at least $1 - 2t^{-8}$, we have $|n_t - n_0 - (p_1-p_3)t| \le 4\sqrt{t\log t}$. Let $S$ denote the set of $x$ satisfying $|x - n_0 - (p_1-p_3)t| \le 4\sqrt{t\log t}$. The probability that $n_t$ is not in $S$ is at most $2t^{-8}$. If this case happens, the contribution to $E(\frac{\tau_t}{n_t} - \frac{\tau'_t}{n'_t})$ is $O(mt \cdot t^{-8}) = O(t^{-7})$, which is a minor term. In addition, $\tau'_t$ is always upper bounded by $\tau'_0 + mt$. We can bound the second term as follows:
\begin{align*}
&\left|\sum_x \Big(\frac{1}{x} - \frac{1}{x + n'_0 - n_0}\Big)\, E(\tau'_t \mid n'_t = x + n'_0 - n_0)\Pr(n_t = x)\right| \\
&\qquad= \left|\sum_{x\in S} \Big(\frac{1}{x} - \frac{1}{x + n'_0 - n_0}\Big)\, E(\tau'_t \mid n'_t = x + n'_0 - n_0)\Pr(n_t = x)\right| + O(t^{-7}) \\
&\qquad\le \frac{|n_0 - n'_0|\,(\tau'_0 + mt)}{\big(n_0 + (p_1-p_3)t - 4\sqrt{t\log t}\big)\big(n'_0 + (p_1-p_3)t - 4\sqrt{t\log t}\big)} + O(t^{-7}) \\
&\qquad= O\!\left(\frac{1}{t}\right).
\end{align*}

Hence, we obtain
\begin{align*}
E\!\left(\frac{\tau_t}{n_t} - \frac{\tau'_t}{n'_t}\right) &= \sum_{x \in S} \Pr(n_t = x)\,\frac{1}{x}\big(E(\tau_t \mid n_t = x) - E(\tau'_t \mid n'_t = x + n'_0 - n_0)\big) + O\!\left(\frac{1}{t}\right) \\
&= \frac{1}{n_0 + (p_1-p_3)t + O(\sqrt{t\log t})} \sum_{x\in S} \Pr(n_t = x)\big(E(\tau_t \mid n_t = x) - E(\tau'_t \mid n'_t = x + n'_0 - n_0)\big) + O\!\left(\frac{1}{t}\right) \\
&= \frac{1}{n_0 + (p_1-p_3)t + O(\sqrt{t\log t})}\,\big(E(\tau_t) - E(\tau'_t)\big) + O\!\left(\frac{1}{t}\right).
\end{align*}
Combining this with equation (8), we have
\[ E(\tau_{t+1} - \tau'_{t+1}) = \left(1 - \frac{2p_3}{n_0 + (p_1-p_3)t + O(\sqrt{t\log t})}\right) E(\tau_t - \tau'_t) + O\!\left(\frac{1}{t}\right). \]
Therefore we have
\begin{align*}
|E(\tau_{t+1} - \tau'_{t+1})| &\le \left(1 - \frac{2p_3}{n_0 + (p_1-p_3)t + O(\sqrt{t\log t})}\right)|E(\tau_t - \tau'_t)| + O\!\left(\frac{1}{t}\right) \\
&\le |E(\tau_t - \tau'_t)| + O\!\left(\frac{1}{t}\right) \le |\tau_0 - \tau'_0| + \sum_{i=1}^t O\!\left(\frac{1}{i}\right) = |\tau_0 - \tau'_0| + O(\log t).
\end{align*}
The proof of the lemma is complete.

In order to prove the concentration result for the number of edges of $G(p_1,p_2,p_3,p_4,m)$, we shall use the general martingale inequality. To establish the nearly Lipschitz coefficients, we will derive upper bounds for the degrees by considering the special case without deletion. For $p_1 = \gamma$, $p_2 = 1-\gamma$, $p_3 = p_4 = 0$, the model $G(\gamma, 1-\gamma, 0, 0, m)$ is just the preferential attachment model. The number of edges increases by $m$ at each step, so the total number of edges at time $t$ is exactly $mt + \tau_0$, where $\tau_0$ is the number of edges of the initial graph at $t = 0$. We label a vertex $u$ by $i$ if $u$ is generated at time $i$. Let $d_i(t)$ denote the degree of the vertex $i$ at time $t$.
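The preferential attachment model $G(\gamma, 1-\gamma, 0, 0, m)$ just described can be simulated directly with the standard endpoint-list trick: a flat list with one entry per edge endpoint, so that a uniform draw from it is a degree-proportional draw of a vertex. The sketch below is illustrative (not from the paper); the parameter values and the self-loop initial graph are arbitrary choices. Early vertices should exhibit the $(t/i)^{1-\gamma/2}$ degree growth analyzed in the next lemma.

```python
import random

random.seed(7)
gamma, m, T = 0.5, 2, 3000
deg = [2 * m]            # vertex 0: initial graph of m self-loops
ends = [0] * (2 * m)     # flat list of edge endpoints

for _ in range(T):
    if random.random() < gamma:          # vertex-growth step: new vertex, m edges
        v = len(deg)
        deg.append(0)
        for _ in range(m):
            u = random.choice(ends)      # degree-proportional endpoint
            deg[v] += 1; deg[u] += 1
            ends.extend((v, u))
    else:                                # edge-growth step: m new edges,
        for _ in range(m):               # both endpoints degree-proportional
            u = random.choice(ends); w = random.choice(ends)
            deg[u] += 1; deg[w] += 1
            ends.extend((u, w))

# Every step adds m edges, so the total degree is 2m(T + 1); the oldest
# vertex accumulates a degree far above the typical one.
total_degree = sum(deg)
```

Idle steps (probability $1 - p_1 - p_2$ in $G(p_1,p_2,0,0,m)$) are omitted here, matching the re-parameterization of time used below.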

Lemma 12. For the preferential attachment model $G(\gamma, 1-\gamma, 0, 0, m)$ we have, with probability at least $1 - t^{-k}$ (for any $k > 1$), that the degree of vertex $i$ at time $t$ satisfies
\[ d_i(t) \le mk\left(\frac{t}{i}\right)^{1 - \gamma/2}\log t. \]

Proof: For the preferential attachment model $G(\gamma, 1-\gamma, 0, 0, m)$, the total number of edges at time $t$ is $\tau_t = mt + \tau_0$. The recurrence for the expected value of $d_i(t)$ satisfies
\[ E(d_i(t+1) \mid d_i(t)) = \prod_{j=1}^m\left(1 + \frac{2-\gamma}{2\tau_t + j}\right) d_i(t) = \left(1 + \frac{m(2-\gamma)}{2\tau_t} + O\!\left(\frac{m^2}{t^2}\right)\right) d_i(t). \]
The term $O(\frac{m^2}{t^2})\,d_i(t)$ captures the difference between adding $m$ edges sequentially and adding $m$ edges simultaneously. We denote
\[ \theta_t = 1 + \frac{m(2-\gamma)}{2\tau_t} + O\!\left(\frac{m^2}{t^2}\right), \]
so that $E(d_i(t+1) \mid d_i(t)) = \theta_t\, d_i(t)$. Let $X_t$ be the scaled version of $d_i(t)$ defined as follows:
\[ X_t = \frac{d_i(t)}{\prod_{j=i+1}^{t-1} \theta_j}. \]
We have
\[ E(X_{t+1} \mid X_t) = \frac{E(d_i(t+1) \mid d_i(t))}{\prod_{j=i+1}^{t}\theta_j} = \frac{\theta_t\, d_i(t)}{\prod_{j=i+1}^{t}\theta_j} = X_t. \]
Thus $X_t$ forms a martingale with $E(X_t) = X_{i+1} = d_i(i+1) = m$. We apply Theorem 2. First we compute
\begin{align*}
\mathrm{Var}(X_{t+1} \mid X_t) &= \frac{1}{\prod_{j=i+1}^{t}\theta_j^2}\,\mathrm{Var}(d_i(t+1) \mid d_i(t)) \\
&\le \frac{1}{\prod_{j=i+1}^{t}\theta_j^2}\, E\big((d_i(t+1) - d_i(t))^2 \mid d_i(t)\big) \\
&\le \frac{1}{\prod_{j=i+1}^{t}\theta_j^2}\, m\left(\gamma\,\frac{d_i(t)}{2\tau_t} + (1-\gamma)\,\frac{d_i(t)}{\tau_t}\right) \\
&= \frac{(\theta_t - 1)\, d_i(t)}{\prod_{j=i+1}^{t}\theta_j^2} \le \frac{\theta_t - 1}{\theta_t^2 \prod_{j=i+1}^{t-1}\theta_j}\, X_t.
\end{align*}

Let $\phi_t = \frac{\theta_t - 1}{\theta_t^2 \prod_{j=i+1}^{t-1}\theta_j}$. We have
\begin{align*}
\phi_t &= \frac{\frac{m(2-\gamma)}{2(mt+\tau_0)} + O\big(\frac{m^2}{t^2}\big)}{\Big(1 + \frac{m(2-\gamma)}{2(mt+\tau_0)} + O\big(\frac{m^2}{t^2}\big)\Big)^2 \prod_{j=i+1}^{t-1}\Big(1 + \frac{m(2-\gamma)}{2(mj+\tau_0)} + O\big(\frac{m^2}{j^2}\big)\Big)} \\
&\approx \frac{2-\gamma}{2t}\, e^{-\frac{2-\gamma}{2}\sum_{j=i+1}^{t} \frac{1}{j + \tau_0/m}} \approx \frac{2-\gamma}{2t}\left(\frac{i}{t}\right)^{1-\gamma/2} = \frac{2-\gamma}{2}\, i^{1-\gamma/2}\, t^{\gamma/2 - 2}.
\end{align*}
In particular, we have
\[ \sum_{j=i+1}^{t} \phi_j \approx \frac{2-\gamma}{2} \sum_{j=i+1}^{t} i^{1-\gamma/2}\, j^{\gamma/2 - 2} \le 1. \]
Concerning the last condition in Theorem 2, we have
\[ X_{t+1} - E(X_{t+1} \mid X_t) = \frac{1}{\prod_{j=i+1}^{t}\theta_j}\big(d_i(t+1) - E(d_i(t+1) \mid d_i(t))\big) \le \frac{1}{\prod_{j=i+1}^{t}\theta_j}\big(d_i(t+1) - d_i(t)\big) \le 2m. \]
With $M = 2m$ and $\sum_{j=i+1}^{t}\phi_j \le 1$, Theorem 2 gives
\[ \Pr(X_t \ge m + a) \le e^{-a^2/(2(m + a + 2ma/3))}. \]
By choosing $a = m(k\log t - 1)$, with probability at least $1 - O(t^{-k})$, we have
\[ X_t < m + m(k\log t - 1) = mk\log t. \]
Hence, with probability at least $1 - O(t^{-k})$,
\[ d_i(t) \le mk\log t\left(\frac{t}{i}\right)^{1 - \gamma/2}. \]

Remark: In the above proof, $d_i(t+1) - d_i(t)$ roughly follows the Poisson distribution with mean
\[ \frac{m(2-\gamma)\, d_i(t)}{2\tau_t} = O\big(i^{-(1-\gamma/2)}\, t^{-\gamma/2}\big) = O(t^{-\gamma/2}). \]

It follows that, with probability at least $1 - O(t^{-k})$, $d_i(t+1) - d_i(t)$ is bounded by $2k/\gamma$. Applying Theorem 3 with $M = 2k/\gamma$ and $\sum_{j=i+1}^{t}\phi_j \le 1$, we get
\[ \Pr(X_t \ge m + a) \le e^{-a^2/(2(m + a + 2ka/(3\gamma)))} + O(t^{-k}). \]
When $m \gg \log t$, we can choose $a = \sqrt{3mk\log t}$, so that $m$ dominates $2ka/(3\gamma)$. In this case, we have
\[ \Pr(X_t \ge m + a) \le e^{-a^2/(2(m + a + 2ka/(3\gamma)))} + O(t^{-k}) = O(t^{-k}). \]
With probability at least $1 - O(t^{-k})$, we have
\[ d_i(t) \le \big(m + \sqrt{3mk\log t}\big)\left(\frac{t}{i}\right)^{1 - \gamma/2}. \]
Similar arguments using Theorem 5 give a lower bound of the same order: if $i$ survives at time $t$ in the preferential attachment model $G(\gamma, 1-\gamma, 0, 0, m)$, then, with probability at least $1 - O(t^{-k})$, we have
\[ d_i(t) \ge \big(m - \sqrt{3mk\log t}\big)\left(\frac{t}{i}\right)^{1 - \gamma/2}. \]
The above bounds will be further generalized to the model $G(p_1,p_2,p_3,p_4,m)$ later in Lemma 15, with similar ideas.

Lemma 13. For any $k$, $i$, and $t$, in the graph $G(p_1,p_2,p_3,p_4,m)$, the degree of vertex $i$ at time $t$ satisfies
\[ d_i(t) \le Ckm\log t \left(\frac{t}{i}\right)^{1 - \frac{p_1}{2(p_1+p_2)}} \tag{9} \]
with probability at least $1 - O(t^{-k})$, for some absolute constant $C$.

Proof: We compare $G(p_1,p_2,p_3,p_4,m)$ with the following preferential attachment model $G(p_1,p_2,0,0,m)$ without deletion. At each step, with probability $p_1$, take a vertex-growth step and add $m$ edges from the new vertex to the current graph; with probability $p_2$, take an edge-growth step in which $m$ edges are added to the current graph; with probability $1 - p_1 - p_2$, do nothing.

We wish to show that the degree $d_u(t)$ in the model $G(p_1,p_2,p_3,p_4,m)$ (with deletion) is dominated by the degree $d_u(t)$ in the model $G(p_1,p_2,0,0,m)$. Basically it is a balls-and-bins argument, similar to the one given in [4]. The number of balls in the first bin (denoted by $a_1$) represents the degree of $u$, while the number of balls in the second bin (denoted by $a_2$) represents the sum of the degrees of the vertices other than $u$. When an edge incident to $u$ is added to the

graph $G(p_1,p_2,p_3,p_4,m)$, it increases both $a_1$ and $a_2$ by $1$. When an edge not incident to $u$ is added to the graph, $a_2$ increases by $2$ while $a_1$ remains the same. Without loss of generality, we can assume $a_1$ is less than $a_2$ in the initial graph. If an edge $uv$, which is incident to $u$, is deleted later, we delay adding this edge until the very moment the edge is to be deleted. At the moment of adding the edge $uv$, the two bins have $a_1$ and $a_2$ balls, respectively. When we delay adding the edge $uv$, the numbers of balls in the two bins are still $a_1$ and $a_2$, compared with $a_1 + 1$ and $a_2 + 1$ in the original random process. Since $a_1 < a_2$, the random process with delay dominates the original random process. If an edge $vw$ which is not incident to $u$ is deleted, we also delay adding this edge until the very moment the edge is to be deleted. Equivalently, we compare the process of $a_1$ and $a_2$ balls in the bins to the process with $a_1$ and $a_2 + 2$ balls. The random process without delay dominates the one with delay. Therefore, for any $u$, the degree of $u$ in the model without deletion dominates the degree in the model with deletion.

It remains to derive an appropriate upper bound on $d_u(t)$ for the model $G(p_1,p_2,0,0,m)$. If a vertex $u$ is added at time $i$, we label it by $i$. Let us remove the idle steps and re-parameterize the time. For $\gamma = \frac{p_1}{p_1+p_2}$, we have $G(p_1,p_2,0,0,m) = G(\gamma, 1-\gamma, 0, 0, m)$. We can use the upper bound for the degrees of $G(\gamma, 1-\gamma, 0, 0, m)$ as in Lemma 12. This completes the proof of Lemma 13.

Lemma 10 can be further strengthened as follows:

Lemma 14. In $G(p_1,p_2,p_3,p_4,m)$ with an initial graph on $n_0$ vertices and $\tau_0$ edges, the total number of edges at time $t$ is
\[ \tau_t = \tau_0 + \tau m t + O\!\big(kmt^{1 - \frac{p_1}{4(p_1+p_2)}}\log^{3/2} t\big) \]
with probability at least $1 - O(t^{-k})$, where $\tau = \frac{(p_1+p_2-p_4)(p_1-p_3)}{p_1+p_3}$.

Proof: For a fixed $s$, $1 \le s \le t$, we define $\tau_s(t) = \#\{ij \in E(G_t) \mid s \le i, j \le t\}$. We use Lemma 13 with the initial graph taken to be the graph $G_s$ at time $s$.
Then Lemma 13 implies that, with probability at least $1 - O(t^{-k})$, we have
\[ \tau_t - \tau_s(t) \le \sum_{i \le s} d_i(t) \le \sum_{i \le s} C\left(\frac{t}{i}\right)^{1 - \frac{p_1}{2(p_1+p_2)}} mk\log t \le \frac{2C(p_1+p_2)}{p_1}\, t^{1 - \frac{p_1}{2(p_1+p_2)}}\, s^{\frac{p_1}{2(p_1+p_2)}}\, mk\log t. \]

By choosing $s = \sqrt{t}$, we have
\[ \tau_t = \tau_s(t) + O\!\big(t^{1 - \frac{p_1}{4(p_1+p_2)}}\, mk\log t\big). \tag{10} \]
We want to show that, with probability at least $1 - O(t^{-k/2+1})$, we have
\[ |\tau_t - E(\tau_t)| \le |\tau_t - \tau_s(t)| + |E(\tau_t) - E(\tau_s(t))| + |\tau_s(t) - E(\tau_s(t))| \le O\!\big(mkt^{1 - \frac{p_1}{4(p_1+p_2)}}\log^{3/2} t\big). \]
It suffices to show that $|\tau_s(t) - E(\tau_s(t))| = O(mkt^{1 - \frac{p_1}{4(p_1+p_2)}}\log^{3/2} t)$. We use the general martingale inequality as follows. Let
\[ c_i = Ckm\left(\frac{i}{s}\right)^{1 - \frac{p_1}{2(p_1+p_2)}}\log t, \]
where $C$ is the constant in inequality (9). The nodes of the decision tree $T$ are the graphs generated by the graph model $G(p_1,p_2,p_3,p_4,m)$. A path from the root to a leaf in the decision tree $T$ is associated with a chain of the graph evolution. The value $f(i)$ at each node $G_i$ (as defined in the proof of the nearly Lipschitz inequality above) is the expected number of edges at time $t$ with initial graph $G_i$ at time $i$. We note that $X_i$ might be different from the number of edges of $G_i$, which is denoted by $\tau_i$. Let $G_{i+1}$ be any child node of $G_i$ in the decision tree $T$. We define $f(i+1)$ and $\tau_{i+1}$ in a similar way. By Lemma 11, we have
\[ |f(i+1) - f(i)| \le |\tau_{i+1} - \tau_i| + O(\log t) \le (1 + o(1))\, c_i. \]
We say that an edge of the decision tree $T$ is bad if and only if it deletes a vertex of degree greater than $(1+o(1))c_i$ at time $i$. A leaf of $T$ is good if none of the graphs in its chain contains a vertex with degree larger than $(1+o(1))c_i$ at time $i$. Therefore, the probability of the set $B$ consisting of bad leaves is at most
\[ \Pr(B) \le \sum_{l=s}^{t} O(l^{-k}) = O(s^{-k+1}). \]
By the nearly Lipschitz martingale inequality (with bad set $B$), we have
\begin{align*}
\Pr(|\tau_s(t) - E(\tau_s(t))| > a) &\le e^{-a^2/(2\sum_{l=s}^{t} c_l^2)} + \Pr(B) \\
&\le e^{-a^2 \big/ \big(2\sum_{l=s}^{t} (\frac{l}{s})^{2 - \frac{p_1}{p_1+p_2}} (Cmk\log t)^2\big)} + O(s^{-k+1}) \\
&\le e^{-a^2 \big/ \big(C^2\, \frac{t^{3 - \frac{p_1}{p_1+p_2}}}{s^{2 - \frac{p_1}{p_1+p_2}}}\, m^2 k^2 \log^2 t\big)} + O(s^{-k+1}).
\end{align*}
We choose $s = \sqrt{t}$ and $a = C^{3/2}\, t^{1 - \frac{p_1}{4(p_1+p_2)}}\, mk\log^{3/2} t$. With probability at least $1 - O(t^{-k/2+1})$, we have
\[ |\tau_s(t) - E(\tau_s(t))| = O\!\big(t^{1 - \frac{p_1}{4(p_1+p_2)}}\, mk\log^{3/2} t\big). \]


More information

On an anti-ramsey type result

On an anti-ramsey type result On an anti-ramsey type result Noga Alon, Hanno Lefmann and Vojtĕch Rödl Abstract We consider anti-ramsey type results. For a given coloring of the k-element subsets of an n-element set X, where two k-element

More information

CMSC 858T: Randomized Algorithms Spring 2003 Handout 8: The Local Lemma

CMSC 858T: Randomized Algorithms Spring 2003 Handout 8: The Local Lemma CMSC 858T: Randomized Algorithms Spring 2003 Handout 8: The Local Lemma Please Note: The references at the end are given for extra reading if you are interested in exploring these ideas further. You are

More information

No: 10 04. Bilkent University. Monotonic Extension. Farhad Husseinov. Discussion Papers. Department of Economics

No: 10 04. Bilkent University. Monotonic Extension. Farhad Husseinov. Discussion Papers. Department of Economics No: 10 04 Bilkent University Monotonic Extension Farhad Husseinov Discussion Papers Department of Economics The Discussion Papers of the Department of Economics are intended to make the initial results

More information

About the inverse football pool problem for 9 games 1

About the inverse football pool problem for 9 games 1 Seventh International Workshop on Optimal Codes and Related Topics September 6-1, 013, Albena, Bulgaria pp. 15-133 About the inverse football pool problem for 9 games 1 Emil Kolev Tsonka Baicheva Institute

More information

0 <β 1 let u(x) u(y) kuk u := sup u(x) and [u] β := sup

0 <β 1 let u(x) u(y) kuk u := sup u(x) and [u] β := sup 456 BRUCE K. DRIVER 24. Hölder Spaces Notation 24.1. Let Ω be an open subset of R d,bc(ω) and BC( Ω) be the bounded continuous functions on Ω and Ω respectively. By identifying f BC( Ω) with f Ω BC(Ω),

More information

Cycles and clique-minors in expanders

Cycles and clique-minors in expanders Cycles and clique-minors in expanders Benny Sudakov UCLA and Princeton University Expanders Definition: The vertex boundary of a subset X of a graph G: X = { all vertices in G\X with at least one neighbor

More information

Inner Product Spaces

Inner Product Spaces Math 571 Inner Product Spaces 1. Preliminaries An inner product space is a vector space V along with a function, called an inner product which associates each pair of vectors u, v with a scalar u, v, and

More information

The average distances in random graphs with given expected degrees

The average distances in random graphs with given expected degrees Classification: Physical Sciences, Mathematics The average distances in random graphs with given expected degrees by Fan Chung 1 and Linyuan Lu Department of Mathematics University of California at San

More information

Large induced subgraphs with all degrees odd

Large induced subgraphs with all degrees odd Large induced subgraphs with all degrees odd A.D. Scott Department of Pure Mathematics and Mathematical Statistics, University of Cambridge, England Abstract: We prove that every connected graph of order

More information

(Refer Slide Time: 01:11-01:27)

(Refer Slide Time: 01:11-01:27) Digital Signal Processing Prof. S. C. Dutta Roy Department of Electrical Engineering Indian Institute of Technology, Delhi Lecture - 6 Digital systems (contd.); inverse systems, stability, FIR and IIR,

More information

! Solve problem to optimality. ! Solve problem in poly-time. ! Solve arbitrary instances of the problem. !-approximation algorithm.

! Solve problem to optimality. ! Solve problem in poly-time. ! Solve arbitrary instances of the problem. !-approximation algorithm. Approximation Algorithms Chapter Approximation Algorithms Q Suppose I need to solve an NP-hard problem What should I do? A Theory says you're unlikely to find a poly-time algorithm Must sacrifice one of

More information

Adaptive Online Gradient Descent

Adaptive Online Gradient Descent Adaptive Online Gradient Descent Peter L Bartlett Division of Computer Science Department of Statistics UC Berkeley Berkeley, CA 94709 bartlett@csberkeleyedu Elad Hazan IBM Almaden Research Center 650

More information

The Exponential Distribution

The Exponential Distribution 21 The Exponential Distribution From Discrete-Time to Continuous-Time: In Chapter 6 of the text we will be considering Markov processes in continuous time. In a sense, we already have a very good understanding

More information

1 if 1 x 0 1 if 0 x 1

1 if 1 x 0 1 if 0 x 1 Chapter 3 Continuity In this chapter we begin by defining the fundamental notion of continuity for real valued functions of a single real variable. When trying to decide whether a given function is or

More information

Vector and Matrix Norms

Vector and Matrix Norms Chapter 1 Vector and Matrix Norms 11 Vector Spaces Let F be a field (such as the real numbers, R, or complex numbers, C) with elements called scalars A Vector Space, V, over the field F is a non-empty

More information

A REMARK ON ALMOST MOORE DIGRAPHS OF DEGREE THREE. 1. Introduction and Preliminaries

A REMARK ON ALMOST MOORE DIGRAPHS OF DEGREE THREE. 1. Introduction and Preliminaries Acta Math. Univ. Comenianae Vol. LXVI, 2(1997), pp. 285 291 285 A REMARK ON ALMOST MOORE DIGRAPHS OF DEGREE THREE E. T. BASKORO, M. MILLER and J. ŠIRÁŇ Abstract. It is well known that Moore digraphs do

More information

Department of Mathematics, Indian Institute of Technology, Kharagpur Assignment 2-3, Probability and Statistics, March 2015. Due:-March 25, 2015.

Department of Mathematics, Indian Institute of Technology, Kharagpur Assignment 2-3, Probability and Statistics, March 2015. Due:-March 25, 2015. Department of Mathematics, Indian Institute of Technology, Kharagpur Assignment -3, Probability and Statistics, March 05. Due:-March 5, 05.. Show that the function 0 for x < x+ F (x) = 4 for x < for x

More information

ON INDUCED SUBGRAPHS WITH ALL DEGREES ODD. 1. Introduction

ON INDUCED SUBGRAPHS WITH ALL DEGREES ODD. 1. Introduction ON INDUCED SUBGRAPHS WITH ALL DEGREES ODD A.D. SCOTT Abstract. Gallai proved that the vertex set of any graph can be partitioned into two sets, each inducing a subgraph with all degrees even. We prove

More information

On the k-path cover problem for cacti

On the k-path cover problem for cacti On the k-path cover problem for cacti Zemin Jin and Xueliang Li Center for Combinatorics and LPMC Nankai University Tianjin 300071, P.R. China zeminjin@eyou.com, x.li@eyou.com Abstract In this paper we

More information

Lecture 6 Online and streaming algorithms for clustering

Lecture 6 Online and streaming algorithms for clustering CSE 291: Unsupervised learning Spring 2008 Lecture 6 Online and streaming algorithms for clustering 6.1 On-line k-clustering To the extent that clustering takes place in the brain, it happens in an on-line

More information

Zachary Monaco Georgia College Olympic Coloring: Go For The Gold

Zachary Monaco Georgia College Olympic Coloring: Go For The Gold Zachary Monaco Georgia College Olympic Coloring: Go For The Gold Coloring the vertices or edges of a graph leads to a variety of interesting applications in graph theory These applications include various

More information

Fairness in Routing and Load Balancing

Fairness in Routing and Load Balancing Fairness in Routing and Load Balancing Jon Kleinberg Yuval Rabani Éva Tardos Abstract We consider the issue of network routing subject to explicit fairness conditions. The optimization of fairness criteria

More information

Message-passing sequential detection of multiple change points in networks

Message-passing sequential detection of multiple change points in networks Message-passing sequential detection of multiple change points in networks Long Nguyen, Arash Amini Ram Rajagopal University of Michigan Stanford University ISIT, Boston, July 2012 Nguyen/Amini/Rajagopal

More information

A 2-factor in which each cycle has long length in claw-free graphs

A 2-factor in which each cycle has long length in claw-free graphs A -factor in which each cycle has long length in claw-free graphs Roman Čada Shuya Chiba Kiyoshi Yoshimoto 3 Department of Mathematics University of West Bohemia and Institute of Theoretical Computer Science

More information

Chapter 11. 11.1 Load Balancing. Approximation Algorithms. Load Balancing. Load Balancing on 2 Machines. Load Balancing: Greedy Scheduling

Chapter 11. 11.1 Load Balancing. Approximation Algorithms. Load Balancing. Load Balancing on 2 Machines. Load Balancing: Greedy Scheduling Approximation Algorithms Chapter Approximation Algorithms Q. Suppose I need to solve an NP-hard problem. What should I do? A. Theory says you're unlikely to find a poly-time algorithm. Must sacrifice one

More information

A Sublinear Bipartiteness Tester for Bounded Degree Graphs

A Sublinear Bipartiteness Tester for Bounded Degree Graphs A Sublinear Bipartiteness Tester for Bounded Degree Graphs Oded Goldreich Dana Ron February 5, 1998 Abstract We present a sublinear-time algorithm for testing whether a bounded degree graph is bipartite

More information

On the independence number of graphs with maximum degree 3

On the independence number of graphs with maximum degree 3 On the independence number of graphs with maximum degree 3 Iyad A. Kanj Fenghui Zhang Abstract Let G be an undirected graph with maximum degree at most 3 such that G does not contain any of the three graphs

More information

CS 598CSC: Combinatorial Optimization Lecture date: 2/4/2010

CS 598CSC: Combinatorial Optimization Lecture date: 2/4/2010 CS 598CSC: Combinatorial Optimization Lecture date: /4/010 Instructor: Chandra Chekuri Scribe: David Morrison Gomory-Hu Trees (The work in this section closely follows [3]) Let G = (V, E) be an undirected

More information

LECTURE 4. Last time: Lecture outline

LECTURE 4. Last time: Lecture outline LECTURE 4 Last time: Types of convergence Weak Law of Large Numbers Strong Law of Large Numbers Asymptotic Equipartition Property Lecture outline Stochastic processes Markov chains Entropy rate Random

More information

Some Polynomial Theorems. John Kennedy Mathematics Department Santa Monica College 1900 Pico Blvd. Santa Monica, CA 90405 rkennedy@ix.netcom.

Some Polynomial Theorems. John Kennedy Mathematics Department Santa Monica College 1900 Pico Blvd. Santa Monica, CA 90405 rkennedy@ix.netcom. Some Polynomial Theorems by John Kennedy Mathematics Department Santa Monica College 1900 Pico Blvd. Santa Monica, CA 90405 rkennedy@ix.netcom.com This paper contains a collection of 31 theorems, lemmas,

More information

Principle of Data Reduction

Principle of Data Reduction Chapter 6 Principle of Data Reduction 6.1 Introduction An experimenter uses the information in a sample X 1,..., X n to make inferences about an unknown parameter θ. If the sample size n is large, then

More information

Influences in low-degree polynomials

Influences in low-degree polynomials Influences in low-degree polynomials Artūrs Bačkurs December 12, 2012 1 Introduction In 3] it is conjectured that every bounded real polynomial has a highly influential variable The conjecture is known

More information

DATA ANALYSIS II. Matrix Algorithms

DATA ANALYSIS II. Matrix Algorithms DATA ANALYSIS II Matrix Algorithms Similarity Matrix Given a dataset D = {x i }, i=1,..,n consisting of n points in R d, let A denote the n n symmetric similarity matrix between the points, given as where

More information

Markov random fields and Gibbs measures

Markov random fields and Gibbs measures Chapter Markov random fields and Gibbs measures 1. Conditional independence Suppose X i is a random element of (X i, B i ), for i = 1, 2, 3, with all X i defined on the same probability space (.F, P).

More information

VERTICES OF GIVEN DEGREE IN SERIES-PARALLEL GRAPHS

VERTICES OF GIVEN DEGREE IN SERIES-PARALLEL GRAPHS VERTICES OF GIVEN DEGREE IN SERIES-PARALLEL GRAPHS MICHAEL DRMOTA, OMER GIMENEZ, AND MARC NOY Abstract. We show that the number of vertices of a given degree k in several kinds of series-parallel labelled

More information

Algorithm Design and Analysis

Algorithm Design and Analysis Algorithm Design and Analysis LECTURE 27 Approximation Algorithms Load Balancing Weighted Vertex Cover Reminder: Fill out SRTEs online Don t forget to click submit Sofya Raskhodnikova 12/6/2011 S. Raskhodnikova;

More information

! Solve problem to optimality. ! Solve problem in poly-time. ! Solve arbitrary instances of the problem. #-approximation algorithm.

! Solve problem to optimality. ! Solve problem in poly-time. ! Solve arbitrary instances of the problem. #-approximation algorithm. Approximation Algorithms 11 Approximation Algorithms Q Suppose I need to solve an NP-hard problem What should I do? A Theory says you're unlikely to find a poly-time algorithm Must sacrifice one of three

More information

15.062 Data Mining: Algorithms and Applications Matrix Math Review

15.062 Data Mining: Algorithms and Applications Matrix Math Review .6 Data Mining: Algorithms and Applications Matrix Math Review The purpose of this document is to give a brief review of selected linear algebra concepts that will be useful for the course and to develop

More information

Complexity Theory. IE 661: Scheduling Theory Fall 2003 Satyaki Ghosh Dastidar

Complexity Theory. IE 661: Scheduling Theory Fall 2003 Satyaki Ghosh Dastidar Complexity Theory IE 661: Scheduling Theory Fall 2003 Satyaki Ghosh Dastidar Outline Goals Computation of Problems Concepts and Definitions Complexity Classes and Problems Polynomial Time Reductions Examples

More information

SOLUTIONS TO EXERCISES FOR. MATHEMATICS 205A Part 3. Spaces with special properties

SOLUTIONS TO EXERCISES FOR. MATHEMATICS 205A Part 3. Spaces with special properties SOLUTIONS TO EXERCISES FOR MATHEMATICS 205A Part 3 Fall 2008 III. Spaces with special properties III.1 : Compact spaces I Problems from Munkres, 26, pp. 170 172 3. Show that a finite union of compact subspaces

More information

On-line secret sharing

On-line secret sharing On-line secret sharing László Csirmaz Gábor Tardos Abstract In a perfect secret sharing scheme the dealer distributes shares to participants so that qualified subsets can recover the secret, while unqualified

More information

Notes on Continuous Random Variables

Notes on Continuous Random Variables Notes on Continuous Random Variables Continuous random variables are random quantities that are measured on a continuous scale. They can usually take on any value over some interval, which distinguishes

More information

136 CHAPTER 4. INDUCTION, GRAPHS AND TREES

136 CHAPTER 4. INDUCTION, GRAPHS AND TREES 136 TER 4. INDUCTION, GRHS ND TREES 4.3 Graphs In this chapter we introduce a fundamental structural idea of discrete mathematics, that of a graph. Many situations in the applications of discrete mathematics

More information

Randomized algorithms

Randomized algorithms Randomized algorithms March 10, 2005 1 What are randomized algorithms? Algorithms which use random numbers to make decisions during the executions of the algorithm. Why would we want to do this?? Deterministic

More information

Nonparametric adaptive age replacement with a one-cycle criterion

Nonparametric adaptive age replacement with a one-cycle criterion Nonparametric adaptive age replacement with a one-cycle criterion P. Coolen-Schrijner, F.P.A. Coolen Department of Mathematical Sciences University of Durham, Durham, DH1 3LE, UK e-mail: Pauline.Schrijner@durham.ac.uk

More information

HOMEWORK 5 SOLUTIONS. n!f n (1) lim. ln x n! + xn x. 1 = G n 1 (x). (2) k + 1 n. (n 1)!

HOMEWORK 5 SOLUTIONS. n!f n (1) lim. ln x n! + xn x. 1 = G n 1 (x). (2) k + 1 n. (n 1)! Math 7 Fall 205 HOMEWORK 5 SOLUTIONS Problem. 2008 B2 Let F 0 x = ln x. For n 0 and x > 0, let F n+ x = 0 F ntdt. Evaluate n!f n lim n ln n. By directly computing F n x for small n s, we obtain the following

More information

Single machine parallel batch scheduling with unbounded capacity

Single machine parallel batch scheduling with unbounded capacity Workshop on Combinatorics and Graph Theory 21th, April, 2006 Nankai University Single machine parallel batch scheduling with unbounded capacity Yuan Jinjiang Department of mathematics, Zhengzhou University

More information

5 Directed acyclic graphs

5 Directed acyclic graphs 5 Directed acyclic graphs (5.1) Introduction In many statistical studies we have prior knowledge about a temporal or causal ordering of the variables. In this chapter we will use directed graphs to incorporate

More information

1 Approximating Set Cover

1 Approximating Set Cover CS 05: Algorithms (Grad) Feb 2-24, 2005 Approximating Set Cover. Definition An Instance (X, F ) of the set-covering problem consists of a finite set X and a family F of subset of X, such that every elemennt

More information

GENERATING THE FIBONACCI CHAIN IN O(log n) SPACE AND O(n) TIME J. Patera

GENERATING THE FIBONACCI CHAIN IN O(log n) SPACE AND O(n) TIME J. Patera ˆ ˆŠ Œ ˆ ˆ Œ ƒ Ÿ 2002.. 33.. 7 Š 539.12.01 GENERATING THE FIBONACCI CHAIN IN O(log n) SPACE AND O(n) TIME J. Patera Department of Mathematics, Faculty of Nuclear Science and Physical Engineering, Czech

More information

MOP 2007 Black Group Integer Polynomials Yufei Zhao. Integer Polynomials. June 29, 2007 Yufei Zhao yufeiz@mit.edu

MOP 2007 Black Group Integer Polynomials Yufei Zhao. Integer Polynomials. June 29, 2007 Yufei Zhao yufeiz@mit.edu Integer Polynomials June 9, 007 Yufei Zhao yufeiz@mit.edu We will use Z[x] to denote the ring of polynomials with integer coefficients. We begin by summarizing some of the common approaches used in dealing

More information

IRREDUCIBLE OPERATOR SEMIGROUPS SUCH THAT AB AND BA ARE PROPORTIONAL. 1. Introduction

IRREDUCIBLE OPERATOR SEMIGROUPS SUCH THAT AB AND BA ARE PROPORTIONAL. 1. Introduction IRREDUCIBLE OPERATOR SEMIGROUPS SUCH THAT AB AND BA ARE PROPORTIONAL R. DRNOVŠEK, T. KOŠIR Dedicated to Prof. Heydar Radjavi on the occasion of his seventieth birthday. Abstract. Let S be an irreducible

More information

Weighted Sum Coloring in Batch Scheduling of Conflicting Jobs

Weighted Sum Coloring in Batch Scheduling of Conflicting Jobs Weighted Sum Coloring in Batch Scheduling of Conflicting Jobs Leah Epstein Magnús M. Halldórsson Asaf Levin Hadas Shachnai Abstract Motivated by applications in batch scheduling of jobs in manufacturing

More information

x a x 2 (1 + x 2 ) n.

x a x 2 (1 + x 2 ) n. Limits and continuity Suppose that we have a function f : R R. Let a R. We say that f(x) tends to the limit l as x tends to a; lim f(x) = l ; x a if, given any real number ɛ > 0, there exists a real number

More information

The scaling window for a random graph with a given degree sequence

The scaling window for a random graph with a given degree sequence The scaling window for a random graph with a given degree sequence Hamed Hatami and Michael Molloy Department of Computer Science University of Toronto e-mail: hamed@cs.toronto.edu, molloy@cs.toronto.edu

More information