

 Ruby Johns
 1 years ago
 Views:
Transcription
1 THE SHRINKAGE EPONENT OF DE MORGAN FORMULAE IS JOHAN HaSTAD ROYAL INSTITUTE OF TECHNOLOGY STOCKHOLM, SWEDEN Abstract. We prove that if we hit a de Morgan formula of size L with a random restriction from R p then the expected remaining size is at most O(p (log )3= L + p p L). As a corollary we p obtain a (n 3;o() )formula size lower bound for an explicit function in P. This is the strongest known lower bound for any explicit function in NP. Key words. Lower bounds, formula size, random restrictions, computational complexity. AMS subject classications. 68Q5 Warning: Essentially this paper has been published in SIAM Journal on Computing and is hence subject to copyright restrictions. It is for personal use only.. Introduction. Proving lower bounds for various computational models is of fundamental value to our understanding of computation. Still we are very far from proving strong lower bounds for realistic models of computation, but at least there is more or less constant progress. In this paper we study formula size for formulae over the basis ^, _ and :. Our technique is based on random restrictions which were rst dened and explicitly used in [] although some earlier results can be formalized in terms of this type of random restrictions. To create a random restriction in the space R p we, independently for eachvariable, keep it as a variable with probability p and otherwise assign it the value 0 or with. Now suppose we have a function given by a de Morgan formula of size L. What will be the expected formula size of the induced function when we apply a random restriction from R p? The obvious answer is that this size will be at most pl. Subbotovskaya [] was the rst to observe that actually formulae shrink more. Namely she established an upper bound equal probabilities ;p () O(p :5 L +) on the expected formula size of the induced function. This result allowed her to derive an (n :5 )lower bound on the de Morgan formula size of the parity function. This latter bound was superseded by Khrapchenko [, 3] who, using a dierent method, proved a tight (n )lower bound for the parity function. His result implied that the parity function shrinks by a factor (p ), and provided an upper bound ; on the shrinkage exponent ;, dened as the least upper bound of all that can replace.5 in (). New impetus for research on the expected size of the reduced formula was given by Andreev [9] who, based upon Subbotovskaya's result, derived an n :5;o() lower bound on the de Morgan formula size for a function in P. An inspection of the proof reveals that his method actually gives for the same function the bound n ;+;o(). New improvements of the lower bound on ; followed. Nisan and Impagliazzo [5] proved that ; ;p 73 :55. Paterson and Zwick [6], complementing the 8
2 J. HASTAD technique from [5] by very clever and natural arguments, pushed this bound further to ; 5;p 3 :63. One can also dene the corresponding notion for readonce formulae. For such formulae it was established in [3] that ; ro == log ( p 5 ; ) 3:7. This result was made tight in[], in that they removed a polylogarithmic factor in the bounds. In this paper we continue (and possibly end) this string of results by proving that ;=. To be more precise we prove that remaining size is O(p (log p )3= L + p p L). As discussed above this gives an (n 3;o() )lower bound for the formula size of the function dened by Andreev. Our proof is by a sequence of steps. We rst analyze the probability of reducing the formula to a single literal. When viewing the situation suitably, this rst lemma gives a nice and not very dicult generalization of Khrapchenko's [, 3] lower bounds for formula size. As an illustration of the power of this lemma we next, without too much diculty, show how to establish the desired shrinkage when the formula is balanced. The general case is more complicated due to the fact that we need to rely on more dramatic simplications. Namely, suppose that = ^ and is much smaller than. Then, from an intuitive point of view, it seems like we are in a good position to prove that we have substantial shrinkage since it seems quite likely that is reduced to the constant 0 and we can erase all of. The key new point in the main proof is that we havetoestablish that this actually happens. In the balanced case, we did not need this mechanism. The main theorem is established in two steps. First we prove that the probability that a formula of size at least remains after we have applied a restriction from R p is small, and then we prove that the expected remaining size is indeed small. It is curious to note that all except our last and main result are proved even under an arbitrary but "favorable" conditioning, while we are not able to carry this through for the main theorem.. Notation. A de Morgan formula is a binary tree in which each leaf is labeled by a literal from the set fx ::: x n x ::: x n g and each internal node v is labeled by an operation which is either ^ or _. The size of a formula is dened as the number of leaves and is denoted by L(). The depth D() is the depth of the underlying tree. The size and the depth of a Boolean function f are, respectively, the minimal size and depth of any de Morgan formula computing f in the natural sense. For convenience we dene the size and depth of a constant function to be 0. A restriction is an element of f0 g n. For p [0 ] a random restriction from R p is chosen by that we set randomly and independently each variable to with probability p and to 0 with equal probabilities ;p. The interpretation of giving the value to a variable is that it remains a variable, while in the other cases the given constant is substituted as the value of the variable. All logarithms in this paper are to the base. We use the notation x i to denote x i when = 0 and x i when =. We also need the concept of a lter. Definition.. A set of restrictions is a lter, ifwhen, and(x i )=, then for f0 g the restriction 0 obtained by setting 0 (x j )=(x j ), for every j 6= i and 0 (x i )= also belongs to. For any event E we will use Pr[Ej] as shorthand for Pr[Ej ]. Note that the intersection of two lters is a lter. 3. Preliminaries. We analyze the expected size of a formula after it has been hit with a restriction from R p. The variables that are given values are substituted
3 SHRINKAGE OF FORMULAE 3 into the formula after which we use the following rules of simplication: If one input to a _gate is given the value 0 we erase this input and let the other input of this gate take the place of the output of the gate. If one input to a _gate is given the value we replace the gate by the constant. If one input to a ^gate is given the value we erase this input and let the other input of this gate take the place of the output of the gate. If one input to a ^gate is given the value 0 we replace the gate by the constant 0. If one input of a _gate is reduced to the single literal x i (x i ) then x i =0() is substituted in the formula giving the other input to this gate. If possible we do further simplications in this subformula. If one input of a ^gate is reduced to the single literal x i (x i ) then x i =(0) is substituted in the formula giving the other input to this gate. If possible we do further simplications in this subformula. We call the last two rules the onevariable simplication rules. All rules preserve the function the formula is computing. Observe that the onevariable simplication rules are needed to get a nontrivial decrease of the size of the formula as can be seen from the pathological case when the original formula consists of an _ (or ^) of L copies of a single variable x i. If these rules did not exist then with probability p the entire formula would remain and we could get an expected remaining size which ispl. Using the above rules we prove that instead we always get an expected remaining size which is at most slightly larger than p L. In order to be able to speak about the reduced formula in an unambiguous way let us be more precise about the order we do the simplication. Suppose that is a formula and that = ^. We rst make the simplication in and and then only later the simplications which are connected with the top gate. This implies that the simplied will not always consist of a copy of simplied and a copy of simplied since the combination might give more simplications. In particular, this will happen if is simplied to one variable x i since then x i = will be substituted in the simplied. Whenever a onevariable simplication rule actually results in a change in the other subformula we say that a onevariable simplication is active at the corresponding gate. We let d denote formula that results when the above simplications are done to. As usual L(d ) denotes the the size of this formula. It is important to note that simplications have a certain commutativity property. We saythattwo restrictions and are compatible if they never give two dierent constant values to the same x i. In other words, for any x i the pair ( (x i ) (x i )) is one of the pairs ( ) ( 0) ( ) (0 ) (0 0) ( ) or( ). For compatible restrictions we can dene the combined restriction which in the mentioned 7 cases takes the values respectively. This combined restriction is the result (on the variable level) of doing rst and then doing on the variables given by. Note that the fact that and are compatible makes the combining operator commutative. We need to makes sure that combination acts in the proper way also on formulae. Lemma 3.. Let and be two compatible restrictions then for any, (d )d =(d )d = d :
4 4 J. HASTAD Proof. Let denote. Clearly we need only establish (d )d = d since the other equality follows by symmetry. We proceed by induction over the size of. When the size is the claim is obvious. For the general case we assume that = ^ (the case = _ being similar) and we have two cases:. Some onevariable simplication rule is active at the top gate when dening.. This is not the case. In case there is no problem since there is no interaction between and until after both and have been applied. Namely, by induction j d =( j d )d for j = and since we doall simplication to the subformulae before we doany simplication associated with the top gate, the result will be the same in both cases. In case, assume that d = x j. This means that x j = is substituted in d. Viewing this as a restriction (j) giving a non value to only one variable, is simplied to ( d )d (j). Now wehave three possibilities depending on the value of on x j. When (x j ) = then ( d )d 0 and by induction d 0 and hence (d )d = d 0. When (x j )= then by induction ( d )d = d, and since (j) is compatible with (even a subassignment of) wehave (d )d =(( d )d (j) )d =( d )d = d = d where we again have used the induction hypothesis. When (x j )= then since (by induction) d =( d )d = x j when simplifying by, will be simplied also by the restriction (j) and will be ( d )d (j) which by induction is equal to d (j). On the other hand when simplifying by and then by reduces to (( d )d (j))d which by applying the induction hypothesis twice is equal to d (j) and since (j) = (j) the lemma follows. We will need the above lemma in the case of analyzing what happens when we use the onevariable simplication rules. In that case the restriction will just give a non value to one variable. Since the lemma is quite simple and natural we will not always mention it explicitly when we use it. Two examples to keep in mind during the proof are the following:. Suppose computes the parity ofm variables and is of size m. Then if p is small, the probability that will depend on exactly one variable is about pm = p p L() and if p is large, we expect that the remaining formula will compute the parity of around pm variables and thus be of size at least (pm) = p L().. Suppose is the ^ of L= copies of x _ x. By our rules of simplication, this will not be simplied if both x and x are given the value by the restriction. Hence with probability p the entire formula remains and we get expected remaining size of at least p L. 4. Reducing to size. We start by estimating the probability that a given formula reduces to size one. For notational convenience we set q = p ;p. This will be useful since we will change the values of restrictions at individual points and if we change a non value to, wemultiply the probability by q. Since we areinterested in the case when p is small, q is essentially p.
5 SHRINKAGE OF FORMULAE 5 Lemma 4.. Let be a formula of size L and arandom restriction in R p. Let E be the event that is reduced to the constant by for =0. Furthermore let be any lter. Then Pr[L(d )=j] is bounded by q (LP r[e 0 j]pr[e j]) = : p Remark: Most of the time the implied bound q L=4 (which follows from x(;x) =4) will be sucient. Proof. Let be a restriction that satises L(d ) = and belongs to and suppose that makes into the literal x i. By denition (x i)= and we have two fellow restrictions, = 0 where is obtained by changing (x i ) to xor( ). contributes to the event E and by the denition of a lter it belongs to. We can hence identify the set of restrictions we are interested in with edges between restrictions that reduce the formula to the constants 0 and respectively and we are on familiar grounds. Let A be set the of restrictions that satisfy E 0 and belong to and let B be the set of restrictions that satisfy E and belong to. We partition AB into rectangles A j B j, where for each j there is some variable x ij which takes a dierent value in A j and in B j. This was rst done in [0] (see also [7]), but we will here need a slight generalization and thus we choose to use the more intuitive framework introduced by Karchmer and Wigderson [4]. In the normal KWgame P gets an input, x, from f ; () while P 0 gets an input, y, from f ; (0) and their task is to nd a coordinate i such that x i 6= y i. This is solved by tracing the formula from the output to an input maintaining the property that the two inputs give dierent values to the gates on the path. This is achieved in the following way. At an^gate, P 0 points to the input of this gate that evaluates to 0ony. Similarly at an _gate P points to the input that evaluates to on x. We extend this game by giving P a restriction that simplies the formula to the constant. The task is to nd an x i on which the two restrictions take dierent values (we allow answers where one restriction takes the value and the other takes the value 0 or ). To solve this game both players start by setting (x j ) = for each j such that (x j )=. After this they play the standard KWgame. If the path ends at literal x i, then in the extended restrictions (x i )=xor( ). Note that if (x i ) = 0, then this was the initial value (since we only change values to ), while if (x i )=then the initial value was or. In either case we solve the problem. The extended KWgame creates a partition of A B and let A j B j be the inputs that reach leafj. Note that the fact that the set of inputs that reach a certain leaf is a product set follows from the fact that each move of the game is determined by one of the players based only on his own input. Let C j be the set of restrictions that satisfy L(d ) = and belong to and such that the pair ( 0 ) reaches leaf j. By necessity the literal that appears at leaf j is the literal to which reduced the formula. Now, note that the probability ofc j is bounded by q times the probability of A j. This follows since the mapping 7! 0 gives a onetoone correspondence of C j with a subset of A j and that Pr() =qpr( 0 ) for each. We have the same relation between C j and B j and hence Pr[L(d )=j] = j Pr[C j j] j q (Pr[A j j]pr[b j j]) =
6 6 J. HASTAD q j A = j Pr[A j j]pr[b j j] A = = q (LP r[aj]pr[bj]) = where we used CauchySchwartz inequality. Remark: Please note that the theorem of Khrapchenko is indeed a special case of this lemma. Khrapchenko starts with A f ; (0), B f ; () and C which is a set of edges of the form (a b) where a A, b B and the hamming distance between a and b is one. As noted above eachsuch edge naturally corresponds to a restriction by setting (x i )=a i when a i = b i and (x i0 )= for the unique coordinate i 0 such that a i0 6= b i0. Abusing notation we can identify the restriction and the edge. Now setting = A [ B [ C we get a lter and since Pr[C] qjcj = Pr[A] jaj = Pr[B] jbj the lemma reduces to Khrapchenko's theorem, i.e. to L jcj jajjbj : 5. The balanced case. In the general case we cannot hope to have an estimate which depends on the probability of reducing to either constant. The reason is that formulae that describe tautologies do not always reduce to the constant. It remains to take care of the probability that the remaining formula is of size greater than. Definition 5.. Let L () be the expected P size of the remaining formula where we ignore theresult if it is of size, i.e. L () = ip r[l(d i= )=i]. Furthermore, let L (j) be the same quantity conditioned on. Here we think of as taken randomly from R p and thus L () depends on the parameter p. We will, however, suppress this dependence. To familiarize the reader with the ideas involved in the proof, we rst prove the desired result when the formula is balanced. We will here take the strictest possible denition of balanced, namely that the formula is a complete binary tree and just establish the size of the reduced formula as a function of its original depth. Theorem 5.. Let be a formula of depth d and arandom restriction in R p. Let be any lter and assume that q (d +d= ) ;, then L (j) q d d; : Proof. The proof is by induction over the depth. The base case (d = ) is obvious. Now suppose = ^ (the _case is similar) where the depth of each i is d ; and hence the size is bounded by d;. We need to consider the following events:. L( i d ), for i =ori =.. L( d ) = and the size of after the simplication by and the application of the onevariable simplication rule is at least. 3. L( d ) = and the size of after the simplication by and the application of the onevariable simplication rule is at least.
7 The estimate for L (j) is now SHRINKAGE OF FORMULAE 7 q (d ; ) d; + Pr[case ] + Pr[case 3]: The rst term comes from any subformula of size at least two appearing in either of the three cases while the other two terms cover the new contribution in the respective cases. Let us analyze the probability of case. Let p i be the probability that reduces to x i. We know by Lemma 4. that p i q (d;3)= : Now consider the conditional probability that, given that reduces to x i, does not reduce to a constant. The condition that reduces to x i can be written as \ 0 ^ (x i )=" for some lter 0. The reason for this is that if reduces to x i, then changing any (x j), j 6= i from to a constant the resulting restriction still reduces to x i. This follows by Lemma 3.. Thus we should work with the conditioning T 0 ^ (x i ) =. Now we substitute x i = in and we can forget the variable x i and just keep the restrictions in T 0 that satisfy (x i )= (as restrictions on the other n ; variables). This yields a lter 00. Thus we want to estimate the conditional probability that d xi= does not reduce to a constant given that 00. But now we are in position to apply induction and hence this probability can be estimated by q (d;3)= + q (d ; ) d; where the rst term is given by Lemma 4. and the second term is a bound on L ( j 00). Using q (d +d= ) ; we get the total bound q d=;. This implies that the total probability of case is bounded by p i q d=; q d;5= : the probability of case 3 can be bounded the same way and nally q (d ; ) d; + q d;3= q d d; and the proof is complete. It is not dicult to extend Theorem 5. to larger d, but we leave the details to the reader. 6. The probability of size at least remaining. The reason why things are so easy in the balanced case is that we need not rely on very complicated simplications. In particular, we did not need the fact that a subformula can kill its brother. This will be needed in general and let us start by: Lemma 6.. Let be a formula of size L and a random restriction in R p. Let be any lter and assume that q ( p L log L) ;. Then the probability that L(d ) conditioned on is at most q L(log L) = : Proof. We prove the lemma by induction over the size of the formula. It is not hard to see that the lemma is correct when L. Suppose that = ^ (the _case is similar) where L( i )=L i, L + L = L, and L L. We divide the event in question into the following pieces:
8 8 J. HASTAD. L( d ). L( d ) = and even after the onevariable simplication rule is used the size of the simplied is at least. 3. is reduced to the constant by and L( d ). The probability of the rst event is by induction bounded by q L (log L ) =. Suppose the probability, conditioned on, that is reduced to the constant is Q. Call the corresponding set of restrictions 0. Note that 0 is a lter. Then using the induction hypothesis with the conditioning T 0 we get the probability ofthe third event to be bounded by Qq L (log L ) = : Let us now estimate the probability of the second case. The probability that L( d )= isby Lemma 4. bounded by q (L Q( ; Q)) =. To estimate the conditional probability that, given that L( d ) =, remains to be of size at least we reason exactly as in the proof of Theorem 5.. Hence, this probability can be estimated by q p L =4+q L (log L ) = where the rst term comes from an application of Lemma 4. and the second term from the inductive assumption. Thus for the second case we get the total bound q (L Q( ; Q)) = q p L =4+q L (log L ) = q (L L Q( ; Q)) = q (L L ( ; Q)) = where we used q ( p L log L) ; ( p L log L ) ;. We want to bound the sum of the estimates for probabilities of cases and 3. Dierentiating with respect to Q yields QL (log L ) = +(L L ( ; Q)) = Thus the derivative is 0 when L (log L ) = ; = L L : ; Q L ( ; Q) = 4L log L and this corresponds to a maximum since the second derivative is negative. Using this optimal value for Q we get a total estimate which is q L (log L ) = + q L (log L ) = ; q 4 L (log L ) ;= + q L (log L ) ;= = q L (log L ) = + q L (log L ) = + q 4 L (log L ) ;=
9 SHRINKAGE OF FORMULAE 9 and we need to prove that this is less than q L (log L) = when L L and L = L + L. This is only calculus, but let us give the proof. First note that log L log dl=e log L when L 3 and hence it is sucient to bound q L (log L ) = + q L (log L ) = + q L (log L) ;= : Now ifwe set H(x) =x(log x) = it is not dicult to see that H 00 (x) is positive for x and hence the above expression is convex for L. This means that it is maximized either for L = or L = L. The rst two cases correspond to the inequalities and (L ; )(log (L ; )) = + (log L);= L(log L) = +(L ; )(log (L ; )) = + (log L) ;= L(log L) = which are easily seen to hold for L 3 and L 4 respectively. To check the required inequality when L = L we need to prove that This follows from p x ; p x ; p x L (log L=) = + L 4 (log L);= L (log L) = : which isvalid for all x. 7. Main shrinkage theorem. We are now ready for our main theorem. Theorem 7.. Let be a formula of size L and a random restriction in R p. Then the expected size of d is bounded by O p ( + (log (min( p L)))3= )L + p p L : Crucial to this theorem is the following lemma: Lemma 7.. Let be a formula of size L and a random restriction in R p. If q ( p L log L) ;,then while if q (4p L log L) ;, then L () 30q L(log L) 3= L () 00q L(log q ; ) 3= : First note that Theorem 7. follows from Lemma 7. together with Lemma 4. and thus it is sucient toprove Lemma 7.. Also note that Lemma 7. and Theorem 7. do not consider conditional probabilities. There are two reasons for this. It is not needed and the proof does not seem to allow it. Proof. (Of Lemma 7.) The hard part of the lemma is the case when q is small and we will start by establishing this case. As before we proceed by induction. The
10 0 J. HASTAD base case (L =)isobvious. Suppose that = ^ (the _case is similar) where L( i )=L i, L + L = L and L L. Our basic estimate for L () is L ( )+L ( ), which by induction can be bounded by S(L L )=30q L (log L ) 3= +30q L (log L ) 3= : We need to revise this estimate and in particular we need to consider the events which contribute either to the event described by the basic estimate (L( i d ) for i =, or i =)ortotheevent we are trying to estimate (L(d ) ).. We have L( i d ) for i = and i =.. L( i d )=fori =ori =, and the onevariable simplication rule was active at the top gate. 3. L( d )=L( d ) = and the onevariable simplication rule was not active at the top gate. 4. L( d )=andl( d ) and the onevariable simplication rule was not active at the top gate. 5. L( d )=andl( d ) and the onevariable simplication rule was not active at the top gate. 6. The function is reduced to the constant 0 while L( d ). 7. The function is reduced to the constant while L( d ). 8. The function is reduced to the constant 0 while L( d ). 9. The function is reduced to the constant while L( d ). Let us rst investigate what corrections we need to make to our basic estimate in the various cases. Case The basic estimate is correct. Case Suppose that reduces to x i. If the resulting formula is of size at least then was of size at least before we did the simplication of substituting for x i. This reduced the size of by at least one. This means that the formula size of is at most the formula size of before we did this simplication. But in our basic estimate we have not taken this simplication into account andthus we need not add anything to our basic estimate in this case. Case 3 In this case we need to add to our basic estimate. We will need some work to estimate the probability of this case. Case 4 In this case we need to add to our basic estimate. Also the probability of this event needs a little bit of work. Case 5 In this case we need to add to our basic estimate. From our previous work we can estimate the probability of this event simply by the probability thatthe remaining size of is at least and this probability isby Lemma 6. bounded by q L (log L ) = : Cases 6 and 8 In this case we can subtract at least from our original estimate. This follows since in this case we erase a formula of size at least which contributed needlessly to the basic estimate. We will only use case 6. Cases 7 and 9 The basic estimate is correct. The above reasoning gives the total bound: S(L L )+Pr[case 3] + Pr[case 4] + Pr[case 5] ; Pr[case 6]:
11 SHRINKAGE OF FORMULAE We have already bounded Pr[case 5] and for the other probabilities we will establish: and Lemma 7.3. If q ( p L log L ) ;, then Pr[case 3] 0q L + q L (log L ) = + Pr[case 6] Lemma 7.4. If q ( p L log L ) ;, then Pr[case 4] q L + Pr[case 6] Let us just check that this is sucient to prove Lemma 7. in the case when q is small. S(L L )+Pr[case 3] + Pr[case 4] + Pr[case 5] ; Pr[case 6] S(L L )+4q L + q L (log L ) = We need to prove that this is bounded by 30q L(log L) 3= for all possible L. Substituting L = L ; L and dierentiating twice we see that this is a convex function of L when L L=. This means that it is maximized either for L = or L = L. For L =we need to establish 30(L ; )(log (L ; )) 3= +4 30L(log L) 3= which is easily checked to be true for L = and L =3. For L 4 the inequality follows from For L =weneedtocheck which forl 4 follows from L(log L) 3= (L ; )(log L) 3= +: 30(L ; )(log (L ; )) 3= L(log L) 3= L(log L) 3= (L ; )(log L) 3= + 3= and 60 3= > 43. Finally, forl = L= we need to estimate 30L(log L=) 3= + L=(4 + (log L=) = ) and using the inequality x 3= ; (x ; ) 3= x =, which isvalid for any x, this can be bounded by 30L(log L) 3= ; 30L(log L) = + 43 L(log L)= 30L(log L) 3= and we are done. Hence we need just establish Lemma 7.3 and Lemma 7.4. The basic principle is to start with a set of restrictions that contribute to the bad case (cases 3 and 4
12 J. HASTAD respectively ) and create a set of restrictions that contribute to the good case, namely case 6. In this process there will be some \spills" and hence we need the additive terms. The Lemma 7.4 is by far the easier, and since the basic outline is the same, we start with this lemma. Proof. (Of Lemma 7.4) Let C be the set of restrictions such that reduces to exactly one variable or its negation and such that the reduced does not contain this variable. Let A be the set of restrictions that is formed by setting the variable that remains in in such away to make reduce to the constant 0andletB be the corresponding set that makes reduce to. Each element inc corresponds to an edge between A and B and we can (as in the proof of Lemma 4.) let this dene a path in. Thus each leafin corresponds to a set A j B j which reaches this leaf and a subset C j of C such that for any C j, its neighbors belong to A j and B j respectively. The sets A j B j form a partition of A B. Suppose furthermore that the literal at leaf j of is x j d j. Note that this implies that if C j then simplies to x j d j. Let q j be the conditional probability that, when is chosen uniformly from C j, L( d ). The probability of case 4 is then given by j Pr[C j ]q j : If we take any restriction contributing to this event and change the value of at x dj to j then we get a restriction 0 contributing to case 6. This follows since x dj does not appear in the reduced. The set of restrictions created at leaf j will be of total probability q ; Pr[C j ]q j and we seem to be in good shape. However the same restriction 0 might be created at many leaves and hence we would be over counting if we would just sum these probabilities for various j. However, note that 0 belongs to A and if it is created at leaf j then it belongs to A j. Now, since A j B j form a partition of A B we have forany A Pr[B j ]=Pr[B] : jja j This means that if we multiply the total probability of restrictions created at leaf j by Pr[B j ]weavoid over counting. Thus the sure contribution to the probability of case 6 is j q ; Pr[C j ]Pr[B j ]q j : We need to compare this to P j Pr[C j]q j. For the j for which Pr[B j ] q the term in the sum for case 6 is bigger than the corresponding term for the case 4, while for other j, we use that Pr[C j ] qpr[b j ] q and thus summing over those j gives acontribution of at most q L. We have proved Lemma 7.4. Next we turn to Lemma 7.3. This will be more complicated, mainly because the restrictions contributing to case 6 are more dicult to construct. Proof. (Of Lemma 7.3) Let A j, B j and C j be as in the previous proof. For xed j let r j be the conditional probability that L( d )=given that C j. We divide the leaves into two cases depending on whether r j 0qPr[B j ] ;. If we restrictthe summation to those j that satisfy this inequality then j Pr[C j ]r j j qpr[b j ]0qPr[B j ] ; 0q L
13 SHRINKAGE OF FORMULAE 3 and this gives the rst term in the right hand side of Lemma 7.3. We thus concentrate on the case when r j 0qPr[B j ] ;. Let A j and B j be the subsets of C j which reduce to 0 and respectively. Let be a restriction that belongs to C j and contributes to case 3. Assume that reduces to x d. We can obtain a restriction 0 in A j (if =)orb j (if =0)by setting 0 (x k )=(x k ) for k 6= d and 0 (x d )=. To see that 0 C j note that before we play the KWgame we give all variables given the value by the value. Thus the executions on and 0 are identical and thus 0 C j. Also, clearly, 0 forces to. Now, suppose that () j 0 A j Pr[] j 0 B j Pr[] (the other case being symmetric). Suppose A j consists of the restrictions ::: k. For i we dene a set of critical variables and x k is in this set if i (x k )=. Creating the restriction 0 i by setting 0 i (x l) = i (x l ) for every l 6= k while 0 i (x k)= creates a restriction in C and d 0 i reduces to x k. Note that, as observed above, 0 C in fact implies that 0 C j since C j and we go from 0 to by changing the value on a variable from to. Suppose there are s i critical variables for i. By denition, each restriction contributing to the conditional probability r j gives one critical variable for one i A j exactly when is reduced to a negated variable and otherwise it gives no contribution. By () the rst case happens in at least half the cases and hence we have r j = P r[c j ] ; i qs i Pr[ i ] for some satisfying. We now create a set of restriction in the following way. We obtain ; s i new restrictions from i bychoosing two critical variables for i and for each such choice (s t) create a restriction (s t) i by setting (s t) i (x k )= i (x k ) for every k 6 fs tg while (s t) i (x s ) = (s t) i (x t ) =. This way we get a set of restrictions of total probability q ; s i Pr[i ]. Let us relate the total probability of these constructed restrictions to r j. Note that rj = P r[c j ] ; (qs i Pr[ i ]) i! Pr[C j ] ; (q s i Pr[ i ]) i! Pr[C j ] ; i Pr[ i ]! = Pr[C j ] ; Pr[A j jc j ] i q s i Pr[ i ] where the inequality comes from CauchySchwartz inequality. Now i q si Pr[ i ] = i q s i Pr[ i ] ; i q s i Pr[ i ]
14 4 J. HASTAD since and r j 0q. r j Pr[C j] Pr[A j jc j ] ; q r jpr[c j ] ( ; 40 )r j Pr[C j] 0 r j Pr[C j] Remark: Note that the constructed restrictions need not be in C j. The reason is that there is no control when you change a variable from being xed to being. In particular, if we were trying to estimate a conditional expectation we would be in deep trouble, since it need not be the case that these recently constructed restrictions satisfy the condition. Let us look more closely at these obtained restrictions. They give thevalue to the variable x dj since the restriction we started with belonged to C j. They also give the value to the two special variables x s and x t. We nowchange the value at x dj to j in an attempt to force to 0. Note that this attempt might not always be successful since once x s and x t become unassigned might also depend on those variables (as well as others). We leave this problem for the time being. Let us analyze the set of restrictions created in this way. At leaf j we have this way created a set of restrictions of total probability atleast q ; Pr[C 0 j]rj. However, the same restriction might appear many times and we need to adjust for this fact. Take any restriction created from i A j. First note that determines the identity of the two special variables x s and x t. These are namely the only variables x k given the value by with the property that setting (x k )= makes depend on only one variable. This follows since we recreate a restriction from C j with the additional property that x dj is set, but since we are considering cases when was independent of x dj, setting a value to x dj does not matter. To complete the characterization of x s and x t, note that after setting any other variable x k to any value it is still true that depends on both x s and x t. Let x s be the variable with lower index of the variables x s and x t which we just have identied. Consider the restriction 0 obtained by setting 0 (x s ) = while 0 (x k ) = (x k ) for every x k 6= x s. We claim that 0 belongs to A j. Remember that A j B j was the set of inputs reaching leaf j when playing the KWgame on the formula. To see this claim let 00 be obtained by setting 00 (x k )= 0 (x k )for x k 6= x dj while 00 (x dj )=. By the conditions for x t being a critical variable for i, 00 C j and hence 0 A j. Thus, starting with we have created a unique restriction 0 such that whenever is created at leaf j then 0 A j. Thus, reasoning as in the proof of Lemma 7.4, if we multiply the probability of the restrictions produced at leaf j by Pr[B j ], then we avoid making an overestimate. This means that we have created a set of restrictions of total probability at least j 0 q; Pr[C j ]r j Pr[B j ]: The created restrictions are of two types, either they reduce the formula to 0 or not. In the former case they contribute to case 6 (since depends on x s and x t ), and we have to estimate the probability of the latter case. We claim that in this case the reduced contains both the variables x s and x t. This follows, since setting (x s )=or(x t ) = simplies to 0 which in its turn is basically the fact
15 SHRINKAGE OF FORMULAE 5 0 A j established above. By Lemma 6. it follows that the probability of this case is bounded by q L (log L ) =. Summing up we have Pr[case 3] = L j= Pr[C j ]r j = jjr j small Pr[C j ]r j + jjr j large 0q L + q ; Pr[C j ]rj 0 Pr[B j] j 0q L + Pr[case 6] + q L (log L ) = Pr[C j ]r j where the rst inequality uses the bound for r j and the last inequality is based on the above reasoning. The proof of Lemma 7.3 is complete. All that remains is to complete the proof of Lemma 7. when q (4 p L log L) ;. To simplify the calculations we will in this case prove the slightly stronger bound L () 00q L(log q ; ) 3= ; : First note that when (4 p L log L) ; q ( p L log L) ; the second bound follows from the rst bound since 00q L(log q ; ) 3= 00q L(=logL) 3= 6q L(log L) 3= 30q L(log L) 3= + 3(4 p L log L) ; L(log L) 3= 30q L(log L) 3= +: It remains to establish the second bound when q ( p L log L) ; and we do this by induction over L. Assume that = ^, (the _case being similar) where L( i ) = L i, L + L = L, and L L. This implies that we can always use the second bound when bounding L ( ). We have two cases depending on whether q (4 p L log L ) ;. If q (4 p L log L ) ;, then using the induction hypothesis and L () L ( )+L ( ) + the result follows immediately. To take care of the other case, notice that our estimates for the corrections to the basic estimates depended only on L. This means that in this case we get the total bound L ( )+L ( )+4q L + q L (log L ) = and using the induction hypothesis (the rst case for and the second for ) we can bound this by 30q L (log L ) 3= + (00q L (log q ; ) 3= ; ) + q L (4+(logL ) = ) 60q L (log q ; ) 3= + (00q L (log q ; ) 3= ; )+43q L (log q ; ) = ) 00q L(log q ; ) 3= ; and the proof is complete. 8. Application to formula size lower bounds. As mentioned in the introduction, it is well known that shrinkage results can be used to derive lower bounds on formula size. Let us just briey recall the function which seems to be the most appropriate for this purpose. The input bits are of two types. For notational simplicity
16 6 J. HASTAD assume that we have n input bits and that log n is an integer that divides n. The rst n bits dene a Boolean function H on log n bits. The other n bits are divided into log n groups of n= log n bits each. If the parity of the variables in group i is y i then the output is H(y). We call this function A as it was rst dened by Andreev [9]. Theorem 8.. The function A requires formulae of size n 3 : (log n) 7= (log log n) 3 Proof. Assume that we have a formula of size S which describes A. We know ([8], Chap 4, Theorem 3.) that there is a function of log n variables which requires a formulasizewhichis n log log n We x the rst set of values to describe such a function. This might decrease the size of the formula, but it is not clear by howmuch and hence we just note that the resulting log n log log n formula is of size at most S. Apply an R p restriction with p = n on the remaining formula. By our main theorem the resulting formula will be of expected size at most O(Sn ; (log n) 7= (log log n) + ). The probability that all variables in a particular group are xed is bounded by ( ; p) n log n : e ; pn log n (log n) ; : Since there are only log n groups, with probability ; o() there remains at least one live variable in each group. Now since a positive random variable is at most twice its expected with probability at least /, it follows that there is a positive probability that we have at most twice the expected remaining size and some live variable in each group. It follows that O Sn ; (log n) 7= (log log n) n log log n and the proof is complete. We might not that there are indeed formulas for the function A of size O(n 3 (log n) ; ) and hence our bounds are close to optimal. 9. Conclusions. As we see it there remain two interesting questions in shrinking: What is the shrinkage exponent for monotone formulae? In some sense we have established that it is, namely one of the two examples given in the introduction is monotone and shrinks only by a factor p. This is the example of L= copies of x ^x. This is not a natural example and if it is the only one, we are asking the wrong question. We can get around it by using variable and 3variable simplication rules. We could also ask a slightly dierent question, namely what is the minimal such that for arbitrary small p there is a monotone formula of size O(p ; ) that is trivialized by a restriction from R p with probability at most =? Apart from its inherent interest, a successful answer to this question would in most cases (depending on the exact form of the answer) lead to an!(n ) lower bound for monotone formulae for the majority function. :
17 SHRINKAGE OF FORMULAE 7 Are these annoying log factors really needed? This is really of minor importance. If they were indeed needed it would be surprising. Acknowledgment. I am most grateful to Sasha Razborov and V.N. Lebedev for catching gaps in the proofs in previous versions of this paper. Sasha Razborov was also together with Andy Yao very helpful in giving ideas in the initial stages of this research. Finally, I am most grateful to two anonymous referees for a careful reading of the manuscript. REFERENCES [] M. Dubiner, U. Zwick, How do readonce formulae shrink? Combinatorics, Probability & Computing 3, 455{469 (994). [] M. Furst, J. B. Saxe, and M. Sipser. Parity, circuits and the polynomial time hierarchy. Math. Syst. Theory, 7:3{7, 984. [3] J. Hastad, A. Razborov, and A. Yao On the Shrinkage Exponent for ReadOnce Formulae Theoretical Computer Science, Vol 4, 698, 995. [4] M. Karchmer and A. Wigderson Monotone circuits for connectivity require superlogarithmic depth SIAM J. on Discrete Mathematics Vol 3, 990, pp [5] N. Nisan and R. Impagliazzo. The eect of random restrictions on formulae size. Random Structures and Algorithms, Vol. 4, No., 993, pp 34. [6] M. S. Paterson and U. Zwick. Shrinkage of de Morgan formulae under restriction. Random Structures and Algorithms, Vol. 4, No., 993, pp [7] A. Razborov. Applications of matrix methods to the theory of lower bounds in computational complexity. Combinatorica, 0():pp 8{93, 990. [8] I. Wegener, The Complexity of Boolean function, 987, John Wiley & Sons Ltd. Russian references [9] A. E. Andreev. O metode poluqeni bolee qem kvadratiqnyh ninih ocenok dl slonosti shem. Vestnik MGU, ser. matem i mehan., t. 4, v., 987, str (A.E. Andreev, On a method for obtaining more than quadratic eective lower bounds for the complexity of schemes, Moscow Univ. Math. Bull 4()(987), 6366). [0] K.L. Ryqkov. Modifikaci metoda V. M. Hrapqenko i primenenie ee k ocenkam slonosti shem dl kodobyh funkci { v sb \Metody diskrtnogo analiza b torrii grafov i shem" vyl. 4 Novosibirsk, 985, str (K.L. Rychkov,, A modication of Khrapchenko's method and its application to bounding the complexity of networks for coding functions. In: Methods of discrete analysis in the theory of graphs and circuits, Novosibirsk, 985, pages 998.) [] B. A. Subbotovska. O realizacii line nyh funkci formulami v bazise & _ ;. DAN SSSR, t. 36, v. 3, 96, str (B.A.Subbotovskaya, Realizations of linear functions by formulas using + ;, Soviet Mathematics Doklady (96), 0). [] V. M. Hrapqenko. O slonosti realizacii line no funkcii v klasseshem. Matem. zametki, t. 9, v., 97, str (V.M. Khrapchenko, Complexity of the realization of a linear function in the class of circuits, Math. Notes Acad. Sciences USSR 9(97), 3). [3] V. M. Hrapqenko. Ob odnom metode poluqeni ninih ocenok slonosti shem. Matem. zametki, t. 0, v., 97, str (V.M. Khrapchenko, A method of determining lower bounds for the complexity of schemes, Math. Notes Acad. Sciences USSR 0(97), ).
princeton univ. F 13 cos 521: Advanced Algorithm Design Lecture 6: Provable Approximation via Linear Programming Lecturer: Sanjeev Arora
princeton univ. F 13 cos 521: Advanced Algorithm Design Lecture 6: Provable Approximation via Linear Programming Lecturer: Sanjeev Arora Scribe: One of the running themes in this course is the notion of
More informationLecture 1: Course overview, circuits, and formulas
Lecture 1: Course overview, circuits, and formulas Topics in Complexity Theory and Pseudorandomness (Spring 2013) Rutgers University Swastik Kopparty Scribes: John Kim, Ben Lund 1 Course Information Swastik
More informationThe Goldberg Rao Algorithm for the Maximum Flow Problem
The Goldberg Rao Algorithm for the Maximum Flow Problem COS 528 class notes October 18, 2006 Scribe: Dávid Papp Main idea: use of the blocking flow paradigm to achieve essentially O(min{m 2/3, n 1/2 }
More informationTopologybased network security
Topologybased network security Tiit Pikma Supervised by Vitaly Skachek Research Seminar in Cryptography University of Tartu, Spring 2013 1 Introduction In both wired and wireless networks, there is the
More informationLecture 4: AC 0 lower bounds and pseudorandomness
Lecture 4: AC 0 lower bounds and pseudorandomness Topics in Complexity Theory and Pseudorandomness (Spring 2013) Rutgers University Swastik Kopparty Scribes: Jason Perry and Brian Garnett In this lecture,
More informationLecture 15 An Arithmetic Circuit Lowerbound and Flows in Graphs
CSE599s: Extremal Combinatorics November 21, 2011 Lecture 15 An Arithmetic Circuit Lowerbound and Flows in Graphs Lecturer: Anup Rao 1 An Arithmetic Circuit Lower Bound An arithmetic circuit is just like
More informationA simple criterion on degree sequences of graphs
Discrete Applied Mathematics 156 (2008) 3513 3517 Contents lists available at ScienceDirect Discrete Applied Mathematics journal homepage: www.elsevier.com/locate/dam Note A simple criterion on degree
More informationAdaptive Online Gradient Descent
Adaptive Online Gradient Descent Peter L Bartlett Division of Computer Science Department of Statistics UC Berkeley Berkeley, CA 94709 bartlett@csberkeleyedu Elad Hazan IBM Almaden Research Center 650
More informationSome Polynomial Theorems. John Kennedy Mathematics Department Santa Monica College 1900 Pico Blvd. Santa Monica, CA 90405 rkennedy@ix.netcom.
Some Polynomial Theorems by John Kennedy Mathematics Department Santa Monica College 1900 Pico Blvd. Santa Monica, CA 90405 rkennedy@ix.netcom.com This paper contains a collection of 31 theorems, lemmas,
More informationarxiv:1112.0829v1 [math.pr] 5 Dec 2011
How Not to Win a Million Dollars: A Counterexample to a Conjecture of L. Breiman Thomas P. Hayes arxiv:1112.0829v1 [math.pr] 5 Dec 2011 Abstract Consider a gambling game in which we are allowed to repeatedly
More informationWHAT ARE MATHEMATICAL PROOFS AND WHY THEY ARE IMPORTANT?
WHAT ARE MATHEMATICAL PROOFS AND WHY THEY ARE IMPORTANT? introduction Many students seem to have trouble with the notion of a mathematical proof. People that come to a course like Math 216, who certainly
More informationLOOKING FOR A GOOD TIME TO BET
LOOKING FOR A GOOD TIME TO BET LAURENT SERLET Abstract. Suppose that the cards of a well shuffled deck of cards are turned up one after another. At any timebut once only you may bet that the next card
More informationSHARP BOUNDS FOR THE SUM OF THE SQUARES OF THE DEGREES OF A GRAPH
31 Kragujevac J. Math. 25 (2003) 31 49. SHARP BOUNDS FOR THE SUM OF THE SQUARES OF THE DEGREES OF A GRAPH Kinkar Ch. Das Department of Mathematics, Indian Institute of Technology, Kharagpur 721302, W.B.,
More informationCMSC 858T: Randomized Algorithms Spring 2003 Handout 8: The Local Lemma
CMSC 858T: Randomized Algorithms Spring 2003 Handout 8: The Local Lemma Please Note: The references at the end are given for extra reading if you are interested in exploring these ideas further. You are
More informationApproximation Algorithms: LP Relaxation, Rounding, and Randomized Rounding Techniques. My T. Thai
Approximation Algorithms: LP Relaxation, Rounding, and Randomized Rounding Techniques My T. Thai 1 Overview An overview of LP relaxation and rounding method is as follows: 1. Formulate an optimization
More informationGambling Systems and MultiplicationInvariant Measures
Gambling Systems and MultiplicationInvariant Measures by Jeffrey S. Rosenthal* and Peter O. Schwartz** (May 28, 997.. Introduction. This short paper describes a surprising connection between two previously
More informationCOUNTING INDEPENDENT SETS IN SOME CLASSES OF (ALMOST) REGULAR GRAPHS
COUNTING INDEPENDENT SETS IN SOME CLASSES OF (ALMOST) REGULAR GRAPHS Alexander Burstein Department of Mathematics Howard University Washington, DC 259, USA aburstein@howard.edu Sergey Kitaev Mathematics
More information9.2 Summation Notation
9. Summation Notation 66 9. Summation Notation In the previous section, we introduced sequences and now we shall present notation and theorems concerning the sum of terms of a sequence. We begin with a
More informationDiscrete Mathematics and Probability Theory Fall 2009 Satish Rao,David Tse Note 11
CS 70 Discrete Mathematics and Probability Theory Fall 2009 Satish Rao,David Tse Note Conditional Probability A pharmaceutical company is marketing a new test for a certain medical condition. According
More information2.1 Complexity Classes
15859(M): Randomized Algorithms Lecturer: Shuchi Chawla Topic: Complexity classes, Identity checking Date: September 15, 2004 Scribe: Andrew Gilpin 2.1 Complexity Classes In this lecture we will look
More informationGenerating models of a matched formula with a polynomial delay
Generating models of a matched formula with a polynomial delay Petr Savicky Institute of Computer Science, Academy of Sciences of Czech Republic, Pod Vodárenskou Věží 2, 182 07 Praha 8, Czech Republic
More informationApproximation Algorithms
Approximation Algorithms or: How I Learned to Stop Worrying and Deal with NPCompleteness Ong Jit Sheng, Jonathan (A0073924B) March, 2012 Overview Key Results (I) General techniques: Greedy algorithms
More informationLecture 7: NPComplete Problems
IAS/PCMI Summer Session 2000 Clay Mathematics Undergraduate Program Basic Course on Computational Complexity Lecture 7: NPComplete Problems David Mix Barrington and Alexis Maciel July 25, 2000 1. Circuit
More informationFACTORING POLYNOMIALS IN THE RING OF FORMAL POWER SERIES OVER Z
FACTORING POLYNOMIALS IN THE RING OF FORMAL POWER SERIES OVER Z DANIEL BIRMAJER, JUAN B GIL, AND MICHAEL WEINER Abstract We consider polynomials with integer coefficients and discuss their factorization
More informationFactoring & Primality
Factoring & Primality Lecturer: Dimitris Papadopoulos In this lecture we will discuss the problem of integer factorization and primality testing, two problems that have been the focus of a great amount
More informationCounting Falsifying Assignments of a 2CF via Recurrence Equations
Counting Falsifying Assignments of a 2CF via Recurrence Equations Guillermo De Ita Luna, J. Raymundo Marcial Romero, Fernando Zacarias Flores, Meliza Contreras González and Pedro Bello López Abstract
More informationMeasuring Rationality with the Minimum Cost of Revealed Preference Violations. Mark Dean and Daniel Martin. Online Appendices  Not for Publication
Measuring Rationality with the Minimum Cost of Revealed Preference Violations Mark Dean and Daniel Martin Online Appendices  Not for Publication 1 1 Algorithm for Solving the MASP In this online appendix
More informationPolynomial Degree and Lower Bounds in Quantum Complexity: Collision and Element Distinctness with Small Range
THEORY OF COMPUTING, Volume 1 (2005), pp. 37 46 http://theoryofcomputing.org Polynomial Degree and Lower Bounds in Quantum Complexity: Collision and Element Distinctness with Small Range Andris Ambainis
More informationPartitioning edgecoloured complete graphs into monochromatic cycles and paths
arxiv:1205.5492v1 [math.co] 24 May 2012 Partitioning edgecoloured complete graphs into monochromatic cycles and paths Alexey Pokrovskiy Departement of Mathematics, London School of Economics and Political
More information! Solve problem to optimality. ! Solve problem in polytime. ! Solve arbitrary instances of the problem. !approximation algorithm.
Approximation Algorithms Chapter Approximation Algorithms Q Suppose I need to solve an NPhard problem What should I do? A Theory says you're unlikely to find a polytime algorithm Must sacrifice one of
More informationA Sublinear Bipartiteness Tester for Bounded Degree Graphs
A Sublinear Bipartiteness Tester for Bounded Degree Graphs Oded Goldreich Dana Ron February 5, 1998 Abstract We present a sublineartime algorithm for testing whether a bounded degree graph is bipartite
More informationChapter 11. 11.1 Load Balancing. Approximation Algorithms. Load Balancing. Load Balancing on 2 Machines. Load Balancing: Greedy Scheduling
Approximation Algorithms Chapter Approximation Algorithms Q. Suppose I need to solve an NPhard problem. What should I do? A. Theory says you're unlikely to find a polytime algorithm. Must sacrifice one
More informationInduction. Margaret M. Fleck. 10 October These notes cover mathematical induction and recursive definition
Induction Margaret M. Fleck 10 October 011 These notes cover mathematical induction and recursive definition 1 Introduction to induction At the start of the term, we saw the following formula for computing
More informationCONTRIBUTIONS TO ZERO SUM PROBLEMS
CONTRIBUTIONS TO ZERO SUM PROBLEMS S. D. ADHIKARI, Y. G. CHEN, J. B. FRIEDLANDER, S. V. KONYAGIN AND F. PAPPALARDI Abstract. A prototype of zero sum theorems, the well known theorem of Erdős, Ginzburg
More information136 CHAPTER 4. INDUCTION, GRAPHS AND TREES
136 TER 4. INDUCTION, GRHS ND TREES 4.3 Graphs In this chapter we introduce a fundamental structural idea of discrete mathematics, that of a graph. Many situations in the applications of discrete mathematics
More informationOn the kpath cover problem for cacti
On the kpath cover problem for cacti Zemin Jin and Xueliang Li Center for Combinatorics and LPMC Nankai University Tianjin 300071, P.R. China zeminjin@eyou.com, x.li@eyou.com Abstract In this paper we
More informationHomework 5 Solutions
Homework 5 Solutions 4.2: 2: a. 321 = 256 + 64 + 1 = (01000001) 2 b. 1023 = 512 + 256 + 128 + 64 + 32 + 16 + 8 + 4 + 2 + 1 = (1111111111) 2. Note that this is 1 less than the next power of 2, 1024, which
More informationWeek 5 Integral Polyhedra
Week 5 Integral Polyhedra We have seen some examples 1 of linear programming formulation that are integral, meaning that every basic feasible solution is an integral vector. This week we develop a theory
More informationCombinatorial PCPs with ecient veriers
Combinatorial PCPs with ecient veriers Or Meir Abstract The PCP theorem asserts the existence of proofs that can be veried by a verier that reads only a very small part of the proof. The theorem was originally
More informationOHJ2306 Introduction to Theoretical Computer Science, Fall 2012 8.11.2012
276 The P vs. NP problem is a major unsolved problem in computer science It is one of the seven Millennium Prize Problems selected by the Clay Mathematics Institute to carry a $ 1,000,000 prize for the
More informationInformation Theory and Coding Prof. S. N. Merchant Department of Electrical Engineering Indian Institute of Technology, Bombay
Information Theory and Coding Prof. S. N. Merchant Department of Electrical Engineering Indian Institute of Technology, Bombay Lecture  17 ShannonFanoElias Coding and Introduction to Arithmetic Coding
More informationChapter 4, Arithmetic in F [x] Polynomial arithmetic and the division algorithm.
Chapter 4, Arithmetic in F [x] Polynomial arithmetic and the division algorithm. We begin by defining the ring of polynomials with coefficients in a ring R. After some preliminary results, we specialize
More informationMATH10212 Linear Algebra. Systems of Linear Equations. Definition. An ndimensional vector is a row or a column of n numbers (or letters): a 1.
MATH10212 Linear Algebra Textbook: D. Poole, Linear Algebra: A Modern Introduction. Thompson, 2006. ISBN 0534405967. Systems of Linear Equations Definition. An ndimensional vector is a row or a column
More informationCS 598CSC: Combinatorial Optimization Lecture date: 2/4/2010
CS 598CSC: Combinatorial Optimization Lecture date: /4/010 Instructor: Chandra Chekuri Scribe: David Morrison GomoryHu Trees (The work in this section closely follows [3]) Let G = (V, E) be an undirected
More informationThe LCA Problem Revisited
The LA Problem Revisited Michael A. Bender Martín Faracholton SUNY Stony Brook Rutgers University May 16, 2000 Abstract We present a very simple algorithm for the Least ommon Ancestor problem. We thus
More informationNotes from Week 1: Algorithms for sequential prediction
CS 683 Learning, Games, and Electronic Markets Spring 2007 Notes from Week 1: Algorithms for sequential prediction Instructor: Robert Kleinberg 2226 Jan 2007 1 Introduction In this course we will be looking
More informationIntroduction to Algorithms. Part 3: P, NP Hard Problems
Introduction to Algorithms Part 3: P, NP Hard Problems 1) Polynomial Time: P and NP 2) NPCompleteness 3) Dealing with Hard Problems 4) Lower Bounds 5) Books c Wayne Goddard, Clemson University, 2004 Chapter
More informationSchool of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213
,, 1{8 () c A Note on Learning from MultipleInstance Examples AVRIM BLUM avrim+@cs.cmu.edu ADAM KALAI akalai+@cs.cmu.edu School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 1513 Abstract.
More information15.062 Data Mining: Algorithms and Applications Matrix Math Review
.6 Data Mining: Algorithms and Applications Matrix Math Review The purpose of this document is to give a brief review of selected linear algebra concepts that will be useful for the course and to develop
More informationOn an antiramsey type result
On an antiramsey type result Noga Alon, Hanno Lefmann and Vojtĕch Rödl Abstract We consider antiramsey type results. For a given coloring of the kelement subsets of an nelement set X, where two kelement
More informationInternational Journal of Information Technology, Modeling and Computing (IJITMC) Vol.1, No.3,August 2013
FACTORING CRYPTOSYSTEM MODULI WHEN THE COFACTORS DIFFERENCE IS BOUNDED Omar Akchiche 1 and Omar Khadir 2 1,2 Laboratory of Mathematics, Cryptography and Mechanics, Fstm, University of Hassan II MohammediaCasablanca,
More informationA Turán Type Problem Concerning the Powers of the Degrees of a Graph
A Turán Type Problem Concerning the Powers of the Degrees of a Graph Yair Caro and Raphael Yuster Department of Mathematics University of HaifaORANIM, Tivon 36006, Israel. AMS Subject Classification:
More informationA Working Knowledge of Computational Complexity for an Optimizer
A Working Knowledge of Computational Complexity for an Optimizer ORF 363/COS 323 Instructor: Amir Ali Ahmadi TAs: Y. Chen, G. Hall, J. Ye Fall 2014 1 Why computational complexity? What is computational
More informationExponential time algorithms for graph coloring
Exponential time algorithms for graph coloring Uriel Feige Lecture notes, March 14, 2011 1 Introduction Let [n] denote the set {1,..., k}. A klabeling of vertices of a graph G(V, E) is a function V [k].
More informationStochastic Inventory Control
Chapter 3 Stochastic Inventory Control 1 In this chapter, we consider in much greater details certain dynamic inventory control problems of the type already encountered in section 1.3. In addition to the
More informationLecture 7: Approximation via Randomized Rounding
Lecture 7: Approximation via Randomized Rounding Often LPs return a fractional solution where the solution x, which is supposed to be in {0, } n, is in [0, ] n instead. There is a generic way of obtaining
More informationChapter 1 A Pri Characterization of T m e Pairs w in Proceedings NCUR V. (1991), Vol. I, pp. 362{366. Jerey F. Gold Department of Mathematics, Department of Physics University of Utah DonH.Tucker Department
More information4.1 Introduction  Online Learning Model
Computational Learning Foundations Fall semester, 2010 Lecture 4: November 7, 2010 Lecturer: Yishay Mansour Scribes: Elad Liebman, Yuval Rochman & Allon Wagner 1 4.1 Introduction  Online Learning Model
More informationEcient approximation algorithm for minimizing makespan. on uniformly related machines. Chandra Chekuri. November 25, 1997.
Ecient approximation algorithm for minimizing makespan on uniformly related machines Chandra Chekuri November 25, 1997 Abstract We obtain a new ecient approximation algorithm for scheduling precedence
More informationarxiv: v2 [math.co] 30 Nov 2015
PLANAR GRAPH IS ON FIRE PRZEMYSŁAW GORDINOWICZ arxiv:1311.1158v [math.co] 30 Nov 015 Abstract. Let G be any connected graph on n vertices, n. Let k be any positive integer. Suppose that a fire breaks out
More informationModern Optimization Methods for Big Data Problems MATH11146 The University of Edinburgh
Modern Optimization Methods for Big Data Problems MATH11146 The University of Edinburgh Peter Richtárik Week 3 Randomized Coordinate Descent With Arbitrary Sampling January 27, 2016 1 / 30 The Problem
More informationSEQUENCES OF MAXIMAL DEGREE VERTICES IN GRAPHS. Nickolay Khadzhiivanov, Nedyalko Nenov
Serdica Math. J. 30 (2004), 95 102 SEQUENCES OF MAXIMAL DEGREE VERTICES IN GRAPHS Nickolay Khadzhiivanov, Nedyalko Nenov Communicated by V. Drensky Abstract. Let Γ(M) where M V (G) be the set of all vertices
More information2.3 Convex Constrained Optimization Problems
42 CHAPTER 2. FUNDAMENTAL CONCEPTS IN CONVEX OPTIMIZATION Theorem 15 Let f : R n R and h : R R. Consider g(x) = h(f(x)) for all x R n. The function g is convex if either of the following two conditions
More informationOn closedform solutions of a resource allocation problem in parallel funding of R&D projects
Operations Research Letters 27 (2000) 229 234 www.elsevier.com/locate/dsw On closedform solutions of a resource allocation problem in parallel funding of R&D proects Ulku Gurler, Mustafa. C. Pnar, Mohamed
More informationNotes on Factoring. MA 206 Kurt Bryan
The General Approach Notes on Factoring MA 26 Kurt Bryan Suppose I hand you n, a 2 digit integer and tell you that n is composite, with smallest prime factor around 5 digits. Finding a nontrivial factor
More informationMathematics for Computer Science/Software Engineering. Notes for the course MSM1F3 Dr. R. A. Wilson
Mathematics for Computer Science/Software Engineering Notes for the course MSM1F3 Dr. R. A. Wilson October 1996 Chapter 1 Logic Lecture no. 1. We introduce the concept of a proposition, which is a statement
More informationCatalan Numbers. Thomas A. Dowling, Department of Mathematics, Ohio State Uni versity.
7 Catalan Numbers Thomas A. Dowling, Department of Mathematics, Ohio State Uni Author: versity. Prerequisites: The prerequisites for this chapter are recursive definitions, basic counting principles,
More informationLectures 56: Taylor Series
Math 1d Instructor: Padraic Bartlett Lectures 5: Taylor Series Weeks 5 Caltech 213 1 Taylor Polynomials and Series As we saw in week 4, power series are remarkably nice objects to work with. In particular,
More informationA REMARK ON ALMOST MOORE DIGRAPHS OF DEGREE THREE. 1. Introduction and Preliminaries
Acta Math. Univ. Comenianae Vol. LXVI, 2(1997), pp. 285 291 285 A REMARK ON ALMOST MOORE DIGRAPHS OF DEGREE THREE E. T. BASKORO, M. MILLER and J. ŠIRÁŇ Abstract. It is well known that Moore digraphs do
More information2.3 Scheduling jobs on identical parallel machines
2.3 Scheduling jobs on identical parallel machines There are jobs to be processed, and there are identical machines (running in parallel) to which each job may be assigned Each job = 1,,, must be processed
More informationAnalysis of Algorithms, I
Analysis of Algorithms, I CSOR W4231.002 Eleni Drinea Computer Science Department Columbia University Thursday, February 26, 2015 Outline 1 Recap 2 Representing graphs 3 Breadthfirst search (BFS) 4 Applications
More informationBreaking Generalized DiffieHellman Modulo a Composite is no Easier than Factoring
Breaking Generalized DiffieHellman Modulo a Composite is no Easier than Factoring Eli Biham Dan Boneh Omer Reingold Abstract The DiffieHellman keyexchange protocol may naturally be extended to k > 2
More informationA Brief Introduction to Property Testing
A Brief Introduction to Property Testing Oded Goldreich Abstract. This short article provides a brief description of the main issues that underly the study of property testing. It is meant to serve as
More informationLinear Codes. Chapter 3. 3.1 Basics
Chapter 3 Linear Codes In order to define codes that we can encode and decode efficiently, we add more structure to the codespace. We shall be mainly interested in linear codes. A linear code of length
More information6.852: Distributed Algorithms Fall, 2009. Class 2
.8: Distributed Algorithms Fall, 009 Class Today s plan Leader election in a synchronous ring: Lower bound for comparisonbased algorithms. Basic computation in general synchronous networks: Leader election
More informationLarge induced subgraphs with all degrees odd
Large induced subgraphs with all degrees odd A.D. Scott Department of Pure Mathematics and Mathematical Statistics, University of Cambridge, England Abstract: We prove that every connected graph of order
More informationThe degree, size and chromatic index of a uniform hypergraph
The degree, size and chromatic index of a uniform hypergraph Noga Alon Jeong Han Kim Abstract Let H be a kuniform hypergraph in which no two edges share more than t common vertices, and let D denote the
More informationTreerepresentation of set families and applications to combinatorial decompositions
Treerepresentation of set families and applications to combinatorial decompositions BinhMinh BuiXuan a, Michel Habib b Michaël Rao c a Department of Informatics, University of Bergen, Norway. buixuan@ii.uib.no
More information(67902) Topics in Theory and Complexity Nov 2, 2006. Lecture 7
(67902) Topics in Theory and Complexity Nov 2, 2006 Lecturer: Irit Dinur Lecture 7 Scribe: Rani Lekach 1 Lecture overview This Lecture consists of two parts In the first part we will refresh the definition
More informationAbout the inverse football pool problem for 9 games 1
Seventh International Workshop on Optimal Codes and Related Topics September 61, 013, Albena, Bulgaria pp. 15133 About the inverse football pool problem for 9 games 1 Emil Kolev Tsonka Baicheva Institute
More informationCHAPTER II THE LIMIT OF A SEQUENCE OF NUMBERS DEFINITION OF THE NUMBER e.
CHAPTER II THE LIMIT OF A SEQUENCE OF NUMBERS DEFINITION OF THE NUMBER e. This chapter contains the beginnings of the most important, and probably the most subtle, notion in mathematical analysis, i.e.,
More informationLecture 4 Online and streaming algorithms for clustering
CSE 291: Geometric algorithms Spring 2013 Lecture 4 Online and streaming algorithms for clustering 4.1 Online kclustering To the extent that clustering takes place in the brain, it happens in an online
More information3. Mathematical Induction
3. MATHEMATICAL INDUCTION 83 3. Mathematical Induction 3.1. First Principle of Mathematical Induction. Let P (n) be a predicate with domain of discourse (over) the natural numbers N = {0, 1,,...}. If (1)
More information! Solve problem to optimality. ! Solve problem in polytime. ! Solve arbitrary instances of the problem. #approximation algorithm.
Approximation Algorithms 11 Approximation Algorithms Q Suppose I need to solve an NPhard problem What should I do? A Theory says you're unlikely to find a polytime algorithm Must sacrifice one of three
More informationContinued Fractions and the Euclidean Algorithm
Continued Fractions and the Euclidean Algorithm Lecture notes prepared for MATH 326, Spring 997 Department of Mathematics and Statistics University at Albany William F Hammond Table of Contents Introduction
More informationStationary random graphs on Z with prescribed iid degrees and finite mean connections
Stationary random graphs on Z with prescribed iid degrees and finite mean connections Maria Deijfen Johan Jonasson February 2006 Abstract Let F be a probability distribution with support on the nonnegative
More information3.2 The Factor Theorem and The Remainder Theorem
3. The Factor Theorem and The Remainder Theorem 57 3. The Factor Theorem and The Remainder Theorem Suppose we wish to find the zeros of f(x) = x 3 + 4x 5x 4. Setting f(x) = 0 results in the polynomial
More informationON INDUCED SUBGRAPHS WITH ALL DEGREES ODD. 1. Introduction
ON INDUCED SUBGRAPHS WITH ALL DEGREES ODD A.D. SCOTT Abstract. Gallai proved that the vertex set of any graph can be partitioned into two sets, each inducing a subgraph with all degrees even. We prove
More informationLecture 6 Online and streaming algorithms for clustering
CSE 291: Unsupervised learning Spring 2008 Lecture 6 Online and streaming algorithms for clustering 6.1 Online kclustering To the extent that clustering takes place in the brain, it happens in an online
More information8.1 Min Degree Spanning Tree
CS880: Approximations Algorithms Scribe: Siddharth Barman Lecturer: Shuchi Chawla Topic: Min Degree Spanning Tree Date: 02/15/07 In this lecture we give a local search based algorithm for the Min Degree
More information24. The Branch and Bound Method
24. The Branch and Bound Method It has serious practical consequences if it is known that a combinatorial problem is NPcomplete. Then one can conclude according to the present state of science that no
More information6 EXTENDING ALGEBRA. 6.0 Introduction. 6.1 The cubic equation. Objectives
6 EXTENDING ALGEBRA Chapter 6 Extending Algebra Objectives After studying this chapter you should understand techniques whereby equations of cubic degree and higher can be solved; be able to factorise
More informationApplied Algorithm Design Lecture 5
Applied Algorithm Design Lecture 5 Pietro Michiardi Eurecom Pietro Michiardi (Eurecom) Applied Algorithm Design Lecture 5 1 / 86 Approximation Algorithms Pietro Michiardi (Eurecom) Applied Algorithm Design
More informationMany algorithms, particularly divide and conquer algorithms, have time complexities which are naturally
Recurrence Relations Many algorithms, particularly divide and conquer algorithms, have time complexities which are naturally modeled by recurrence relations. A recurrence relation is an equation which
More informationLinear Codes. In the V[n,q] setting, the terms word and vector are interchangeable.
Linear Codes Linear Codes In the V[n,q] setting, an important class of codes are the linear codes, these codes are the ones whose code words form a subvector space of V[n,q]. If the subspace of V[n,q]
More informationSHORT CYCLE COVERS OF GRAPHS WITH MINIMUM DEGREE THREE
SHOT YLE OVES OF PHS WITH MINIMUM DEEE THEE TOMÁŠ KISE, DNIEL KÁL, END LIDIKÝ, PVEL NEJEDLÝ OET ŠÁML, ND bstract. The Shortest ycle over onjecture of lon and Tarsi asserts that the edges of every bridgeless
More informationAnswer Key for California State Standards: Algebra I
Algebra I: Symbolic reasoning and calculations with symbols are central in algebra. Through the study of algebra, a student develops an understanding of the symbolic language of mathematics and the sciences.
More informationOffline sorting buffers on Line
Offline sorting buffers on Line Rohit Khandekar 1 and Vinayaka Pandit 2 1 University of Waterloo, ON, Canada. email: rkhandekar@gmail.com 2 IBM India Research Lab, New Delhi. email: pvinayak@in.ibm.com
More informationQuotient Rings and Field Extensions
Chapter 5 Quotient Rings and Field Extensions In this chapter we describe a method for producing field extension of a given field. If F is a field, then a field extension is a field K that contains F.
More informationx 2 f 2 e 1 e 4 e 3 e 2 q f 4 x 4 f 3 x 3
KarlFranzensUniversitat Graz & Technische Universitat Graz SPEZIALFORSCHUNGSBEREICH F 003 OPTIMIERUNG und KONTROLLE Projektbereich DISKRETE OPTIMIERUNG O. Aichholzer F. Aurenhammer R. Hainz New Results
More information