

THE SHRINKAGE EXPONENT OF DE MORGAN FORMULAE IS 2

JOHAN HÅSTAD
ROYAL INSTITUTE OF TECHNOLOGY, STOCKHOLM, SWEDEN

Abstract. We prove that if we hit a de Morgan formula of size L with a random restriction from R_p then the expected remaining size is at most O(p²(log(1/p))^{3/2} L + p√L). As a corollary we obtain an Ω(n^{3−o(1)}) formula size lower bound for an explicit function in P. This is the strongest known lower bound for any explicit function in NP.

Key words. Lower bounds, formula size, random restrictions, computational complexity.

AMS subject classifications. 68Q25

Warning: Essentially this paper has been published in SIAM Journal on Computing and is hence subject to copyright restrictions. It is for personal use only.

1. Introduction. Proving lower bounds for various computational models is of fundamental value to our understanding of computation. Still we are very far from proving strong lower bounds for realistic models of computation, but at least there is more or less constant progress. In this paper we study formula size for formulae over the basis ∧, ∨ and ¬. Our technique is based on random restrictions, which were first defined and explicitly used in [] although some earlier results can be formalized in terms of this type of random restrictions. To create a random restriction in the space R_p we, independently for each variable, keep it as a variable with probability p and otherwise assign it the value 0 or 1 with equal probabilities (1−p)/2. Now suppose we have a function given by a de Morgan formula of size L. What will be the expected formula size of the induced function when we apply a random restriction from R_p? The obvious answer is that this size will be at most pL. Subbotovskaya [] was the first to observe that formulae actually shrink more. Namely, she established an upper bound

(1)    O(p^{1.5} L + 1)

on the expected formula size of the induced function. This result allowed her to derive an Ω(n^{1.5}) lower bound on the de Morgan formula size of the parity function.
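To make the model of random restrictions concrete, here is a small illustrative sketch (ours, not from the paper) of sampling a restriction from R_p, with ∗ represented by the character '*':

```python
import random

def random_restriction(n, p, rng=random):
    """Sample a restriction from R_p on n variables: each variable stays
    free ('*') with probability p, and is otherwise fixed to '0' or '1'
    with equal probability (1 - p)/2."""
    return ['*' if rng.random() < p else rng.choice('01') for _ in range(n)]
```

Applying such a restriction to a formula of size L leaves an expected pL leaves even before any simplification, which is exactly the trivial bound that the paper improves to roughly p²L.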
This latter bound was superseded by Khrapchenko [, 3] who, using a different method, proved a tight Ω(n²) lower bound for the parity function. His result implied that the parity function shrinks by a factor O(p²), and provided an upper bound Γ ≤ 2 on the shrinkage exponent Γ, defined as the least upper bound of all exponents that can replace 1.5 in (1). New impetus for research on the expected size of the reduced formula was given by Andreev [9] who, based upon Subbotovskaya's result, derived an n^{2.5−o(1)} lower bound on the de Morgan formula size for a function in P. An inspection of the proof reveals that his method actually gives for the same function the bound n^{1+Γ−o(1)}. New improvements of the lower bound on Γ followed. Nisan and Impagliazzo [5] proved that Γ ≥ (21−√73)/8 ≈ 1.55. Paterson and Zwick [6], complementing the
technique from [5] by very clever and natural arguments, pushed this bound further to Γ ≥ (5−√3)/2 ≈ 1.63. One can also define the corresponding notion for read-once formulae. For such formulae it was established in [3] that Γ_ro = 1/log₂(√5−1) ≈ 3.27. This result was made tight in [], in that they removed a polylogarithmic factor in the bounds.

In this paper we continue (and possibly end) this string of results by proving that Γ = 2. To be more precise, we prove that the expected remaining size is O(p²(log(1/p))^{3/2} L + p√L). As discussed above, this gives an Ω(n^{3−o(1)}) lower bound for the formula size of the function defined by Andreev. Our proof is by a sequence of steps. We first analyze the probability of reducing the formula to a single literal. When viewing the situation suitably, this first lemma gives a nice and not very difficult generalization of Khrapchenko's [, 3] lower bounds for formula size. As an illustration of the power of this lemma we next, without too much difficulty, show how to establish the desired shrinkage when the formula is balanced. The general case is more complicated due to the fact that we need to rely on more dramatic simplifications. Namely, suppose that φ = φ₁ ∧ φ₂ and φ₁ is much smaller than φ₂. Then, from an intuitive point of view, it seems like we are in a good position to prove that we have substantial shrinkage, since it seems quite likely that φ₁ is reduced to the constant 0 and we can erase all of φ₂. The key new point in the main proof is that we have to establish that this actually happens. In the balanced case, we did not need this mechanism. The main theorem is established in two steps. First we prove that the probability that a formula of size at least 2 remains after we have applied a restriction from R_p is small, and then we prove that the expected remaining size is indeed small. It is curious to note that all except our last and main result are proved even under an arbitrary but "favorable" conditioning, while we are not able to carry this through for the main theorem.
2. Notation. A de Morgan formula is a binary tree in which each leaf is labeled by a literal from the set {x₁, …, x_n, ¬x₁, …, ¬x_n} and each internal node v is labeled by an operation which is either ∧ or ∨. The size of a formula φ is defined as the number of leaves and is denoted by L(φ). The depth D(φ) is the depth of the underlying tree. The size and the depth of a Boolean function f are, respectively, the minimal size and depth of any de Morgan formula computing f in the natural sense. For convenience we define the size and depth of a constant function to be 0.

A restriction is an element of {0, 1, ∗}^n. For p ∈ [0, 1] a random restriction ρ from R_p is chosen by setting, randomly and independently, each variable to ∗ with probability p and to 0 and 1 with equal probabilities (1−p)/2. The interpretation of giving the value ∗ to a variable is that it remains a variable, while in the other cases the given constant is substituted as the value of the variable. All logarithms in this paper are to the base 2. We use the notation x_i^a to denote ¬x_i when a = 0 and x_i when a = 1. We also need the concept of a filter.

Definition 2.1. A set Γ of restrictions is a filter, if when ρ ∈ Γ and ρ(x_i) = ∗, then for b ∈ {0, 1} the restriction ρ′ obtained by setting ρ′(x_j) = ρ(x_j) for every j ≠ i and ρ′(x_i) = b also belongs to Γ.

For any event E we will use Pr[E | Γ] as shorthand for Pr[E | ρ ∈ Γ]. Note that the intersection of two filters is a filter.

3. Preliminaries. We analyze the expected size of a formula after it has been hit with a restriction from R_p. The variables that are given values are substituted
into the formula, after which we use the following rules of simplification:
1. If one input to an ∨-gate is given the value 0, we erase this input and let the other input of this gate take the place of the output of the gate.
2. If one input to an ∨-gate is given the value 1, we replace the gate by the constant 1.
3. If one input to an ∧-gate is given the value 1, we erase this input and let the other input of this gate take the place of the output of the gate.
4. If one input to an ∧-gate is given the value 0, we replace the gate by the constant 0.
5. If one input of an ∨-gate is reduced to the single literal x_i (¬x_i), then x_i = 0 (x_i = 1) is substituted in the formula giving the other input to this gate. If possible, we do further simplifications in this subformula.
6. If one input of an ∧-gate is reduced to the single literal x_i (¬x_i), then x_i = 1 (x_i = 0) is substituted in the formula giving the other input to this gate. If possible, we do further simplifications in this subformula.

We call the last two rules the one-variable simplification rules. All rules preserve the function the formula is computing. Observe that the one-variable simplification rules are needed to get a nontrivial decrease of the size of the formula, as can be seen from the pathological case when the original formula consists of an ∨ (or ∧) of L copies of a single variable x_i. If these rules did not exist, then with probability p the entire formula would remain and we would get an expected remaining size which is pL. Using the above rules we prove that instead we always get an expected remaining size which is at most slightly larger than p²L.

In order to be able to speak about the reduced formula in an unambiguous way, let us be more precise about the order in which we do the simplifications. Suppose that φ is a formula and that φ = φ₁ ∧ φ₂. We first make the simplifications in φ₁ and φ₂ and only later the simplifications which are connected with the top gate.
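For concreteness, the rules above (including the one-variable rules) can be implemented directly. The following is our own illustrative sketch, not part of the paper: formulae are nested tuples, and a restriction is a dict from variable index to '0' or '1', with absent indices meaning ∗.

```python
def simplify(phi, rho):
    """Apply restriction rho to formula phi and return the reduced formula.

    phi is 0, 1, a literal ('lit', i, neg) meaning x_i (neg=0) or its
    negation (neg=1), or a gate (op, left, right) with op 'and' or 'or'.
    """
    if phi in (0, 1):
        return phi
    if phi[0] == 'lit':
        _, i, neg = phi
        v = rho.get(i, '*')
        return phi if v == '*' else int(v) ^ neg
    op, left, right = phi
    left, right = simplify(left, rho), simplify(right, rho)
    absorb = 1 if op == 'or' else 0   # this constant decides the gate
    ident = 1 - absorb                # this constant is erased
    if absorb in (left, right):
        return absorb
    if left == ident:
        return right
    if right == ident:
        return left
    # one-variable rules: a lone literal on one side is substituted into
    # the other side, with the value that makes the literal equal to ident
    for lit, other in ((left, right), (right, left)):
        if isinstance(lit, tuple) and lit[0] == 'lit':
            _, i, neg = lit
            reduced = simplify(other, {i: str(ident ^ neg)})
            if reduced != other:
                return simplify((op, lit, reduced), {})
    return (op, left, right)

def size(phi):
    """Number of leaves; constants have size 0."""
    if phi in (0, 1):
        return 0
    return 1 if phi[0] == 'lit' else size(phi[1]) + size(phi[2])
```

On the pathological example from the text, an ∧ of copies of x₁ ∨ x₂, such a simplifier leaves the formula whole exactly when both variables stay free, matching the p² probability discussed later.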
This order implies that the simplified φ will not always consist of a copy of the simplified φ₁ and a copy of the simplified φ₂, since the combination might give more simplifications. In particular, this will happen if φ₁ is simplified to one variable x_i, since then x_i = 1 will be substituted in the simplified φ₂. Whenever a one-variable simplification rule actually results in a change in the other subformula, we say that a one-variable simplification is active at the corresponding gate. We let φ⌈ρ denote the formula that results when the above simplifications are done to φ. As usual, L(φ⌈ρ) denotes the size of this formula.

It is important to note that simplifications have a certain commutativity property. We say that two restrictions ρ₁ and ρ₂ are compatible if they never give two different constant values to the same x_i. In other words, for any x_i the pair (ρ₁(x_i), ρ₂(x_i)) is one of the pairs (∗, ∗), (∗, 0), (∗, 1), (0, ∗), (0, 0), (1, ∗) or (1, 1). For compatible restrictions we can define the combined restriction ρ₁ρ₂ which in the mentioned 7 cases takes the values ∗, 0, 1, 0, 0, 1, 1 respectively. This combined restriction is the result (on the variable level) of doing first ρ₁ and then doing ρ₂ on the variables given the value ∗ by ρ₁. Note that the fact that ρ₁ and ρ₂ are compatible makes the combining operator commutative. We need to make sure that combination acts in the proper way also on formulae.

Lemma 3.1. Let ρ₁ and ρ₂ be two compatible restrictions; then for any φ,

(φ⌈ρ₁)⌈ρ₂ = (φ⌈ρ₂)⌈ρ₁ = φ⌈ρ₁ρ₂.
Proof. Let ρ denote ρ₁ρ₂. Clearly we need only establish (φ⌈ρ₁)⌈ρ₂ = φ⌈ρ, since the other equality follows by symmetry. We proceed by induction over the size of φ. When the size is 1 the claim is obvious. For the general case we assume that φ = φ₁ ∧ φ₂ (the case φ = φ₁ ∨ φ₂ being similar) and we have two cases:
1. Some one-variable simplification rule is active at the top gate when defining φ⌈ρ₁.
2. This is not the case.

In case 2 there is no problem, since there is no interaction between φ₁ and φ₂ until after both ρ₁ and ρ₂ have been applied. Namely, by induction φ_j⌈ρ = (φ_j⌈ρ₁)⌈ρ₂ for j = 1, 2, and since we do all simplifications to the subformulae before we do any simplification associated with the top gate, the result will be the same in both cases.

In case 1, assume that φ₁⌈ρ₁ = x_j. This means that x_j = 1 is substituted in φ₂⌈ρ₁. Viewing this substitution as a restriction ρ^{(j)} giving a non-∗ value to only one variable, φ is simplified to (φ₂⌈ρ₁)⌈ρ^{(j)}. Now we have three possibilities depending on the value of ρ₂ on x_j. When ρ₂(x_j) = 0, then (φ₁⌈ρ₁)⌈ρ₂ ≡ 0, and by induction φ₁⌈ρ ≡ 0, and hence (φ⌈ρ₁)⌈ρ₂ = φ⌈ρ ≡ 0. When ρ₂(x_j) = 1, then by induction (φ₁⌈ρ₁)⌈ρ₂ = φ₁⌈ρ ≡ 1, and since ρ^{(j)} is compatible with (even a subassignment of) ρ₂, we have

(φ⌈ρ₁)⌈ρ₂ = ((φ₂⌈ρ₁)⌈ρ^{(j)})⌈ρ₂ = (φ₂⌈ρ₁)⌈ρ₂ = φ₂⌈ρ = φ⌈ρ,

where we again have used the induction hypothesis. When ρ₂(x_j) = ∗, then since (by induction) φ₁⌈ρ = (φ₁⌈ρ₁)⌈ρ₂ = x_j, when simplifying φ by ρ, φ₂ will be simplified also by the restriction ρ^{(j)}, and the φ₂-part of φ⌈ρ will be (φ₂⌈ρ)⌈ρ^{(j)}, which by induction is equal to φ₂⌈ρρ^{(j)}. On the other hand, when simplifying first by ρ₁ and then by ρ₂, the φ₂-part reduces to ((φ₂⌈ρ₁)⌈ρ^{(j)})⌈ρ₂, which by applying the induction hypothesis twice is equal to φ₂⌈ρ₁ρ^{(j)}ρ₂, and since ρρ^{(j)} = ρ₁ρ^{(j)}ρ₂ the lemma follows.

We will need the above lemma in the case of analyzing what happens when we use the one-variable simplification rules. In that case the restriction ρ₂ will just give a non-∗ value to one variable. Since the lemma is quite simple and natural we will not always mention it explicitly when we use it.

Two examples to keep in mind during the proof are the following:
1. Suppose φ computes the parity of m variables and is of size m².
Then if p is small, the probability that φ⌈ρ will depend on exactly one variable is about pm = p√L(φ), and if p is large, we expect that the remaining formula will compute the parity of around pm variables and thus be of size at least (pm)² = p²L(φ).
2. Suppose φ is the ∧ of L/2 copies of x₁ ∨ x₂. By our rules of simplification, φ will not be simplified if both x₁ and x₂ are given the value ∗ by the restriction. Hence with probability p² the entire formula remains, and we get an expected remaining size of at least p²L.

4. Reducing to size 1. We start by estimating the probability that a given formula reduces to size one. For notational convenience we set q = 2p/(1−p). This will be useful since we will change the values of restrictions at individual points, and if we change a non-∗ value to ∗, we multiply the probability by q. Since we are interested in the case when p is small, q is essentially 2p.
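The factor q can be verified directly from the definition of R_p; as a short check (our own, with σ′ denoting the restriction obtained from σ by changing one non-∗ coordinate to ∗):

```latex
\frac{\Pr[\sigma']}{\Pr[\sigma]}
  = \frac{p}{(1-p)/2}
  = \frac{2p}{1-p}
  = q .
```

So each such local change costs exactly a factor q in probability, and q → 2p as p → 0.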
Lemma 4.1. Let φ be a formula of size L and ρ a random restriction in R_p. Let E_b be the event that φ is reduced to the constant b by ρ, for b = 0, 1. Furthermore, let Γ be any filter. Then Pr[L(φ⌈ρ) = 1 | Γ] is bounded by

q (L Pr[E₀ | Γ] Pr[E₁ | Γ])^{1/2}.

Remark: Most of the time the implied bound q√(L/4) (which follows from x(1−x) ≤ 1/4) will be sufficient.

Proof. Let σ be a restriction that satisfies L(φ⌈σ) = 1 and belongs to Γ, and suppose that σ makes φ into the literal x_i^a. By definition σ(x_i) = ∗, and we have two fellow restrictions σ_b, b = 0, 1, where σ_b is obtained by changing σ(x_i) to the constant that makes the literal, and hence φ, take the value b. σ_b contributes to the event E_b, and by the definition of a filter it belongs to Γ. We can hence identify the set of restrictions we are interested in with edges between restrictions that reduce the formula to the constants 0 and 1 respectively, and we are on familiar grounds. Let A be the set of restrictions that satisfy E₀ and belong to Γ, and let B be the set of restrictions that satisfy E₁ and belong to Γ. We partition A × B into rectangles A_j × B_j, where for each j there is some variable x_{i_j} which takes a different value in A_j and in B_j. This was first done in [0] (see also [7]), but we will here need a slight generalization and thus we choose to use the more intuitive framework introduced by Karchmer and Wigderson [4]. In the normal KW-game, P₁ gets an input x from f^{−1}(1) while P₀ gets an input y from f^{−1}(0), and their task is to find a coordinate i such that x_i ≠ y_i. This is solved by tracing the formula from the output to an input, maintaining the property that the two inputs give different values to the gates on the path. This is achieved in the following way. At an ∧-gate, P₀ points to the input of this gate that evaluates to 0 on y. Similarly, at an ∨-gate P₁ points to the input that evaluates to 1 on x. We extend this game by giving P_b a restriction ρ_b that simplifies the formula to the constant b.
The task is to nd an x i on which the two restrictions take dierent values (we allow answers where one restriction takes the value and the other takes the value 0 or ). To solve this game both players start by setting (x j ) = for each j such that (x j )=. After this they play the standard KWgame. If the path ends at literal x i, then in the extended restrictions (x i )=xor( ). Note that if (x i ) = 0, then this was the initial value (since we only change values to ), while if (x i )=then the initial value was or. In either case we solve the problem. The extended KWgame creates a partition of A B and let A j B j be the inputs that reach leafj. Note that the fact that the set of inputs that reach a certain leaf is a product set follows from the fact that each move of the game is determined by one of the players based only on his own input. Let C j be the set of restrictions that satisfy L(d ) = and belong to and such that the pair ( 0 ) reaches leaf j. By necessity the literal that appears at leaf j is the literal to which reduced the formula. Now, note that the probability ofc j is bounded by q times the probability of A j. This follows since the mapping 7! 0 gives a onetoone correspondence of C j with a subset of A j and that Pr() =qpr( 0 ) for each. We have the same relation between C j and B j and hence Pr[L(d )=j] = j Pr[C j j] j q (Pr[A j j]pr[b j j]) =
≤ q L^{1/2} ( Σ_j Pr[A_j | Γ] Pr[B_j | Γ] )^{1/2} = q (L Pr[A | Γ] Pr[B | Γ])^{1/2},

where we used the Cauchy–Schwarz inequality (the number of leaves is at most L) and the fact that the rectangles A_j × B_j partition A × B.

Remark: Please note that the theorem of Khrapchenko is indeed a special case of this lemma. Khrapchenko starts with A ⊆ f^{−1}(0), B ⊆ f^{−1}(1) and C which is a set of edges of the form (a, b), where a ∈ A, b ∈ B and the Hamming distance between a and b is one. As noted above, each such edge naturally corresponds to a restriction σ by setting σ(x_i) = a_i when a_i = b_i and σ(x_{i₀}) = ∗ for the unique coordinate i₀ such that a_{i₀} ≠ b_{i₀}. Abusing notation, we can identify the restriction and the edge. Now setting Γ = A ∪ B ∪ C we get a filter, and since

Pr[C] / (q|C|) = Pr[A] / |A| = Pr[B] / |B|,

the lemma reduces to Khrapchenko's theorem, i.e. to

L ≥ |C|² / (|A| |B|).

5. The balanced case. In the general case we cannot hope to have an estimate which depends on the probability of reducing φ to either constant. The reason is that formulae that describe tautologies do not always reduce to the constant 1. It remains to take care of the probability that the remaining formula is of size greater than 1.

Definition 5.1. Let L₂(φ) be the expected size of the remaining formula where we ignore the result if it is of size ≤ 1, i.e. L₂(φ) = Σ_{i≥2} i Pr[L(φ⌈ρ) = i]. Furthermore, let L₂(φ | Γ) be the same quantity conditioned on ρ ∈ Γ.

Here we think of ρ as taken randomly from R_p, and thus L₂(φ) depends on the parameter p. We will, however, suppress this dependence. To familiarize the reader with the ideas involved in the proof, we first prove the desired result when the formula is balanced. We will here take the strictest possible definition of balanced, namely that the formula is a complete binary tree, and just establish the size of the reduced formula as a function of its original depth.

Theorem 5.2. Let φ be a formula of depth d and ρ a random restriction in R_p. Let Γ be any filter and assume that q ≤ (d² 2^{1+d/2})^{−1}; then

L₂(φ | Γ) ≤ q² d² 2^{d−1}.

Proof. The proof is by induction over the depth. The base case (d = 1) is obvious.
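The reduction in the remark above can be sanity-checked by brute force on small n. The following sketch (our own names, not from the paper) computes Khrapchenko's bound |C|²/(|A||B|) with A = f⁻¹(0), B = f⁻¹(1) and C all pairs at Hamming distance one:

```python
from itertools import product

def khrapchenko(f, n):
    """Khrapchenko's formula-size lower bound |C|^2 / (|A| |B|)."""
    A = [x for x in product((0, 1), repeat=n) if not f(x)]
    B = [x for x in product((0, 1), repeat=n) if f(x)]
    # C: edges of the Boolean cube crossing between A and B
    C = sum(1 for a in A for b in B
            if sum(u != v for u, v in zip(a, b)) == 1)
    return C * C / (len(A) * len(B))

parity = lambda x: sum(x) % 2
```

For parity every cube edge crosses between even and odd weight, so |C| = n·2^{n−1}, |A| = |B| = 2^{n−1}, and the bound is exactly n², recovering Khrapchenko's Ω(n²) lower bound.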
Now suppose φ = φ₁ ∧ φ₂ (the ∨-case is similar), where the depth of each φ_i is d − 1 and hence the size is bounded by 2^{d−1}. We need to consider the following events:
1. L(φ_i⌈ρ) ≥ 2, for i = 1 or i = 2.
2. L(φ₁⌈ρ) = 1 and the size of φ₂ after the simplification by ρ and the application of the one-variable simplification rule is at least 2.
3. L(φ₂⌈ρ) = 1 and the size of φ₁ after the simplification by ρ and the application of the one-variable simplification rule is at least 2.
The estimate for L₂(φ | Γ) is now

q²(d−1)² 2^{d−1} + Pr[case 2] + Pr[case 3].

The first term comes from any subformula of size at least two appearing in either of the three cases, while the other two terms cover the new contribution in the respective cases.

Let us analyze the probability of case 2. Let p_i be the probability that φ₁ reduces to a literal of x_i. We know by Lemma 4.1 that

Σ_i p_i ≤ q 2^{(d−3)/2}.

Now consider the conditional probability that, given that φ₁ reduces to x_i, φ₂ does not reduce to a constant. The condition that φ₁ reduces to x_i can be written as "ρ ∈ Γ′ ∧ ρ(x_i) = ∗" for some filter Γ′. The reason for this is that if φ₁ reduces to x_i, then changing any ρ(x_j), j ≠ i, from ∗ to a constant, the resulting restriction still reduces φ₁ to x_i. This follows by Lemma 3.1. Thus we should work with the conditioning Γ ∩ Γ′ ∧ ρ(x_i) = ∗. Now we substitute x_i = 1 in φ₂, and we can forget the variable x_i and just keep the restrictions in Γ ∩ Γ′ that satisfy ρ(x_i) = ∗ (as restrictions on the other n − 1 variables). This yields a filter Γ″. Thus we want to estimate the conditional probability that φ₂ with x_i = 1 substituted does not reduce to a constant, given Γ″. But now we are in position to apply induction, and hence this probability can be estimated by

q 2^{(d−3)/2} + q²(d−1)² 2^{d−2},

where the first term is given by Lemma 4.1 and the second term is a bound on L₂ of the subformula conditioned on Γ″. Using q ≤ (d² 2^{1+d/2})^{−1} we get the total bound q 2^{d/2−1}. This implies that the total probability of case 2 is bounded by

Σ_i p_i · q 2^{d/2−1} ≤ q² 2^{d−5/2}.

The probability of case 3 can be bounded in the same way, and finally

q²(d−1)² 2^{d−1} + q² 2^{d−3/2} ≤ q² d² 2^{d−1},

and the proof is complete.

It is not difficult to extend Theorem 5.2 to larger d, but we leave the details to the reader.

6. The probability of size at least 2 remaining. The reason why things are so easy in the balanced case is that we need not rely on very complicated simplifications. In particular, we did not need the fact that a subformula can kill its brother. This will be needed in general, and let us start by:

Lemma 6.1.
Let φ be a formula of size L and ρ a random restriction in R_p. Let Γ be any filter and assume that q ≤ (√L log L)^{−1}. Then the probability that L(φ⌈ρ) ≥ 2, conditioned on Γ, is at most

q² L (log L)^{1/2}.

Proof. We prove the lemma by induction over the size of the formula. It is not hard to see that the lemma is correct when L ≤ 2. Suppose that φ = φ₁ ∧ φ₂ (the ∨-case is similar), where L(φ_i) = L_i, L₁ + L₂ = L, and L₁ ≤ L₂. We divide the event in question into the following pieces:
1. L(φ₁⌈ρ) ≥ 2.
2. L(φ₁⌈ρ) = 1 and even after the one-variable simplification rule is used the size of the simplified φ₂ is at least 1.
3. φ₁ is reduced to the constant 1 by ρ and L(φ₂⌈ρ) ≥ 2.

The probability of the first event is by induction bounded by q²L₁(log L₁)^{1/2}. Suppose the probability, conditioned on Γ, that φ₁ is reduced to the constant 1 is Q. Call the corresponding set of restrictions Γ′. Note that Γ′ is a filter. Then using the induction hypothesis with the conditioning Γ ∩ Γ′, we get the probability of the third event to be bounded by

Q q² L₂ (log L₂)^{1/2}.

Let us now estimate the probability of the second case. The probability that L(φ₁⌈ρ) = 1 is by Lemma 4.1 bounded by q(L₁ Q(1−Q))^{1/2}. To estimate the conditional probability that, given that L(φ₁⌈ρ) = 1, φ₂ remains to be of size at least 1, we reason exactly as in the proof of Theorem 5.2. Hence, this probability can be estimated by

q√(L₂/4) + q² L₂ (log L₂)^{1/2},

where the first term comes from an application of Lemma 4.1 and the second term from the inductive assumption. Thus for the second case we get the total bound

q(L₁ Q(1−Q))^{1/2} ( q√(L₂/4) + q² L₂ (log L₂)^{1/2} ) ≤ q²(L₁ L₂ Q(1−Q))^{1/2} ≤ q²(L₁ L₂ (1−Q))^{1/2},

where we used q ≤ (√L log L)^{−1} ≤ (√L₂ log L₂)^{−1}.

We want to bound the sum of the estimates for the probabilities of cases 2 and 3. Differentiating with respect to Q yields

q² L₂ (log L₂)^{1/2} − q² (L₁ L₂)^{1/2} / (2(1−Q)^{1/2}).

Thus the derivative is 0 when

1 − Q = L₁ / (4 L₂ log L₂),

and this corresponds to a maximum since the second derivative is negative. Using this optimal value for Q we get a total estimate which is

q² L₁ (log L₁)^{1/2} + q² L₂ (log L₂)^{1/2} + (q²/4) L₁ (log L₂)^{−1/2},
and we need to prove that this is less than q² L (log L)^{1/2} when L₁ ≤ L₂ and L = L₁ + L₂. This is only calculus, but let us give the proof. First note that log L₂ ≥ log⌈L/2⌉ ≥ (log L)/2 when L ≥ 3, and hence it is sufficient to bound

q² L₁ (log L₁)^{1/2} + q² L₂ (log L₂)^{1/2} + (q²/2) L₁ (log L)^{−1/2}.

Now if we set H(x) = x(log x)^{1/2}, it is not difficult to see that H″(x) is positive for x ≥ 2, and hence the above expression is convex as a function of L₁ for 2 ≤ L₁ ≤ L/2. This means that it is maximized either for L₁ ∈ {1, 2} or for L₁ = L/2. The first two cases correspond to the inequalities

(L−1)(log(L−1))^{1/2} + (1/2)(log L)^{−1/2} ≤ L(log L)^{1/2}

and

2 + (L−2)(log(L−2))^{1/2} + (log L)^{−1/2} ≤ L(log L)^{1/2},

which are easily seen to hold for L ≥ 3 and L ≥ 4 respectively. To check the required inequality when L₁ = L/2, we need to prove that

L(log(L/2))^{1/2} + (L/4)(log L)^{−1/2} ≤ L(log L)^{1/2}.

This follows from

√x − √(x−1) ≥ 1/(2√x),

which is valid for all x ≥ 1.

7. Main shrinkage theorem. We are now ready for our main theorem.

Theorem 7.1. Let φ be a formula of size L and ρ a random restriction in R_p. Then the expected size of φ⌈ρ is bounded by

O( p² (1 + (log(min(1/p, L)))^{3/2}) L + p√L ).

Crucial to this theorem is the following lemma:

Lemma 7.2. Let φ be a formula of size L and ρ a random restriction in R_p. If q ≤ (√L log L)^{−1}, then

L₂(φ) ≤ 30 q² L (log L)^{3/2},

while if q ≥ (4√L log L)^{−1}, then

L₂(φ) ≤ 100 q² L (log q^{−1})^{3/2}.

First note that Theorem 7.1 follows from Lemma 7.2 together with Lemma 4.1, and thus it is sufficient to prove Lemma 7.2. Also note that Lemma 7.2 and Theorem 7.1 do not consider conditional probabilities. There are two reasons for this. It is not needed, and the proof does not seem to allow it.

Proof. (Of Lemma 7.2) The hard part of the lemma is the case when q is small, and we will start by establishing this case. As before we proceed by induction. The
base case (L = 1) is obvious. Suppose that φ = φ₁ ∧ φ₂ (the ∨-case is similar), where L(φ_i) = L_i, L₁ + L₂ = L and L₁ ≤ L₂. Our basic estimate for L₂(φ) is L₂(φ₁) + L₂(φ₂), which by induction can be bounded by

S(L₁, L₂) = 30 q² L₁ (log L₁)^{3/2} + 30 q² L₂ (log L₂)^{3/2}.

We need to revise this estimate, and in particular we need to consider the events which contribute either to the event described by the basic estimate (L(φ_i⌈ρ) ≥ 2 for i = 1 or i = 2) or to the event we are trying to estimate (L(φ⌈ρ) ≥ 2).
1. We have L(φ_i⌈ρ) ≥ 2 for i = 1 and i = 2.
2. L(φ_i⌈ρ) = 1 for i = 1 or i = 2, and the one-variable simplification rule was active at the top gate.
3. L(φ₁⌈ρ) = L(φ₂⌈ρ) = 1 and the one-variable simplification rule was not active at the top gate.
4. L(φ₁⌈ρ) = 1 and L(φ₂⌈ρ) ≥ 2 and the one-variable simplification rule was not active at the top gate.
5. L(φ₂⌈ρ) = 1 and L(φ₁⌈ρ) ≥ 2 and the one-variable simplification rule was not active at the top gate.
6. The function φ₁ is reduced to the constant 0 while L(φ₂⌈ρ) ≥ 2.
7. The function φ₁ is reduced to the constant 1 while L(φ₂⌈ρ) ≥ 2.
8. The function φ₂ is reduced to the constant 0 while L(φ₁⌈ρ) ≥ 2.
9. The function φ₂ is reduced to the constant 1 while L(φ₁⌈ρ) ≥ 2.

Let us first investigate what corrections we need to make to our basic estimate in the various cases.

Case 1: The basic estimate is correct.

Case 2: Suppose that φ₁ reduces to x_i. If the resulting formula is of size at least 2, then φ₂ was of size at least 2 before we did the simplification of substituting a constant for x_i. This simplification reduced the size of φ₂ by at least one. This means that the formula size of φ⌈ρ is at most the formula size of φ₂⌈ρ before we did this simplification. But in our basic estimate we have not taken this simplification into account, and thus we need not add anything to our basic estimate in this case.

Case 3: In this case we need to add 2 to our basic estimate. We will need some work to estimate the probability of this case.

Case 4: In this case we need to add 1 to our basic estimate. Also, the probability of this event needs a little bit of work.

Case 5: In this case we need to add 1 to our basic estimate.
From our previous work, we can estimate the probability of this event simply by the probability that the remaining size of φ₁ is at least 2, and this probability is by Lemma 6.1 bounded by

q² L₁ (log L₁)^{1/2}.

Cases 6 and 8: In these cases we can subtract at least 2 from our original estimate. This follows since in this case we erase a formula of size at least 2 which contributed needlessly to the basic estimate. We will only use case 6.

Cases 7 and 9: The basic estimate is correct.

The above reasoning gives the total bound:

S(L₁, L₂) + 2 Pr[case 3] + Pr[case 4] + Pr[case 5] − 2 Pr[case 6].
We have already bounded Pr[case 5], and for the other probabilities we will establish:

Lemma 7.3. If q ≤ (√L₁ log L₁)^{−1}, then

Pr[case 3] ≤ 20 q² L₁ + (q²/2) L₁ (log L₁)^{1/2} + (1/2) Pr[case 6],

and

Lemma 7.4. If q ≤ (√L₁ log L₁)^{−1}, then

Pr[case 4] ≤ 2 q² L₁ + Pr[case 6].

Let us just check that this is sufficient to prove Lemma 7.2 in the case when q is small. We have

S(L₁, L₂) + 2 Pr[case 3] + Pr[case 4] + Pr[case 5] − 2 Pr[case 6] ≤ S(L₁, L₂) + 42 q² L₁ + 2 q² L₁ (log L₁)^{1/2}.

We need to prove that this is bounded by 30 q² L (log L)^{3/2} for all possible L₁. Substituting L₂ = L − L₁ and differentiating twice, we see that this is a convex function of L₁ when 2 ≤ L₁ ≤ L/2. This means that it is maximized either for L₁ ∈ {1, 2} or for L₁ = L/2. For L₁ = 1 we need to establish (after dividing by q²)

30(L−1)(log(L−1))^{3/2} + 42 ≤ 30 L (log L)^{3/2},

which is easily checked to be true for L = 2 and L = 3. For L ≥ 4 the inequality follows from

L(log L)^{3/2} ≥ (L−1)(log L)^{3/2} + 2^{3/2}

and 30 · 2^{3/2} > 42. For L₁ = 2 we need to check

30(L−2)(log(L−2))^{3/2} + 148 ≤ 30 L (log L)^{3/2},

which for L ≥ 4 follows from

L(log L)^{3/2} ≥ (L−2)(log L)^{3/2} + 2 · 2^{3/2}

and 60 · 2^{3/2} > 148. Finally, for L₁ = L/2 we need to estimate

30 L (log(L/2))^{3/2} + (L/2)(42 + 2(log(L/2))^{1/2}),

and using the inequality x^{3/2} − (x−1)^{3/2} ≥ x^{1/2}, which is valid for any x ≥ 1, this can be bounded by

30 L (log L)^{3/2} − 30 L (log L)^{1/2} + 21 L + L (log L)^{1/2} ≤ 30 L (log L)^{3/2},

since 21L + L(log L)^{1/2} ≤ 22 L (log L)^{1/2} ≤ 30 L (log L)^{1/2} for L ≥ 2, and we are done.

Hence we need just establish Lemma 7.3 and Lemma 7.4. The basic principle is to start with a set of restrictions that contribute to the bad case (cases 3 and 4,
respectively) and create a set of restrictions that contribute to the good case, namely case 6. In this process there will be some "spills", and hence we need the additive terms. Lemma 7.4 is by far the easier, and since the basic outline is the same, we start with this lemma.

Proof. (Of Lemma 7.4) Let C be the set of restrictions σ such that φ₁ reduces to exactly one variable or its negation and such that the reduced φ₂ does not contain this variable. Let A be the set of restrictions that is formed by setting the variable that remains in φ₁ in such a way as to make φ₁ reduce to the constant 0, and let B be the corresponding set that makes φ₁ reduce to 1. Each element in C corresponds to an edge between A and B, and we can (as in the proof of Lemma 4.1) let this define a path in φ₁. Thus each leaf in φ₁ corresponds to a set A_j × B_j which reaches this leaf and a subset C_j of C such that for any σ ∈ C_j, its neighbors belong to A_j and B_j respectively. The sets A_j × B_j form a partition of A × B. Suppose furthermore that the literal at leaf j of φ₁ is x_{d_j}^{a_j}. Note that this implies that if σ ∈ C_j then φ₁ simplifies to x_{d_j}^{a_j} under σ.

Let q_j be the conditional probability that, when σ is chosen from C_j, L(φ₂⌈σ) ≥ 2. The probability of case 4 is then given by

Σ_j Pr[C_j] q_j.

If we take any restriction σ contributing to this event and change the value of σ at x_{d_j} to the constant that makes φ₁ reduce to 0, then we get a restriction σ′ contributing to case 6. This follows since x_{d_j} does not appear in the reduced φ₂. The set of restrictions created at leaf j will be of total probability q^{−1} Pr[C_j] q_j, and we seem to be in good shape. However, the same restriction σ′ might be created at many leaves, and hence we would be over-counting if we would just sum these probabilities for the various j. However, note that σ′ belongs to A, and if it is created at leaf j then it belongs to A_j.
Now, since the A_j × B_j form a partition of A × B, we have for any σ ∈ A

Σ_{j : σ ∈ A_j} Pr[B_j] = Pr[B] ≤ 1.

This means that if we multiply the total probability of the restrictions created at leaf j by Pr[B_j], we avoid over-counting. Thus the sure contribution to the probability of case 6 is

Σ_j q^{−1} Pr[C_j] Pr[B_j] q_j.

We need to compare this to Σ_j Pr[C_j] q_j. For the j for which Pr[B_j] ≥ q, the term in the sum for case 6 is bigger than the corresponding term for case 4, while for the other j we use that Pr[C_j] ≤ q Pr[B_j] ≤ q², and thus summing over those j gives a contribution of at most q² L₁. We have proved Lemma 7.4.

Next we turn to Lemma 7.3. This will be more complicated, mainly because the restrictions contributing to case 6 are more difficult to construct.

Proof. (Of Lemma 7.3) Let A_j, B_j and C_j be as in the previous proof. For fixed j, let r_j be the conditional probability that L(φ₂⌈σ) = 1 given that σ ∈ C_j. We divide the leaves into two cases depending on whether r_j ≤ 20q Pr[B_j]^{−1}. If we restrict the summation to those j that satisfy this inequality, then

Σ_j Pr[C_j] r_j ≤ Σ_j q Pr[B_j] · 20q Pr[B_j]^{−1} ≤ 20 q² L₁
Controllability and Observability of Partial Differential Equations: Some results and open problems Enrique ZUAZUA Departamento de Matemáticas Universidad Autónoma 2849 Madrid. Spain. enrique.zuazua@uam.es
More informationONEDIMENSIONAL RANDOM WALKS 1. SIMPLE RANDOM WALK
ONEDIMENSIONAL RANDOM WALKS 1. SIMPLE RANDOM WALK Definition 1. A random walk on the integers with step distribution F and initial state x is a sequence S n of random variables whose increments are independent,
More informationAlgebrization: A New Barrier in Complexity Theory
Algebrization: A New Barrier in Complexity Theory Scott Aaronson MIT Avi Wigderson Institute for Advanced Study Abstract Any proof of P NP will have to overcome two barriers: relativization and natural
More informationA Tutorial on Spectral Clustering
A Tutorial on Spectral Clustering Ulrike von Luxburg Max Planck Institute for Biological Cybernetics Spemannstr. 38, 7276 Tübingen, Germany ulrike.luxburg@tuebingen.mpg.de This article appears in Statistics
More informationA Modern Course on Curves and Surfaces. Richard S. Palais
A Modern Course on Curves and Surfaces Richard S. Palais Contents Lecture 1. Introduction 1 Lecture 2. What is Geometry 4 Lecture 3. Geometry of InnerProduct Spaces 7 Lecture 4. Linear Maps and the Euclidean
More informationOptimal Inapproximability Results for MAXCUT
Electronic Colloquium on Computational Complexity, Report No. 101 (2005) Optimal Inapproximability Results for MAXCUT and Other 2Variable CSPs? Subhash Khot College of Computing Georgia Tech khot@cc.gatech.edu
More informationIntellectual Need and ProblemFree Activity in the Mathematics Classroom
Intellectual Need 1 Intellectual Need and ProblemFree Activity in the Mathematics Classroom Evan Fuller, Jeffrey M. Rabin, Guershon Harel University of California, San Diego Correspondence concerning
More informationDecoding by Linear Programming
Decoding by Linear Programming Emmanuel Candes and Terence Tao Applied and Computational Mathematics, Caltech, Pasadena, CA 91125 Department of Mathematics, University of California, Los Angeles, CA 90095
More informationNotes on Richard Dedekind s Was sind und was sollen die Zahlen?
Notes on Richard Dedekind s Was sind und was sollen die Zahlen? David E. Joyce, Clark University December 2005 Contents Introduction 2 I. Sets and their elements. 2 II. Functions on a set. 5 III. Onetoone
More informationRevised Version of Chapter 23. We learned long ago how to solve linear congruences. ax c (mod m)
Chapter 23 Squares Modulo p Revised Version of Chapter 23 We learned long ago how to solve linear congruences ax c (mod m) (see Chapter 8). It s now time to take the plunge and move on to quadratic equations.
More informationTruthful Mechanisms for OneParameter Agents
Truthful Mechanisms for OneParameter Agents Aaron Archer Éva Tardos y Abstract In this paper, we show how to design truthful (dominant strategy) mechanisms for several combinatorial problems where each
More informationON THE DISTRIBUTION OF SPACINGS BETWEEN ZEROS OF THE ZETA FUNCTION. A. M. Odlyzko AT&T Bell Laboratories Murray Hill, New Jersey ABSTRACT
ON THE DISTRIBUTION OF SPACINGS BETWEEN ZEROS OF THE ZETA FUNCTION A. M. Odlyzko AT&T Bell Laboratories Murray Hill, New Jersey ABSTRACT A numerical study of the distribution of spacings between zeros
More informationIEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 4, APRIL 2006 1289. Compressed Sensing. David L. Donoho, Member, IEEE
IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 4, APRIL 2006 1289 Compressed Sensing David L. Donoho, Member, IEEE Abstract Suppose is an unknown vector in (a digital image or signal); we plan to
More informationPrice discrimination through communication
Price discrimination through communication Itai Sher Rakesh Vohra May 26, 2014 Abstract We study a seller s optimal mechanism for maximizing revenue when a buyer may present evidence relevant to her value.
More informationHow Bad is Forming Your Own Opinion?
How Bad is Forming Your Own Opinion? David Bindel Jon Kleinberg Sigal Oren August, 0 Abstract A longstanding line of work in economic theory has studied models by which a group of people in a social network,
More informationSubspace Pursuit for Compressive Sensing: Closing the Gap Between Performance and Complexity
Subspace Pursuit for Compressive Sensing: Closing the Gap Between Performance and Complexity Wei Dai and Olgica Milenkovic Department of Electrical and Computer Engineering University of Illinois at UrbanaChampaign
More informationWhat Cannot Be Computed Locally!
What Cannot Be Computed Locally! Fabian Kuhn Dept. of Computer Science ETH Zurich 8092 Zurich, Switzerland uhn@inf.ethz.ch Thomas Moscibroda Dept. of Computer Science ETH Zurich 8092 Zurich, Switzerland
More informationA Principled Approach to Bridging the Gap between Graph Data and their Schemas
A Principled Approach to Bridging the Gap between Graph Data and their Schemas Marcelo Arenas,2, Gonzalo Díaz, Achille Fokoue 3, Anastasios Kementsietsidis 3, Kavitha Srinivas 3 Pontificia Universidad
More information