Strong Price of Anarchy for Machine Load Balancing


Amos Fiat, Haim Kaplan, Meital Levy, and Svetlana Olonetsky
Tel Aviv University

Abstract. As defined by Aumann in 1959, a strong equilibrium is a Nash equilibrium that is resilient to deviations by coalitions. We give tight bounds on the strong price of anarchy for load balancing on related machines. We also give tight bounds for k-strong equilibria, where the size of a deviating coalition is at most k.

Key words: Game theory, Strong Nash equilibria, Load balancing, Price of Anarchy

1 Introduction

Many concepts of game theory are now being studied in the context of computer science. This convergence of different disciplines raises new and interesting questions not previously studied in either of the original areas of study. Much of this interest in game theory within computer science is due to the seminal papers of Nisan and Ronen [20] and Koutsoupias and Papadimitriou [17]. A Nash equilibrium ([19]) is a state in a non-cooperative game that is stable in the sense that no agent can gain from unilaterally switching strategies. There are many solution concepts used to study the behavior of selfish agents in a non-cooperative game. Many of these are variants and extensions of the original ideas of John Nash from the 1950s. One immediate objection to Nash equilibria as a solution concept is that agents may in fact collude and jointly choose strategies so as to profit. There are many possible interpretations of the statement that a set of agents profit from collusion. One natural interpretation of this statement is the notion of a strong equilibrium due to Aumann [5], where no coalition of players has any joint deviation such that every member strictly benefits. Whereas mixed-strategy Nash equilibria always exist for finite games [19], this is not in general true for strong equilibria.

School of Computer Science, Tel Aviv University, Tel Aviv 69978, Israel.
Research partially supported by the Israel Science Foundation and the German Israel Foundation. The research has been supported by the Eshkol Fellowship funded by the Israeli Ministry of Science.

Dagstuhl Seminar Proceedings 07261, Fair Division
Holzman and Law-Yone [16] characterized the set of congestion games that admit strong equilibria. The class of congestion games studied was extended by Rozenfeld and Tennenholtz in [21]. [21] also considered mixed strong equilibria and correlated mixed strong equilibria under various deviation assumptions: pure, mixed, and correlated. Variants of strong equilibria include limiting the set of possible deviations (coalition-proof equilibria [8]) and assuming static predefined coalitions ([15, 14]). The term price of anarchy was coined by Koutsoupias and Papadimitriou [17]. This is the ratio between the cost of the worst-case Nash equilibrium and the cost of the social optimum. A related notion is the price of stability, defined in [3], the ratio between the cost of the best Nash equilibrium and the cost of the social optimum. These concepts have been extensively studied in numerous settings: machine load balancing [17, 18, 11, 7, 10], network routing [2, 6, 9], network design [4, 1, 12, 3, 13], etc. Andelman et al. [2] initiated the study of the strong price of anarchy (SPoA), the ratio of the worst-case strong equilibrium to the social optimum. The authors also define the notion of a k-strong equilibrium, where no coalition of size up to k has any joint deviation where all strictly benefit. Analogous definitions can be made for the k-strong price of anarchy. One may argue that the strong price of anarchy (which is never worse than the price of anarchy) removes the element of poor coordination and is entirely due to selfishness. Likewise, the k-strong price of anarchy measures the cost of selfishness and restricted coordination (up to k agents at once). Our work here is a direct continuation of the work of Andelman et al. [2], and addresses many of the open problems cited there, in particular in the context of a load balancing game. In this setting agents (jobs) choose a machine, and job j placed on machine i contributes w_j(i) to the load on machine i.
Agents seek machines with small load, and the social cost usually considered is the makespan, i.e., the maximal load on any machine. Whereas [2] considered the strong price of anarchy and the k-strong price of anarchy for unrelated machines, herein we primarily consider the strong price of anarchy for related machines (machines having an associated speed).

Our results.
1. Czumaj and Vöcking [11] showed that the price of anarchy for load balancing on related machines is Θ(log m / log log m); we show that the strong price of anarchy for load balancing on related machines is Θ(log m / (log log m)^2). This is our most technically challenging result.
2. We also give tight results for the problems considered by [2]:
(a) In [2] the strong price of anarchy for load balancing on m unrelated machines was shown to lie between m and 2m - 1. We prove that the true value is always m.
(b) In [2], the k-strong price of anarchy for load balancing of n jobs on m unrelated machines was shown to lie between O(n m^2 / k) and Ω(n / k). We prove that the k-strong price of anarchy falls in between and is Θ(m(n - m + 1)/(k - m + 1)).
2 Preliminaries

A load balancing game consists of a set M = {M_1, ..., M_m} of machines and a set N = {1, ..., n} of jobs (agents). We use the terms machine i and machine M_i interchangeably. Each job j has a weight function w_j(·) such that w_j(i) is the running time of job j on machine M_i. When the machines are unrelated, w_j(·) is an arbitrary positive real function. For related machines, each job j has a weight, denoted by w_j, and each machine M_i has a speed, denoted by v(i). The running time of job j on machine i is w_j(i) = w_j / v(i). In game-theoretic terms, the set of strategies for job j is the set of machines M. A state S is an assignment of jobs to machines. Let m_S(j) be the machine chosen by job j in state S. The load on machine M_i in state S is Σ_{j : m_S(j) = M_i} w_j(i). Given a state S in which job j is assigned to machine M_i, we say that the load observed by job j is the load on machine M_i in state S. The makespan of a state S is the maximum load of a machine in S. Jobs seek to minimize their observed load. The state OPT (the social optimum) is a state with minimal makespan. We also denote the makespan of state OPT by OPT; the usage will be clear from the context. A strong equilibrium is a state where no group of jobs can jointly choose an alternative set of strategies so that every job in the group has a reduced observed load in the new state. In a k-strong equilibrium we restrict such groups to include no more than k agents. The strong price of anarchy is the ratio between the makespan of the worst strong equilibrium and OPT. The k-strong price of anarchy is the ratio between the makespan of the worst k-strong equilibrium and OPT.

3 Related Machines

Czumaj and Vöcking [11] show that the price of anarchy of load balancing on related machines is Θ(log m / log log m). We show that the lower bound construction of [11] is not in strong equilibrium.
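To make these definitions concrete, here is a minimal sketch (in Python; the weights, speeds, and state are hypothetical illustrative values, not from the paper) of computing the loads and the makespan of a state on related machines:

```python
# Loads and makespan of a state in a related-machines load balancing game.
# A state maps each job j to a machine i; job j contributes w_j / v(i) to
# the load of machine i. All numbers below are hypothetical.

def loads(state, weights, speeds):
    """state[j] = index of the machine chosen by job j in state S."""
    load = [0.0] * len(speeds)
    for j, i in enumerate(state):
        load[i] += weights[j] / speeds[i]
    return load

def makespan(state, weights, speeds):
    """Makespan of state S = maximum load over all machines."""
    return max(loads(state, weights, speeds))

weights = [4.0, 2.0, 2.0]   # job weights w_j
speeds  = [2.0, 1.0]        # machine speeds v(i)
state   = [0, 0, 1]         # jobs 0 and 1 on machine 0, job 2 on machine 1
print(loads(state, weights, speeds))     # [3.0, 2.0]
print(makespan(state, weights, speeds))  # 3.0
```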
We also give a somewhat weaker (and tight) lower bound of Ω(log m / (log log m)^2). We first present the lower bound of [11] and claim that it is not resilient to deviation by coalitions.

The lower bound of [11]: Consider the following instance in which machines are partitioned into l + 1 groups. Let these groups be G_0, G_1, ..., G_l, with m_j machines in group G_j. We define m_0 = 1 and m_{j+1} = (l - j) m_j for j = 0, ..., l - 1. Since the total number of machines is m = Σ_{j=0}^{l} m_j and m_l = l!, it follows that l ≈ log m / log log m. Suppose that all machines in G_j have speed 2^(l-j). Consider the following Nash equilibrium: every machine in G_j receives l - j jobs, each of weight 2^(l-j). Each such job contributes 1 to the load of its machine. The total load on every machine in G_j is therefore l - j. Machines in group G_l have no jobs assigned to them. The makespan is l, obtained on the machine of group G_0. Consider some job assigned to a machine from group G_j; this job has no incentive to migrate to a machine in a group of lower index, since the load there is already
higher than the load it currently observes. If a job on a machine in G_j migrates to a machine in G_{j+1}, then it would observe a load of l - (j+1) + 2^(l-j) / 2^(l-(j+1)) = l - j + 1 > l - j, and even higher loads on machines of groups G_{j+2}, ..., G_l. For the minimal makespan, move all jobs on machines in G_j to machines in G_{j+1} (for j = 0, ..., l - 1). There are sufficiently many machines so that no machine gets more than one job, and the load on all machines is 2^(l-j) / 2^(l-(j+1)) = 2. The price of anarchy is therefore Ω(log m / log log m). (This is also shown to be an upper bound.) However, this is not a strong Nash equilibrium.

Lemma 1. The above Nash equilibrium is not a strong equilibrium for l > 8.

Proof. Consider the following scenario. We consider a deviation by a coalition consisting of all l jobs on the machine of group G_0 (machine M) along with 3 jobs from each of l different machines from group G_2 (machines N_1, N_2, ..., N_l). We describe the actions of the coalition in two stages, and argue that all members of the coalition benefit from this deviation. All l jobs located on machine M ∈ G_0 migrate to separate machines N_1, N_2, ..., N_l in group G_2. Following this migration, the load on each machine N_i is l + 2 (it was l - 2, and we added a job from machine M that contributed an extra 2^l / 2^(l-2) = 4 to the load). The load on machine M has dropped to zero (all jobs were removed). Now, remove 3 original jobs (each of weight 2^(l-2)) from each of the N_i machines and place them on machine M. The load on machine N_i has dropped to l - 1, so the job that migrated from machine M to machine N_i now experiences a lower load than before. The load on machine M is now 3l · 2^(l-2) / 2^l = 3l/4 < l - 2, for l > 8. Thus, the jobs that migrated from machines in G_2 to machine M also benefit from this coalition.

3.1 Lower bound on strong price of anarchy for related machines.

Theorem 1. The strong price of anarchy for m related machines and n jobs is Ω(log m / (log log m)^2).

Proof.
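The two-stage deviation in the proof of Lemma 1 can be checked numerically. The sketch below picks the hypothetical value l = 10 (any l > 8 works) and tracks the observed loads of both kinds of coalition members:

```python
# Numerical check of the coalition deviation in Lemma 1, for a sample l > 8.
# Machines in group G_j have speed 2**(l-j) and carry l-j jobs of weight 2**(l-j).
l = 10
speed_G0, speed_G2 = 2.0**l, 2.0**(l - 2)

# Before the deviation: machine M in G_0 has load l; each machine N_i in G_2
# has load l - 2.
load_M, load_N = float(l), float(l - 2)

# Stage 1: the l jobs of weight 2**l on M move to l distinct machines of G_2,
# each contributing 2**l / 2**(l-2) = 4 to its target machine.
load_N_after = load_N + 2.0**l / speed_G2           # (l - 2) + 4 = l + 2

# Stage 2: 3 original jobs of weight 2**(l-2) leave each N_i for machine M.
load_N_final = load_N_after - 3 * 2.0**(l - 2) / speed_G2  # (l + 2) - 3 = l - 1
load_M_final = 3 * l * 2.0**(l - 2) / speed_G0             # 3l/4

# Every coalition member strictly benefits:
assert load_N_final < load_M    # jobs from M now observe l - 1 < l
assert load_M_final < load_N    # jobs from G_2 observe 3l/4 < l - 2 (needs l > 8)
print(load_M_final, load_N_final)   # 7.5 9.0
```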
Consider the following instance in which machines are partitioned into l + 1 groups. Let these groups be G_0, G_1, ..., G_l. We further subdivide each group G_i, 0 ≤ i < l, into log l subgroups, where all machines within the same subgroup have the same speed, but machines from different subgroups of a group differ in speed. The group G_l consists of a single subgroup F_{l log l}. In total, we have l log l + 1 subgroups F_0, F_1, ..., F_{l log l}, where subgroups F_0, ..., F_{log l - 1} are a partition of G_0, F_{log l}, ..., F_{2 log l - 1} are the subgroups of G_1, etc. The speed of each machine in subgroup F_j is 2^(l log l - j). Let m_j denote the number of machines in subgroup F_j, 0 ≤ j ≤ l log l. Then m_0 = 1, and for subgroup F_{j+1} such that F_j ⊆ G_i we define m_{j+1} = (l - i) m_j. It follows that the number of machines in subgroup F_{l log l} is at least (l!)^(log l); therefore m ≥ (l!)^(log l) and l ≈ log m / (log log m)^2.
Consider the following state S. Each machine of group G_i is assigned l - i jobs. Jobs that are assigned to machines in subgroup F_j have weight 2^(l log l - j). As the speed of such machines is 2^(l log l - j), it follows that each such job contributes one to the load of the machine it is assigned to; i.e., the load on all machines in G_i is l - i. Machines of F_{l log l} have no jobs assigned to them. The load on the machines in group G_0 is l, which is also the makespan in S. The minimal makespan (OPT) is attained by moving the jobs assigned to machines from F_j each to a separate machine of subgroup F_{j+1}, for 0 ≤ j < l log l. The load on all machines is then 2^(l log l - j) / 2^(l log l - (j+1)) = 2. State S is a Nash equilibrium. A job assigned to a machine of subgroup F_j has no incentive to migrate to a machine of a lower-indexed subgroup, since the current load there is equal to or higher than the current load it observes. There is no incentive to migrate to a higher-indexed subgroup, as it would observe a load of at least l - i + 1 > l - i, where G_i is the group containing F_j. We now argue that state S is not only a Nash equilibrium but also a strong Nash equilibrium. First, note that jobs residing on machines of group G_i, 0 ≤ i ≤ l, have no incentive to migrate to machines of group G_j, for j ≥ i + 2. This follows since the speed of each machine in group G_j is smaller by a factor of more than 2^(log l) = l than the speed of any machine in group G_i. Thus, even if the job is alone on such a machine, the resulting load is higher than the load currently observed by the job (the current load is at most l). Thus, any deviating coalition has the property that jobs assigned to machines from group G_i may only migrate to machines from groups G_j, for j ≤ i + 1. Suppose that the jobs that participate in a deviating coalition are from machines in groups G_i, G_{i+1}, ..., G_j, i ≤ j.
The load on machines from group G_i holding participating jobs must strictly decrease, since either jobs leave (and the load goes down) or jobs from higher- or equal-indexed groups join (and then the load must strictly go down too, for those jobs to benefit). If machines from group G_i have their load decrease, and all deviating jobs belong to groups i through j, i < j, then there must be some machine M ∈ G_p, i < p ≤ j, with an increase in load. Jobs can migrate to machine M either from a machine in group G_{p-1}, or from a machine in group G_{j'} for some j' ≥ p. If a deviating job migrates from a machine in G_{j'} for some j' ≥ p, then this contradicts the increase in the load on M. The contradiction arises as such jobs will only join the coalition if they become strictly better off, and for this to happen the load on M should decrease. However, this holds even if the deviating job migrates to M from a machine in G_{p-1}. The observed load for this job prior to deviating was l - (p - 1), and it must strictly decrease. A job that migrates to machine M from G_{p-1} increases the load by an integral value. A job that migrates away from machine M decreases the load by an integral value too. This implies that the new load on M must be an integer smaller than l - (p - 1), which contradicts the increase in load on M.
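The Nash-equilibrium property of the state S above can be sanity-checked by brute force for a small l. The sketch below (hypothetical choice l = 4, a power of two so that log l is an integer) models each subgroup by its group index and speed, and verifies that no unilateral move to another subgroup reduces a job's observed load (moving to another machine of the same subgroup trivially raises the observed load by 1):

```python
# Brute-force check (small l) that the state S of Theorem 1 is a Nash
# equilibrium: subgroup F_j has speed 2**(L - j) with L = l * log2(l), and
# every machine in group G_i carries l - i jobs whose weight equals the
# speed of their machine (so each contributes load 1).
import math

l = 4
logl = int(math.log2(l))   # number of subgroups per group
L = l * logl
# subgroup j -> (group index, machine speed); G_i owns F_{i*logl}..F_{(i+1)*logl - 1}
subgroups = [(j // logl, 2.0 ** (L - j)) for j in range(l * logl + 1)]

def observed_after_move(src, dst):
    """Load a job from subgroup src would observe after moving alone to dst."""
    g_dst, v_dst = subgroups[dst]
    w = subgroups[src][1]            # job weight = speed of its source subgroup
    return (l - g_dst) + w / v_dst   # current load on dst + job's contribution

for src in range(l * logl):          # F_{l*logl} holds no jobs
    cur = l - subgroups[src][0]      # load observed in S
    for dst in range(len(subgroups)):
        if dst != src:
            assert observed_after_move(src, dst) >= cur
print("S is a Nash equilibrium for l =", l)
```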
4 Upper bound on strong price of anarchy for related machines.

We assume that machines are indexed such that v(i) ≥ v(j) for i < j. We also assume that the speeds of the machines are scaled so that OPT is 1. Let S be an arbitrary strong Nash equilibrium, and let l_max be the maximum load of a machine in S. Our goal is to give an upper bound on l_max. When required, we may assume that l_max is a sufficiently large constant, since otherwise an upper bound follows trivially. Recall that machines are ordered such that v(1) ≥ v(2) ≥ ... ≥ v(m) > 0. Let l(i) be the load on machine M_i; i.e., the total weight of jobs assigned to machine M_i is l(i) v(i).

4.1 Sketch of the proof

We prove that m = l_max^(Ω(l_max log l_max)), which implies l_max = O(log m / (log log m)^2). To show this, we partition the machines into consecutive disjoint phases (Definition 1), with the property that the number of machines in phase i is Ω(l) times the number of machines in phase i - 1 (Lemma 2 in Section 4.3), where l is the minimal load in phases 1 through i. For technical reasons we introduce shifted phases (s-phases, Definition 2), which are in one-to-one correspondence with the phases. We focus on the s-phases of faster machines, so that the total drop in load amongst the machines of these s-phases is about l_max/2. We next partition the s-phases into consecutive blocks. Let δ_i be the load difference between the slowest machine in block i - 1 and the slowest machine in block i. By construction we get that Σ_i δ_i = Θ(l_max). We map s-phases to blocks such that each s-phase is mapped to at most one block, as follows (Lemmas 7 and 8); see Fig. 1.
If δ_i < 1/log l_max we map a single (1 ≥ δ_i log l_max) s-phase to block i.
If δ_i ≥ 1/log l_max we map Ω(δ_i log l_max) s-phases to block i.
Therefore the total number of s-phases is at least Σ_i δ_i log l_max = Ω(l_max log l_max). Given the one-to-one mapping between s-phases and phases, this also gives us a lower bound of Ω(l_max log l_max) on the number of phases.
In Lemma 2 (Section 4.3) we prove that the number of machines in phase i is Ω(l) times the number of machines in phase i - 1. This allows us to conclude that the total number of machines is m = l_max^(Ω(l_max log l_max)), or that l_max = O(log m / (log log m)^2).

4.2 Excess Weight and Excess Jobs

Given that the makespan of OPT is 1, the total weight of all jobs assigned to machine σ in OPT cannot exceed v(σ), the speed of machine σ. We define the excess weight on machine σ, 1 ≤ σ ≤ m, to be X(σ) = (l(σ) - 1) v(σ). (Note that excess weight can be positive or negative.)
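A small illustration of the excess weight X(σ) and its sum over a set of machines R (in Python; the loads and speeds are hypothetical, and the speeds are assumed to be scaled so that OPT = 1):

```python
# Excess weight X(sigma) = (l(sigma) - 1) * v(sigma), assuming speeds are
# scaled so that the makespan of OPT is 1. For a set of machines R, the
# excess weight X(R) is the sum over the machines of R.
# All loads and speeds below are hypothetical illustrative values.

def excess(load, speed):
    return (load - 1.0) * speed

loads  = {1: 3.0, 2: 1.0, 3: 0.5}   # l(sigma)
speeds = {1: 4.0, 2: 2.0, 3: 2.0}   # v(sigma)

def X(R):
    return sum(excess(loads[s], speeds[s]) for s in R)

print(X({1}))        # 8.0   (positive excess)
print(X({3}))        # -1.0  (excess weight can be negative)
print(X({1, 2, 3}))  # 7.0
```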
Fig. 1. The machines are sorted in order of decreasing speed (and increasing index), and partitioned into s-phases. The s-phases are further partitioned into blocks B_i. The s-phases that are mapped to block i are marked P(B_i).

Given a set R ⊆ {1, ..., m}, we define the excess weight on R to be

X(R) = Σ_{σ ∈ R} X(σ).   (1)

For clarity of exposition, we use intuitive shorthand notation for sets R forming consecutive subsequences of 1, ..., m. In particular, we use the notation X(σ) for X({σ}), X(≤ w) for X({σ | 1 ≤ σ ≤ w}), X(w...y) for X({σ | w ≤ σ ≤ y}), etc. Given that some set of machines R has excess weight X(R) > 0, it follows that there must be some set of jobs J(R), of total weight at least X(R), that are assigned to machines in R by S but are assigned to machines in {1, ..., m} \ R by OPT. Given sets of machines R and Q, let J(R → Q) be the set of jobs that are assigned by S to machines in R but assigned by OPT to machines in Q, and let X(R → Q) be the weight of the jobs in J(R → Q). Let R_1 and R_2 be a partition of the set of machines {1, ..., m} \ R. Then we have

X(R) ≤ X(R → {1, ..., m} \ R) = X(R → R_1) + X(R → R_2).   (2)

In particular, using the shorthand notation above, we have that for 1 ≤ y < σ ≤ m,

X(≤ y) ≤ X(≤ y → > y) = X(≤ y → y...σ) + X(≤ y → σ...m).   (3)

Similarly, for 1 ≤ σ < y ≤ m we have

X(≤ y) ≤ X(≤ σ) + X(σ...y).   (4)

4.3 Partition into phases

Definition 1. We partition the machines 1, ..., m into disjoint sets of consecutive machines called phases, Φ_1, Φ_2, ..., where the machines of Φ_i precede those of Φ_{i+1}. We define ρ_0 = 0 and ρ_i = max{j | j ∈ Φ_i} for i ≥ 1. Thus, it follows that Φ_i = {ρ_{i-1} + 1, ..., ρ_i}. It also follows that machines in Φ_i are no slower than those of Φ_{i+1}. Let n_i be the number of machines in the i-th phase, i.e., n_i = ρ_i - ρ_{i-1}, for i ≥ 1.
To determine Φ_i it suffices to know ρ_{i-1} and ρ_i. For i = 1 we define ρ_1 = 1; as ρ_0 = 0, it follows that Φ_1 = {1}. We define ρ_{i+1} inductively using both ρ_i and ρ_{i-1} as follows:

ρ_{i+1} = argmin_σ { X(≤ ρ_i → > σ) < X(≤ ρ_{i-1}) + X(Φ_i)/2 }.   (5)

The phases have the following properties.

Lemma 2. Let l be the minimal load of a machine in phases 1, ..., i (l = min{l(σ) | 1 ≤ σ ≤ ρ_i}). Then n_{i+1} ≥ n_i (l - 1)/2.

Proof. By the inductive definition of ρ_{i+1} above (Equation (5)), we have that X(≤ ρ_i → > ρ_{i+1}) < X(≤ ρ_{i-1}) + X(Φ_i)/2. Now, since X(≤ ρ_i) ≤ X(≤ ρ_i → Φ_{i+1}) + X(≤ ρ_i → > ρ_{i+1}), we have

X(≤ ρ_i → Φ_{i+1}) ≥ X(≤ ρ_i) - X(≤ ρ_i → > ρ_{i+1})   (6)
  > (X(≤ ρ_{i-1}) + X(Φ_i)) - (X(≤ ρ_{i-1}) + X(Φ_i)/2)   (7)
  = X(Φ_i)/2;   (8)

Equation (6) follows by rewriting Equation (2). Equation (7) follows from the definition of ρ_{i+1} (Equation (5)) and from X(≤ ρ_i) = X(≤ ρ_{i-1}) + X(Φ_i); the rest is trivial manipulation. Since the speed of any machine in Φ_i is no smaller than v(ρ_i) (the lowest speed of any machine in phases 1, ..., i), and we have chosen l to be the minimum load of any machine among 1, ..., ρ_i, for every machine σ ∈ Φ_i the excess weight is X(σ) = (l(σ) - 1) v(σ) ≥ (l - 1) v(ρ_i). Therefore, by substituting this into (8), we get X(≤ ρ_i → Φ_{i+1}) > n_i (l - 1) v(ρ_i)/2. In OPT, no machine can have a load greater than one. Therefore, since the speed of any machine in Φ_{i+1} is no larger than v(ρ_i), at most v(ρ_i) of this weight fits on any one machine in Φ_{i+1}, so there are at least (l - 1) n_i / 2 machines in Φ_{i+1}.

Lemma 3. Let j < i be two phases. If the minimal load of a machine k, 1 ≤ k ≤ ρ_i, is at least 3, then X(Φ_i) > X(Φ_j).

Proof. Clearly it suffices to prove that X(Φ_{i+1}) > X(Φ_i) for every i > 0. Since in OPT the load of every machine is at most one, we have that

X(≤ ρ_i → Φ_{i+1}) ≤ Σ_{σ ∈ Φ_{i+1}} v(σ).   (9)
This together with (8) gives that

Σ_{σ ∈ Φ_{i+1}} v(σ) > X(Φ_i)/2.   (10)

Let l be the minimal load of a machine in Φ_{i+1}. From our definition it follows that

X(Φ_{i+1}) ≥ (l - 1) Σ_{σ ∈ Φ_{i+1}} v(σ).   (11)

The lemma now follows by combining (10) and (11) together with the assumption that l > 3.

Let l be the minimal load among machines 1, ..., ρ_i. Let Γ_i be the subset of machines of Φ_i that have at least (l - 1)/2 of their load contributed by jobs of weight w ≤ v(ρ_{i+1}).

Lemma 4. For i > j, Σ_{σ ∈ Γ_i} v(σ) ≥ v(ρ_j) n_j (l - 1)/(l + 3).

Proof. First we want to estimate X(Φ_i → ≥ ρ_{i+1}). By rewriting Equation (2) we get that X(Φ_i → ≥ ρ_{i+1}) = X(≤ ρ_i → ≥ ρ_{i+1}) - X(≤ ρ_{i-1} → ≥ ρ_{i+1}). Since X(≤ ρ_{i-1} → ≥ ρ_{i+1}) ≤ X(≤ ρ_{i-1} → > ρ_i), we also have that

X(Φ_i → ≥ ρ_{i+1}) ≥ X(≤ ρ_i → ≥ ρ_{i+1}) - X(≤ ρ_{i-1} → > ρ_i).   (12)

From the definition of ρ_{i+1} it follows that

X(≤ ρ_i → ≥ ρ_{i+1}) ≥ X(≤ ρ_{i-1}) + X(Φ_i)/2.   (13)

Similarly, from the definition of ρ_i it follows that

X(≤ ρ_{i-1} → > ρ_i) < X(≤ ρ_{i-2}) + X(Φ_{i-1})/2.   (14)

Substituting Equations (13) and (14) into Equation (12), we get that

X(Φ_i → ≥ ρ_{i+1}) ≥ (X(≤ ρ_{i-1}) + X(Φ_i)/2) - (X(≤ ρ_{i-2}) + X(Φ_{i-1})/2)
  = X(Φ_{i-1}) + X(Φ_i)/2 - X(Φ_{i-1})/2
  = X(Φ_{i-1})/2 + X(Φ_i)/2.   (15)

Let A(σ) be the total weight of jobs j on machine σ with w_j ≤ v(ρ_{i+1}), and let A(Φ_i) = Σ_{σ ∈ Φ_i} A(σ). Since every job in J(Φ_i → ≥ ρ_{i+1}) has weight at most v(ρ_{i+1}), it follows that X(Φ_i → ≥ ρ_{i+1}) ≤ A(Φ_i), and by Equation (15)

A(Φ_i) ≥ X(Φ_{i-1})/2 + X(Φ_i)/2.   (16)

We claim that every machine σ with A(σ) > 0 (i.e., a machine with at least one job j such that w_j ≤ v(ρ_{i+1})) has load of at most l + 1. To prove the claim, let q' ≤ ρ_{i+1} be a machine that has load greater than l + 1 and
a job j with w_j ≤ v(ρ_{i+1}), and let q be the machine among 1, ..., ρ_i with load l. This state is not a Nash equilibrium, since if job j switches to machine q it would have a smaller cost. We get that

A(Φ_i) ≤ Σ_{σ ∈ Γ_i} v(σ)(l + 1) + Σ_{σ ∈ Φ_i \ Γ_i} v(σ)(l - 1)/2
  = Σ_{σ ∈ Γ_i} v(σ)(l + 3)/2 + Σ_{σ ∈ Φ_i} v(σ)(l - 1)/2
  ≤ Σ_{σ ∈ Γ_i} v(σ)(l + 3)/2 + X(Φ_i)/2.   (17)

Inequality (17) holds since l is the smallest load of a machine among 1, ..., ρ_i (so that (l - 1) v(σ) ≤ X(σ) for every σ ∈ Φ_i). Combining (17) and (16) we get that

Σ_{σ ∈ Γ_i} v(σ)(l + 3)/2 ≥ X(Φ_{i-1})/2,   (18)

and therefore

Σ_{σ ∈ Γ_i} v(σ)(l + 3)/2 ≥ X(Φ_{i-1})/2 ≥ X(Φ_j)/2 ≥ v(ρ_j) n_j (l - 1)/2.   (19)

The first inequality in (19) follows from (18), the second follows from Lemma 3, and the third follows since l is the smallest load of a machine among 1, ..., ρ_i and since v(ρ_j) is the smallest speed in Φ_j. From (19) the lemma clearly follows.

Recall that l_max is the maximum load in S. Define k to be min{i | l(i) < l_max/2}. Let t be the phase such that ρ_t < k and ρ_{t+1} ≥ k. Consider machines 1, ..., ρ_t. From now on, l will denote the minimal load of a machine in this set of machines. Then l = Θ(l_max), and we may assume that l is large enough.

Definition 2. We define another partition of the machines into shifted phases (s-phases) Ψ_1, Ψ_2, ..., based on the partition into phases Φ_1, Φ_2, ..., as follows. We define ϕ_0 = 0. Let ϕ_i be the slowest machine in Φ_i such that at least (l - 1)/2 of its load is contributed by jobs with weight w ≤ v(ρ_{i+1}) (there exists such a machine by Lemma 4). We define Ψ_i = {ϕ_{i-1} + 1, ..., ϕ_i}. Note that there is a bijection between the s-phases Ψ_1, Ψ_2, ... and the phases Φ_1, Φ_2, .... Furthermore, all machines in Φ_i such that at least (l - 1)/2 of their load is contributed by jobs of weight w ≤ v(ρ_{i+1}) are in Ψ_i.

Lemma 5. The load difference between machines ϕ_2 and ϕ_t satisfies l(ϕ_2) - l(ϕ_t) > l_max/4.

Proof. According to the definition of Ψ_i, there is a job on machine ϕ_i with weight w ≤ v(ρ_{i+1}) ≤ v(ϕ_{i+1}), and therefore it would contribute a load of at most 1 to machine ϕ_{i+1}.
As S is a Nash equilibrium, the load difference between machines ϕ_i and ϕ_{i+1} is at most 1. The load on the fastest machine is l(ϕ_1) > l_max - 1, since every job contributes a load of at most 1 to it. Thus, the load on machine ϕ_2 ∈ Φ_2 is at least l(ϕ_2) ≥ l(ϕ_1) - 1 ≥ l_max - 2. The load on machine ϕ_t is l(ϕ_t) ≤ l_max/2 + 1, since there is a job with weight at most v(ϕ_{t+1}) on ϕ_t and, by the definition of k, there is a machine k ≤ ϕ_{t+1} with load less than l_max/2.
Therefore, the load difference between machines ϕ_2 and ϕ_t is at least (l_max - 2) - (l_max/2 + 1) = l_max/2 - 3 > l_max/4, for l_max sufficiently large. We define z_{i+1} to be v(ϕ_i)/v(ϕ_{i+1}). Notice that z_{i+1} ≥ 1. We redefine Γ_b to be the subset of machines of Ψ_b such that for every such machine, at least (l - 1)/2 of the load is contributed by jobs with weight w ≤ v(ϕ_{b+1}). For two s-phases Ψ_a and Ψ_b, the lemma below relates the difference in load of ϕ_a and ϕ_b to the ratio of the speeds v(ϕ_a) and v(ϕ_b).

Lemma 6. Consider s-phases Ψ_a and Ψ_b such that a < b. Let l be the minimal load in Ψ_a and Ψ_b. If v(ϕ_a)/v(ϕ_b) ≤ z_{a+1}(l - 1)/5, then l(ϕ_a) ≤ l(ϕ_b) + 4/z_{b+1}.

Proof. By contradiction: assume that l(ϕ_a) = l(ϕ_b) + α/z_{b+1} for some α > 4, and that v(ϕ_a)/v(ϕ_b) ≤ z_{a+1}(l - 1)/5. We exhibit a deviating coalition all of whose members reduce their observed loads, contradicting the assumption that the current state is a strong equilibrium. We observe that for every machine σ ∈ Γ_b we have l(σ) ≤ l(ϕ_b) + 1/z_{b+1}. (From this it also follows that l(σ) < l(ϕ_a).) If not, take any job j located on σ such that w_j ≤ v(ϕ_{b+1}) and send it to machine ϕ_b; the contribution of job j to the load of ϕ_b is at most v(ϕ_{b+1})/v(ϕ_b) = 1/z_{b+1}, i.e., the current state is not even a Nash equilibrium. Similarly, we have l(ϕ_b) ≤ l(σ) + 1/z_{b+1}. We group jobs on ϕ_a in such a way that the current load contribution of each group is greater than 1/(2 z_{a+1}) and no more than 1/z_{a+1}; i.e., for one such group of jobs G, 1/(2 z_{a+1}) < Σ_{j ∈ G} w_j / v(ϕ_a) ≤ 1/z_{a+1}. At least z_{a+1}(l - 1)/2 such groups are formed. Every such group is assigned a unique machine in Γ_b, and all jobs comprising the group migrate to this machine. Let Γ ⊆ Γ_b be the subset of machines that got an assignment, |Γ| = min{z_{a+1}(l - 1)/2, |Γ_b|}.
The load contributed by the migrating jobs to the target machine σ ∈ Γ_b is therefore Σ_{j ∈ G} w_j / v(σ) ≤ Σ_{j ∈ G} w_j / v(ϕ_b); since v(ϕ_a)/v(ϕ_b) ≤ z_{a+1}(l - 1)/5 and Σ_{j ∈ G} w_j / v(ϕ_a) ≤ 1/z_{a+1}, this gives us that

Σ_{j ∈ G} w_j / v(ϕ_b) = (Σ_{j ∈ G} w_j / v(ϕ_a)) · (v(ϕ_a)/v(ϕ_b)) ≤ (l - 1)/5.

Therefore, after the migration, the load on σ ∈ Γ_b is at most l(σ) + (l - 1)/5 ≤ l(ϕ_a) + (l - 1)/5. It is also at least l(ϕ_a) (otherwise S is not a Nash equilibrium). Additionally, jobs will also migrate from machines σ ∈ Γ to machine ϕ_a (not the same jobs previously sent the other way). We choose the jobs that migrate from σ ∈ Γ to ϕ_a so that the final load on σ is strictly smaller than l(ϕ_a) and at least l(ϕ_a) - 1/z_{b+1} = l(ϕ_b) + (α - 1)/z_{b+1}. It has to be smaller than l(ϕ_a) to guarantee that every job migrating from ϕ_a to σ observes a load strictly smaller than the load it observed before the deviation. We want it to be at least l(ϕ_b) + (α - 1)/z_{b+1}, so that a job migrating to ϕ_a from σ would observe
a smaller load, as we will show below. To achieve this, slightly more than (l - 1)/5 of the load of σ ∈ Γ has to migrate back to ϕ_a. The jobs that migrate from σ ∈ Γ to ϕ_a are jobs with load at most 1/z_{b+1} on σ. Therefore, each such job which leaves σ reduces the load of σ by at most 1/z_{b+1}. Since the total load of these jobs on σ is at least (l - 1)/2 > (l - 1)/5, we can successively send jobs from σ to ϕ_a until the load drops to some value y such that l(ϕ_b) + (α - 1)/z_{b+1} ≤ y < l(ϕ_b) + α/z_{b+1}. We argued that prior to any migration, the load satisfies l(σ) ≥ l(ϕ_b) - 1/z_{b+1} for σ ∈ Γ_b. Following the migrations above, the new load l'(σ) on machine σ is l'(σ) ≥ l(ϕ_b) + (α - 1)/z_{b+1}. Thus, the load on every such machine has gone up by at least (α - 2)/z_{b+1}. If |Γ| = z_{a+1}(l - 1)/2, the net decrease in load on machine ϕ_a is at least

Σ_{σ ∈ Γ} (α - 2) v(σ) / (z_{b+1} v(ϕ_a)) ≥ |Γ| (α - 2) v(ϕ_b) / (z_{b+1} v(ϕ_a))
  ≥ (z_{a+1}(l - 1)/2) · ((α - 2)/z_{b+1}) · (5 / (z_{a+1}(l - 1))) = 2.5(α - 2)/z_{b+1} > (α + 1)/z_{b+1}.

If |Γ| < z_{a+1}(l - 1)/2, then Γ = Γ_b and the net decrease in load on machine ϕ_a is at least

Σ_{σ ∈ Γ_b} (α - 2) v(σ) / (z_{b+1} v(ϕ_a)) ≥ (α - 2) n_a (l - 1) / (z_{b+1} (l + 3))   (20)
  ≥ 2.5(α - 2)/z_{b+1} > (α + 1)/z_{b+1}.   (21)

Inequality (20) follows from Lemma 4. Inequality (21) holds for a > 1 (according to Lemma 2, for a > 1 we get that n_a > (l - 1)/2) and for l ≥ 10. Thus, the new load l'(ϕ_a) on machine ϕ_a is at most l'(ϕ_a) < l(ϕ_b) + α/z_{b+1} - (α + 1)/z_{b+1} = l(ϕ_b) - 1/z_{b+1}, which ensures that the jobs that migrate to machine ϕ_a could form a coalition benefiting all members, in contradiction to the strong equilibrium assumption.

We define a partition of the s-phases into blocks B_0, B_1, .... The first block B_0 consists of the first two s-phases. Given blocks B_0, ..., B_{j-1}, define B_j as follows. For all i, let a_i be the first s-phase of block B_i and let b_i be the last s-phase of block B_i. The first s-phase of B_j is s-phase b_{j-1} + 1, i.e., a_j = b_{j-1} + 1. To specify the last s-phase of B_j we define a consecutive set of s-phases denoted by P_1, where b_j ∈ P_1.
The first s-phase in P_1 is a_j. The last s-phase of P_1 is the first s-phase, indexed p, following a_j, such that v(ϕ_{b_{j-1}})/v(ϕ_p) > z_{a_j}(l - 1)/5. Note that P_1 always contains at least two s-phases. Let m_1 be an s-phase in P_1 \ {a_j} such that z_{m_1} ≥ z_i for every i in P_1 \ {a_j}. We consider two cases:
Case 1: z_{m_1} ≥ log l. We define b_j = m_1 - 1. In this case we refer to B_j as a block of type I.

Case 2: z_{m_1} < log l. We define P_2 to be the suffix of P_1 containing all s-phases i for which v(ϕ_{b_{j-1}})/v(ϕ_i) ≥ z_{a_j}((l - 1)/5)^(2/3). Note that s-phase p is in P_2 and s-phase a_j is not in P_2. Let m_2 be an s-phase in P_2 such that z_{m_2} ≥ z_i for every i in P_2. We define b_j to be m_2 - 1. In this case we refer to B_j as a block of type II.

If v(ϕ_{b_{j-1}})/v(ϕ_t) ≤ z_{a_j}(l - 1)/5 we do not define B_j, and B_{j-1} is the last block. For each block B_j, let P(B_j) be the s-phases which we map to B_j. In Case 1 we define P(B_j) = {m_1} = {a_{j+1}}. In Case 2 we define P(B_j) = P_2.

Lemma 7. The number of s-phases associated with block B_j, |P(B_j)|, is Ω(log l / z_{a_{j+1}}).

Proof. If z_{m_1} ≥ log l, then P(B_j) consists of a single s-phase. As log l / z_{m_1} ≤ 1, the claim trivially follows. Assume that z_{m_1} < log l. Let s be the first s-phase in P_2; then

v(ϕ_{b_{j-1}})/v(ϕ_{s-1}) ≤ z_{a_j} ((l - 1)/5)^(2/3).   (22)

Let k be the last s-phase of P_2 (which is also the last s-phase of P_1); we have that

v(ϕ_{b_{j-1}})/v(ϕ_k) ≥ z_{a_j} (l - 1)/5.   (23)

If we divide (23) by (22), we obtain that v(ϕ_{s-1})/v(ϕ_k) ≥ ((l - 1)/5)^(1/3). Let q be the number of s-phases in P_2. Since z_{m_2} ≥ z_i for all i ∈ P_2, it follows that (z_{m_2})^q ≥ ((l - 1)/5)^(1/3). We conclude that q = Ω(log l / z_{m_2}) = Ω(log l / z_{a_{j+1}}), as log x ≤ x for all x.

The following lemma shows that each s-phase is mapped to at most one block.

Lemma 8. For every pair of blocks B and B' we have P(B) ∩ P(B') = ∅.

Proof. This is clear if B is of type I and B' is of type II, since we map to blocks of type I only s-phases i for which z_i ≥ log l, and we map to blocks of type II only s-phases i for which z_i < log l. The statement also holds if both B and B' are of type I, since each block is of size at least two. So we are left with the case where both B and B' are of type II. It is enough to prove the claim for two consecutive blocks B_j and B_{j+1}. Let x_1, y_1 be the first and the last s-phase in P(B_j), and let x_2 be the first s-phase in P(B_{j+1}).
From the definition of P(B_j) it follows that:

v(ϕ_{b_{j-1}})/v(ϕ_{x_1}) ≥ z_{a_j} ((l - 1)/5)^(2/3),   (24)
v(ϕ_{b_{j-1}})/v(ϕ_{y_1}) > z_{a_j} (l - 1)/5,   (25)
v(ϕ_{b_{j-1}})/v(ϕ_{y_1 - 1}) ≤ z_{a_j} (l - 1)/5.   (26)
Maximizing the Spread of Influence through a Social Network David Kempe Dept. of Computer Science Cornell University, Ithaca NY kempe@cs.cornell.edu Jon Kleinberg Dept. of Computer Science Cornell University,
More informationSizeConstrained Weighted Set Cover
SizeConstrained Weighted Set Cover Lukasz Golab 1, Flip Korn 2, Feng Li 3, arna Saha 4 and Divesh Srivastava 5 1 University of Waterloo, Canada, lgolab@uwaterloo.ca 2 Google Research, flip@google.com
More informationSET COVERING WITH OUR EYES CLOSED
SET COVERING WITH OUR EYES CLOSED FABRIZIO GRANDONI, ANUPAM GUPTA, STEFANO LEONARDI, PAULI MIETTINEN, PIOTR SANKOWSKI, AND MOHIT SINGH Abstract. Given a universe U of n elements and a weighted collection
More informationIEEE TRANSACTIONS ON INFORMATION THEORY 1. The Smallest Grammar Problem
IEEE TRANSACTIONS ON INFORMATION THEORY 1 The Smallest Grammar Problem Moses Charikar, Eric Lehman, April Lehman, Ding Liu, Rina Panigrahy, Manoj Prabhakaran, Amit Sahai, abhi shelat Abstract This paper
More informationNew insights on the meanvariance portfolio selection from de Finetti s suggestions. Flavio Pressacco and Paolo Serafini, Università di Udine
New insights on the meanvariance portfolio selection from de Finetti s suggestions Flavio Pressacco and Paolo Serafini, Università di Udine Abstract: In this paper we offer an alternative approach to
More informationTHE TRAVELING SALESMAN PROBLEM AND ITS VARIATIONS
THE TRAVELING SALESMAN PROBLEM AND ITS VARIATIONS Edited by GREGORY GUTIN Royal Holloway, University of London, UK ABRAHAM PUNNEN University of New Brunswick, St. John, Canada Kluwer Academic Publishers
More informationRobust Set Reconciliation
Robust Set Reconciliation Di Chen 1 Christian Konrad 2 Ke Yi 1 Wei Yu 3 Qin Zhang 4 1 Hong Kong University of Science and Technology, Hong Kong, China 2 Reykjavik University, Reykjavik, Iceland 3 Aarhus
More informationMatching with Contracts
Matching with Contracts By JOHN WILLIAM HATFIELD AND PAUL R. MILGROM* We develop a model of matching with contracts which incorporates, as special cases, the college admissions problem, the KelsoCrawford
More informationNot Only What But also When: A Theory of Dynamic Voluntary Disclosure
Not Only What But also When: A Theory of Dynamic Voluntary Disclosure Ilan Guttman, Ilan Kremer, and Andrzej Skrzypacz Stanford Graduate School of Business September 2012 Abstract The extant theoretical
More informationOPRE 6201 : 2. Simplex Method
OPRE 6201 : 2. Simplex Method 1 The Graphical Method: An Example Consider the following linear program: Max 4x 1 +3x 2 Subject to: 2x 1 +3x 2 6 (1) 3x 1 +2x 2 3 (2) 2x 2 5 (3) 2x 1 +x 2 4 (4) x 1, x 2
More informationmetric space, approximation algorithm, linear programming relaxation, graph
APPROXIMATION ALGORITHMS FOR THE 0EXTENSION PROBLEM GRUIA CALINESCU, HOWARD KARLOFF, AND YUVAL RABANI Abstract. In the 0extension problem, we are given a weighted graph with some nodes marked as terminals
More informationX Incentivizing Participation in Online Forums for Education
X Incentivizing Participation in Online Forums for Education Arpita Ghosh, Cornell University Jon Kleinberg, Cornell University We present a gametheoretic model for online forums for education, where
More informationOn Nashsolvability of chesslike games
R u t c o r Research R e p o r t On Nashsolvability of chesslike games Endre Boros a Vladimir Oudalov c Vladimir Gurvich b Robert Rand d RRR 92014, December 2014 RUTCOR Rutgers Center for Operations
More informationLOCAL SEARCH HEURISTICS FOR kmedian AND FACILITY LOCATION PROBLEMS
SIAM J. COMPUT. Vol. 33, No. 3, pp. 544 562 c 2004 Society for Industrial and Applied Mathematics LOCAL SEARCH HEURISTICS FOR kmedian AND FACILITY LOCATION PROBLEMS VIJAY ARYA, NAVEEN GARG, ROHIT KHANDEKAR,
More informationAn efficient reconciliation algorithm for social networks
An efficient reconciliation algorithm for social networks Nitish Korula Google Inc. 76 Ninth Ave, 4th Floor New York, NY nitish@google.com Silvio Lattanzi Google Inc. 76 Ninth Ave, 4th Floor New York,
More informationUsing Focal Point Learning to Improve HumanMachine Tacit Coordination
Using Focal Point Learning to Improve HumanMachine Tacit Coordination Inon Zuckerman 1, Sarit Kraus 1, Jeffrey S. Rosenschein 2 1 Department of Computer Science BarIlan University RamatGan, Israel {zukermi,
More informationMobility Increases the Capacity of Ad Hoc Wireless Networks
IEEE/ACM TRANSACTIONS ON NETWORKING, VOL. 10, NO. 4, AUGUST 2002 477 Mobility Increases the Capacity of Ad Hoc Wireless Networks Matthias Grossglauser and David N. C. Tse Abstract The capacity of ad hoc
More informationWhat Cannot Be Computed Locally!
What Cannot Be Computed Locally! Fabian Kuhn Dept. of Computer Science ETH Zurich 8092 Zurich, Switzerland uhn@inf.ethz.ch Thomas Moscibroda Dept. of Computer Science ETH Zurich 8092 Zurich, Switzerland
More informationSome Polynomial Theorems. John Kennedy Mathematics Department Santa Monica College 1900 Pico Blvd. Santa Monica, CA 90405 rkennedy@ix.netcom.
Some Polynomial Theorems by John Kennedy Mathematics Department Santa Monica College 1900 Pico Blvd. Santa Monica, CA 90405 rkennedy@ix.netcom.com This paper contains a collection of 31 theorems, lemmas,
More informationAn example of a computable
An example of a computable absolutely normal number Verónica Becher Santiago Figueira Abstract The first example of an absolutely normal number was given by Sierpinski in 96, twenty years before the concept
More informationComplexity of UnionSplitFind Problems. Katherine Jane Lai
Complexity of UnionSplitFind Problems by Katherine Jane Lai S.B., Electrical Engineering and Computer Science, MIT, 2007 S.B., Mathematics, MIT, 2007 Submitted to the Department of Electrical Engineering
More information