# Fast Greedy Algorithms in MapReduce and Streaming


| Objective | Constraint | Approximation | Rounds | Reference |
|---|---|---|---|---|
| Modular | matroid | 1 | O(1/δ) | Section 5 |
| Modular | p-system | 1/(p+ε) | O(1/(εδ) · log Δ) | Section 4 |
| Submodular | 1-knapsack | 1/2 − ε | O(1/δ) | Section 6 |
| Submodular | d-knapsacks | Ω(1/d) | O(1/δ) | Section 6 |
| Submodular | p-system | 1/(p+1+ε) | O(1/(εδ) · log Δ) | Section 4 |

Table 1: A summary of the results. Here, n denotes the total number of elements and Δ the maximum change of the function under consideration. Our algorithms use O(n/µ) machines, where µ, the memory per machine, is µ = O(kn^δ log n) and k is the cardinality of the optimum solution.

DEFINITION 1 (MODULAR AND SUBMODULAR FUNCTIONS). A function f : 2^U → R_+ is said to be submodular if for every S ⊆ S' ⊆ U and u ∈ U \ S', we have f_{S'}(u) ≤ f_S(u), where f_S(u) = f(S ∪ {u}) − f(S) denotes the marginal value of u with respect to S; it is said to be modular if f_{S'}(u) = f_S(u). A function f is said to be monotone if S ⊆ S' implies f(S) ≤ f(S').

A family I ⊆ 2^U is hereditary if it is downward closed, i.e., A ∈ I and B ⊆ A imply B ∈ I. The greedy simulation framework we introduce will only be restricted to submodular functions over a hereditary feasible family. We now define more specific types of constraints on the feasible family.

DEFINITION 2 (MATROID). The pair M = (U, I) is a matroid if I is hereditary and satisfies the following augmentation property: for all A, B ∈ I with |A| < |B|, there exists u ∈ B \ A such that A ∪ {u} ∈ I.

Given a matroid (U, I), a set A ⊆ U is called independent if and only if A ∈ I. Throughout the paper, we will often use the terms feasible and independent interchangeably, even for more general families I of feasible solutions. The rank of a matroid is the number of elements in a maximal independent set.

We next recall the notion of p-systems, which is a generalization of the intersection of p matroids (e.g., see [8, 6, 9]). Given U' ⊆ U and a hereditary family I, the maximal independent sets of I contained in U' are given by B(U') = {A ∈ I : A ⊆ U' and there is no A' ∈ I with A ⊊ A' ⊆ U'}.

DEFINITION 3 (p-SYSTEM). The pair P = (U, I) is a p-system if I is hereditary and for all U' ⊆ U, we have max_{A ∈ B(U')} |A| ≤ p · min_{A ∈ B(U')} |A|.
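As a concrete illustration of Definition 1 (not from the paper), a set-coverage function is the canonical monotone submodular function. A minimal Python sketch, with illustrative names:

```python
def coverage(sets, S):
    """f(S): number of ground elements covered by the sets indexed by S."""
    covered = set()
    for i in S:
        covered |= sets[i]
    return len(covered)

def marginal(sets, S, u):
    """f_S(u) = f(S ∪ {u}) − f(S), the marginal value of u given S."""
    return coverage(sets, S | {u}) - coverage(sets, S)

# A small instance: four sets over the ground elements {a, b, c, d}.
sets = {1: {"a", "b", "c"}, 2: {"a"}, 3: {"a", "c"}, 4: {"b", "d"}}

# Monotonicity: S ⊆ S' implies f(S) ≤ f(S').
assert coverage(sets, {2}) <= coverage(sets, {2, 4})
# Submodularity (diminishing returns): for S ⊆ S', f_{S'}(u) ≤ f_S(u).
assert marginal(sets, {1}, 3) <= marginal(sets, set(), 3)
```

The two assertions check exactly the two inequalities of Definition 1 on this instance.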
Given F = (U, I), where I is hereditary, we will use the notation F[X], for X ⊆ U, to denote the pair (X, I[X]), where I[X] is the restriction of I to the sets over elements in X. We observe that if F is a matroid (resp., p-system), then F[X] is a matroid (resp., p-system) as well.

2.1 Greedy algorithms

A natural way to solve (1) is to use the following sequential greedy algorithm: start with an empty set S and grow S by iteratively adding the element u ∈ U that greedily maximizes the benefit: u = arg max_{u' ∈ U : S ∪ {u'} ∈ I} f_S(u'). It is known that this greedy algorithm obtains a (1 − 1/e)-approximation for the maximum coverage problem, a (1/p)-approximation for maximizing a modular function subject to p-system constraints [6], and a (1/(p+1))-approximation for maximizing a submodular function [8, 0]. Recently, for p = 1, Calinescu et al. [8] obtained a (1 − 1/e)-approximation algorithm for maximizing a submodular function; this result generalizes the maximum coverage result.

We now define an approximate greedy algorithm, which is a modification of the standard greedy algorithm. Let 0 ≤ ε ≤ 1 be a fixed constant.

DEFINITION 4 (ε-GREEDY ALGORITHM). Repeatedly add an element u to the current solution S such that S ∪ {u} ∈ I and f_S(u) ≥ (1/(1+ε)) · max_{u' ∈ U : S ∪ {u'} ∈ I} f_S(u'), with ties broken in an arbitrary but consistent way.

The usefulness of this definition is evident from the following.

THEOREM 5. The ε-greedy algorithm achieves: (i) a 1/(2+ε)-approximation for maximizing a submodular function subject to a matroid constraint; (ii) a (1 − 1/e)/(1+ε)-approximation for maximizing a submodular function subject to choosing at most k elements; and (iii) a 1/(p+1+ε) (resp., 1/(p+ε)) approximation for maximizing a submodular (resp., modular) function subject to a p-system.

The proof follows by definition and from the analysis of Calinescu et al. [8]. Finally, without loss of generality, we assume that for every S ⊆ U and u ∈ U such that f_S(u) ≠ 0, we have 1 ≤ f_S(u) ≤ Δ; i.e., Δ represents the spread of the non-zero incremental values.
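The ε-greedy rule above can be sketched sequentially as follows (illustrative Python, not the paper's implementation; the oracles `feasible` and `marginal` are assumptions):

```python
def eps_greedy(universe, feasible, marginal, eps):
    """ε-greedy: while some feasible element has positive gain, add an
    element whose gain is within a (1+eps) factor of the best feasible
    gain; ties broken deterministically by taking the smallest index."""
    S = set()
    while True:
        cands = [u for u in universe - S if feasible(S | {u})]
        if not cands:
            return S
        gains = {u: marginal(S, u) for u in cands}
        best = max(gains.values())
        if best <= 0:
            return S
        S.add(min(u for u in cands if (1 + eps) * gains[u] >= best))

# Toy run: maximize coverage subject to a cardinality constraint |S| ≤ 2.
sets = {1: {"a", "b", "c"}, 2: {"a"}, 3: {"a", "c"}, 4: {"b", "d"}}
cov = lambda S: len(set().union(*[sets[i] for i in S])) if S else 0
picked = eps_greedy(set(sets), lambda S: len(S) <= 2,
                    lambda S, u: cov(S | {u}) - cov(S), eps=0.1)
assert picked == {1, 4}
```

With eps = 0, this degenerates to the exact greedy rule; larger eps widens the set of acceptable elements, which is what the parallel simulation later exploits.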
2.2 MapReduce

We describe a high-level overview of the MapReduce computational model; for more details see [6, 8, 7, 30]. In this setting all of the data is represented by ⟨key; value⟩ pairs. For each pair, the key can be seen as the logical address of the machine that contains the value, with pairs sharing the same key being stored on the same machine. The computation itself proceeds in rounds, with each round consisting of a map, a shuffle, and a reduce phase. Semantically, the map and shuffle phases distribute the data, and the reduce phase performs the computation. In the map phase, the algorithm designer specifies the desired location of each value by potentially changing its key. The system then routes all of the values with the same key to the same machine in the shuffle phase. The algorithm designer specifies a reduce function that takes as input all ⟨key; value⟩ pairs with the same key and outputs either the final solution or a set of ⟨key; value⟩ pairs to be mapped in a subsequent MapReduce round. Karloff et al. [7] introduced a formal model designed to capture the real-world restrictions of MapReduce as faithfully as possible. Specifically, their model insists that the total number of machines available for a computation of size n is at most n^(1−ε), with each

machine having n^(1−ε) memory, for some constant ε > 0. The overall goal is to reduce the number of rounds necessary for the computation to complete. Later refinements of the model [8, 30, 34] allow for a specific tradeoff between the memory per machine, the number of machines, and the total number of rounds.

The MapReduce model of computation is incomparable to the traditional PRAM model, where an algorithm can use a polynomial number of processors that share an unlimited amount of memory. The authors of [7] have shown how to simulate T-step EREW PRAM algorithms in O(T) rounds of MapReduce. This result has subsequently been extended to CRCW algorithms with an additional overhead of log_µ M, where µ is the memory per machine and M is the aggregate memory used by the PRAM algorithm. We note that only a few special cases of the problems we consider have efficient PRAM algorithms, and the MapReduce simulation of these cases requires at least a logarithmic factor more running time than our algorithms.

3. SAMPLE&PRUNE

In this section we describe a primitive that will be used in our algorithms to progressively reduce the size of the input. A similar filtering method was used in [30] for maximal matchings and in [8] for clustering. Our contribution is to abstract this method and significantly expand the scope of its applications. At a high level, the idea is to identify a large subset of the data that can be safely discarded without changing the value of the optimum solution.

As a concrete example, consider the MAXCOVER problem. We are given a universe of elements U and a family of subsets S. Given an integer k, the goal is to choose at most k subsets from S such that the largest number of elements is covered. Say that the sets in S are S_1 = {a, b, c}, S_2 = {a}, S_3 = {a, c}, and S_4 = {b, d}. If S_1 has been selected as part of the solution, then both S_2 and S_3 are redundant and can clearly be removed, thus reducing the problem size.
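The redundancy check in this example is a one-liner (illustrative Python): a set is prunable once it adds nothing beyond what the selected sets already cover.

```python
# Hypothetical check for the MAXCOVER example above: S2 and S3 add no new
# elements once S1 = {a, b, c} is selected, while S4 still contributes {d}.
sets = {"S1": {"a", "b", "c"}, "S2": {"a"}, "S3": {"a", "c"}, "S4": {"b", "d"}}
selected = {"S1"}
covered = set().union(*(sets[s] for s in selected))
redundant = {name for name, s in sets.items()
             if name not in selected and s <= covered}
assert redundant == {"S2", "S3"}
```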
We refer to S_1 above as a seed solution. We show, for a large family of optimization functions, that a seed solution computed on a sample of all of the elements can be used to reduce the problem size by a factor proportional to the size of the sample. Formally, consider a universe U and a function G_k such that, for A ⊆ U, the function G_k(A) returns a subset of A of size at most k, and G_k(A) ⊆ G_k(B) for A ⊆ B. The algorithm SAMPLE&PRUNE begins by running G_k on a sample of the data to obtain a seed solution S. It then examines every element u ∈ U and removes it if it appears to be redundant under G_k given S.

SAMPLE&PRUNE(U, G_k, l)
1: X ← sample each point in U independently with probability min{1, l/|U|}
2: S ← G_k(X)
3: M_S ← {u ∈ U \ S : u ∈ G_k(S ∪ {u})}
4: return (S, M_S)

The algorithm is sequential, but can be easily implemented in the MapReduce setting, since both lines (1) and (3) are trivially parallelizable. Further, the set X at line (1) has size at most O(l log n) with high probability. Therefore, for any l > 0 the algorithm can be implemented in two rounds of MapReduce using O(n/µ) machines, each with µ = O(l log n) memory. Finally, we show that the number |M_S| of non-redundant elements is smaller than |U| by a factor of l/k. Thus, a repeated call to SAMPLE&PRUNE terminates after O(log_{l/k} n) iterations, as shown in Corollary 7.

LEMMA 6. After one round of SAMPLE&PRUNE with l = kn^δ log n, we have Pr[|M_S| ≥ 2|U| · n^{−δ}] ≤ n^{−k}, when G_k(A) is a function that returns a subset of A of size at most k with G_k(A) ⊆ G_k(B) for A ⊆ B.

PROOF. First, observe that when |U| ≤ l, we have M_S = ∅ and hence we are done. Therefore, assume n = |U| > l. Call the set S obtained in step 2 a seed set. Fix a set S and let E_S be the event that S was selected as the seed set. Note that when E_S happens, by the hereditary property of G_k, none of the elements in M_S is part of the sample X, since any element x ∈ X discarded in step 2 would be discarded in step 3 as well.
Therefore, for each S such that |M_S| ≥ m = 2(k/l) · n log n, we have Pr[E_S] ≤ (1 − l/n)^{|M_S|} ≤ (1 − l/n)^m ≤ n^{−2k}. Since each seed set S has at most k elements, there are at most n^k such sets, and hence Σ_{S : |M_S| ≥ m} Pr[E_S] ≤ n^k · n^{−2k} = n^{−k}.

COROLLARY 7. Suppose we run SAMPLE&PRUNE repeatedly for T rounds, that is: for 1 ≤ i ≤ T, let (S_i, M_i) = SAMPLE&PRUNE(M_{i−1}, G_k, l), where M_0 = U. If l = kn^δ log n and T > 2/δ, then we have M_T = ∅, with probability at least 1 − T·n^{−k}.

PROOF. By Lemma 6, assuming that |M_i| ≤ n(2n^{−δ})^i, we can prove that |M_{i+1}| ≤ n(2n^{−δ})^{i+1} with probability at least 1 − n^{−k}. The claim follows from a union bound.

We now illustrate how Corollary 7 generalizes some prior results [8, 30]. Note that the choice of the function G_k affects both the running time and the approximation ratio of the algorithm. Since Corollary 7 guarantees convergence, the algorithm designer can focus on judiciously selecting a G_k that offers approximation guarantees for the problem at hand. For finding maximal matchings, Lattanzi et al. [30] set G_k(X, S) to be the maximal matching obtained by streaming all of the edges in X, given that the edges in S have already been selected. Setting k = n/2, as at most n/2 edges can ever be in a maximum matching, and the initial number of edges to m, we see that using O(nm^δ) memory, the algorithm converges after O(1/δ) rounds. For clustering, Ene et al. [8] first compute the value v of the distance of the (roughly) n^δ-th furthest point in X to a predetermined set S, and the function G_k(X, S) returns all of the points whose distance to S is further than v. Since G_k returns at most k = n^δ points, using O(n^{2δ}) memory, the algorithm converges after O(1/δ) rounds.

In addition to these two examples, we will show below how to use the power of SAMPLE&PRUNE to parallelize a large class of greedy algorithms.

4. GREEDY ALGORITHMS

In this section we consider a generic submodular maximization problem with hereditary constraints.
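Before turning to the greedy framework, the SAMPLE&PRUNE primitive itself is easy to sketch sequentially (illustrative Python; in MapReduce, the sampling in line 1 and the membership tests in line 3 are what get distributed):

```python
import random

def sample_and_prune(U, G_k, l, rng=random.Random(0)):
    """One round of SAMPLE&PRUNE: build a seed solution on a small sample,
    then keep only elements that G_k would still accept given the seed."""
    p = min(1.0, l / max(1, len(U)))
    X = {u for u in U if rng.random() < p}        # line 1: sample
    S = G_k(X)                                    # line 2: seed solution
    M = {u for u in U - S if u in G_k(S | {u})}   # line 3: prune
    return S, M

# Toy G_k satisfying G_k(A) ⊆ G_k(B) for A ⊆ B: keep elements of value ≥ 90.
G = lambda A: {u for u in A if u >= 90}
S, M = sample_and_prune(set(range(100)), G, l=50)
# The seed plus the survivors are exactly the elements G can never rule out.
assert S | M == set(range(90, 100))
```

A repeated call would feed M back in as the new universe, as in Corollary 7, until M is empty.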
We first introduce a simple scaling approach that simulates the ε-greedy algorithm. The scaling algorithm proceeds in phases: in phase i ∈ {0, 1, ..., log_{1+ε} Δ}, only the elements that improve the current solution by an amount in [Δ/(1+ε)^{i+1}, Δ/(1+ε)^i) are considered (recall that no element can improve the solution by more than Δ). These elements are added one at a time to the solution as the algorithm scans the input. We will show that the algorithm terminates after O((1/ε) · log Δ) iterations, and effectively simulates the ε-greedy algorithm. We note that somewhat similar scaling ideas have proven to be useful for submodular problems, such as in [5, 4]. We first describe the sequential algorithm (GREEDYSCALING) and then show how it can be parallelized.
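The scaling idea can be sketched sequentially as follows (illustrative Python; GREEDYSCALING's exact pseudocode is not reproduced in this excerpt, and gains are assumed to lie in [1, Δ] as stated above):

```python
def greedy_scaling(universe, feasible, marginal, delta, eps):
    """Scaling simulation of ε-greedy: sweep acceptance thresholds
    delta/(1+eps)^i downward, one scan of the input per phase, adding
    every feasible element that clears the current threshold."""
    S = set()
    tau = delta / (1 + eps)
    while tau * (1 + eps) >= 1:          # O(log_{1+eps} delta) phases
        for u in sorted(universe - S):   # one scan of the input
            if feasible(S | {u}) and marginal(S, u) >= tau:
                S.add(u)
        tau /= 1 + eps
    return S

# Toy run on the coverage instance with |S| ≤ 2 and delta = max gain = 3.
sets = {1: {"a", "b", "c"}, 2: {"a"}, 3: {"a", "c"}, 4: {"b", "d"}}
cov = lambda S: len(set().union(*[sets[i] for i in S])) if S else 0
S = greedy_scaling(set(sets), lambda S: len(S) <= 2,
                   lambda S, u: cov(S | {u}) - cov(S), delta=3, eps=0.5)
assert S == {1, 4}
```

Within a phase every accepted element has gain within a (1+eps) factor of the current best, which is exactly the ε-greedy acceptance rule; the parallel version lets many machines apply the same threshold at once.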

6 LEMMA. Let M = (U, I) be a matroid of rank k and f : U R + be an injective weight function. For any X U, let G(X) be the greedy solution for M[X] under f. Then, for any X U and u G(U), it holds that u G(G(X) {u}). PROOF. Let u G(U) and let X be any subset of U such that u X. Let A contain an element u U such that f(u ) > f(u) and let A = A X. We define r(s) to be the rank of the matroid M[S] for any S U and let r(s, u ) = r(s {u}) r(s) for any u U. It can be easily verified that r(s) is a submodular function. Notice that r(a, u) = because u is in G(U). Knowing that r is submodular and A A we have that r(a, u) r(a, u) =. Thus, it must be the case that u G(X {u}) To find the maximum weight independent set in MapReduce we again turn to the SAMPLE&PRUNE procedure. As in the case of the greedy scaling solution, our goal will be to cull the elements not in the global greedy solution, based on the greedy solution for the sub-matroid induced by a set of sampled elements. Intuitively, Lemma guarantees that no optimal element will be removed during this procedure. MATROIDMR(U, I, f, l) : S, M U : while M do 3: (S, M) SAMPLE&PRUNE(S M, G, l) 4: end while 5: return S Algorithm MATROIDMR contains the formal description. Given the matroid M = (U, I), the weight function f, and a memory parameter l, the algorithm repeatedly (a) computes the greedy solution S on a submatroid obtained from M by restricting it to a small sample of elements that includes the solution computed in the previous iteration and (b) prunes away all the elements that cannot individually improve the newly computed solution S. We remark that, unlike in the algorithm for the greedy framework of Section 4, the procedure G passed to SAMPLE&PRUNE is invariant throughout the course of the whole algorithm and corresponds to the classical greedy algorithm for matroids. Note that G always returns a solution of size at most k, as no feasible set has size more than the rank. 
The correctness of the algorithm again follows from the fact that no element in the global greedy solution is ever removed from consideration.

LEMMA 13. Upon termination, the algorithm MATROIDMR returns the optimal greedy solution S*.

PROOF. It is enough to show inductively that after each call of SAMPLE&PRUNE, we have S* ⊆ S ∪ M. Indeed, this implies that after the final call, the greedy solution of S ∪ M is exactly S*. Let s* ∈ S* ∩ (S ∪ M), and suppose by contradiction that after a call of SAMPLE&PRUNE(S ∪ M, G, l), we have s* ∉ S' ∪ M', where (S', M') is the new pair returned by the call. By the definition of SAMPLE&PRUNE and G, it must be that s* was pruned since it was not a part of the greedy solution on S' ∪ {s*}. But then Lemma 12 guarantees that s* is not part of the greedy solution of S ∪ M, and hence, by induction, s* ∉ S*, a contradiction.

For the running time of the algorithm, we bound the number of iterations of the while loop using a slight variant of Corollary 7. The reason Corollary 7 is not directly applicable is that S is added back to M after each iteration of the while loop in MATROIDMR.

LEMMA 14. Let l = kn^δ log n. Then the while loop of the algorithm MATROIDMR is executed at most T = 1 + 2/δ times, with probability at least 1 − T·n^{−k}.

PROOF. Let (S_i, M_i) be the pair returned by SAMPLE&PRUNE in the ith iteration of the while loop, where S_0 = ∅ and M_0 = U. Also, let n_i = |S_i ∪ M_i| and γ = 2n^{−δ}. Note that n_0 = |U| = n. We want to show that the sequence n_i decreases rapidly. Assuming n_i ≤ γ^i·n + k(1 + γ + ··· + γ^{i−1}), Lemma 6 implies that, with probability at least 1 − n^{−k},

n_{i+1} ≤ |M_{i+1}| + |S_{i+1}| ≤ γ·n_i + |S_{i+1}| ≤ γ·n_i + r ≤ γ^{i+1}·n + k·Σ_{j=0}^{i} γ^j,

where r ≤ k is the rank. Therefore, for every i, we have n_i ≤ γ^i·n + k·(1 − γ^i)/(1 − γ) < γ^i·n + k/(1 − γ), with probability at least 1 − i·n^{−k}. For i = T = 1 + 2/δ, we have n_T ≤ l. Therefore, at iteration T, SAMPLE&PRUNE will sample with probability one all elements in S_{T−1} ∪ M_{T−1}, and hence S_T = G(S_{T−1} ∪ M_{T−1}). By Lemma 13, S_T = S* and therefore M_T = ∅.

Given that SAMPLE&PRUNE is realizable in MapReduce, the following is immediate from Lemma 14.
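A toy sequential driver for MATROIDMR (illustrative Python; a uniform matroid stands in for a general one, and the independence oracle and weights are assumptions):

```python
import random

def matroid_greedy(X, independent, f):
    """Classical matroid greedy: scan by decreasing weight, keep each
    element that preserves independence."""
    A = []
    for u in sorted(X, key=f, reverse=True):
        if independent(A + [u]):
            A.append(u)
    return set(A)

def matroid_mr(U, independent, f, l, rng=random.Random(0)):
    """MATROIDMR sketch: repeatedly Sample&Prune S ∪ M, re-seeding the
    sample with the previous solution, until nothing survives pruning."""
    S, M = set(), set(U)
    while M:
        pool = S | M
        p = min(1.0, l / len(pool))
        X = S | {u for u in pool if rng.random() < p}
        S = matroid_greedy(X, independent, f)
        M = {u for u in pool - S if u in matroid_greedy(S | {u}, independent, f)}
    return S

# Uniform matroid of rank 5 with injective weights: the greedy optimum (and
# hence MATROIDMR's output) is the 5 heaviest elements.
top = matroid_mr(range(20), lambda A: len(A) <= 5, lambda u: u, l=100)
assert top == set(range(15, 20))
```

Here l controls the per-round memory; taking l ≥ |U| makes the run exact in one round, matching the termination argument of Lemma 14.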
COROLLARY 15. MATROIDMR can be implemented to run in O(1/δ) MapReduce rounds using O((n/µ) · log n) machines with µ = O(kn^δ log n) memory, with high probability.

6. MONOTONE SUBMODULAR MAXIMIZATION

In this section we consider the problem of maximizing a monotone submodular function under a cardinality constraint, knapsack constraints, and p-system constraints. We first consider the special case of a single cardinality constraint, i.e., a 1-knapsack constraint with unit weights. In this case, we provide a (1/2 − ε)-approximation using a simple threshold argument. Then, we observe that the same ideas can be applied to achieve an Ω(1/d)-approximation for d knapsack constraints. This is essentially the best achievable, as Dean et al. [5] show an O(1/d^{1−ε})-hardness.

6.1 Cardinality constraint

We consider maximizing a submodular function subject to a cardinality constraint. The general techniques have the same flavor as the greedy algorithm described in Section 4. At a high level, we observe that for these problems a single threshold setting leads to an approximate solution, and hence we do not need to iterate over the log_{1+ε} Δ thresholds. We will show the following theorem.

THEOREM 16. For the problem of maximizing a submodular function under a k-cardinality constraint, there is a MapReduce algorithm that produces a (1/2 − ε)-approximation in O(1/δ) rounds using O((n/µ) · log n) machines with µ = O(kn^δ log n) memory, with high probability.

Given a monotone submodular function f and an integer k, we would like to find a set S ⊆ U of size at most k that maximizes f. We show how to achieve a (1/2 − ε)-approximation in a constant number of MapReduce rounds. Our techniques also lead to a one-pass streaming algorithm that uses Õ(k) space. Details of the streaming algorithm are omitted in this version of the paper. The analysis is based on the following lemma.

LEMMA 17. Fix any γ > 0. Let OPT be the value of an optimal solution S* and τ = γ·OPT/(2k).
Consider any S = {s_1, ..., s_t} ⊆ U, |S| = t ≤ k, with the following properties.

(i) There exists an ordering of the elements s_1, ..., s_t such that for all 0 ≤ i < t, f_{S_i}(s_{i+1}) ≥ τ, where S_i = {s_1, ..., s_i}. (ii) If t < k, then f_S(u) ≤ τ for all u ∈ U. Then, f(S) ≥ OPT · min{γ/2, 1 − γ/2}.

PROOF. If t = k, then f(S) = Σ_{i=0}^{k−1} f_{S_i}(s_{i+1}) ≥ Σ_{i=0}^{k−1} τ = kτ = γ·OPT/2. If t < k, we know that f_S(u') ≤ τ for any element u' ∈ S*. Furthermore, since f is submodular, f(S ∪ S*) − f(S) ≤ τ·|S* \ S| ≤ γ·OPT/2. We have f(S) ≥ f(S ∪ (S* \ S)) − γ·OPT/2 ≥ OPT − γ·OPT/2 = (1 − γ/2)·OPT, where the third step follows from the monotonicity of f.

We now show how to adapt the algorithm to MapReduce. We will again make use of the SAMPLE&PRUNE procedure. As in the case of the greedy scaling solution, our goal will be to prune the elements with marginal gain below τ with respect to a current solution S. Moreover, only elements with marginal gain above τ will be added to S, so that Lemma 17 can be applied. The algorithm THRESHOLDMR shares similar ideas with the MapReduce algorithm for the greedy framework. In particular, the procedure G_{S,τ,k} passed to SAMPLE&PRUNE changes throughout different calls, and is defined as follows: G_{S,τ,k}(X) maintains a solution S' and, for every element u ∈ X, adds it to S' if f_{S∪S'}(u) ≥ τ (the marginal gain is above the threshold) and |S ∪ S'| < k (the solution is feasible). The output of G_{S,τ,k}(X) is the resulting set S ∪ S'.

Algorithm THRESHOLDMR(U, f, k, τ, l)
1: S ← ∅, M ← U
2: while M ≠ ∅ do
3:   (S', M) ← SAMPLE&PRUNE(M, G_{S,τ,k}, l)
4:   S ← S ∪ S'
5: end while
6: return S

Observe that all of the elements that were added to S had marginal value at least τ, by the definition of G_{S,τ,k}. The following claim guarantees the applicability of Lemma 17. The proof is substantially a replica of that of Lemma 9 and is omitted.

LEMMA 18. Let S be the solution returned by the algorithm THRESHOLDMR. Then, either |S| = k, or f_S(u) ≤ τ for all u ∈ U.

Finally, observe that an upper bound on Δ (and therefore on OPT) can be computed in one MapReduce round. Hence, we can easily run the algorithm in parallel on different thresholds τ.
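The single-threshold rule at the heart of both THRESHOLDMR and its streaming variant is tiny (illustrative Python):

```python
def threshold_select(stream, f, k, tau):
    """Keep any arriving element whose marginal gain is at least tau,
    stopping once k elements have been taken; the result then satisfies
    properties (i) and (ii) of Lemma 17."""
    S = set()
    for u in stream:
        if len(S) < k and f(S | {u}) - f(S) >= tau:
            S.add(u)
    return S

# Coverage instance; tau = 2 keeps only elements that add ≥ 2 new items.
sets = {1: {"a", "b", "c"}, 2: {"a"}, 3: {"a", "c"}, 4: {"b", "d"}}
cov = lambda S: len(set().union(*[sets[i] for i in S])) if S else 0
S = threshold_select([2, 3, 1, 4], cov, k=2, tau=2)
assert S == {3, 4} and cov(S) == 4
```

In MapReduce, G_{S,τ,k} applies this same acceptance rule to each sampled chunk X, given the solution S accumulated so far.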
This observation, along with Lemma 18, Corollary 7, and the fact that SAMPLE&PRUNE can be implemented in two rounds of MapReduce, yields the following corollary.

COROLLARY 19. THRESHOLDMR produces a (1/2 − ε)-approximation and can be implemented to run in O(1/δ) rounds using O((n log n)/µ) machines with µ = O(kn^δ log n) memory, with high probability.

6.2 Knapsack constraints

For the case of d knapsack constraints, we are given a submodular function f and would like to find a set S ⊆ U of maximum value with respect to f with the property that its characteristic vector x_S satisfies C·x_S ≤ b. Here, c_{i,j} is the weight of the element u_j with respect to the ith knapsack, and b_i is the capacity of the ith knapsack, for 1 ≤ i ≤ d. Our goal is to show the following theorem. Let k be a given upper bound on the size of the smallest optimal solution.

Algorithm THRESHOLDSTREAM(f, k, τ)
1: S ← ∅
2: for all u ∈ U do
3:   if |S| < k and f_S(u) ≥ τ then
4:     S ← S ∪ {u}
5:   end if
6: end for
7: return S

THEOREM 20. For the problem of maximizing a submodular function under d knapsack constraints, there exists a MapReduce algorithm that produces an Ω(1/d)-approximation in O(1/δ) rounds using O((n/µ) · log n) machines with µ = O(kn^δ log n) memory, with high probability.

Without loss of generality, we will assume that there is no element that individually violates a knapsack constraint, i.e., there is no u_j such that c_{i,j} > b_i for some 1 ≤ i ≤ d. We begin by providing an analog of Lemma 17.

LEMMA 21. Fix any γ > 0. Let OPT be the value of an optimal solution S* and τ_i = γ·OPT/(2d·b_i), for 1 ≤ i ≤ d. Consider any feasible solution S ⊆ U with the following properties: (i) There exists an ordering of its elements a_{j_1}, ..., a_{j_t} such that for all 0 ≤ m < t and 1 ≤ i ≤ d, it holds that f_{S_m}(a_{j_{m+1}}) ≥ τ_i·c_{i,j_{m+1}}, where S_m = {a_{j_1}, ..., a_{j_m}}. (ii) If for all 1 ≤ i ≤ d we have Σ_{m=1}^{t} c_{i,j_m} ≤ b_i/2, then for all a_j ∈ U there exists an index h(j) ≤ d such that f_S(a_j) ≤ τ_{h(j)}·c_{h(j),j}. Then, f(S) ≥ OPT · min{γ/(4d), 1 − γ/2}.

PROOF.
Consider first the case that the solution S fills at least half of a knapsack, i.e., for some 1 ≤ i ≤ d, Σ_{m=1}^{t} c_{i,j_m} ≥ b_i/2; since each element a_{j_m} ∈ S added a marginal value of at least τ_i·c_{i,j_m}, we have f(S) ≥ τ_i·Σ_{m=1}^{t} c_{i,j_m} ≥ γ·OPT/(4d).

Now assume that the solution S does not fill half of any knapsack. Then, using (ii), define S*_i to be the set of elements a_j ∈ S* \ S such that h(j) = i, for 1 ≤ i ≤ d. Because of the knapsack constraints, submodularity, and the fact that f_S(a_j) ≤ τ_{h(j)}·c_{h(j),j}, we must have that f(S ∪ S*_i) − f(S) ≤ γ·OPT/(2d), for every 1 ≤ i ≤ d. Therefore, we have f(S ∪ (S* \ S)) − f(S) ≤ γ·OPT/2, and hence, by monotonicity of f, f(S) ≥ f(S*) − γ·OPT/2 = (1 − γ/2)·OPT.

Now we show how the algorithm can be implemented in MapReduce. We assume the algorithm is given an estimate γ·OPT, used to compute all the τ_i's for some γ, and an upper bound k on the size of the smallest optimal solution. The algorithm will simulate the presence of an extra knapsack constraint stating that no more than k elements can be picked. (Note that the optimum value for this new system is not changed, as S* is feasible in this new system.) We say that an element a_j ∈ U is heavy profitable if c_{i,j} ≥ b_i/2 and f({a_j}) ≥ τ_i·c_{i,j} for some i. The algorithm works as follows: if there exists a heavy profitable element (which can be detected in a single MapReduce round), the algorithm returns it as the solution. Otherwise, the algorithm is identical to THRESHOLDMR, except for the procedure G_{S,τ,k}, which is now defined as follows: G_{S,τ,k}(X) maintains a solution S' and, for every element a_j ∈ X, adds it to S' if, for all 1 ≤ i ≤ d, f_{S∪S'}(a_j) ≥ τ_i·c_{i,j}, and S ∪ S' ∪ {a_j} is

feasible w.r.t. the knapsack constraints; the output of G_{S,τ,k}(X) is the resulting set S ∪ S'.

We now analyze the quality of the solution returned by the algorithm. In the presence of a heavy profitable element, the returned solution has value at least γ·OPT/(4d). Otherwise, we argue that S satisfies the properties of Lemma 21: the first property is satisfied trivially by the definition of the algorithm, and the second property is valid by submodularity and the fact that there is no heavy profitable element. Similarly to the algorithm THRESHOLDMR, we can run the algorithm in parallel on different values of the estimate γ·OPT. We can conclude the following.

THEOREM 22. There exists a MapReduce algorithm that produces an Ω(1/d)-approximation in O(1/δ) rounds using O((n log n)/µ) machines with µ = O(kn^δ log n) memory, with high probability.

6.3 p-system constraints

Finally, consider the problem of maximizing a monotone submodular function subject to a p-system constraint. This generalizes the question of maximizing a monotone submodular function subject to p matroid constraints. Although one could try to extend the threshold algorithm for submodular functions to this setting, simple examples show that the algorithm can result in a solution whose value is arbitrarily smaller than the optimum, due to the p-system constraint. The algorithms in this section are more closely related to the algorithm for the modular matroid case; however, in this case we will need to store all of the intermediate solutions returned by SAMPLE&PRUNE and choose the one with the largest value. Our goal is to show the following theorem.

THEOREM 23. SUBMODULAR-P-SYSTEM can be implemented to run in O(1/δ) rounds of MapReduce using O((n/µ) · log n) machines with memory µ = O(kn^δ log n), with high probability, and produces a (δ/(p+1))-approximation.

The algorithm SUBMODULAR-P-SYSTEM is similar to the algorithm for the modular matroid case, as the procedure G passed to SAMPLE&PRUNE is simply the greedy algorithm.
However, unlike MATROIDMR, all intermediate solutions are kept and the largest one is chosen as the final solution. The following lemma provides a bound on the quality of the returned solution.

LEMMA 24. If T is the number of iterations of the while loop in SUBMODULAR-P-SYSTEM, then SUBMODULAR-P-SYSTEM gives a 1/((p+1)T)-approximation.

PROOF. Let R_i be the elements of U that are removed during the ith iteration of the while loop of SUBMODULAR-P-SYSTEM. Let O denote the optimal solution and O_i := O ∩ R_i. Let S_i be the set returned by SAMPLE&PRUNE during the ith iteration. Note that the algorithm G used in SUBMODULAR-P-SYSTEM is known to be a 1/(p+1)-approximation for maximizing a monotone submodular function subject to a p-system if the input to G is the entire universe [8]. Let I_i denote the p-system induced on only the elements in R_i ∪ S_i; note that by the definition of p-systems, I_i exists. By the definition of G and SAMPLE&PRUNE, an element e is in the set R_i because the algorithm G with input I_i and f returns S_i and e ∉ S_i. This is because for each element e ∈ R_i, SAMPLE&PRUNE runs the algorithm G on the set S_i ∪ {e} and e was not chosen to be in the solution. Thus, if we run G on S_i ∪ R_i, still no element in R_i will be chosen to be in the output solution. Therefore, f(S_i) ≥ (1/(p+1))·f(O_i), since G is a 1/(p+1)-approximation algorithm for the instance consisting of I_i and f. By submodularity we have that Σ_{i=1}^{T} f(S_i) ≥ (1/(p+1))·Σ_{i=1}^{T} f(O_i) ≥ f(O)/(p+1). Since there are only T such sets S_i, if we return the set arg max_{S ∈ F} f(S), then this must give a 1/((p+1)T)-approximation. Further, by the definition of G, we know that for all i the set S_i is in I, so the set returned must be a feasible solution.

Algorithm 3 SUBMODULAR-P-SYSTEM(U, I, k, f, l)
1: For X ⊆ U, let G(X, k) be the greedy procedure running on the subset X of elements: that is, start with A := ∅, and greedily add to A the element a ∈ X \ A, with A ∪ {a} ∈ I, maximizing f(A ∪ {a}) − f(A). The output of G(X, k) is the resulting set A.
2: F ← ∅
3: while U ≠ ∅ do
4:   (S, M_S) ← SAMPLE&PRUNE(U, G, l)
5:   F ← F ∪ {S}
6:   U ← M_S
7: end while
8: return arg max_{S ∈ F} f(S)

7. ADAPTATION TO STREAMING

In this section we discuss how the algorithms given in this paper extend to the streaming setting. In the data stream model (cf. [3]), we assume that the elements of U arrive in an arbitrary order, with both f and membership in I available via oracle accesses. The algorithm makes one or more passes over the input and has a limited amount of memory. The goal in this setting is to minimize the number of passes over the input and to use as little memory as possible.

ε-greedy. We can realize each phase of GREEDYSCALING with a pass over the input in which an element is added to the current solution if the marginal value of adding the element is above the threshold for the phase. If k is the desired solution size, the algorithm will require O(k) space and will terminate after O((1/ε) · log Δ) phases. The approximation guarantees are given in Theorem 5. For the case of modular function maximization subject to a p-system constraint, the approximation ratio achieved by GREEDYSCALING is 1/(p+ε); we show that the approximation ratio achieved in the streaming setting cannot be improved without drastically increasing either the memory or the number of passes. The proof is omitted from this version of the paper.

THEOREM 25. Fix any p and ε > 0. Then any l-pass streaming algorithm achieving a 1/(p−ε)-approximation for maximizing a modular function subject to a p-system constraint requires Ω(n/(l·p log p)) space.

Matroid optimization. Lemma 12 leads to a very simple algorithm in the streaming setting. The algorithm keeps an independent set and, when a new element arrives, updates it by running the greedy algorithm on the current solution and the new element. Observe that the algorithm uses only k + 1 space. Lemma 12 implies that no element in the global greedy solution will be discarded when examined. Therefore, we have:

THEOREM 26.
Algorithm MATROIDSTREAM finds an optimal solution in a single pass using k + 1 memory.

Submodular maximization. The algorithms and analysis for these settings are omitted from this version of the paper.
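The streaming update just described can be sketched as follows (illustrative Python; the independence oracle and weights are assumptions): store at most k elements and, on each arrival, rerun matroid greedy on at most k + 1 elements.

```python
def matroid_stream(stream, independent, f):
    """MATROIDSTREAM sketch: maintain the greedy solution of the elements
    seen so far, touching only k+1 elements at any moment."""
    S = []
    for u in stream:
        S_new = []
        for v in sorted(S + [u], key=f, reverse=True):
            if independent(S_new + [v]):
                S_new.append(v)
        S = S_new
    return set(S)

# Uniform matroid of rank 3 with injective weights: the three heaviest
# elements of the stream survive.
best = matroid_stream([4, 9, 1, 7, 3, 8, 2], lambda A: len(A) <= 3, lambda u: u)
assert best == {7, 8, 9}
```

By Lemma 12, an element of the global greedy solution is never evicted by this update, which is what makes the single pass exact.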

THEOREM 27. There exists a one-pass streaming algorithm that uses O((k/ε) · log(nΔ)) (resp., O(k log(nΔ))) memory and produces a (1/2 − ε) (resp., Ω(1/d)) approximation for the problem of maximizing a submodular function subject to a k-cardinality constraint (resp., d knapsack constraints).

THEOREM 28. There exists a streaming algorithm that uses O(kn^δ log n) memory and produces a (δ/(p+1))-approximation in O(1/δ) passes, where k is the size of the largest independent set.

8. EXPERIMENTS

In this section we describe the experimental results for our algorithm outlined in Section 4 for the MAXCOVER problem. Recall the MAXCOVER problem: given a family S of subsets and a budget k, pick at most k sets from S to maximize the size of their union. The standard (sequential) greedy algorithm gives a (1 − 1/e)-approximation to this problem. We measure the performance of our algorithm, focusing on two aspects: the quality of approximation with respect to the greedy algorithm, and the number of rounds; note that the latter is k for the straightforward implementation of the greedy algorithm. On real-world datasets, our algorithms perform on par with the greedy algorithm in terms of approximation, while obtaining significant savings in the number of rounds.

For our experiments, we use two publicly available datasets, ACCIDENTS and KOSARAK. For ACCIDENTS, |S| = 340,183 and for KOSARAK, |S| = 990,002. Figure 1 shows the performance of our algorithm for ε = δ = 0.5 for various values of k on these two datasets. It is easy to see that the approximation factor is essentially the same as that of greedy once k grows beyond 40, and we obtain a significant factor savings in the number of rounds (more than an order of magnitude for KOSARAK).
Figure 1: Approximation and number of rounds with respect to greedy for the KOSARAK and ACCIDENTS datasets as a function of k (ε = 0.5, δ = 0.5).

The first pane of Figure 2 shows the role of ε (Theorem 5). Even for large values of ε, our algorithm performs almost on par with greedy in terms of approximation (note the scale on the y-axis), while the number of rounds is significantly smaller. Though Theorem 5 only provides a weak guarantee of (1 − 1/e)/(1 + ε), these results show that one can use a large ε in practice without sacrificing much in approximation, while gaining significantly in the number of rounds.

Figure 2: Approximation and number of rounds with respect to greedy for the KOSARAK dataset as a function of ε (k = 100, δ = 0.5) and of δ (k = 100, ε = 0.5).

Finally, the lower pane of Figure 2 shows the role of δ on the number of rounds. Smaller values of δ require more iterations of SAMPLE&PRUNE to meet the memory requirement, thus requiring a larger number of rounds overall. However, even δ = 1/2, with a memory requirement of only k·√n·log n, is already enough to achieve the best possible speedup. Once again, a higher value of δ results in larger savings in the number of rounds. The approximation factor is unchanged.

9. CONCLUSIONS

In this paper we presented algorithms for large-scale submodular optimization problems in the MapReduce and streaming models. We showed how to realize the classical and inherently sequential greedy algorithm in these models and allow algorithms to select more than one element at a time in parallel without sacrificing the quality of the solution.
We validated our algorithms on real-world datasets for the maximum coverage problem and showed that they yield an order of magnitude improvement in the number of rounds, while producing solutions of the same quality. Our work opens up the possibility of solving submodular optimization problems at web scale.

Many interesting questions remain. Not all greedy algorithms fall under the framework presented in this paper; for example, the augmenting-paths algorithm for maximum flow and the celebrated Gale-Shapley matching algorithm cannot be phrased as submodular function optimization. Giving efficient MapReduce or streaming algorithms for those problems is an interesting open question.

More generally, understanding what classes of algorithms can and cannot be efficiently implemented in the MapReduce setting is a challenging open problem.

Acknowledgments. We thank Chandra Chekuri for helpful discussions and pointers to relevant work. We thank Sariel Har-Peled for advice on the presentation of this paper.

10. REFERENCES

[1] R. Agrawal, S. Gollapudi, A. Halverson, and S. Ieong. Diversifying search results. In WSDM, pages 5–14, 2009.
[2] A. Arasu, S. Chaudhuri, and R. Kaushik. Learning string transformations from examples. PVLDB, 2(1):514–525, 2009.
[3] M. Babaioff, N. Immorlica, and R. Kleinberg. Matroids, secretary problems, and online mechanisms. In SODA, 2007.
[4] B. Bahmani, R. Kumar, and S. Vassilvitskii. Densest subgraph in streaming and MapReduce. PVLDB, 5(5), 2012.
[5] M. Bateni, M. Hajiaghayi, and M. Zadimoghaddam. Submodular secretary problem and extensions. In APPROX-RANDOM, 2010.
[6] B. Berger, J. Rompel, and P. W. Shor. Efficient NC algorithms for set cover with applications to learning and geometry. JCSS, 49(3):454–477, 1994.
[7] G. E. Blelloch, R. Peng, and K. Tangwongsan. Linear-work greedy parallel approximate set cover and variants. In SPAA, pages 23–32, 2011.
[8] G. Calinescu, C. Chekuri, M. Pál, and J. Vondrák. Maximizing a submodular set function subject to a matroid constraint. SICOMP, to appear.
[9] G. Capannini, F. M. Nardini, R. Perego, and F. Silvestri. Efficient diversification of web search results. PVLDB, 4(7):451–459, 2011.
[10] M. Charikar. Greedy approximation algorithms for finding dense components in a graph. In APPROX, pages 84–95, 2000.
[11] W. Chen, C. Wang, and Y. Wang. Scalable influence maximization for prevalent viral marketing in large-scale social networks. In KDD, 2010.
[12] W. Chen, Y. Wang, and S. Yang. Efficient influence maximization in social networks. In KDD, pages 199–208, 2009.
[13] F. Chierichetti, R. Kumar, and A. Tomkins. Max-cover in Map-Reduce. In WWW, pages 231–240, 2010.
[14] G. Cormode, H. J. Karloff, and A. Wirth. Set cover algorithms for very large datasets. In CIKM, 2010.
[15] B. C. Dean, M. X. Goemans, and J. Vondrák. Adaptivity and approximation for stochastic packing problems. In SODA, 2005.
[16] J. Dean and S. Ghemawat. MapReduce: Simplified data processing on large clusters. In OSDI, 2004.
[17] J. Edmonds. Matroids and the greedy algorithm. Mathematical Programming, 1:127–136, 1971.
[18] A. Ene, S. Im, and B. Moseley. Fast clustering using MapReduce. In KDD, 2011.
[19] U. Feige. A threshold of ln n for approximating set cover. JACM, 45(4):634–652, 1998.
[20] M. L. Fisher, G. L. Nemhauser, and L. A. Wolsey. An analysis of approximations for maximizing submodular set functions II. Math. Prog. Study, 8:73–87, 1978.
[21] A. Goel, S. Guha, and K. Munagala. Asking the right questions: model-driven optimization using probes. In PODS, pages 203–212, 2006.
[22] M. T. Goodrich, N. Sitchinava, and Q. Zhang. Sorting, searching, and simulation in the MapReduce framework. In ISAAC, 2011.
[23] A. Goyal, F. Bonchi, and L. V. S. Lakshmanan. A data-based approach to social influence maximization. PVLDB, 5(1), 2011.
[24] A. Gupta, A. Roth, G. Schoenebeck, and K. Talwar. Constrained non-monotone submodular maximization: Offline and secretary algorithms. In WINE, pages 246–257, 2010.
[25] E. Hazan, S. Safra, and O. Schwartz. On the complexity of approximating k-set packing. Computational Complexity, 15(1):20–39, 2006.
[26] T. A. Jenkyns. The efficacy of the greedy algorithm. In Proceedings of the 7th Southeastern Conference on Combinatorics, Graph Theory and Computing, pages 341–350, 1976.
[27] H. J. Karloff, S. Suri, and S. Vassilvitskii. A model of computation for MapReduce. In SODA, pages 938–948, 2010.
[28] D. Kempe, J. M. Kleinberg, and É. Tardos. Maximizing the spread of influence through a social network. In KDD, pages 137–146, 2003.
[29] B. Korte and D. Hausmann. An analysis of the greedy heuristic for independence systems. Annals of Discrete Mathematics, 2:65–74, 1978.
[30] S. Lattanzi, B. Moseley, S. Suri, and S. Vassilvitskii. Filtering: a method for solving graph problems in MapReduce. In SPAA, pages 85–94, 2011.
[31] J. Leskovec, A. Krause, C. Guestrin, C. Faloutsos, J. VanBriesen, and N. Glance. Cost-effective outbreak detection in networks. In KDD, pages 420–429, 2007.
[32] S. Muthukrishnan. Data streams: Algorithms and applications. Foundations and Trends in Theoretical Computer Science, 1(2), 2005.
[33] G. L. Nemhauser, L. A. Wolsey, and M. L. Fisher. An analysis of approximations for maximizing submodular set functions I. Math. Prog., 14:265–294, 1978.
[34] A. Pietracaprina, G. Pucci, M. Riondato, F. Silvestri, and E. Upfal. Space-round tradeoffs for MapReduce computations. CoRR, abs/1111.2228, 2011.
[35] B. Saha and L. Getoor. On maximum coverage in the streaming model & application to multi-topic blog-watch. In SDM, 2009.
[36] A. D. Sarma, A. Lall, D. Nanongkai, R. J. Lipton, and J. J. Xu. Representative skylines using threshold-based preference distributions. In ICDE, 2011.
[37] L. G. Valiant. A bridging model for parallel computation. Commun. ACM, 33(8):103–111, 1990.
[38] J. Vondrák. Submodularity in Combinatorial Optimization. PhD thesis, Charles University, Prague, 2007.
