Lower bounds for Howard's algorithm for finding Minimum Mean-Cost Cycles
Thomas Dueholm Hansen (1) and Uri Zwick (2)

(1) Department of Computer Science, Aarhus University.
(2) School of Computer Science, Tel Aviv University, Tel Aviv 69978, Israel.

Abstract. Howard's policy iteration algorithm is one of the most widely used algorithms for finding optimal policies for controlling Markov Decision Processes (MDPs). When applied to weighted directed graphs, which may be viewed as Deterministic MDPs (DMDPs), Howard's algorithm can be used to find Minimum Mean-Cost Cycles (MMCCs). Experimental studies suggest that Howard's algorithm works extremely well in this context. The theoretical complexity of Howard's algorithm for finding MMCCs is a mystery. No polynomial time bound is known on its running time. Prior to this work, there were only linear lower bounds on the number of iterations performed by Howard's algorithm. We provide the first weighted graphs on which Howard's algorithm performs Ω(n^2) iterations, where n is the number of vertices in the graph.

1 Introduction

Howard's policy iteration algorithm [11] is one of the most widely used algorithms for solving Markov decision processes (MDPs). The complexity of Howard's algorithm in this setting was unresolved for almost 50 years. Very recently, Fearnley [5], building on results of Friedmann [7], showed that there are MDPs on which Howard's algorithm requires exponential time. In another recent breakthrough, Ye [17] showed that Howard's algorithm is strongly polynomial when applied to discounted MDPs with a fixed discount ratio. Hansen et al. [10] recently improved some of the bounds of Ye and extended them to the 2-player case. Weighted directed graphs may be viewed as Deterministic MDPs (DMDPs), and solving such DMDPs is essentially equivalent to finding minimum mean-cost cycles (MMCCs) in such graphs. Howard's algorithm can thus be used to solve this purely combinatorial problem.
The complexity of Howard's algorithm in this setting is an intriguing open problem. Fearnley's [5] exponential lower bound seems to depend in an essential way on the use of stochastic actions, so it does not extend to the deterministic setting. Similarly, Ye's [17] polynomial upper bound depends in an essential way on the MDPs being discounted and does not extend to the non-discounted case.

(*) Supported by the Center for Algorithmic Game Theory at Aarhus University, funded by the Carlsberg Foundation.
The MMCC problem is an interesting problem that has various applications. It generalizes the problem of finding a negative cost cycle in a graph. It is also used as a subroutine in algorithms for solving other problems, such as min-cost flow algorithms (see, e.g., Goldberg and Tarjan [9]). There are several polynomial time algorithms for solving the MMCC problem. Karp [12] gave an O(mn)-time algorithm for the problem, where m is the number of edges and n is the number of vertices in the input graph. Young et al. [18] gave an algorithm whose complexity is O(mn + n^2 log n). Although this is slightly worse, in some cases, than the running time of Karp's algorithm, the algorithm of Young et al. [18] behaves much better in practice. Dasdan [3] experimented with many different algorithms for the MMCC problem, including Howard's algorithm. He reports that Howard's algorithm usually runs much faster than Karp's algorithm, and is usually almost as fast as the algorithm of Young et al. [18]. A more thorough experimental study of MMCC algorithms was recently conducted by Georgiadis et al. [8]. (3)

Understanding the complexity of Howard's algorithm for MMCCs is interesting from both the applied and theoretical points of view. Howard's algorithm for MMCCs is an extremely simple and natural combinatorial algorithm, similar in flavor to the Bellman-Ford algorithm for finding shortest paths [1, 2, 6] and to Karp's [12] algorithm. Yet, its analysis seems to be elusive. Howard's algorithm also has the advantage that it can be applied to the more general problem of finding a cycle with a minimum cost-to-time ratio (see, e.g., Megiddo [14, 15]).

Howard's algorithm works in iterations. Each iteration takes O(m) time. It is trivial to construct instances on which Howard's algorithm performs n iterations. (Recall that n and m are the number of vertices and edges in the input graph.) Madani [13] constructed instances on which the algorithm performs 2n − O(1) iterations.
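For comparison, Karp's O(mn) algorithm mentioned above is short enough to sketch. The following is a minimal Python rendering (the function name and graph representation are our own, not from the paper): d_k(v) is the minimum cost of a k-edge walk ending at v, and the minimum cycle mean is recovered by Karp's minimax formula.

```python
def min_mean_cycle(n, edges):
    """Karp's O(mn) algorithm for the minimum mean cost of a cycle.

    n     -- number of vertices, labeled 0..n-1
    edges -- list of (u, v, cost) triples
    Returns None if the graph is acyclic."""
    INF = float("inf")
    # d[k][v] = min cost of a walk with exactly k edges ending at v,
    # starting anywhere (equivalent to a zero-cost super source).
    d = [[0.0] * n] + [[INF] * n for _ in range(n)]
    for k in range(1, n + 1):
        for u, v, c in edges:
            if d[k - 1][u] + c < d[k][v]:
                d[k][v] = d[k - 1][u] + c
    best = None
    for v in range(n):
        if d[n][v] == INF:
            continue
        # Karp: mu* = min_v max_k (d_n(v) - d_k(v)) / (n - k)
        worst = max((d[n][v] - d[k][v]) / (n - k)
                    for k in range(n) if d[k][v] < INF)
        best = worst if best is None else min(best, worst)
    return best
```

On a graph with a 2-cycle of mean cost 2 and another of mean cost 1, the function returns the smaller mean, 1.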
No graphs were known, however, on which Howard's algorithm performed more than a linear number of iterations. We construct the first graphs on which Howard's algorithm performs Ω(n^2) iterations, showing, in particular, that there are instances on which its running time is Ω(n^4), an order of magnitude slower than the running times of the algorithms of Karp [12] and Young et al. [18]. We also construct n-vertex out-degree-2 graphs on which Howard's algorithm performs 2n − O(1) iterations. (Madani's [13] examples used Θ(n^2) edges.) This example is interesting as it shows that the number of iterations performed may differ from the number of edges in the graph by only an additive constant. It also sheds some more light on the non-trivial, and perhaps non-intuitive, behavior of Howard's algorithm. Our examples still leave open the possibility that the number of iterations performed by Howard's algorithm is always at most m, the number of edges. (The graphs on which the algorithm performs Ω(n^2) iterations also have Ω(n^2) edges.) We conjecture that this is always the case.

(3) Georgiadis et al. [8] claim that Howard's algorithm is not robust. From personal conversations with the authors of [8] it turns out, however, that the version they used is substantially different from Howard's algorithm [11].
2 Howard's algorithm for minimum mean-cost cycles

We next describe the specialization of Howard's algorithm to deterministic MDPs, i.e., to finding Minimum Mean-Cost Cycles. For Howard's algorithm for general MDPs, see Howard [11], Derman [4] or Puterman [16].

Let G = (V, E, c), where c : E → R, be a weighted directed graph. We assume that each vertex has a unique serial number associated with it. We also assume, without loss of generality, that each vertex v ∈ V has at least one outgoing edge. If C = v_0 v_1 ⋯ v_{k−1} v_0 is a cycle in G, we let

  val(C) = (1/k) Σ_{i=0}^{k−1} c(v_i, v_{i+1}),

where v_k = v_0, be its mean cost. The vertex on C with the smallest serial number is said to be the head of the cycle. Our goal is to find a cycle C that minimizes val(C).

A policy π is a mapping π : V → V such that (v, π(v)) ∈ E, for every v ∈ V. A policy π defines a subgraph G_π = (V, E_π), where E_π = {(v, π(v)) | v ∈ V}. As the out-degree of each vertex in G_π is 1, we get that G_π is composed of a collection of disjoint directed cycles with directed paths leading into them.

Given a policy π, we assign to each vertex v_0 ∈ V a value val_π(v_0) and a potential pot_π(v_0) in the following way. Let P_π(v_0) = v_0 v_1 ⋯ be the infinite path defined by v_i = π(v_{i−1}), for i > 0. This infinite path is composed of a finite path P leading to a cycle C which is repeated indefinitely. If v_r = v_{r+k} is the first vertex visited for the second time, then P = v_0 v_1 ⋯ v_r and C = v_r v_{r+1} ⋯ v_{r+k}. We let v_l be the head of the cycle C. We now define

  val_π(v_0) = val(C) = (1/k) Σ_{i=0}^{k−1} c(v_{r+i}, v_{r+i+1}),
  pot_π(v_0) = Σ_{i=0}^{l−1} ( c(v_i, v_{i+1}) − val(C) ).

In other words, val_π(v_0) is the mean cost of C, the cycle into which P_π(v_0) is absorbed, while pot_π(v_0) is the distance from v_0 to v_l, the head of this cycle, when the mean cost of the cycle is subtracted from the cost of each edge.
It is easy to check that values and potentials satisfy the following equations:

  val_π(v) = val_π(π(v)),
  pot_π(v) = c(v, π(v)) − val_π(v) + pot_π(π(v)).

The appraisal of an edge (u, v) ∈ E is defined as the pair:

  A_π(u, v) = ( val_π(v), c(u, v) − val_π(v) + pot_π(v) ).

Howard's algorithm starts with an arbitrary policy π_0 and keeps improving it. If π is the current policy, then the next policy π′ produced by the algorithm is defined by

  π′(u) = argmin_{v : (u,v) ∈ E} A_π(u, v).

In other words, for every vertex the algorithm selects the outgoing edge with the lowest appraisal. (In case of ties, the algorithm favors edges in the current policy.) As appraisals are pairs, they are compared lexicographically, i.e., (u, v_1) is better
than (u, v_2) if and only if A_π(u, v_1) ≺ A_π(u, v_2), where (x_1, y_1) ≺ (x_2, y_2) if and only if x_1 < x_2, or x_1 = x_2 and y_1 < y_2. When π′ = π, the algorithm stops.

The correctness of the algorithm follows from the following two lemmas, whose proofs can be found in Howard [11], Derman [4] and Puterman [16].

Lemma 1. Suppose that π′ is obtained from π by a policy improvement step. Then, for every v ∈ V we have (val_{π′}(v), pot_{π′}(v)) ⪯ (val_π(v), pot_π(v)). Furthermore, if π′(v) ≠ π(v), then (val_{π′}(v), pot_{π′}(v)) ≺ (val_π(v), pot_π(v)).

Lemma 2. If a policy π is not modified by an improvement step, then val_π(v) is the minimum mean weight of a cycle reachable from v in G. Furthermore, by following the edges of π from v we get into a cycle of this minimum mean weight.

Each iteration of Howard's algorithm takes only O(m) time and is not much more complicated than an iteration of the Bellman-Ford algorithm.

3 A quadratic lower bound

We next construct a family of weighted directed graphs for which the number of iterations performed by Howard's algorithm is quadratic in the number of vertices. More precisely, we prove the following theorem:

Theorem 1. Let n and m be even integers with 2n ≤ m ≤ n^2/4 + 3n/2. There exists a weighted directed graph with n vertices and m edges on which Howard's algorithm performs m − n + 1 iterations.

All policies generated by Howard's algorithm, when run on the instances of Theorem 1, contain a single cycle, and hence all vertices have the same value. Edges are therefore selected for inclusion in the improved policies based on potentials. (Recall that potentials are essentially adjusted distances.) The main idea behind our construction, which we refer to as the dancing cycles construction, is the use of cycles of very large costs, which makes long paths, i.e., paths containing many edges, to the cycle of the current policy attractive, delaying the discovery of better cycles.
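Before turning to the construction, the specialization of Howard's algorithm described in Section 2 can be sketched in a few lines of Python. This is our own illustrative rendering (the dictionary-based representation and function names are ours): values and potentials are computed naively by walking each vertex's path into its absorbing cycle, the head is the cycle vertex with the smallest serial number (here, label), and ties favor the current policy edge, as required.

```python
def values_and_potentials(adj, cost, policy):
    """val/pot of every vertex under a policy (naive O(n^2) walks)."""
    val, pot = {}, {}
    for v0 in adj:
        # Walk until a vertex repeats; the repeated suffix is the cycle.
        seen, path, v = {}, [], v0
        while v not in seen:
            seen[v] = len(path); path.append(v); v = policy[v]
        cyc = path[seen[v]:]                      # the absorbing cycle
        mean = sum(cost[u, policy[u]] for u in cyc) / len(cyc)
        head = min(cyc)                           # smallest serial number
        p, u = 0.0, v0
        while u != head:                          # adjusted distance to head
            p += cost[u, policy[u]] - mean; u = policy[u]
        val[v0], pot[v0] = mean, p
    return val, pot

def howard_mmcc(adj, cost, policy):
    """Howard's policy iteration, specialized to deterministic MDPs."""
    iterations = 0
    while True:
        iterations += 1
        val, pot = values_and_potentials(adj, cost, policy)
        new = {}
        for u in adj:
            best = policy[u]                      # ties favor current edge
            for v in adj[u]:
                a = (val[v], cost[u, v] - val[v] + pot[v])
                b = (val[best], cost[u, best] - val[best] + pot[best])
                if a < b:
                    best = v
            new[u] = best
        if new == policy:
            return min(val.values()), iterations
        policy = new
```

On a small example with a 2-cycle of mean 2 and a 2-cycle of mean 1, the algorithm abandons the worse cycle after one improvement step and reports the minimum mean.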
Given a graph and a sequence of policies, it is possible to check, by solving an appropriate linear program, whether there exist costs for which Howard's algorithm generates the given sequence of policies. Experiments with a program that implements this idea helped us obtain the construction presented below.

For simplicity, we first prove Theorem 1 for the largest possible m, i.e., m = n^2/4 + 3n/2. We later note that the same construction works when pairs of edges are removed, which gives the full statement of the theorem.

3.1 The Construction

For every n, we construct a weighted directed graph G_n = (V, E, c), on |V| = 2n vertices and |E| = n^2 + 3n edges, and an initial policy π_0, such that Howard's algorithm performs n^2 + n + 1 iterations on G_n when it starts with π_0.
The graph G_n itself is fairly simple. (See Figure 1.) Most of the intricacy goes into the definition of the cost function c : E → R. The graph G_n is composed of two symmetric parts. To highlight the symmetry we let V = {v_n^0, …, v_1^0, v_1^1, …, v_n^1}. Note that the set of vertices is split in two, according to whether the superscript is 0 or 1. In order to simplify notation when dealing with vertices with different superscripts, we sometimes refer to v_1^0 as v_0^1 and to v_1^1 as v_0^0. The set of edges is:

  E = { (v_i^0, v_j^0), (v_i^1, v_j^1) | 1 ≤ i ≤ n, i−1 ≤ j ≤ n }.

We next describe a sequence of policies Π_n of length n^2 + n + 1. We then construct a cost function that causes Howard's algorithm to generate this long sequence of policies. For 1 ≤ l ≤ r ≤ n and s ∈ {0, 1}, and for l = 1, r = 0 and s = 0, we define a policy π_{l,r}^s:

  π_{l,r}^s(v_i^t) = v_{i−1}^t   for t ≠ s or i > l,
  π_{l,r}^s(v_i^t) = v_r^t       for t = s and i = l,
  π_{l,r}^s(v_i^t) = v_n^t       for t = s and i < l.

The policy π_{l,r}^s contains a single cycle v_r^s v_{r−1}^s ⋯ v_l^s v_r^s, which is determined by its defining edge e_{l,r}^s = (v_l^s, v_r^s). As shown in Figure 1, all vertices to the left of v_l^s, the head of the cycle, choose an edge leading furthest to the right, while all the vertices to the right of v_l^s choose an edge leading furthest to the left.

The sequence Π_n is composed of the policies π_{l,r}^s, where 1 ≤ l ≤ r ≤ n and s ∈ {0, 1}, or l = 1, r = 0 and s = 0, in the following order. Policy π_{l_1,r_1}^{s_1} precedes policy π_{l_2,r_2}^{s_2} in Π_n if and only if l_1 > l_2, or l_1 = l_2 and r_1 > r_2, or l_1 = l_2 and r_1 = r_2 and s_1 < s_2. (Note that this is a reversed lexicographical ordering of the triplets (l_1, r_1, 1 − s_1) and (l_2, r_2, 1 − s_2).) For every 1 ≤ l ≤ r ≤ n and s ∈ {0, 1}, or l = 1, r = 0 and s = 0, we let f(l, r, s) be the index of π_{l,r}^s in Π_n, where indices start at 0.
We can now write:

  Π_n = (π_k)_{k=0}^{n^2+n} = ( ( ( ( π_{n−l, n−r}^s )_{s=0}^{1} )_{r=0}^{l} )_{l=0}^{n−1}, π_{1,0}^0 ).

We refer to Figure 1 for an illustration of G_4 and the corresponding sequence Π_4.

3.2 The edge costs

Recall that each policy π_k = π_{l,r}^s is determined by an edge e_k = e_{l,r}^s = (v_l^s, v_r^s), where k = f(l, r, s). Let N = n^2 + n. We assign the edges the following exponential costs:

  c(e_k) = c(v_l^s, v_r^s) = n^{N−k},  for 0 ≤ k < N,
  c(v_1^0, v_1^1) = c(v_1^1, v_1^0) = −n^N,
  c(v_i^s, v_{i−1}^s) = 0,  for 2 ≤ i ≤ n and s ∈ {0, 1}.
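The construction can be generated mechanically. The following sketch is our own (the vertex encoding (s, i) and the dictionary representation are our choices, not the paper's): it enumerates the defining edges in the reversed lexicographic order described above, which yields both the index f(l, r, s) and the exponential costs.

```python
def build_gn(n):
    """Build the dancing-cycles instance G_n: 2n vertices, n^2 + 3n edges.

    Vertices are pairs (s, i) with s in {0, 1} and 1 <= i <= n; the
    vertex (s, 0) of the text is identified with (1 - s, 1).
    Returns (cost, f, N), where cost maps directed edges to costs and
    f maps (l, r, s) to the policy index k = f(l, r, s)."""
    N = n * n + n
    # Defining edges (v_l^s, v_r^s), in the order of the policy sequence:
    # l descending, then r descending, then s = 0 before s = 1.
    order = []
    for l in range(n, 0, -1):
        for r in range(n, l - 1, -1):
            for s in (0, 1):
                order.append((l, r, s))
    f = {t: k for k, t in enumerate(order)}          # f(l, r, s) = k < N
    cost = {}
    for (l, r, s), k in f.items():
        cost[((s, l), (s, r))] = n ** (N - k)        # defining edges
    for s in (0, 1):
        cost[((s, 1), (1 - s, 1))] = -n ** N         # crossing edges
        for i in range(2, n + 1):
            cost[((s, i), (s, i - 1))] = 0           # zero-cost "down" edges
    return cost, f, N

cost, f, N = build_gn(4)   # |V| = 8, |E| = 28, N = 20
```

For n = 4 this reproduces the indices shown in Figure 1, e.g. f(1, 4, 0) = 12 for policy π_{1,4}^0.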
[Figure 1 appears here.]

Fig. 1. G_4 and the corresponding sequence Π_4. Π_4 is shown in left-to-right order. Policies π_{f(l,r,s)} = π_{l,r}^s are shown in bold, with e_{l,r}^s being highlighted. Numbers below edges define costs: 0 means 0, k > 0 means n^k, and M means −n^N.
We claim that with these exponential edge costs Howard's algorithm does indeed produce the sequence Π_n. To show that π_{k+1} is indeed the policy that Howard's algorithm obtains by improving π_k, we have to show that

  π_{k+1}(u) = argmin_{v : (u,v) ∈ E} A_{π_k}(u, v),  for every u ∈ V.  (1)

For brevity, we let c(v_i^s, v_j^s) = c_{i,j}^s. The only cycle in π_k = π_{l,r}^s is C_{l,r}^s = v_r^s v_{r−1}^s ⋯ v_l^s v_r^s. As c_{i,i−1}^s = 0, for 2 ≤ i ≤ n and s ∈ {0, 1}, we have

  μ_{l,r}^s = val(C_{l,r}^s) = c_{l,r}^s / (r − l + 1).

As c_{l,r}^s = n^{N−k}, and all cycles in our construction are of size at most n, we have n^{N−k−1} ≤ μ_{l,r}^s ≤ n^{N−k}.

As all vertices have the same value μ_{l,r}^s under π_k = π_{l,r}^s, edges are compared based on the second components of their appraisals A_{π_k}(u, v). Hence, (1) becomes:

  π_{k+1}(u) = argmin_{v : (u,v) ∈ E} ( c(u, v) + pot_{π_k}(v) ),  for every u ∈ V.  (2)

Note that an edge (u, v_1) is preferred over (u, v_2) if and only if c(u, v_1) − c(u, v_2) < pot_{π_k}(v_2) − pot_{π_k}(v_1).

Let v_l^s be the head of the cycle C_{l,r}^s. Keeping in mind that c_{i,i−1}^s = 0, for 2 ≤ i ≤ n and s ∈ {0, 1}, it is not difficult to see that the potentials of the vertices under the policy π_{l,r}^s, abbreviating μ = μ_{l,r}^s, are given by the following expression:

  pot_{π_{l,r}^s}(v_i^t) =
    c_{i,n}^s − (n − l + 1) μ,                   if t = s and i < l,
    −(i − l) μ,                                  if t = s and i ≥ l,
    c_{1,0}^t − i μ + pot_{π_{l,r}^s}(v_1^s),    if t ≠ s.

It is convenient to note that we have pot_{π_{l,r}^s}(v_i^t) ≤ 0, for every 1 ≤ l ≤ r ≤ n, 1 ≤ i ≤ n and s, t ∈ {0, 1}. In the first case (t = s and i < l), this follows from the fact that (v_i^s, v_n^s) has a larger index than (v_l^s, v_r^s) (note that i < l). In the third case (t ≠ s), it follows from the fact that c_{1,0}^t = −n^N < 0.

3.3 The dynamics

The proof that Howard's algorithm produces the sequence Π_n is composed of three main cases, shown in Figure 2. Each case is broken into several subcases. Each subcase on its own is fairly simple and intuitive.

Case 1. Suppose that π_k = π_{l,r}^0. We need to show that π_{k+1} = π_{l,r}^1. We have the following subcases, shown at the top of Figure 2.
[Figure 2 appears here.]

Fig. 2. Policies of the transitions (1) π_{l,r}^0 to π_{l,r}^1, (2) π_{l,r}^1 to π_{l,r−1}^0, and (3) π_{l,l}^1 to π_{l−1,n}^0. Vertices of the corresponding subcases have been annotated accordingly.

Case 1.1. We show that π_{k+1}(v_i^0) = v_{i−1}^0, for 2 ≤ i ≤ n. We have to show that (v_i^0, v_{i−1}^0) beats (v_i^0, v_j^0), for every 2 ≤ i ≤ j ≤ n, or in other words that

  c_{i,i−1}^0 + pot_π(v_{i−1}^0) < c_{i,j}^0 + pot_π(v_j^0),  2 ≤ i ≤ j ≤ n.

Case 1.1.1. Assume that j < l. We then have

  pot_π(v_{i−1}^0) = c_{i−1,n}^0 − (n − l + 1) μ,
  pot_π(v_j^0) = c_{j,n}^0 − (n − l + 1) μ.
Recalling that c_{i,i−1}^0 = 0, the inequality that we have to show becomes

  c_{i−1,n}^0 < c_{i,j}^0 + c_{j,n}^0,  2 ≤ i ≤ j < l ≤ n.

As the edge (v_{i−1}^0, v_n^0) comes after (v_i^0, v_j^0) in our ordering, we have c_{i−1,n}^0 < c_{i,j}^0. The other term on the right is non-negative and the inequality follows easily.

Case 1.1.2. Assume that i − 1 < l ≤ j. We then have

  pot_π(v_{i−1}^0) = c_{i−1,n}^0 − (n − l + 1) μ,
  pot_π(v_j^0) = −(j − l) μ,

and the required inequality becomes

  c_{i−1,n}^0 < c_{i,j}^0 + (n − j + 1) μ,  1 ≤ i − 1 < l ≤ j ≤ n.

As j ≤ n, the inequality again follows from the fact that c_{i−1,n}^0 < c_{i,j}^0.

Case 1.1.3. Assume that l ≤ i − 1 < j. We then have pot_π(v_j^0) − pot_π(v_{i−1}^0) = −(j − i + 1) μ, and the required inequality becomes

  (j − i + 1) μ < c_{i,j}^0,  1 ≤ l ≤ i − 1 < j ≤ n.

This inequality holds as (j − i + 1) μ < n c_{l,r}^0 ≤ c_{i,j}^0. The last inequality follows as (v_l^0, v_r^0) appears after (v_i^0, v_j^0) in our ordering. (Note that we are using here, for the first time, the fact that the weights are exponential.)

Case 1.2. We show that π_{k+1}(v_1^0) = v_0^0 (= v_1^1). We have to show that

  c_{1,0}^0 + pot_π(v_1^1) < c_{1,j}^0 + pot_π(v_j^0),  1 ≤ j ≤ n.

This inequality is easy. Note that c_{1,0}^0 = −n^N and pot_π(v_1^1) ≤ 0, while c_{1,j}^0 > 0 and pot_π(v_j^0) > −n^N.

Case 1.3. We show that π_{k+1}(v_i^1) = v_n^1, for 1 ≤ i < l. We have to show that

  c_{i,n}^1 − c_{i,j}^1 < pot_π(v_j^1) − pot_π(v_n^1),  1 ≤ i < l, i − 1 ≤ j < n.

Case 1.3.1. Suppose that i = 1 and j = 0. We need to verify that

  c_{1,n}^1 − c_{1,0}^1 < pot_π(v_0^1) − pot_π(v_n^1).

As pot_π(v_0^1) = pot_π(v_1^0) and pot_π(v_n^1) = c_{1,0}^1 − n μ + pot_π(v_1^0), we have to verify that c_{1,n}^1 < n μ, which follows from the fact that l > 1 and that (v_l^0, v_r^0) has a smaller index than (v_1^1, v_n^1).

Case 1.3.2. Suppose that j ≥ 1. We have to verify that

  c_{i,n}^1 − c_{i,j}^1 < pot_π(v_j^1) − pot_π(v_n^1) = (n − j) μ,  1 ≤ i < l, i − 1 ≤ j < n.
As in Case 1.3.1 we have c_{i,n}^1 ≤ μ, while c_{i,j}^1 > 0.

Case 1.4. We show that π_{k+1}(v_l^1) = v_r^1. We have to show that

  c_{l,r}^1 − c_{l,j}^1 < pot_π(v_j^1) − pot_π(v_r^1),  l − 1 ≤ j ≤ n, j ≠ r.

Case 1.4.1. Suppose that l = 1 and j = 0. As in Case 1.3.1, the inequality becomes c_{1,r}^1 < r μ, which is easily seen to hold.

Case 1.4.2. Suppose that l − 1 ≤ j < r and 0 < j. We need to show that

  c_{l,r}^1 − c_{l,j}^1 < pot_π(v_j^1) − pot_π(v_r^1) = (r − j) μ,  l − 1 ≤ j < r, 0 < j.

As (v_l^1, v_r^1) immediately follows (v_l^0, v_r^0) in our ordering, we have c_{l,r}^1 = n^{−1} c_{l,r}^0. Thus c_{l,r}^1 ≤ μ ≤ (r − j) μ. As c_{l,j}^1 > 0, the inequality follows.

Case 1.4.3. Suppose that r < j. We need to show that

  c_{l,r}^1 − c_{l,j}^1 < pot_π(v_j^1) − pot_π(v_r^1) = (r − j) μ,  r < j ≤ n,

or equivalently that c_{l,j}^1 − c_{l,r}^1 > (j − r) μ, for r < j ≤ n. This follows from the fact that (v_l^1, v_r^1) comes after (v_l^1, v_j^1) in the ordering and that c_{l,r}^1 > 0.

Case 1.5. We show that π_{k+1}(v_i^1) = v_{i−1}^1, for l < i ≤ n. We have to show that

  c_{i,i−1}^1 − c_{i,j}^1 < pot_π(v_j^1) − pot_π(v_{i−1}^1) = (i − j − 1) μ,  l < i ≤ j ≤ n.

This is identical to Case 1.1.3.

Case 2. Suppose that π_k = π_{l,r}^1 and l < r. We need to show that π_{k+1} = π_{l,r−1}^0. The proof is very similar to Case 1 and is omitted.

Case 3. Suppose that π_k = π_{l,l}^1. We need to show that π_{k+1} = π_{l−1,n}^0. The proof is very similar to Cases 1 and 2 and is omitted.

3.4 Remarks

For any l ≤ r < n, if the edges (v_l^0, v_r^0) and (v_l^1, v_r^1) are removed from G_n, then Howard's algorithm skips π_{l,r}^0 and π_{l,r}^1, but otherwise Π_n remains the same. This can be repeated any number of times, essentially without modifying the proof given in Section 3.3, thus giving us the statement of Theorem 1. Let us also note that the costs presented here have been chosen to simplify the analysis.
It is possible to define smaller costs. Assuming that c_{i,i−1}^s = 0 for s ∈ {0, 1} and 2 ≤ i ≤ n, which can always be enforced using a potential transformation, any costs generating Π_n must, however, satisfy the following subset of the inequalities from Cases 1.4.2 and 2.4.2:

  c_{1,r}^1 − c_{1,r−1}^1 < μ_{1,r}^0 = c_{1,r}^0 / r,  2 ≤ r ≤ n,
  c_{1,r−1}^0 − c_{1,r−2}^0 < μ_{1,r}^1 = c_{1,r}^1 / r,  3 ≤ r ≤ n.
[Figure 3 appears here.]

Fig. 3. G_5 and the corresponding sequence of policies.

Let w_{2r−s} = c_{1,r}^s. Joining the inequalities we get:

  w_k > (k/2) (w_{k−1} − w_{k−3}),  4 ≤ k ≤ 2n.

It is then easy to see that for integral costs the size of the costs must be exponential in n.

4 A 2n − O(1) lower bound for out-degree-2 graphs

In this section we briefly mention a construction of a sequence of out-degree-2 DMDPs on which the number of iterations of Howard's algorithm is only two less than the total number of edges.

Theorem 2. For every n ≥ 3 there exists a weighted directed graph G_n = (V, E, c), where c : E → R, with |V| = 2n + 1 and |E| = 2|V|, on which Howard's algorithm performs |E| − 2 = 4n iterations.

The graph used in the proof of Theorem 2 is simply a bidirected cycle on 2n + 1 vertices. (The graph G_5 is depicted at the top of Figure 3.) The proof of Theorem 2 can be found in Appendix A.

5 Concluding remarks

We presented a quadratic lower bound on the number of iterations performed by Howard's algorithm for finding Minimum Mean-Cost Cycles (MMCCs). Our
lower bound is quadratic in the number of vertices, but is only linear in the number of edges. We conjecture that this is best possible:

Conjecture 1. The number of iterations performed by Howard's algorithm, when applied to a weighted directed graph, is at most the number of edges in the graph.

Proving (or disproving) our conjecture is a major open problem. Our lower bounds shed some light on the non-trivial behavior of Howard's algorithm, even on DMDPs, and expose some of the difficulties that need to be overcome to obtain non-trivial upper bounds on its complexity. Our lower bounds on the complexity of Howard's algorithm do not undermine the usefulness of Howard's algorithm, as the instances used in our quadratic lower bound are very unlikely to appear in practice.

Acknowledgement

We would like to thank Omid Madani for sending us his example [13], and Mike Paterson for helping us obtain the results of Section 4. We would also like to thank Daniel Andersson and Peter Bro Miltersen, as well as Omid Madani and Mike Paterson, for helpful discussions on policy iteration algorithms.

References

1. R.E. Bellman. Dynamic programming. Princeton University Press, 1957.
2. R.E. Bellman. On a routing problem. Quarterly of Applied Mathematics, 16:87–90, 1958.
3. A. Dasdan. Experimental analysis of the fastest optimum cycle ratio and mean algorithms. ACM Trans. Des. Autom. Electron. Syst., 9(4):385–418, 2004.
4. C. Derman. Finite state Markov decision processes. Academic Press, 1970.
5. J. Fearnley. Exponential lower bounds for policy iteration. In Proc. of 37th ICALP, 2010. Preliminary version available online.
6. L.R. Ford, Jr. and D.R. Fulkerson. Maximal flow through a network. Canadian Journal of Mathematics, 8:399–404, 1956.
7. O. Friedmann. An exponential lower bound for the parity game strategy improvement algorithm as we know it. In Proc. of 24th LICS, pages 145–156, 2009.
8. L. Georgiadis, A.V. Goldberg, R.E. Tarjan, and R.F.F. Werneck. An experimental study of minimum mean cycle algorithms. In Proc.
of 11th ALENEX, pages 1–13, 2009.
9. A.V. Goldberg and R.E. Tarjan. Finding minimum-cost circulations by canceling negative cycles. Journal of the ACM, 36(4):873–886, 1989.
10. T.D. Hansen, P.B. Miltersen, and U. Zwick. Strategy iteration is strongly polynomial for 2-player turn-based stochastic games with a constant discount factor. CoRR, abs/1008.0530, 2010.
11. R.A. Howard. Dynamic programming and Markov processes. MIT Press, 1960.
12. R.M. Karp. A characterization of the minimum cycle mean in a digraph. Discrete Mathematics, 23(3):309–311, 1978.
13. O. Madani. Personal communication.
14. N. Megiddo. Combinatorial optimization with rational objective functions. Mathematics of Operations Research, 4(4):414–424, 1979.
15. N. Megiddo. Applying parallel computation algorithms in the design of serial algorithms. Journal of the ACM, 30(4):852–865, 1983.
16. M.L. Puterman. Markov decision processes. Wiley, 1994.
17. Y. Ye. The simplex method is strongly polynomial for the Markov decision problem with a fixed discount rate. Available at yyye/simplexmdp1.pdf, 2010.
18. N.E. Young, R.E. Tarjan, and J.B. Orlin. Faster parametric shortest path and minimum-balance algorithms. Networks, 21:205–221, 1991.
APPENDIX A

A 2n − O(1) lower bound for out-degree-2 graphs

A.1 The construction

Let V = {v_1, …, v_{2n+1}}. We let v_{2n+2} = v_1 and v_0 = v_{2n+1}. Let E = {(v_i, v_{i−1}), (v_i, v_{i+1}) | 1 ≤ i ≤ 2n+1}. The graph G_n is simply a bidirected cycle on 2n + 1 vertices. (The graph G_5 is depicted at the top of Figure 3.) We define the costs, for n ≥ 3, as follows:

  i               | 1 | 2 ≤ i ≤ n−2 | n−1 | n | n+1 | n+2 | n+3 ≤ i ≤ 2n | 2n+1
  c(v_i, v_{i−1}) | 1 | 2(n−i+1)    | 3   | 1 | 4   | 2   | 2(i−n)−1     | 0
  c(v_i, v_{i+1}) | 0 | 0           | 0   | 0 | 0   | 0   | 0            | Σ_{k=n}^{2n} c(v_k, v_{k−1})

We describe a sequence of policies Π_n = (π_j)_{j=1}^{4n} of length 4n generated by running Howard's algorithm on G_n, starting with π_1. Let us first introduce some notation. We describe a policy π as a sequence of indices [i_1, …, i_k], such that for every 1 ≤ j ≤ 2n+1, if i_{l−1} ≤ j < i_l, where l is odd, then π(v_j) = v_{j+1}. Otherwise, π(v_j) = v_{j−1}. We always assume that i_0 = 1 and i_{k+1} = 2n+2. In other words, for every j ∈ [i_0, i_1) ∪ [i_2, i_3) ∪ ⋯ we have π(v_j) = v_{j+1}, while for every j ∈ [i_1, i_2) ∪ [i_3, i_4) ∪ ⋯ we have π(v_j) = v_{j−1}.

The sequence Π_n is composed of four subsequences Π_n^1, Π_n^2, Π_n^3, Π_n^4 that we call stages:

  Π_n^1 = [1, 2n+1], ( [2n−k+1], [k+1, 2n] )_{k=1}^{n−2}
  Π_n^2 = [n, n+1, n+2], [n, 2n], [n]
  Π_n^3 = ( [1, k+1, n, 2n−k, 2n+1] )_{k=0}^{n−2}
  Π_n^4 = ( [1, n+2−k, 2n+1] )_{k=1}^{n+1}

Thus, for example:

  Π_n^1 = [1, 2n+1], [2n], [2, 2n], [2n−1], [3, 2n], …, [n+3], [n−1, 2n].

Doing a case analysis, similar to that of Section 3.3, it is possible to prove that the sequence Π_n is indeed the sequence of policies generated by running Howard's algorithm on G_n, starting at [1, 2n+1]. The proof can be found in Appendix A.2. An alternative pictorial description of the sequence Π_n, for n = 15, is given in Figure 4 of the appendix.
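The bracket notation for policies can be decoded mechanically. The following helper is our own sketch (not from the paper), directly implementing the interval rule above for the bidirected cycle, with the wraparound conventions v_0 = v_{2n+1} and v_{2n+2} = v_1:

```python
def decode_policy(n, indices):
    """Decode the bracket notation [i_1, ..., i_k] into a policy on the
    bidirected cycle G_n with vertices v_1 .. v_{2n+1}.

    Returns a dict mapping each j to the index of pi(v_j)."""
    m = 2 * n + 1
    bounds = [1] + list(indices) + [m + 1]      # i_0 = 1, i_{k+1} = 2n+2
    pi = {}
    for l in range(1, len(bounds)):             # interval [i_{l-1}, i_l)
        step = 1 if l % 2 == 1 else -1          # odd intervals point right
        for j in range(bounds[l - 1], bounds[l]):
            t = j + step
            pi[j] = m if t == 0 else (1 if t == m + 1 else t)
    return pi

pi1 = decode_policy(5, [1, 11])   # the initial policy of Stage 1 for n = 5
```

For n = 5, the initial policy [1, 2n+1] = [1, 11] sends v_1 to v_0 = v_11 and v_11 to v_12 = v_1, so its single cycle is the 2-cycle through v_1 and v_{2n+1}, as described below.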
[Figure 4 appears here.]

Fig. 4. Numerical representation of the sequence Π_15. Vertices are ordered as in Figure 3. 1 means left and 2 means right. Background colors have been added to roughly highlight the structure of the sequence. The lines separate the different stages.
A.2 Correctness of the construction

Before proving Theorem 2, we first describe a bit of the intuition behind the construction. An alternative pictorial description of the sequence Π_n, for n = 15, is given in Figure 4. Consulting both Figure 3 and Figure 4, we see that during Stage 1 the central vertices switch their actions back and forth. All the policies in this stage contain a single cycle, but this cycle jumps around from side to side. At the end of Stage 1, the cycle is almost at the middle. The first policy of Stage 2 contains two cycles. The third and last policy of Stage 2 again contains a single cycle. All policies of Stage 3 contain two cycles, a middle cycle and a right cycle. The middle cycle has a larger value. The right cycle is the minimum mean-cost cycle of the graph. The vertices to the left of the middle cycle are fooled into going left, making them switch back during Stage 4 for the cheaper path to the final cycle.

We also note that by adding a self-loop to every vertex and modifying the sequence, we have managed to find costs that realize sequences of length 3n − 5 for a graph with n states, n being odd, and 3n edges.

In order to prove that Π_n is indeed generated by Howard's algorithm, we must show that for every 1 ≤ j ≤ 4n − 1 and every 1 ≤ i ≤ 2n + 1:

  π_{j+1}(v_i) = argmin_{v_k : k ∈ {i−1, i+1}} ( val_{π_j}(v_k), c(v_i, v_k) + pot_{π_j}(v_k) ),

where the argmin is applied lexicographically, such that value weighs higher than potential. If π_{j+1}(v_i) = v_{i+t_i}, with t_i ∈ {−1, 1}, we can, requiring strict inequality, also express this as: for every 1 ≤ j ≤ 4n − 1 and 1 ≤ i ≤ 2n + 1,

  val_{π_j}(v_{i+t_i}) < val_{π_j}(v_{i−t_i}),  or
  val_{π_j}(v_{i+t_i}) = val_{π_j}(v_{i−t_i}) and c(v_i, v_{i+t_i}) + pot_{π_j}(v_{i+t_i}) < c(v_i, v_{i−t_i}) + pot_{π_j}(v_{i−t_i}).

We will argue about the four stages separately. One fact that we will be using repeatedly is the following:

Fact. Let π be the current policy, and let π′ be the policy generated from π in one step of Howard's policy iteration algorithm.
If for some 1 ≤ i ≤ 2n + 1, π(v_{i−1}) = v_i and π(v_i) = v_{i+1}, then:

  π′(v_i) = v_{i−1}  ⟺  (c(v_i, v_{i−1}) + c(v_{i−1}, v_i)) / 2 < val_π(v_i).

Similarly, for π(v_i) = v_{i−1} and π(v_{i+1}) = v_i we get:

  π′(v_i) = v_{i+1}  ⟺  (c(v_i, v_{i+1}) + c(v_{i+1}, v_i)) / 2 < val_π(v_i).

To see this, observe that the potential of v_{i−1} in the first case can be expressed as:

  pot_π(v_{i−1}) = c(v_{i−1}, v_i) − val_π(v_i) + pot_π(v_i)
                 = c(v_{i−1}, v_i) + c(v_i, v_{i+1}) − 2 val_π(v_i) + pot_π(v_{i+1}).
Since v_{i−1} and v_{i+1} have the same value in π, we see that π′(v_i) = v_{i−1} if and only if:

  c(v_i, v_{i−1}) + pot_π(v_{i−1}) < c(v_i, v_{i+1}) + pot_π(v_{i+1})
  ⟺ c(v_i, v_{i−1}) + c(v_{i−1}, v_i) + c(v_i, v_{i+1}) − 2 val_π(v_i) + pot_π(v_{i+1}) < c(v_i, v_{i+1}) + pot_π(v_{i+1}),

from which the result follows. The second case is shown similarly.

Let val(C_i) = (c(v_i, v_{i−1}) + c(v_{i−1}, v_i)) / 2. We note that for all i = 1, …, n − 2 we have:

  val(C_i) > val(C_{2n+1−i}) > val(C_{i+1})

and

  val(C_{n+3}) > val(C_{n+1}) > val(C_{n−1}) > val(C_{n+2}) > val(C_n) > val(C_{2n+1}),

which specifies the order, according to value, of all cycles of length two in G_n. We will say that a vertex v ∈ V switches from π to π′ if π(v) ≠ π′(v).

A.3 Stage 1

Let us first note that in all of Stage 1 there is only one cycle in every policy, meaning that all changes are based solely on differences in potential.

Case 1, [1, 2n+1] to [2n]: Since the transition from π_1 = [1, 2n+1] to π_2 = [2n] is a bit different from the rest, we handle it separately. Let us first note that by the Fact all vertices v_1, …, v_{2n−1} switch from π_1 to π_2. We must show that v_{2n+1} also switches, whereas v_{2n} does not. Let us first describe the potentials of v_1, v_{2n−1} and v_{2n} in terms of the potential of v_{2n+1} = v_0:

  pot_{π_1}(v_1) = c(v_1, v_0) − val_{π_1}(v_0) + pot_{π_1}(v_0),
  pot_{π_1}(v_{2n−1}) = Σ_{i=1}^{2n−1} c(v_i, v_{i−1}) − (2n−1) val_{π_1}(v_0) + pot_{π_1}(v_0),
  pot_{π_1}(v_{2n}) = Σ_{i=1}^{2n} c(v_i, v_{i−1}) − 2n val_{π_1}(v_0) + pot_{π_1}(v_0).

To see that v_{2n} does not switch, we observe that:

  c(v_{2n}, v_{2n−1}) + pot_{π_1}(v_{2n−1}) < c(v_{2n}, v_{2n+1}) + pot_{π_1}(v_0)
  ⟺ c(v_{2n}, v_{2n−1}) + Σ_{i=1}^{2n−1} c(v_i, v_{i−1}) − (2n−1) val_{π_1}(v_0) + pot_{π_1}(v_0) < pot_{π_1}(v_0)
  ⟺ Σ_{i=1}^{2n} c(v_i, v_{i−1}) < (2n−1) · ( 1 + Σ_{i=n}^{2n} c(v_i, v_{i−1}) ) / 2.
The last inequality is easily satisfied since Σ_{i=1}^{n−1} c(v_i, v_{i−1}) < Σ_{i=n}^{2n} c(v_i, v_{i−1}). Similarly, to see that v_{2n+1} does switch, we observe that:

  c(v_{2n+1}, v_{2n}) + pot_{π_1}(v_{2n}) < c(v_{2n+1}, v_1) + pot_{π_1}(v_1)
  ⟺ pot_{π_1}(v_{2n}) < pot_{π_1}(v_1) + Σ_{i=n}^{2n} c(v_i, v_{i−1})
  ⟺ Σ_{i=1}^{2n} c(v_i, v_{i−1}) − 2n val_{π_1}(v_0) < c(v_1, v_0) − val_{π_1}(v_0) + Σ_{i=n}^{2n} c(v_i, v_{i−1})
  ⟺ Σ_{i=2}^{n−1} c(v_i, v_{i−1}) < (2n−1) · ( 1 + Σ_{i=n}^{2n} c(v_i, v_{i−1}) ) / 2,

which is again satisfied. Hence, we have shown that the transition from π_1 to π_2 is correct.

Case 2, [2n−k+1] to [k+1, 2n]: For 1 ≤ k ≤ n−2, let π = [2n−k+1]; we show that the next policy π′ will, indeed, be [k+1, 2n]. For all 2 ≤ i ≤ 2n, v_i switches correctly due to the Fact. That is, the v_i with i ≤ k or 2n−k+1 ≤ i ≤ 2n−1 do not switch, whereas the remaining vertices do switch, since the values of the cycles C_k, …, C_{2n−k} and C_{2n+1} are lower than that of C_{2n−k+1}. Thus, it remains to show that v_1 does not switch and that v_{2n+1} does switch, which is similar to the analysis of the transition from π_1 to π_2 in Case 1.

Let us first show that v_1 does not switch. Rather than defining all the relevant potentials in terms of the potential of v_{2n−k+1}, we simply note that going to v_2 ensures both a longer and a cheaper path to v_{2n−k+1} than going to v_0, and since val(C_{2n−k+1}) > 0 this means that v_2 is the preferred choice.

To show that v_{2n+1} switches we need to be a bit more careful. Let us define the potentials of v_1 and v_{2n} in terms of the potential of v_{2n−k+1}:

  pot_π(v_1) = pot_π(v_{2n−k+1}) − (2n−k) val(C_{2n−k+1}),
  pot_π(v_{2n}) = pot_π(v_{2n−k+1}) + Σ_{i=2n−k+2}^{2n} c(v_i, v_{i−1}) − (k−1) val(C_{2n−k+1}).

Now, v_{2n+1} switches if and only if:

  c(v_{2n+1}, v_1) + pot_π(v_1) < c(v_{2n+1}, v_{2n}) + pot_π(v_{2n})
  ⟺ c(v_{2n+1}, v_1) − (2n−k) val(C_{2n−k+1}) < Σ_{i=2n−k+2}^{2n} c(v_i, v_{i−1}) − (k−1) val(C_{2n−k+1})
  ⟺ Σ_{i=n}^{2n} c(v_i, v_{i−1}) < Σ_{i=2n−k+2}^{2n} c(v_i, v_{i−1}) + (2n−2k+1) val(C_{2n−k+1}).
  <=> Σ_{i=n}^{2n-k+1} c(v_i, v_{i-1}) = 7 + Σ_{i=n+3}^{2n-k+1} c(v_i, v_{i-1}) < 7 + (n-k-1)·2·val(C_{2n-k+1}) < (2n-2k+1)·val(C_{2n-k+1}),

which is always satisfied.

Case 3, [k+1, 2n] to [2n-k]: For 1 ≤ k ≤ n-3, let π = [k+1, 2n]; we will show that the next policy π' will, indeed, be [2n-k]. We proceed very similarly to case 2. We first observe that all vertices but v_{2n-1} and v_{2n} switch correctly due to Fact. Showing that v_{2n-1} does not switch is done in the same way as showing that v_{2n} does not switch in case 1. We first express the potentials of v_{2n-2} and v_{2n} in terms of the potential of v_{k+1}:

  pot_π(v_{2n-2}) = Σ_{i=k+2}^{2n-2} c(v_i, v_{i-1}) - (2n-k-3)·val(C_{k+1}) + pot_π(v_{k+1})
  pot_π(v_{2n}) = c(v_{2n+1}, v_1) - (k+2)·val(C_{k+1}) + pot_π(v_{k+1})

We then observe that v_{2n-1} does not switch if and only if:

  c(v_{2n-1}, v_{2n-2}) + pot_π(v_{2n-2}) < c(v_{2n-1}, v_{2n}) + pot_π(v_{2n})
  <=> c(v_{2n-1}, v_{2n-2}) + Σ_{i=k+2}^{2n-2} c(v_i, v_{i-1}) - (2n-k-3)·val(C_{k+1}) < c(v_{2n+1}, v_1) - (k+2)·val(C_{k+1})
  <=> Σ_{i=k+2}^{2n-1} c(v_i, v_{i-1}) < Σ_{i=n}^{2n} c(v_i, v_{i-1}) + (2n-2k-5)·val(C_{k+1})
  <=> Σ_{i=k+2}^{n-1} c(v_i, v_{i-1}) < (n-k-2)·2·val(C_{k+2}) < c(v_{2n}, v_{2n-1}) + (2n-2k-5)·val(C_{k+1}),

which is satisfied since c(v_{2n}, v_{2n-1}) > val(C_{k+1}) and val(C_{k+1}) > val(C_{k+2}). Similarly, to show that v_{2n} switches we first need the potentials of v_{2n-1} and v_{2n+1}:

  pot_π(v_{2n-1}) = Σ_{i=k+2}^{2n-1} c(v_i, v_{i-1}) - (2n-k-2)·val(C_{k+1}) + pot_π(v_{k+1})
  pot_π(v_{2n+1}) = c(v_{2n+1}, v_1) - (k+1)·val(C_{k+1}) + pot_π(v_{k+1})

We observe that v_{2n} switches if and only if:

  c(v_{2n}, v_{2n-1}) + pot_π(v_{2n-1}) < c(v_{2n}, v_{2n+1}) + pot_π(v_{2n+1})
  <=> c(v_{2n}, v_{2n-1}) + Σ_{i=k+2}^{2n-1} c(v_i, v_{i-1}) - (2n-k-2)·val(C_{k+1}) < c(v_{2n+1}, v_1) - (k+1)·val(C_{k+1})
  <=> Σ_{i=k+2}^{2n} c(v_i, v_{i-1}) < Σ_{i=n}^{2n} c(v_i, v_{i-1}) + (2n-2k-3)·val(C_{k+1})
  <=> Σ_{i=k+2}^{n-1} c(v_i, v_{i-1}) < (n-k-2)·2·val(C_{k+2}) < (2n-2k-3)·val(C_{k+1}),

which is satisfied since val(C_{k+1}) > 0 and val(C_{k+1}) > val(C_{k+2}).

Case 4, [n-1, 2n] to [n, n+1, n+2]: In fact, the correctness of the transition from [n-1, 2n] to [n, n+1, n+2] is shown in exactly the same way as in case 3. We just note that since c(v_{n-1}, v_{n-2}) = 3, c(v_n, v_{n-1}) = 1, c(v_{n+1}, v_n) = 4 and c(v_{n+2}, v_{n+1}) = 2, Fact causes two cycles to be formed.

A.4 Stage 2

There are only three policies in stage 2, and we will argue separately for the correctness of the transition from each of these three policies to the next.

In the transition from [n, n+1, n+2] to [n, 2n] the correct behaviour of all vertices but v_1, v_n, v_{n+1} and v_{2n+1} is ensured by Fact. The correct behaviour of the remaining four vertices follows from val(C_n) < val(C_{n+2}). That is, the switches are based on the vertices having different values rather than on differences in potential.

In the transition from [n, 2n] to [n] the correct behaviour of all vertices but v_{2n-1} and v_{2n} is ensured by Fact. Rather than specifying the potentials causing v_{2n} to switch, we note that the two paths going from v_{2n} to v_{n-1} have the same cost, but that the path going through v_{2n-1} is one step longer, which causes the switch. Showing that v_{2n-1} does not switch is essentially the same as showing that v_{2n-1} does not switch in the transition from [n-1, 2n] to [n, n+1, n+2] in case 4 of stage 1, for which the calculations are shown in case 3. The only difference is that the current value is val(C_{k+2}) rather than val(C_{k+1}), but the final inequalities are satisfied all the same.

Let π = [n]. In the transition from [n] to [1, 1, n, 2n, 2n+1] = [n, 2n, 2n+1] the correct behaviour of all vertices but v_1 and v_{2n+1} is ensured by Fact.
v_{2n+1} does not switch because the two paths going from v_{2n+1} to v_{n-1} have the same cost, whereas the path going through v_{2n} is three steps longer. To show that v_1 also does not switch, we will need to consider the potentials of v_2 and v_{2n+1} in terms of the potential of v_{n-1}:

  pot_π(v_2) = -(n-3)·val(C_n) + pot_π(v_{n-1})
  pot_π(v_{2n+1}) = Σ_{i=n}^{2n} c(v_i, v_{i-1}) - (n+2)·val(C_n) + pot_π(v_{n-1})

It follows that v_1 does not switch if and only if:

  c(v_1, v_2) + pot_π(v_2) < c(v_1, v_{2n+1}) + pot_π(v_{2n+1})
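The switching rule analysed throughout this appendix — a vertex v switches to a neighbour u once c(v, u) + pot_π(u) beats the cost-plus-potential of its current edge, with cycle values compared first when they differ — is exactly the improvement step of Howard's algorithm on a deterministic MDP. The following minimal sketch (generic illustration only, not the G_n construction; the function names and the tiny example graph are ours) shows one way such a policy iteration for minimum mean-cost cycles can be implemented:

```python
from fractions import Fraction

def evaluate(policy, cost):
    """Value and potential of every vertex under a fixed policy.

    policy[v] is the single out-edge kept at v.  Following the policy from
    any vertex eventually enters a cycle; val[v] is that cycle's mean cost,
    and pot[v] satisfies pot[v] = cost[v, policy[v]] - val[v] + pot[policy[v]],
    anchored at pot = 0 on one vertex of each cycle.
    """
    val, pot = {}, {}
    for s in policy:
        order, seen = [], {}
        v = s
        while v not in seen and v not in val:
            seen[v] = len(order)
            order.append(v)
            v = policy[v]
        if v in val:                      # reached an already-evaluated part
            tail = order
        else:                             # closed a new cycle: order[seen[v]:]
            cyc = order[seen[v]:]
            mean = Fraction(sum(cost[u, policy[u]] for u in cyc), len(cyc))
            val[cyc[0]], pot[cyc[0]] = mean, Fraction(0)
            for u in reversed(cyc[1:]):   # fix potentials backwards around the cycle
                val[u] = mean
                pot[u] = cost[u, policy[u]] - mean + pot[policy[u]]
            tail = order[:seen[v]]
        for u in reversed(tail):          # propagate along the path into the cycle
            val[u] = val[policy[u]]
            pot[u] = cost[u, policy[u]] - val[u] + pot[policy[u]]
    return val, pot

def improve(policy, succ, cost, val, pot):
    """One improvement step: v switches to u when u is strictly better,
    first by value, then by cost[v, u] + pot[u]; ties keep the old edge."""
    new = {}
    for v, choices in succ.items():
        best = policy[v]
        for u in choices:
            if (val[u], cost[v, u] + pot[u]) < (val[best], cost[v, best] + pot[best]):
                best = u
        new[v] = best
    return new

def howard(succ, cost, policy):
    """Alternate evaluation and improvement until the policy is stable;
    return the minimum cycle mean and the final policy."""
    while True:
        val, pot = evaluate(policy, cost)
        new = improve(policy, succ, cost, val, pot)
        if new == policy:
            return min(val.values()), policy
        policy = new

# Two 2-cycles sharing vertex 1: cycle 0<->1 has mean 1, cycle 1<->2 has mean 3.
succ = {0: [1], 1: [0, 2], 2: [1]}
cost = {(0, 1): 0, (1, 0): 2, (1, 2): 0, (2, 1): 6}
mmc, opt = howard(succ, cost, {0: 1, 1: 2, 2: 1})
```

On this small example a single improvement step switches vertex 1 from the cycle of mean 3 to the optimal cycle 0<->1 of mean 1; the graphs G_n constructed in this paper instead force Ω(n^2) such iterations.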