# RANDOMIZATION IN APPROXIMATION AND ONLINE ALGORITHMS. Manos Thanos Randomized Algorithms NTUA


## Transcription

1 RANDOMIZATION IN APPROXIMATION AND ONLINE ALGORITHMS. Manos Thanos, Randomized Algorithms, NTUA

2 RANDOMIZED ROUNDING Linear Programming problems are problems of optimizing a linear objective function subject to linear equality and linear inequality constraints. If all the unknown variables are required to be integers, the problem is called an Integer Linear Programming (ILP) problem. Linear Programming was proved to be in P by Leonid Khachiyan in 1979. Integer Linear Programming is an NP-hard problem: there is a reduction from 3-SAT to ILP.

3 RANDOMIZED ROUNDING For NP-hard problems we do not expect to design exact algorithms with good running time. Instead we design approximation algorithms for these problems, i.e. algorithms that find a nearly optimal solution. For a minimization problem, we say that an algorithm is a k-approximation when the cost of the solution it finds is at most k times the cost of the optimal solution.

4 RANDOMIZED ROUNDING The technique of randomized rounding: Given an integer linear program, we form its relaxation, i.e. the same linear program without the restriction that the variables be integers. We solve the relaxed linear program to get an optimal solution $OPT_f$; in this solution the decision variables may not be integers. We then use a randomized process to round the fractional values of the decision variables to integers, and within this process we have to achieve a good approximation factor.

5 RANDOMIZED ROUNDING (SET COVER) SET COVER: Given a universe $X$ of $m$ elements $e_j$, a collection of subsets of $X$, $S = \{S_1, \dots, S_n\}$, and a cost function $c : S \to Q^+$, find a minimum-cost subcollection of $S$ that covers all elements of $X$. Integer Linear Program:

minimize $\sum_{i=1}^{n} c(S_i)\, x_i$

subject to $\sum_{i:\, e_j \in S_i} x_i \ge 1$ for $j = 1, 2, \dots, m$

$x_i \in \{0, 1\}$ for $i = 1, 2, \dots, n$ (the relaxation replaces this by $0 \le x_i \le 1$)

6 RANDOMIZED ROUNDING (SET COVER) A natural idea for rounding an optimal fractional solution is to view the fractions as probabilities, flip coins with these biases, and round accordingly. Repeating this process O(log n) times and picking a set if it is chosen in any of the iterations, we get, with high probability, a set cover with expected cost at most O(log n) times the cost of the optimal solution. Theorem: The above algorithm produces a set cover that covers all the elements and has cost at most O(log n)·OPT with probability at least 1/2.
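
The rounding step above can be sketched in a few lines of Python. This is an illustrative sketch, not part of the slides: the instance and its fractional optimum are hard-coded rather than obtained from an LP solver (for this symmetric triangle instance the LP optimum is $x_i = 1/2$ for every set).

```python
import math
import random

def randomized_round_cover(frac, d=4, seed=0):
    """Round a fractional set-cover solution: in each of about d*log(n)
    iterations, pick set i independently with probability frac[i];
    output every set picked in at least one iteration."""
    rng = random.Random(seed)
    n = len(frac)
    iters = max(1, round(d * math.log(max(n, 2))))
    picked = set()
    for _ in range(iters):
        for i in range(n):
            if rng.random() < frac[i]:
                picked.add(i)
    return picked

# Triangle instance: three unit-cost sets, each element in exactly two
# of them; the LP optimum is x_i = 1/2 for every set (fractional cost 3/2).
sets = [{1, 2}, {2, 3}, {1, 3}]
cover = randomized_round_cover([0.5, 0.5, 0.5])
covered = set().union(*(sets[i] for i in cover)) if cover else set()
print(sorted(cover), covered == {1, 2, 3})
```

With probability at least 1/2 the union of the iterations covers all elements; the extreme biases 1.0 and 0.0 behave deterministically.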

7 RANDOMIZED ROUNDING (SET COVER) Proof: Let $x$ be an optimal solution to the linear program. For each set $s \in S$ pick $s$ with probability $x_s$, and let $C$ be the collection of sets picked. The expected cost of $C$ is:

$E[c(C)] = \sum_{s \in S} \Pr[s \text{ is picked}]\, c(s) = \sum_{s \in S} x_s\, c(s) = OPT_f$

Next we compute the probability that an element $e$ is covered by $C$. If $e$ occurs in $k$ sets of $S$ whose probabilities of being picked are $x_1, \dots, x_k$, then since $e$ is fractionally covered, $x_1 + x_2 + \dots + x_k \ge 1$. The probability that $e$ is covered is minimized when $x_1 = x_2 = \dots = x_k = 1/k$. Thus,

$\Pr[e \text{ is covered by } C] \ge 1 - \left(1 - \frac{1}{k}\right)^k \ge 1 - \frac{1}{e}$

8 RANDOMIZED ROUNDING (SET COVER) In our algorithm we independently repeat this experiment $d \log n$ times and output the union $C'$ of the collections picked. We choose $d$ to be a constant such that

$\left(\frac{1}{e}\right)^{d \log n} \le \frac{1}{4n}$

Now,

$\Pr[e \text{ is not covered by } C'] \le \left(\frac{1}{e}\right)^{d \log n} \le \frac{1}{4n}$

Summing over all elements, we get:

$\Pr[\text{we don't have a valid set cover}] \le n \cdot \frac{1}{4n} = \frac{1}{4}$ (1)

9 RANDOMIZED ROUNDING (SET COVER) Clearly $E[c(C')] \le OPT_f \cdot d \log n$. Applying Markov's inequality we get:

$\Pr[c(C') \ge OPT_f \cdot 4d \log n] \le \frac{1}{4}$ (2)

As a result of (1) and (2), the algorithm produces a set cover that covers all the elements and has cost at most $4d \log n \cdot OPT_f$ with probability at least 1/2.

10 RANDOMIZED ROUNDING (MAX-SAT) MAX-SAT: Given a conjunctive normal form formula $f$ on Boolean variables $x_1, \dots, x_n$ and non-negative weights $w_c$ for each clause $c$ of $f$, find a truth assignment to the Boolean variables that maximizes the total weight of satisfied clauses. Let $size(c)$ denote the number of literals in $c$, let the random variable $W$ denote the total weight of satisfied clauses, and let $W_c$ denote the weight contributed by clause $c$ to $W$. Linear Program:

maximize $\sum_{c \in C} w_c z_c$

subject to $\sum_{i \in S_c^+} y_i + \sum_{i \in S_c^-} (1 - y_i) \ge z_c$ for every clause $c \in C$, where $S_c^+$ ($S_c^-$) denotes the variables occurring positively (negatively) in $c$

$y_i, z_c \in \{0, 1\}$ for all $i, c$ (relaxed to $[0, 1]$)

11 RANDOMIZED ROUNDING (MAX-SAT) The algorithm we will use is straightforward. We solve the relaxation of the linear program to get an optimal fractional solution $(y^*, z^*)$. Independently, we set each $x_i$ to True with probability $y_i^*$. Finally, we output the resulting truth assignment. Theorem: The above algorithm produces a solution with expected profit at least $(1 - 1/e) \cdot OPT$.

12 JOHNSON'S ALGORITHM FOR MAX-SAT The algorithm is the following: Set each Boolean variable to True independently with probability 1/2 and output the resulting truth assignment. Theorem: The above algorithm produces a solution with expected profit at least $\frac{1}{2} \cdot OPT$. Proof: If the size of a clause $c$ is $k$, then it is not satisfied only if all its $k$ literals are set to False. As a result, $E[W_c] = (1 - 2^{-k})\, w_c$. For a clause $c$ with $k \ge 1$, $E[W_c] \ge \frac{1}{2} w_c$. By linearity of expectation:

$E[W] = \sum_{c \in C} E[W_c] \ge \frac{1}{2} \sum_{c \in C} w_c \ge \frac{1}{2}\, OPT$
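
Johnson's algorithm and its expectation bound fit in a short sketch (illustrative, not from the slides; the clause encoding — a list of signed 1-indexed literals — is an assumption of this example):

```python
import random

def johnson(clauses, weights, n, seed=0):
    """Johnson's algorithm: set every variable True independently w.p. 1/2.
    A clause is a list of literals: +i means x_i, -i means NOT x_i."""
    rng = random.Random(seed)
    assign = [None] + [rng.random() < 0.5 for _ in range(n)]
    sat = lambda c: any(assign[l] if l > 0 else not assign[-l] for l in c)
    return sum(w for c, w in zip(clauses, weights) if sat(c))

def expected_weight(clauses, weights):
    """E[W] = sum_c (1 - 2^(-size(c))) * w_c under a uniform random assignment."""
    return sum(w * (1 - 2 ** -len(c)) for c, w in zip(clauses, weights))

clauses = [[1, 2], [-1, 3], [-2, -3], [1]]
weights = [1.0, 2.0, 1.0, 3.0]
# Guarantee from the theorem: E[W] >= (1/2) * total weight >= OPT/2.
print(expected_weight(clauses, weights), sum(weights) / 2)
```

For this instance $E[W] = 0.75 + 1.5 + 0.75 + 1.5 = 4.5$, comfortably above half of the total weight 7.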

13 DERANDOMIZATION VIA THE METHOD OF CONDITIONAL EXPECTATION Johnson's algorithm can be derandomized using the method of conditional expectations. We will use the self-reducibility tree $T$ for formula $f$: each internal node at level $i$ corresponds to a setting of the Boolean variables $x_1, \dots, x_i$, and each leaf represents a complete truth assignment to the $n$ variables. We can compute the conditional expectation of any node in $T$ in polynomial time. Consider a node at level $i$, and let $\varphi$ be the Boolean formula on variables $x_{i+1}, \dots, x_n$ obtained for this node via self-reducibility. The expected weight of satisfied clauses of $\varphi$ under a uniformly random setting can clearly be computed in polynomial time; adding to this the weight of the clauses of $f$ already satisfied gives the answer.

14 DERANDOMIZATION VIA THE METHOD OF CONDITIONAL EXPECTATION We can compute in polynomial time a path from the root to a leaf of $T$ such that the conditional expectation of each node on this path is at least $E[W]$. Because $x_{i+1}$ is equally likely to be set to True or False, the conditional expectation of a node is the average of the conditional expectations of its children, i.e.,

$E[W \mid x_1 = a_1, \dots, x_i = a_i] = \frac{1}{2} E[W \mid x_1 = a_1, \dots, x_i = a_i, x_{i+1} = \text{True}] + \frac{1}{2} E[W \mid x_1 = a_1, \dots, x_i = a_i, x_{i+1} = \text{False}]$

As a result, the child with the larger conditional expectation has value at least that of its parent. Following this path gives an assignment with total weight at least $\frac{1}{2} \cdot OPT$.
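
The greedy walk down the self-reducibility tree can be sketched as follows (an illustrative sketch, not the slides' code; clause encoding as signed literals is an assumption of the example):

```python
def cond_expectation(clauses, weights, fixed):
    """E[W | variables in `fixed` set as given, the rest uniform at random].
    fixed: dict mapping variable index -> bool."""
    total = 0.0
    for c, w in zip(clauses, weights):
        free = 0
        satisfied = False
        for lit in c:
            v = abs(lit)
            if v in fixed:
                if fixed[v] == (lit > 0):
                    satisfied = True
            else:
                free += 1
        # Satisfied clauses count fully; otherwise the clause is satisfied
        # with probability 1 - 2^(-free) over its unfixed literals.
        total += w if satisfied else w * (1 - 2 ** -free)
    return total

def derandomized_johnson(clauses, weights, n):
    """Fix x_1..x_n one by one, always taking the child with the larger
    conditional expectation; the value never drops below the initial E[W]."""
    fixed = {}
    for v in range(1, n + 1):
        e_true = cond_expectation(clauses, weights, {**fixed, v: True})
        e_false = cond_expectation(clauses, weights, {**fixed, v: False})
        fixed[v] = e_true >= e_false
    return fixed, cond_expectation(clauses, weights, fixed)

clauses = [[1, 2], [-1, 3], [-2, -3], [1]]
weights = [1.0, 2.0, 1.0, 3.0]
assign, value = derandomized_johnson(clauses, weights, 3)
print(assign, value)
```

On this instance the walk reaches an assignment satisfying all clauses (weight 7.0), starting from the unconditional expectation 4.5.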

15 A 3/4-FACTOR ALGORITHM FOR MAX-SAT We combine the previous two algorithms as follows: we flip an unbiased coin; if the result is 1 we run the randomized LP-rounding algorithm, otherwise (coin = 0) we run Johnson's algorithm. Theorem: The above algorithm produces a solution with expected profit at least $\frac{3}{4} \cdot OPT$. Proof: Let $size(c) = k$. We have proved that

$E[W_c \mid \text{coin} = 0] = (1 - 2^{-k})\, w_c \ge (1 - 2^{-k})\, w_c z_c^*$ (1)

$E[W_c \mid \text{coin} = 1] \ge \left(1 - \left(1 - \frac{1}{k}\right)^k\right) w_c z_c^*$ (2)

16 A 3/4-FACTOR ALGORITHM FOR MAX-SAT Combining (1) and (2) we get:

$E[W_c] = \frac{1}{2} E[W_c \mid \text{coin} = 0] + \frac{1}{2} E[W_c \mid \text{coin} = 1] \ge \frac{1}{2}\, w_c z_c^* \left[(1 - 2^{-k}) + \left(1 - \left(1 - \frac{1}{k}\right)^k\right)\right]$ (3)

For $k = 1$ or $k = 2$ we get from (3) that $E[W_c] \ge \frac{3}{4} w_c z_c^*$. For $k \ge 3$:

$E[W_c] \ge \frac{1}{2}\left(\frac{7}{8} + 1 - \frac{1}{e}\right) w_c z_c^* \ge \frac{3}{4} w_c z_c^*$
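
As a sanity check (not in the original slides), the combined per-clause factor can be verified numerically for every clause size $k$:

```python
# Average of the two guarantees for a clause of size k:
# (1/2) * [(1 - 2^-k) + (1 - (1 - 1/k)^k)], which is >= 3/4 for all k >= 1.
def combined_factor(k):
    return 0.5 * ((1 - 2 ** -k) + (1 - (1 - 1 / k) ** k))

for k in range(1, 50):
    assert combined_factor(k) >= 0.75 - 1e-12
print(combined_factor(1), combined_factor(2), round(combined_factor(3), 4))
```

The bound is tight at $k = 1$ and $k = 2$ (both give exactly 3/4) and strictly better for larger clauses.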

17 A 3/4-FACTOR ALGORITHM FOR MAX-SAT By linearity of expectation:

$E[W] = \sum_{c \in C} E[W_c] \ge \frac{3}{4} \sum_{c \in C} w_c z_c^* \ge \frac{3}{4}\, OPT$

which completes the proof. We can modify the above 3/4-factor algorithm to make it deterministic: use the derandomized 1/2-factor algorithm to solve the problem; use the derandomized (1 − 1/e)-factor algorithm to solve the problem; output the better of the two assignments.

18 AN IMPROVED ALGORITHM FOR MAX-2SAT A quadratic program is the problem of optimizing a quadratic function of integer-valued variables, subject to quadratic constraints on these variables. If each monomial in the objective function, as well as in each of the constraints, is of degree 0 or 2, then we say that this is a strict quadratic program. A strict quadratic program over $n$ variables defines a vector program over $n$ vector variables in $R^n$: every degree-2 term corresponds to an inner product. Vector programs are equivalent to semidefinite programs. For every $\varepsilon > 0$, semidefinite programs can be solved within an additive error of $\varepsilon$ in time polynomial in $n$ and $\log(1/\varepsilon)$.

19 AN IMPROVED ALGORITHM FOR MAX-2SAT We can construct a strict quadratic program for MAX-2SAT. Corresponding to each Boolean variable $x_i$ we introduce a variable $y_i$ constrained to be +1 or −1. In addition, we introduce another variable $y_0$, also constrained to be +1 or −1. Variable $x_i$ will be True if $y_i = y_0$ and False otherwise. Now the value of a clause is:

$v(x_i) = \frac{1 + y_0 y_i}{2}, \qquad v(\bar{x}_i) = \frac{1 - y_0 y_i}{2}$

$v(x_i \vee x_j) = 1 - v(\bar{x}_i)\, v(\bar{x}_j) = \frac{3 + y_0 y_i + y_0 y_j - y_i y_j}{4}$

20 AN IMPROVED ALGORITHM FOR MAX-2SAT Strict Quadratic Program:

maximize $\sum_{0 \le i < j \le n} a_{ij}\,(1 + y_i y_j) + b_{ij}\,(1 - y_i y_j)$

subject to $y_i^2 = 1$, $y_i \in Z$, $0 \le i \le n$

Vector Relaxation:

maximize $\sum_{0 \le i < j \le n} a_{ij}\,(1 + u_i \cdot u_j) + b_{ij}\,(1 - u_i \cdot u_j)$

subject to $u_i \cdot u_i = 1$, $u_i \in R^{n+1}$, $0 \le i \le n$

21 AN IMPROVED ALGORITHM FOR MAX-2SAT The algorithm is the following: We solve the vector program; let $a_0, \dots, a_n$ be an optimal solution. We pick a vector $r$ uniformly distributed on the unit sphere in $n+1$ dimensions and we set $y_i = 1$ iff $r \cdot a_i \ge 0$. This gives a truth assignment for the Boolean variables. Theorem: The above algorithm produces a solution with expected profit at least $\alpha \cdot OPT$, where $\alpha > 0.87856$.
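
The random-hyperplane rounding step can be sketched with the standard library alone (illustrative sketch, not from the slides; the three hard-coded 2-D unit vectors are an assumption of the example — a real run would use the SDP solution in $n+1$ dimensions):

```python
import random

def hyperplane_round(vectors, seed=1):
    """Round unit vectors a_0..a_n to +/-1 with a uniformly random
    hyperplane: draw r with i.i.d. Gaussian coordinates (its direction
    is uniform on the sphere) and set y_i = +1 iff r . a_i >= 0."""
    rng = random.Random(seed)
    dim = len(vectors[0])
    r = [rng.gauss(0.0, 1.0) for _ in range(dim)]
    dot = lambda u, v: sum(x * y for x, y in zip(u, v))
    return [1 if dot(r, a) >= 0 else -1 for a in vectors]

# x_i is True iff y_i == y_0; vectors[0] plays the role of y_0.
vectors = [(1.0, 0.0), (0.6, 0.8), (-1.0, 0.0)]
y = hyperplane_round(vectors)
truth = [y[i] == y[0] for i in range(1, len(y))]
print(y, truth)
```

Note that antipodal vectors (such as the first and third above) always land on opposite sides of the hyperplane, so they receive opposite signs.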

22 AN IMPROVED ALGORITHM FOR MAX-2SAT Proof: Let $W$ be the random variable denoting the weight of the truth assignment produced by the algorithm. We will prove that $E[W] \ge \alpha \cdot OPT$. We can see that:

$E[W] = 2 \sum_{0 \le i < j \le n} \left( a_{ij} \Pr[y_i = y_j] + b_{ij} \Pr[y_i \ne y_j] \right)$

Let $\theta_{ij}$ denote the angle between $a_i$ and $a_j$. Then:

$\Pr[y_i \ne y_j] = \frac{\theta_{ij}}{\pi}, \qquad \Pr[y_i = y_j] = 1 - \frac{\theta_{ij}}{\pi}$

$E[W] = 2 \sum_{0 \le i < j \le n} \left[ a_{ij} \left(1 - \frac{\theta_{ij}}{\pi}\right) + b_{ij}\, \frac{\theta_{ij}}{\pi} \right]$ (1)

23 AN IMPROVED ALGORITHM FOR MAX-2SAT Now, if we choose

$\alpha = \min_{0 < \theta \le \pi} \frac{2}{\pi} \cdot \frac{\theta}{1 - \cos\theta}$

we have that:

$\frac{\theta_{ij}}{\pi} \ge \alpha \cdot \frac{1 - \cos\theta_{ij}}{2}$ (2)

$1 - \frac{\theta_{ij}}{\pi} \ge \alpha \cdot \frac{1 + \cos\theta_{ij}}{2}$ (3)

From (2) and (3), (1) becomes:

$E[W] \ge \alpha \sum_{0 \le i < j \le n} \left[ a_{ij}\,(1 + \cos\theta_{ij}) + b_{ij}\,(1 - \cos\theta_{ij}) \right]$

24 AN IMPROVED ALGORITHM FOR MAX-2SAT Because the vectors lie on the unit sphere we have $u_i \cdot u_j = \cos\theta_{ij}$. The objective function was

maximize $\sum_{0 \le i < j \le n} a_{ij}\,(1 + u_i \cdot u_j) + b_{ij}\,(1 - u_i \cdot u_j)$

and the angles $\theta_{ij}$ are the angles between the vectors of the optimal solution. Thus:

$E[W] \ge \alpha \sum_{0 \le i < j \le n} \left[ a_{ij}\,(1 + \cos\theta_{ij}) + b_{ij}\,(1 - \cos\theta_{ij}) \right] \ge \alpha \cdot OPT$

Using elementary calculus we can prove that $\alpha = \min_{0 < \theta \le \pi} \frac{2\theta}{\pi(1 - \cos\theta)} > 0.87856$, so the above algorithm is a 0.87856-approximation.
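
The constant $\alpha$ can be estimated numerically instead of by calculus — a quick grid search over $(0, \pi]$ (a sanity check, not part of the slides):

```python
import math

# alpha = min over 0 < theta <= pi of (2/pi) * theta / (1 - cos(theta)).
def ratio(theta):
    return (2 / math.pi) * theta / (1 - math.cos(theta))

# Evaluate on a fine grid; the minimum is attained near theta ~ 2.33 rad.
alpha = min(ratio(math.pi * t / 100000) for t in range(1, 100001))
print(round(alpha, 5))
```

The grid minimum agrees with the stated bound $\alpha > 0.87856$ to five decimal places.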

25 ONLINE ALGORITHMS Online algorithms receive and process the input in partial amounts. In a typical setting, an online algorithm receives a sequence of requests for service and must serve each request before it sees the next one. In serving each request the algorithm has a choice among several alternatives, each with an associated cost, and its choice influences the cost of serving future requests. To evaluate an online algorithm we compare its total cost on a sequence of requests to the total cost of an optimal offline algorithm that serves the same sequence. The worst-case ratio over all possible request sequences is the competitive ratio of the online algorithm and is the natural measure of its quality.

26 ONLINE PAGING PROBLEM PAGING PROBLEM: Consider the following two-level virtual memory system. Each level can store a number of fixed-size memory units called pages. The first level, the slow memory, stores a fixed set $P = \{p_1, \dots, p_N\}$ of $N$ pages. The second level, the fast memory (cache), can store any $k$-subset of $P$. Given a request for a page $p_i$, the system must bring $p_i$ into the fast memory. If $p_i$ is already in the fast memory, no cost is incurred; otherwise a unit cost is incurred and, to make room for $p_i$, another page has to be evicted. The problem is to minimize the total cost by making a wise choice when evicting a page. Obviously a page that will be requested in the near future shouldn't be evicted, but the future is unknown since the problem is online.

27 DETERMINISTIC ALGORITHMS FOR PAGING Least Recently Used (LRU): evict the item in the cache whose most recent request occurred furthest in the past. First-In, First-Out (FIFO): evict the item that has been in the cache for the longest period. Last-In, First-Out (LIFO): evict the item that was brought into the cache most recently. Least Frequently Used (LFU): evict the item in the cache that has been requested least often. LRU and FIFO are k-competitive; LIFO and LFU do not achieve a bounded competitiveness coefficient.
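
A minimal LRU sketch (illustrative, not from the slides), counting page faults on a request sequence:

```python
from collections import OrderedDict

def lru_faults(requests, k):
    """Count page faults of LRU with a cache of k slots."""
    cache = OrderedDict()  # keys ordered from least- to most-recently used
    faults = 0
    for p in requests:
        if p in cache:
            cache.move_to_end(p)           # p is now most recently used
        else:
            faults += 1
            if len(cache) == k:
                cache.popitem(last=False)  # evict the least recently used
            cache[p] = True
    return faults

# Cycling through k+1 distinct pages is the worst case for LRU:
# every single request is a fault.
print(lru_faults([1, 2, 3, 4] * 3, k=3))  # 12 requests, 12 faults
```

This cyclic sequence is exactly the adversarial pattern behind the lower bound of k for deterministic algorithms on the next slide.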

28 DETERMINISTIC ALGORITHMS FOR PAGING Theorem: No deterministic online algorithm for the online paging problem can achieve a competitiveness coefficient better than k, where k is the number of pages that can be stored in the cache. Can we overcome the negative result of the above theorem using randomization?

29 ADVERSARY MODELS We can view the offline algorithm serving the request sequence as an adversary who is not only managing the cache but also generating the request sequence. The adversary's goal is to increase the cost of the given online algorithm while keeping it down for the offline algorithm. Oblivious adversary: must construct the request sequence in advance and pays the optimal offline cost. Adaptive online adversary: serves the current request online and then chooses the next request based on the online algorithm's actions so far. Adaptive offline adversary: chooses the next request based on the online algorithm's actions so far, but pays the optimal offline cost to serve the resulting request sequence.

30 A RANDOMIZED ALGORITHM FOR PAGING Algorithm MARKER: The algorithm proceeds in a series of rounds. Each of the k cache locations has a marker bit associated with it; at the beginning of every round, all k marker bits are reset to zero. As memory requests come in, the algorithm processes them as follows. If the requested item is already in one of the k cache locations, the marker bit of that location is set to one. If the request is a miss, the item is brought into the cache, and the item evicted to make room for it is chosen as follows: choose an unmarked cache location uniformly at random, evict the item in it, and set its marker bit to one. Once all the locations have been marked, the round ends on the next request to an item not in the cache.

31 A RANDOMIZED ALGORITHM FOR PAGING Theorem: The MARKER algorithm is $2H_k$-competitive against an oblivious adversary, where $H_k$ is the k-th harmonic number. Proof: We assume that MARKER and the optimal offline algorithm OPT start with the same set of items in their caches; otherwise the difference can be attributed to an additive constant. For each round i we call the pages in the cache immediately prior to the start of the round old, and the rest of the pages new. Consider the i-th round of the algorithm, and let $m_i$ be the number of new pages requested in this round. The worst scenario for MARKER is that all requests to new pages precede the requests to old pages.

32 A RANDOMIZED ALGORITHM FOR PAGING Ordered in this way, the first $m_i$ requests incur $m_i$ page faults. We now investigate the page faults resulting from the $k - m_i$ requests to old pages. The j-th old page requested in this round is in the cache with probability

$\frac{k - m_i - (j-1)}{k - (j-1)}$

A page fault occurs only if the page is not in the cache, thus with probability

$\frac{m_i}{k - (j-1)}$

33 A RANDOMIZED ALGORITHM FOR PAGING Hence, the expected number of faults during the i-th round is at most:

$m_i + \sum_{j=1}^{k - m_i} \frac{m_i}{k - (j-1)} = m_i + m_i (H_k - H_{m_i}) = m_i (H_k - H_{m_i} + 1) \le m_i H_k$

During the (i−1)-th and i-th rounds together, at least $k + m_i$ distinct pages are requested, so OPT makes at least $m_i$ page faults over these two rounds; in the first round OPT makes at least $m_1$ page faults. Thus the total number of page faults of OPT is at least $\frac{1}{2} \sum_i m_i$, and the competitive ratio is at most:

$\frac{\sum_i m_i H_k}{\frac{1}{2} \sum_i m_i} = 2 H_k$
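
The closed form for the expected faults per round can be verified exactly with rational arithmetic (a sanity check, not part of the slides):

```python
from fractions import Fraction

def H(n):
    """n-th harmonic number, computed exactly."""
    return sum(Fraction(1, j) for j in range(1, n + 1))

# Expected faults in a round with m new pages and k slots:
# m + sum_{j=1}^{k-m} m/(k-j+1) = m * (1 + H_k - H_m) <= m * H_k.
k = 10
for m in range(1, k + 1):
    faults = m + sum(Fraction(m, k - j + 1) for j in range(1, k - m + 1))
    assert faults == m * (1 + H(k) - H(m))
    assert faults <= m * H(k)
print("verified for k =", k)
```

The final inequality holds because $H_m \ge 1$ whenever $m \ge 1$, which is the only step the slide leaves implicit.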

34 LOWER BOUND FOR RANDOMIZED PAGING ALGORITHMS Theorem: No randomized algorithm for online paging can achieve a competitiveness coefficient better than $H_k$ against an oblivious adversary. Proof: We will use Yao's minimax principle: the expected cost of the optimal deterministic algorithm on an arbitrarily chosen input distribution ρ is a lower bound on the expected cost of the optimal randomized algorithm for the problem. It therefore suffices to exhibit a distribution ρ on request sequences for which no deterministic algorithm achieves a competitiveness coefficient better than $H_k$; then $H_k$ is a lower bound for every randomized algorithm.

35 LOWER BOUND FOR RANDOMIZED PAGING ALGORITHMS We use only $k+1$ different pages $I = \{p_1, \dots, p_{k+1}\}$. The distribution on requests is the following: request $r_i$ is chosen uniformly at random from the pages $I - \{r_{i-1}\}$. We divide the requests into rounds; a round ends when all the pages in $I$ have been requested. The expected length of a round is $k H_k$, as it is equivalent to a random walk on a complete graph with $k+1$ vertices. The offline algorithm needs only one page fault per round, while the expected number of page faults of any deterministic online algorithm during a round is at least $H_k$. As a result, $H_k$ is a lower bound for every deterministic algorithm on this distribution, and this yields the result.

36 ONLINE K-SERVER PROBLEM K-SERVER PROBLEM: Let $k > 1$ be an integer and let $\mathcal{M} = (M, d)$ be a metric space, where $M$ is a set of points with $|M| > k$ and $d$ is a metric over $M$. An algorithm controls k mobile servers, which are located on points of $M$. The algorithm is presented with a sequence $\sigma = r_1, r_2, \dots, r_n$ of requests, where each request $r_i$ is a point of the space. A request $r_i$ is served when one of the k servers lies on point $r_i$. By moving servers, the algorithm must serve all the requests sequentially. Our goal is to minimize the total cost incurred, which is the total distance moved by all servers until all requests are served.

37 ONLINE K-SERVER PROBLEM The k-server problem is a generalization of the online paging problem. The greedy algorithm does not achieve a bounded competitiveness coefficient. k-server conjecture: every metric space allows a deterministic k-competitive k-server algorithm. (k has been proved to be a lower bound on the competitiveness coefficient of any deterministic algorithm.) A deterministic (2k−1)-competitive algorithm is known for all metric spaces, and deterministic k-competitive algorithms are known when the metric space is a line or a tree.

38 ONLINE K-SERVER PROBLEM We know of relatively few cases where randomization against an oblivious adversary beats the lower bound of k for deterministic algorithms. It is an open question whether there is a randomized algorithm that achieves a competitiveness coefficient better than $H_k$ against an oblivious adversary. It has been shown that k is a lower bound for every online algorithm against an adaptive online adversary.

39 ONLINE K-SERVER PROBLEM Harmonic Algorithm: Let $d_i$ be the distance between the i-th server and the requested point. Independently of the past, the algorithm chooses the j-th server to serve the request with probability

$p_j = \frac{1/d_j}{\sum_{i=1}^{k} 1/d_i}$

There are metrics on which the Harmonic algorithm cannot achieve a competitiveness coefficient better than $k(k+1)/2$. It has been shown that $\frac{5}{4}\, k \cdot 2^k - 2k$ is an upper bound on Harmonic's competitiveness coefficient.
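
The Harmonic selection rule is a simple weighted coin flip — a minimal sketch (illustrative, not from the slides; the tie-breaking for a server already at the request point is an assumption of this example):

```python
import random

def harmonic_choice(distances, seed=0):
    """Pick server j with probability proportional to 1/d_j (Harmonic rule).
    A server already on the request point (d_j == 0) is chosen outright."""
    rng = random.Random(seed)
    for j, d in enumerate(distances):
        if d == 0:
            return j
    weights = [1 / d for d in distances]
    total = sum(weights)
    x = rng.random() * total
    for j, w in enumerate(weights):
        x -= w
        if x <= 0:
            return j
    return len(distances) - 1

# The closer a server is, the likelier it moves: with distances 1 and 3,
# server 0 should be chosen with probability (1/1)/(1/1 + 1/3) = 3/4.
counts = [0, 0]
for s in range(20000):
    counts[harmonic_choice([1.0, 3.0], seed=s)] += 1
print(counts[0] / sum(counts))
```

Over many trials the empirical frequency of the nearer server converges to 3/4, matching the rule's intended bias.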

40 Thank You


### Introduction to Algorithms Review information for Prelim 1 CS 4820, Spring 2010 Distributed Wednesday, February 24

Introduction to Algorithms Review information for Prelim 1 CS 4820, Spring 2010 Distributed Wednesday, February 24 The final exam will cover seven topics. 1. greedy algorithms 2. divide-and-conquer algorithms

### Probability and Statistics

CHAPTER 2: RANDOM VARIABLES AND ASSOCIATED FUNCTIONS 2b - 0 Probability and Statistics Kristel Van Steen, PhD 2 Montefiore Institute - Systems and Modeling GIGA - Bioinformatics ULg kristel.vansteen@ulg.ac.be

### Introduction to Flocking {Stochastic Matrices}

Supelec EECI Graduate School in Control Introduction to Flocking {Stochastic Matrices} A. S. Morse Yale University Gif sur - Yvette May 21, 2012 CRAIG REYNOLDS - 1987 BOIDS The Lion King CRAIG REYNOLDS

### Ph.D. Thesis. Judit Nagy-György. Supervisor: Péter Hajnal Associate Professor

Online algorithms for combinatorial problems Ph.D. Thesis by Judit Nagy-György Supervisor: Péter Hajnal Associate Professor Doctoral School in Mathematics and Computer Science University of Szeged Bolyai

### 8.1 Makespan Scheduling

600.469 / 600.669 Approximation Algorithms Lecturer: Michael Dinitz Topic: Dynamic Programing: Min-Makespan and Bin Packing Date: 2/19/15 Scribe: Gabriel Kaptchuk 8.1 Makespan Scheduling Consider an instance

### Satisfiability of Boolean Expressions

Satisfiability of Boolean Expressions In the previous section, we have analysed in great detail one particular problem reachability in directed graphs. The reason for the attention devoted to this one

### COS 511: Foundations of Machine Learning. Rob Schapire Lecture #14 Scribe: Qian Xi March 30, 2006

COS 511: Foundations of Machine Learning Rob Schapire Lecture #14 Scribe: Qian Xi March 30, 006 In the previous lecture, we introduced a new learning model, the Online Learning Model. After seeing a concrete

### The Relative Worst Order Ratio for On-Line Algorithms

The Relative Worst Order Ratio for On-Line Algorithms Joan Boyar 1 and Lene M. Favrholdt 2 1 Department of Mathematics and Computer Science, University of Southern Denmark, Odense, Denmark, joan@imada.sdu.dk

### Exponential time algorithms for graph coloring

Exponential time algorithms for graph coloring Uriel Feige Lecture notes, March 14, 2011 1 Introduction Let [n] denote the set {1,..., k}. A k-labeling of vertices of a graph G(V, E) is a function V [k].

### Data Structures Fibonacci Heaps, Amortized Analysis

Chapter 4 Data Structures Fibonacci Heaps, Amortized Analysis Algorithm Theory WS 2012/13 Fabian Kuhn Fibonacci Heaps Lacy merge variant of binomial heaps: Do not merge trees as long as possible Structure:

### Level Sets of Arbitrary Dimension Polynomials with Positive Coefficients and Real Exponents

Level Sets of Arbitrary Dimension Polynomials with Positive Coefficients and Real Exponents Spencer Greenberg April 20, 2006 Abstract In this paper we consider the set of positive points at which a polynomial

### ARTICLE IN PRESS. European Journal of Operational Research xxx (2004) xxx xxx. Discrete Optimization. Nan Kong, Andrew J.

A factor 1 European Journal of Operational Research xxx (00) xxx xxx Discrete Optimization approximation algorithm for two-stage stochastic matching problems Nan Kong, Andrew J. Schaefer * Department of

### 1 Approximating Set Cover

CS 05: Algorithms (Grad) Feb 2-24, 2005 Approximating Set Cover. Definition An Instance (X, F ) of the set-covering problem consists of a finite set X and a family F of subset of X, such that every elemennt

### Permutation Betting Markets: Singleton Betting with Extra Information

Permutation Betting Markets: Singleton Betting with Extra Information Mohammad Ghodsi Sharif University of Technology ghodsi@sharif.edu Hamid Mahini Sharif University of Technology mahini@ce.sharif.edu

### Lecture 3: Summations and Analyzing Programs with Loops

high school algebra If c is a constant (does not depend on the summation index i) then ca i = c a i and (a i + b i )= a i + b i There are some particularly important summations, which you should probably

### Week 5 Integral Polyhedra

Week 5 Integral Polyhedra We have seen some examples 1 of linear programming formulation that are integral, meaning that every basic feasible solution is an integral vector. This week we develop a theory

### Chapter 1: The binomial asset pricing model

Chapter 1: The binomial asset pricing model Simone Calogero April 17, 2015 Contents 1 The binomial model 1 2 1+1 dimensional stock markets 4 3 Arbitrage portfolio 8 4 Implementation of the binomial model

### Why graph clustering is useful?

Graph Clustering Why graph clustering is useful? Distance matrices are graphs as useful as any other clustering Identification of communities in social networks Webpage clustering for better data management

### ECEN 5682 Theory and Practice of Error Control Codes

ECEN 5682 Theory and Practice of Error Control Codes Convolutional Codes University of Colorado Spring 2007 Linear (n, k) block codes take k data symbols at a time and encode them into n code symbols.

### CS151 Complexity Theory

Gap producing reductions CS151 Complexity Theory Lecture 15 May 18, 2015 L 1 f L 2 (min.) rk k Main purpose: r-approximation algorithm for L 2 distinguishes between f() and f(); can use to decide L 1 NP-hard

### THE SCHEDULING OF MAINTENANCE SERVICE

THE SCHEDULING OF MAINTENANCE SERVICE Shoshana Anily Celia A. Glass Refael Hassin Abstract We study a discrete problem of scheduling activities of several types under the constraint that at most a single

### Analysis of Approximation Algorithms for k-set Cover using Factor-Revealing Linear Programs

Analysis of Approximation Algorithms for k-set Cover using Factor-Revealing Linear Programs Stavros Athanassopoulos, Ioannis Caragiannis, and Christos Kaklamanis Research Academic Computer Technology Institute

### Steiner Tree Approximation via IRR. Randomized Rounding

Steiner Tree Approximation via Iterative Randomized Rounding Graduate Program in Logic, Algorithms and Computation μπλ Network Algorithms and Complexity June 18, 2013 Overview 1 Introduction Scope Related

### CMPSCI611: Approximating MAX-CUT Lecture 20

CMPSCI611: Approximating MAX-CUT Lecture 20 For the next two lectures we ll be seeing examples of approximation algorithms for interesting NP-hard problems. Today we consider MAX-CUT, which we proved to

### Optimization of Design. Lecturer:Dung-An Wang Lecture 12

Optimization of Design Lecturer:Dung-An Wang Lecture 12 Lecture outline Reading: Ch12 of text Today s lecture 2 Constrained nonlinear programming problem Find x=(x1,..., xn), a design variable vector of

### Lecture 14: Fourier Transform

Machine Learning: Foundations Fall Semester, 2010/2011 Lecture 14: Fourier Transform Lecturer: Yishay Mansour Scribes: Avi Goren & Oron Navon 1 14.1 Introduction In this lecture, we examine the Fourier

### Discrete Optimization

Discrete Optimization [Chen, Batson, Dang: Applied integer Programming] Chapter 3 and 4.1-4.3 by Johan Högdahl and Victoria Svedberg Seminar 2, 2015-03-31 Todays presentation Chapter 3 Transforms using

### On the Unique Games Conjecture

On the Unique Games Conjecture Antonios Angelakis National Technical University of Athens June 16, 2015 Antonios Angelakis (NTUA) Theory of Computation June 16, 2015 1 / 20 Overview 1 Introduction 2 Preliminary

### Counting Falsifying Assignments of a 2-CF via Recurrence Equations

Counting Falsifying Assignments of a 2-CF via Recurrence Equations Guillermo De Ita Luna, J. Raymundo Marcial Romero, Fernando Zacarias Flores, Meliza Contreras González and Pedro Bello López Abstract

### By W.E. Diewert. July, Linear programming problems are important for a number of reasons:

APPLIED ECONOMICS By W.E. Diewert. July, 3. Chapter : Linear Programming. Introduction The theory of linear programming provides a good introduction to the study of constrained maximization (and minimization)

### NP-Completeness and Cook s Theorem

NP-Completeness and Cook s Theorem Lecture notes for COM3412 Logic and Computation 15th January 2002 1 NP decision problems The decision problem D L for a formal language L Σ is the computational task:

### Linear Programming. March 14, 2014

Linear Programming March 1, 01 Parts of this introduction to linear programming were adapted from Chapter 9 of Introduction to Algorithms, Second Edition, by Cormen, Leiserson, Rivest and Stein [1]. 1

### CHAPTER III - MARKOV CHAINS

CHAPTER III - MARKOV CHAINS JOSEPH G. CONLON 1. General Theory of Markov Chains We have already discussed the standard random walk on the integers Z. A Markov Chain can be viewed as a generalization of

### Introduction to Stationary Distributions

13 Introduction to Stationary Distributions We first briefly review the classification of states in a Markov chain with a quick example and then begin the discussion of the important notion of stationary

### Notes from Week 1: Algorithms for sequential prediction

CS 683 Learning, Games, and Electronic Markets Spring 2007 Notes from Week 1: Algorithms for sequential prediction Instructor: Robert Kleinberg 22-26 Jan 2007 1 Introduction In this course we will be looking

### Sequential Data Structures

Sequential Data Structures In this lecture we introduce the basic data structures for storing sequences of objects. These data structures are based on arrays and linked lists, which you met in first year

### COLORED GRAPHS AND THEIR PROPERTIES

COLORED GRAPHS AND THEIR PROPERTIES BEN STEVENS 1. Introduction This paper is concerned with the upper bound on the chromatic number for graphs of maximum vertex degree under three different sets of coloring

### Solving Mixed Integer Linear Programs Using Branch and Cut Algorithm

1 Solving Mixed Integer Linear Programs Using Branch and Cut Algorithm by Shon Albert A Project Submitted to the Graduate Faculty of North Carolina State University in Partial Fulfillment of the Requirements

### Practical Applications of Boolean Satisfiability

Practical Applications of Boolean Satisfiability Joao Marques-Silva School of Electronics and Computer Science University of Southampton WODES 08, Göteborg, Sweden, May 2008 Motivation Many Uses of Satisfiability

### Auctions for Digital Goods Review

Auctions for Digital Goods Review Rattapon Limprasittiporn Computer Science Department University of California Davis rlim@ucdavisedu Abstract The emergence of the Internet brings a large change in the

### CHAPTER 9. Integer Programming

CHAPTER 9 Integer Programming An integer linear program (ILP) is, by definition, a linear program with the additional constraint that all variables take integer values: (9.1) max c T x s t Ax b and x integral

### Interpolating Polynomials Handout March 7, 2012

Interpolating Polynomials Handout March 7, 212 Again we work over our favorite field F (such as R, Q, C or F p ) We wish to find a polynomial y = f(x) passing through n specified data points (x 1,y 1 ),

### Minimize subject to. x S R

Chapter 12 Lagrangian Relaxation This chapter is mostly inspired by Chapter 16 of [1]. In the previous chapters, we have succeeded to find efficient algorithms to solve several important problems such