8.1 Makespan Scheduling


Approximation Algorithms    Lecturer: Michael Dinitz
Topic: Dynamic Programming: Min-Makespan and Bin Packing    Date: 2/19/15    Scribe: Gabriel Kaptchuk

8.1 Makespan Scheduling

Consider an instance of the Makespan Scheduling problem:

Input: jobs J = {1, 2, ..., n} with processing times P_i.
Feasible: an assignment Φ : J → M to machines M = {1, 2, ..., m}.
Objective: minimize the makespan (the maximum load of any machine).   (8.1.1)

We proved in the previous lecture that a simple greedy algorithm is a 2-approximation. We will now attempt to find a (1+ε)-approximation. To accomplish this, we will divide jobs into large and small. Suppose we have an algorithm which, given some guess T:

1. if T ≥ OPT, returns a solution with makespan at most (1+ε)T;
2. if T < OPT, either returns false or returns a solution with makespan at most (1+ε)OPT.

Such an algorithm is sometimes called a (1+ε)-relaxed decision procedure.

Definition: Let P = Σ_{i∈J} P_i. Clearly 1 ≤ OPT ≤ P.

Since 1 ≤ OPT ≤ P, if we have access to a (1+ε)-relaxed decision procedure we can use it O(log P) times to binary search over values of T (between 1 and P) and find a (1+ε)-approximation. Since log P is polynomial in the input size, if the relaxed decision procedure runs in polynomial time then we get a (1+ε)-approximation for Makespan Scheduling that runs in polynomial time. Hence we need only design a (1+ε)-relaxed decision procedure.

Since we are now given a guess T, we can define small and large jobs with respect to T.

Definition: Let J_small = {j ∈ J : P_j ≤ εT} and J_large = {j ∈ J : P_j > εT}.

Theorem: Given a schedule for J_large with makespan at most (1+ε)T, we can find a schedule for all of J with makespan at most (1+ε)·max(T, OPT).

Proof: We greedily place the small jobs. More specifically, we start with the given schedule for the large jobs. We then consider the small jobs in arbitrary order, placing each on the currently least-loaded machine.
Consider a machine i. We break into two cases, depending on whether i was assigned any small jobs.

Case 1: i has no small jobs. Then the jobs on i are exactly the large jobs originally scheduled on it, so its load is at most (1+ε)T by the guarantee on the given schedule for J_large.

Case 2: i has one or more small jobs. Then we can repeat the analysis of the greedy algorithm from the last lecture, now using the fact that the jobs added are all small. In particular, just before the last job was added to i, the load of i was at most the load of every other machine (by the definition of the greedy algorithm), and hence at most the average load, which is P/m ≤ OPT. The last job added has processing time at most εT since it is small, so the final load of i is at most OPT + εT ≤ (1+ε)·max(T, OPT).

Since the above analysis holds for an arbitrary machine i, it holds for every machine, and thus the makespan is at most (1+ε)·max(T, OPT).

Given the above theorem, from this point forward we only consider methods for solving Makespan Scheduling on J_large.

Definition: Let b = ⌈1/ε⌉, so 1/b ≤ ε.

Definition: Let P'_j = ⌊P_j · b²/T⌋ · T/b².

We can think of P'_j as P_j rounded down to the nearest multiple of T/b². In particular, P'_j ≤ P_j ≤ P'_j + T/b². Also, since j ∈ J_large we know that εT < P_j ≤ T (if some P_j > T then OPT > T, and we may immediately return false), and hence P'_j = k · T/b² for some k ∈ {b, b+1, ..., b²}. We will call this new instance the rounded instance.

Lemma: If there is a schedule with makespan at most T in the original instance, then there is a schedule with makespan at most T in the rounded instance.

Proof: Consider the schedule with makespan at most T in the original instance. Since P'_j ≤ P_j for all j, the same schedule has makespan at most T under the rounded processing times.

Theorem: Any schedule with makespan at most T in the rounded instance has makespan at most (1+ε)T under the original processing times.

Proof: Since P'_j ≥ T/b, there are at most b jobs on each machine.
Thus under the original processing times, the load of any machine i is at most

    Σ_{j on i} P_j  ≤  Σ_{j on i} (P'_j + T/b²)  ≤  T + b · T/b²  =  (1 + 1/b)T  ≤  (1+ε)T,

proving the theorem.

Thus in order to find a solution with makespan at most (1+ε)T (assuming a schedule of makespan at most T exists), we just need an algorithm that computes a solution to the rounded instance with makespan at most T. Fortunately, since we have bounded the number of possible job lengths, this is reasonably straightforward using dynamic programming.
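Before turning to the dynamic program, the rounding step itself can be sketched in a few lines of Python. This is only an illustration: the function name and the list representation of the instance are mine; the arithmetic (grid T/b² with b = ⌈1/ε⌉) is from the notes.

```python
import math

def round_large_jobs(P_large, T, eps):
    """Round each large job (eps*T < P_j <= T) down to the nearest
    multiple of T/b^2, where b = ceil(1/eps)."""
    b = math.ceil(1 / eps)
    unit = T / (b * b)  # grid size T/b^2
    # Each rounded length equals k*unit for some k in {b, ..., b^2}.
    return [math.floor(p / unit) * unit for p in P_large]
```

For example, with eps = 0.5 (so b = 2) and T = 4 the grid is T/b² = 1, and jobs of lengths 2.3 and 3.7 round down to 2 and 3 respectively.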

Definition: A configuration is a tuple (a_b, a_{b+1}, ..., a_{b²}) of nonnegative integers such that Σ_{i=b}^{b²} a_i · i · T/b² ≤ T. Let C(T) be the set of all configurations. Note that since every job in the rounded instance has length at least T/b, a machine with load at most T holds at most b jobs, so each a_i ≤ b and |C(T)| ≤ (b+1)^{b²}.

The point of this definition is that in any schedule with makespan at most T in the rounded instance, the jobs assigned to each machine are described by a configuration. In other words, for each machine there is some configuration (a_b, ..., a_{b²}) such that exactly a_k jobs of length k · T/b² are assigned to it, for each k ∈ {b, b+1, ..., b²}. Thus in order to find a schedule with makespan at most T in the rounded instance, we just need to find a configuration for each machine so that every job is assigned to some machine.

Definition: Let f(n_b, n_{b+1}, ..., n_{b²}) be the minimum number of machines needed to schedule n_i jobs of length i · T/b² (for all i ∈ {b, b+1, ..., b²}) with makespan at most T.

Clearly f(0, 0, ..., 0) = 0. Now we can write a recurrence for other inputs:

    f(n_b, n_{b+1}, ..., n_{b²}) = 1 + min { f(n_b − a_b, n_{b+1} − a_{b+1}, ..., n_{b²} − a_{b²}) : a ∈ C(T), a_i ≤ n_i for all i }.

This is a correct relation because it tries all possible configurations for one machine and recurses on the remaining jobs.

Consider a dynamic program for this function. Since each n_i ≤ n, the number of table entries is at most (n+1)^{b²}. Since the number of configurations is at most (b+1)^{b²}, computing each table entry from the previous ones takes O((b+1)^{b²}) time. Hence the total running time of the dynamic program is n^{O(b²)}.

With this function f in hand, we are finished. Suppose the rounded instance has n_i jobs of length i · T/b². Then we just check whether f(n_b, n_{b+1}, ..., n_{b²}) is at most m (the number of machines we have available). If not, we return false: there is no schedule of makespan at most T in the rounded instance, and thus by the lemma above no schedule of makespan at most T in the original instance. If so, then by the theorem above the corresponding schedule has makespan at most (1+ε)T on the original instance.
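A direct (unoptimized) Python sketch of this dynamic program is below, memoizing f on the vector of remaining job counts. The count-vector encoding indexed by k ∈ {b, ..., b²} follows the notes; the function names are mine, and no attempt is made at the table-based efficiency analysis above.

```python
from functools import lru_cache
from itertools import product
import math

def min_machines(counts, b, T):
    """f(n_b, ..., n_{b^2}): minimum number of machines that can schedule
    counts[i] jobs of rounded length (b+i)*T/b^2 each, with makespan <= T."""
    unit = T / (b * b)
    sizes = [(b + i) * unit for i in range(len(counts))]  # k*T/b^2, k = b, b+1, ...

    # All nonempty configurations: at most b jobs per machine, total load <= T.
    configs = [a for a in product(range(b + 1), repeat=len(sizes))
               if any(a) and sum(c * s for c, s in zip(a, sizes)) <= T]

    @lru_cache(maxsize=None)
    def f(ns):
        if not any(ns):
            return 0                      # f(0, ..., 0) = 0
        best = math.inf
        for a in configs:
            if all(c <= n for c, n in zip(a, ns)):   # require a_i <= n_i
                best = min(best, 1 + f(tuple(n - c for n, c in zip(ns, a))))
        return best

    return f(tuple(counts))
```

Checking whether the returned value is at most m then answers the relaxed decision question for the rounded instance, as described above.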
8.2 Bin Packing

Bin Packing is essentially the converse of Makespan Scheduling: rather than minimizing the makespan for a fixed number of machines, it minimizes the number of machines subject to a makespan bound. For historical reasons, though, it is usually phrased somewhat differently. The Bin Packing problem is defined as follows.

Input: items I = {1, 2, ..., n} with sizes 0 ≤ S_i ≤ 1.
Feasible: a partition {B_j}_{j∈{1,2,...,m}} of I with Σ_{i∈B_j} S_i ≤ 1 for all j ∈ {1, 2, ..., m}.
Objective: minimize m.

Consider the following greedy algorithm, which simply assigns items one at a time in arbitrary order. If there is a bin with enough room for the item, we add it there. Otherwise, we create a new bin.
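As a quick illustration, here is a minimal Python sketch of this greedy rule. It is a "first-fit" instantiation (among bins with room it always picks the first), whereas the formal algorithm below allows any bin with room; the function name is mine.

```python
def greedy_bin_packing(sizes):
    """Assign items in the given order: reuse the first bin with room,
    otherwise open a new bin. Each bin is a list of item sizes."""
    bins = []
    for s in sizes:
        for b in bins:
            if sum(b) + s <= 1:   # enough room in an existing bin
                b.append(s)
                break
        else:
            bins.append([s])      # no bin fits: open a new one
    return bins
```

For example, on sizes [0.5, 0.6, 0.4, 0.5] this opens three bins: [0.5, 0.4], [0.6], and [0.5].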

Algorithm 1 Greedy Algorithm for Bin Packing
Input: items I with sizes S_i
Output: a partition of the items into bins B_1, ..., B_k

k ← 1, B_1 ← ∅
while there is an unassigned item j ∈ I do
    if there exists an a ≤ k with S_j + Σ_{i∈B_a} S_i ≤ 1 then
        assign j to B_a (if more than one such a exists, pick one arbitrarily)
    else
        k ← k + 1
        assign j to B_k
    end if
end while
return B_1, ..., B_k

Theorem: The above greedy algorithm uses at most 2·OPT + 1 bins.

Proof: First note that at most one of the algorithm's bins is less than half full. To see this, suppose there were two bins each with load at most 1/2. Then when the second of them was created, the item placed in it would have fit in the first bin, so the algorithm would not have created the second bin, giving us a contradiction. Let s(I) = Σ_{i∈I} S_i, and suppose that greedy uses a bins. Since all but one are at least half full, we know that s(I) ≥ (a−1)/2. But since each bin has capacity 1, we also know that s(I) ≤ OPT. Hence a ≤ 2·OPT + 1.

Similar to Makespan Scheduling, we can improve this with rounding and dynamic programming to get the following theorem.

Theorem: For all ε > 0, there is an algorithm that finds a solution with at most (1+ε)·OPT + 1 bins, with running time n^{O(1/ε²)}.

We begin by splitting the items into large and small.

Definition: Let I_large = {i ∈ I : S_i ≥ ε/2} and I_small = {i ∈ I : S_i < ε/2}.

Theorem: Given a partition of I_large into b bins, we can find a partition of all of I into max(b, (1+ε)·OPT + 1) bins.

Proof: We simply run greedy on I_small starting from the solution for I_large. The argument is similar to Makespan Scheduling. If the greedy algorithm does not need to add more bins, then we get a solution with b bins. If it does need to add more bins, then every bin other than the last one must contain items of total size at least 1 − ε/2.
Hence if the greedy algorithm ends up with a bins, we know that (a − 1)(1 − ε/2) ≤ s(I) ≤ OPT, and hence

    a  ≤  OPT/(1 − ε/2) + 1  ≤  (1+ε)·OPT + 1,

using 1/(1 − ε/2) ≤ 1 + ε for ε ≤ 1.

Given the above theorem, from this point forward we only consider methods for solving Bin Packing on I_large. We want to construct a rounded instance as in Makespan Scheduling, but unfortunately simple rounding does not work as well here. Instead, we need to use a more sophisticated method known

as linear grouping. We first order the elements of I_large by size: WLOG S_1 ≥ S_2 ≥ ... ≥ S_l, where l = |I_large|. Then for some number k (to be chosen later) we partition I_large into groups of k consecutive items in this order:

    G_1 = {S_1, S_2, ..., S_k}
    G_2 = {S_{k+1}, S_{k+2}, ..., S_{2k}}
    ...
    G_r = {S_{(r−1)k+1}, ..., S_l}

Every group other than the last one has exactly k elements (the last one may have fewer). Now, for all but the first group G_1, we round each item up to the largest element of its group. More formally, for i ∈ I_large \ G_1 with i ∈ G_j, we set

    S'_i = max_{i'∈G_j} S_{i'} = S_{(j−1)k+1}.

Lemma: S_i ≤ S'_i ≤ S_{i−k} for all i ∈ I_large \ G_1.

Proof: The first inequality is obvious, as our rounding never decreases any value. For the second inequality, note that item i−k is in a different group from item i: if i ∈ G_j, then i−k ∈ G_{j−1}. Since i ≤ jk, we have i−k ≤ (j−1)k < (j−1)k+1, and since the items are sorted in decreasing order this gives S_{i−k} ≥ S_{(j−1)k+1} = S'_i.

We essentially ran out of time at this point. The following is an informal description of the rest of the proof. Let I' denote the new (rounded) instance, i.e., the items of I_large \ G_1 with sizes S'.

Lemma: OPT(I') ≤ OPT(I_large).

Proof: Consider the optimal partition for I_large, and suppose each item i ∈ I_large is placed in bin f(i) (so f is the assignment function of this solution). To get a solution for I' with the same number of bins, we place item i ∈ I' in bin f(i−k). The inequality S'_i ≤ S_{i−k} from the lemma implies that this is a valid solution for I'. (Note that this is why we could not include G_1 in I': its items have no item i−k to replace.)

Lemma: Suppose there is a solution for I' using t bins. Then there is a solution for I_large using at most t + k bins.

Proof: Consider a partition of I' into t bins. By the inequality S_i ≤ S'_i from the lemma, this partition remains valid under the original sizes. We just need to add the items of G_1, which we can do trivially with k more bins by putting each item of G_1 into its own bin.

So suppose that we could solve I' optimally.
Then we can use the second lemma above to turn this into a solution for I_large using at most OPT(I') + k bins, and the first lemma implies that this is at most OPT(I_large) + k bins. If we set k = ⌊ε · Σ_{i∈I_large} S_i⌋, then since Σ_{i∈I_large} S_i ≤ OPT(I_large) we know that k ≤ ε·OPT(I_large), and hence we have a solution using at most (1+ε)·OPT(I_large) bins. Adding the small items using the earlier theorem now gives the overall approximation guarantee.

So we just need to solve I' optimally. Fortunately, this is essentially the same dynamic program as in Makespan Scheduling! We may assume that l ≥ 2/ε², since otherwise it is trivial to

solve I_large optimally through brute force. Then the number of distinct sizes in I' is at most

    l/k  =  l / ⌊ε · Σ_{i∈I_large} S_i⌋  ≤  2l / (ε · Σ_{i∈I_large} S_i)  ≤  2l / (ε · lε/2)  =  4/ε²,

where we used Σ_{i∈I_large} S_i ≥ l·ε/2 (every large item has size at least ε/2) and ⌊x⌋ ≥ x/2 for x ≥ 1 (here ε · Σ_{i∈I_large} S_i ≥ lε²/2 ≥ 1 since l ≥ 2/ε²). So there are only a constant number of distinct sizes, so as with Makespan Scheduling there are at most a constant number of configurations, and there is a dynamic programming algorithm that finds the minimum number of bins in time n^{O(1/ε²)}.
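To make the construction concrete, here is a short Python sketch of linear grouping. The function name is mine, and the sizes are assumed to already be sorted in decreasing order, as in the notes; the first group G_1 is dropped, matching the definition of I'.

```python
def linear_grouping(sizes_desc, k):
    """Build the rounded instance I': drop the first group G_1 and round
    every other item up to the largest size in its group."""
    rounded = []
    for j in range(k, len(sizes_desc), k):       # start of G_2, G_3, ...
        group = sizes_desc[j:j + k]
        rounded += [sizes_desc[j]] * len(group)  # group max = its first item
    return rounded
```

With k = ⌊ε · Σ S_i⌋ this produces an instance with at most 4/ε² distinct sizes, matching the computation above.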


Michal Forišek: Early beta verzia skrípt z ADŠ

Časová zložitosť II Michal Forišek: Early beta verzia skrípt z ADŠ In this part of the article we will focus on estimating the time complexity for recursive programs. In essence, this will lead to finding

JUST-IN-TIME SCHEDULING WITH PERIODIC TIME SLOTS. Received December May 12, 2003; revised February 5, 2004

Scientiae Mathematicae Japonicae Online, Vol. 10, (2004), 431 437 431 JUST-IN-TIME SCHEDULING WITH PERIODIC TIME SLOTS Ondřej Čepeka and Shao Chin Sung b Received December May 12, 2003; revised February

A Randomized Approximation Algorithm for MAX 3-SAT

A Randomized Approximation Algorithm for MAX 3-SAT CS 511 Iowa State University December 8, 2008 Note. This material is based on Kleinberg & Tardos, Algorithm Design, Chapter 13, Section 4, and associated

Theory of Computation Prof. Kamala Krithivasan Department of Computer Science and Engineering Indian Institute of Technology, Madras

Theory of Computation Prof. Kamala Krithivasan Department of Computer Science and Engineering Indian Institute of Technology, Madras Lecture No. # 31 Recursive Sets, Recursively Innumerable Sets, Encoding

Chapter 3: Elementary Number Theory and Methods of Proof. January 31, 2010

Chapter 3: Elementary Number Theory and Methods of Proof January 31, 2010 3.4 - Direct Proof and Counterexample IV: Division into Cases and the Quotient-Remainder Theorem Quotient-Remainder Theorem Given

HOMEWORK 5 SOLUTIONS. n!f n (1) lim. ln x n! + xn x. 1 = G n 1 (x). (2) k + 1 n. (n 1)!

Math 7 Fall 205 HOMEWORK 5 SOLUTIONS Problem. 2008 B2 Let F 0 x = ln x. For n 0 and x > 0, let F n+ x = 0 F ntdt. Evaluate n!f n lim n ln n. By directly computing F n x for small n s, we obtain the following

9th Max-Planck Advanced Course on the Foundations of Computer Science (ADFOCS) Primal-Dual Algorithms for Online Optimization: Lecture 1

9th Max-Planck Advanced Course on the Foundations of Computer Science (ADFOCS) Primal-Dual Algorithms for Online Optimization: Lecture 1 Seffi Naor Computer Science Dept. Technion Haifa, Israel Introduction

Section 3 Sequences and Limits, Continued.

Section 3 Sequences and Limits, Continued. Lemma 3.6 Let {a n } n N be a convergent sequence for which a n 0 for all n N and it α 0. Then there exists N N such that for all n N. α a n 3 α In particular

POWER SETS AND RELATIONS

POWER SETS AND RELATIONS L. MARIZZA A. BAILEY 1. The Power Set Now that we have defined sets as best we can, we can consider a sets of sets. If we were to assume nothing, except the existence of the empty

The Goldberg Rao Algorithm for the Maximum Flow Problem

The Goldberg Rao Algorithm for the Maximum Flow Problem COS 528 class notes October 18, 2006 Scribe: Dávid Papp Main idea: use of the blocking flow paradigm to achieve essentially O(min{m 2/3, n 1/2 }

NP-Completeness and Cook s Theorem

NP-Completeness and Cook s Theorem Lecture notes for COM3412 Logic and Computation 15th January 2002 1 NP decision problems The decision problem D L for a formal language L Σ is the computational task:

ALGEBRAIC APPROACH TO COMPOSITE INTEGER FACTORIZATION

ALGEBRAIC APPROACH TO COMPOSITE INTEGER FACTORIZATION Aldrin W. Wanambisi 1* School of Pure and Applied Science, Mount Kenya University, P.O box 553-50100, Kakamega, Kenya. Shem Aywa 2 Department of Mathematics,

Lecture 3. Mathematical Induction

Lecture 3 Mathematical Induction Induction is a fundamental reasoning process in which general conclusion is based on particular cases It contrasts with deduction, the reasoning process in which conclusion

Discrete Mathematics, Chapter 3: Algorithms

Discrete Mathematics, Chapter 3: Algorithms Richard Mayr University of Edinburgh, UK Richard Mayr (University of Edinburgh, UK) Discrete Mathematics. Chapter 3 1 / 28 Outline 1 Properties of Algorithms

4.3 Limit of a Sequence: Theorems

4.3. LIMIT OF A SEQUENCE: THEOREMS 5 4.3 Limit of a Sequence: Theorems These theorems fall in two categories. The first category deals with ways to combine sequences. Like numbers, sequences can be added,

Min-Max Multiway Cut

Min-Max Multiway Cut Zoya Svitkina and Éva Tardos Cornell University, Department of Computer Science, Upson Hall, Ithaca, NY 14853. {zoya, eva}@cs.cornell.edu Abstract. We propose the Min-max multiway