Randomized algorithms
March 10

What are randomized algorithms? Algorithms that use random numbers to make decisions during their execution.

Why would we want to do this? A deterministic algorithm, for a fixed input, always gives the same output and has the same running time. We want our algorithms to be correct and fast, so a deterministic algorithm must always produce the correct answer and must always run within the stated time bound.

Isn't this overkill? How about this: for a fixed input, we usually produce the correct answer, and usually run within the stated time bound.

Advantages:
- Maybe we can get faster algorithms? (yes!)
- Maybe we can get simpler algorithms? (yes!)
- Maybe we can tackle harder problems? (open)
Two kinds of randomized algorithms

1. Monte Carlo algorithms: may sometimes produce an incorrect solution; we bound the probability of failure.
2. Las Vegas algorithms: always give the correct solution; the only variation is the running time, so we study the distribution of the running time.

Example: Nerds and Doofuses

m students take an exam which has n questions. Every student is either a nerd or a doofus. Nerds get all n answers right. Doofuses get fewer than n/2 answers right.

Problem: Grade all the exams, giving all nerds an A and all doofuses a D.

Deterministic algorithm:
1. For each student, grade at most the first n/2 questions in order; stop as soon as you see a wrong answer.
2. If you've seen a wrong answer, give grade D. Otherwise give grade A.

Correctness: Always correct.
Running time: nm/2

Monte Carlo algorithm (a sketch in Python follows this description):
1. For each student, choose 10 questions at random and grade them.
2. If you've seen a wrong answer, give grade D. Otherwise give grade A.

Correctness: Nerds always get an A; doofuses may get an A too. (What is the probability of this?)
Running time: 10m
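To make the Monte Carlo grader concrete, here is a minimal Python sketch. The exam representation (a list of booleans per student, True meaning a correct answer), the function names, and the choice to sample questions without replacement are illustration-only assumptions, not part of the notes.

    import random

    def monte_carlo_grade(exam, k=10):
        """Grade one exam by checking k randomly chosen questions.

        exam: list of booleans; exam[i] is True iff answer i is correct.
        Returns 'A' if every sampled answer is correct, else 'D'.
        A nerd always gets an A; a doofus (fewer than n/2 correct) can
        still get an A, but only if all k sampled answers happen to be
        correct, which happens with probability below (1/2)**k.
        """
        n = len(exam)
        sample = random.sample(range(n), min(k, n))  # k distinct questions
        return 'A' if all(exam[i] for i in sample) else 'D'

    def grade_all(exams, k=10):
        # About k gradings per exam, so O(k * m) work for m exams.
        return [monte_carlo_grade(e, k) for e in exams]

Sampling with replacement would give essentially the same (1/2)**k failure bound; the notes do not specify which variant is meant.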
Las Vegas algorithm (a sketch in Python appears after the definitions below):
1. For each student, repeatedly choose a question at random and grade it, until you have graded n correct answers or seen a wrong answer.
2. If you've seen a wrong answer, give grade D. Otherwise give grade A.

Correctness: Always correct, provided it terminates!
Running time: Variable; the algorithm may not terminate!

How about computing the average running time? Let RT(e) = running time of the Las Vegas algorithm on exam e. The running time RT(e) is a random variable. We want to compute the expected running time E[RT(e)].

Definition 1.1  T(n) = max_{input e, |e| = n} E[RT(e)] is the expected running time of the algorithm.

Important: Note that the variation in running time is due to the random choices made by the algorithm; we do NOT assume a distribution on the inputs. Compare this with the definition of worst-case running time for a deterministic algorithm:

Definition 1.2  T(n) = max_{input e, |e| = n} RT(e) is the worst-case running time of the deterministic algorithm.

Caveat: A randomized algorithm might have exponential running time in the worst case (or even unbounded) while having good expected running time.

Definition 1.3  The running time of a randomized algorithm A is O(f(n)) with high probability if

    Pr[RT(A) >= c * f(n)] <= 1/n^d,

where c and d are appropriate constants. For technical reasons, we also require that A has expected running time O(f(n)), i.e. E[RT(A)] = O(f(n)).
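Returning to the Las Vegas grader, here is a minimal Python sketch under one reasonable reading of step 1 (stop once every question has been graded correct, or as soon as a wrong answer is seen); the exam representation is the same assumption as in the Monte Carlo sketch above.

    import random

    def las_vegas_grade(exam):
        """Las Vegas grader for one exam (one reading of the notes).

        Repeatedly pick a question uniformly at random and grade it,
        stopping when every question has been graded correct (grade A)
        or a wrong answer has been seen (grade D). The answer is always
        correct when the loop terminates, but the number of gradings is
        a random variable: for a nerd it is a coupon-collector process,
        roughly n * ln n picks in expectation.
        """
        n = len(exam)
        graded_correct = set()
        gradings = 0
        while len(graded_correct) < n:
            q = random.randrange(n)
            gradings += 1
            if not exam[q]:
                return 'D', gradings
            graded_correct.add(q)
        return 'A', gradings

The loop terminates with probability 1, but there is no fixed bound on the number of iterations, which matches the "may not terminate" caveat above.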
2 Some Probability

Definition 2.1 (Random variable)  A random variable is a real-valued function over the set of events [plus some technical conditions].

Definition 2.2 (Conditional probability)  Pr[X = x | Y = y] (read as "the probability that X = x given that Y = y") is defined as

    Pr[X = x | Y = y] = Pr[X = x and Y = y] / Pr[Y = y].

For example, let's roll a die. Let Y = 1 if the number we get is even, and let X be the number we get. Then

    Pr[X = 2 | Y = 1] = Pr[number is 2 and number is even] / Pr[number is even] = (1/6) / (1/2) = 1/3.

Lemma 2.3  If X and Y are random variables, then

    Pr[X = x] = sum_y Pr[X = x | Y = y] * Pr[Y = y].

Definition 2.4 (Expectation)  The expectation of a random variable X is

    E[X] = sum_x x * Pr[X = x].

If Y is also a random variable, then the expectation of X given that Y = y is

    E[X | Y = y] = sum_x x * Pr[X = x | Y = y].

Corollary 2.5 (follows from Lemma 2.3)  If X and Y are random variables, then

    E[X] = sum_y E[X | Y = y] * Pr[Y = y].

Quick question: What is the expected running time of the Las Vegas algorithm given earlier if all students are doofuses?

Lemma 2.6 (Linearity of expectation)  For any two random variables X and Y, we have E[X + Y] = E[X] + E[Y].

Definition 2.7 (Independence)  X and Y are independent random variables if Pr[X = x | Y = y] = Pr[X = x] for all x and y.

Lemma 2.8  If X and Y are independent random variables, then E[XY] = E[X] * E[Y].
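Before moving on, the die example above can be checked empirically. The following throwaway sketch (not part of the notes) estimates Pr[X = 2 | Y = 1] and E[X | Y = 1] by simulation; the exact values are 1/3 and 4.

    import random

    def simulate(trials=200_000):
        """Estimate Pr[X = 2 | Y = 1] and E[X | Y = 1] for one die roll,
        where X is the outcome and Y = 1 iff the outcome is even."""
        even_rolls = []
        for _ in range(trials):
            x = random.randint(1, 6)
            if x % 2 == 0:          # condition on the event Y = 1
                even_rolls.append(x)
        p_two_given_even = sum(1 for x in even_rolls if x == 2) / len(even_rolls)
        e_x_given_even = sum(even_rolls) / len(even_rolls)
        return p_two_given_even, e_x_given_even

    if __name__ == "__main__":
        p, e = simulate()
        print(f"Pr[X=2 | Y=1] ~ {p:.3f} (exact 1/3)")
        print(f"E[X | Y=1]    ~ {e:.3f} (exact 4)")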
3 Sorting Nuts and Bolts

Problem 3.1  You are given n nuts and n bolts. Every nut has a matching bolt, and all n pairs of nuts and bolts have different sizes. Unfortunately, you get the nuts and bolts separated from each other, and you have to match the nuts to the bolts. Furthermore, all you can do is try to match one bolt against one nut (i.e., you cannot compare two nuts to each other, or two bolts to each other). When comparing a nut to a bolt, either they match, or one is smaller than the other. How do we match the n nuts to the n bolts quickly?

Answer: Simulate QuickSort (a runnable sketch follows the pseudocode):

    MatchNutsAndBolts(N, B):
        Pick a random nut a_n from N (the pivot)
        Find its matching bolt a_b in B
        B_L <- all bolts in B smaller than a_n
        B_R <- all bolts in B larger than a_n
        N_L, N_R <- defined similarly (comparing the nuts against a_b)
        MatchNutsAndBolts(N_R, B_R)
        MatchNutsAndBolts(N_L, B_L)
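Here is a runnable Python sketch of MatchNutsAndBolts. The representation is an assumption made for illustration: nuts and bolts are lists of their (hidden) sizes, and the only comparisons performed between the two collections are nut-versus-bolt comparisons.

    import random

    def match_nuts_and_bolts(nuts, bolts):
        """Return a list of (nut, bolt) pairs with equal sizes.

        nuts, bolts: lists of distinct sizes; each nut has exactly one
        matching bolt. Only nut-vs-bolt comparisons are used, in the
        spirit of the randomized QuickSort simulation from the notes.
        """
        if not nuts:
            return []
        pivot_nut = random.choice(nuts)                          # random pivot nut
        pivot_bolt = next(b for b in bolts if b == pivot_nut)    # its matching bolt
        bolts_l = [b for b in bolts if b < pivot_nut]
        bolts_r = [b for b in bolts if b > pivot_nut]
        nuts_l = [a for a in nuts if a < pivot_bolt]   # nuts compared against the pivot bolt
        nuts_r = [a for a in nuts if a > pivot_bolt]
        return (match_nuts_and_bolts(nuts_l, bolts_l)
                + [(pivot_nut, pivot_bolt)]
                + match_nuts_and_bolts(nuts_r, bolts_r))

    # Example: in a real instance the sizes are hidden; here they double as identifiers.
    sizes = list(range(1, 11))
    nuts = random.sample(sizes, len(sizes))
    bolts = random.sample(sizes, len(sizes))
    assert all(nut == bolt for nut, bolt in match_nuts_and_bolts(nuts, bolts))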
Theorem 3.2  The expected running time of this algorithm is T(n) = O(n log n), where n is the number of nuts and bolts. The worst-case running time of this algorithm is O(n^2).

Proof: Let RT(n) = maximum running time of MatchNutsAndBolts on instances of size n (a random variable). The rank of a bolt is its location in the sorted order of the bolts. Let P = rank of the first pivot chosen by MatchNutsAndBolts (a random variable). We want to compute

    T(n) = E[RT(n)]                                        (by definition)
         = sum_{k=1}^{n} E[RT(n) | P = k] * Pr[P = k]      (by Corollary 2.5)
         = sum_{k=1}^{n} E[RT(n) | P = k] * (1/n)          (because Pr[P = k] = 1/n for every k)

What is (RT(n) | P = k)? We have

    (RT(n) | P = k) <= RT(k-1) + RT(n-k) + O(n).

So we have

    T(n) = (1/n) * sum_{k=1}^{n} E[RT(n) | P = k]
        <= (1/n) * sum_{k=1}^{n} E[RT(k-1) + RT(n-k) + O(n)]
         = (1/n) * sum_{k=1}^{n} (E[RT(k-1)] + E[RT(n-k)]) + O(n)    (by linearity of expectation)
         = (1/n) * sum_{k=1}^{n} (T(k-1) + T(n-k)) + O(n).

We have the recurrence

    T(n) = O(n) + (1/n) * sum_{k=1}^{n} (T(k-1) + T(n-k)).

Solution? T(n) = O(n log n). Proof? Guess and verify (by induction).
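The "guess and verify" step can also be sanity-checked numerically. The sketch below (an illustration only, not the induction proof) replaces the O(n) term by exactly n, computes T(n) = n + (1/n) * sum_{k=1}^{n} (T(k-1) + T(n-k)) bottom-up, and prints T(n) / (n log n), which approaches a constant, consistent with T(n) = O(n log n).

    import math

    def solve_recurrence(n_max):
        """Exact values of T(n) = n + (1/n) * sum_{k=1}^{n} (T(k-1) + T(n-k)),
        with T(0) = 0, computed bottom-up using a running prefix sum
        (the two sums over k are the same sum read in both directions)."""
        T = [0.0] * (n_max + 1)
        prefix = 0.0                      # sum of T(0) .. T(n-1)
        for n in range(1, n_max + 1):
            T[n] = n + 2.0 * prefix / n
            prefix += T[n]
        return T

    T = solve_recurrence(100_000)
    for n in (100, 1_000, 10_000, 100_000):
        # ratio approaches a constant (about 2), consistent with T(n) = O(n log n)
        print(n, round(T[n] / (n * math.log(n)), 3))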
4 Analyzing QuickSort

The previous analysis also works for QuickSort. However, there is an alternative analysis which is also very interesting.

Let a_1, ..., a_n be the n given numbers (in sorted order, as they appear in the output). It is enough to bound the number of comparisons performed by QuickSort to bound its running time. Let X_ij = 1 if we compared a_i to a_j, and X_ij = 0 otherwise. X_ij is an indicator variable (i.e., a random variable that only takes the values 0 and 1). The number of comparisons performed by QuickSort is exactly sum_{i<j} X_ij. Hence, the expected running time of QuickSort is

    E[sum_{i<j} X_ij] = sum_{i<j} E[X_ij] = sum_{i<j} Pr[X_ij = 1].

So what is Pr[X_ij = 1]?

Key observation: Two elements a_i and a_j are compared by QuickSort at most once, and this happens if and only if both a_i and a_j are in the same sub-problem and one of them is selected as the pivot. If the pivot is smaller than min(a_i, a_j) or larger than max(a_i, a_j), then a_i and a_j remain in the same sub-problem. So X_ij = 1 iff a_i or a_j is the first element among a_i, a_{i+1}, ..., a_j to be chosen as a pivot. The probability of this happening is clearly

    Pr[X_ij = 1] = 2 / (j - i + 1).

Thus, the expected running time of QuickSort is

    sum_{i<j} Pr[X_ij = 1] = sum_{i<j} 2 / (j - i + 1) = O(n log n).

In fact, the running time of QuickSort is O(n log n) with high probability.
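The identity E[number of comparisons] = sum_{i<j} 2/(j-i+1) can be checked empirically. The sketch below assumes the textbook variant of randomized QuickSort on distinct keys, in which each non-pivot element of a sub-problem is compared to the pivot exactly once; it averages the comparison count over many runs and compares it with the exact sum.

    import random

    def quicksort_comparisons(a):
        """Randomized QuickSort on distinct keys; returns (sorted list, #comparisons)."""
        if len(a) <= 1:
            return list(a), 0
        pivot = random.choice(a)
        left = [x for x in a if x < pivot]
        right = [x for x in a if x > pivot]
        comparisons = len(a) - 1          # each non-pivot element is compared to the pivot once
        sl, cl = quicksort_comparisons(left)
        sr, cr = quicksort_comparisons(right)
        return sl + [pivot] + sr, comparisons + cl + cr

    n, runs = 200, 500
    empirical = sum(quicksort_comparisons(list(range(n)))[1] for _ in range(runs)) / runs
    exact = sum(2.0 / (j - i + 1) for i in range(1, n) for j in range(i + 1, n + 1))
    print(f"empirical {empirical:.1f}  vs  exact sum {exact:.1f}")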
5 QuickSort with High Probability

Given a set of numbers S, QuickSort randomly chooses a pivot from S and splits the set into two sub-problems, S_L and S_R. Every element in a sub-problem of size 2 or more is compared once (to the pivot). So an element takes part in k comparisons iff it occurs in k sub-problems.

We want to show: each element occurs in O(log n) sub-problems with high probability. More specifically, let X_i be an indicator variable such that X_i = 1 iff the element a_i occurs in more than 20 log n sub-problems. Suppose we know that Pr[X_i = 1] <= 1/n^3. Then, if C is the number of comparisons performed by QuickSort, we have

    Pr[C >= 20 n log n] <= Pr[X_i = 1 for some i] <= sum_{i=1}^{n} Pr[X_i = 1] <= n * (1/n^3) = 1/n^2.

(We used the union rule above: Pr[A or B] <= Pr[A] + Pr[B].)

So we will be done if we can show that Pr[X_i = 1] <= 1/n^3. How do we prove such a thing?

5.1 Proving that an element occurs in few sub-problems

We say that a sub-problem S is lucky if it splits evenly into sub-problems S_L and S_R. More specifically, both S_L and S_R must have size at most (3/4)|S|.

Every time an element is in a lucky sub-problem S, it will next be in a sub-problem of size at most (3/4)|S|. So an element can occur in at most M = log_{4/3} n lucky sub-problems.

Let Y_k be an indicator variable such that Y_k = 1 iff the k-th sub-problem containing the element is lucky. A sub-problem is lucky iff the pivot is picked from the middle half of S in sorted order, so Pr[Y_k = 1] = 1/2. Also, if j != k then Y_j and Y_k are independent. So the Y_k's are independent indicator variables that take the value 1 with probability 1/2. Think of these as coin flips, where heads = lucky sub-problem.

We want to know: how many times do we need to toss a fair coin before we get at least M heads, with high probability?

Answer: Next time. But could we also use it to prove that the Las Vegas algorithm for the Nerds and Doofuses problem runs in a certain time T with high probability?
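While the tail bound is deferred to the next lecture, the coin-flip question can already be explored empirically. The sketch below simulates tossing a fair coin until M = log_{4/3} n heads have appeared and reports the median and 99th-percentile number of tosses over many trials; it is an illustration only, not a proof.

    import math
    import random

    def tosses_until_heads(m):
        """Number of fair-coin tosses until m heads have been seen."""
        heads = tosses = 0
        while heads < m:
            tosses += 1
            heads += random.getrandbits(1)
        return tosses

    n = 1_000_000
    M = math.ceil(math.log(n, 4 / 3))          # max number of lucky sub-problems needed
    samples = sorted(tosses_until_heads(M) for _ in range(2000))
    print(f"M = {M}, median tosses = {samples[len(samples) // 2]}, "
          f"99th percentile = {samples[int(0.99 * len(samples))]}")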