
18.409 An Algorithmist's Toolkit                                    September 17, 2009

Lecture 3

Lecturer: Jonathan Kelner        Scribe: Andre Wibisono

1 Outline

Today's lecture covers three main parts:

- The Courant-Fischer formula and Rayleigh quotients
- The connection of $\lambda_2$ to graph cutting
- Cheeger's Inequality

2 Courant-Fischer and Rayleigh Quotients

The Courant-Fischer theorem gives a variational formulation of the eigenvalues of a symmetric matrix, which can be useful for obtaining bounds on the eigenvalues.

Theorem 1 (Courant-Fischer Formula) Let $A$ be an $n \times n$ symmetric matrix with eigenvalues $\lambda_1 \le \lambda_2 \le \dots \le \lambda_n$ and corresponding eigenvectors $v_1, \dots, v_n$. Then

$$\lambda_1 = \min_{\|x\|=1} x^T A x = \min_{x \neq 0} \frac{x^T A x}{x^T x},$$

$$\lambda_2 = \min_{\substack{\|x\|=1 \\ x \perp v_1}} x^T A x = \min_{\substack{x \neq 0 \\ x \perp v_1}} \frac{x^T A x}{x^T x},$$

$$\lambda_n = \lambda_{\max} = \max_{\|x\|=1} x^T A x = \max_{x \neq 0} \frac{x^T A x}{x^T x}.$$

In general, for $1 \le k \le n$, let $S_k$ denote the span of $v_1, \dots, v_k$ (with $S_0 = \{0\}$), and let $S_k^{\perp}$ denote the orthogonal complement of $S_k$. Then

$$\lambda_k = \min_{\substack{\|x\|=1 \\ x \in S_{k-1}^{\perp}}} x^T A x = \min_{\substack{x \neq 0 \\ x \in S_{k-1}^{\perp}}} \frac{x^T A x}{x^T x}.$$

Proof   Let $A = Q^T \Lambda Q$ be the eigendecomposition of $A$. We observe that $x^T A x = x^T Q^T \Lambda Q x = (Qx)^T \Lambda (Qx)$, and since $Q$ is orthogonal, $\|Qx\| = \|x\|$. Thus it suffices to consider the case when $A = \Lambda$ is a diagonal matrix with the eigenvalues $\lambda_1, \dots, \lambda_n$ on the diagonal. Then we can write

$$x^T A x = \begin{pmatrix} x_1 & x_2 & \dots & x_n \end{pmatrix} \begin{pmatrix} \lambda_1 & & \\ & \ddots & \\ & & \lambda_n \end{pmatrix} \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix} = \sum_{i=1}^{n} \lambda_i x_i^2.$$

We note that when $A$ is diagonal, the eigenvectors of $A$ are $v_k = e_k$, the standard basis vectors in $\mathbb{R}^n$, i.e. $(e_k)_i = 1$ if $i = k$, and $(e_k)_i = 0$ otherwise. Then the condition $x \in S_{k-1}^{\perp}$ implies $x \perp e_i$ for $i = 1, \dots, k-1$, so $x_i = \langle x, e_i \rangle = 0$. Therefore, for $x \in S_{k-1}^{\perp}$ with $\|x\| = 1$, we have

$$x^T A x = \sum_{i=1}^{n} \lambda_i x_i^2 = \sum_{i=k}^{n} \lambda_i x_i^2 \ge \lambda_k \sum_{i=k}^{n} x_i^2 = \lambda_k \|x\|^2 = \lambda_k.$$
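The following minimal sketch (not from the notes; it assumes only NumPy, and the test matrix, dimension, and sample count are arbitrary choices) checks the Courant-Fischer characterization numerically: over vectors forced into $S_k^{\perp}$, the Rayleigh quotient never drops below $\lambda_{k+1}$ and gets close to it.

```python
# Numerical sanity check of the Courant-Fischer formula (sketch, assumes NumPy).
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((6, 6))
A = (B + B.T) / 2                        # an arbitrary symmetric test matrix
eigvals, eigvecs = np.linalg.eigh(A)     # eigenvalues ascending, orthonormal eigenvectors

def rayleigh(A, x):
    return x @ A @ x / (x @ x)

k = 2                                    # check lambda_3 (0-indexed position k = 2)
V = eigvecs[:, :k]                       # v_1, ..., v_k
samples = []
for _ in range(10_000):
    x = rng.standard_normal(6)
    x -= V @ (V.T @ x)                   # project onto the orthogonal complement S_k^perp
    samples.append(rayleigh(A, x))

# Every sample is >= eigvals[k]; the minimum over many samples approaches it.
print(eigvals[k], min(samples))
```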

On the other hand, plugging in $x = e_k \in S_{k-1}^{\perp}$ yields $x^T A x = (e_k)^T A e_k = \lambda_k$. This shows that

$$\lambda_k = \min_{\substack{\|x\|=1 \\ x \in S_{k-1}^{\perp}}} x^T A x.$$

Similarly, for $\|x\| = 1$,

$$x^T A x = \sum_{i=1}^{n} \lambda_i x_i^2 \le \lambda_{\max} \sum_{i=1}^{n} x_i^2 = \lambda_{\max} \|x\|^2 = \lambda_{\max}.$$

On the other hand, taking $x = e_n$ yields $x^T A x = (e_n)^T A e_n = \lambda_{\max}$. Hence we conclude that

$$\lambda_{\max} = \max_{\|x\|=1} x^T A x.$$

The Rayleigh quotient is the application of the Courant-Fischer Formula to the Laplacian of a graph.

Corollary 2 (Rayleigh Quotient) Let $G = (V, E)$ be a graph and let $L$ be the Laplacian of $G$. We already know that the smallest eigenvalue is $\lambda_1 = 0$ with eigenvector $v_1 = \mathbf{1}$. By the Courant-Fischer Formula,

$$\lambda_2 = \min_{\substack{x \neq 0 \\ x \perp \mathbf{1}}} \frac{x^T L x}{x^T x} = \min_{\substack{x \neq 0 \\ x \perp \mathbf{1}}} \frac{\sum_{(i,j) \in E} (x_i - x_j)^2}{\sum_{i \in V} x_i^2},$$

$$\lambda_{\max} = \max_{x \neq 0} \frac{x^T L x}{x^T x} = \max_{x \neq 0} \frac{\sum_{(i,j) \in E} (x_i - x_j)^2}{\sum_{i \in V} x_i^2}.$$

We can interpret the formula for $\lambda_2$ as putting springs on each edge (with slightly weird boundary conditions corresponding to normalization) and minimizing the potential energy of the configuration.

Some big matrices are hard or annoying to diagonalize, so in some cases we may not want to calculate the exact value of $\lambda_2$. However, we can still get an approximation by constructing a vector $x$ that has a small Rayleigh quotient. Similarly, we can find a lower bound on $\lambda_{\max}$ by constructing a vector that has a large Rayleigh quotient. We will look at two examples in which we bound $\lambda_2$.

2.1 Example 1: The Path Graph

Let $P_{2n+1}$ be the path graph on $2n+1$ vertices. Label the vertices $0, 1, \dots, 2n$ from one end of the path to the other. Consider the vector $x \in \mathbb{R}^{2n+1}$ given by $x_i = i - n$ for vertices $i = 0, 1, \dots, 2n$. Note that $\sum_{i=0}^{2n} x_i = 0$, so $x \perp \mathbf{1}$. Calculating the Rayleigh quotient for $x$ gives us

$$\frac{\sum_{(i,j) \in E} (x_i - x_j)^2}{\sum_{i \in V} x_i^2} = \frac{2n}{\sum_{i=0}^{2n} (i - n)^2} = \frac{2n}{\Omega(n^3)} = O\!\left(\frac{1}{n^2}\right).$$

Thus we can bound $\lambda_2 \le O(1/n^2)$. We knew this was true from the explicit formula for $\lambda_2$ in terms of sines and cosines from Lecture 2, but this is a much cleaner and more general result.

2.2 Example 2: A Complete Binary Tree

Let $G$ be a complete binary tree on $n = 2^h - 1$ nodes. Define the vector $x \in \mathbb{R}^n$ to have the value $0$ on the root node, $-1$ on all nodes in the left subtree of the root, and $+1$ on all nodes in the right subtree of the root.
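As a quick check of Example 1, the sketch below (again NumPy only; the path-building helper and the choice $n = 50$ are mine, not from the notes) builds the Laplacian of $P_{2n+1}$, evaluates the Rayleigh quotient of the test vector $x_i = i - n$, and compares it with the true $\lambda_2$; the quotient upper-bounds $\lambda_2$ and both scale like $1/n^2$.

```python
# Rayleigh-quotient upper bound on lambda_2 for the path graph (sketch, assumes NumPy).
import numpy as np

def path_laplacian(m):
    """Laplacian of the path graph on m vertices labeled 0, ..., m-1."""
    L = np.zeros((m, m))
    for i in range(m - 1):
        L[i, i] += 1; L[i + 1, i + 1] += 1
        L[i, i + 1] -= 1; L[i + 1, i] -= 1
    return L

n = 50
m = 2 * n + 1
L = path_laplacian(m)
x = np.arange(m) - n                  # test vector x_i = i - n, orthogonal to the all-ones vector
rq = x @ L @ x / (x @ x)              # Rayleigh quotient of x, an upper bound on lambda_2
lam2 = np.linalg.eigvalsh(L)[1]       # the true lambda_2
print(lam2, rq)                       # lam2 <= rq, and both are O(1/n^2)
```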

It is easy to see that $\sum_{i \in V} x_i = 0$, since there are equal numbers of nodes in the left and right subtrees of the root, so $x \perp \mathbf{1}$. Calculating the Rayleigh quotient of $x$ gives us

$$\frac{\sum_{(i,j) \in E} (x_i - x_j)^2}{\sum_{i \in V} x_i^2} = \frac{2}{n-1} = O\!\left(\frac{1}{n}\right).$$

Thus we get $\lambda_2 \le O(1/n)$, again with little effort. It turns out that in this case our approximation is correct to within a constant factor, and we did not even need to diagonalize a big matrix.

3 Graph Cutting

The basic problem of graph cutting is to cut a given graph $G$ into two pieces such that both are pretty big. Graph cutting has many applications in computer science and computing, e.g. for parallel processing, divide-and-conquer algorithms, or clustering. In each application, we want to divide the problem into smaller pieces so as to optimize some measure of efficiency, depending on the specific problem.

3.1 How Do We Cut Graphs?

The first question to ask about graph cutting is what we want to optimize when we are cutting a graph. Before attempting to answer this question, we introduce some notation. Let $G = (V, E)$ be a graph. Given a set $S \subseteq V$ of vertices of $G$, let $\bar{S} = V \setminus S$ be the complement of $S$ in $V$. Let $|S|$ and $|\bar{S}|$ denote the number of vertices in $S$ and $\bar{S}$, respectively. Finally, let $e(S)$ denote the number of edges between $S$ and $\bar{S}$. Note that $e(S) = e(\bar{S})$.

Now we consider some possible answers to our earlier question.

Attempt 1: Min-cut. Divide the vertex set $V$ into two parts $S$ and $\bar{S}$ to minimize $e(S)$. This approach is motivated by the intuition that to get a good cut, we do not want to break too many edges. However, this approach alone is not sufficient, as Figure 1(a) demonstrates. In this example, we ideally want to cut the graph across the two edges in the middle, but the min-cut criterion would result in a cut across the one edge on the right.

Attempt 2: Approximate bisection. Divide the vertex set $V$ into two parts $S$ and $\bar{S}$ such that $|S|$ and $|\bar{S}|$ are approximately $n/2$ (or at least $n/3$). This criterion would take care of the problem mentioned in Figure 1(a), but it is also not free of problems, as Figure 1(b) shows. In this example, we ideally want to cut the graph across the one edge in the middle that separates the two clusters. However, the approximate bisection criterion would force us to make a cut across the dense graph on the left.

[Figure 1: Illustration of problems with the proposed graph cutting criteria. (a) Problem with min-cut. (b) Problem with approximate bisection.]

Now we propose a criterion for graph cutting that balances the two approaches above.

Definition 3 (Cut Ratio) The cut ratio $\varphi$ of a cut $(S, \bar{S})$ is given by

$$\varphi(S) = \frac{e(S)}{\min(|S|, |\bar{S}|)}.$$
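Definition 3 is easy to evaluate directly; the helper below is a small sketch of it (my own function and example graph, not from the notes), counting crossing edges and dividing by the size of the smaller side.

```python
# Cut ratio phi(S) = e(S) / min(|S|, |S-bar|) from Definition 3 (sketch).
def cut_ratio(n, edges, S):
    S = set(S)
    crossing = sum(1 for (i, j) in edges if (i in S) != (j in S))   # e(S)
    return crossing / min(len(S), n - len(S))

# Example: the path on 6 vertices, cut in the middle.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]
print(cut_ratio(6, edges, {0, 1, 2}))    # one crossing edge / 3 vertices = 1/3
```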

The cut of minimum ratio is the cut that minimizes $\varphi(S)$. The isoperimetric number of a graph $G$ is the value of this minimum cut,

$$\varphi(G) = \min_{S \subseteq V} \varphi(S).$$

As we can see from the definition above, the cut ratio tries to minimize the number of edges across the cut while penalizing cuts with a small number of vertices. This criterion turns out to be a good one, and it is widely used for graph cutting in practice.

3.2 An Integer Program for the Cut Ratio

Now that we have a good definition of graph cutting, the question is how to find the optimal cut in a reasonable amount of time. It turns out that we can cast the problem of finding the cut of minimum ratio as an integer program, as follows. Associate every cut $(S, \bar{S})$ with a vector $x \in \{-1, 1\}^n$, where

$$x_i = \begin{cases} -1, & \text{if } i \in S, \\ \;\;\, 1, & \text{if } i \in \bar{S}. \end{cases}$$

Then it is easy to see that we can write

$$e(S) = \frac{1}{4} \sum_{(i,j) \in E} (x_i - x_j)^2.$$

For a boolean statement $A$, let $[A]$ denote the characteristic function of $A$, so $[A] = 1$ if $A$ is true, and $[A] = 0$ if $A$ is false. Then we also have

$$|S| \cdot |\bar{S}| = \left( \sum_{i \in V} [i \in S] \right) \left( \sum_{j \in V} [j \in \bar{S}] \right) = \sum_{i,j \in V} [i \in S,\, j \in \bar{S}] = \frac{1}{2} \sum_{i,j \in V} [x_i \neq x_j] = \frac{1}{4} \sum_{i < j} (x_i - x_j)^2.$$

Combining the two computations above,

$$\min_{x \in \{-1,1\}^n} \frac{\sum_{(i,j) \in E} (x_i - x_j)^2}{\sum_{i<j} (x_i - x_j)^2} = \min_{S \subset V} \frac{e(S)}{|S| \cdot |\bar{S}|}.$$

Now note that if $|V| = |S| + |\bar{S}| = n$, then

$$\frac{|S| \cdot |\bar{S}|}{n} \le \min(|S|, |\bar{S}|) \le \frac{2\, |S| \cdot |\bar{S}|}{n},$$

so we get

$$\frac{1}{2}\, \varphi(G) = \frac{1}{2} \min_{S \subset V} \frac{e(S)}{\min(|S|, |\bar{S}|)} \le \frac{n}{2} \min_{x \in \{-1,1\}^n} \frac{\sum_{(i,j) \in E} (x_i - x_j)^2}{\sum_{i<j} (x_i - x_j)^2} \le \min_{S \subset V} \frac{e(S)}{\min(|S|, |\bar{S}|)} = \varphi(G).$$

Therefore, solving the integer program

$$\min_{x \in \{-1,1\}^n} \frac{\sum_{(i,j) \in E} (x_i - x_j)^2}{\sum_{i<j} (x_i - x_j)^2}$$

allows us to approximate $\varphi(G)$ to within a factor of 2. The bad news is that it is NP-hard to solve this program. However, if we remove the $x \in \{-1, 1\}^n$ constraint, we can actually solve the program. Note that removing the constraint $x \in \{-1, 1\}^n$ is actually the same as saying that $x \in [-1, 1]^n$, since we can scale $x$ without changing the value of the objective function.
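For intuition, the brute-force sketch below (my own code; exponential time, tiny graphs only, and the example graph is mine) enumerates all cuts, computes $\varphi(G)$ exactly, and evaluates $\tfrac{n}{2}$ times the integer-program objective at every cut, confirming that the latter lands between $\varphi(G)/2$ and $\varphi(G)$.

```python
# Brute-force comparison of phi(G) with (n/2) * the integer-program objective (sketch).
from itertools import combinations

def phi_and_ip(n, edges):
    best_phi, best_ip = float("inf"), float("inf")
    for size in range(1, n // 2 + 1):            # every cut has a side of size <= n/2
        for S in combinations(range(n), size):
            S = set(S)
            e_S = sum(1 for (i, j) in edges if (i in S) != (j in S))
            best_phi = min(best_phi, e_S / min(len(S), n - len(S)))
            num = 4 * e_S                        # sum over edges of (x_i - x_j)^2
            den = 4 * len(S) * (n - len(S))      # sum over i < j of (x_i - x_j)^2
            best_ip = min(best_ip, (n / 2) * num / den)
    return best_phi, best_ip

edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 5), (5, 3)]  # two triangles joined by a bridge
phi, ip = phi_and_ip(6, edges)
print(phi, ip)      # phi / 2 <= ip <= phi, a factor-2 approximation
```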

3.3 Interlude on Relaxations

The idea of dropping the constraint $x \in \{-1, 1\}^n$ mentioned in the previous section is actually a recurring technique in algorithms, so it is worthwhile to give a more general explanation of this relaxation technique. A common setup in approximation algorithms is as follows: we want to solve an NP-hard question that takes the form of minimizing $f(x)$ subject to the constraint $x \in C$. Instead, we minimize $f(x)$ subject to a weaker constraint $x \in C'$, where $C \subseteq C'$ (see Figure 2 for an illustration). Let $p$ and $q$ be the points that minimize $f$ in $C$ and $C'$, respectively. Since $C \subseteq C'$, we know that $f(q) \le f(p)$.

[Figure 2: Illustration of the relaxation technique for approximation algorithms: the feasible set $C$ sits inside the relaxed set $C'$, with minimizers $p \in C$ and $q \in C'$, and $q$ is rounded to a feasible point $q' \in C$.]

For this relaxation to be useful, we have to show how to round $q$ to a feasible point $q' \in C$ and prove that $f(q') \le \gamma f(q)$ for some constant $\gamma \ge 1$. This implies $f(q') \le \gamma f(q) \le \gamma f(p)$, so this process gives us a $\gamma$-approximation.

3.4 Solving the Relaxed Program

Going back to our integer program for finding the cut of minimum ratio, now consider the following relaxed program,

$$\min_{x \in \mathbb{R}^n} \frac{\sum_{(i,j) \in E} (x_i - x_j)^2}{\sum_{i<j} (x_i - x_j)^2}.$$

Since the value of the objective function only depends on the differences $x_i - x_j$, we can translate $x \in \mathbb{R}^n$ so that $x \perp \mathbf{1}$, i.e. $\sum_{i=1}^n x_i = 0$. Then observe that

$$\sum_{i<j} (x_i - x_j)^2 = n \sum_{i=1}^n x_i^2,$$

which can be obtained either by expanding the summation directly, or by noting that $x$ is an eigenvector of the Laplacian of the complete graph $K_n$ with eigenvalue $n$ (as we saw in Lecture 2). Therefore, using the Rayleigh quotient,

$$\min_{x \in \mathbb{R}^n} \frac{\sum_{(i,j) \in E} (x_i - x_j)^2}{\sum_{i<j} (x_i - x_j)^2} = \min_{x \perp \mathbf{1}} \frac{\sum_{(i,j) \in E} (x_i - x_j)^2}{n \sum_{i=1}^n x_i^2} = \frac{\lambda_2}{n}.$$
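The sketch below (mine, NumPy only; the small graph is the same hypothetical example as before) verifies the two facts just used: for $x \perp \mathbf{1}$ we have $\sum_{i<j}(x_i-x_j)^2 = n\sum_i x_i^2$, and the relaxed objective evaluated at the second eigenvector $v_2$ equals $\lambda_2/n$.

```python
# Check that the relaxed program's optimum value is lambda_2 / n (sketch, assumes NumPy).
import numpy as np

def laplacian(n, edges):
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, i] += 1; L[j, j] += 1
        L[i, j] -= 1; L[j, i] -= 1
    return L

edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 5), (5, 3)]
n = 6
L = laplacian(n, edges)
lam, V = np.linalg.eigh(L)
v2 = V[:, 1]                                       # second eigenvector, orthogonal to all-ones

num = sum((v2[i] - v2[j]) ** 2 for i, j in edges)  # sum over edges
den = sum((v2[i] - v2[j]) ** 2
          for i in range(n) for j in range(i + 1, n))
print(num / den, lam[1] / n)                       # relaxed objective at v_2 equals lambda_2 / n
print(den, n * (v2 @ v2))                          # sum_{i<j}(x_i - x_j)^2 = n * ||x||^2 when x is orthogonal to 1
```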

Putting all the pieces together, we get

$$\varphi(G) = \min_{S \subset V} \frac{e(S)}{\min(|S|, |\bar{S}|)} \ge \min_{S \subset V} \frac{n\, e(S)}{2\, |S| \cdot |\bar{S}|} = \min_{x \in \{-1,1\}^n} \frac{n \sum_{(i,j) \in E} (x_i - x_j)^2}{2 \sum_{i<j} (x_i - x_j)^2} \ge \min_{x \in \mathbb{R}^n} \frac{n \sum_{(i,j) \in E} (x_i - x_j)^2}{2 \sum_{i<j} (x_i - x_j)^2} = \min_{x \perp \mathbf{1}} \frac{\sum_{(i,j) \in E} (x_i - x_j)^2}{2 \sum_{i=1}^n x_i^2} = \frac{\lambda_2}{2}.$$

4 Cheeger's Inequality

In the previous section, we obtained the bound $\varphi(G) \ge \lambda_2 / 2$, but what about the other direction? For that, we would need a rounding method, which is a way of getting a cut from $\lambda_2$ and $v_2$, and an upper bound on how much the rounding increases the cut ratio that we are trying to minimize. In the next section, we will see how to construct a cut from $\lambda_2$ and $v_2$ that gives us the following bound, which is Cheeger's Inequality.

Theorem 4 (Cheeger's Inequality) Given a graph $G$,

$$\frac{\varphi^2(G)}{2\, d_{\max}} \le \lambda_2 \le 2\, \varphi(G),$$

where $d_{\max}$ is the maximum degree in $G$.

As a side note, the $d_{\max}$ disappears from the formula if we use the normalized Laplacian in our calculations, but the proof is messier and is not fundamentally any different from the proof using the regular Laplacian.

The lower bound of $\varphi^2(G) / (2 d_{\max})$ in Cheeger's Inequality is the best we can do to bound $\lambda_2$. The square factor $\varphi^2(G)$ is unfortunate, but if it were within a constant factor of $\varphi(G)$, we would be able to find a constant-factor approximation of an NP-hard problem. Also, if we look at the examples of the path graph and the complete binary tree, their isoperimetric numbers are the same, since in each we can cut exactly one edge in the middle of the graph and divide it into two asymptotically equal-sized pieces, for a value of $\varphi(G) = O(1/n)$. However, the two graphs have different upper bounds for $\lambda_2$, namely $O(1/n^2)$ and $O(1/n)$ respectively, which demonstrates that both the lower and the upper bounds on $\lambda_2$ in Cheeger's inequality are tight (up to a constant factor).

4.1 How to Get a Cut from $v_2$ and $\lambda_2$

Let $x \in \mathbb{R}^n$ be such that $x \perp \mathbf{1}$. We will use $x$ as a map from the vertices $V$ to $\mathbb{R}$. Cutting $\mathbb{R}$ would thus give a partition of $V$, as follows: order the vertices such that $x_1 \le x_2 \le \dots \le x_n$; then the cut will be defined by the set $S = \{1, \dots, k\}$ for some value of $k$. The value of $k$ cannot be known a priori, since the best cut depends on the graph. In practice, an algorithm would have to try all values of $k$ to actually find the optimal cut after embedding the graph onto the real line.

We will actually prove something slightly stronger than Cheeger's Inequality:
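The sweep procedure just described is easy to implement; here is a sketch (my own code, reusing the hypothetical laplacian() and cut_ratio() helpers from the earlier snippets): embed the vertices on the line using $v_2$, try every prefix cut, and keep the one of smallest ratio.

```python
# Sweep cut from the second eigenvector (sketch; reuses laplacian() and cut_ratio() above).
import numpy as np

def sweep_cut(n, edges):
    L = laplacian(n, edges)
    _, V = np.linalg.eigh(L)
    order = np.argsort(V[:, 1])                # order the vertices by their value in v_2
    best_S, best_phi = None, float("inf")
    for k in range(1, n):                      # try every prefix cut S = {first k vertices}
        S = {int(v) for v in order[:k]}
        phi = cut_ratio(n, edges, S)
        if phi < best_phi:
            best_S, best_phi = S, phi
    return best_S, best_phi

edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 5), (5, 3)]
print(sweep_cut(6, edges))                     # expected to separate the two triangles at the bridge
```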

Theorem 5 For any $x \perp \mathbf{1}$ with $x_1 \le x_2 \le \dots \le x_n$, there is some $i$ for which

$$\frac{x^T L x}{x^T x} \ge \frac{\varphi(\{1, \dots, i\})^2}{2\, d_{\max}}.$$

This is great because it not only implies Cheeger's inequality by taking $x = v_2$, but it also gives an actual cut. It also works even if we have not calculated the exact values of $\lambda_2$ and $v_2$; we just have to get a good approximation of $v_2$, and we can still get a cut.

4.2 Proof of Cheeger's Inequality

4.2.1 Step 1: Preprocessing

First, we are going to do some preprocessing. This step does not reduce the generality of the proof much, but it will make the actual proof cleaner. For simplicity, suppose $n$ is odd. Let $m = (n+1)/2$. Define the vector $y$ by $y_i = x_i - x_m$. We can observe that $y_m = 0$, half of the vertices are to the left of $y_m$, and the other half are to the right of $y_m$.

Claim 6

$$\frac{x^T L x}{x^T x} \ge \frac{y^T L y}{y^T y}.$$

Proof   First, the numerators are equal by the action of the Laplacian:

$$x^T L x = \sum_{(i,j) \in E} (x_i - x_j)^2 = \sum_{(i,j) \in E} \big( (y_i + x_m) - (y_j + x_m) \big)^2 = \sum_{(i,j) \in E} (y_i - y_j)^2 = y^T L y.$$

Next, since $x \perp \mathbf{1}$,

$$y^T y = (x - x_m \mathbf{1})^T (x - x_m \mathbf{1}) = x^T x - 2 x_m (x^T \mathbf{1}) + x_m^2 (\mathbf{1}^T \mathbf{1}) = x^T x + n\, x_m^2 \ge x^T x.$$

Putting the two computations above together yields the desired inequality.

4.2.2 Step 2: A Little More Preprocessing

We do not want edges crossing $y_m = 0$ (because we will later consider the positive and negative vertices separately), so we replace any such edge $(i, j)$ with the two edges $(i, m)$ and $(m, j)$. Call this new edge set $E'$.

Claim 7

$$\frac{\sum_{(i,j) \in E} (y_i - y_j)^2}{\sum_{i \in V} y_i^2} \ge \frac{\sum_{(i,j) \in E'} (y_i - y_j)^2}{\sum_{i \in V} y_i^2}.$$

Proof   The only difference in the numerator comes from the edges $(i, j)$ that we split into $(i, m)$ and $(m, j)$. In that case, it is easy to see (also noting that $y_m = 0$ and $y_i \le 0 \le y_j$) that

$$(y_j - y_i)^2 \ge (y_j - y_m)^2 + (y_m - y_i)^2.$$
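A quick numeric illustration of Claim 6 (a sketch, assuming NumPy and the hypothetical laplacian() helper from above; the graph and random vector are arbitrary): subtracting the median value $x_m$ leaves the numerator unchanged and can only increase the denominator, so the Rayleigh quotient does not go up.

```python
# Claim 6 check: shifting by the median coordinate never increases the Rayleigh quotient.
import numpy as np

edges = [(0, 1), (1, 2), (2, 3), (3, 4)]     # path on 5 vertices (n odd, m = 3)
n = 5
L = laplacian(n, edges)                      # helper from the earlier sketch
rng = np.random.default_rng(1)
x = rng.standard_normal(n)
x -= x.mean()                                # make x orthogonal to the all-ones vector
x_m = np.sort(x)[n // 2]                     # the median value x_m
y = x - x_m                                  # y_i = x_i - x_m; y^T L y = x^T L x, y^T y >= x^T x
print(x @ L @ x / (x @ x), y @ L @ y / (y @ y))   # the second quotient is never larger
```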

4.2.3 Step 3: Breaking the Sum in Half

We would like to break the summations in half so that we do not have to deal with separate cases for positive and negative numbers. Let $E_-$ be the edges $(i, j) \in E'$ with $i, j \le m$, and let $E_+$ be the edges $(i, j) \in E'$ with $i, j \ge m$. We then have

$$\frac{\sum_{(i,j) \in E'} (y_i - y_j)^2}{\sum_{i \in V} y_i^2} = \frac{\sum_{(i,j) \in E_-} (y_i - y_j)^2 + \sum_{(i,j) \in E_+} (y_i - y_j)^2}{\sum_{i=1}^m y_i^2 + \sum_{i=m}^n y_i^2}.$$

Note that $y_m$ appears twice in the summation in the denominator, which is fine since $y_m = 0$. We also know that for any $a, b, c, d > 0$,

$$\frac{a + b}{c + d} \ge \min\left( \frac{a}{c}, \frac{b}{d} \right),$$

so it is enough to bound

$$\frac{\sum_{(i,j) \in E_-} (y_i - y_j)^2}{\sum_{i=1}^m y_i^2} \quad \text{and} \quad \frac{\sum_{(i,j) \in E_+} (y_i - y_j)^2}{\sum_{i=m}^n y_i^2}.$$

Since the two values are handled in essentially the same way, we will focus only on the first one.

4.2.4 The Main Lemma

Let $C_i$ be the number of edges crossing the point $x_i$, i.e. the number of edges in the cut if we were to take $S = \{1, \dots, i\}$. Recall that

$$\varphi = \varphi(G) = \min_{S \subset V} \frac{e(S)}{\min(|S|, |\bar{S}|)},$$

so by taking $S = \{1, \dots, i\}$ we get $C_i \ge \varphi\, i$ for $i \le n/2$ and $C_i \ge \varphi (n - i)$ for $i \ge n/2$. The main lemma we use to prove Cheeger's Inequality is the following.

Lemma 8 (Summation by Parts) For any $z_1 \le z_2 \le \dots \le z_m = 0$,

$$\sum_{(i,j) \in E_-} |z_i - z_j| \ge \varphi \sum_{i=1}^m |z_i|.$$

Proof   For each $(i, j) \in E_-$ with $i < j$, write

$$|z_i - z_j| = z_j - z_i = (z_{i+1} - z_i) + (z_{i+2} - z_{i+1}) + \dots + (z_j - z_{j-1}) = \sum_{k=i}^{j-1} (z_{k+1} - z_k).$$

Summing over $(i, j) \in E_-$, we observe that each term $z_{k+1} - z_k$ appears exactly $C_k$ times. Therefore,

$$\sum_{(i,j) \in E_-} |z_i - z_j| = \sum_{k=1}^{m-1} C_k (z_{k+1} - z_k) \ge \varphi \sum_{k=1}^{m-1} k\, (z_{k+1} - z_k).$$

Note that $z_i \le z_m = 0$, so $|z_i| = -z_i$ for $1 \le i \le m$. Then we can evaluate the last summation above as

$$\varphi \sum_{k=1}^{m-1} k\, (z_{k+1} - z_k) = \varphi \big( (z_2 - z_1) + 2(z_3 - z_2) + 3(z_4 - z_3) + \dots + (m-1)(z_m - z_{m-1}) \big) = \varphi \big( -z_1 - z_2 - \dots - z_{m-1} + (m-1) z_m \big) = \varphi \sum_{i=1}^m |z_i|.$$
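The heart of the proof of Lemma 8 is the rearrangement $\sum_{(i,j)} |z_i - z_j| = \sum_k C_k (z_{k+1} - z_k)$; the sketch below (my own code and an arbitrary example, not from the notes) checks that identity numerically for a sorted sequence and an arbitrary edge set.

```python
# Numeric check of the summation-by-parts identity used in Lemma 8 (sketch, assumes NumPy).
import numpy as np

rng = np.random.default_rng(2)
m = 8
z = np.sort(rng.standard_normal(m))                       # z sorted ascending (0-indexed here)
edges = [(0, 3), (1, 2), (2, 5), (4, 7), (0, 6), (3, 4)]  # arbitrary edges, written with i < j

lhs = sum(abs(z[i] - z[j]) for i, j in edges)
C = [sum(1 for i, j in edges if i <= k < j) for k in range(m - 1)]   # C_k: edges crossing gap k
rhs = sum(C[k] * (z[k + 1] - z[k]) for k in range(m - 1))
print(lhs, rhs)                                           # the two sums agree
```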

4.2.5 Using the Main Lemma to Prove Cheeger's Inequality

Now we can finally prove Cheeger's inequality.

Proof of Cheeger's Inequality:   This proof has five main steps.

1. First, we normalize $y$ so that $\sum_{i=1}^m y_i^2 = 1$.

2. Next (this is perhaps a somewhat unintuitive step, but we want to get squares into our expression), we apply the main lemma (Lemma 8) to the new vector $z$ with $z_i = -y_i^2$. We now have

$$\sum_{(i,j) \in E_-} |y_i^2 - y_j^2| \ge \varphi \sum_{i=1}^m y_i^2 = \varphi.$$

3. Next, we want something that looks like $(y_i - y_j)^2$ instead of $|y_i^2 - y_j^2|$, so we use the Cauchy-Schwarz inequality:

$$\sum_{(i,j) \in E_-} |y_i^2 - y_j^2| = \sum_{(i,j) \in E_-} |y_i - y_j| \cdot |y_i + y_j| \le \left( \sum_{(i,j) \in E_-} (y_i - y_j)^2 \right)^{1/2} \left( \sum_{(i,j) \in E_-} (y_i + y_j)^2 \right)^{1/2}.$$

4. We want to get rid of the $\sum (y_i + y_j)^2$ part, so we bound it by observing that the maximum number of times any $y_i^2$ can show up in the summation over the edges is the maximum degree of any vertex:

$$\sum_{(i,j) \in E_-} (y_i + y_j)^2 \le 2 \sum_{(i,j) \in E_-} (y_i^2 + y_j^2) \le 2\, d_{\max} \sum_{i=1}^m y_i^2 = 2\, d_{\max}.$$

5. Putting it all together, we get

$$\frac{\sum_{(i,j) \in E_-} (y_i - y_j)^2}{\sum_{i=1}^m y_i^2} \ge \frac{\left( \sum_{(i,j) \in E_-} |y_i^2 - y_j^2| \right)^2}{\sum_{(i,j) \in E_-} (y_i + y_j)^2} \ge \frac{\varphi^2}{2\, d_{\max}}.$$

Similarly, we can also show that

$$\frac{\sum_{(i,j) \in E_+} (y_i - y_j)^2}{\sum_{i=m}^n y_i^2} \ge \frac{\varphi^2}{2\, d_{\max}}.$$

Therefore,

$$\frac{x^T L x}{x^T x} \ge \frac{y^T L y}{y^T y} \ge \min\left\{ \frac{\sum_{(i,j) \in E_-} (y_i - y_j)^2}{\sum_{i=1}^m y_i^2},\; \frac{\sum_{(i,j) \in E_+} (y_i - y_j)^2}{\sum_{i=m}^n y_i^2} \right\} \ge \frac{\varphi^2}{2\, d_{\max}}.$$

4.2.6 So who is Cheeger, anyway?

Jeff Cheeger is a differential geometer. His inequality makes a lot more sense in the continuous world, and his motivation came from differential geometry. This was part of his PhD thesis, in which he was investigating heat kernels on smooth manifolds. A heat kernel can be thought of as a point of heat in space, and the question is the speed at which the heat spreads. It can also be thought of as the mixing time of a random walk, which will be discussed in future lectures.
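To close the loop on the discussion of tightness, here is a small end-to-end check (my own sketch; it assumes NumPy and reuses the hypothetical laplacian() helper from above) that both sides of $\varphi^2(G)/(2 d_{\max}) \le \lambda_2 \le 2\varphi(G)$ hold for the path graph and the complete binary tree, where the middle cut attains $\varphi(G)$ exactly.

```python
# Checking both sides of Cheeger's inequality on the two running examples (sketch).
import numpy as np

def lambda2(n, edges):
    return np.linalg.eigvalsh(laplacian(n, edges))[1]     # laplacian() from the earlier sketch

# Path on n vertices: the best cut is the middle edge, so phi = 1 / (n // 2); d_max = 2.
n = 63
path_edges = [(i, i + 1) for i in range(n - 1)]
phi_path = 1 / (n // 2)
print(phi_path ** 2 / (2 * 2), lambda2(n, path_edges), 2 * phi_path)

# Complete binary tree on n = 2^h - 1 vertices: cutting one edge at the root gives
# phi = 1 / (2^(h-1) - 1); d_max = 3.
h = 6
n_tree = 2 ** h - 1
tree_edges = [(i, 2 * i + 1) for i in range(2 ** (h - 1) - 1)] + \
             [(i, 2 * i + 2) for i in range(2 ** (h - 1) - 1)]
phi_tree = 1 / (2 ** (h - 1) - 1)
print(phi_tree ** 2 / (2 * 3), lambda2(n_tree, tree_edges), 2 * phi_tree)
# In each printed row the middle value (lambda_2) lies between the two bounds.
```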

MIT OpenCourseWare
http://ocw.mit.edu

18.409 Topics in Theoretical Computer Science: An Algorithmist's Toolkit
Fall 2009

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.