Lecture 8: Expanders and Applications


Topics in Complexity Theory and Pseudorandomness (Spring 2013)
Rutgers University
Instructor: Swastik Kopparty
Scribes: Amey Bhangale, Mrinal Kumar

1 Overview

In this lecture, we will introduce some notions of expanders and then explore some of their useful properties and applications.

2 Expanders

In one of the previous lectures, we already used one notion of expander graphs while studying data structures for the set membership problem. We will now look at some other definitions typically used to describe expanders, and then explore their properties.

Definition 1. For a real number α > 0 and a natural number k, a graph G(V, E) is said to be an (α, k)-edge expander if for every S ⊆ V with |S| ≤ k, the number of edges from S to V \ S, denoted e(S, S̄), is at least α|S|.

It is easy to see from the definition that a complete graph is certainly an edge expander. For our applications, however, we will mostly be interested in expander graphs with a much smaller number of edges. For example, we would like d-regular graphs which are expanders with parameters d = O(1) and α = Θ(1). Clearly, α cannot exceed d. The existence of such graphs is guaranteed by the following theorem, which we will come back and prove at a later point.

Theorem 2. For any natural number d ≥ 3 and all sufficiently large n, there exist d-regular graphs on n vertices which are (d/10, n/2)-edge expanders.

Let us now define another notion of expanders, this time based on the number of vertices in the neighborhood of small subsets of vertices.

Definition 3. For a real number α and a natural number k, a graph G(V, E) is said to be an (α, k)-vertex expander if for every S ⊆ V with |S| ≤ k, we have |N(S)| ≥ α|S|. Here N(S) = {x ∈ V : there exists y ∈ S such that (x, y) ∈ E}.

The following theorem guarantees the existence of vertex expanders for some choice of parameters.

Theorem 4. For any natural number d ≥ 3 and all sufficiently large n, there exist d-regular graphs on n vertices which are (d/10, n/10)-vertex expanders.
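As a quick illustration of Definition 1, the edge-expansion condition can be checked mechanically on small graphs. The brute-force checker below is our own sketch, not part of the lecture (it is exponential in |V|, so toy sizes only); the graph encoding and function names are assumptions of this illustration. It uses the complete graph K4 as the example from the text:

```python
from itertools import combinations

def edge_boundary(adj, S):
    """Number of edges from S to V \\ S; adj maps a vertex to its neighbor set."""
    S = set(S)
    return sum(1 for u in S for v in adj[u] if v not in S)

def is_edge_expander(adj, alpha, k):
    """Brute-force check of the (alpha, k)-edge-expander condition."""
    vertices = list(adj)
    for size in range(1, k + 1):
        for S in combinations(vertices, size):
            if edge_boundary(adj, S) < alpha * len(S):
                return False
    return True

# The complete graph K4 is 3-regular; every S with |S| <= 2 has e(S, S-bar) >= |S|.
K4 = {v: {u for u in range(4) if u != v} for v in range(4)}
ok = is_edge_expander(K4, alpha=1.0, k=2)
print(ok)  # True
```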

These definitions of expanders were stated in terms of the expansion properties of small subsets of vertices. We will now define another notion of expanders in terms of the eigenvalues of the adjacency matrix of the graph.

Definition 5. For any natural number d, a d-regular graph G(V, E) is said to be a λ-absolute eigenvalue expander if |λ_2|, |λ_3|, ..., |λ_n| ≤ λ. Here λ_1 ≥ λ_2 ≥ ... ≥ λ_n are the eigenvalues of the adjacency matrix A of G.

The following theorem, which we will take on faith without proof, tells us that there exist expanders all of whose eigenvalues except the first are bounded away from d.

Theorem 6 (Broder, Shamir). For every positive integer d ≥ 3 there exists a λ < d such that for all sufficiently large n, there is a d-regular graph on n vertices which is a λ-absolute eigenvalue expander.

In fact, it is also known that there exist λ-absolute eigenvalue expanders with λ around 2√d.

3 Some properties of eigenvectors and eigenvalues

To understand the definitions better, and to be able to use them for our ends, let us first look at some basic properties of the eigenvalues and eigenvectors of the adjacency matrix of a d-regular graph. Throughout this discussion, we will sometimes refer to the eigenvalues and eigenvectors of the adjacency matrix A of G simply as the eigenvalues and eigenvectors of G.

Lemma 7. Let G(V, E) be an n-vertex undirected d-regular graph for some natural number d, and let λ_1 ≥ λ_2 ≥ ... ≥ λ_n be its n eigenvalues. Then:

1. For all i ∈ [n], −d ≤ λ_i ≤ d.
2. λ_1 = d.
3. G is connected if and only if λ_2 < d.
4. G is non-bipartite if and only if λ_n > −d.

Proof. Before going into the proof, let us note two basic properties of the adjacency matrix A of G:

- A is symmetric.
- Every row and every column of A has exactly d ones.

Now let us prove the items of the lemma.

1. Let v be an eigenvector of A with eigenvalue λ, and let v(x) be the component of v with the maximum absolute value. Then we know that

    λ v(x) = Σ_{j ∈ [n]} A_{xj} v(j).    (1)

Using −|v(x)| ≤ v(j) ≤ |v(x)| for every j in the equation above, we get

    |λ v(x)| = |Σ_{j ∈ [n]} A_{xj} v(j)| ≤ Σ_{j ∈ [n]} A_{xj} |v(x)| = d |v(x)|,    (2)

which gives us

    −d ≤ λ ≤ d.    (3)

2. To show that the maximum eigenvalue is d, it suffices to exhibit a vector v_1 with A v_1 = d v_1; by the previous item, it then follows that λ_1 = d. Consider the vector v_1 ∈ R^n which is 1 in every coordinate. Then A v_1 = d v_1. Thus d is the maximum eigenvalue.

3. Let us first prove the reverse direction. Suppose G is disconnected, and let C_1 and C_2 be two of its connected components. Let v_1 = 1_{C_1} and v_2 = 1_{C_2}. Observe that A v_1 = d v_1 and A v_2 = d v_2. Since v_1 and v_2 are linearly independent, the second largest eigenvalue is also d.

Let us now argue the converse. Suppose the second eigenvalue of G is d. Then there is a vector v_2, orthogonal to 1_V, such that A v_2 = d v_2. Let x be the vertex at which v_2(x) attains its maximum value. Now v_2(x) = (1/d) Σ_{y ∈ N(x)} v_2(y). Since there are precisely d terms in the sum on the right-hand side, the maximality of x implies that v_2(y) = v_2(x) at every neighbor y of x. This argument can now be extended similarly to show that every vertex z in the same connected component as x satisfies v_2(z) = v_2(x). In particular, all the entries of v_2 indexed by vertices in the same connected component as x have the same sign. But since v_2 is orthogonal to 1_V, we know that v_2 has entries of both signs. Hence not all vertices of the graph lie in the same connected component as x, and therefore G is disconnected.

4. Let G(V, E) be a d-regular bipartite graph with bipartition L, R. To show that −d is an eigenvalue, it suffices to give a vector v such that A v = −d v. Consider the vector v defined by

    v(x) = 1 for x ∈ L,    (4)
    v(x) = −1 for x ∈ R.    (5)

It is not difficult to see that A v = −d v, and hence there is an eigenvalue which is at most −d. Since all eigenvalues are at least −d, we get λ_n = −d.

Let us now show that the converse also holds.
Consider a graph which has an eigenvalue at most −d. Item 1 of this lemma then tells us that λ_n = −d. Let us work with a connected graph; for disconnected graphs, the argument can be applied to each connected component. Let v_n be the eigenvector for λ_n, and let x be the component of v_n with the largest absolute value. From the eigenvalue relation, we get

    −d v_n(x) = Σ_{y ∈ N(x)} v_n(y).    (6)

By the choice of x, the only way this equality can hold is if every neighbor of x has the same absolute value as v_n(x), with the opposite sign. This argument can be applied again with one of the neighbors of x as the component of interest, to conclude that all the components

corresponding to vertices in the same connected component as x have the same absolute value, and the sign at any vertex differs from that of all its neighbors. Therefore there are no edges among vertices of the same sign, and hence the positively and negatively signed vertices form a bipartition of G. ∎

Let us now look at how the eigenvalues of a matrix change under some simple operations. We will use these properties crucially in our analysis later in the lecture.

Lemma 8. If λ is an eigenvalue of an n × n matrix M with eigenvector v, then:

1. λ/c is an eigenvalue of (1/c) M with eigenvector v, for any nonzero scalar c.
2. λ^i is an eigenvalue of M^i with eigenvector v, for any natural number i.

Proof. Both parts follow from the basic definitions.

1. Since λ is an eigenvalue of M with eigenvector v, we have M v = λ v. Multiplying both sides by the scalar 1/c gives the desired claim.

2. Since M v = λ v, multiplying both sides by M gives M² v = λ M v = λ² v. Repeating this procedure i − 2 more times gives the required claim. ∎

We will also crucially use the following fact about the spectrum of a real symmetric matrix.

Fact 9. For a real symmetric square matrix M:

- All its eigenvalues are real.
- There is a set of eigenvectors of M, {v_i : i ∈ [n]}, which forms an orthonormal basis of R^n.

3.0.1 Some notation

For the rest of the lecture, G(V, E) will always denote an n-vertex d-regular graph which is a λ-absolute eigenvalue expander, and A will be its adjacency matrix. We write P for the matrix (1/d) A. We use λ_1 ≥ λ_2 ≥ ... ≥ λ_n for the eigenvalues of A, and {v_i : i ∈ [n]} for a set of orthonormal eigenvectors, with A v_i = λ_i v_i for every i ∈ [n]. By Lemma 8, the eigenvalues of P are λ_1/d ≥ λ_2/d ≥ ... ≥ λ_n/d, and the set of eigenvectors remains the same.
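Items 2 and 4 of Lemma 7 can be seen concretely by applying the adjacency matrix to the eigenvectors constructed in the proofs. A small sketch (our own illustration; the graphs K4 and the 4-cycle C4 are chosen purely for convenience):

```python
def matvec(adj, v):
    """Compute A v for the adjacency matrix of adj (vertex -> neighbor list)."""
    return [sum(v[u] for u in adj[x]) for x in sorted(adj)]

# Lemma 7, item 2: the all-ones vector is an eigenvector with eigenvalue d.
K4 = {x: [y for y in range(4) if y != x] for x in range(4)}
r1 = matvec(K4, [1, 1, 1, 1])
print(r1)  # [3, 3, 3, 3], i.e. 3 times the all-ones vector (d = 3)

# Lemma 7, item 4: on a bipartite d-regular graph (here the 4-cycle), the
# vector that is +1 on one side and -1 on the other has eigenvalue -d.
C4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
r2 = matvec(C4, [1, -1, 1, -1])
print(r2)  # [-2, 2, -2, 2], i.e. -2 times the input vector (d = 2)
```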

4 Random walks on expanders

From the definitions of edge and vertex expanders, it is intuitively clear that a random walk on an expander has only a small chance of staying trapped inside a small subset of vertices. We will now analyse a random walk on an expander and show that this intuition is indeed correct.

Let G be a d-regular graph on n vertices which is a λ-absolute eigenvalue expander. Define a random walk on it as follows: we start at a fixed vertex x_0, and in the i-th step we choose a uniformly random neighbor of x_{i−1} as x_i. This procedure gives, at each step, a distribution on the vertex set of the graph. Call the distribution obtained at the end of the i-th step f_i. Clearly f_0 is 1 at x_0 and 0 everywhere else. The next claim gives the relation between f_{i−1} and f_i.

Claim 10. For all integers i ≥ 1, f_i = (1/d) A f_{i−1}.

Proof. The probability of being at a vertex x in the i-th step is precisely the probability that after the (i−1)-th step we are at some vertex y which is a neighbor of x, and that we then pick the edge (x, y) in the i-th step. So we get

    f_i(x) = Σ_{y ∈ N(x)} Pr[we are at y after the (i−1)-th step] · Pr[edge (x, y) is picked in the next step].

This gives f_i(x) = Σ_{y ∈ N(x)} f_{i−1}(y) · (1/d). Hence f_i = (1/d) A f_{i−1}. ∎

Recalling the definition of the matrix P, this says f_i = P f_{i−1}. Applying Claim 10 repeatedly, we get the following.

Claim 11. For all integers i ≥ 0, f_i = P^i f_0.

Let us now show that after a sufficiently large number of steps i, the distribution f_i is close to the uniform distribution on the vertex set of G.

Theorem 12. For an n-vertex d-regular λ-absolute eigenvalue expander G, the distribution f_i obtained at the end of i steps of the random walk defined above satisfies

    ||f_i − U||_2 ≤ (λ/d)^i (1 − 1/n)^{1/2}.

Proof. By Fact 9, we can express any vector u ∈ R^n as a linear combination of {v_i : i ∈ [n]}. In particular, f_0 = Σ_{i ∈ [n]} α_i v_i, where α_i = ⟨f_0, v_i⟩.
Now, using Claim 11, we obtain

    f_i = P^i f_0 = P^i Σ_{j ∈ [n]} α_j v_j.    (7)

We can separate this sum into two parts, keeping in mind that λ_1 = d and that P^i has eigenvalues {(λ_k/d)^i : k ∈ [n]}. We get

    f_i = α_1 v_1 + Σ_{j ∈ [n]\{1}} (λ_j/d)^i α_j v_j.    (8)

Taking l_2 norms on both sides,

    ||f_i − α_1 v_1||_2 = ||Σ_{j ∈ [n]\{1}} (λ_j/d)^i α_j v_j||_2.    (9)

Since G is a λ-absolute eigenvalue expander, we know |λ_j| ≤ λ for j ∈ {2, 3, ..., n}. So the right-hand side simplifies, and we obtain

    ||f_i − α_1 v_1||_2 ≤ (λ/d)^i ||Σ_{j ∈ [n]\{1}} α_j v_j||_2.    (10)

Now, from the definition of the α_j, we know f_0 = α_1 v_1 + Σ_{j ∈ [n]\{1}} α_j v_j. Taking squared l_2 norms on both sides and using the orthonormality of the v_j, we obtain

    ||f_0||_2² = α_1² + ||Σ_{j ∈ [n]\{1}} α_j v_j||_2².    (11)

We also know α_1 = ⟨f_0, v_1⟩. Recall that f_0 is a vector with a single 1 and 0s elsewhere, while v_1 is the normalized all-ones vector, each entry of which is 1/√n. So α_1 = ⟨f_0, v_1⟩ = 1/√n, and

    ||Σ_{j ∈ [n]\{1}} α_j v_j||_2² = ||f_0||_2² − 1/n.    (12)

Moreover, we can observe that α_1 v_1 is the vector all of whose components equal 1/n, which is precisely the uniform distribution U on n vertices. Using both these observations in Equation (10), we obtain

    ||f_i − U||_2 ≤ (λ/d)^i (||f_0||_2² − 1/n)^{1/2}.    (13)

Since ||f_0||_2 = 1, we conclude

    ||f_i − U||_2 ≤ (λ/d)^i (1 − 1/n)^{1/2}.    (14)
∎

From here we can also bound the l_1 distance, using the relation between the l_2 and l_1 norms. We obtain the following corollary.

Corollary 13. ||f_i − U||_1 ≤ √n (λ/d)^i (1 − 1/n)^{1/2}.

In particular, if λ and d are absolute constants with λ < d, then for i = Θ(log n) we can conclude that f_i is 1/n^100-close to the uniform distribution in l_1 norm. It also follows from this that the support of f_i must be all n vertices once i = c log n for a sufficiently large constant c. This gives us the following corollary.

Corollary 14. The diameter of a λ-absolute eigenvalue expander (of constant degree d, with λ < d) is Θ(log n).
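Theorem 12 can be watched in action numerically. The sketch below (our own illustration, not from the lecture) iterates the recurrence f_i = P f_{i−1} of Claim 10 on the Petersen graph, which is 3-regular with all non-trivial eigenvalues in {1, −2}, and is therefore a 2-absolute eigenvalue expander with λ/d = 2/3:

```python
# Petersen graph: outer 5-cycle 0..4, inner pentagram 5..9, spokes i -- i+5.
petersen = {
    0: [1, 4, 5], 1: [0, 2, 6], 2: [1, 3, 7], 3: [2, 4, 8], 4: [0, 3, 9],
    5: [0, 7, 8], 6: [1, 8, 9], 7: [2, 5, 9], 8: [3, 5, 6], 9: [4, 6, 7],
}

def walk_step(adj, f, d):
    """One step of the walk: f_i = (1/d) A f_{i-1} (Claim 10)."""
    g = [0.0] * len(f)
    for u, nbrs in adj.items():
        for v in nbrs:
            g[u] += f[v] / d
    return g

n, d, lam, steps = 10, 3, 2, 20
f = [0.0] * n
f[0] = 1.0                                # f_0: start the walk at vertex 0
for _ in range(steps):
    f = walk_step(petersen, f, d)

dist = max(abs(p - 1.0 / n) for p in f)   # l_inf distance to uniform
print(dist <= (lam / d) ** steps)         # True: l_inf <= l_2 <= (lam/d)^i
```

After 20 steps the distribution is within (2/3)^20 of uniform, matching the (λ/d)^i decay in Theorem 12.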

5 Properties of λ-absolute eigenvalue expanders

5.1 Expander Mixing Lemma

It is not hard to prove that a random d-regular graph is a good expander. The expander mixing lemma can be thought of as a partial converse of this statement: informally, it says that any d-regular λ-absolute eigenvalue expander graph looks close to a random d-regular graph.

Lemma 15 (Expander mixing lemma). If G(V, E) is a d-regular λ-absolute eigenvalue expander, then for all S, T ⊆ V,

    |e(S, T) − (d/n)|S||T|| ≤ λ √(|S||T|),

where e(S, T) is the number of edges between the vertex sets S and T.

In the above expression, (d/n)|S||T| is the expected number of edges between S and T in a random d-regular graph. So the lemma says that in any d-regular λ-absolute eigenvalue expander, the quantity e(S, T) is close to its expected value in a random d-regular graph.

Proof. Let 1_S and 1_T be the 0/1 indicator vectors of the vertex sets S and T respectively. The number of edges between S and T is given by

    e(S, T) = Σ_{u ∈ S, v ∈ T} A_{uv} = 1_S^T A 1_T.

We can write the vectors 1_S and 1_T as linear combinations of the eigenvectors: 1_S = Σ_i α_i v_i and 1_T = Σ_j β_j v_j. So

    e(S, T) = (Σ_i α_i v_i)^T A (Σ_j β_j v_j)
            = Σ_i Σ_j α_i β_j λ_j v_i^T v_j
            = Σ_i α_i β_i λ_i
            = α_1 β_1 λ_1 + Σ_{i=2}^n α_i β_i λ_i.

We know α_1 = ⟨1_S, v_1⟩ = |S|/√n, β_1 = |T|/√n and λ_1 = d, so

    α_1 β_1 λ_1 = (d/n)|S||T|.

Therefore

    e(S, T) − (d/n)|S||T| = Σ_{i=2}^n α_i β_i λ_i.

We can bound the absolute value of the right-hand side by

    |Σ_{i=2}^n α_i β_i λ_i| ≤ λ Σ_{i=2}^n |α_i| |β_i|
                            ≤ λ (Σ_{i=2}^n α_i²)^{1/2} (Σ_{i=2}^n β_i²)^{1/2}
                            ≤ λ ||1_S||_2 ||1_T||_2
                            = λ √(|S||T|),

using the Cauchy-Schwarz inequality in the second step. Therefore

    |e(S, T) − (d/n)|S||T|| ≤ λ √(|S||T|). ∎

6 Error Reduction

Consider a randomized circuit C computing some function f on n variables: for every input x ∈ {0,1}^n, using r truly random bits, C outputs the correct answer with probability at least 2/3. We have already seen the following methods for bringing down the error probability.

1. Repeat the computation of C with fresh random bits each time and take the majority of the outputs. If we repeat the computation m times independently, we can bring the error probability down to exp(−m), but we pay for it in the total number of random bits used by the circuit, which is rm.

2. Instead of using fresh truly random bits each time, use m pairwise independent strings. This brings the error probability down to O(1/m), and the total number of truly random bits used is O(r).

In this section we will see an application of expander graphs to bringing down the error probability using few random bits. Consider the universe {0,1}^r of random seeds. Saying that C has error probability at most 1/3 means that for every input x ∈ {0,1}^n there are at most 2^r/3 values in {0,1}^r which are bad for x; call this bad set B. If we can efficiently generate points in {0,1}^r such that the probability that more than half of them lie inside B is small, then we can use these points as seeds to the circuit C and output the majority answer; the circuit then errs with only this small probability.

Take a d-regular λ-absolute eigenvalue expander graph G on the vertex set {0,1}^r. We want this graph to be explicit, that is, given a vertex x ∈ {0,1}^r of G, we should be able to find its neighbors in poly(r) time.

6.1 Approach 1

1. Pick a uniformly random x ∈ {0,1}^r.
2. Let x_1, x_2, ..., x_d be its neighbors in G.
3. Run circuit C with seeds x_1, x_2, ..., x_d.
4. Output the majority of the answers.

Randomness used: the only randomness in the above procedure is in step 1, which is just r random bits. We will show that the error probability of this algorithm is very small.

Claim 16. With d = m and λ = m^{3/4}, the error probability of the above algorithm is at most O(1/√m).
Define a set D as follows, D = {x such that at least / of its neighbors x 1, x, x are in B} In orer to show the error probability is small we want to argue that the size of D is small, since the error probability of the algorithm is D / r, 9

Consider these two subsets B and D of the vertex set of the graph G. Applying the expander mixing lemma (Lemma 15),

    |e(D, B) − (d/2^r)|B||D|| ≤ λ √(|B||D|).

The number of edges going between D and B is at least |D| d/2, so

    |D| d/2 ≤ (d/2^r)|B||D| + λ √(|B||D|)
    |D| d/2 ≤ |D| d/3 + λ √(|B||D|)        (since |B| ≤ 2^r/3)
    |D| d/6 ≤ λ √(|B||D|)
    |D| ≤ O(λ² |B| / d²).

Setting d = m and λ = m^{3/4},

    Error probability = |D|/2^r ≤ O(1/√m). ∎

So this algorithm uses only r random bits and brings the error probability down from 1/3 to O(1/√m)!

6.2 Approach 2

1. Pick x ∈ {0,1}^r uniformly at random.
2. Take a random walk of length m: x_0 = x, x_1, x_2, ..., x_m.
3. Output the majority of (C(x_i))_{i ∈ [m]}.

Randomness used: the first step requires r random bits, to pick a random vertex of {0,1}^r. Since G is a d-regular graph, we can think of the neighbors of each vertex as labeled by the numbers in [d]. Each step of the random walk then amounts to picking a random number between 1 and d and moving to the corresponding neighbor. Picking a random number between 1 and d needs roughly log d random bits, so a random walk of length m needs m log d random bits in total. Since d is a constant here, the randomness used by the above algorithm is r + O(m).

To bound the error probability of the above algorithm, we will be interested in the following quantities:

1. Pr[all the x_i lie in B].
2. For a fixed set of indices I ⊆ [m] with |I| = m/2, Pr[x_i ∈ B for all i ∈ I].
3. A union bound over all such I.

Since we pick the vertex x_0 uniformly at random, we know Pr[x_0 ∈ B] = |B|/2^r.

We want to estimate Pr[x_0, x_1 ∈ B]. To do this, we start with the distribution f_0 of the vertex x_0, which is the uniform distribution. Let π be the restriction onto B, i.e., π : R^V → R^V with

    π(f)_i = f_i if i ∈ B, and 0 otherwise.

Let P be the normalized adjacency matrix of the expander graph G, and write β = |B|/2^r. Then

    Pr[x_0 ∈ B] = ||π f_0||_1 = β,
    Pr[x_0, x_1 ∈ B] = ||π P π f_0||_1,
    Pr[x_0, x_1, ..., x_j ∈ B] = ||(π P)^j π f_0||_1.

Since π² = π, we have (π P)^j π = (π P π)^j π. Hence if we can get an upper bound on ||(π P π)^j f_0||_2, we get an upper bound on ||(π P)^j π f_0||_1.

Claim 17. For a d-regular λ-absolute eigenvalue expander graph,

    for all f,  ||π P π f||_2 ≤ (β + λ/d) ||f||_2,

where P, β and π are as defined above.

Proof. We can write the vector π f as a linear combination of the eigenvectors v_1, v_2, ..., v_{2^r} of G. Let v⊥ be the component of π f orthogonal to v_1, so that

    π f = α_1 v_1 + v⊥.

Then

    π P π f = π P (α_1 v_1) + π P v⊥ = α_1 π (P v_1) + π (P v⊥).

By the triangle inequality,

    ||π P π f||_2 ≤ α_1 ||π P v_1||_2 + ||π (P v⊥)||_2.    (15)

We can bound the first term of equation (15) as

    α_1 ||π P v_1||_2 = α_1 ||π v_1||_2 = α_1 √β,

since P v_1 = v_1 and π v_1 has exactly |B| nonzero entries, each equal to 1/√(2^r). We know α_1 = ⟨π f, v_1⟩. By the Cauchy-Schwarz inequality and the fact that π f has at most a β fraction of nonzero entries,

    α_1 ≤ ||π f||_2 √β ≤ ||f||_2 √β.

Hence

    α_1 ||π P v_1||_2 ≤ ||f||_2 √β · √β = β ||f||_2.

For the second term of equation (15),

    ||π (P v⊥)||_2 ≤ ||P v⊥||_2 ≤ (λ/d) ||v⊥||_2 ≤ (λ/d) ||f||_2,

as required. ∎

Now,

    Pr[x_1, x_2, ..., x_m all in B] ≤ ||(π P π)^m f_0||_1
        ≤ √(2^r) ||(π P π)^m f_0||_2
        ≤ √(2^r) (β + λ/d)^m ||f_0||_2        (by Claim 17)
        = √(2^r) (β + λ/d)^m · (1/√(2^r))
        = (β + λ/d)^m,

since ||f_0||_2 = 1/√(2^r) for the uniform distribution f_0.

Using the claim we proved, we will now try to estimate the following probability: for a fixed set of indices I ⊆ [m] with |I| = m/2, Pr[x_i ∈ B for all i ∈ I].

The above probability is exactly an expression of the form

    Pr[x_i ∈ B for all i ∈ I] = || ... (π P) P P (π P)(π P) ... f_0 ||_1,

which contains exactly m/2 factors of (π P) and m/2 factors of P. By combining the terms, we can rewrite this expression in the form, for some k_1, k_2, ..., k_{m/2} ≥ 1,

    Pr[x_i ∈ B for all i ∈ I] = ||(π P^{k_1} π)(π P^{k_2} π) ... f_0||_1.

Claim 18. For all f,  ||π P^k π f||_2 ≤ (β + (λ/d)^k) ||f||_2.

Proof. The proof is similar to that of Claim 17, except that in equation (15) the second term is ||P^k v⊥||_2 instead of ||P v⊥||_2, which is at most (λ/d)^k ||f||_2. ∎

Using the above claim, for a fixed set of indices I ⊆ [m] with |I| = m/2,

    Pr[x_i ∈ B for all i ∈ I] = ||(π P^{k_1} π)(π P^{k_2} π) ... f_0||_1
        ≤ √(2^r) ||(π P^{k_1} π)(π P^{k_2} π) ... f_0||_2
        ≤ √(2^r) (β + (λ/d)^{k_1})(β + (λ/d)^{k_2}) ... (β + (λ/d)^{k_{m/2}}) ||f_0||_2
        ≤ (β + λ/d)^{m/2}.

If we choose λ and d such that (β + λ/d)^{1/2} < 1/5, then the error probability of the algorithm is

    Error probability = Pr[majority of the x_i are in B]
        ≤ Σ_{I ⊆ [m], |I| ≥ m/2} Pr[x_i ∈ B for all i ∈ I]
        ≤ O(2^m) · max_{I ⊆ [m], |I| ≥ m/2} Pr[x_i ∈ B for all i ∈ I]
        ≤ O(2^m / 5^m)
        = exp(−Ω(m)).

Hence the algorithm uses r + O(m) random bits and reduces the error probability to exp(−Ω(m)).
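The randomness accounting in Approach 2 is the whole point: r bits once for the start, then log d bits per step. A minimal sketch (our own illustration; a toy 4-cycle stands in for the explicit expander on {0,1}^r, which the lecture assumes but does not construct here):

```python
import random
from math import log2

# Toy stand-in for an explicit d-regular expander on the seed space.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
d, m = 2, 8

def walk_seeds(adj, start, m):
    """Seeds x_0, ..., x_m visited by an m-step random walk (Approach 2)."""
    seeds = [start]
    for _ in range(m):
        seeds.append(random.choice(adj[seeds[-1]]))
    return seeds

seeds = walk_seeds(adj, random.randrange(len(adj)), m)
print(len(seeds))  # m + 1 = 9 seeds to feed the circuit C

# Total randomness: log2(|V|) bits for x_0 plus m * log2(d) bits for the walk,
# versus (m + 1) * log2(|V|) bits for fully independent seeds.
bits_used = log2(len(adj)) + m * log2(d)
print(bits_used)  # 10.0
```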

7 Connectivity of a d-regular graph

In this section we will look at the following problem.

Problem: Given an undirected d-regular graph G on n vertices, determine whether it is connected. The graph is given as input in the form of an adjacency matrix in read-only memory.

This problem is simple if we have access to poly(n) bits of space for the computation:

1. Start at an arbitrary node of the graph G.
2. Perform a DFS/BFS from the starting node and count the number of nodes in the DFS/BFS tree.
3. If the count equals n, the graph is connected; otherwise it is disconnected.

But the problem is not trivial if we have access to only O(log n) space. We will discuss a randomized algorithm that solves this problem in O(log n) space. The algorithm is as follows: for every pair of vertices s, t, take n^10 independent random walks of length n^10 each, starting from s, and check whether any of them ends at the vertex t. If for every pair of vertices this condition is satisfied by at least one random walk, output connected (true); otherwise output disconnected (false).

Claim 19. The above algorithm fails with probability exponentially small in n.

Proof. If the graph is disconnected, then the algorithm always returns false. We will show that if G is connected, the algorithm returns false with very small probability. If G is a connected d-regular graph on n vertices, then apart from λ_1, the absolute value of every other eigenvalue is at most d(1 − 1/n²). (Strictly speaking, this requires G to be non-bipartite; the bipartite case can be handled by adding self-loops, i.e., making the walk lazy.) Let e_s be the indicator vector of a vertex s, and let P be the normalized adjacency matrix of the graph G. By Theorem 12, the property of random walks on an expander graph,

    Pr[a random walk from s lands on t at the end of the n^10-th step]
        ≥ 1/n − ||U − P^{n^10} e_s||_1
        ≥ 1/n − √n (λ/d)^{n^10}
        ≥ 1/n − √n (1 − 1/n²)^{n^10}
        ≥ 1/n − √n e^{−n^8}
        ≥ 1/(2n).

Therefore,

    Pr[none of the n^10 random walks from s lands on t at the end of the n^10-th step]
        ≤ (1 − 1/(2n))^{n^10}
        ≤ exp(−n^9/2).

Hence, by a union bound over all n² pairs (s, t),

    Pr[failing] ≤ n² exp(−n^9/2) ≤ exp(−n^8).

Hence the algorithm fails with exponentially small probability. ∎

In a later lecture, we will see a deterministic algorithm for this problem which uses only O(log n) space.
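The randomized connectivity test translates directly into code. The sketch below (our own illustration) uses toy parameters in place of n^10 walks of length n^10, and compares against a BFS reference; the example graphs, a connected K4 and two disjoint copies of K4, are our choices. Note the walk-based test is O(log n)-space in spirit: it stores only the current vertex and a few counters, never a visited set.

```python
import random
from collections import deque

def bfs_connected(adj):
    """poly(n)-space reference: BFS from vertex 0 and count the reached nodes."""
    seen, q = {0}, deque([0])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                q.append(v)
    return len(seen) == len(adj)

def walk_end(adj, s, length):
    """End vertex of a random walk of the given length starting at s."""
    for _ in range(length):
        s = random.choice(adj[s])
    return s

def walk_connected(adj, walks=300, length=100):
    """Randomized test from the lecture, with toy parameters instead of n^10."""
    for s in adj:
        for t in adj:
            if not any(walk_end(adj, s, length) == t for _ in range(walks)):
                return False
    return True

K4 = {v: [u for u in range(4) if u != v] for v in range(4)}           # connected
two_K4 = {v: [u for u in range(v // 4 * 4, v // 4 * 4 + 4) if u != v]
          for v in range(8)}                                           # disconnected
print(walk_connected(K4), bfs_connected(K4))          # agree: connected
print(walk_connected(two_K4), bfs_connected(two_K4))  # agree: disconnected
```

On the connected K4 (non-bipartite, so the walk mixes), the chance that 300 walks all miss a given target is about (3/4)^300, i.e. negligible, matching the analysis in Claim 19.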