Spectral Graph Theory                                                   Lecture 6

Conductance, the Normalized Laplacian, and Cheeger's Inequality

Daniel A. Spielman                                                      September 21, 2015

Disclaimer

These notes are not necessarily an accurate representation of what happened in class. The notes written before class say what I think I should say. I sometimes edit the notes after class to make them say what I wish I had said. There may be small mistakes, so I recommend that you check any mathematically precise statement before using it in your own work.

These notes were last revised on September 21, 2015.

6.1 Overview

As the title suggests, in this lecture I will introduce conductance, a measure of the quality of a cut, and the normalized Laplacian matrix of a graph. I will then prove Cheeger's inequality, which relates the second-smallest eigenvalue of the normalized Laplacian to the conductance of a graph.

Cheeger [Che70] first proved his famous inequality for manifolds. Many discrete versions of Cheeger's inequality were proved in the late 80's [SJ89, LS88, AM85, Alo86, Dod84, Var85]. Some of these consider the walk matrix (which we will see in a week or two) instead of the normalized Laplacian, and some consider the isoperimetric ratio instead of conductance. The proof that I present today follows an approach developed by Luca Trevisan [Tre11].

6.2 Conductance

Back in Lecture 2, we related the isoperimetric ratio of a subset of the vertices to the second eigenvalue of the Laplacian. We proved that for every $S \subset V$,
$$\theta(S) \geq \lambda_2 (1 - s),$$
where $s = |S|/|V|$ and
$$\theta(S) \stackrel{\mathrm{def}}{=} \frac{|\partial(S)|}{|S|}.$$
Re-arranging terms slightly, this can be stated as
$$\frac{|V|\,|\partial(S)|}{|S|\,|V \setminus S|} \geq \lambda_2.$$
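As a quick sanity check on this recalled bound, here is a small numerical sketch in Python (my own illustration, not part of the original notes); the path graph and the chosen cut are arbitrary.

```python
import numpy as np

# Check theta(S) >= lambda_2 (1 - s) on the path graph 0 - 1 - 2 - 3.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A                 # combinatorial Laplacian L = D - A
lam2 = np.sort(np.linalg.eigvalsh(L))[1]       # second-smallest eigenvalue

S = {0, 1}                                     # the cut {0, 1} | {2, 3}
boundary = sum(A[u, v] for u in S for v in range(4) if v not in S)
theta = boundary / len(S)                      # isoperimetric ratio |boundary(S)| / |S|
s = len(S) / 4
print(theta, lam2 * (1 - s))                   # 0.5 versus roughly 0.29
```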
Cheeger's inequality provides a relation in the other direction. However, the relation is tighter and cleaner when we look at two slightly different quantities: the conductance of the set and the second eigenvalue of the normalized Laplacian.

The formula for conductance has a different denominator that depends upon the sum of the degrees of the vertices in $S$. I will write $d(S)$ for the sum of the degrees of the vertices in $S$. Thus, $d(V)$ is twice the number of edges in the graph. We define the conductance of $S$ to be
$$\phi(S) \stackrel{\mathrm{def}}{=} \frac{|\partial(S)|}{\min(d(S), d(V \setminus S))}.$$
Note that many similar, although sometimes slightly different, definitions appear in the literature. For example, we could instead use
$$\frac{d(V)\,|\partial(S)|}{d(S)\,d(V \setminus S)},$$
which appears below in (6.5). We define the conductance of a graph $G$ to be
$$\phi_G \stackrel{\mathrm{def}}{=} \min_{S \subset V} \phi(S).$$
The conductance of a graph is more useful in many applications than the isoperimetric number. I usually find that conductance is the more useful quantity when you are concerned about edges, and that the isoperimetric ratio is more useful when you are concerned about vertices. Conductance is particularly useful when studying random walks in graphs.

6.3 The Normalized Laplacian

It seems natural to try to relate the conductance to the following generalized Rayleigh quotient:
$$\frac{y^T L y}{y^T D y}. \qquad (6.1)$$
If we make the change of variables
$$D^{1/2} y = x,$$
then this ratio becomes
$$\frac{x^T D^{-1/2} L D^{-1/2} x}{x^T x}.$$
That is an ordinary Rayleigh quotient, which we understand a little better. The matrix in the middle is called the normalized Laplacian (see [Chu97]). We reserve the letter $N$ for this matrix:
$$N \stackrel{\mathrm{def}}{=} D^{-1/2} L D^{-1/2}.$$
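The two definitions above are easy to compute directly. The following sketch (my own illustration; the small example graph and the function names are mine) computes $\phi(S)$ for a set, finds $\phi_G$ by brute force over all cuts of a four-vertex graph, and forms the normalized Laplacian $N = D^{-1/2} L D^{-1/2}$.

```python
import numpy as np
from itertools import combinations

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # a triangle {0, 1, 2} with a pendant vertex 3
deg = A.sum(axis=1)
L = np.diag(deg) - A

def conductance(A, S):
    """phi(S) = |boundary(S)| / min(d(S), d(V \\ S))."""
    n = A.shape[0]
    S = set(S)
    cut = sum(A[u, v] for u in S for v in range(n) if v not in S)
    dS = A[list(S)].sum()                    # sum of degrees of the vertices in S
    return cut / min(dS, A.sum() - dS)

# Brute-force phi_G = min over nonempty proper subsets S of phi(S).
n = A.shape[0]
phi_G = min(conductance(A, S) for k in range(1, n) for S in combinations(range(n), k))

N = np.diag(deg ** -0.5) @ L @ np.diag(deg ** -0.5)   # normalized Laplacian

print(conductance(A, [0, 1]))               # 0.5: cut weight 2, d(S) = d(V \ S) = 4
print(phi_G)                                # also 0.5 for this graph
print(np.round(np.linalg.eigvalsh(N), 3))   # eigenvalues 0 = nu_1 <= ... <= nu_n of N
```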
This matrix often proves more useful when examining graphs in which the vertices have different degrees. We will let
$$0 = \nu_1 \leq \nu_2 \leq \cdots \leq \nu_n$$
denote the eigenvalues of $N$.

The conductance is related to $\nu_2$ as the isoperimetric number is related to $\lambda_2$:
$$\nu_2 / 2 \leq \phi_G. \qquad (6.2)$$
I include a proof of this in the appendix. My goal for today's lecture is to prove Cheeger's inequality,
$$\phi_G \leq \sqrt{2 \nu_2},$$
which is much more interesting. In fact, it is my favorite theorem in spectral graph theory.

The eigenvector of eigenvalue 0 of $N$ is $d^{1/2}$, by which I mean the vector whose entry for vertex $u$ is the square root of the degree of $u$. Observe that
$$N d^{1/2} = D^{-1/2} L D^{-1/2} d^{1/2} = D^{-1/2} L \mathbf{1} = D^{-1/2} \mathbf{0} = \mathbf{0}.$$
The eigenvector of $\nu_2$ is given by
$$\arg\min_{x \perp d^{1/2}} \frac{x^T N x}{x^T x}.$$
Transferring back into the variable $y$, and observing that $x^T d^{1/2} = y^T D^{1/2} d^{1/2} = y^T d$, we find
$$\nu_2 = \min_{y \perp d} \frac{y^T L y}{y^T D y}.$$

6.4 Cheeger's Inequality

Cheeger's inequality proves that if we have a vector $y$, orthogonal to $d$, for which the generalized Rayleigh quotient (6.1) is small, then one can obtain a set of small conductance from $y$. We obtain such a set by carefully choosing a real number $t$ and setting $S_t = \{u : y(u) < t\}$.

Theorem 6.4.1. Let $y$ be a vector orthogonal to $d$. Then, there is a number $t$ for which the set $S_t = \{u : y(u) < t\}$ satisfies
$$\phi(S_t) \leq \sqrt{2\, \frac{y^T L y}{y^T D y}}.$$
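Theorem 6.4.1 is constructive: given any $y$ orthogonal to $d$, one of the threshold cuts $S_t$ already achieves the bound, so it suffices to sort the vertices by $y$ and sweep over the $n-1$ nontrivial thresholds. Here is a sketch of that sweep (my own illustration, not code from the notes; the two-triangle example graph and all names are mine), applied to $y = D^{-1/2} x_2$, where $x_2$ is an eigenvector of $\nu_2$.

```python
import numpy as np

def sweep_cut(A, y):
    """Return the best threshold cut (a prefix of the vertices sorted by y)
    and its conductance. A is a symmetric (possibly weighted) adjacency matrix."""
    n = A.shape[0]
    deg = A.sum(axis=1)
    total = deg.sum()
    order = np.argsort(y)
    best_phi, best_S, S = np.inf, None, set()
    for k in range(n - 1):                    # only the n - 1 nontrivial thresholds matter
        S.add(order[k])
        cut = sum(A[u, v] for u in S for v in range(n) if v not in S)
        dS = deg[list(S)].sum()
        phi = cut / min(dS, total - dS)
        if phi < best_phi:
            best_phi, best_S = phi, set(S)
    return best_S, best_phi

# Example: two triangles joined by one edge.
A = np.zeros((6, 6))
for u, v in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[u, v] = A[v, u] = 1.0
deg = A.sum(axis=1)
L = np.diag(deg) - A
Dis = np.diag(1.0 / np.sqrt(deg))
N = Dis @ L @ Dis
vals, vecs = np.linalg.eigh(N)
y = Dis @ vecs[:, 1]                          # y = D^{-1/2} x_2 is orthogonal to d
S, phi = sweep_cut(A, y)
rho = (y @ L @ y) / (y @ np.diag(deg) @ y)
print(S, phi, np.sqrt(2 * rho))               # phi(S_t) is at most sqrt(2 * rho)
```

Because $x_2$ is orthogonal to $d^{1/2}$, the vector $y = D^{-1/2} x_2$ is orthogonal to $d$, so the theorem applies; here $\rho = y^T L y / y^T D y$ equals $\nu_2$.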
Before proving the theorem, I wish to make one small point about the denominator $y^T D y$ in its statement. It is essentially minimized when $y^T d = 0$, at least with regard to shifts.

Lemma 6.4.2. Let $v_z = y + z \mathbf{1}$. Then, the minimum of $v_z^T D v_z$ is achieved at the $z$ for which $v_z^T d = 0$.

Proof. The derivative with respect to $z$ is $2 d^T v_z$, and the minimum is achieved when this derivative is zero.

We begin our proof of Cheeger's inequality by defining
$$\rho = \frac{y^T L y}{y^T D y}.$$
So, we need to show that there is a $t$ for which $\phi(S_t) \leq \sqrt{2\rho}$.

By renumbering the vertices, we may assume without loss of generality that
$$y(1) \leq y(2) \leq \cdots \leq y(n).$$
We begin with some normalization. Let $j$ be the least number for which
$$\sum_{u=1}^{j} d(u) \geq d(V)/2.$$
We would prefer a vector that is centered at $j$. So, set $z = y - y(j)\mathbf{1}$. This vector $z$ satisfies $z(j) = 0$ and, by Lemma 6.4.2,
$$\frac{z^T L z}{z^T D z} \leq \rho.$$
We also multiply $z$ by a constant so that
$$z(1)^2 + z(n)^2 = 1.$$
Recall that
$$\phi(S) = \frac{|\partial(S)|}{\min(d(S), d(V \setminus S))}.$$
We will define a distribution on $t$ for which we can prove that
$$\mathbb{E}\left[|\partial(S_t)|\right] \leq \sqrt{2\rho}\ \mathbb{E}\left[\min(d(S_t), d(V \setminus S_t))\right].$$
This implies that there is some $t$ for which
$$|\partial(S_t)| \leq \sqrt{2\rho}\ \min(d(S_t), d(V \setminus S_t)),$$
which means $\phi(S_t) \leq \sqrt{2\rho}$. (If this is not immediately clear, note that it is equivalent to assert that $\mathbb{E}\left[\sqrt{2\rho}\ \min(d(S_t), d(V \setminus S_t)) - |\partial(S_t)|\right] \geq 0$, which means that there must be some $t$ for which the expression inside the expectation is non-negative.)
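The normalization just described is mechanical; here is a sketch of it (my own illustration, 0-indexed, not code from the notes):

```python
import numpy as np

def normalize_for_sweep(A, y):
    """Sketch of the normalization step above (0-indexed).

    Sort the vertices by y, find the least j whose cumulative degree reaches
    d(V)/2, center y at y(j), and rescale so that z(1)^2 + z(n)^2 = 1."""
    deg = A.sum(axis=1)
    order = np.argsort(y)
    y_sorted, deg_sorted = y[order], deg[order]
    j = int(np.searchsorted(np.cumsum(deg_sorted), deg.sum() / 2.0))
    z = y_sorted - y_sorted[j]                   # now z(j) = 0
    z = z / np.sqrt(z[0] ** 2 + z[-1] ** 2)      # now z(1)^2 + z(n)^2 = 1
    return z, order, j
```

Applying the same permutation `order` to the rows and columns of $A$, the quotient $z^T L z / z^T D z$ computed on the reordered graph is at most $\rho$: the shift does not change the numerator, since $L$ annihilates constant vectors, and because $y^T d = 0$ the original $y$ already minimizes the denominator over all shifts (Lemma 6.4.2), so shifting can only increase it; the final rescaling leaves the quotient unchanged.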
To switch from working with $y$ to working with $z$, we will set
$$S_t = \{u : z(u) \leq t\}.$$
Trevisan had the remarkable idea of choosing $t$ between $z(1)$ and $z(n)$ with probability density $2|t|$. That is, the probability that $t$ lies in the interval $[a, b]$ is
$$\int_{t=a}^{b} 2|t|\, dt.$$
To see that the total probability is 1, observe that
$$\int_{t=z(1)}^{z(n)} 2|t|\, dt = \int_{t=z(1)}^{0} 2|t|\, dt + \int_{t=0}^{z(n)} 2|t|\, dt = z(1)^2 + z(n)^2 = 1,$$
as $z(1) \leq z(j) \leq z(n)$ and $z(j) = 0$. Similarly, the probability that $t$ lies in the interval $[a, b]$ is
$$\int_{t=a}^{b} 2|t|\, dt = \mathrm{sgn}(b)\, b^2 - \mathrm{sgn}(a)\, a^2,$$
where
$$\mathrm{sgn}(x) = \begin{cases} 1 & \text{if } x > 0, \\ 0 & \text{if } x = 0, \\ -1 & \text{if } x < 0. \end{cases}$$

Lemma 6.4.3.
$$\mathbb{E}_t\left[|\partial(S_t)|\right] = \sum_{(u,v) \in E} \Pr_t\left[(u,v) \in \partial(S_t)\right] \leq \sum_{(u,v) \in E} |z(u) - z(v)| \left(|z(u)| + |z(v)|\right). \qquad (6.3)$$

Proof. An edge $(u, v)$ with $z(u) \leq z(v)$ is on the boundary of $S_t$ if
$$z(u) \leq t < z(v).$$
The probability that this happens is
$$\mathrm{sgn}(z(v))\, z(v)^2 - \mathrm{sgn}(z(u))\, z(u)^2 = \begin{cases} \left|z(u)^2 - z(v)^2\right| & \text{when } \mathrm{sgn}(z(u)) = \mathrm{sgn}(z(v)), \\ z(u)^2 + z(v)^2 & \text{when } \mathrm{sgn}(z(u)) \neq \mathrm{sgn}(z(v)). \end{cases}$$
We now show that both of these terms are upper bounded by $|z(u) - z(v)| \left(|z(u)| + |z(v)|\right)$. Regardless of the signs,
$$\left|z(u)^2 - z(v)^2\right| = |z(u) - z(v)|\, |z(u) + z(v)| \leq |z(u) - z(v)| \left(|z(u)| + |z(v)|\right).$$
When $\mathrm{sgn}(z(u)) \neq \mathrm{sgn}(z(v))$,
$$z(u)^2 + z(v)^2 \leq (z(u) - z(v))^2 = |z(u) - z(v)| \left(|z(u)| + |z(v)|\right),$$
since in this case $|z(u) - z(v)| = |z(u)| + |z(v)|$.
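To make the density $2|t|$ concrete, here is a small Monte Carlo sketch (my own illustration; the values of $z(1)$, $z(n)$, $z(u)$, $z(v)$ are made up but satisfy the normalization). It samples $t$ by inverting the CDF, $F(t) = z(1)^2 - t^2$ for $t \leq 0$ and $F(t) = z(1)^2 + t^2$ for $t \geq 0$, and checks the boundary probability formula used in the proof of Lemma 6.4.3.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_threshold(z1, zn, size):
    """Sample t on [z1, zn] (with z1 <= 0 <= zn and z1^2 + zn^2 = 1)
    from the density 2|t|, by inverse-CDF sampling."""
    u = rng.random(size)
    t = np.empty_like(u)
    neg = u <= z1 ** 2
    t[neg] = -np.sqrt(z1 ** 2 - u[neg])       # the t <= 0 branch of the CDF
    t[~neg] = np.sqrt(u[~neg] - z1 ** 2)      # the t >= 0 branch
    return t

# Boundary probability of a single edge (u, v) with z(u) <= z(v):
# the edge is cut exactly when z(u) <= t < z(v).
z1, zn = -0.6, 0.8            # satisfies z1^2 + zn^2 = 1
zu, zv = -0.3, 0.5            # values of z at the edge's endpoints
t = sample_threshold(z1, zn, 200_000)
empirical = np.mean((zu <= t) & (t < zv))
exact = np.sign(zv) * zv ** 2 - np.sign(zu) * zu ** 2
print(empirical, exact)       # both close to zu^2 + zv^2 = 0.34 here (opposite signs)
```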
We now derive a formula for the expected denominator of $\phi$.

Lemma 6.4.4.
$$\mathbb{E}_t\left[\min(d(S_t), d(V \setminus S_t))\right] = z^T D z.$$

Proof. Observe that
$$\mathbb{E}_t\left[d(S_t)\right] = \sum_u \Pr_t\left[u \in S_t\right] d(u) = \sum_u \Pr_t\left[z(u) \leq t\right] d(u).$$
The result of our centering of $z$ at $j$ is that
$$t < 0 \implies d(S_t) = \min(d(S_t), d(V \setminus S_t)), \quad \text{and} \quad t \geq 0 \implies d(V \setminus S_t) = \min(d(S_t), d(V \setminus S_t)).$$
That is, for $u < j$, $u$ is in the smaller set if $t < 0$; and, for $u \geq j$, $u$ is in the smaller set if $t \geq 0$. So,
$$\begin{aligned}
\mathbb{E}_t\left[\min(d(S_t), d(V \setminus S_t))\right]
&= \sum_{u < j} \Pr\left[z(u) \leq t \text{ and } t < 0\right] d(u) + \sum_{u \geq j} \Pr\left[z(u) > t \text{ and } t \geq 0\right] d(u) \\
&= \sum_{u < j} \Pr\left[z(u) \leq t < 0\right] d(u) + \sum_{u \geq j} \Pr\left[z(u) > t \geq 0\right] d(u) \\
&= \sum_{u < j} z(u)^2 d(u) + \sum_{u \geq j} z(u)^2 d(u) \\
&= \sum_u z(u)^2 d(u) = z^T D z.
\end{aligned}$$

Recall that our goal is to prove that
$$\mathbb{E}\left[|\partial(S_t)|\right] \leq \sqrt{2\rho}\ \mathbb{E}\left[\min(d(S_t), d(V \setminus S_t))\right],$$
and we know that
$$\mathbb{E}_t\left[\min(d(S_t), d(V \setminus S_t))\right] = \sum_u d(u)\, z(u)^2$$
and that
$$\mathbb{E}_t\left[|\partial(S_t)|\right] \leq \sum_{(u,v) \in E} |z(u) - z(v)| \left(|z(u)| + |z(v)|\right).$$
We may use the Cauchy-Schwarz inequality to upper bound the term above by
$$\sqrt{\sum_{(u,v) \in E} (z(u) - z(v))^2}\ \sqrt{\sum_{(u,v) \in E} \left(|z(u)| + |z(v)|\right)^2}. \qquad (6.4)$$
We have defined $\rho$ so that the term under the left-hand square root is at most
$$\rho \sum_u d(u)\, z(u)^2.$$
To bound the right-hand square root, we observe
$$\sum_{(u,v) \in E} \left(|z(u)| + |z(v)|\right)^2 \leq \sum_{(u,v) \in E} 2\left(z(u)^2 + z(v)^2\right) = 2 \sum_u d(u)\, z(u)^2.$$
Putting all these inequalities together yields
$$\mathbb{E}\left[|\partial(S_t)|\right] \leq \sqrt{\rho \sum_u d(u)\, z(u)^2}\ \sqrt{2 \sum_u d(u)\, z(u)^2} = \sqrt{2\rho} \sum_u d(u)\, z(u)^2 = \sqrt{2\rho}\ \mathbb{E}\left[\min(d(S_t), d(V \setminus S_t))\right].$$

I wish to point out two important features of this proof:

1. This proof does not require $y$ to be an eigenvector: it obtains a cut from any vector $y$ that is orthogonal to $d$.

2. This proof goes through almost without change for weighted graphs. The main difference is that for weighted graphs we measure the sum of the weights of the edges on the boundary instead of their number. The main difference in the proof is that lines (6.3) and (6.4) become
$$\mathbb{E}\left[w(\partial(S_t))\right] = \sum_{(u,v) \in E} \Pr\left[(u,v) \in \partial(S_t)\right] w_{u,v} \leq \sum_{(u,v) \in E} |z(u) - z(v)| \left(|z(u)| + |z(v)|\right) w_{u,v} \leq \sqrt{\sum_{(u,v) \in E} w_{u,v} (z(u) - z(v))^2}\ \sqrt{\sum_{(u,v) \in E} w_{u,v} \left(|z(u)| + |z(v)|\right)^2},$$
and we observe that
$$\sum_{(u,v) \in E} w_{u,v} \left(|z(u)| + |z(v)|\right)^2 \leq 2 \sum_u d(u)\, z(u)^2,$$
where $d(u)$ is now the weighted degree of $u$.

The only drawback that I see to the approach that we took in this proof is that the application of Cauchy-Schwarz is a little mysterious. Shang-Hua Teng and I came up with a proof that avoids this by introducing one inequality for each edge. If you want to see that proof, look at my notes from 2009.
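To make the weighted remark concrete, here is a minimal sketch of weighted conductance (my own illustration), where W is a symmetric nonnegative weight matrix and $d(u)$ is the weighted degree:

```python
import numpy as np

def weighted_conductance(W, S):
    """phi(S) for a weighted graph: w(boundary(S)) / min(d(S), d(V \\ S)),
    where d(u) is the weighted degree of u."""
    n = W.shape[0]
    S = set(S)
    cut_weight = sum(W[u, v] for u in S for v in range(n) if v not in S)
    dS = W[list(S)].sum()
    return cut_weight / min(dS, W.sum() - dS)
```

The sweep sketch given after Theorem 6.4.1 handles this case without modification, since it only ever uses row sums of the adjacency matrix and the total weight crossing the cut.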
A   Proof of (6.2)

Lemma A.1. For every $S \subset V$,
$$\phi(S) \geq \nu_2 / 2.$$

Proof. As in Lecture 2, we would like to again use $\chi_S$ as a test vector. But, it is not orthogonal to $d$. To fix this, we subtract a constant. Set
$$y = \chi_S - \sigma \mathbf{1}, \quad \text{where} \quad \sigma = d(S)/d(V).$$
You should now check that $y^T d = 0$:
$$y^T d = \chi_S^T d - \sigma \mathbf{1}^T d = d(S) - (d(S)/d(V))\, d(V) = 0.$$
We already know that $y^T L y = |\partial(S)|$. It remains to compute $y^T D y$. If you remember how the previous computation like this went, you would guess that it is $d(S)(1 - \sigma) = d(S)\, d(V \setminus S)/d(V)$, and you would be right:
$$y^T D y = \sum_{u \in S} d(u)(1 - \sigma)^2 + \sum_{u \notin S} d(u)\, \sigma^2 = d(S)(1 - \sigma)^2 + d(V \setminus S)\,\sigma^2 = d(S) - 2 d(S)\sigma + d(V)\sigma^2 = d(S) - d(S)\sigma = d(S)\, d(V \setminus S)/d(V).$$
So,
$$\nu_2 \leq \frac{y^T L y}{y^T D y} = \frac{|\partial(S)|\, d(V)}{d(S)\, d(V \setminus S)}. \qquad (6.5)$$
As the larger of $d(S)$ and $d(V \setminus S)$ is at least half of $d(V)$, we find
$$\nu_2 \leq \frac{2\, |\partial(S)|}{\min(d(S), d(V \setminus S))} = 2\phi(S).$$
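Here is a quick numerical check of this computation (my own illustration, reusing the two-triangle graph from the earlier sweep sketch):

```python
import numpy as np

# Check: y = chi_S - sigma*1 with sigma = d(S)/d(V) gives y'd = 0,
# y'Ly = |boundary(S)|, and y'Dy = d(S) d(V \ S) / d(V).
A = np.zeros((6, 6))
for u, v in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[u, v] = A[v, u] = 1.0
deg = A.sum(axis=1)
D, L = np.diag(deg), np.diag(deg) - A

S = np.array([1, 1, 1, 0, 0, 0], dtype=float)   # indicator of one triangle
sigma = deg @ S / deg.sum()
y = S - sigma
print(y @ deg)                                   # 0: y is orthogonal to d
print(y @ L @ y)                                 # |boundary(S)| = 1 (the bridge edge)
print(y @ D @ y, (deg @ S) * (deg.sum() - deg @ S) / deg.sum())   # both 3.5
```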
References

[Alo86] N. Alon. Eigenvalues and expanders. Combinatorica, 6(2):83-96, 1986.

[AM85] Noga Alon and V. D. Milman. λ1, isoperimetric inequalities for graphs, and superconcentrators. J. Comb. Theory, Ser. B, 38(1):73-88, 1985.

[Che70] J. Cheeger. A lower bound for the smallest eigenvalue of the Laplacian. In Problems in Analysis, Princeton University Press, 1970.

[Chu97] F. R. K. Chung. Spectral Graph Theory. American Mathematical Society, 1997.

[Dod84] Jozef Dodziuk. Difference equations, isoperimetric inequality and transience of certain random walks. Transactions of the American Mathematical Society, 284(2), 1984.

[LS88] Gregory F. Lawler and Alan D. Sokal. Bounds on the L² spectrum for Markov chains and Markov processes: A generalization of Cheeger's inequality. Transactions of the American Mathematical Society, 309(2), 1988.

[SJ89] Alistair Sinclair and Mark Jerrum. Approximate counting, uniform generation and rapidly mixing Markov chains. Information and Computation, 82(1):93-133, July 1989.

[Tre11] Luca Trevisan. Lecture 4 from CS359G: Graph Partitioning and Expanders, Stanford University, January 2011. Available at trevisan/cs359g/lecture04.pdf.

[Var85] N. Th. Varopoulos. Isoperimetric inequalities and Markov chains. Journal of Functional Analysis, 63(2), 1985.