Lecture 8: Expanders and Applications

Topics in Complexity Theory and Pseudorandomness (Spring 2013)
Rutgers University
Swastik Kopparty
Scribes: Amey Bhangale, Mrinal Kumar

1 Overview

In this lecture, we will introduce some notions of expanders and then explore some of their useful properties and applications.

2 Expanders

In one of the previous lectures, we already used one notion of expander graphs while studying data structures for the set membership problem. We will now look at some other definitions typically used to describe expanders and then explore their properties.

Definition 1. For a real number $\alpha > 0$ and a natural number $k$, a graph $G(V, E)$ is said to be an $(\alpha, k)$-edge expander if for every $S \subseteq V$ with $|S| \le k$, the number of edges from $S$ to $V \setminus S$, denoted $e(S, \bar{S})$, is at least $\alpha |S|$.

It is easy to see from the definition that a complete graph is certainly an edge expander. For our applications, however, we will mostly be interested in expander graphs which have a much smaller number of edges. For example, we would be interested in $d$-regular graphs which are expanders with parameters $d = O(1)$ and $\alpha = \Theta(1)$. Clearly, $\alpha$ cannot exceed $d$. The existence of such graphs is guaranteed by the following theorem, which we will come back and prove at a later point in time.

Theorem 2. For any natural number $d \ge 3$ and sufficiently large $n$, there exist $d$-regular graphs on $n$ vertices which are $(d/10, n/2)$-edge expanders.

Let us now define another notion of expanders, this time based upon the number of vertices in the neighborhood of small subsets of vertices.

Definition 3. For a real number $\alpha$ and a natural number $k$, a graph $G(V, E)$ is said to be an $(\alpha, k)$-vertex expander if for every $S \subseteq V$ with $|S| \le k$, $|N(S)| \ge \alpha |S|$. Here, $N(S) = \{x \in V : \exists y \in S \text{ such that } (x, y) \in E\}$.

The following theorem guarantees the existence of vertex expanders for some choice of parameters.

Theorem 4. For any natural number $d \ge 3$ and sufficiently large $n$, there exist $d$-regular graphs on $n$ vertices which are $(d/10, n/10)$-vertex expanders.

These definitions of expanders were phrased in terms of the expansion properties of small subsets of vertices. We will now define another notion of expanders in terms of the eigenvalues of the adjacency matrix of a graph.

Definition 5. For any natural number $d$, a $d$-regular graph $G(V, E)$ is said to be a $\lambda$-absolute eigenvalue expander if $|\lambda_2|, |\lambda_3|, \ldots, |\lambda_n| \le \lambda$. Here, $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_n$ are the eigenvalues of the adjacency matrix $A$ of $G$.

The following theorem, which we will believe without proof, tells us that there exist expanders all of whose eigenvalues except the first are bounded away from $d$.

Theorem 6 (Broder, Shamir). For all positive integers $d \ge 3$ there exists a $\lambda < d$ such that for all sufficiently large $n$, there is a $d$-regular graph on $n$ vertices which is a $\lambda$-absolute eigenvalue expander.

In fact, we also know that there exist $\lambda$-absolute eigenvalue expanders with $\lambda$ around $2\sqrt{d-1}$.

3 Some properties of eigenvectors and eigenvalues

To understand the definitions better and to be able to use them for our ends, let us first look at some basic properties of the eigenvalues and eigenvectors of the adjacency matrix of a $d$-regular graph. During the course of this entire discussion, we will sometimes refer to the eigenvalues and eigenvectors of the adjacency matrix $A$ of $G$ as the eigenvalues and eigenvectors of $G$.

Lemma 7. Let $G(V, E)$ be an $n$-vertex undirected $d$-regular graph for some natural number $d$. Let $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_n$ be its $n$ eigenvalues. Then,

1. For all $i \in [n]$, $-d \le \lambda_i \le d$.
2. $\lambda_1 = d$.
3. $G$ is connected $\iff \lambda_2 < d$.
4. $G$ is non-bipartite $\iff \lambda_n > -d$.

Proof. Before going into the proof, let us list some basic properties of the adjacency matrix $A$ of $G$:

- $A$ is symmetric.
- Every row and every column of $A$ has exactly $d$ ones.

Now, let us prove the items in the lemma.

1. Let $v$ be an eigenvector of $A$ with eigenvalue $\lambda$. Let $v(x)$ be the component of $v$ with the maximum absolute value. Then, we know that
$$\lambda v(x) = \sum_{j \in [n]} A_{xj} v(j). \quad (1)$$

Now, using $-|v(x)| \le v(j) \le |v(x)|$ in the equation above, we get
$$|\lambda| \, |v(x)| = \Big| \sum_{j \in [n]} A_{xj} v(j) \Big| \le \sum_{j \in [n]} A_{xj} |v(j)| \le d \, |v(x)|. \quad (2)$$
This gives us $-d \le \lambda \le d$. \quad (3)

2. To show that the maximum eigenvalue is $d$, it is sufficient to exhibit a vector $v_1$ such that $A v_1 = d v_1$; from the item above, it then follows that $\lambda_1 = d$. Consider the vector $v_1 \in \mathbb{R}^n$ which is $1$ in all coordinates. Then $A v_1 = d v_1$. Thus, $d$ is the maximum eigenvalue.

3. Let us first prove the reverse direction. Let $G$ be disconnected, and let $C_1$ and $C_2$ be two of its connected components. Let $v_1 = 1_{C_1}$ and $v_2 = 1_{C_2}$. Observe that $A v_1 = d v_1$ and $A v_2 = d v_2$. Since $v_1$ and $v_2$ are linearly independent, this gives us that the second largest eigenvalue is also $d$.

Let us now argue the converse. Let $G$ have second eigenvalue $d$. This means that there is a vector $v_2$, orthogonal to $1_V$, such that $A v_2 = d v_2$. Let $x \in V$ be the coordinate at which $v_2$ attains its maximum value. Now, $v_2(x) = \frac{1}{d} \sum_{y \in N(x)} v_2(y)$. Since there are precisely $d$ terms in the sum on the right hand side, the maximality of $x$ implies that $v_2(y) = v_2(x)$ at every neighbor $y$ of $x$. This argument can now be extended similarly to imply that every vertex $z$ in the same connected component as $x$ satisfies $v_2(z) = v_2(x)$. In particular, all the entries of $v_2$ indexed by the vertices in the same connected component as $x$ have the same sign. But from the fact that $v_2$ is orthogonal to $1_V$, we know that $v_2$ has entries with different signs. Hence, not all vertices of the graph lie in the same connected component as $x$. Therefore, $G$ is disconnected.

4. Let $G(V, E)$ be a $d$-regular bipartite graph with bipartition $L$ and $R$. To show that $-d$ is an eigenvalue, it is sufficient to give a vector $v$ such that $A v = -d v$. Consider the vector $v$ defined as follows:
$$v(x) = 1, \quad x \in L \quad (4)$$
$$v(x) = -1, \quad x \in R \quad (5)$$
Now it is not difficult to see that $A v = -d v$, and hence there is an eigenvalue which is not greater than $-d$. Since we know that all eigenvalues are $\ge -d$, we get $\lambda_n = -d$.

Let us now show that the converse also holds.
Let us now consider a graph which has an eigenvalue less than or equal to $-d$. Item 1 of this lemma tells us that $\lambda_n = -d$. Let us work with a connected graph; for disconnected graphs, the argument can be applied to each of the connected components. Let the eigenvector for $\lambda_n$ be $v_n$, and let $x$ be the coordinate of $v_n$ with the largest absolute value. From the eigenvalue relation, we get
$$-d \, v_n(x) = \sum_{y \in N(x)} v_n(y). \quad (6)$$
From the choice of $x$, the only way this equality can hold is when every neighbor of $x$ has the same absolute value as $v_n(x)$ with the opposite sign. This argument can be applied again with one of the neighbors of $x$ as the coordinate of interest, to conclude that all the components

corresponding to the vertices in the same connected component as $x$ have the same absolute value, and the sign at any vertex differs from that at all its neighbors. Therefore, there are no edges among the vertices with the same sign, and hence the positively and the negatively signed vertices form a bipartition of $G$.

Let us now look at how the eigenvalues of a matrix change with respect to some operations on it. We will be using these properties very crucially in our analysis later in the lecture.

Lemma 8. If $\lambda$ is an eigenvalue of an $n \times n$ matrix $M$ with eigenvector $v$, then

1. $\lambda/d$ is an eigenvalue of $\frac{1}{d} M$ with eigenvector $v$, for a nonzero scalar $d$.
2. $\lambda^i$ is an eigenvalue of $M^i$ with eigenvector $v$, for a natural number $i$.

Proof. The proofs follow basically from the definition.

1. Since $\lambda$ is an eigenvalue of $M$ with eigenvector $v$, we get $M v = \lambda v$. Multiplying both sides by the scalar $\frac{1}{d}$, we get the desired claim.

2. Since $\lambda$ is an eigenvalue of $M$ with eigenvector $v$, we get $M v = \lambda v$. Now, multiplying both sides of the equality by $M$, we get $M M v = M \lambda v$, which is the same as $M^2 v = \lambda M v = \lambda^2 v$. Repeating this procedure $i - 2$ more times, we get the required claim.

We will also crucially use the following fact about the spectrum of a real symmetric matrix.

Fact 9. For a real symmetric square matrix $M$, the following are true:

- All its eigenvalues are real.
- There is a set of eigenvectors of $M$, $\{v_i : i \in [n]\}$, which forms an orthonormal basis for $\mathbb{R}^n$.

Some Notation

For the rest of the lecture, we will always use $G(V, E)$ to refer to an $n$-vertex $d$-regular graph which is a $\lambda$-absolute eigenvalue expander, and $A$ will be its adjacency matrix. We will denote by $P$ the matrix $\frac{1}{d} A$. We will use $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_n$ to refer to the eigenvalues of $A$, and $\{v_i : i \in [n]\}$ for the set of orthonormal eigenvectors, where $A v_i = \lambda_i v_i$ for every $i \in [n]$. From Lemma 8, we know that $\frac{\lambda_1}{d} \ge \frac{\lambda_2}{d} \ge \cdots \ge \frac{\lambda_n}{d}$ are the eigenvalues of $P$, and the set of eigenvectors remains the same.
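The items of Lemma 7 are easy to check numerically on small graphs. The following sketch (not part of the notes; it assumes NumPy is available) verifies them for a complete graph, an even cycle, and a disconnected graph.

```python
import numpy as np

def spectrum(A):
    """Eigenvalues of a symmetric adjacency matrix, sorted in descending order."""
    return np.sort(np.linalg.eigvalsh(A))[::-1]

# K4: complete graph, 3-regular, connected, non-bipartite.
K4 = np.ones((4, 4)) - np.eye(4)
ev = spectrum(K4)
assert np.isclose(ev[0], 3)        # item 2: lambda_1 = d
assert ev[1] < 3                   # item 3: connected => lambda_2 < d
assert ev[-1] > -3                 # item 4: non-bipartite => lambda_n > -d

# C6: the 6-cycle, 2-regular, connected, bipartite.
C6 = np.zeros((6, 6))
for i in range(6):
    C6[i, (i + 1) % 6] = C6[(i + 1) % 6, i] = 1
ev = spectrum(C6)
assert np.isclose(ev[0], 2)        # lambda_1 = d
assert np.isclose(ev[-1], -2)      # bipartite => lambda_n = -d

# Two disjoint triangles: 2-regular but disconnected.
T3 = np.ones((3, 3)) - np.eye(3)
G2 = np.block([[T3, np.zeros((3, 3))], [np.zeros((3, 3)), T3]])
ev = spectrum(G2)
assert np.isclose(ev[1], 2)        # disconnected => lambda_2 = d as well
print("Lemma 7 checks passed")
```

Note that `eigvalsh` is used rather than `eigvals` since the adjacency matrix is symmetric, which also guarantees real eigenvalues, in line with Fact 9.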

4 Random walks on expanders

From the definitions of edge and vertex expanders, it is intuitively clear that while doing a random walk on an expander, the chances of being trapped inside a small subset of vertices should be small. We will now analyse a random walk on an expander and show that this is indeed correct.

Let $G$ be a $d$-regular graph on $n$ vertices which is a $\lambda$-absolute eigenvalue expander. Let us define a random walk on it as follows: we start at a fixed vertex $x_0$ in the first step, and in the $i$-th step, we choose a uniformly random neighbor of $x_{i-1}$ as $x_i$. This procedure gives us a distribution on the vertex set of the graph. Let us call the distribution obtained at the end of the $i$-th step $f_i$. Clearly, $f_0$ is $1$ at $x_0$ and $0$ everywhere else. The next claim gives the relation between $f_{i-1}$ and $f_i$.

Claim 10. For all integers $i \ge 1$, $f_i = \frac{1}{d} A f_{i-1}$.

Proof. The probability of being at a vertex $x$ in the $i$-th step is precisely the probability that we are at a vertex $y$ which is a neighbor of $x$ in the $(i-1)$-th step and we pick the edge $(x, y)$ in the $i$-th step. So we get
$$f_i(x) = \sum_{y \in N(x)} \Pr[\text{we are at } y \text{ after the } (i-1)\text{-th step}] \cdot \Pr[\text{edge } (x, y) \text{ is picked in the next step}].$$
This gives us $f_i(x) = \sum_{y \in N(x)} f_{i-1}(y) \cdot \frac{1}{d}$. Hence, $f_i = \frac{1}{d} A f_{i-1}$.

Recall that from the definition of the matrix $P$, we get $f_i = P f_{i-1}$. Applying Claim 10 multiple times, we get the following.

Claim 11. For all integers $i \ge 0$, $f_i = P^i f_0$.

Let us now show that after a sufficiently large number of steps $i$, the distribution $f_i$ is close to the uniform distribution on the vertex set of $G$.

Theorem 12. For an $n$-vertex $d$-regular $\lambda$-absolute eigenvalue expander $G$, the distribution $f_i$ obtained at the end of $i$ steps of the random walk defined above satisfies
$$\|f_i - U\|_2 \le \left( \frac{\lambda}{d} \right)^i \left( 1 - \frac{1}{n} \right)^{1/2}.$$

Proof. Using Fact 9 above, we know that we can express any vector $u \in \mathbb{R}^n$ as a linear combination of $\{v_i : i \in [n]\}$. In particular, $f_0 = \sum_{i \in [n]} \alpha_i v_i$, where $\alpha_i = \langle f_0, v_i \rangle$.
Now, using Claim 11, we obtain
$$f_i = P^i f_0 = P^i \sum_{j \in [n]} \alpha_j v_j. \quad (7)$$
We can separate this sum into two parts, keeping in mind that $\lambda_1 = d$ and $P^i$ has eigenvalues $\{(\lambda_k/d)^i : k \in [n]\}$. We get
$$f_i = \alpha_1 v_1 + \sum_{j \in [n] \setminus \{1\}} \left( \frac{\lambda_j}{d} \right)^i \alpha_j v_j. \quad (8)$$

Taking $\ell_2$ norms on both sides,
$$\|f_i - \alpha_1 v_1\|_2 = \Big\| \sum_{j \in [n] \setminus \{1\}} \left( \frac{\lambda_j}{d} \right)^i \alpha_j v_j \Big\|_2. \quad (9)$$
Since $G$ is a $\lambda$-absolute eigenvalue expander, we know that $|\lambda_i| \le \lambda$ for $i \in \{2, 3, \ldots, n\}$. So the right hand side gets simplified, and we obtain
$$\|f_i - \alpha_1 v_1\|_2 \le \left( \frac{\lambda}{d} \right)^i \Big\| \sum_{j \in [n] \setminus \{1\}} \alpha_j v_j \Big\|_2. \quad (10)$$
Now, from the definition of the $\alpha_i$, we know that $f_0 = \alpha_1 v_1 + \sum_{j \in [n] \setminus \{1\}} \alpha_j v_j$. Taking the $\ell_2$ norm of both sides and using the orthonormality of the $v_i$'s, we obtain
$$\|f_0\|_2^2 = \alpha_1^2 + \Big\| \sum_{j \in [n] \setminus \{1\}} \alpha_j v_j \Big\|_2^2. \quad (11)$$
We also know that $\alpha_1 = \langle f_0, v_1 \rangle$. Recall that $f_0$ is a vector with all $0$'s and a $1$ at one position, and that $v_1$ is the normalized all-ones vector, so each of its entries is $\frac{1}{\sqrt{n}}$. So $\alpha_1 = \langle f_0, v_1 \rangle = \frac{1}{\sqrt{n}}$, and
$$\|f_0\|_2^2 - \frac{1}{n} = \Big\| \sum_{j \in [n] \setminus \{1\}} \alpha_j v_j \Big\|_2^2. \quad (12)$$
Besides, we can also observe that $\alpha_1 v_1$ is a vector whose components are all equal to $\frac{1}{n}$, which is precisely the uniform distribution $U$ on $n$ vertices. Using both these observations, we obtain from Equation (10) the following:
$$\|f_i - U\|_2 \le \left( \frac{\lambda}{d} \right)^i \left( \|f_0\|_2^2 - \frac{1}{n} \right)^{1/2}. \quad (13)$$
Now, the $\ell_2$ norm of $f_0$ is $1$. So we obtain
$$\|f_i - U\|_2 \le \left( \frac{\lambda}{d} \right)^i \left( 1 - \frac{1}{n} \right)^{1/2}. \quad (14)$$

From here, we can also say something about the $\ell_1$ norm, using the relation between the $\ell_2$ and the $\ell_1$ norms. We obtain the following corollary.

Corollary 13. $\|f_i - U\|_1 \le \sqrt{n} \left( \frac{\lambda}{d} \right)^i \left( 1 - \frac{1}{n} \right)^{1/2}$.

In particular, if $\lambda$ and $d$ are absolute constants with $\lambda < d$, then for $i = \Theta(\log n)$ we can conclude that $f_i$ is $\frac{1}{n^{100}}$-close to the uniform distribution in $\ell_1$ norm. It also follows from this statement that the support of $f_i$ has to be all of $[n]$ for $i = c \log n$, where $c$ is a sufficiently large constant. This gives us the following corollary.

Corollary 14. The diameter of a $\lambda$-absolute eigenvalue expander (with $\lambda$ and $d$ constant) is $\Theta(\log n)$.
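The decay in Theorem 12 can be checked numerically. Below is a small sketch (not part of the notes; it assumes NumPy): on the complete graph $K_{16}$, which is $15$-regular with all non-trivial eigenvalues equal to $-1$ (so $\lambda = 1$), we verify at every step that $\|f_i - U\|_2$ obeys the bound of Equation (14).

```python
import numpy as np

# K_n with n = 16 is a (n-1)-regular graph with lambda = 1.
n, d = 16, 15
A = np.ones((n, n)) - np.eye(n)
P = A / d                      # normalized adjacency matrix
lam = 1.0                      # |lambda_i| = 1 for all i >= 2 on K_n

f = np.zeros(n); f[0] = 1.0    # f_0: the walk starts at vertex 0
U = np.full(n, 1.0 / n)        # uniform distribution on the vertex set

for i in range(1, 8):
    f = P @ f                  # Claim 10: f_i = P f_{i-1}
    bound = (lam / d) ** i * np.sqrt(1 - 1 / n)
    assert np.linalg.norm(f - U) <= bound + 1e-12   # Theorem 12
print("Theorem 12 bound holds for every step checked")
```

On $K_n$ the bound is in fact tight: $P$ acts on the orthogonal complement of $v_1$ exactly by the scalar $-1/d$, so $\|f_i - U\|_2 = (1/d)^i \sqrt{1 - 1/n}$.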

5 Properties of $\lambda$-absolute eigenvalue expanders

5.1 Expander Mixing Lemma

It is not hard to prove that a random $d$-regular graph is a good expander. We can think of the expander mixing lemma as a sort of converse of this statement: informally, it says that any $d$-regular $\lambda$-absolute eigenvalue expander is close to a random $d$-regular graph.

Lemma 15 (Expander mixing lemma). If $G(V, E)$ is a $d$-regular $\lambda$-absolute eigenvalue expander, then for all $S, T \subseteq V$,
$$\Big| e(S, T) - \frac{d}{n} |S| |T| \Big| \le \lambda \sqrt{|S| |T|},$$
where $e(S, T)$ is the number of edges between the vertex sets $S$ and $T$.

In the above expression, $\frac{d}{n} |S| |T|$ is the expected number of edges between $S$ and $T$ in a random $d$-regular graph. So the lemma says that for any $d$-regular $\lambda$-absolute eigenvalue expander, the quantity $e(S, T)$ is close to its expected value in a random $d$-regular graph.

Proof. Let $1_S$ and $1_T$ be the indicator $0/1$ vectors of the vertex sets $S$ and $T$ respectively. The number of edges between $S$ and $T$ is given by the following expression:
$$e(S, T) = \sum_{u \in S, v \in T} A_{uv} = 1_S^T A 1_T.$$
We can write the vectors $1_S$ and $1_T$ as linear combinations of the eigenvectors, $1_S = \sum_i \alpha_i v_i$ and $1_T = \sum_j \beta_j v_j$. So,
$$e(S, T) = \Big( \sum_i \alpha_i v_i \Big)^T A \Big( \sum_j \beta_j v_j \Big) = \sum_i \sum_j \alpha_i \beta_j \lambda_j v_i^T v_j = \sum_i \alpha_i \beta_i \lambda_i = \alpha_1 \beta_1 \lambda_1 + \sum_{i=2}^n \alpha_i \beta_i \lambda_i.$$

We know $\alpha_1 = \frac{|S|}{\sqrt{n}}$, $\beta_1 = \frac{|T|}{\sqrt{n}}$, and $\lambda_1 = d$, so
$$\alpha_1 \beta_1 \lambda_1 = \frac{d}{n} |S| |T|.$$
Therefore,
$$\Big| e(S, T) - \frac{d}{n} |S| |T| \Big| = \Big| \sum_{i=2}^n \alpha_i \beta_i \lambda_i \Big|.$$
We can bound the right hand side as
$$\Big| \sum_{i=2}^n \alpha_i \beta_i \lambda_i \Big| \le \lambda \sum_{i=2}^n |\alpha_i \beta_i| \le \lambda \Big( \sum_{i=2}^n \alpha_i^2 \Big)^{1/2} \Big( \sum_{i=2}^n \beta_i^2 \Big)^{1/2} \le \lambda \|1_S\|_2 \|1_T\|_2 = \lambda \sqrt{|S| |T|}.$$
Therefore,
$$\Big| e(S, T) - \frac{d}{n} |S| |T| \Big| \le \lambda \sqrt{|S| |T|}.$$

6 Error Reduction

Consider a randomized circuit $C$ computing some function $f$ on $n$ variables. The property of the circuit $C$ is that for every input $x \in \{0, 1\}^n$, it outputs the correct answer with probability at least $2/3$, using $r$ truly random bits. We have already seen the following methods to bring down the error probability.
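The expander mixing lemma is easy to test empirically. The sketch below (assuming NumPy; the specific circulant graph is just a convenient small example, not from the notes) computes $\lambda$ from the spectrum and checks the inequality on random subset pairs.

```python
import numpy as np

rng = np.random.default_rng(0)

# A 4-regular circulant graph on Z_12: i ~ i +- 1 and i +- 2 (mod 12).
n, d = 12, 4
A = np.zeros((n, n))
for i in range(n):
    for s in (1, 2):
        A[i, (i + s) % n] = A[i, (i - s) % n] = 1
eigs = np.sort(np.linalg.eigvalsh(A))[::-1]
lam = max(abs(eigs[1]), abs(eigs[-1]))        # lambda = max_{i >= 2} |lambda_i|

# Check |e(S,T) - (d/n)|S||T|| <= lambda * sqrt(|S||T|) on random subset pairs.
for _ in range(200):
    S = rng.random(n) < 0.5
    T = rng.random(n) < 0.5
    e_ST = S.astype(float) @ A @ T.astype(float)   # 1_S^T A 1_T
    deviation = abs(e_ST - d / n * S.sum() * T.sum())
    assert deviation <= lam * np.sqrt(S.sum() * T.sum()) + 1e-9
print("expander mixing lemma verified on 200 random subset pairs")
```

As in the proof, $e(S, T) = 1_S^T A 1_T$, so edges inside $S \cap T$ are counted in both orientations; the check is against exactly the quantity the lemma bounds.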

1. Repeat the computation of $C$ with fresh random bits every time and take the majority of the outputs. If we repeat the computation $m$ times independently, we can bring the error probability down to $\exp(-m)$. But we pay for it in the total number of random bits used by the circuit, which is $rm$.

2. Instead of using fresh truly random bits every time, we can use $m$ pairwise independent strings, which brings the error probability down to $O(1/m)$. Here the total number of truly random bits used is $2r$.

In this section we will see an application of expander graphs to bringing down the error probability using few random bits.

Consider the universe $\{0, 1\}^r$ of random seeds. The circuit $C$ having error probability at most $1/3$ means that for every input $x \in \{0, 1\}^n$ there are at most $2^r / 3$ values in $\{0, 1\}^r$ which are bad for $x$; let $B$ be this bad set. If we can efficiently generate points in the set $\{0, 1\}^r$ such that the probability that more than half of them lie inside $B$ is small, then we can just use these points as seeds to the circuit $C$ and output the majority. Hence the circuit errs with very small probability.

Take a $d$-regular $\lambda$-absolute eigenvalue expander graph $G$ on the vertex set $\{0, 1\}^r$. We want this graph to be explicit; that is, given a vertex $x \in \{0, 1\}^r$ of $G$, we should be able to find its neighbors in $\mathrm{poly}(r)$ time.

6.1 Approach 1

1. Pick a uniformly random $x \in \{0, 1\}^r$.
2. Let $x_1, x_2, \ldots, x_d$ be its neighbors in $G$.
3. Run the circuit $C$ with seeds $x_1, x_2, \ldots, x_d$.
4. Output the majority of the answers.

Randomness used: the only randomness used in the above procedure is in step 1, which is just $r$ random bits. We will show that the error probability of the above algorithm is very small.

Claim 16. With $d = m$ and $\lambda = m^{3/4}$, the error probability of the above algorithm is at most $O(\frac{1}{\sqrt{m}})$.

Proof. For an input $y \in \{0, 1\}^n$, let $B$ be the subset of $\{0, 1\}^r$ which is bad for $y$, i.e., $C(y, z) = f(y)$ if and only if $z \notin B$.
Define a set $D$ as follows:
$$D = \{x : \text{at least } d/2 \text{ of the neighbors } x_1, x_2, \ldots, x_d \text{ of } x \text{ are in } B\}.$$
Since the error probability of the algorithm is exactly $|D| / 2^r$, in order to show that the error probability is small we want to argue that the size of $D$ is small.

Consider these two subsets $B$ and $D$ of the vertex set of the graph $G$. Applying the expander mixing lemma (Lemma 15):
$$\Big| e(D, B) - \frac{d}{2^r} |B| |D| \Big| \le \lambda \sqrt{|B| |D|}.$$
The number of edges going from $D$ to $B$ is at least $|D| \cdot d/2$, so
$$\frac{d |D|}{2} \le \frac{d}{2^r} |B| |D| + \lambda \sqrt{|B| |D|} \le \frac{d |D|}{3} + \lambda \sqrt{|B| |D|},$$
using $|B| \le 2^r / 3$. Rearranging,
$$\frac{d |D|}{6} \le \lambda \sqrt{|B| |D|},$$
$$|D| \le O\left( \left( \frac{\lambda}{d} \right)^2 |B| \right).$$
Setting $d = m$ and $\lambda = m^{3/4}$,
$$\text{Error probability} = \frac{|D|}{2^r} \le O\left( \frac{1}{\sqrt{m}} \right).$$
So this algorithm uses only $r$ random bits and brings down the error probability from $1/3$ to $O(1/\sqrt{m})$!

6.2 Approach 2

1. Pick $x \in \{0, 1\}^r$ uniformly at random.
2. Take a random walk of length $m$: $x_0 = x, x_1, x_2, \ldots, x_m$.
3. Output the majority of $(C(x_i))_{i \in [m]}$.

Randomness used: the first step requires $r$ random bits, to pick a random vertex in $\{0, 1\}^r$. Since $G$ is a $d$-regular graph, we can think of the neighbors of each vertex as labeled by the numbers in $[d]$. So each step of the random walk is the same as picking a random number between $1$ and $d$ and moving to the correspondingly labeled neighbor. Picking a random number between $1$ and $d$ needs roughly $\log d$ random bits, so for a random walk of length $m$ we need $m \log d$ random bits in total. Since $d$ is a constant in this case, the randomness used by the above algorithm is $r + O(m)$.

In order to bound the error probability of the above algorithm, we will be interested in the following quantities.
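The four steps of Approach 1 can be simulated end to end. The sketch below is a toy illustration only: the "expander" is a stand-in circulant graph (a real application would use an explicit expander construction), the bad set $B$ is chosen at random, and the `neighbors` function is hypothetical.

```python
import random

random.seed(1)
R = 8                    # seed length; seed universe is {0, ..., 2^R - 1}
N = 2 ** R
D = 16                   # degree of the sampling graph

def neighbors(x):
    """Neighbors of x in a stand-in D-regular graph on the seed space: a
    circulant where x ~ x +- 1, ..., x +- D/2 (mod N). A real application
    would use an explicit expander construction here instead."""
    return [(x + s) % N for s in range(1, D // 2 + 1)] + \
           [(x - s) % N for s in range(1, D // 2 + 1)]

# B: hypothetical set of bad seeds (density just under 1/3) for one input y.
B = set(random.sample(range(N), N // 3 - 1))

def majority_of_neighbors(x):
    """Approach 1: query the circuit on all D neighbors of one random seed x
    and take the majority; the vote errs iff at least D/2 neighbors lie in B."""
    bad = sum(1 for y in neighbors(x) if y in B)
    return bad < D / 2               # True = correct majority answer

error = sum(1 for x in range(N) if not majority_of_neighbors(x)) / N
print(f"seed error {len(B) / N:.3f} -> amplified error {error:.3f}")
```

Only the choice of $x$ consumes randomness ($r = R$ bits); the $D$ circuit invocations reuse the graph structure, which is the whole point of the construction.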

1. $\Pr[\text{all } x_i \text{ lie in } B]$.
2. For a fixed set of indices $I \subseteq [m]$ with $|I| = m/2$: $\Pr[x_i, i \in I, \text{ are all in } B]$.
3. A union bound over all such $I$'s.

Since we pick the vertex $x_0$ uniformly at random, we know $\Pr[x_0 \in B] = \frac{|B|}{2^r}$. We want to estimate $\Pr[x_0, x_1 \in B]$. To do this, we start with the distribution $f_0$ corresponding to the distribution of the vertex $x_0$, namely the uniform distribution. Let $\pi$ be the restriction onto $B$, i.e., $\pi : \mathbb{R}^V \to \mathbb{R}^V$,
$$\pi(f)_i = \begin{cases} f_i & \text{if } i \in B \\ 0 & \text{otherwise.} \end{cases}$$
Let $P$ be the normalized adjacency matrix of the expander graph $G$, and let $\beta = |B|/2^r$. Then,
$$\Pr[x_0 \in B] = \|\pi f_0\|_1 = \beta,$$
$$\Pr[x_0, x_1 \in B] = \|\pi P \pi f_0\|_1,$$
$$\Pr[x_0, x_1, \ldots, x_j \in B] = \|(\pi P)^j \pi f_0\|_1.$$
Since $\pi^2 = \pi$, we have $(\pi P)^j \pi = (\pi P \pi)^j$. Hence if we can get an upper bound on $\|(\pi P \pi)^j f_0\|_2$, we get an upper bound on $\|(\pi P \pi)^j f_0\|_1$.

Claim 17. For a $d$-regular $\lambda$-absolute eigenvalue expander graph,
$$\|\pi P \pi f\|_2 \le \left( \beta + \frac{\lambda}{d} \right) \|f\|_2 \quad \text{for all } f,$$
where $P$, $\beta$ and $\pi$ are as defined above.

Proof. We can write the vector $\pi f$ as a linear combination of the eigenvectors $v_1, v_2, \ldots, v_{2^r}$ of $G$. Let $v_\perp$ be the component of the vector $\pi f$ orthogonal to $v_1$:
$$\pi f = \alpha_1 v_1 + v_\perp,$$
$$\pi P \pi f = \pi P (\alpha_1 v_1) + \pi P v_\perp = \alpha_1 \pi P v_1 + \pi (P v_\perp).$$
By the triangle inequality,
$$\|\pi P \pi f\|_2 \le \alpha_1 \|\pi P v_1\|_2 + \|\pi (P v_\perp)\|_2. \quad (15)$$

We can bound the first expression in Equation (15) as follows. Since $P v_1 = v_1$,
$$\alpha_1 \|\pi P v_1\|_2 = \alpha_1 \|\pi v_1\|_2 = \alpha_1 \sqrt{\beta}.$$
We know $\alpha_1 = \langle \pi f, v_1 \rangle$. By the Cauchy-Schwarz inequality and the fact that $\pi f$ has at most a $\beta = |B|/2^r$ fraction of non-zero entries,
$$\alpha_1 \le \|\pi f\|_2 \sqrt{\beta} \le \|f\|_2 \sqrt{\beta}.$$
Hence,
$$\alpha_1 \|\pi P v_1\|_2 \le \|f\|_2 \sqrt{\beta} \cdot \sqrt{\beta} = \beta \|f\|_2.$$
For the second expression in Equation (15),
$$\|\pi (P v_\perp)\|_2 \le \|P v_\perp\|_2 \le \frac{\lambda}{d} \|v_\perp\|_2 \le \frac{\lambda}{d} \|f\|_2,$$
as required.

Now,
$$\Pr[x_1, x_2, \ldots, x_m \text{ all in } B] \le \|(\pi P \pi)^m f_0\|_1 \le \sqrt{2^r} \, \|(\pi P \pi)^m f_0\|_2 \le \sqrt{2^r} \left( \beta + \frac{\lambda}{d} \right)^m \|f_0\|_2 \quad \text{(by Claim 17)}$$
$$= \sqrt{2^r} \left( \beta + \frac{\lambda}{d} \right)^m \frac{1}{\sqrt{2^r}} = \left( \beta + \frac{\lambda}{d} \right)^m.$$
Using the claim we proved, we will now estimate the following probability: for a fixed set of indices $I \subseteq [m]$ with $|I| = m/2$, $\Pr[x_i, i \in I, \text{ are all in } B]$.

The above probability is exactly an expression of the form
$$\Pr[x_i, i \in I, \text{ are all in } B] = \| \cdots (\pi P) P \cdots P (\pi P)(\pi P) \cdots f_0 \|_1,$$
which contains exactly $m/2$ factors of $(\pi P)$ and $m/2$ factors of $P$. By combining the terms, we can rewrite the above expression in the form
$$\Pr[x_i, i \in I, \text{ are all in } B] = \| (\pi P^{k_1} \pi)(\pi P^{k_2} \pi) \cdots f_0 \|_1$$
for some $k_1, k_2, \ldots, k_{m/2} \ge 1$.

Claim 18. $\|\pi P^k \pi f\|_2 \le \left( \beta + \left( \frac{\lambda}{d} \right)^k \right) \|f\|_2$.

Proof. The proof of this claim is similar to that of Claim 17, except that in Equation (15) we have the second term $\|P^k v_\perp\|_2$ instead of $\|P v_\perp\|_2$, which is at most $(\lambda/d)^k \|f\|_2$.

Using the above claim, for a fixed set of indices $I \subseteq [m]$ with $|I| = m/2$,
$$\Pr[x_i, i \in I, \text{ are all in } B] = \| (\pi P^{k_1} \pi)(\pi P^{k_2} \pi) \cdots f_0 \|_1 \le \sqrt{2^r} \, \| (\pi P^{k_1} \pi)(\pi P^{k_2} \pi) \cdots f_0 \|_2$$
$$\le \sqrt{2^r} \left( \beta + \left( \frac{\lambda}{d} \right)^{k_1} \right) \left( \beta + \left( \frac{\lambda}{d} \right)^{k_2} \right) \cdots \left( \beta + \left( \frac{\lambda}{d} \right)^{k_{m/2}} \right) \|f_0\|_2 \le \left( \beta + \frac{\lambda}{d} \right)^{m/2}.$$
If we choose $\lambda$ and $d$ such that $\left( \beta + \frac{\lambda}{d} \right)^{1/2} < \frac{1}{5}$, then the error probability of the algorithm is
$$\text{Error probability} = \Pr[\text{majority of the } x_i\text{'s are in } B] \le \sum_{I \subseteq [m], |I| \ge m/2} \Pr[x_i, i \in I, \text{ are all in } B]$$
$$\le O(2^m) \cdot \max_{I \subseteq [m], |I| \ge m/2} \Pr[x_i, i \in I, \text{ are all in } B] \le O\left( \frac{2^m}{5^m} \right) = \exp(-\Omega(m)).$$
Hence the algorithm uses $r + O(m)$ random bits and reduces the error probability to $\exp(-\Omega(m))$.
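Approach 2 can also be simulated. As before, this is a toy sketch: the walk is on a hypothetical circulant stand-in for an explicit expander, and the bad set $B$ is random, so the observed error rate only illustrates the amplification, not the theorem's exact bound.

```python
import random

random.seed(2)
R, D, M = 8, 16, 15     # seed bits, graph degree, walk length
N = 2 ** R

# B: hypothetical set of bad seeds (density just under 1/3).
B = set(random.sample(range(N), N // 3 - 1))

def step(x):
    """One walk step on a stand-in D-regular circulant graph on the seed
    space (x ~ x +- 1, ..., x +- D/2 mod N), a placeholder for an explicit
    expander. A step costs about log2(D) random bits, not R fresh bits."""
    s = random.randrange(1, D // 2 + 1)
    return (x + s) % N if random.getrandbits(1) else (x - s) % N

def walk_majority():
    """Approach 2: r random bits pick the start vertex, then O(M log D) bits
    drive the walk; output the majority vote over all M + 1 walk vertices."""
    x = random.randrange(N)
    bad = int(x in B)
    for _ in range(M):
        x = step(x)
        bad += int(x in B)
    return 2 * bad < M + 1           # True = majority of visited seeds good

trials = 2000
error = sum(not walk_majority() for _ in range(trials)) / trials
print(f"empirical error over {trials} trials: {error:.3f}")
```

The randomness budget here is $R + M \log_2 D$ bits per amplified run, i.e. $r + O(m)$, versus $rm$ bits for $m$ fully independent repetitions.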

7 Connectivity of a $d$-regular graph

In this section we will look at the following problem.

Problem: Given an undirected $d$-regular graph $G$ on $n$ vertices, determine whether it is connected. The graph is given as input in the form of its adjacency matrix, in read-only memory.

This problem is simple if we have access to $\mathrm{poly}(n)$ bits of space for the computation:

1. Start at an arbitrary node of the graph $G$.
2. Perform a DFS/BFS from the starting node and count the number of nodes in the DFS/BFS tree.
3. If the count is equal to $n$ then the graph is connected; otherwise it is disconnected.

But the problem is not trivial if we have access to only $O(\log n)$ space. We will discuss a randomized algorithm that solves this problem in $O(\log n)$ space. The algorithm is as follows: for every pair of vertices $s, t$, take $n^{10}$ independent random walks of length $n^{10}$ each, starting from $s$, and check whether any of them ends at the vertex $t$. If for all pairs of vertices this condition is satisfied by at least one random walk, then output that $G$ is connected (true); otherwise output disconnected (false).

Claim 19. The above algorithm fails with probability exponentially small in $n$.

Proof. If the graph is disconnected, then the algorithm always returns false. We will show that if $G$ is connected then the algorithm returns false with very small probability. If $G$ is a connected $d$-regular graph on $n$ vertices, then except for $\lambda_1$, the absolute value of every other eigenvalue is at most $d \left( 1 - \frac{1}{n^2} \right)$. Let $e_s$ be the indicator vector of a vertex $s$, and let $P$ be the normalized adjacency matrix of the graph $G$. By Theorem 12, the property of random walks on an expander graph,
$$\Pr[\text{a random walk from } s \text{ lands on } t \text{ at the end of the } n^{10}\text{-th step}] \ge \frac{1}{n} - \|U - P^{n^{10}} e_s\|_1 \ge \frac{1}{n} - \sqrt{n} \left( \frac{\lambda}{d} \right)^{n^{10}}$$
$$\ge \frac{1}{n} - \sqrt{n} \left( 1 - \frac{1}{n^2} \right)^{n^{10}} \ge \frac{1}{n} - \sqrt{n} \, e^{-n^8} \ge \frac{1}{2n}.$$

Therefore,
$$\Pr\left[ \text{none of the } n^{10} \text{ random walks from } s \text{ lands on } t \text{ at the end of the } n^{10}\text{-th step} \right] \le \left( 1 - \frac{1}{2n} \right)^{n^{10}} \le \exp(-n^9/2).$$
Hence, by a union bound over all pairs $(s, t)$,
$$\Pr[\text{failing}] \le n^2 \exp(-n^9/2) \le \exp(-n^8).$$
Hence the algorithm fails with exponentially small probability.

In a later lecture, we will see a deterministic algorithm for this problem which uses only $O(\log n)$ space.
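A scaled-down version of this algorithm can be run directly. In this sketch (not from the notes), the lecture's $n^{10}$ walks of length $n^{10}$ are replaced by small constants that suffice for tiny non-bipartite $d$-regular graphs; the test graphs and parameters are illustrative choices.

```python
import random

random.seed(3)

def walk_ends_at(adj, s, t, length):
    """Run one uniform random walk of the given length from s; report
    whether it ends exactly at t."""
    x = s
    for _ in range(length):
        x = random.choice(adj[x])
    return x == t

def random_walk_connected(adj, walks=200, length=64):
    """For every pair (s, t), run `walks` independent random walks from s
    and declare the graph connected iff every pair is witnessed by at
    least one walk ending at t (mirroring the lecture's algorithm)."""
    n = len(adj)
    for s in range(n):
        for t in range(n):
            if not any(walk_ends_at(adj, s, t, length) for _ in range(walks)):
                return False         # pair (s, t) was never connected by a walk
    return True

# C7: a 7-cycle (2-regular, connected, and non-bipartite, so a
# fixed-length walk can end at any vertex).
c7 = [[(i - 1) % 7, (i + 1) % 7] for i in range(7)]
# Two disjoint triangles: 2-regular but disconnected.
two_triangles = [[1, 2], [0, 2], [0, 1], [4, 5], [3, 5], [3, 4]]

print(random_walk_connected(c7), random_walk_connected(two_triangles))
```

Note the test graphs are deliberately non-bipartite: on a bipartite graph a walk of fixed even length can only end on one side of the bipartition, which is why the analysis above needs $|\lambda_n|$ bounded away from $d$ as well.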


Lecture 4: AC 0 lower bounds and pseudorandomness Lecture 4: AC 0 lower bounds and pseudorandomness Topics in Complexity Theory and Pseudorandomness (Spring 2013) Rutgers University Swastik Kopparty Scribes: Jason Perry and Brian Garnett In this lecture,

More information

α = u v. In other words, Orthogonal Projection

α = u v. In other words, Orthogonal Projection Orthogonal Projection Given any nonzero vector v, it is possible to decompose an arbitrary vector u into a component that points in the direction of v and one that points in a direction orthogonal to v

More information

MATH 304 Linear Algebra Lecture 20: Inner product spaces. Orthogonal sets.

MATH 304 Linear Algebra Lecture 20: Inner product spaces. Orthogonal sets. MATH 304 Linear Algebra Lecture 20: Inner product spaces. Orthogonal sets. Norm The notion of norm generalizes the notion of length of a vector in R n. Definition. Let V be a vector space. A function α

More information

NOTES ON LINEAR TRANSFORMATIONS

NOTES ON LINEAR TRANSFORMATIONS NOTES ON LINEAR TRANSFORMATIONS Definition 1. Let V and W be vector spaces. A function T : V W is a linear transformation from V to W if the following two properties hold. i T v + v = T v + T v for all

More information

Department of Mathematical Sciences, University of Copenhagen. Kandidat projekt i matematik. Jens Jakob Kjær. Golod Complexes

Department of Mathematical Sciences, University of Copenhagen. Kandidat projekt i matematik. Jens Jakob Kjær. Golod Complexes F A C U L T Y O F S C I E N C E U N I V E R S I T Y O F C O P E N H A G E N Department of Mathematical Sciences, University of Copenhagen Kaniat projekt i matematik Jens Jakob Kjær Golo Complexes Avisor:

More information

A Generalization of Sauer s Lemma to Classes of Large-Margin Functions

A Generalization of Sauer s Lemma to Classes of Large-Margin Functions A Generalization of Sauer s Lemma to Classes of Large-Margin Functions Joel Ratsaby University College Lonon Gower Street, Lonon WC1E 6BT, Unite Kingom J.Ratsaby@cs.ucl.ac.uk, WWW home page: http://www.cs.ucl.ac.uk/staff/j.ratsaby/

More information

Calculating Viscous Flow: Velocity Profiles in Rivers and Pipes

Calculating Viscous Flow: Velocity Profiles in Rivers and Pipes previous inex next Calculating Viscous Flow: Velocity Profiles in Rivers an Pipes Michael Fowler, UVa 9/8/1 Introuction In this lecture, we ll erive the velocity istribution for two examples of laminar

More information

Given three vectors A, B, andc. We list three products with formula (A B) C = B(A C) A(B C); A (B C) =B(A C) C(A B);

Given three vectors A, B, andc. We list three products with formula (A B) C = B(A C) A(B C); A (B C) =B(A C) C(A B); 1.1.4. Prouct of three vectors. Given three vectors A, B, anc. We list three proucts with formula (A B) C = B(A C) A(B C); A (B C) =B(A C) C(A B); a 1 a 2 a 3 (A B) C = b 1 b 2 b 3 c 1 c 2 c 3 where the

More information

The Quick Calculus Tutorial

The Quick Calculus Tutorial The Quick Calculus Tutorial This text is a quick introuction into Calculus ieas an techniques. It is esigne to help you if you take the Calculus base course Physics 211 at the same time with Calculus I,

More information

SHARP BOUNDS FOR THE SUM OF THE SQUARES OF THE DEGREES OF A GRAPH

SHARP BOUNDS FOR THE SUM OF THE SQUARES OF THE DEGREES OF A GRAPH 31 Kragujevac J. Math. 25 (2003) 31 49. SHARP BOUNDS FOR THE SUM OF THE SQUARES OF THE DEGREES OF A GRAPH Kinkar Ch. Das Department of Mathematics, Indian Institute of Technology, Kharagpur 721302, W.B.,

More information

Handout #Ch7 San Skulrattanakulchai Gustavus Adolphus College Dec 6, 2010. Chapter 7: Digraphs

Handout #Ch7 San Skulrattanakulchai Gustavus Adolphus College Dec 6, 2010. Chapter 7: Digraphs MCS-236: Graph Theory Handout #Ch7 San Skulrattanakulchai Gustavus Adolphus College Dec 6, 2010 Chapter 7: Digraphs Strong Digraphs Definitions. A digraph is an ordered pair (V, E), where V is the set

More information

Lecture 15 An Arithmetic Circuit Lowerbound and Flows in Graphs

Lecture 15 An Arithmetic Circuit Lowerbound and Flows in Graphs CSE599s: Extremal Combinatorics November 21, 2011 Lecture 15 An Arithmetic Circuit Lowerbound and Flows in Graphs Lecturer: Anup Rao 1 An Arithmetic Circuit Lower Bound An arithmetic circuit is just like

More information

How To Find Out How To Calculate Volume Of A Sphere

How To Find Out How To Calculate Volume Of A Sphere Contents High-Dimensional Space. Properties of High-Dimensional Space..................... 4. The High-Dimensional Sphere......................... 5.. The Sphere an the Cube in Higher Dimensions...........

More information

Orthogonal Diagonalization of Symmetric Matrices

Orthogonal Diagonalization of Symmetric Matrices MATH10212 Linear Algebra Brief lecture notes 57 Gram Schmidt Process enables us to find an orthogonal basis of a subspace. Let u 1,..., u k be a basis of a subspace V of R n. We begin the process of finding

More information

Notes on Determinant

Notes on Determinant ENGG2012B Advanced Engineering Mathematics Notes on Determinant Lecturer: Kenneth Shum Lecture 9-18/02/2013 The determinant of a system of linear equations determines whether the solution is unique, without

More information

Chapter 6. Orthogonality

Chapter 6. Orthogonality 6.3 Orthogonal Matrices 1 Chapter 6. Orthogonality 6.3 Orthogonal Matrices Definition 6.4. An n n matrix A is orthogonal if A T A = I. Note. We will see that the columns of an orthogonal matrix must be

More information

The Determinant: a Means to Calculate Volume

The Determinant: a Means to Calculate Volume The Determinant: a Means to Calculate Volume Bo Peng August 20, 2007 Abstract This paper gives a definition of the determinant and lists many of its well-known properties Volumes of parallelepipeds are

More information

DATA ANALYSIS II. Matrix Algorithms

DATA ANALYSIS II. Matrix Algorithms DATA ANALYSIS II Matrix Algorithms Similarity Matrix Given a dataset D = {x i }, i=1,..,n consisting of n points in R d, let A denote the n n symmetric similarity matrix between the points, given as where

More information

Network Flow I. Lecture 16. 16.1 Overview. 16.2 The Network Flow Problem

Network Flow I. Lecture 16. 16.1 Overview. 16.2 The Network Flow Problem Lecture 6 Network Flow I 6. Overview In these next two lectures we are going to talk about an important algorithmic problem called the Network Flow Problem. Network flow is important because it can be

More information

Au = = = 3u. Aw = = = 2w. so the action of A on u and w is very easy to picture: it simply amounts to a stretching by 3 and 2, respectively.

Au = = = 3u. Aw = = = 2w. so the action of A on u and w is very easy to picture: it simply amounts to a stretching by 3 and 2, respectively. Chapter 7 Eigenvalues and Eigenvectors In this last chapter of our exploration of Linear Algebra we will revisit eigenvalues and eigenvectors of matrices, concepts that were already introduced in Geometry

More information

5.1 Bipartite Matching

5.1 Bipartite Matching CS787: Advanced Algorithms Lecture 5: Applications of Network Flow In the last lecture, we looked at the problem of finding the maximum flow in a graph, and how it can be efficiently solved using the Ford-Fulkerson

More information

1 if 1 x 0 1 if 0 x 1

1 if 1 x 0 1 if 0 x 1 Chapter 3 Continuity In this chapter we begin by defining the fundamental notion of continuity for real valued functions of a single real variable. When trying to decide whether a given function is or

More information

Linear Programming. March 14, 2014

Linear Programming. March 14, 2014 Linear Programming March 1, 01 Parts of this introduction to linear programming were adapted from Chapter 9 of Introduction to Algorithms, Second Edition, by Cormen, Leiserson, Rivest and Stein [1]. 1

More information

Numerical Analysis Lecture Notes

Numerical Analysis Lecture Notes Numerical Analysis Lecture Notes Peter J. Olver 5. Inner Products and Norms The norm of a vector is a measure of its size. Besides the familiar Euclidean norm based on the dot product, there are a number

More information

Factoring Dickson polynomials over finite fields

Factoring Dickson polynomials over finite fields Factoring Dickson polynomials over finite fiels Manjul Bhargava Department of Mathematics, Princeton University. Princeton NJ 08544 manjul@math.princeton.eu Michael Zieve Department of Mathematics, University

More information

Chapter 6. Cuboids. and. vol(conv(p ))

Chapter 6. Cuboids. and. vol(conv(p )) Chapter 6 Cuboids We have already seen that we can efficiently find the bounding box Q(P ) and an arbitrarily good approximation to the smallest enclosing ball B(P ) of a set P R d. Unfortunately, both

More information

Properties of Real Numbers

Properties of Real Numbers 16 Chapter P Prerequisites P.2 Properties of Real Numbers What you should learn: Identify and use the basic properties of real numbers Develop and use additional properties of real numbers Why you should

More information

Solutions to Math 51 First Exam January 29, 2015

Solutions to Math 51 First Exam January 29, 2015 Solutions to Math 5 First Exam January 29, 25. ( points) (a) Complete the following sentence: A set of vectors {v,..., v k } is defined to be linearly dependent if (2 points) there exist c,... c k R, not

More information

6. Vectors. 1 2009-2016 Scott Surgent (surgent@asu.edu)

6. Vectors. 1 2009-2016 Scott Surgent (surgent@asu.edu) 6. Vectors For purposes of applications in calculus and physics, a vector has both a direction and a magnitude (length), and is usually represented as an arrow. The start of the arrow is the vector s foot,

More information

Continued Fractions and the Euclidean Algorithm

Continued Fractions and the Euclidean Algorithm Continued Fractions and the Euclidean Algorithm Lecture notes prepared for MATH 326, Spring 997 Department of Mathematics and Statistics University at Albany William F Hammond Table of Contents Introduction

More information

Lecture 1: Schur s Unitary Triangularization Theorem

Lecture 1: Schur s Unitary Triangularization Theorem Lecture 1: Schur s Unitary Triangularization Theorem This lecture introduces the notion of unitary equivalence and presents Schur s theorem and some of its consequences It roughly corresponds to Sections

More information

Pythagorean Triples Over Gaussian Integers

Pythagorean Triples Over Gaussian Integers International Journal of Algebra, Vol. 6, 01, no., 55-64 Pythagorean Triples Over Gaussian Integers Cheranoot Somboonkulavui 1 Department of Mathematics, Faculty of Science Chulalongkorn University Bangkok

More information

BANACH AND HILBERT SPACE REVIEW

BANACH AND HILBERT SPACE REVIEW BANACH AND HILBET SPACE EVIEW CHISTOPHE HEIL These notes will briefly review some basic concepts related to the theory of Banach and Hilbert spaces. We are not trying to give a complete development, but

More information

1 Solving LPs: The Simplex Algorithm of George Dantzig

1 Solving LPs: The Simplex Algorithm of George Dantzig Solving LPs: The Simplex Algorithm of George Dantzig. Simplex Pivoting: Dictionary Format We illustrate a general solution procedure, called the simplex algorithm, by implementing it on a very simple example.

More information

17. Inner product spaces Definition 17.1. Let V be a real vector space. An inner product on V is a function

17. Inner product spaces Definition 17.1. Let V be a real vector space. An inner product on V is a function 17. Inner product spaces Definition 17.1. Let V be a real vector space. An inner product on V is a function, : V V R, which is symmetric, that is u, v = v, u. bilinear, that is linear (in both factors):

More information

Notes on Symmetric Matrices

Notes on Symmetric Matrices CPSC 536N: Randomized Algorithms 2011-12 Term 2 Notes on Symmetric Matrices Prof. Nick Harvey University of British Columbia 1 Symmetric Matrices We review some basic results concerning symmetric matrices.

More information

Modelling and Resolving Software Dependencies

Modelling and Resolving Software Dependencies June 15, 2005 Abstract Many Linux istributions an other moern operating systems feature the explicit eclaration of (often complex) epenency relationships between the pieces of software

More information

The Goldberg Rao Algorithm for the Maximum Flow Problem

The Goldberg Rao Algorithm for the Maximum Flow Problem The Goldberg Rao Algorithm for the Maximum Flow Problem COS 528 class notes October 18, 2006 Scribe: Dávid Papp Main idea: use of the blocking flow paradigm to achieve essentially O(min{m 2/3, n 1/2 }

More information

2. Properties of Functions

2. Properties of Functions 2. PROPERTIES OF FUNCTIONS 111 2. Properties of Funtions 2.1. Injetions, Surjetions, an Bijetions. Definition 2.1.1. Given f : A B 1. f is one-to-one (short han is 1 1) or injetive if preimages are unique.

More information

Solution to Homework 2

Solution to Homework 2 Solution to Homework 2 Olena Bormashenko September 23, 2011 Section 1.4: 1(a)(b)(i)(k), 4, 5, 14; Section 1.5: 1(a)(b)(c)(d)(e)(n), 2(a)(c), 13, 16, 17, 18, 27 Section 1.4 1. Compute the following, if

More information

Linear Algebra Notes

Linear Algebra Notes Linear Algebra Notes Chapter 19 KERNEL AND IMAGE OF A MATRIX Take an n m matrix a 11 a 12 a 1m a 21 a 22 a 2m a n1 a n2 a nm and think of it as a function A : R m R n The kernel of A is defined as Note

More information

Which Networks Are Least Susceptible to Cascading Failures?

Which Networks Are Least Susceptible to Cascading Failures? Which Networks Are Least Susceptible to Cascaing Failures? Larry Blume Davi Easley Jon Kleinberg Robert Kleinberg Éva Taros July 011 Abstract. The resilience of networks to various types of failures is

More information

Numerical Analysis Lecture Notes

Numerical Analysis Lecture Notes Numerical Analysis Lecture Notes Peter J. Olver 6. Eigenvalues and Singular Values In this section, we collect together the basic facts about eigenvalues and eigenvectors. From a geometrical viewpoint,

More information

Inner products on R n, and more

Inner products on R n, and more Inner products on R n, and more Peyam Ryan Tabrizian Friday, April 12th, 2013 1 Introduction You might be wondering: Are there inner products on R n that are not the usual dot product x y = x 1 y 1 + +

More information

I. GROUPS: BASIC DEFINITIONS AND EXAMPLES

I. GROUPS: BASIC DEFINITIONS AND EXAMPLES I GROUPS: BASIC DEFINITIONS AND EXAMPLES Definition 1: An operation on a set G is a function : G G G Definition 2: A group is a set G which is equipped with an operation and a special element e G, called

More information

8.1 Min Degree Spanning Tree

8.1 Min Degree Spanning Tree CS880: Approximations Algorithms Scribe: Siddharth Barman Lecturer: Shuchi Chawla Topic: Min Degree Spanning Tree Date: 02/15/07 In this lecture we give a local search based algorithm for the Min Degree

More information

Cross-Over Analysis Using T-Tests

Cross-Over Analysis Using T-Tests Chapter 35 Cross-Over Analysis Using -ests Introuction his proceure analyzes ata from a two-treatment, two-perio (x) cross-over esign. he response is assume to be a continuous ranom variable that follows

More information

1 The Line vs Point Test

1 The Line vs Point Test 6.875 PCP and Hardness of Approximation MIT, Fall 2010 Lecture 5: Low Degree Testing Lecturer: Dana Moshkovitz Scribe: Gregory Minton and Dana Moshkovitz Having seen a probabilistic verifier for linearity

More information

Search Advertising Based Promotion Strategies for Online Retailers

Search Advertising Based Promotion Strategies for Online Retailers Search Avertising Base Promotion Strategies for Online Retailers Amit Mehra The Inian School of Business yeraba, Inia Amit Mehra@isb.eu ABSTRACT Web site aresses of small on line retailers are often unknown

More information

Triangle deletion. Ernie Croot. February 3, 2010

Triangle deletion. Ernie Croot. February 3, 2010 Triangle deletion Ernie Croot February 3, 2010 1 Introduction The purpose of this note is to give an intuitive outline of the triangle deletion theorem of Ruzsa and Szemerédi, which says that if G = (V,

More information

1 Homework 1. [p 0 q i+j +... + p i 1 q j+1 ] + [p i q j ] + [p i+1 q j 1 +... + p i+j q 0 ]

1 Homework 1. [p 0 q i+j +... + p i 1 q j+1 ] + [p i q j ] + [p i+1 q j 1 +... + p i+j q 0 ] 1 Homework 1 (1) Prove the ideal (3,x) is a maximal ideal in Z[x]. SOLUTION: Suppose we expand this ideal by including another generator polynomial, P / (3, x). Write P = n + x Q with n an integer not

More information

Chapter 17. Orthogonal Matrices and Symmetries of Space

Chapter 17. Orthogonal Matrices and Symmetries of Space Chapter 17. Orthogonal Matrices and Symmetries of Space Take a random matrix, say 1 3 A = 4 5 6, 7 8 9 and compare the lengths of e 1 and Ae 1. The vector e 1 has length 1, while Ae 1 = (1, 4, 7) has length

More information

BALTIC OLYMPIAD IN INFORMATICS Stockholm, April 18-22, 2009 Page 1 of?? ENG rectangle. Rectangle

BALTIC OLYMPIAD IN INFORMATICS Stockholm, April 18-22, 2009 Page 1 of?? ENG rectangle. Rectangle Page 1 of?? ENG rectangle Rectangle Spoiler Solution of SQUARE For start, let s solve a similar looking easier task: find the area of the largest square. All we have to do is pick two points A and B and

More information

5.3 The Cross Product in R 3

5.3 The Cross Product in R 3 53 The Cross Product in R 3 Definition 531 Let u = [u 1, u 2, u 3 ] and v = [v 1, v 2, v 3 ] Then the vector given by [u 2 v 3 u 3 v 2, u 3 v 1 u 1 v 3, u 1 v 2 u 2 v 1 ] is called the cross product (or

More information

Metric Spaces. Chapter 7. 7.1. Metrics

Metric Spaces. Chapter 7. 7.1. Metrics Chapter 7 Metric Spaces A metric space is a set X that has a notion of the distance d(x, y) between every pair of points x, y X. The purpose of this chapter is to introduce metric spaces and give some

More information

October 3rd, 2012. Linear Algebra & Properties of the Covariance Matrix

October 3rd, 2012. Linear Algebra & Properties of the Covariance Matrix Linear Algebra & Properties of the Covariance Matrix October 3rd, 2012 Estimation of r and C Let rn 1, rn, t..., rn T be the historical return rates on the n th asset. rn 1 rṇ 2 r n =. r T n n = 1, 2,...,

More information

A New Evaluation Measure for Information Retrieval Systems

A New Evaluation Measure for Information Retrieval Systems A New Evaluation Measure for Information Retrieval Systems Martin Mehlitz martin.mehlitz@ai-labor.e Christian Bauckhage Deutsche Telekom Laboratories christian.bauckhage@telekom.e Jérôme Kunegis jerome.kunegis@ai-labor.e

More information

tr(a + B) = tr(a) + tr(b) tr(ca) = c tr(a)

tr(a + B) = tr(a) + tr(b) tr(ca) = c tr(a) Chapter 3 Determinant 31 The Determinant Funtion We follow an intuitive approah to introue the efinition of eterminant We alreay have a funtion efine on ertain matries: the trae The trae assigns a numer

More information

Lecture 16 : Relations and Functions DRAFT

Lecture 16 : Relations and Functions DRAFT CS/Math 240: Introduction to Discrete Mathematics 3/29/2011 Lecture 16 : Relations and Functions Instructor: Dieter van Melkebeek Scribe: Dalibor Zelený DRAFT In Lecture 3, we described a correspondence

More information

PYTHAGOREAN TRIPLES KEITH CONRAD

PYTHAGOREAN TRIPLES KEITH CONRAD PYTHAGOREAN TRIPLES KEITH CONRAD 1. Introduction A Pythagorean triple is a triple of positive integers (a, b, c) where a + b = c. Examples include (3, 4, 5), (5, 1, 13), and (8, 15, 17). Below is an ancient

More information

Fluid Pressure and Fluid Force

Fluid Pressure and Fluid Force 0_0707.q //0 : PM Page 07 SECTION 7.7 Section 7.7 Flui Pressure an Flui Force 07 Flui Pressure an Flui Force Fin flui pressure an flui force. Flui Pressure an Flui Force Swimmers know that the eeper an

More information

Permutation Betting Markets: Singleton Betting with Extra Information

Permutation Betting Markets: Singleton Betting with Extra Information Permutation Betting Markets: Singleton Betting with Extra Information Mohammad Ghodsi Sharif University of Technology ghodsi@sharif.edu Hamid Mahini Sharif University of Technology mahini@ce.sharif.edu

More information

A Comparison of Performance Measures for Online Algorithms

A Comparison of Performance Measures for Online Algorithms A Comparison of Performance Measures for Online Algorithms Joan Boyar 1, Sany Irani 2, an Kim S. Larsen 1 1 Department of Mathematics an Computer Science, University of Southern Denmark, Campusvej 55,

More information

Review Jeopardy. Blue vs. Orange. Review Jeopardy

Review Jeopardy. Blue vs. Orange. Review Jeopardy Review Jeopardy Blue vs. Orange Review Jeopardy Jeopardy Round Lectures 0-3 Jeopardy Round $200 How could I measure how far apart (i.e. how different) two observations, y 1 and y 2, are from each other?

More information

INDISTINGUISHABILITY OF ABSOLUTELY CONTINUOUS AND SINGULAR DISTRIBUTIONS

INDISTINGUISHABILITY OF ABSOLUTELY CONTINUOUS AND SINGULAR DISTRIBUTIONS INDISTINGUISHABILITY OF ABSOLUTELY CONTINUOUS AND SINGULAR DISTRIBUTIONS STEVEN P. LALLEY AND ANDREW NOBEL Abstract. It is shown that there are no consistent decision rules for the hypothesis testing problem

More information