Real-rooted polynomials and interlacing families
Adam W. Marcus
Yale University / Crisply, LLC
October 10, 2013
Joint work with: Dan Spielman (Yale University) and Nikhil Srivastava (Microsoft Research, India). My involvement supported by a National Science Foundation Mathematical Sciences Postdoctoral Research Fellowship.
Some notation (because not everyone can be a combinatorialist)
1. [k] will denote the set of integers {1, ..., k}.
2. For a set S, 2^S will denote the set of all subsets of S.
3. For a set S, (S choose k) will denote the subsets of S that are size k.
4. For a set S, a_S = ∏_{i∈S} a_i.
Outline
1. Introduction
2. Multivariate Extensions (Real Stable Polynomials, Hyperbolic Polynomials)
3. Roots
4. Interlacing families
Understanding distributions
Question: How do we normally understand distributions?
Answer: Use things like norms (moments) and transforms.
Question: How do we compare distributions?
Answer: Use inequalities.
Finite distributions
Finite distributions are all the same. That said, there can be advantages to different encodings of a distribution.
Example: Generating functions. Given a sequence a_0, a_1, ..., the ordinary generating function is the formal power series
a_0 + a_1 x + a_2 x^2 + ... = Σ_i a_i x^i.
So the sequence 1, 1, 1, ... can be encoded as 1/(1 − x).
Advantage: we can use power series arithmetic to combine sequences and get new ones.
Polynomials
Polynomials can be used in a similar way: given values a_1, ..., a_d, we can encode them as
p(x) = (x − a_1)(x − a_2) ⋯ (x − a_d) = ∏_i (x − a_i).
Advantage: we can use our knowledge of polynomials to help understand the distributions. To see that this is non-trivial, try to come up with a non-polynomial way of getting the distribution that is produced by the roots of p'(x).
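This can be made concrete with a small numerical sketch (the values 1, 2, 6 are hypothetical, chosen just for illustration): encode the values as the roots of p, differentiate, and read off the derived "distribution" as the roots of p'.

```python
import numpy as np

# Hypothetical values a_1, a_2, a_3 encoded as roots of p(x) = (x-1)(x-2)(x-6).
values = [1.0, 2.0, 6.0]
p = np.poly(values)          # coefficients of prod_i (x - a_i), highest degree first
dp = np.polyder(p)           # coefficients of p'(x)
new_values = np.sort(np.roots(dp).real)

# The roots of p' are real and sit between the original values:
# 1 < r_1 < 2 < r_2 < 6.
print(new_values)
```

Getting these two numbers without going through the polynomial is the non-trivial part the slide alludes to.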
Example: finite L_p-norms
Given real numbers a_1, ..., a_d, define the k-th power sum
P_k(a_1, ..., a_d) = a_1^k + ... + a_d^k.
Used all the time in harmonic analysis.
Also define the k-th elementary symmetric polynomial
E_k(a_1, ..., a_d) = Σ_{S⊆[d], |S|=k} a_S = Σ_{S⊆[d], |S|=k} ∏_{i∈S} a_i.
Examples:
1. E_0(a_1, ..., a_d) = 1
2. E_1(a_1, ..., a_d) = a_1 + a_2 + ... + a_d
3. E_d(a_1, ..., a_d) = a_1 a_2 ⋯ a_d
4. E_k(a_1, ..., a_d) = 0 for k > d
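The E_k can be read off the encoding polynomial from the previous slides: the coefficient of x^{d−k} in ∏_i (x − a_i) is (−1)^k E_k. A small sketch checking this against the definition (the values are hypothetical):

```python
import numpy as np
from itertools import combinations

a = [2.0, 3.0, 5.0, 7.0]   # hypothetical values
d = len(a)

def E(k, vals):
    # E_k = sum over size-k subsets S of prod_{i in S} a_i
    return sum(np.prod(S) for S in combinations(vals, k))

coeffs = np.poly(a)   # coefficients of prod_i (x - a_i), highest degree first
# coefficient of x^{d-k} is (-1)^k E_k(a)
for k in range(d + 1):
    assert np.isclose(coeffs[k], (-1) ** k * E(k, a))
```

Note that `combinations(vals, k)` is empty for k > d, so E(k, a) = 0 there, matching example 4.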
Newton identities
These are connected by Newton's identities:
k E_k(x_1, ..., x_d) = Σ_{i=1}^k (−1)^{i−1} E_{k−i}(x_1, ..., x_d) P_i(x_1, ..., x_d).
Elementary symmetric functions can serve a similar role as p-norms. Can we get inequalities like we do with p-norms?
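A quick numerical check of the identity (hypothetical values, pure Python):

```python
from itertools import combinations
from math import prod, isclose

x = [1.5, -2.0, 4.0, 0.5]   # hypothetical values
d = len(x)

def E(k):
    # k-th elementary symmetric polynomial
    return sum(prod(S) for S in combinations(x, k))

def P(i):
    # i-th power sum
    return sum(v ** i for v in x)

# k * E_k = sum_{i=1}^k (-1)^{i-1} E_{k-i} P_i
for k in range(1, d + 1):
    rhs = sum((-1) ** (i - 1) * E(k - i) * P(i) for i in range(1, k + 1))
    assert isclose(k * E(k), rhs)
```

For k = 2 this is the familiar 2 E_2 = P_1^2 − P_2.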
Why real-rooted polynomials?
Regular generating functions can have arbitrary coefficients, and "arbitrary" does not carry any added structure. By maintaining real-rootedness, we are also maintaining structure. This allows us to get inequalities that would not be true in the general case.
Example: Bernoulli random variables
Exercise: Let X_1, ..., X_d be a collection of independent Bernoulli random variables with P[X_i = 0] = p_i and P[X_i = 1] = 1 − p_i, and let X = Σ_i X_i. What is P[X = k]?
For each X_i create the (generating) polynomial y_i(x) = p_i x + (1 − p_i). Then
Y(x) = ∏_{i=1}^d y_i(x) = Σ_{k=0}^d x^{d−k} P[X = k].
Coefficients act like Gaussians?
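A sketch (hypothetical p_i, using numpy) that builds Y(x) by polynomial multiplication and reads P[X = k] off the coefficients; since the coefficient of x^{d−k} is P[X = k] and numpy stores coefficients highest degree first, entry k of the coefficient array is exactly P[X = k]:

```python
import numpy as np
from itertools import product

p = [0.3, 0.5, 0.8]          # hypothetical p_i = P[X_i = 0]
d = len(p)

# Y(x) = prod_i (p_i x + (1 - p_i)); numpy stores highest degree first.
Y = np.array([1.0])
for pi in p:
    Y = np.polymul(Y, [pi, 1 - pi])

# Coefficient of x^{d-k} is P[X = k], so Y[k] = P[X = k].
dist = Y

# Brute-force check: enumerate all 2^d outcomes of (X_1, ..., X_d).
brute = np.zeros(d + 1)
for bits in product([0, 1], repeat=d):
    pr = np.prod([pi if b == 0 else 1 - pi for pi, b in zip(p, bits)])
    brute[sum(bits)] += pr

assert np.allclose(dist, brute)
```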
Making this more precise
Recall the quadratic formula: if p(x) = Ax^2 + 2Bx + C, then p(x) = A(x − R_1)(x − R_2) where
R_1 = (−B − √(B^2 − AC)) / A and R_2 = (−B + √(B^2 − AC)) / A.
In particular, if R_1 and R_2 are real numbers, then B^2 ≥ AC.
This extends (using derivatives!) to:
Theorem (Newton inequalities). Let p(x) = Σ_i a_i x^i be a degree-d polynomial with all real roots. Then
(a_i / (d choose i))^2 ≥ (a_{i−1} / (d choose i−1)) · (a_{i+1} / (d choose i+1)).
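A numerical sanity check of the theorem (hypothetical real roots, using numpy):

```python
import numpy as np
from math import comb

roots = [-3.0, -1.0, 2.0, 5.0]    # hypothetical real roots
d = len(roots)
coeffs = np.poly(roots)            # highest degree first
a = coeffs[::-1]                   # now a[i] is the coefficient of x^i

# Newton's inequalities on the binomial-normalized coefficients b_i = a_i / C(d,i):
# b_i^2 >= b_{i-1} * b_{i+1}
b = [a[i] / comb(d, i) for i in range(d + 1)]
for i in range(1, d):
    assert b[i] ** 2 >= b[i - 1] * b[i + 1] - 1e-9
```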
Outline
1. Introduction
2. Multivariate Extensions (Real Stable Polynomials, Hyperbolic Polynomials)
3. Roots
4. Interlacing families
Two extensions
Real-rooted polynomials are somewhat restricted by the fact that they are univariate (you only get one x). There are two well-studied multivariate extensions of real-rooted polynomials. They are actually somewhat equivalent, but the theory is more developed in different contexts:
1. Real stable polynomials: better construction properties
2. Hyperbolic polynomials: more refined convexity properties
Important trait: both are closed under diagonalization,
(z_1, ..., z_d) → (x, x, ..., x),
which allows us to start with one of these and prove things about univariate polynomials.
What do I mean by construction properties? (The parking garage phenomenon)
The issue with real-rooted polynomials is that it is hard to see how to get from one to another, unless you consider them to be projections of higher dimensional objects.
Real stable polynomials
Real stable polynomials are useful in this scenario. A polynomial p is called real stable if all coefficients are real and p(x_1, ..., x_d) ≠ 0 whenever Im(x_i) > 0 for all i (there are no zeros where all coordinates are in the upper half plane).
Univariate polynomials are real-rooted if and only if they are real stable.
Closure
Stable polynomials have nice closure properties. For f(z_1, z_2, ..., z_n) real stable, the following operations preserve stability:
1. Permutation: for σ ∈ Σ_n, f → f(z_σ(1), ..., z_σ(n))
2. Scaling: for a > 0, f → f(a z_1, z_2, ..., z_n)
3. Diagonalization: f → f(z_1, z_1, z_3, ..., z_n)
4. Specialization: for Im(a) ≥ 0, f → f(a, z_2, ..., z_n)
5. Inversion: if deg_{z_1}(f) = d, f → z_1^d f(−1/z_1, z_2, ..., z_n)
6. Translation: f → g(t, z_1, ..., z_n) = f(z_1 + t, z_2, ..., z_n)
7. Differentiation: f → ∂f/∂z_1
Example: permanents
The permanent of a d × d square matrix A is defined as
perm(A) = Σ_{σ∈Σ_d} ∏_{i=1}^d A(i, σ(i)).
Similar to the determinant, but without the annoying minuses.
We can use real stable polynomials to help understand permanents:
Q(t_1, ..., t_d) = ∏_{j=1}^d ( Σ_{i=1}^d A(j, i) t_i )
is a real stable polynomial (for A with nonnegative entries), and therefore so is
perm(A) = ∂^d Q / ∂t_1 ⋯ ∂t_d.
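The mixed derivative ∂^d Q / ∂t_1⋯∂t_d extracts the coefficient of t_1⋯t_d in Q, which can be computed by inclusion-exclusion over indicator vectors (this is essentially Ryser's formula). A sketch with a hypothetical 3 × 3 matrix, pure Python:

```python
from itertools import permutations, combinations
from math import prod, isclose

A = [[1.0, 2.0, 0.5],
     [0.3, 1.0, 2.0],
     [1.5, 0.7, 1.0]]        # hypothetical nonnegative matrix
d = len(A)

def perm_direct(A):
    # perm(A) = sum over sigma of prod_i A[i][sigma(i)]
    return sum(prod(A[i][s[i]] for i in range(d)) for s in permutations(range(d)))

def Q(t):
    # Q(t_1,...,t_d) = prod_j ( sum_i A[j][i] t_i )
    return prod(sum(A[j][i] * t[i] for i in range(d)) for j in range(d))

def perm_via_Q(A):
    # Coefficient of t_1...t_d in Q, by inclusion-exclusion: since Q is
    # homogeneous of degree d, t_1...t_d is its only square-free degree-d term.
    total = 0.0
    for k in range(d + 1):
        for S in combinations(range(d), k):
            t = [1.0 if i in S else 0.0 for i in range(d)]
            total += (-1) ** (d - k) * Q(t)
    return total

assert isclose(perm_direct(A), perm_via_Q(A))
```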
Linear transformations
For a linear differential operator
T = Σ_{α,β∈N^n} c_{α,β} z^α ∂^β
define its Weyl polynomial
F_T(z, w) = Σ_{α,β∈N^n} c_{α,β} z^α w^β.
Call T stability preserving if T[p] is real stable for all real stable p.
Theorem (Borcea and Brändén). T is stability preserving if and only if F_T(z, w) is real stable.
Negative association
Let µ be a probability distribution on 2^[n]. A function f is called non-increasing (non-decreasing) if
S ⊆ T ⟹ f(S) ≥ f(T) (respectively f(S) ≤ f(T)).
Set functions f and g are called disjoint if f(S) = f(S ∩ A) and g(S) = g(S \ A) for some A (the variables that f uses and the variables that g uses are disjoint subsets of [n]).
µ is said to be negatively associated if for all disjoint non-increasing functions f, g,
E[f(X) g(X)] ≤ E[f(X)] E[g(X)].
Example: the uniform distribution on spanning trees of a graph.
Characterization of NA
Theorem (Brändén). A distribution µ is negatively associated if and only if the polynomial
G_µ = E[x^S] = Σ_{S∈2^[n]} P[X = S] ∏_{i∈S} x_i
is real stable.
This implies tight concentration and convex-geometry-type inequalities:
Lemma. Let µ be negatively associated and let the f_i be non-decreasing functions. Then
E[ ∏_{i∈[n]} f_i(X_i) ] ≤ ∏_{i∈[n]} E[f_i(X_i)].
Hyperbolic polynomials
Let x = (x_1, ..., x_d). A multivariate homogeneous polynomial p(x) is said to be hyperbolic in direction e if
1. p(e) ≠ 0, and
2. the univariate polynomial q_y(t) = p(y + t e) is real-rooted for all y ∈ R^d.
If this makes no intuitive sense, wait for the picture.
Examples:
1. p(x, y) = xy is hyperbolic in the direction (1, 1)
2. E_k(x) is hyperbolic in the direction (1, ..., 1)
3. p(x, y, z) = x^2 − y^2 − z^2 is hyperbolic in the direction (1, 0, 0)
4. For Hermitian A, p(A) = det[A] is hyperbolic in the direction I
Zero surfaces
Theorem (Helton-Vinnikov). The zero surfaces of a degree-d hyperbolic polynomial form ⌊d/2⌋ nested ovaloids, plus a pseudo-hyperplane if d is odd.
Characteristic polynomials
Hyperbolic polynomials can be viewed as a generalization of characteristic polynomials (for matrices). Any hyperbolic polynomial can be factored:
p(x + t e) = p(e) ∏_j (t + λ_j(e, x))
with λ = (λ_1, ..., λ_d) ordered as λ_1 ≤ ... ≤ λ_d.
The λ_j are called eigenvalues. When p(x) = det[x] and e = I, then
p(x + t e) = det[tI + x] = ∏_i (t + λ_i),
which is (the negative of) the matrix version of eigenvalues.
Hyperbolicity cones
The hyperbolicity cone of p (with respect to e) is the set
{ x : λ_1(e, x) > 0 }
and is denoted Λ_{++}(p, e).
Theorem (Gårding). Let p be hyperbolic in the direction e. Then
1. p is hyperbolic in the direction e' for all e' ∈ Λ_{++}(p, e)
2. e' ∈ Λ_{++}(p, e) ⟺ e ∈ Λ_{++}(p, e')
3. D_e p is hyperbolic in direction e and Λ_{++}(p, e) ⊆ Λ_{++}(D_e p, e)
4. Λ_{++}(p, e) is convex
Inequalities exploit this convexity.
Some examples
Theorem (Bauschke, Güler, Lewis, Sendov). Let p be degree-d and hyperbolic in the direction e, and let f : R^d → [−∞, +∞] be convex and symmetric. Let µ(x) = λ(e, x). Then f ∘ µ is convex.
Theorem (Kummer, Plaumann, Vinzant). Let p be degree-d, square-free, and hyperbolic in the direction e, and let h be any degree-(d−1) polynomial such that for all x,
p(x) = 0 ⟹ D_e p(x) h(x) ≥ 0.
Then (D_e p) h ≥ p (D_e h) everywhere.
Lax conjecture
Theorem (Lewis, Parrilo, Ramana). Let h(x, y, z) be degree-d and hyperbolic in the direction (e_1, e_2, e_3) such that h(e_1, e_2, e_3) = 1. Then there exist symmetric d × d matrices A, B, C such that e_1 A + e_2 B + e_3 C = I and
h(x, y, z) = det[xA + yB + zC].
Uses theory developed by Helton and Vinnikov. Not true for more than 3 variables.
More inequalities
Let p(x) be degree-d and hyperbolic in direction e, and let a_1, ..., a_d ∈ Λ_{++}(p, e). Denoting the d-th directional derivative of p in the directions a_1, ..., a_d as D^d p[a_1, ..., a_d], we have:
Theorem (Gårding). D^d p[a_1, ..., a_d] ≥ d! ∏_{i=1}^d p(a_i)^{1/d}.
Define
Cap(p) = inf{ p(t_1 a_1, ..., t_d a_d) : ∏_i t_i = 1, t_i > 0 }.
Theorem (Gurvits). D^d p[a_1, ..., a_d] ≥ (d!/d^d) Cap(p).
Equivalence
From stable to hyperbolic:
Lemma. Let p(x_1, ..., x_n) be a degree-d polynomial. Then p is stable if and only if y^d p(x_1/y, ..., x_n/y) is hyperbolic with respect to (1, ..., 1, 0).
From hyperbolic to stable:
Lemma (Borcea and Brändén). Let h(x) be a degree-d homogeneous polynomial, and let a, b be such that h(a) h(b) ≠ 0. The following are equivalent:
1. h is hyperbolic with respect to a, and b ∈ Λ_{++}(h, a).
2. The bivariate polynomial f(s, t) = h(x + s a + t b) is real stable for all x.
Outline
1. Introduction
2. Multivariate Extensions (Real Stable Polynomials, Hyperbolic Polynomials)
3. Roots
4. Interlacing families
Roots
So far we have seen that having the property of real roots (or one of the multivariate extensions) gives interesting inequalities on the coefficients and the values. We have still not talked about the roots themselves. For good reason... the technology we have seen so far can be used to prove the realness of roots, but not their location!
Some motivation
Let's begin with a simple question. You have a symmetric d × d matrix A and you add a rank-1 matrix uu^T. What happens to the eigenvalues?
Some trivial observations:
- The eigenvalues have to go up.
- The amount λ_i goes up should depend on ⟨u, v_i⟩.
- Traces add.
- Averaging over all possible u should move all eigenvalues the same amount.
Let's try looking at this through the frame of polynomials.
Linear algebra review
Lemma (Matrix Determinant Lemma). Let A be an invertible d × d matrix and let u, v ∈ R^d. Then
det[A + uv^T] = det[A] (1 + v^T A^{−1} u).
Lemma (Spectral Decomposition). Let A be a d × d symmetric matrix. Then there exist real numbers λ_1, ..., λ_d and an orthonormal basis v_1, ..., v_d such that
A = Σ_i λ_i v_i v_i^T.
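Both lemmas are easy to sanity-check numerically (random symmetric matrix, using numpy):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
M = rng.standard_normal((d, d))
A = M + M.T + 10 * np.eye(d)     # symmetric and safely invertible
u = rng.standard_normal(d)

# Matrix determinant lemma: det(A + u u^T) = det(A) * (1 + u^T A^{-1} u)
lhs = np.linalg.det(A + np.outer(u, u))
rhs = np.linalg.det(A) * (1 + u @ np.linalg.solve(A, u))
assert np.isclose(lhs, rhs)

# Spectral decomposition: A = sum_i lam_i v_i v_i^T
lam, V = np.linalg.eigh(A)
recon = sum(lam[i] * np.outer(V[:, i], V[:, i]) for i in range(d))
assert np.allclose(A, recon)
```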
The characteristic polynomial
The characteristic polynomial of a d × d matrix A is the polynomial
χ_A(x) = det[xI − A].
If the spectral decomposition of A is A = Σ_i λ_i v_i v_i^T, then
χ_A(x) = ∏_{i=1}^d (x − λ_i),
and so the spectral decomposition of xI − A is
xI − A = Σ_i (x − λ_i) v_i v_i^T.
Adding a rank-1 operator
Using the matrix determinant lemma, we can see what happens to the eigenvalues of a matrix when a rank-1 operator is added:
det[xI − (A + uu^T)] = det[xI − A] (1 − u^T (xI − A)^{−1} u).
Decompose xI − A = Σ_i (x − λ_i) v_i v_i^T where the v_i are orthonormal. Then
(xI − A)^{−1} = Σ_i v_i v_i^T / (x − λ_i).
New roots
So we have
det[xI − (A + uu^T)] = det[xI − A] (1 − Σ_i ⟨v_i, u⟩^2 / (x − λ_i)).
The roots of det[xI − (A + uu^T)] are either roots of det[xI − A] or solutions to
Σ_i ⟨v_i, u⟩^2 / (x − λ_i) = 1.
Let's assume that ⟨v_i, u⟩^2 > 0 for all i and that all λ_i are distinct.
Digging deeper
Consider the equation
f(x) := Σ_i ⟨v_i, u⟩^2 / (x − λ_i) = 1.
Notice that for all i,
lim_{x→λ_i^−} f(x) = −∞ and lim_{x→λ_i^+} f(x) = +∞,
so by continuity, f(x) = 1 must have a solution between λ_i and λ_{i+1}. In other words, if the λ'_i are the new eigenvalues, we have
λ_1 ≤ λ'_1 ≤ λ_2 ≤ ... ≤ λ_{d−1} ≤ λ'_{d−1} ≤ λ_d ≤ λ'_d.
This phenomenon is known as interlacing (and is definitely not a trivial observation).
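The interlacing is easy to observe numerically (random symmetric matrix and random u, using numpy; `eigvalsh` returns eigenvalues in ascending order):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 6
M = rng.standard_normal((d, d))
A = (M + M.T) / 2
u = rng.standard_normal(d)

old = np.linalg.eigvalsh(A)                     # lam_1 <= ... <= lam_d
new = np.linalg.eigvalsh(A + np.outer(u, u))    # lam'_1 <= ... <= lam'_d

# lam_i <= lam'_i (eigenvalues go up) and lam'_i <= lam_{i+1} (interlacing).
assert np.all(new >= old - 1e-9)
assert np.all(new[:-1] <= old[1:] + 1e-9)
```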
Billiard balls
This has interesting consequences. Consider adding the operator α uu^T for different α ∈ R. As α increases, the only eigenvalue that has room to move is the top one. As a result, we cannot hope to understand the maximal eigenvalue by only keeping track of the maximum eigenvalue. And (as we will see tomorrow) understanding maximum eigenvalues seems to be a useful thing to be able to do.
Outline
1. Introduction
2. Multivariate Extensions (Real Stable Polynomials, Hyperbolic Polynomials)
3. Roots
4. Interlacing families
Adding randomness
We have seen that polynomials can help us understand the process of adding rank-1 operators. But what if we add a random rank-1 operator? One thing we can do is look at what happens in expectation. As before, let's look at this through the frame of polynomials.
In expectation
Assume we have an operator (matrix) A and we add a random operator to it that takes the values {uu^T, vv^T} uniformly. We will (perhaps naively) consider the polynomial
p(x) = (1/2) χ_{A+vv^T}(x) + (1/2) χ_{A+uu^T}(x).
Why is this naive? Adding polynomials is an operation on the coefficients, and we are interested in the roots. In general, it is easy to get the coefficients from the roots but hard to get the roots from the coefficients.
But wait...
We have already seen that we can say something about the case of adding rank-1 operators (interlacing). Can we do something similar here? Let's formalize the interlacing property we saw before.
Interlacing polynomials
Let p be a degree-n real-rooted polynomial and q a degree-(n−1) real-rooted polynomial:
p(x) = ∏_{i=1}^n (x − α_i) and q(x) = ∏_{i=1}^{n−1} (x − β_i),
with α_1 ≤ ... ≤ α_n and β_1 ≤ ... ≤ β_{n−1}.
We say q interlaces p if
α_1 ≤ β_1 ≤ α_2 ≤ ... ≤ α_{n−1} ≤ β_{n−1} ≤ α_n.
Think: the roots of q separate the roots of p.
Example 1: p'(x) interlaces p(x).
Example 2: If p has no multiple roots (and largest root R), then let q = p/(x − R). Then q(x + ε) interlaces p(x).
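Example 1 can be checked directly (hypothetical roots, using numpy):

```python
import numpy as np

alpha = np.array([-2.0, 0.5, 3.0, 7.0])   # hypothetical roots of p, increasing
p = np.poly(alpha)
beta = np.sort(np.roots(np.polyder(p)).real)

# alpha_1 <= beta_1 <= alpha_2 <= ... <= beta_{n-1} <= alpha_n
assert np.all(alpha[:-1] <= beta + 1e-9)
assert np.all(beta <= alpha[1:] + 1e-9)
```

This is just Rolle's theorem: p' has a root strictly between consecutive roots of p.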
Common interlacers
We say that two degree-n polynomials p and r have a common interlacer if there exists a q such that q interlaces both p and r simultaneously.
Think: the roots of q split R into n intervals, each of which contains exactly one root of p and one root of r.
Example 1: If p has no multiple roots, then p(x) and p(x) + ε have a common interlacer (p'(x)).
Example 2: If p has no multiple roots, then p(x) and p(x + ε) have a common interlacer (p'(x)).
How does this help?
A lemma
Lemma. Let f and g be monic polynomials. Assume there exists a point c ∈ R such that f and g each has exactly one real root larger than c (call these the "extreme roots"). Then the largest real root of f + g lies between these extreme roots.
Proof: by picture.
Note: if f and g have a common interlacer (say q), then setting c to be the largest root of q satisfies the lemma!
Without c to anchor (picture slides; figures not transcribed)
So what can we say?
Recall our goal was to understand the roots of
p(x) = (1/2) χ_{A+vv^T}(x) + (1/2) χ_{A+uu^T}(x) = (1/2) q(x) + (1/2) r(x).
We will say that {p, q, r} form an interlacing family if
1. p, q and r are all real-rooted, and
2. q and r have a common interlacer.
Corollary. If {p, q, r} forms an interlacing family, then there exists an assignment of our random variable (either uu^T or vv^T) such that the largest root of the resulting polynomial is at most the largest root of p(x) (the expected polynomial).
Interlacing for free
Fortunately, the interlacing follows directly from a well-known lemma:
Lemma (Fisk, among others). Let f, g be polynomials of the same degree such that λf + (1 − λ)g is real-rooted for all λ ∈ [0, 1]. Then f and g have a common interlacer.
Recall (again) our equation
p(x) = (1/2) χ_{A+vv^T}(x) + (1/2) χ_{A+uu^T}(x).
If we could show that λ χ_{A+vv^T}(x) + (1 − λ) χ_{A+uu^T}(x) is real-rooted for all λ ∈ [0, 1], then we would get the interlacing for free.
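A numerical illustration of the whole pipeline (random A, u, v, using numpy): the convex combinations of the two characteristic polynomials come out real-rooted, and, as the corollary promises, at least one realization has largest root at most that of the averaged polynomial.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 5
M = rng.standard_normal((d, d))
A = (M + M.T) / 2
u = rng.standard_normal(d)
v = rng.standard_normal(d)

q = np.poly(np.linalg.eigvalsh(A + np.outer(v, v)))   # chi_{A+vv^T}
r = np.poly(np.linalg.eigvalsh(A + np.outer(u, u)))   # chi_{A+uu^T}

# Every convex combination lambda*q + (1-lambda)*r is real-rooted here.
for lam in np.linspace(0, 1, 11):
    roots = np.roots(lam * q + (1 - lam) * r)
    assert np.all(np.abs(roots.imag) < 1e-6)

# Corollary: the smaller of the two largest roots is at most the
# largest root of the averaged polynomial p = (q + r) / 2.
p = (q + r) / 2
max_p = np.max(np.roots(p).real)
max_q = np.max(np.roots(q).real)
max_r = np.max(np.roots(r).real)
assert min(max_q, max_r) <= max_p + 1e-9
```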
111 page.111 Full Circle Interlacing families 50/50 So what have we accomplished?
112 page.112 Full Circle Interlacing families 50/50 So what have we accomplished? We saw (from previous results) that real-rootedness gave inequalities on coefficients of polynomials (making them useful generating functions).
113 page.113 Full Circle Interlacing families 50/50 So what have we accomplished? We saw (from previous results) that real-rootedness gave inequalities on coefficients of polynomials (making them useful generating functions). We also saw (from previous results) that real-rootedness gave inequalities on the values of polynomials.
114 page.114 Full Circle Interlacing families 50/50 So what have we accomplished? We saw (from previous results) that real-rootedness gave inequalities on coefficients of polynomials (making them useful generating functions). We also saw (from previous results) that real-rootedness gave inequalities on the values of polynomials. But now (via interlacing families) we can use real-rootedness to relate the roots of expected polynomials with the possible realizations of the polynomial!
115 page.115 Full Circle So what have we accomplished? We saw (from previous results) that real-rootedness gave inequalities on coefficients of polynomials (making them useful generating functions). We also saw (from previous results) that real-rootedness gave inequalities on the values of polynomials. But now (via interlacing families) we can use real-rootedness to relate the roots of an expected polynomial to the roots of its possible realizations! How that could be useful remains to be seen...
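One concrete payoff of that last point can be sketched numerically (this example is not from the talk; it uses random data in place of the slides' matrices). For an interlacing family, at least one realization has its largest root bounded by the largest root of the expected polynomial. Here p = (1/2)f + (1/2)g plays the role of the expected polynomial over the two rank-one updates.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4

# Hypothetical data: random symmetric A and rank-one directions u, v.
M = rng.standard_normal((n, n))
A = (M + M.T) / 2
u = rng.standard_normal(n)
v = rng.standard_normal(n)

def charpoly(B):
    """Monic characteristic polynomial coefficients of B (highest degree first)."""
    return np.poly(np.linalg.eigvalsh(B))

f = charpoly(A + np.outer(v, v))   # realization 1
g = charpoly(A + np.outer(u, u))   # realization 2
p = 0.5 * f + 0.5 * g              # the expected polynomial

def maxroot(c):
    """Largest root of a real-rooted polynomial given by coefficients c."""
    return np.max(np.roots(c).real)

# Interlacing-family conclusion: some realization's largest root is
# at most the largest root of the expected polynomial.
assert min(maxroot(f), maxroot(g)) <= maxroot(p) + 1e-9
print("some realization's largest root <= largest root of E[chi]")
```

This is exactly the kind of statement that fails for arbitrary random polynomials, and it is the common interlacer that makes it hold here.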