Characteristic Polynomials of Real Symmetric Matrices Generated from Random Graphs

Anagha Deshmane¹

American Institute of Mathematics
360 Portage Avenue
Palo Alto, CA 94306

July 17, 2003

¹E-mail: anaghad@mit.com

Abstract

Eigenvalues are the roots of the characteristic polynomial of an n × n matrix. In this paper we use 4 × 4 real symmetric matrices to represent random graphs, and we study the eigenvalues of these matrices. We show that if each non-diagonal entry of a 4 × 4 matrix is chosen from a uniform distribution on the interval [0, α], where 0 ≤ α, then the eigenvalues of the matrix are rescaled by the factor α. We also show that if each non-diagonal entry of the matrix is chosen from a uniform distribution on the interval [β, β + 1], where 0 ≤ β, then roughly 3/4 of the eigenvalues will be concentrated around −β and roughly 1/4 of them will be concentrated around 3β. An investigation into the applications of eigenvalues is beyond the scope of this extended essay. However, studying the distribution of these eigenvalues is analogous to studying the roots of a general polynomial with real coefficients. By studying how the distribution of the eigenvalues shifts when the entries of the matrix are altered, we can understand how the roots of a polynomial shift when the coefficients of the polynomial are altered. Thus we expand the problem of the distribution of eigenvalues to encompass the roots of polynomials of the general form p(x) = a_n x^n + a_{n-1} x^{n-1} + ... + a_0. In this paper we quantify how much each coefficient can be shifted so that the real roots of the polynomial do not move outside a small window.

Contents

1 Background
  1.1 Random Graphs, d-Regular Graphs, and Their Matrices
  1.2 Weighted Matrices
  1.3 Eigenvalues

2 Investigation 1: 4 × 4 Matrices
  2.1 Rescaled Intervals
    2.1.1 Experiments
    2.1.2 Theory
  2.2 Shifted Intervals
    2.2.1 Experiments
    2.2.2 Theory

3 Investigation 2: n × n Matrices

4 Conclusion

Chapter 1

Background

1.1 Random Graphs, d-Regular Graphs, and Their Matrices

A graph is a collection of vertices (points) and edges (lines connecting the vertices). In a random graph, the vertices are randomly connected to each other. A d-regular graph is a type of random graph in which each vertex is randomly connected to exactly d other vertices. For example, in a 3-regular graph, each vertex is connected to 3 other vertices.

Suppose we make a 3-regular graph with 4 vertices. The only way to construct such a graph is to connect each vertex to every other vertex. Let us label the vertices Q, R, S, and T. The graph will look like this:

[Figure: the 3-regular graph on the four vertices Q, R, S, and T; every vertex is joined to every other vertex, forming the complete graph on four vertices.]

We can represent any graph with n vertices by an n × n real symmetric matrix (meaning that the entries in the matrix are real numbers, and the entries above the diagonal are reflected below the diagonal). This matrix is called the graph's adjacency matrix. Each column and each row of the adjacency matrix represents a vertex. A 1 is placed in a position if the two vertices that the position represents are connected; a 0 is placed in that position if the two vertices are not connected. The graphs we are investigating do not allow a vertex to be connected to itself, so we place zeros in every position on the diagonal of the adjacency matrix. In a d-regular graph there are d 1s in each row and d 1s in each column, because there are d connections to each vertex.

In our example of a 3-regular graph with 4 vertices, the graph's adjacency matrix will be a 4 × 4 real symmetric matrix. The first column and the first row represent the vertices connected to vertex Q, the second column and row represent the vertices connected to vertex R, the third column and row represent the vertices connected to vertex S, and the fourth column and row represent the vertices connected to vertex T. In general, the matrix will look like this:

M = \begin{pmatrix} (q,q) & (q,r) & (q,s) & (q,t) \\ (r,q) & (r,r) & (r,s) & (r,t) \\ (s,q) & (s,r) & (s,s) & (s,t) \\ (t,q) & (t,r) & (t,s) & (t,t) \end{pmatrix}

We will represent our graph by a real symmetric matrix with 0s on the diagonal and 1s above and below the diagonal:

M = \begin{pmatrix} 0 & 1 & 1 & 1 \\ 1 & 0 & 1 & 1 \\ 1 & 1 & 0 & 1 \\ 1 & 1 & 1 & 0 \end{pmatrix}
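The construction is easy to automate. The following sketch (our own illustration in Python/NumPy; the paper gives no code) builds this adjacency matrix and checks the 3-regularity condition:

```python
import numpy as np

# Build the adjacency matrix of the 3-regular graph on the vertices
# Q, R, S, T: every pair of distinct vertices is connected.
vertices = ["Q", "R", "S", "T"]
n = len(vertices)

M = np.zeros((n, n), dtype=int)
for i in range(n):
    for j in range(n):
        if i != j:          # no self-loops, so the diagonal stays 0
            M[i, j] = 1

assert (M == M.T).all()               # real symmetric
assert (M.sum(axis=1) == 3).all()     # d = 3 ones in every row
print(M)
```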

1.2 Weighted Matrices

Suppose our graph and its matrix represent a network of n computers. In this case it is more useful to describe the connection between a pair of vertices by a number other than 1. For example, each entry may represent the speed of the connection between two computers, a number that can be greater or less than 1. We therefore choose each non-diagonal entry in the matrix as a weighted value from a uniform distribution on various intervals.

A uniform distribution on a given interval assigns equal probability to subintervals of equal length within that interval. For example, if we have a uniform distribution on the interval [0, 1], the probability of observing x in a subinterval [a, b] equals b − a for any subinterval within [0, 1]. Thus, choosing a number from [0.1, 0.2] has the same probability as choosing from [0.48, 0.58]. Here we will study the cases in which each such entry of the matrix is chosen from uniform distributions on the intervals [0, α] and [β, β + 1], where α and β are real numbers.

1.3 Eigenvalues

Eigenvalues are useful in many areas of mathematics and physics. For example, eigenvalues play an important part in the physics of rotating bodies, and if we form a matrix from a random graph, the second eigenvalue reveals information about the graph's connectivity. For our purposes, however, eigenvalues are the solutions to an interesting subset of polynomials. By definition, λ is an eigenvalue of A if

A\vec{v} = \lambda\vec{v},   (1.1)

where A is an n × n (square) matrix and \vec{v} is a column vector with n components. We call \vec{v} an eigenvector; it must be a vector other than the zero vector. Matrix A must be square so that the product A\vec{v} is again a vector with n entries. The output vector is a multiple of \vec{v}, which means that the matrix A is a transformation that does not change the direction of \vec{v}; it only changes the vector's length. Note that not every vector is an eigenvector. Each square matrix has its own eigenvectors, which depend on the eigenvalues of the matrix.
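To make the setup concrete, here is a small NumPy sketch (ours; the helper name `random_weighted_matrix` is our own invention) that draws such a weighted matrix and numerically checks the defining equation (1.1) for one eigenpair:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_weighted_matrix(n=4, low=0.0, high=1.0):
    """A real symmetric n x n matrix with zero diagonal whose
    non-diagonal entries are drawn uniformly from [low, high]."""
    upper = np.triu(rng.uniform(low, high, size=(n, n)), k=1)
    return upper + upper.T    # reflect the above-diagonal entries below

A = random_weighted_matrix(4, 0.0, 1.0)
lam, vecs = np.linalg.eigh(A)   # eigenpairs of a symmetric matrix

# Check A v = lambda v for the first eigenpair.
v = vecs[:, 0]
assert np.allclose(A @ v, lam[0] * v)
print("eigenvalues:", lam)
```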

If we are given a square matrix, we can compute its eigenvalues by generating the characteristic polynomial of the matrix. Referring to Equation 1.1, we first collect like terms:

0 = A\vec{v} - \lambda\vec{v} = (A - \lambda I)\vec{v}.

We must multiply λ by an n × n identity matrix, I, so that A − λI is itself an n × n matrix. Because A − λI sends the nonzero vector \vec{v} to zero, it is singular, so its determinant, a polynomial of degree n in λ, must equal zero. By solving for λ, we can find the n roots of this characteristic polynomial, which are the eigenvalues of the matrix A.

Let M be a 4 × 4 real symmetric matrix formed from a weighted 3-regular graph:

M = \begin{pmatrix} 0 & a & b & c \\ a & 0 & d & e \\ b & d & 0 & f \\ c & e & f & 0 \end{pmatrix}.

To find the eigenvalues of matrix M, we must find and solve its characteristic polynomial, P(λ). Thus we have

P(\lambda) = \det(M - \lambda I)
= \begin{vmatrix} -\lambda & a & b & c \\ a & -\lambda & d & e \\ b & d & -\lambda & f \\ c & e & f & -\lambda \end{vmatrix}
= \lambda^4 - \lambda^2(a^2 + b^2 + c^2 + d^2 + e^2 + f^2) - 2\lambda(abd + ace + bcf + def) + (c^2 d^2 - 2bcde + b^2 e^2 - 2acdf - 2abef + a^2 f^2).   (1.2)

The non-diagonal entries of the matrix, a, b, c, d, e, and f, make up the coefficients of the polynomial P(λ).
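Equation 1.2 can be verified numerically: the roots of the quartic with these coefficients should coincide with the eigenvalues of M computed directly. A sketch of that check (ours, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
a, b, c, d, e, f = rng.uniform(0.0, 1.0, size=6)

M = np.array([[0, a, b, c],
              [a, 0, d, e],
              [b, d, 0, f],
              [c, e, f, 0]])

# Coefficients of P(lambda) from Equation 1.2, highest power first;
# the cubic coefficient is 0 because the diagonal (the trace) is 0.
coeffs = [1.0,
          0.0,
          -(a**2 + b**2 + c**2 + d**2 + e**2 + f**2),
          -2 * (a*b*d + a*c*e + b*c*f + d*e*f),
          c**2*d**2 - 2*b*c*d*e + b**2*e**2
          - 2*a*c*d*f - 2*a*b*e*f + a**2*f**2]

roots = np.sort(np.roots(coeffs).real)   # roots of the characteristic quartic
eigs = np.sort(np.linalg.eigvalsh(M))    # eigenvalues computed directly
assert np.allclose(roots, eigs)
```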

Suppose we take the non-diagonal entries from a uniform distribution on the interval [0, 1]. If we look at a sufficiently large number of matrices, what does the projection of the eigenvalues onto the real axis look like? Below is a histogram of the distribution.¹ We will use this histogram to compare results for cases in which a, b, c, d, e, and f are taken from uniform distributions on different intervals.

[Histogram: distribution of the eigenvalues when a, b, c, d, e, and f are drawn from [0, 1]; the eigenvalues range roughly from −1 to 2, with bin counts up to about 175.]

¹The range over which the eigenvalues vary is divided into many bins of equal size, each of which represents a subinterval of the range. The histogram depicts how many eigenvalues fall into each bin.
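The paper's experiments were run in Mathematica; a rough Python equivalent (our sketch; the helper name `eigenvalue_sample` and the trial count are our choices) collects the eigenvalues of many random matrices and histograms them:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)

def eigenvalue_sample(trials=2000, low=0.0, high=1.0):
    """Eigenvalues of `trials` random 4 x 4 weighted adjacency matrices
    with non-diagonal entries drawn uniformly from [low, high]."""
    eigs = []
    for _ in range(trials):
        upper = np.triu(rng.uniform(low, high, size=(4, 4)), k=1)
        eigs.extend(np.linalg.eigvalsh(upper + upper.T))
    return np.array(eigs)

eigs = eigenvalue_sample(low=0.0, high=1.0)
plt.hist(eigs, bins=60)
plt.xlabel("eigenvalue")
plt.ylabel("count")
plt.show()
```

Re-running with `high=2.0`, `high=10.0`, or `low=7.0, high=8.0` reproduces the rescaled and shifted experiments of Chapter 2.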

Chapter 2

Investigation 1: 4 × 4 Matrices

We will begin by studying 4 × 4 real symmetric matrices such as the one shown below:

M = \begin{pmatrix} 0 & a & b & c \\ a & 0 & d & e \\ b & d & 0 & f \\ c & e & f & 0 \end{pmatrix}   (2.1)

The 4 eigenvalues of this matrix react in distinct ways when the non-diagonal entries a, b, c, d, e, and f are chosen from uniform distributions on the intervals [0, α] and [β, β + 1], where 0 ≤ α and 0 ≤ β. If the interval is rescaled by a factor α, then the eigenvalues are rescaled by the same factor. If the interval is shifted to [β, β + 1], then roughly 3/4 of the eigenvalues will be concentrated around −β and roughly 1/4 will be concentrated around 3β.

2.1 Rescaled Intervals

We have already seen the distribution of the eigenvalues when the non-diagonal values are taken from the interval [0, 1]. We will now examine what happens when the interval is rescaled by a factor α, where 0 ≤ α, so that the non-diagonal values are chosen from the interval [0, α].

2.1.1 Experiments

Using Mathematica, we can average over a finite number of weighted graphs, form their corresponding real symmetric 4 × 4 matrices that are structured like

matrix M, and observe how the projection of the matrices' eigenvalues onto the real axis is altered when the non-diagonal entries are taken from rescaled intervals.

Consider the cases in which we take each non-diagonal entry in matrix M from a uniform distribution on the intervals [0, 2] and [0, 10]. Below are histograms for these matrices.

[Histograms: eigenvalue distributions for entries drawn from [0, 2] (left, eigenvalues roughly between −2 and 4) and from [0, 10] (right, eigenvalues roughly between −10 and 20).]

The two graphs differ only slightly in shape and have roughly the same number of eigenvalues in each bin. However, the overall range of the eigenvalues changes, as do the bin sizes. In the first histogram, the range of the eigenvalues is rescaled by a factor of 2; in the second, by a factor of 10. The same results occur if the interval from which a, b, c, d, e, and f are chosen is rescaled by any real number greater than or equal to zero.

2.1.2 Theory

Assume we rescale the interval of each non-diagonal entry in our 4 × 4 adjacency matrix by a factor α, so that the entries are chosen from a uniform distribution on [0, α]. Then the eigenvalues are rescaled by the factor α.

Recall that each n × n matrix is associated with a characteristic polynomial of degree n, similar to that in Equation 1.2. The n roots of the polynomial are the eigenvalues of the matrix. The polynomial for our 4 × 4 matrix is

P(\lambda) = \lambda^4 - \lambda^2(a^2 + b^2 + c^2 + d^2 + e^2 + f^2) - 2\lambda(abd + ace + bcf + def) + (c^2 d^2 - 2bcde + b^2 e^2 - 2acdf - 2abef + a^2 f^2).

Note that this polynomial does not have a cubic term. The characteristic polynomial of an n × n matrix is given by

\det(A - \lambda I) = \lambda^n - \mathrm{Tr}(A)\lambda^{n-1} + \cdots,

where Tr(A) is the trace of the matrix (the sum of all its diagonal elements). In an adjacency matrix each diagonal element is zero, so Tr(M) = 0, and therefore there is no λ^{n−1} term in the polynomial. For the sake of simplicity, we can rewrite the expression by renaming the coefficients:

P(\lambda) = C_0 + C_1\lambda + C_2\lambda^2 + \lambda^4,

where C_0, C_1, and C_2 are polynomials of degree 4, 3, and 2 (respectively) in the entries a, b, c, d, e, and f.

Now suppose that each of the non-diagonal values a, b, c, d, e, and f in matrix M is chosen from a uniform distribution on the interval [0, α]. Writing a as αa′, b as αb′, c as αc′, and so on, where a′, b′, ..., f′ are drawn from the interval [0, 1], we see that

C_i(a, b, c, d, e, f) = \alpha^{4-i} C_i(a', b', c', d', e', f').

We now have

P(\lambda) = C_0'\alpha^4 + C_1'\alpha^3\lambda + C_2'\alpha^2\lambda^2 + \lambda^4,

where C_i' = C_i(a', b', c', d', e', f'). We want to determine the eigenvalues of the matrix, so we equate P(λ) to zero and manipulate the expression to find the roots:

0 = C_0'\alpha^4 + C_1'\alpha^3\lambda + C_2'\alpha^2\lambda^2 + \lambda^4
0 = \frac{C_0'\alpha^4 + C_1'\alpha^3\lambda + C_2'\alpha^2\lambda^2 + \lambda^4}{\alpha^4} = C_0' + C_1'\left(\frac{\lambda}{\alpha}\right) + C_2'\left(\frac{\lambda}{\alpha}\right)^2 + \left(\frac{\lambda}{\alpha}\right)^4.

The final expression is exactly the characteristic polynomial of a matrix whose entries are drawn from [0, 1], evaluated at λ/α; its roots are the values λ/α. The eigenvalues λ are therefore rescaled by the factor α. Thus, when each non-diagonal value of a 4 × 4 real symmetric matrix is rescaled by a factor α, the eigenvalues are rescaled by the same factor. This result is easily reproduced for an n × n matrix.
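As a quick numerical confirmation (our own sketch, not part of the paper): drawing entries from [0, α] is the same as multiplying a [0, 1]-entry matrix by α, and the eigenvalues scale by exactly that factor.

```python
import numpy as np

rng = np.random.default_rng(3)
alpha = 7.5

# Entries a', ..., f' drawn from [0, 1]; rescaling every entry by alpha
# gives a matrix whose entries lie in [0, alpha].
upper = np.triu(rng.uniform(0.0, 1.0, size=(4, 4)), k=1)
M1 = upper + upper.T           # symmetric, zero diagonal, entries in [0, 1]
M_alpha = alpha * M1           # entries in [0, alpha]

# The eigenvalues are rescaled by exactly the same factor alpha.
assert np.allclose(np.linalg.eigvalsh(M_alpha),
                   alpha * np.linalg.eigvalsh(M1))
```

The same identity explains the experiments above: dividing the eigenvalues collected for [0, 2] or [0, 10] by 2 or 10 recovers the [0, 1] distribution.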

2.2 Shifted Intervals

Let us return to matrix M in Equation 2.1. Now we will choose the non-diagonal entries a, b, c, d, e, and f from a uniform distribution that is shifted from its original position on the interval [0, 1] to a new position on the interval [β, β + 1], where 0 ≤ β.

2.2.1 Experiments

We again use Mathematica to average over a finite number of weighted graphs, form the corresponding real symmetric 4 × 4 matrices structured like matrix M, and observe how the projection of the matrices' eigenvalues onto the real axis is altered when the non-diagonal entries are taken from shifted intervals. If we shift the range of these entries (which also make up the coefficients of the characteristic polynomial of the matrix) to the interval [7, 8] or [100, 101], we observe the projection of the eigenvalues onto the real axis separating into two masses:

[Histograms: eigenvalue distributions for entries drawn from [7, 8] (left) and from [100, 101] (right); in each case the eigenvalues separate into a large mass below zero and a smaller mass far to the right.]

Although it may look as if the sections of the histograms are converging to certain values, a rescaled graph shows otherwise. In fact, the two sections are nearly the same in size and shape as they were when a, b, c, d, e, and f were taken from the interval [0, 1]; they have simply moved apart. Below are "zoomed-in" views of the two sections of the histogram of eigenvalues generated from matrices whose non-diagonal values are taken from a uniform distribution on the interval [100, 101].

[Histograms: zoomed views of the two sections for entries drawn from [100, 101]; the left mass lies between about −101.5 and −99.5, and the right mass between about 301 and 302.5.]

We see that when a, b, c, d, e, and f are taken from the interval [100, 101], one section of the eigenvalue distribution is shifted from approximately −1 to approximately −100, and the other shifts from approximately 3 to approximately 300. The same shifts in the distribution of the eigenvalues occur if a, b, c, d, e, and f are taken to be β plus some number chosen from a uniform distribution on the interval [0, 1], as β gets large.
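As a numerical check of the 3/4–1/4 split (our own sketch; the window half-width of 5 is an arbitrary choice that comfortably contains each mass):

```python
import numpy as np

rng = np.random.default_rng(4)
beta = 100.0
near_minus_beta = near_3beta = 0

for _ in range(2000):
    upper = np.triu(rng.uniform(beta, beta + 1.0, size=(4, 4)), k=1)
    for lam in np.linalg.eigvalsh(upper + upper.T):
        if abs(lam + beta) < 5:          # near -beta
            near_minus_beta += 1
        elif abs(lam - 3 * beta) < 5:    # near 3*beta
            near_3beta += 1

# Expect roughly 3/4 of the 8000 eigenvalues near -beta, 1/4 near 3*beta.
print(near_minus_beta, near_3beta)       # about 6000 and 2000
```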

There are four solutions to this equation: λ β = 1, 1, 1, 3. The solutions to this polynomial make sense. If the non-diagonal values in our matrix are chosen on the interval [β, β + 1], they do not vary much from β, and the eigenvalues remain close to those corresponding to the eigenvalues found if a, b, c, d, e, and f were exactly equal to β. Therefore, we would expect roughly 3 of the eigenvalues to be concentrated around β and roughly 1 of them to be 4 4 concentrated around 3β. Furthermore, as β gets very large, a random number chosen from a uniform distribution on the interval [0, 1] becomes small relative to β. Thus in the limit, if a, b, c, d, e, and f are chosen from β + k, where k is some random number chosen from a uniform distribution on the interval [0, 1], the eigenvalues behave as if the non-diagonal values were chosen from the interval [β, β + 1]. 12

Chapter 3

Investigation 2: n × n Matrices

If we have a 4 × 4 matrix, it is relatively easy to observe how the eigenvalues shift when the non-diagonal entries of the matrix are chosen from altered intervals. It is more difficult to do so for a larger matrix, such as a 10 × 10 matrix. If each non-diagonal entry is taken from a uniform distribution on an interval that has been rescaled by a certain factor, then the eigenvalues will be rescaled by that factor. But if we take each non-diagonal entry from a uniform distribution on a shifted interval, how much should we expect the eigenvalues to move?

The determinant of a large real matrix yields a polynomial of large degree with real coefficients. For an n × n matrix it has the form

p(x) = a_n x^n + a_{n-1} x^{n-1} + \cdots + a_0.

Assume that at some real simple root r there are two points very close to the root, d₁ and d₂, with d₁ < r < d₂, p(d₁) < 0, and p(d₂) > 0. Assume also that we have a new polynomial

q(x) = (a_n + s_n) x^n + (a_{n-1} + s_{n-1}) x^{n-1} + \cdots + (a_0 + s_0),

where s_n, s_{n−1}, ..., s_0 are small amounts by which we shift each coefficient of the original polynomial p(x).

Theorem 3.0.1. If r is a real simple root (a root of order 1) of p(x), and we change the coefficients by small amounts s_i satisfying

|s_i| < \frac{|p(d_1)|}{(1 + |d_1|^n)(n + 1)}

and

|s_i| < \frac{|p(d_2)|}{(1 + |d_2|^n)(n + 1)},

then the new polynomial has a real root near r.

Proof. First consider d₁. Expanding q(x), we have

q(x) = a_n x^n + s_n x^n + a_{n-1} x^{n-1} + s_{n-1} x^{n-1} + \cdots + a_0 + s_0 = p(x) + s_n x^n + s_{n-1} x^{n-1} + \cdots + s_0.   (3.1)

We want q(d₁) to remain negative, which will be true if

|p(d_1)| > |s_n d_1^n + s_{n-1} d_1^{n-1} + \cdots + s_0|.

If we considered only the last term on the right-hand side, s₀, we would need |s₀| < |p(d₁)|. If we considered only the s₁d₁ term, we would need |s₁| < |p(d₁)| / |d₁|. In Equation 3.1 we want to ensure that each term in the sum is smaller than |p(d₁)|, so we divide |p(d₁)| by the highest power of |d₁| when bounding each s_i. However, if |d₁|ⁿ ≤ 1, a term could still be as large as or larger than |p(d₁)|; making the denominator of each bound greater than 1 ensures each term stays smaller than |p(d₁)|. Therefore we add 1 to |d₁|ⁿ in the denominator. Also, Equation 3.1 has n + 1 terms added to p(d₁); to ensure that the sum of all the s_i terms is less than |p(d₁)|, we also divide by n + 1. Thus q(d₁) remains negative if each s_i satisfies

|s_i| < \frac{|p(d_1)|}{(1 + |d_1|^n)(n + 1)}.   (3.2)

Now consider d₂. We want q(d₂) > 0, so in Equation 3.1 we want the sum of the s_i terms to be less than p(d₂) in absolute value. Using the same process as above, q(d₂) remains positive if

|s_i| < \frac{|p(d_2)|}{(1 + |d_2|^n)(n + 1)}.   (3.3)

Since q(d₁) < 0 < q(d₂), the intermediate value theorem gives a real root of q between d₁ and d₂, that is, near r. □
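A worked example (ours; the cubic p(x) = (x − 1)(x − 2)(x − 4) and the points d₁ = 0.9, d₂ = 1.1 are hypothetical choices) computes the bound and confirms that the sign change, and hence the root, survives:

```python
import numpy as np

rng = np.random.default_rng(5)

# p(x) = (x - 1)(x - 2)(x - 4) = x^3 - 7x^2 + 14x - 8, simple roots 1, 2, 4.
p = np.poly1d([1.0, -7.0, 14.0, -8.0])
n = p.order                        # degree n = 3

d1, d2 = 0.9, 1.1                  # points straddling the root r = 1
assert p(d1) < 0 < p(d2)

# Bound on each |s_i| from Theorem 3.0.1 (Equations 3.2 and 3.3).
bound = min(abs(p(d1)) / ((1 + abs(d1)**n) * (n + 1)),
            abs(p(d2)) / ((1 + abs(d2)**n) * (n + 1)))

# Shift all n + 1 coefficients by amounts within the bound.
s = rng.uniform(-bound, bound, size=n + 1)
q = np.poly1d(p.coeffs + s)

# The sign change persists, so q still has a root in (d1, d2).
assert q(d1) < 0 < q(d2)
print("allowed shift per coefficient:", bound)
```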

Theorem 3.0.2. Assume p(x) has n real simple roots. If we change the coefficients by small amounts s_i satisfying

|s_i| < \frac{|p(d_1)|}{(1 + |d_1|^n)(n + 1)} \quad \text{and} \quad |s_i| < \frac{|p(d_2)|}{(1 + |d_2|^n)(n + 1)}

for the points d₁ < r < d₂ chosen around every root r, then the new polynomial has n real simple roots, each near a root of p(x).

Proof. Each root of p(x) is associated with two bounds on the s_i, given by Equations 3.2 and 3.3. If each |s_i| is less than the minimum of the 2n values generated by these equations, then q(d₁) will remain negative and q(d₂) will remain positive at every root. In fact, if d₁ and d₂ are chosen as d₁ = r − Δr and d₂ = r + Δr, where Δr is less than 1/4 of the difference between two adjacent roots, then each root of q(x) will be less than Δr away from a root of p(x), and there will not be more than one root in each interval [d₁, d₂]. □

Thus, shifting the coefficients of a polynomial by a small amount does not have a large effect on its roots. This result explains why choosing the non-diagonal entries of matrix M (a, b, c, d, e, and f) as β + k, where k is chosen from a uniform distribution on the interval [0, 1], produces nearly the same eigenvalues as choosing each non-diagonal entry to be exactly β. Shifting the coefficients of the characteristic polynomial from β to β plus some small amount does not significantly change the distribution of the roots (the eigenvalues), so the distribution for entries chosen from [β, β + 1] stays concentrated near −β and 3β.
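Extending the same hypothetical example to all three roots illustrates the uniform bound of Theorem 3.0.2 (our sketch; Δr = 0.2 satisfies the quarter-gap condition, since the smallest gap between roots is 1):

```python
import numpy as np

p = np.poly1d([1.0, -7.0, 14.0, -8.0])   # roots 1, 2, 4
n = p.order
roots = np.array([1.0, 2.0, 4.0])
dr = 0.2                                  # less than 1/4 of the smallest gap

# Minimum of the 2n bounds from Equations 3.2 and 3.3 over all roots.
bounds = [abs(p(d)) / ((1 + abs(d)**n) * (n + 1))
          for r in roots for d in (r - dr, r + dr)]
s_max = min(bounds)

# Shift every coefficient by just under the uniform bound.
q = np.poly1d(p.coeffs + 0.9 * s_max * np.array([1.0, -1.0, 1.0, -1.0]))

# Each window [r - dr, r + dr] still contains exactly one sign change.
for r in roots:
    assert q(r - dr) * q(r + dr) < 0
print("uniform bound on |s_i|:", s_max)
```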

Chapter 4

Conclusion

What began as an investigation of random d-regular graphs and their matrices has evolved into a much broader investigation of a certain class of polynomials. We first observed how rescaling or shifting the entries of a 4 × 4 real symmetric matrix affects the eigenvalues of the matrix. It should be noted that our results for rescaled intervals hold for any n × n matrix, whereas our results for shifted intervals are specific to 4 × 4 matrices.

Because the eigenvalues are the roots of a characteristic polynomial, our observations of the behavior of the eigenvalues under certain conditions are analogous to the behavior of the roots of a polynomial under similar conditions. If we change the coefficients of a polynomial, it is possible that some real roots will become complex, and that some complex roots will become real. For example, if there were a double root, shifting the coefficients can yield either two distinct real roots or two complex conjugate roots. It can be proven that the complex roots remain near the original double root, but the proof requires tools from complex analysis.

By expanding our investigation to general polynomials, however, we can now quantify how much each coefficient of a polynomial can safely be shifted so that the roots stay within a desired range. This is a handy tool that can be used in many situations involving similar polynomials.