Eigenvalues and eigenvectors of a matrix


1 Eigenvalues and eigenvectors of a matrix

Definition: If A is an n×n matrix and there exist a real number λ and a non-zero column vector V such that AV = λV, then λ is called an eigenvalue of A and V is called an eigenvector corresponding to the eigenvalue λ.

Example: Consider the matrix A = [2 1; 2 3]. Note that

[2 1; 2 3] (1, 2)^T = 4 (1, 2)^T

so the number 4 is an eigenvalue of A and the column vector (1, 2)^T is an eigenvector corresponding to 4.

Dana Mackey (DIT) Numerical Methods II 1 / 23
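The defining relation AV = λV can be checked directly in code. A minimal sketch in plain Python (no libraries; the helper name mat_vec is illustrative), using the matrix and eigenvector from the example above:

```python
def mat_vec(A, v):
    """Multiply a matrix (list of rows) by a column vector."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

A = [[2, 1],
     [2, 3]]
v = [1, 2]      # candidate eigenvector
lam = 4         # candidate eigenvalue

print(mat_vec(A, v))           # [4, 8]
print([lam * x for x in v])    # [4, 8] -> AV = λV, so (4, v) is an eigenpair
```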

2 Finding eigenvalues and eigenvectors for a given matrix A

1. The eigenvalues of A are calculated by solving the characteristic equation of A: det(A − λI) = 0.
2. Once the eigenvalues of A have been found, the eigenvectors corresponding to each eigenvalue λ can be determined by solving the matrix equation AV = λV.

Example: Find the eigenvalues of A = [2 1; 2 3].

3 Example: The eigenvalues of A = [2 1; 2 3] are 1 and 4, which is seen by solving the quadratic equation

(2 − λ)(3 − λ) − 2 = λ^2 − 5λ + 4 = 0.

To find the eigenvectors corresponding to 1 we write

[2 1; 2 3] (V1, V2)^T = 1 (V1, V2)^T

and solve

2V1 + V2 = V1
2V1 + 3V2 = V2.

This gives a single independent equation, V1 + V2 = 0, so any vector of the form V = α(1, −1)^T is an eigenvector corresponding to the eigenvalue 1.
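For a 2×2 matrix the characteristic equation is a quadratic in λ, so its roots follow from the trace, the determinant and the quadratic formula. A sketch for the matrix above (the helper name eigenvalues_2x2 is illustrative, not from the slides):

```python
import math

def eigenvalues_2x2(A):
    """Roots of det(A - λI) = λ^2 - tr(A)λ + det(A) = 0 (assumes real roots)."""
    tr = A[0][0] + A[1][1]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    disc = math.sqrt(tr * tr - 4 * det)
    return (tr - disc) / 2, (tr + disc) / 2

A = [[2, 1],
     [2, 3]]
print(eigenvalues_2x2(A))   # (1.0, 4.0): the characteristic equation is λ^2 - 5λ + 4 = 0
```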

4 Matrix diagonalisation

The eigenvalues and eigenvectors of a matrix have the following important property: if a square n×n matrix A has n linearly independent eigenvectors then it is diagonalisable, that is, it can be factorised as

A = PDP^(-1)

where D is the diagonal matrix containing the eigenvalues of A along the diagonal, also written as D = diag[λ1, λ2, ..., λn], and P is a matrix formed with the corresponding eigenvectors as its columns.

For example, matrices whose eigenvalues are distinct numbers are diagonalisable. Symmetric matrices are also diagonalisable.

5 This property has many important applications. For example, it can be used to simplify algebraic operations involving the matrix A, such as calculating powers:

A^m = (PDP^(-1))(PDP^(-1)) ... (PDP^(-1)) = P D^m P^(-1)

Note that if D = diag[λ1, λ2, ..., λn] then D^m = diag[λ1^m, λ2^m, ..., λn^m], which is very easy to calculate.

Example: The matrix A = [2 1; 2 3] has eigenvalues 1 and 4 with corresponding eigenvectors (1, −1)^T and (1, 2)^T. Then it can be diagonalised as follows:

A = [1 1; −1 2] [1 0; 0 4] [2/3 −1/3; 1/3 1/3]
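The identity A^m = P D^m P^(-1) can be verified numerically with the factorisation above. A sketch with ad-hoc nested-list helpers, comparing A^3 computed directly against P D^3 P^(-1):

```python
def mat_mul(X, Y):
    """Product of two matrices stored as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

P     = [[1, 1], [-1, 2]]
D     = [[1, 0], [0, 4]]
P_inv = [[2/3, -1/3], [1/3, 1/3]]

A = mat_mul(mat_mul(P, D), P_inv)        # reconstruct A = P D P^-1 = [[2,1],[2,3]]
A3_direct = mat_mul(mat_mul(A, A), A)    # A^3 by repeated multiplication

D3 = [[1**3, 0], [0, 4**3]]              # D^3 just cubes the diagonal entries
A3_diag = mat_mul(mat_mul(P, D3), P_inv) # A^3 = P D^3 P^-1

print(A3_direct)
print(A3_diag)   # both approximate [[22, 21], [42, 43]]
```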

6 The power and inverse power method

Overview:

The Gerschgorin circle theorem is used for locating the eigenvalues of a matrix.

The power method is used for approximating the dominant eigenvalue (that is, the eigenvalue largest in absolute value) of a matrix and its associated eigenvector.

The inverse power method is used for approximating the smallest eigenvalue of a matrix, or the eigenvalue nearest to a given value, together with the corresponding eigenvector.

7 The Gerschgorin Circle Theorem

Let A be an n×n matrix and define

R_i = Σ_{j ≠ i} |a_ij|, for each i = 1, 2, 3, ..., n.

Also consider the circles

C_i = {z ∈ ℂ : |z − a_ii| ≤ R_i}.

1. If λ is an eigenvalue of A then λ lies in one of the circles C_i.
2. If k of the circles C_i form a connected region R in the complex plane, disjoint from the remaining n − k circles, then the region contains exactly k eigenvalues.
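The circle centres and radii are read straight off the rows of the matrix. A sketch with a made-up 3×3 matrix (a hypothetical stand-in, chosen so its diagonal entries 1, 5, 9 and radii 1, 2, 3 match the worked example that follows):

```python
def gerschgorin(A):
    """Return (centre, radius) for each Gerschgorin circle of A."""
    circles = []
    for i, row in enumerate(A):
        radius = sum(abs(a) for j, a in enumerate(row) if j != i)
        circles.append((row[i], radius))
    return circles

# hypothetical matrix consistent with the example's circles
A = [[1.0, 0.5, 0.5],
     [1.0, 5.0, 1.0],
     [1.0, 2.0, 9.0]]
print(gerschgorin(A))   # [(1.0, 1.0), (5.0, 2.0), (9.0, 3.0)]
```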

8 Example

Consider a 3×3 matrix A with diagonal entries a11 = 1, a22 = 5 and a33 = 9. The radii of the Gerschgorin circles are

R_1 = |a12| + |a13| = 1; R_2 = |a21| + |a23| = 2; R_3 = |a31| + |a32| = 3

and the circles are

C_1 = {z ∈ ℂ : |z − 1| ≤ 1}; C_2 = {z ∈ ℂ : |z − 5| ≤ 2}; C_3 = {z ∈ ℂ : |z − 9| ≤ 3}.

Since C_1 is disjoint from the other circles, it must contain one of the eigenvalues, and this eigenvalue is real. As C_2 and C_3 overlap, their union must contain the other two eigenvalues.

9 The power method

The power method is an iterative technique for approximating the dominant eigenvalue of a matrix together with an associated eigenvector. Let A be an n×n matrix with eigenvalues satisfying

|λ1| > |λ2| ≥ |λ3| ≥ ... ≥ |λn|.

The eigenvalue with the largest absolute value, λ1, is called the dominant eigenvalue. Any eigenvector corresponding to λ1 is called a dominant eigenvector.

Let x^(0) be an n-vector. Assuming A has n linearly independent eigenvectors v1, ..., vn, it can be written as

x^(0) = α1 v1 + ... + αn vn.

10 Now construct the sequence x^(m) = Ax^(m−1), for m ≥ 1:

x^(1) = Ax^(0) = α1 λ1 v1 + ... + αn λn vn
x^(2) = Ax^(1) = α1 λ1^2 v1 + ... + αn λn^2 vn
...
x^(m) = Ax^(m−1) = α1 λ1^m v1 + ... + αn λn^m vn

and hence

x^(m) / λ1^m = α1 v1 + α2 (λ2/λ1)^m v2 + ... + αn (λn/λ1)^m vn

which gives

lim_{m→∞} x^(m) / λ1^m = α1 v1.

Hence, the sequence x^(m)/λ1^m converges to an eigenvector associated with the dominant eigenvalue.

11 The power method implementation

Choose the initial guess x^(0) such that max_i |x^(0)_i| = 1. For m ≥ 1, let

y^(m) = Ax^(m−1),   x^(m) = y^(m) / y^(m)_{p_m}

where p_m is the index of the component of y^(m) which has maximum absolute value. Note that

y^(m) = Ax^(m−1) ≈ λ1 x^(m−1)

and since x^(m−1)_{p_{m−1}} = 1 (by construction), it follows that

y^(m)_{p_{m−1}} ≈ λ1.
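The normalisation scheme above translates directly into code. A sketch of the power method for the earlier matrix A = [2 1; 2 3], whose dominant eigenvalue is 4 (the function name and iteration count are illustrative choices):

```python
def power_method(A, x0, iters=50):
    """Power method with max-component normalisation.
    Returns (dominant eigenvalue approx, eigenvector approx)."""
    x = x0[:]
    p = max(range(len(x)), key=lambda i: abs(x[i]))
    x = [xi / x[p] for xi in x]        # normalise so x[p] = 1
    lam = 0.0
    for _ in range(iters):
        y = [sum(a * xi for a, xi in zip(row, x)) for row in A]  # y = A x
        lam = y[p]                     # y_p ≈ λ1 since x[p] = 1
        p = max(range(len(y)), key=lambda i: abs(y[i]))
        x = [yi / y[p] for yi in y]    # renormalise: max component becomes 1
    return lam, x

A = [[2, 1],
     [2, 3]]
lam, v = power_method(A, [1.0, 1.0])
print(lam)   # ≈ 4.0
print(v)     # ≈ [0.5, 1.0], a multiple of the dominant eigenvector (1, 2)
```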

12 Note: Since the power method is an iterative scheme, a stopping condition could be given as

|λ^(m) − λ^(m−1)| < required error,

where λ^(m) is the dominant eigenvalue approximation at the m-th iteration, or

max_i |x^(m)_i − x^(m−1)_i| < required error.

Note: If the ratio |λ2/λ1| is small, the convergence rate is fast. If |λ1| = |λ2| then, in general, the power method does not converge.

Example: Consider a 3×3 matrix A. Show that its eigenvalues are λ1 = 12, λ2 = 3 and λ3 = 3, and calculate the associated eigenvectors. Use the power method to approximate the dominant eigenvalue and a corresponding eigenvector.

13 Matrix polynomials and inverses

Let A be an n×n matrix with eigenvalues λ1, λ2, ..., λn and associated eigenvectors v1, v2, ..., vn.

1. Let p(x) = a0 + a1 x + ... + am x^m be a polynomial of degree m. Then the eigenvalues of the matrix

B = p(A) = a0 I + a1 A + ... + am A^m

are p(λ1), p(λ2), ..., p(λn), with associated eigenvectors v1, v2, ..., vn.

2. If det(A) ≠ 0 then the inverse matrix A^(-1) has eigenvalues 1/λ1, 1/λ2, ..., 1/λn, with associated eigenvectors v1, v2, ..., vn.
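Property 1 can be checked on the running example A = [2 1; 2 3]. With the illustrative choice p(x) = x^2 + 1, the matrix B = A^2 + I should have eigenvalues p(1) = 2 and p(4) = 17, with the same eigenvectors (1, −1)^T and (1, 2)^T:

```python
def mat_vec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2, 1], [2, 3]]
I = [[1, 0], [0, 1]]

# B = p(A) = A^2 + I
A2 = mat_mul(A, A)
B = [[A2[i][j] + I[i][j] for j in range(2)] for i in range(2)]

print(mat_vec(B, [1, -1]))   # [2, -2]  = p(1) * (1, -1)
print(mat_vec(B, [1, 2]))    # [17, 34] = p(4) * (1, 2)
```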

14 The inverse power method

Let A be an n×n matrix with eigenvalues λ1, λ2, ..., λn and associated eigenvectors v1, v2, ..., vn. Let q be a number for which det(A − qI) ≠ 0 (so q is not an eigenvalue of A). Let B = (A − qI)^(-1), so the eigenvalues of B are given by

μ1 = 1/(λ1 − q), μ2 = 1/(λ2 − q), ..., μn = 1/(λn − q).

If we know that λk is the eigenvalue of A that is closest to the number q, then μk is the dominant eigenvalue of B and so it can be determined using the power method.
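In the simplest form one builds B = (A − qI)^(-1) explicitly and runs the power method on it. A sketch for the 2×2 example A = [2 1; 2 3], using the illustrative shift q = 0.5 so that the eigenvalue of A nearest q, namely λ = 1, is recovered via λ = 1/μ + q:

```python
def inverse_2x2(M):
    """Explicit inverse of a 2x2 matrix (assumes det != 0)."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[ M[1][1] / det, -M[0][1] / det],
            [-M[1][0] / det,  M[0][0] / det]]

def power_method(A, x0, iters=100):
    x = x0[:]
    lam = 0.0
    for _ in range(iters):
        y = [sum(a * xi for a, xi in zip(row, x)) for row in A]
        p = max(range(len(y)), key=lambda i: abs(y[i]))
        lam, x = y[p], [yi / y[p] for yi in y]
    return lam, x

A = [[2, 1], [2, 3]]
q = 0.5                                   # shift near the eigenvalue we want
B = inverse_2x2([[A[0][0] - q, A[0][1]],
                 [A[1][0],     A[1][1] - q]])
mu, v = power_method(B, [1.0, 0.0])       # dominant eigenvalue of B is μ = 1/(λ - q)
lam = 1 / mu + q                          # shift back: λ = 1/μ + q
print(lam)   # ≈ 1.0, the eigenvalue of A closest to q
```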

15 Example

Consider a 3×3 matrix A. Use Gerschgorin circles to obtain an estimate for one of the eigenvalues of A and then get a more accurate approximation using the inverse power method. (Note: the three eigenvalues lie close to −4, 3 and 15.)

16 The Gerschgorin circles are

|z + 4| ≤ 2, |z − 3| ≤ 1 and |z − 15| ≤ 2,

so they are all disjoint! There is one eigenvalue in each of the three circles, so the eigenvalues lie close to −4, 3 and 15.

Let λ be the eigenvalue closest to q = 3 and let B = (A − 3I)^(-1). Then μ = 1/(λ − 3) is the dominant eigenvalue of B and can be determined using the power method, together with an associated eigenvector v. Note that, once the desired approximation μ^(n), v^(n) has been obtained, we must calculate

λ^(n) = 1/μ^(n) + 3.

The associated eigenvector for λ is the same as the one calculated for μ.

17 Smallest eigenvalue

Finding the eigenvalue of A that is smallest in magnitude is equivalent to finding the dominant eigenvalue of the matrix B = A^(-1). (This is the inverse power method with q = 0.)

Example: Consider a 3×3 matrix A. Use the inverse power method to find an approximation for the smallest eigenvalue of A. (Note: The eigenvalues are 3, 4 and 5.)

18 The QR method for finding eigenvalues

We say that two n×n matrices A and B are orthogonally similar if there exists an orthogonal matrix Q such that

AQ = QB, or A = QBQ^T.

If A and B are orthogonally similar matrices then they have the same eigenvalues.

Proof: If λ is an eigenvalue of A with eigenvector v, that is, Av = λv, then, since B = Q^T AQ,

B(Q^T v) = Q^T A(QQ^T)v = Q^T Av = λ(Q^T v),

which means λ is an eigenvalue of B with eigenvector Q^T v.
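For a 2×2 matrix the eigenvalues are determined by the trace and determinant, so orthogonal similarity can be checked by comparing those two invariants. A small sketch, taking Q to be a rotation matrix (an arbitrary illustrative choice of orthogonal matrix):

```python
import math

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(M):
    return [[M[j][i] for j in range(2)] for i in range(2)]

A = [[2.0, 1.0], [2.0, 3.0]]
t = 0.3                                   # rotation angle (arbitrary)
Q = [[math.cos(t), -math.sin(t)],
     [math.sin(t),  math.cos(t)]]         # orthogonal: Q^T Q = I

B = mat_mul(mat_mul(transpose(Q), A), Q)  # B = Q^T A Q

trace = lambda M: M[0][0] + M[1][1]
det   = lambda M: M[0][0] * M[1][1] - M[0][1] * M[1][0]

# Same trace and determinant => same characteristic polynomial => same eigenvalues
print(trace(A), trace(B))   # both ≈ 5
print(det(A), det(B))       # both ≈ 4
```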

19 To find the eigenvalues of a matrix A using the QR algorithm, we generate a sequence of matrices A^(m) which are orthogonally similar to A (and so have the same eigenvalues), and which converge to a matrix whose eigenvalues are easily found.

If the matrix A is symmetric and tridiagonal then the sequence of QR iterations converges to a diagonal matrix, so its eigenvalues can easily be read from the main diagonal.

20 The basic QR algorithm

We find the QR factorisation of the matrix A and then take the reverse-order product RQ to construct the first matrix in the sequence:

A = QR  ⇒  R = Q^T A
A^(1) = RQ  ⇒  A^(1) = Q^T AQ

It is easy to see that A and A^(1) are orthogonally similar and so have the same eigenvalues. We then find the QR factorisation of A^(1):

A^(1) = Q^(1) R^(1)  ⇒  A^(2) = R^(1) Q^(1)

This procedure is then continued to construct A^(2), A^(3), etc., and (if the original matrix is symmetric and tridiagonal) this sequence converges to a diagonal matrix. The diagonal values of each of the iteration matrices A^(m) can be considered as approximations of the eigenvalues of A.
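One QR step can be written out with Gram-Schmidt on the matrix columns. A sketch for a symmetric 2×2 matrix with eigenvalues 3 and 1 (a hypothetical example; the slides' matrices are different), iterating A ← RQ and watching the diagonal converge:

```python
import math

def qr_2x2(A):
    """QR factorisation of a 2x2 matrix by Gram-Schmidt on its columns
    (assumes the columns are linearly independent)."""
    a1 = [A[0][0], A[1][0]]                     # first column
    a2 = [A[0][1], A[1][1]]                     # second column
    n1 = math.hypot(*a1)
    q1 = [a1[0] / n1, a1[1] / n1]
    r12 = q1[0] * a2[0] + q1[1] * a2[1]
    u2 = [a2[0] - r12 * q1[0], a2[1] - r12 * q1[1]]
    n2 = math.hypot(*u2)
    q2 = [u2[0] / n2, u2[1] / n2]
    Q = [[q1[0], q2[0]], [q1[1], q2[1]]]
    R = [[n1, r12], [0.0, n2]]
    return Q, R

def qr_step(A):
    """One iteration: factor A = QR, return RQ (orthogonally similar to A)."""
    Q, R = qr_2x2(A)
    return [[sum(R[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2.0, 1.0], [1.0, 2.0]]     # symmetric, eigenvalues 3 and 1
for _ in range(20):
    A = qr_step(A)
print(A)   # diagonal ≈ [3, 1], off-diagonal entries ≈ 0
```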

21 Example

Consider a symmetric tridiagonal matrix A with known eigenvalues and apply the QR algorithm. Inspecting iterates such as A^(2) and A^(10) shows that the diagonal elements approximate the eigenvalues of A to 3 decimal places while the off-diagonal elements are converging to zero.

22 Remarks

1. Since the rate of convergence of the QR algorithm is quite slow (especially when the eigenvalues are closely spaced in magnitude), there are a number of variations of this algorithm (not discussed here) which can accelerate convergence.
2. The QR algorithm can be used for finding eigenvectors as well, but we will not cover this technique now.

Exercise: Calculate the first iteration of the QR algorithm for the matrix on the previous slide.

23 More examples

1. Let A be a given matrix. Calculate the eigenvalues of A (using the definition) and perform two iterations of the QR algorithm to approximate them.
2. Let A be a given 3×3 matrix with known eigenvalues. Perform one iteration of the QR algorithm.

24 Applications of eigenvalues and eigenvectors

1. Systems of differential equations

Recall that a square matrix A is diagonalisable if it can be factorised as A = PDP^(-1), where D is the diagonal matrix containing the eigenvalues of A along the diagonal, and P (called the modal matrix) has the corresponding eigenvectors as columns.

Consider the system of 1st-order linear differential equations

x′(t) = a11 x(t) + a12 y(t)
y′(t) = a21 x(t) + a22 y(t)

written in matrix form as X′(t) = AX(t), where X = (x, y)^T and A = (a_ij), i, j = 1, 2.

25 If A is diagonalisable then A = PDP^(-1), where D = diag(λ1, λ2), so the system becomes

P^(-1) X′(t) = D P^(-1) X(t).

If we let X̃(t) = P^(-1) X(t) then X̃′(t) = D X̃(t), which can be written as

x̃′(t) = λ1 x̃(t), ỹ′(t) = λ2 ỹ(t)

and is easy to solve:

x̃(t) = C1 e^(λ1 t), ỹ(t) = C2 e^(λ2 t),

where C1 and C2 are arbitrary constants. The final solution is then recovered from X(t) = P X̃(t).

26 Example: Consider the following linear predator-prey model, where x(t) represents a population of rabbits at time t and y(t) a population of foxes:

x′(t) = x(t) − 2y(t)
y′(t) = 3x(t) − 4y(t)

Solve the system using eigenvalues and determine how the two populations are going to evolve. Assume x(0) = 4, y(0) = 1.
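A sketch of the eigenvalue solution, assuming the model x′ = x − 2y, y′ = 3x − 4y (the signs are an assumption here): then A = [1 −2; 3 −4] has eigenvalues −1 and −2 with eigenvectors (1, 1)^T and (2, 3)^T, so X(t) = C1 e^(−t) (1, 1)^T + C2 e^(−2t) (2, 3)^T. The code fits the constants to x(0) = 4, y(0) = 1 and checks the ODE by finite differences:

```python
import math

# A = [[1, -2], [3, -4]] (signs assumed): eigenvalues -1, -2; eigenvectors (1,1), (2,3)
# General solution: x(t) = C1 e^{-t} + 2 C2 e^{-2t}
#                   y(t) = C1 e^{-t} + 3 C2 e^{-2t}
# Initial conditions x(0)=4, y(0)=1:  C1 + 2 C2 = 4,  C1 + 3 C2 = 1
C2 = 1 - 4          # subtracting the equations gives C2 = -3
C1 = 4 - 2 * C2     # back-substitution gives C1 = 10

def x(t): return C1 * math.exp(-t) + 2 * C2 * math.exp(-2 * t)
def y(t): return C1 * math.exp(-t) + 3 * C2 * math.exp(-2 * t)

print(x(0), y(0))            # 4.0 1.0

# Check x' = x - 2y at a sample point, via a centred finite difference
h, t0 = 1e-6, 0.7
dx = (x(t0 + h) - x(t0 - h)) / (2 * h)
print(abs(dx - (x(t0) - 2 * y(t0))) < 1e-4)   # True

# Both populations decay towards zero as t grows
print(x(10), y(10))
```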

27 Applications of eigenvalues and eigenvectors

2. Discrete age-structured population model

The Leslie model describes the growth of the female portion of a human or animal population. The females are divided into n age classes of equal duration L/n, where L is the population age limit. The initial number of females in each age group is given by

x^(0) = (x^(0)_1, x^(0)_2, ..., x^(0)_n)^T

where x^(0)_1 is the number of females aged 0 to L/n years, x^(0)_2 the number of females aged L/n to 2L/n, etc. This is called the initial age distribution vector.

28 The birth and death parameters which describe the future evolution of the population are given by

a_i = average number of daughters born to each female in age class i
b_i = fraction of females in age class i which survive to the next age class.

Note that a_i ≥ 0 for i = 1, 2, ..., n and 0 < b_i ≤ 1 for i = 1, 2, ..., n − 1.

Let x^(k) be the age distribution vector at time t_k (where k = 1, 2, ... and t_{k+1} − t_k = L/n). The Leslie model states that the distribution at time t_{k+1} is given by

x^(k+1) = L x^(k)

where L, the Leslie matrix, is defined as follows.

29 The Leslie matrix is

L = [ a1  a2  a3  ...  a_{n-1}  a_n
      b1  0   0   ...  0        0
      0   b2  0   ...  0        0
      ...
      0   0   0   ...  b_{n-1}  0 ]

and the matrix equation x^(k+1) = L x^(k) is equivalent to

x^(k+1)_1 = a1 x^(k)_1 + a2 x^(k)_2 + ... + an x^(k)_n
x^(k+1)_{i+1} = b_i x^(k)_i, i = 1, 2, ..., n − 1.

The Leslie matrix has the following properties:
1. It has a unique positive eigenvalue λ1, for which the corresponding eigenvector has only positive components;
2. This unique positive eigenvalue is the dominant eigenvalue;
3. The dominant eigenvalue λ1 represents the population growth rate, while the corresponding eigenvector gives the limiting age distribution.
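The recurrence x^(k+1) = L x^(k) is one matrix-vector product per time step. A sketch using a classic textbook Leslie matrix (a hypothetical stand-in, not the matrix from the slides) whose dominant eigenvalue is 3/2, so the long-run growth factor per step approaches 1.5:

```python
def mat_vec(L, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in L]

# Hypothetical 3-class Leslie matrix with dominant eigenvalue λ1 = 3/2:
# birth rates a = (0, 4, 3), survival fractions b = (1/2, 1/4)
L = [[0.0,  4.0, 3.0],
     [0.5,  0.0, 0.0],
     [0.0, 0.25, 0.0]]

x = [1000.0, 1000.0, 1000.0]       # initial age distribution vector
x = mat_vec(L, x)
print(x)                           # [7000.0, 500.0, 250.0] after one time step

# Iterating many steps, the growth factor per step approaches λ1
prev = x
for _ in range(60):
    prev, x = x, mat_vec(L, x)
print(x[0] / prev[0])              # ≈ 1.5
```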

30 Example

Suppose the age limit of a certain animal population is 15 years and divide the female population into 3 age groups, so the Leslie matrix L is 3×3. If there are initially 1000 females in each age class, find the age distribution after 15 years. Knowing that the dominant eigenvalue is λ1 = 3/2, find the long-term age distribution.


More information

Computational Optical Imaging - Optique Numerique. -- Deconvolution --

Computational Optical Imaging - Optique Numerique. -- Deconvolution -- Computational Optical Imaging - Optique Numerique -- Deconvolution -- Winter 2014 Ivo Ihrke Deconvolution Ivo Ihrke Outline Deconvolution Theory example 1D deconvolution Fourier method Algebraic method

More information

(Quasi-)Newton methods

(Quasi-)Newton methods (Quasi-)Newton methods 1 Introduction 1.1 Newton method Newton method is a method to find the zeros of a differentiable non-linear function g, x such that g(x) = 0, where g : R n R n. Given a starting

More information

Chapter 7 Nonlinear Systems

Chapter 7 Nonlinear Systems Chapter 7 Nonlinear Systems Nonlinear systems in R n : X = B x. x n X = F (t; X) F (t; x ; :::; x n ) B C A ; F (t; X) =. F n (t; x ; :::; x n ) When F (t; X) = F (X) is independent of t; it is an example

More information

Linear Algebra Done Wrong. Sergei Treil. Department of Mathematics, Brown University

Linear Algebra Done Wrong. Sergei Treil. Department of Mathematics, Brown University Linear Algebra Done Wrong Sergei Treil Department of Mathematics, Brown University Copyright c Sergei Treil, 2004, 2009, 2011, 2014 Preface The title of the book sounds a bit mysterious. Why should anyone

More information

U.C. Berkeley CS276: Cryptography Handout 0.1 Luca Trevisan January, 2009. Notes on Algebra

U.C. Berkeley CS276: Cryptography Handout 0.1 Luca Trevisan January, 2009. Notes on Algebra U.C. Berkeley CS276: Cryptography Handout 0.1 Luca Trevisan January, 2009 Notes on Algebra These notes contain as little theory as possible, and most results are stated without proof. Any introductory

More information

26. Determinants I. 1. Prehistory

26. Determinants I. 1. Prehistory 26. Determinants I 26.1 Prehistory 26.2 Definitions 26.3 Uniqueness and other properties 26.4 Existence Both as a careful review of a more pedestrian viewpoint, and as a transition to a coordinate-independent

More information

Polynomials. Teachers Teaching with Technology. Scotland T 3. Teachers Teaching with Technology (Scotland)

Polynomials. Teachers Teaching with Technology. Scotland T 3. Teachers Teaching with Technology (Scotland) Teachers Teaching with Technology (Scotland) Teachers Teaching with Technology T Scotland Polynomials Teachers Teaching with Technology (Scotland) POLYNOMIALS Aim To demonstrate how the TI-8 can be used

More information

( ) which must be a vector

( ) which must be a vector MATH 37 Linear Transformations from Rn to Rm Dr. Neal, WKU Let T : R n R m be a function which maps vectors from R n to R m. Then T is called a linear transformation if the following two properties are

More information

By choosing to view this document, you agree to all provisions of the copyright laws protecting it.

By choosing to view this document, you agree to all provisions of the copyright laws protecting it. This material is posted here with permission of the IEEE Such permission of the IEEE does not in any way imply IEEE endorsement of any of Helsinki University of Technology's products or services Internal

More information

Factor analysis. Angela Montanari

Factor analysis. Angela Montanari Factor analysis Angela Montanari 1 Introduction Factor analysis is a statistical model that allows to explain the correlations between a large number of observed correlated variables through a small number

More information

Die ganzen zahlen hat Gott gemacht

Die ganzen zahlen hat Gott gemacht Die ganzen zahlen hat Gott gemacht Polynomials with integer values B.Sury A quote attributed to the famous mathematician L.Kronecker is Die Ganzen Zahlen hat Gott gemacht, alles andere ist Menschenwerk.

More information

(67902) Topics in Theory and Complexity Nov 2, 2006. Lecture 7

(67902) Topics in Theory and Complexity Nov 2, 2006. Lecture 7 (67902) Topics in Theory and Complexity Nov 2, 2006 Lecturer: Irit Dinur Lecture 7 Scribe: Rani Lekach 1 Lecture overview This Lecture consists of two parts In the first part we will refresh the definition

More information

Copyrighted Material. Chapter 1 DEGREE OF A CURVE

Copyrighted Material. Chapter 1 DEGREE OF A CURVE Chapter 1 DEGREE OF A CURVE Road Map The idea of degree is a fundamental concept, which will take us several chapters to explore in depth. We begin by explaining what an algebraic curve is, and offer two

More information

! Solve problem to optimality. ! Solve problem in poly-time. ! Solve arbitrary instances of the problem. #-approximation algorithm.

! Solve problem to optimality. ! Solve problem in poly-time. ! Solve arbitrary instances of the problem. #-approximation algorithm. Approximation Algorithms 11 Approximation Algorithms Q Suppose I need to solve an NP-hard problem What should I do? A Theory says you're unlikely to find a poly-time algorithm Must sacrifice one of three

More information

x + y + z = 1 2x + 3y + 4z = 0 5x + 6y + 7z = 3

x + y + z = 1 2x + 3y + 4z = 0 5x + 6y + 7z = 3 Math 24 FINAL EXAM (2/9/9 - SOLUTIONS ( Find the general solution to the system of equations 2 4 5 6 7 ( r 2 2r r 2 r 5r r x + y + z 2x + y + 4z 5x + 6y + 7z 2 2 2 2 So x z + y 2z 2 and z is free. ( r

More information

2.3. Finding polynomial functions. An Introduction:

2.3. Finding polynomial functions. An Introduction: 2.3. Finding polynomial functions. An Introduction: As is usually the case when learning a new concept in mathematics, the new concept is the reverse of the previous one. Remember how you first learned

More information

Chapter 11. 11.1 Load Balancing. Approximation Algorithms. Load Balancing. Load Balancing on 2 Machines. Load Balancing: Greedy Scheduling

Chapter 11. 11.1 Load Balancing. Approximation Algorithms. Load Balancing. Load Balancing on 2 Machines. Load Balancing: Greedy Scheduling Approximation Algorithms Chapter Approximation Algorithms Q. Suppose I need to solve an NP-hard problem. What should I do? A. Theory says you're unlikely to find a poly-time algorithm. Must sacrifice one

More information

Ideal Class Group and Units

Ideal Class Group and Units Chapter 4 Ideal Class Group and Units We are now interested in understanding two aspects of ring of integers of number fields: how principal they are (that is, what is the proportion of principal ideals

More information

Manifold Learning Examples PCA, LLE and ISOMAP

Manifold Learning Examples PCA, LLE and ISOMAP Manifold Learning Examples PCA, LLE and ISOMAP Dan Ventura October 14, 28 Abstract We try to give a helpful concrete example that demonstrates how to use PCA, LLE and Isomap, attempts to provide some intuition

More information

Copy in your notebook: Add an example of each term with the symbols used in algebra 2 if there are any.

Copy in your notebook: Add an example of each term with the symbols used in algebra 2 if there are any. Algebra 2 - Chapter Prerequisites Vocabulary Copy in your notebook: Add an example of each term with the symbols used in algebra 2 if there are any. P1 p. 1 1. counting(natural) numbers - {1,2,3,4,...}

More information

SMOOTHING APPROXIMATIONS FOR TWO CLASSES OF CONVEX EIGENVALUE OPTIMIZATION PROBLEMS YU QI. (B.Sc.(Hons.), BUAA)

SMOOTHING APPROXIMATIONS FOR TWO CLASSES OF CONVEX EIGENVALUE OPTIMIZATION PROBLEMS YU QI. (B.Sc.(Hons.), BUAA) SMOOTHING APPROXIMATIONS FOR TWO CLASSES OF CONVEX EIGENVALUE OPTIMIZATION PROBLEMS YU QI (B.Sc.(Hons.), BUAA) A THESIS SUBMITTED FOR THE DEGREE OF MASTER OF SCIENCE DEPARTMENT OF MATHEMATICS NATIONAL

More information

Arithmetic algorithms for cryptology 5 October 2015, Paris. Sieves. Razvan Barbulescu CNRS and IMJ-PRG. R. Barbulescu Sieves 0 / 28

Arithmetic algorithms for cryptology 5 October 2015, Paris. Sieves. Razvan Barbulescu CNRS and IMJ-PRG. R. Barbulescu Sieves 0 / 28 Arithmetic algorithms for cryptology 5 October 2015, Paris Sieves Razvan Barbulescu CNRS and IMJ-PRG R. Barbulescu Sieves 0 / 28 Starting point Notations q prime g a generator of (F q ) X a (secret) integer

More information

ISOMETRIES OF R n KEITH CONRAD

ISOMETRIES OF R n KEITH CONRAD ISOMETRIES OF R n KEITH CONRAD 1. Introduction An isometry of R n is a function h: R n R n that preserves the distance between vectors: h(v) h(w) = v w for all v and w in R n, where (x 1,..., x n ) = x

More information

System Identification for Acoustic Comms.:

System Identification for Acoustic Comms.: System Identification for Acoustic Comms.: New Insights and Approaches for Tracking Sparse and Rapidly Fluctuating Channels Weichang Li and James Preisig Woods Hole Oceanographic Institution The demodulation

More information

! Solve problem to optimality. ! Solve problem in poly-time. ! Solve arbitrary instances of the problem. !-approximation algorithm.

! Solve problem to optimality. ! Solve problem in poly-time. ! Solve arbitrary instances of the problem. !-approximation algorithm. Approximation Algorithms Chapter Approximation Algorithms Q Suppose I need to solve an NP-hard problem What should I do? A Theory says you're unlikely to find a poly-time algorithm Must sacrifice one of

More information

2 Polynomials over a field

2 Polynomials over a field 2 Polynomials over a field A polynomial over a field F is a sequence (a 0, a 1, a 2,, a n, ) where a i F i with a i = 0 from some point on a i is called the i th coefficient of f We define three special

More information

SHARP BOUNDS FOR THE SUM OF THE SQUARES OF THE DEGREES OF A GRAPH

SHARP BOUNDS FOR THE SUM OF THE SQUARES OF THE DEGREES OF A GRAPH 31 Kragujevac J. Math. 25 (2003) 31 49. SHARP BOUNDS FOR THE SUM OF THE SQUARES OF THE DEGREES OF A GRAPH Kinkar Ch. Das Department of Mathematics, Indian Institute of Technology, Kharagpur 721302, W.B.,

More information

Mathematics INDIVIDUAL PROGRAM INFORMATION 2014 2015. 866.Macomb1 (866.622.6621) www.macomb.edu

Mathematics INDIVIDUAL PROGRAM INFORMATION 2014 2015. 866.Macomb1 (866.622.6621) www.macomb.edu Mathematics INDIVIDUAL PROGRAM INFORMATION 2014 2015 866.Macomb1 (866.622.6621) www.macomb.edu Mathematics PROGRAM OPTIONS CREDENTIAL TITLE CREDIT HOURS REQUIRED NOTES Associate of Arts Mathematics 62

More information

DEFINITION 5.1.1 A complex number is a matrix of the form. x y. , y x

DEFINITION 5.1.1 A complex number is a matrix of the form. x y. , y x Chapter 5 COMPLEX NUMBERS 5.1 Constructing the complex numbers One way of introducing the field C of complex numbers is via the arithmetic of matrices. DEFINITION 5.1.1 A complex number is a matrix of

More information

Classification of Cartan matrices

Classification of Cartan matrices Chapter 7 Classification of Cartan matrices In this chapter we describe a classification of generalised Cartan matrices This classification can be compared as the rough classification of varieties in terms

More information

Quantum Computing and Grover s Algorithm

Quantum Computing and Grover s Algorithm Quantum Computing and Grover s Algorithm Matthew Hayward January 14, 2015 1 Contents 1 Motivation for Study of Quantum Computing 3 1.1 A Killer App for Quantum Computing.............. 3 2 The Quantum Computer

More information

Algebra and Geometry Review (61 topics, no due date)

Algebra and Geometry Review (61 topics, no due date) Course Name: Math 112 Credit Exam LA Tech University Course Code: ALEKS Course: Trigonometry Instructor: Course Dates: Course Content: 159 topics Algebra and Geometry Review (61 topics, no due date) Properties

More information

4.3 Least Squares Approximations

4.3 Least Squares Approximations 18 Chapter. Orthogonality.3 Least Squares Approximations It often happens that Ax D b has no solution. The usual reason is: too many equations. The matrix has more rows than columns. There are more equations

More information

PROOFS BY DESCENT KEITH CONRAD

PROOFS BY DESCENT KEITH CONRAD PROOFS BY DESCENT KEITH CONRAD As ordinary methods, such as are found in the books, are inadequate to proving such difficult propositions, I discovered at last a most singular method... that I called the

More information

A linear algebraic method for pricing temporary life annuities

A linear algebraic method for pricing temporary life annuities A linear algebraic method for pricing temporary life annuities P. Date (joint work with R. Mamon, L. Jalen and I.C. Wang) Department of Mathematical Sciences, Brunel University, London Outline Introduction

More information

Content. Chapter 4 Functions 61 4.1 Basic concepts on real functions 62. Credits 11

Content. Chapter 4 Functions 61 4.1 Basic concepts on real functions 62. Credits 11 Content Credits 11 Chapter 1 Arithmetic Refresher 13 1.1 Algebra 14 Real Numbers 14 Real Polynomials 19 1.2 Equations in one variable 21 Linear Equations 21 Quadratic Equations 22 1.3 Exercises 28 Chapter

More information

A Course on Number Theory. Peter J. Cameron

A Course on Number Theory. Peter J. Cameron A Course on Number Theory Peter J. Cameron ii Preface These are the notes of the course MTH6128, Number Theory, which I taught at Queen Mary, University of London, in the spring semester of 2009. There

More information

G.A. Pavliotis. Department of Mathematics. Imperial College London

G.A. Pavliotis. Department of Mathematics. Imperial College London EE1 MATHEMATICS NUMERICAL METHODS G.A. Pavliotis Department of Mathematics Imperial College London 1. Numerical solution of nonlinear equations (iterative processes). 2. Numerical evaluation of integrals.

More information

it is easy to see that α = a

it is easy to see that α = a 21. Polynomial rings Let us now turn out attention to determining the prime elements of a polynomial ring, where the coefficient ring is a field. We already know that such a polynomial ring is a UF. Therefore

More information

Sensitivity analysis of utility based prices and risk-tolerance wealth processes

Sensitivity analysis of utility based prices and risk-tolerance wealth processes Sensitivity analysis of utility based prices and risk-tolerance wealth processes Dmitry Kramkov, Carnegie Mellon University Based on a paper with Mihai Sirbu from Columbia University Math Finance Seminar,

More information

Continuous Groups, Lie Groups, and Lie Algebras

Continuous Groups, Lie Groups, and Lie Algebras Chapter 7 Continuous Groups, Lie Groups, and Lie Algebras Zeno was concerned with three problems... These are the problem of the infinitesimal, the infinite, and continuity... Bertrand Russell The groups

More information

The world s largest matrix computation. (This chapter is out of date and needs a major overhaul.)

The world s largest matrix computation. (This chapter is out of date and needs a major overhaul.) Chapter 7 Google PageRank The world s largest matrix computation. (This chapter is out of date and needs a major overhaul.) One of the reasons why Google TM is such an effective search engine is the PageRank

More information