Eigenvalues and eigenvectors of a matrix


Definition: If $A$ is an $n \times n$ matrix and there exists a real number $\lambda$ and a nonzero column vector $V$ such that
$$AV = \lambda V$$
then $\lambda$ is called an eigenvalue of $A$ and $V$ is called an eigenvector corresponding to the eigenvalue $\lambda$.

Example: Consider the matrix
$$A = \begin{pmatrix} 2 & 1 \\ 2 & 3 \end{pmatrix}.$$
Note that
$$\begin{pmatrix} 2 & 1 \\ 2 & 3 \end{pmatrix} \begin{pmatrix} 1 \\ 2 \end{pmatrix} = 4 \begin{pmatrix} 1 \\ 2 \end{pmatrix}.$$
So the number 4 is an eigenvalue of $A$ and the column vector $\begin{pmatrix} 1 \\ 2 \end{pmatrix}$ is an eigenvector corresponding to 4.
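A quick numerical check of this definition (a sketch in NumPy, using the matrix and vector from the example above):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [2.0, 3.0]])
v = np.array([1.0, 2.0])

# A v should equal 4 v if 4 is an eigenvalue with eigenvector v
print(A @ v)                       # [4. 8.]
print(4 * v)                       # [4. 8.]
print(np.allclose(A @ v, 4 * v))   # True
```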
Finding eigenvalues and eigenvectors for a given matrix $A$

1. The eigenvalues of $A$ are calculated by solving the characteristic equation of $A$:
$$\det(A - \lambda I) = 0.$$
2. Once the eigenvalues of $A$ have been found, the eigenvectors corresponding to each eigenvalue $\lambda$ can be determined by solving the matrix equation
$$AV = \lambda V.$$

Example: Find the eigenvalues of $A = \begin{pmatrix} 2 & 1 \\ 2 & 3 \end{pmatrix}$.
Example: The eigenvalues of $A = \begin{pmatrix} 2 & 1 \\ 2 & 3 \end{pmatrix}$ are 1 and 4, which is seen by solving the quadratic equation
$$\det(A - \lambda I) = (2 - \lambda)(3 - \lambda) - 2 = \lambda^2 - 5\lambda + 4 = 0.$$
To find the eigenvectors corresponding to 1 we write
$$\begin{pmatrix} 2 & 1 \\ 2 & 3 \end{pmatrix} \begin{pmatrix} V_1 \\ V_2 \end{pmatrix} = 1 \begin{pmatrix} V_1 \\ V_2 \end{pmatrix}$$
and solve
$$2V_1 + V_2 = V_1, \qquad 2V_1 + 3V_2 = V_2.$$
This gives a single independent equation, $V_1 + V_2 = 0$, so any vector of the form $V = \alpha \begin{pmatrix} 1 \\ -1 \end{pmatrix}$ is an eigenvector corresponding to the eigenvalue 1.
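The same computation can be reproduced numerically; a minimal sketch:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [2.0, 3.0]])

# Characteristic polynomial of a 2x2 matrix: lambda^2 - trace*lambda + det
coeffs = [1.0, -np.trace(A), np.linalg.det(A)]   # [1, -5, 4]
print(np.roots(coeffs))                          # [4. 1.]

# Full eigen-decomposition; the eigenvector for lambda = 1 is proportional to (1, -1)
w, V = np.linalg.eig(A)
print(w)                      # [4. 1.] (order may vary)
print(V[:, np.argmin(w)])     # proportional to (1, -1)
```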
Matrix diagonalisation

The eigenvalues and eigenvectors of a matrix have the following important property: If a square $n \times n$ matrix $A$ has $n$ linearly independent eigenvectors then it is diagonalisable, that is, it can be factorised as follows:
$$A = PDP^{-1}$$
where $D$ is the diagonal matrix containing the eigenvalues of $A$ along the diagonal, also written as $D = \mathrm{diag}[\lambda_1, \lambda_2, \ldots, \lambda_n]$, and $P$ is a matrix formed with the corresponding eigenvectors as its columns.

For example, matrices whose eigenvalues are distinct numbers are diagonalisable. Symmetric matrices are also diagonalisable.
This property has many important applications. For example, it can be used to simplify algebraic operations involving the matrix $A$, such as calculating powers:
$$A^m = \left( PDP^{-1} \right)\left( PDP^{-1} \right) \cdots \left( PDP^{-1} \right) = PD^mP^{-1}.$$
Note that if $D = \mathrm{diag}[\lambda_1, \lambda_2, \ldots, \lambda_n]$ then $D^m = \mathrm{diag}[\lambda_1^m, \lambda_2^m, \ldots, \lambda_n^m]$, which is very easy to calculate.

Example: The matrix $A = \begin{pmatrix} 2 & 1 \\ 2 & 3 \end{pmatrix}$ has eigenvalues 1 and 4 with corresponding eigenvectors $\begin{pmatrix} 1 \\ -1 \end{pmatrix}$ and $\begin{pmatrix} 1 \\ 2 \end{pmatrix}$. Then it can be diagonalised as follows:
$$A = \begin{pmatrix} 1 & 1 \\ -1 & 2 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & 4 \end{pmatrix} \begin{pmatrix} 2/3 & -1/3 \\ 1/3 & 1/3 \end{pmatrix}.$$
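A sketch of this shortcut, reusing the factorisation just computed:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [2.0, 3.0]])
P = np.array([[1.0, 1.0],
              [-1.0, 2.0]])        # eigenvectors as columns
D = np.diag([1.0, 4.0])            # eigenvalues on the diagonal

# A^10 via A^m = P D^m P^{-1}; D^m is just elementwise powers of the diagonal
Dm = np.diag(np.diag(D) ** 10)
Am = P @ Dm @ np.linalg.inv(P)

print(np.allclose(Am, np.linalg.matrix_power(A, 10)))   # True
```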
The power and inverse power method

Overview:

The Gerschgorin circle theorem is used for locating the eigenvalues of a matrix.

The power method is used for approximating the dominant eigenvalue (that is, the eigenvalue of largest absolute value) of a matrix and its associated eigenvector.

The inverse power method is used for approximating the smallest eigenvalue of a matrix, or for approximating the eigenvalue nearest to a given value, together with the corresponding eigenvector.
The Gerschgorin Circle Theorem

Let $A$ be an $n \times n$ matrix and define
$$R_i = \sum_{\substack{j=1 \\ j \neq i}}^{n} |a_{ij}|$$
for each $i = 1, 2, 3, \ldots, n$. Also consider the circles
$$C_i = \{ z \in \mathbb{C} : |z - a_{ii}| \leq R_i \}.$$

1. If $\lambda$ is an eigenvalue of $A$ then $\lambda$ lies in one of the circles $C_i$.
2. If $k$ of the circles $C_i$ form a connected region $R$ in the complex plane, disjoint from the remaining $n - k$ circles, then the region contains exactly $k$ eigenvalues.
Example

Consider a $3 \times 3$ matrix $A$ with diagonal entries 1, 5 and 9. The radii of the Gerschgorin circles are
$$R_1 = 1, \qquad R_2 = 2, \qquad R_3 = 3$$
and the circles are
$$C_1 = \{ z \in \mathbb{C} : |z - 1| \leq 1 \}, \quad C_2 = \{ z \in \mathbb{C} : |z - 5| \leq 2 \}, \quad C_3 = \{ z \in \mathbb{C} : |z - 9| \leq 3 \}.$$
Since $C_1$ is disjoint from the other circles, it must contain one of the eigenvalues, and this eigenvalue is real. As $C_2$ and $C_3$ overlap, their union must contain the other two eigenvalues.
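A sketch of this check; since only the circle centres and radii are given here, the matrix below is a made-up example chosen to reproduce them:

```python
import numpy as np

# Hypothetical matrix with diagonal 1, 5, 9 and off-diagonal row sums 1, 2, 3
A = np.array([[1.0, 0.4, 0.6],
              [1.5, 5.0, 0.5],
              [2.0, 1.0, 9.0]])

centres = np.diag(A)
radii = np.sum(np.abs(A), axis=1) - np.abs(centres)   # R_i = sum_{j != i} |a_ij|
for c, r in zip(centres, radii):
    print(f"circle: |z - {c}| <= {r}")

# Every eigenvalue must lie in the union of the circles
print(np.linalg.eigvals(A))
```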
The power method

The power method is an iterative technique for approximating the dominant eigenvalue of a matrix together with an associated eigenvector. Let $A$ be an $n \times n$ matrix with eigenvalues satisfying
$$|\lambda_1| > |\lambda_2| \geq |\lambda_3| \geq \cdots \geq |\lambda_n|.$$
The eigenvalue with the largest absolute value, $\lambda_1$, is called the dominant eigenvalue. Any eigenvector corresponding to $\lambda_1$ is called a dominant eigenvector.

Let $x^{(0)}$ be an $n$-vector. Then it can be written as
$$x^{(0)} = \alpha_1 v_1 + \cdots + \alpha_n v_n$$
where $v_1, \ldots, v_n$ are the eigenvectors of $A$.
Now construct the sequence $x^{(m)} = Ax^{(m-1)}$, for $m \geq 1$:
$$x^{(1)} = Ax^{(0)} = \alpha_1 \lambda_1 v_1 + \cdots + \alpha_n \lambda_n v_n$$
$$x^{(2)} = Ax^{(1)} = \alpha_1 \lambda_1^2 v_1 + \cdots + \alpha_n \lambda_n^2 v_n$$
$$\vdots$$
$$x^{(m)} = Ax^{(m-1)} = \alpha_1 \lambda_1^m v_1 + \cdots + \alpha_n \lambda_n^m v_n$$
and hence
$$\frac{x^{(m)}}{\lambda_1^m} = \alpha_1 v_1 + \alpha_2 \left( \frac{\lambda_2}{\lambda_1} \right)^m v_2 + \cdots + \alpha_n \left( \frac{\lambda_n}{\lambda_1} \right)^m v_n$$
which gives
$$\lim_{m \to \infty} \frac{x^{(m)}}{\lambda_1^m} = \alpha_1 v_1.$$
Hence, the sequence $x^{(m)}/\lambda_1^m$ converges to an eigenvector associated with the dominant eigenvalue.
The power method implementation

Choose the initial guess $x^{(0)}$ such that $\max_i |x_i^{(0)}| = 1$. For $m \geq 1$, let
$$y^{(m)} = Ax^{(m-1)}, \qquad x^{(m)} = \frac{y^{(m)}}{y_{p_m}^{(m)}}$$
where $p_m$ is the index of the component of $y^{(m)}$ which has maximum absolute value. Note that
$$y^{(m)} = Ax^{(m-1)} \approx \lambda_1 x^{(m-1)}$$
and since $x_{p_{m-1}}^{(m-1)} = 1$ (by construction), it follows that
$$y_{p_{m-1}}^{(m)} \approx \lambda_1.$$
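A direct implementation sketch of this scheme, run on the matrix from the earlier examples; the maximum-magnitude component of $y^{(m)}$ serves as the running eigenvalue estimate (once the index $p_m$ stabilises this is the quantity above):

```python
import numpy as np

def power_method(A, x0, tol=1e-10, max_iter=500):
    """Power method with max-component scaling, as described above."""
    x = x0 / x0[np.argmax(np.abs(x0))]   # scale so the largest component is 1
    lam_old = 0.0
    for _ in range(max_iter):
        y = A @ x
        p = np.argmax(np.abs(y))         # index of the largest component of y
        lam = y[p]                        # running estimate of the dominant eigenvalue
        x = y / y[p]                      # rescale so component p equals 1
        if abs(lam - lam_old) < tol:
            break
        lam_old = lam
    return lam, x

A = np.array([[2.0, 1.0],
              [2.0, 3.0]])
lam, v = power_method(A, np.array([1.0, 1.0]))
print(lam)   # -> 4.0
print(v)     # -> [0.5, 1.0], proportional to the eigenvector (1, 2)
```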
Note: Since the power method is an iterative scheme, a stopping condition could be given as
$$|\lambda^{(m)} - \lambda^{(m-1)}| < \text{required error}$$
where $\lambda^{(m)}$ is the dominant eigenvalue approximation at the $m$-th iteration, or
$$\max_i |x_i^{(m)} - x_i^{(m-1)}| < \text{required error}.$$

Note: If the ratio $|\lambda_2/\lambda_1|$ is small, the convergence rate is fast. If $|\lambda_1| = |\lambda_2|$ then, in general, the power method does not converge.

Example: Consider the matrix $A$. Show that its eigenvalues are $\lambda_1 = 12$, $\lambda_2 = 3$ and $\lambda_3 = 3$ and calculate the associated eigenvectors. Use the power method to approximate the dominant eigenvalue and a corresponding eigenvector.
Matrix polynomials and inverses

Let $A$ be an $n \times n$ matrix with eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$ and associated eigenvectors $v_1, v_2, \ldots, v_n$.

1. Let $p(x) = a_0 + a_1 x + \cdots + a_m x^m$ be a polynomial of degree $m$. Then the eigenvalues of the matrix
$$B = p(A) = a_0 I + a_1 A + \cdots + a_m A^m$$
are $p(\lambda_1), p(\lambda_2), \ldots, p(\lambda_n)$, with associated eigenvectors $v_1, v_2, \ldots, v_n$.
2. If $\det(A) \neq 0$ then the inverse matrix $A^{-1}$ has eigenvalues
$$\frac{1}{\lambda_1}, \frac{1}{\lambda_2}, \ldots, \frac{1}{\lambda_n}$$
with associated eigenvectors $v_1, v_2, \ldots, v_n$.
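A quick numerical check of both properties (the polynomial $p(x) = 2 + 3x + x^2$ is an arbitrary choice):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [2.0, 3.0]])               # eigenvalues 1 and 4

# B = p(A) = 2I + 3A + A^2 for p(x) = 2 + 3x + x^2
B = 2 * np.eye(2) + 3 * A + A @ A

print(np.sort(np.linalg.eigvals(B)))       # [ 6. 30.] = [p(1), p(4)]
print(np.sort(1 / np.linalg.eigvals(A)))   # eigenvalues of A^{-1}: [0.25, 1.]
```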
The inverse power method

Let $A$ be an $n \times n$ matrix with eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$ and associated eigenvectors $v_1, v_2, \ldots, v_n$. Let $q$ be a number for which $\det(A - qI) \neq 0$ (so $q$ is not an eigenvalue of $A$), and let $B = (A - qI)^{-1}$, so the eigenvalues of $B$ are given by
$$\mu_1 = \frac{1}{\lambda_1 - q}, \quad \mu_2 = \frac{1}{\lambda_2 - q}, \quad \ldots, \quad \mu_n = \frac{1}{\lambda_n - q}.$$
If we know that $\lambda_k$ is the eigenvalue that is closest to the number $q$, then $\mu_k$ is the dominant eigenvalue for $B$ and so it can be determined using the power method.
Example

Consider the matrix $A$ whose Gerschgorin circles are computed below. Use the circles to obtain an estimate for one of the eigenvalues of $A$ and then get a more accurate approximation using the inverse power method. (Note: The eigenvalues are, approximately, $-4$, 3 and 15.)
The Gershgorin circles are
$$|z + 4| \leq 2, \qquad |z - 3| \leq 1, \qquad |z - 15| \leq 2$$
so they are all disjoint! There is one eigenvalue in each of the three circles, so they lie close to $-4$, 3 and 15.

Let $\lambda$ be the eigenvalue closest to $q = 3$ and let $B = (A - 3I)^{-1}$. Then $\mu = 1/(\lambda - 3)$ is the dominant eigenvalue for $B$ and can be determined using the power method, together with an associated eigenvector $v$. Note that, once the desired approximation $\mu^{(n)}, v^{(n)}$ has been obtained, we must calculate
$$\lambda^{(n)} = 3 + \frac{1}{\mu^{(n)}}.$$
The associated eigenvector for $\lambda$ is the same as the one calculated for $\mu$.
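A sketch of the shifted iteration, run on a made-up matrix chosen so that its Gershgorin circles match the ones above (the example's original entries are not reproduced here); solving $(A - qI)y = x$ at each step avoids forming the inverse explicitly:

```python
import numpy as np

def inverse_power(A, q, x0, tol=1e-10, max_iter=500):
    """Power method applied to B = (A - qI)^{-1}, implemented by solving
    (A - qI) y = x at each step instead of inverting the matrix."""
    M = A - q * np.eye(A.shape[0])
    x = x0 / x0[np.argmax(np.abs(x0))]
    mu_old = 0.0
    for _ in range(max_iter):
        y = np.linalg.solve(M, x)          # y = B x
        p = np.argmax(np.abs(y))
        mu = y[p]                           # estimate of the dominant eigenvalue of B
        x = y / y[p]
        if abs(mu - mu_old) < tol:
            break
        mu_old = mu
    return q + 1.0 / mu, x                  # lambda = q + 1/mu, same eigenvector

# Hypothetical matrix reproducing the Gershgorin data above
A = np.array([[-4.0, 1.0,  1.0],
              [ 0.5, 3.0,  0.5],
              [ 1.0, 1.0, 15.0]])
lam, v = inverse_power(A, q=3.0, x0=np.ones(3))
print(lam)   # the eigenvalue of A closest to 3
```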
Smallest eigenvalue

Finding the eigenvalue of $A$ that is smallest in magnitude is equivalent to finding the dominant eigenvalue of the matrix $B = A^{-1}$. (This is the inverse power method with $q = 0$.)

Example: Consider the matrix $A$ with eigenvalues 3, 4 and 5. Use the inverse power method to find an approximation for the smallest eigenvalue of $A$.
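A sketch of this computation; since the example's entries are not reproduced here, the matrix $A$ below is manufactured (an assumption for illustration) to have eigenvalues exactly 3, 4 and 5:

```python
import numpy as np

rng = np.random.default_rng(0)
P = rng.normal(size=(3, 3))                            # invertible with probability 1
A = P @ np.diag([3.0, 4.0, 5.0]) @ np.linalg.inv(P)    # eigenvalues 3, 4, 5 by construction

# Inverse power method with q = 0: the dominant eigenvalue of A^{-1} is 1/3,
# so the smallest eigenvalue of A is recovered as its reciprocal.
x = np.ones(3)
for _ in range(100):
    y = np.linalg.solve(A, x)          # y = A^{-1} x
    x = y / y[np.argmax(np.abs(y))]
mu = y[np.argmax(np.abs(y))]           # -> 1/3
print(1.0 / mu)                        # -> 3.0
```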
The QR method for finding eigenvalues

We say that two $n \times n$ matrices $A$ and $B$ are orthogonally similar if there exists an orthogonal matrix $Q$ such that
$$AQ = QB, \quad \text{or} \quad A = QBQ^T.$$
If $A$ and $B$ are orthogonally similar matrices then they have the same eigenvalues.

Proof: If $\lambda$ is an eigenvalue of $A$ with eigenvector $v$, that is, $Av = \lambda v$, then $QBQ^T v = \lambda v$, so
$$B(Q^T v) = \lambda (Q^T v)$$
which means $\lambda$ is an eigenvalue of $B$ with eigenvector $Q^T v$.
To find the eigenvalues of a matrix $A$ using the QR algorithm, we generate a sequence of matrices $A^{(m)}$ which are orthogonally similar to $A$ (and so have the same eigenvalues), and which converge to a matrix whose eigenvalues are easily found.

If the matrix $A$ is symmetric and tridiagonal then the sequence of QR iterations converges to a diagonal matrix, so its eigenvalues can easily be read from the main diagonal.
The basic QR algorithm

We find the QR factorisation of the matrix $A$ and then take the reverse-order product $RQ$ to construct the first matrix in the sequence:
$$A = QR \implies R = Q^T A \implies A^{(1)} = RQ = Q^T A Q.$$
It is easy to see that $A$ and $A^{(1)}$ are orthogonally similar and so have the same eigenvalues. We then find the QR factorisation of $A^{(1)}$:
$$A^{(1)} = Q^{(1)} R^{(1)} \implies A^{(2)} = R^{(1)} Q^{(1)}.$$
This procedure is then continued to construct $A^{(2)}, A^{(3)}$, etc., and (if the original matrix is symmetric and tridiagonal) this sequence converges to a diagonal matrix. The diagonal values of each of the iteration matrices $A^{(m)}$ can be considered as approximations of the eigenvalues of $A$.
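A minimal sketch of the iteration; the symmetric tridiagonal test matrix is an arbitrary choice for illustration:

```python
import numpy as np

def qr_algorithm(A, iterations=50):
    """Basic (unshifted) QR iteration: A <- R Q at each step."""
    Ak = A.copy()
    for _ in range(iterations):
        Q, R = np.linalg.qr(Ak)
        Ak = R @ Q                 # orthogonally similar to A
    return Ak

# Symmetric tridiagonal test matrix
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
Ak = qr_algorithm(A)
print(np.sort(np.diag(Ak)))              # diagonal: approximate eigenvalues
print(np.sort(np.linalg.eigvals(A)))     # reference values
```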
Example

For the matrix $A$ of this example, the QR iterations $A^{(2)}$ and $A^{(10)}$ show the diagonal elements approximating the eigenvalues of $A$ to 3 decimal places, while the off-diagonal elements converge to zero.
Remarks

1. Since the rate of convergence of the QR algorithm is quite slow (especially when the eigenvalues are closely spaced in magnitude), there are a number of variations of this algorithm (not discussed here) which can accelerate convergence.
2. The QR algorithm can be used for finding eigenvectors as well, but we will not cover this technique now.

Exercise: Calculate the first iteration of the QR algorithm for the matrix in the previous example.
More examples

1. For a given matrix $A$, calculate the eigenvalues of $A$ (using the definition) and perform two iterations of the QR algorithm to approximate them.
2. For a second matrix $A$ with known eigenvalues, perform one iteration of the QR algorithm.
Applications of eigenvalues and eigenvectors

1. Systems of differential equations

Recall that a square matrix $A$ is diagonalisable if it can be factorised as $A = PDP^{-1}$, where $D$ is the diagonal matrix containing the eigenvalues of $A$ along the diagonal, and $P$ (called the modal matrix) has the corresponding eigenvectors as columns.

Consider the system of 1st order linear differential equations
$$x'(t) = a_{11} x(t) + a_{12} y(t), \qquad y'(t) = a_{21} x(t) + a_{22} y(t)$$
written in matrix form as $X'(t) = AX(t)$, where $X = (x, y)^T$ and $A = (a_{ij})_{i,j=1,2}$.
If $A$ is diagonalisable then $A = PDP^{-1}$, where $D = \mathrm{diag}(\lambda_1, \lambda_2)$, so the system becomes
$$P^{-1} X'(t) = D P^{-1} X(t).$$
If we let $\tilde{X}(t) = P^{-1} X(t)$ then $\tilde{X}'(t) = D \tilde{X}(t)$, which can be written as
$$\tilde{x}'(t) = \lambda_1 \tilde{x}(t), \qquad \tilde{y}'(t) = \lambda_2 \tilde{y}(t)$$
and is easy to solve:
$$\tilde{x}(t) = C_1 e^{\lambda_1 t}, \qquad \tilde{y}(t) = C_2 e^{\lambda_2 t},$$
where $C_1$ and $C_2$ are arbitrary constants. The final solution is then recovered from $X(t) = P\tilde{X}(t)$.
Example: Consider the following linear predator-prey model, where $x(t)$ represents a population of rabbits at time $t$ and $y(t)$ are foxes:
$$x'(t) = x(t) - 2y(t), \qquad y'(t) = 3x(t) - 4y(t).$$
Solve the system using eigenvalues and determine how the two populations are going to evolve. Assume $x(0) = 4$, $y(0) = 1$.
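A sketch of the eigenvalue solution, assuming the signs written above for the system:

```python
import numpy as np

A = np.array([[1.0, -2.0],
              [3.0, -4.0]])
x0 = np.array([4.0, 1.0])          # x(0) = 4, y(0) = 1

lam, P = np.linalg.eig(A)          # eigenvalues -1 and -2, modal matrix P
c = np.linalg.solve(P, x0)         # constants: X(0) = P c

def X(t):
    # X(t) = P diag(e^{lambda t}) P^{-1} X(0)
    return P @ (c * np.exp(lam * t))

print(lam)       # [-1. -2.] (order may vary): both negative, so both populations decay
print(X(0.0))    # [4. 1.]
print(X(5.0))    # close to [0, 0]: rabbits and foxes both die out
```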
Applications of eigenvalues and eigenvectors

2. Discrete age-structured population model

The Leslie model describes the growth of the female portion of a human or animal population. The females are divided into $n$ age classes of equal duration ($L/n$, where $L$ is the population age limit). The initial number of females in each age group is given by
$$x^{(0)} = \left( x_1^{(0)}, x_2^{(0)}, \ldots, x_n^{(0)} \right)^T$$
where $x_1^{(0)}$ is the number of females aged 0 to $L/n$ years, $x_2^{(0)}$ is the number of females aged $L/n$ to $2L/n$, etc. This is called the initial age distribution vector.
The birth and death parameters which describe the future evolution of the population are given by:

$a_i$ = average number of daughters born to each female in age class $i$;
$b_i$ = fraction of females in age class $i$ which survive to the next age class.

Note that $a_i \geq 0$ for $i = 1, 2, \ldots, n$ and $0 < b_i \leq 1$ for $i = 1, 2, \ldots, n-1$.

Let $x^{(k)}$ be the age distribution vector at time $t_k$ (where $k = 1, 2, \ldots$ and $t_{k+1} - t_k = L/n$). The Leslie model states that the distribution at time $t_{k+1}$ is given by
$$x^{(k+1)} = L x^{(k)}$$
where $L$, the Leslie matrix, is defined as follows.
$$L = \begin{pmatrix}
a_1 & a_2 & \cdots & a_{n-1} & a_n \\
b_1 & 0 & \cdots & 0 & 0 \\
0 & b_2 & \cdots & 0 & 0 \\
\vdots & & \ddots & & \vdots \\
0 & 0 & \cdots & b_{n-1} & 0
\end{pmatrix}$$

This matrix equation is equivalent to
$$x_1^{(k+1)} = a_1 x_1^{(k)} + a_2 x_2^{(k)} + \cdots + a_n x_n^{(k)}$$
$$x_{i+1}^{(k+1)} = b_i x_i^{(k)}, \qquad i = 1, 2, \ldots, n-1.$$

The Leslie matrix has the following properties:
1. It has a unique positive eigenvalue $\lambda_1$, for which the corresponding eigenvector has only positive components;
2. This unique positive eigenvalue is the dominant eigenvalue;
3. The dominant eigenvalue $\lambda_1$ represents the population growth rate, while the corresponding eigenvector gives the limiting age distribution.
Example

Suppose the age limit of a certain animal population is 15 years, and divide the female population into 3 age groups (of 5 years each). Given the Leslie matrix $L$, if there are initially 1000 females in each age class, find the age distribution after 15 years. Knowing that the dominant eigenvalue is $\lambda_1 = 3/2$, find the long-term age distribution.
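A sketch of this computation under an assumption: the Leslie matrix below (fertilities 0, 4, 3 and survival rates 1/2, 1/4) is a standard textbook choice consistent with the stated dominant eigenvalue $\lambda_1 = 3/2$, not necessarily the matrix originally given.

```python
import numpy as np

# Assumed Leslie matrix with dominant eigenvalue exactly 3/2 (entries illustrative)
L = np.array([[0.0,  4.0, 3.0],
              [0.5,  0.0, 0.0],
              [0.0, 0.25, 0.0]])
x0 = np.array([1000.0, 1000.0, 1000.0])

# Each time step is L/n = 15/3 = 5 years, so 15 years corresponds to 3 steps
x = x0
for _ in range(3):
    x = L @ x
print(x)                   # age distribution after 15 years

# Long-term age distribution: the eigenvector of the dominant eigenvalue
w, V = np.linalg.eig(L)
k = np.argmax(w.real)
v = V[:, k].real
print(w[k].real)           # -> 1.5, the population growth rate per step
print(v / v.sum())         # limiting age proportions (ratio 18 : 6 : 1)
```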