Linear Algebra: Matrix Eigenvalue Problems


c08.qxd 6/23/ 7:40 PM Page 50

CHAPTER 8 Linear Algebra: Matrix Eigenvalue Problems

Prerequisite for this chapter is some familiarity with the notion of a matrix and with the two algebraic operations for matrices. Otherwise the chapter is independent of Chap. 7, so that it can be used for teaching eigenvalue problems and their applications without first going through the material in Chap. 7.

SECTION 8.1. The Matrix Eigenvalue Problem. Determining Eigenvalues and Eigenvectors, page 323

Purpose. To familiarize the student with the determination of eigenvalues and eigenvectors of real matrices and to give a first impression of what one can expect (multiple eigenvalues, complex eigenvalues, etc.).

Main Content, Important Concepts
Eigenvalue, eigenvector
Determination of eigenvalues from the characteristic equation
Determination of eigenvectors
Algebraic and geometric multiplicity, defect

Comments on Content
To maintain undivided attention on the basic concepts and techniques, all the examples in this section are formal, and typical applications are put into a separate section (Sec. 8.2). The distinction between algebraic and geometric multiplicity is mentioned in this early section, and the idea of a basis of eigenvectors ("eigenbasis") could perhaps be mentioned briefly in class, whereas a thorough discussion of this in a later section (Sec. 8.4) will profit from the increased experience with eigenvalue problems that the student will have gained by that time. The possibility of normalizing any eigenvector is mentioned in connection with Theorem 2, but this will be of greater interest to us only in connection with orthonormal or unitary systems (Secs. 8.4 and 8.5). In our present work we find eigenvalues first and are then left with the much simpler task of determining corresponding eigenvectors. Numeric work (Chap. 20) may proceed in the opposite order, but to mention this here would perhaps just confuse the student.
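As an aside for instructors who want a quick machine check (an editorial addition, not part of the original manual): the procedure of this section, eigenvalues as roots of the characteristic equation and then eigenvectors from (A − λI)x = 0, can be sketched in a few lines of NumPy. The matrix here is made up for illustration, not one of the problems in the set.

```python
import numpy as np

# Illustrative 2x2 matrix (not from the problem set).
A = np.array([[4.0, 2.0],
              [1.0, 3.0]])

# Step 1: eigenvalues as roots of the characteristic polynomial
# det(lam*I - A) = lam^2 - 7*lam + 10.
char_poly = np.poly(A)                      # coefficients [1, -7, 10]
roots = np.sort(np.roots(char_poly).real)   # the eigenvalues 2 and 5

# Step 2: eigenvectors, here taken from the standard solver.
vals, vecs = np.linalg.eig(A)
order = np.argsort(vals.real)
vals, vecs = vals[order].real, vecs[:, order].real

# Check the defining relation A x = lam x for each pair.
for lam, x in zip(vals, vecs.T):
    assert np.allclose(A @ x, lam * x)
```

In hand computation one of course solves (A − λI)x = 0 directly for each root; the solver merely automates that elimination.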
Further Comments on Examples and Theorems
The simple examples should give the student a first impression of what to expect. In particular, Example 4 shows that eigenvalue problems lead to work in complex numbers, even if the matrices are real. This is an important point to emphasize. Theorem 1 shows that for matrices, in contrast to differential equations, eigenvalue problems involve no existence questions, since the existence of an eigenvalue is always guaranteed. Theorems 2 and 3 concern the notion of eigenspace and the invariance of eigenvalues under transposition.

Comments on Problems
Problems 1–16 involve straightforward calculations to gain skill and an understanding of the concepts involved. Sidetracking attention by solving cubic or higher-order equations is avoided.

Instructor's Manual

Problems 17–20 illustrate simple applications to analytic geometry. Actual applications of eigenvalue problems follow in the next section, as has been mentioned before. Problems 21–25 illustrate some important simple facts.

SOLUTIONS TO PROBLEM SET 8.1, page 329

1. Eigenvalues are 3/2 and 3, and the corresponding eigenvectors are [1, 0]^T and [0, 1]^T, respectively.

2. This zero matrix, like any square zero matrix, has the eigenvalue 0. The algebraic multiplicity and geometric multiplicity are both equal to 2, and we can choose the basis [1, 0]^T, [0, 1]^T.

3. Eigenvalues are 0 and 3, and the corresponding eigenvectors are [2/3, 1]^T and [1/3, 1]^T, respectively.

4. Eigenvalues are 5 and 0, and eigenvectors [1, 2]^T and [−2, 1]^T, respectively. The matrix is symmetric, and for such a matrix it is typical that the eigenvalues are real and the eigenvectors orthogonal. Also, make the students aware of the fact that 0 can very well be an eigenvalue.

5. Eigenvalues are 4i and −4i, and the corresponding eigenvectors are [−i, 1]^T and [i, 1]^T, respectively.

6. Eigenvalues are 1 and 3, and eigenvectors [1, 0]^T and [1, 1]^T, respectively. Note that for such a triangular matrix the main diagonal entries are still the eigenvalues (as for a diagonal matrix; cf. Prob. 1), but the eigenvectors are no longer orthogonal.

7. The matrix has the repeated eigenvalue 0; an eigenvector is [1, 0]^T, and the eigenvalue is defective (its geometric multiplicity is less than its algebraic multiplicity).

8. Eigenvalues a + √k and a − √k; eigenvectors [1/√k, 1]^T and [−1/√k, 1]^T, respectively.

9. The eigenvalues form a complex-conjugate pair, and the corresponding eigenvectors are [i, 1]^T and [−i, 1]^T, respectively.

10. The characteristic equation is (cos θ − λ)² + sin²θ = 0. The solutions (eigenvalues) are λ = cos θ ± i sin θ. Eigenvectors are obtained from

(cos θ − λ)x₁ + (sin θ)x₂ = (sin θ)(−ix₁ + x₂) = 0, say, x₁ = 1, x₂ = i.

Note that this matrix represents a rotation through an angle θ, and this linear transformation preserves no real direction in the x₁x₂-plane, as would be the case if the eigenvectors were real.
This explains why these vectors must be complex.

11. Eigenvalues 4, 1, 7. Eigenvectors: [1/2, 1, 1]^T, [1, 1/2, 1]^T, [2, 2, 1]^T, respectively.

12. 3, [1, 0, 0]^T; 4, [5, 1, 0]^T; 1, [7, 4, 2]^T.

13. Repeated eigenvalue 2; an eigenvector is [2, 2, 1]^T, and the eigenvalue is defective (its geometric multiplicity is less than its algebraic multiplicity), so there is no basis of eigenvectors.

14. Develop the characteristic determinant by the second row, obtaining

(−2 − λ)[(2 − λ)(4 − λ) + 1] = (−2 − λ)(λ − 3)².

Eigenvectors for the eigenvalues −2 and 3 are [0, 1, 0]^T and [1, 0, 1]^T, respectively; since the double eigenvalue 3 has only one independent eigenvector, we get no basis for R³.
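The rotation matrix of Prob. 10 is also easy to examine by machine. The following sketch (an editorial addition, with an arbitrary angle) confirms that the eigenvalues are cos θ ± i sin θ, of absolute value 1 but genuinely complex, so the eigenvectors cannot be real.

```python
import numpy as np

theta = 0.7  # arbitrary nonzero rotation angle (illustrative)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

vals, vecs = np.linalg.eig(R)

# Eigenvalues are cos(theta) +- i*sin(theta) = exp(+-i*theta) ...
expected = np.array([np.exp(1j * theta), np.exp(-1j * theta)])
assert np.allclose(np.sort_complex(vals), np.sort_complex(expected))

# ... of absolute value 1, with nonzero imaginary part, so no real
# direction in the plane is preserved by the rotation.
assert np.allclose(np.abs(vals), 1.0)
assert not np.allclose(vals.imag, 0.0)
```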

16. The indicated division of the characteristic polynomial gives

(λ⁴ − 22λ² + 24λ + 45)/(λ − 3)² = λ² + 6λ + 5.

The eigenvalues are therefore 3 (algebraic multiplicity 2, with a defect), −1, and −5; the corresponding eigenvectors are found as usual from (A − λI)x = 0.

18. (a) Any point of the x₁-axis is mapped onto itself, so that [1, 0]^T is an eigenvector corresponding to the eigenvalue 1. (b) Any point (0, x₂) on the x₂-axis is mapped onto (0, x₂), so that [0, x₂]^T is an eigenvector corresponding to λ = 1.

20. The eigenvalue 1, with eigenvectors [0, 0, 1]^T and [1, 1, 0]^T (which span the plane x₂ = x₁), indicates that every point in the plane x₂ = x₁ is mapped onto itself. The other eigenvalue, 0, with eigenvector [1, −1, 0]^T, indicates that any point on the line x₂ = −x₁, x₃ = 0 (which is perpendicular to the plane x₂ = x₁) is mapped onto the origin. The student should perhaps make a sketch to see what is going on geometrically.

24. By Theorem 1 in Sec. 7.8 the inverse exists if and only if det A ≠ 0. On the other hand, from the product representation of the characteristic polynomial we obtain

D(λ) = det (A − λI) = (−1)ⁿ(λ − λ₁)(λ − λ₂) ··· (λ − λₙ),

hence

det A = D(0) = (−1)ⁿ(−λ₁)(−λ₂) ··· (−λₙ) = λ₁λ₂ ··· λₙ.

Hence A⁻¹ exists if and only if 0 is not an eigenvalue of A. Furthermore, let λ ≠ 0 be an eigenvalue of A. Then

Ax = λx.

Multiply this by A⁻¹ from the left: A⁻¹Ax = λA⁻¹x, that is, x = λA⁻¹x. Now divide by λ:

(1/λ)x = A⁻¹x.

Hence 1/λ is an eigenvalue of A⁻¹.

SECTION 8.2. Some Applications of Eigenvalue Problems, page 329

Purpose. Matrix eigenvalue problems are of greatest importance in physics, engineering, geometry, etc., and the applications in this section and in the problem set are supposed to give the student at least some impression of this fact.

Main Content
Applications of eigenvalue problems in
Elasticity theory (Example 1)
Probability theory (Example 2)

Biology (Example 3)
Mechanical vibrations (Example 4)

Short Courses. Of course, this section can be omitted, for reasons of time, or one or two of the examples can be considered quite briefly.

Comments on Content
The examples in this section have been selected from the viewpoint of modest prerequisites, so that not too much time will be needed to set the scene. Example 4 illustrates why real matrices can have complex eigenvalues (as mentioned before, in Sec. 8.1), and why these eigenvalues are physically meaningful. (For students familiar with systems of ODEs, one can easily pick further examples from Chap. 4.)

Comments on Problems
Problems 1–12 are similar to the applications shown in the examples of the text. Problems 13–15 show an interesting application of eigenvalue problems to production, typical of various other applications of eigenvalue theory in economics included in various textbooks on economic theory.

SOLUTIONS TO PROBLEM SET 8.2, page 333

1. Eigenvalues and eigenvectors are 1, [1, 1]^T and 2, [1, −1]^T. The eigenvectors are orthogonal.

2. Eigenvalues and eigenvectors are 1.6, [1, 1]^T and 2.4, [1, −1]^T. These vectors are orthogonal, as is typical of a symmetric matrix. Directions are 45° and −45°, respectively.

3. Eigenvalues 3 and −3, and eigenvectors [√2, 1]^T and [−1/√2, 1]^T, respectively.

4. Extension factors in the two orthogonal principal directions given by the eigenvectors (at 76.7° and −13.3°, respectively).

6. 2, [1, 1]^T; −2, [1, −1]^T; directions 45° and −45°, respectively.

7. Eigenvector [2.5, 1]^T with eigenvalue 1.

8. [1, 1, 1]^T, as could also be seen without calculation because A has row sums equal to 1, which would not be the case in general.

9. Eigenvector [0.2, 0.4, 1]^T with eigenvalue 1.

10. Growth rate 3. The characteristic polynomial is ¼(x − 3)(2x + 5)(2x + 1), which gives the remaining two eigenvalues as −2.5 and −0.5. The sum of all the eigenvalues is the trace, which is zero. Note that the growth rate is not that sensitive to the elements of the matrix.
Working with two decimal digits still retains the intrinsic character of the problem.

11. Growth rate is 4. The characteristic polynomial is (x − 4)(x + 1)(x + 3).

12. The growth rate is the dominant positive eigenvalue. The other eigenvalues are complex or negative and are not needed. The sum of all eigenvalues equals the trace, that is, 0, except for a roundoff error. This 4 × 4 Leslie matrix corresponds to a classification of the population into four classes.
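For instructors who want to demonstrate Probs. 10–12 with a computer, here is a sketch (an editorial addition): the growth rate of a Leslie model is the dominant eigenvalue, and the zero trace equals the sum of the eigenvalues. The matrix below is a hypothetical 3 × 3 Leslie matrix, not the one from the problem set.

```python
import numpy as np

# Hypothetical Leslie matrix: fecundities in the first row,
# survival rates on the subdiagonal, zeros elsewhere.
L = np.array([[0.0, 2.0, 1.0],
              [0.6, 0.0, 0.0],
              [0.0, 0.5, 0.0]])

vals, vecs = np.linalg.eig(L)

k = np.argmax(vals.real)            # dominant eigenvalue is real and positive
growth_rate = vals[k].real
stable_ages = np.abs(vecs[:, k].real)
stable_ages /= stable_ages.sum()    # stable age distribution, normalized

# The diagonal of L is zero, so the eigenvalues sum to trace L = 0.
assert np.isclose(vals.sum().real, 0.0, atol=1e-9)
assert growth_rate > 1.0            # this hypothetical population grows
```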

14. A has the same eigenvalues as A^T, and A^T has row sums 1, so that A^T has the eigenvalue 1 with eigenvector x = [1 1 ··· 1]^T. Leontief was a leader in the development and application of quantitative methods in empirical economic research, using genuine data from the economy of the United States to provide, in addition to the closed model of Prob. 13 (where the producers consume the whole production), open models of various situations of production and consumption, including import, export, taxes, capital gains and losses, etc. See W. W. Leontief, The Structure of the American Economy (Oxford: Oxford University Press, 1951); H. B. Chenery and P. G. Clark, Interindustry Economics (New York: Wiley, 1959).

16. This follows by comparing the coefficient of λ^(n−1) in the development of the characteristic determinant with that obtained from the product representation.

18. The first statement follows from

Ax = λx, (kA)x = k(Ax) = k(λx) = (kλ)x,

the second by induction and multiplication of A^k x_j = λ_j^k x_j by A from the left.

20. For the Leslie matrix,

det (L − λI) = −λ³ + l₁₂l₂₁λ + l₁₃l₂₁l₃₂.

Hence det L = l₁₃l₂₁l₃₂ > 0, so that λ = 0 is not an eigenvalue. If all three eigenvalues are real, at least one is positive since trace L = 0. The only other possibility is λ₁ = a + ib, λ₂ = a − ib, λ₃ real (except for the numbering of the eigenvalues). Then λ₃ > 0 because

λ₁λ₂λ₃ = (a² + b²)λ₃ = det L = l₁₃l₂₁l₃₂ > 0.

SECTION 8.3. Symmetric, Skew-Symmetric, and Orthogonal Matrices, page 334

Purpose. To introduce the student to the three most important classes of real square matrices and their general properties and eigenvalue theory.

Main Content, Important Concepts
The eigenvalues of a symmetric matrix are real.
The eigenvalues of a skew-symmetric matrix are pure imaginary or zero.
The eigenvalues of an orthogonal matrix have absolute value 1.
Further properties of orthogonal matrices.
Comments on Content
The student should memorize the preceding three statements on the locations of eigenvalues as well as the basic properties of orthogonal matrices (orthonormality of row vectors and of column vectors, invariance of the inner product, determinant equal to +1 or −1). Furthermore, it may be good to emphasize that, since the eigenvalues of an orthogonal matrix may be complex, so may be the eigenvectors. Similarly for skew-symmetric matrices. Both cases are simultaneously illustrated by

A = [[0, 1], [−1, 0]]

with eigenvectors [1, i]^T and [1, −i]^T corresponding to the eigenvalues i and −i, respectively.
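The three statements on eigenvalue locations can also be demonstrated numerically. The sketch below (an editorial addition using random matrices) builds a symmetric, a skew-symmetric, and an orthogonal matrix and checks where their spectra lie.

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))

S = (M + M.T) / 2        # symmetric part of M
K = (M - M.T) / 2        # skew-symmetric part of M
Q, _ = np.linalg.qr(M)   # an orthogonal matrix from the QR factorization

# Symmetric: real eigenvalues.
assert np.allclose(np.linalg.eigvals(S).imag, 0.0)
# Skew-symmetric: pure imaginary (or zero) eigenvalues.
assert np.allclose(np.linalg.eigvals(K).real, 0.0)
# Orthogonal: eigenvalues of absolute value 1.
assert np.allclose(np.abs(np.linalg.eigvals(Q)), 1.0)
```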

Further Comments on the Three Classes of Matrices in This Section
Reality of eigenvalues is a main reason for the importance of symmetric matrices: many quantities in physics, such as mass, energy, etc., are real. Formula (4) brings in skew-symmetric matrices in a rather natural fashion. Theorem 3 explains the importance of orthogonal matrices. Typical examples of the spectra of the matrices considered in this section are illustrated by Probs. 1–10, most importantly Probs. 4 and 8. Problems 13–20 should help the student gain a deeper understanding of the concepts and properties of the three classes of matrices considered in this section.

SOLUTIONS TO PROBLEM SET 8.3, page 338

1. Eigenvalues 3/5 ± (4/5)i, with eigenvectors [i, 1]^T and [−i, 1]^T, respectively. The matrix is orthogonal, and the eigenvalues accordingly have absolute value 1.

2. Eigenvalues a ± ib. Symmetric if b = 0; then the eigenvalues are real. Skew-symmetric if a = 0; then the eigenvalues are pure imaginary (or zero). Orthogonal if a² + b² = 1; then the eigenvalues have absolute value 1.

3. Skew-symmetric, not orthogonal; eigenvalues ±4i with eigenvectors [−i, 1]^T and [i, 1]^T.

4. The characteristic equation is

(cos θ − λ)² + sin²θ = λ² − (2 cos θ)λ + 1 = 0.

Hence the eigenvalues are λ = cos θ ± i sin θ. If θ = 0 (the identity transformation) we have λ = 1 with multiplicity 2. If θ ≠ 0, we obtain the eigenvectors from x₂ = ±ix₁, say, [1, i]^T and [1, −i]^T, which are complex; indeed, no (real) direction is preserved under a rotation.

5. Symmetric, not orthogonal; eigenvalues −2, 2, 3 and eigenvectors [0, 1/√3, 1]^T, [0, −√3, 1]^T, [1, 0, 0]^T, respectively.

6. a + 2k, [1, 1, 1]^T; a − k, [−1, 0, 1]^T and [−1, 1, 0]^T; symmetric (for real a and k).

7. Skew-symmetric, not orthogonal; eigenvalues 0 and ±3√2 i, the eigenvectors for the imaginary pair being complex conjugates of each other, [4/5 − (3/5)i, 2/5 + (6/5)i, 1]^T and [4/5 + (3/5)i, 2/5 − (6/5)i, 1]^T.

8. Orthogonal, a rotation about the x₁-axis through an angle θ. Eigenvalues 1 and cos θ ± i sin θ.
Compare with Prob. 4.

9. Eigenvalues −1, i, and −i, with eigenvectors [0, 1, 0]^T, [i, 0, 1]^T, and [−i, 0, 1]^T, respectively; the matrix is orthogonal, so all eigenvalues have absolute value 1.

10. Orthogonal; eigenvalues 1, −1, −1, all of absolute value 1. Eigenvectors [1, 1, 1]^T, [−1, 0, 1]^T, [−1, 1, 0]^T.
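The group properties of orthogonal matrices taken up in the CAS experiment of Prob. 12 below (products and inverses of rotations are again rotations) can be previewed with a short numerical check; this is an editorial addition, and the angles are arbitrary.

```python
import numpy as np

def rot(theta):
    """2x2 rotation matrix through the angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s],
                     [s,  c]])

A, B = rot(0.4), rot(1.1)
I = np.eye(2)

# The product of orthogonal matrices is orthogonal ...
assert np.allclose((A @ B).T @ (A @ B), I)
# ... as is the inverse of an orthogonal matrix ...
assert np.allclose(np.linalg.inv(A).T @ np.linalg.inv(A), I)
# ... and rotations compose by adding their angles.
assert np.allclose(A @ B, rot(0.4 + 1.1))
assert np.allclose(np.linalg.inv(A), rot(-0.4))
```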

12. CAS Experiment. Here

A = [[cos θ, −sin θ], [sin θ, cos θ]].

(a) A^T = A⁻¹ and B^T = B⁻¹, hence (AB)^T = B^T A^T = B⁻¹A⁻¹ = (AB)⁻¹. Also (A⁻¹)^T = (A^T)⁻¹ = (A⁻¹)⁻¹. In terms of rotations this means that the composite of rotations and the inverse of a rotation are rotations. (b) The inverse is the rotation through the negative of the angle. (c) To a rotation about the origin; the powers Aⁿ keep rotating and have no limit. For a student unfamiliar with complex numbers this may require some thought. (d) Limit 0, with the iterates approaching it along some spiral. (e) The matrix is obtained by using familiar values of cosine and sine, for instance

A = [[√3/2, −1/2], [1/2, √3/2]].

16. Let Ax = λx (x ≠ 0) and Ay = μy (y ≠ 0), with A symmetric, so that A^T = A. Then

λx^T = (Ax)^T = x^T A^T = x^T A,

and therefore

λx^T y = x^T Ay = μx^T y.

Hence if λ ≠ μ, then x^T y = 0, which proves orthogonality.

18. det A = det (A^T) = det (−A) = (−1)ⁿ det A = −det A if n is odd; hence det A = 0, and the answer is no. For even n = 2, 4, ··· we can have det A ≠ 0; yes, for instance,

A = [[0, 1], [−1, 0]].

SECTION 8.4. Eigenbases. Diagonalization. Quadratic Forms, page 339

Purpose. This section exhibits the role of bases of eigenvectors ("eigenbases") in connection with linear transformations and contains theorems of great practical importance in connection with eigenvalue problems.

Main Content, Important Concepts
Bases of eigenvectors (Theorems 1, 2)
Similar matrices have the same spectrum (Theorem 3)

Diagonalization of matrices (Theorem 4)
Principal axes transformation of forms (Theorem 5)

Short Courses. Complete omission of this section or restriction to a short look at Theorems 1 and 5.

Comments on Content
Theorem 3 on similar matrices has various applications in the design of numeric methods (Chap. 20), which often use successive similarity transformations to tridiagonalize or (nearly) diagonalize matrices on the way to approximations of eigenvalues and eigenvectors. The matrix X of eigenvectors [see (5)] also occurs quite frequently in that context. Theorem 2 is another result of fundamental importance in many applications, for instance, in those methods for numerically determining eigenvalues and eigenvectors. Its proof is substantially more difficult than the proofs given in this chapter. The theorems in this section give sufficient conditions for the existence of eigenbases (bases of eigenvectors), namely, the almost trivial Theorem 1 as well as the very important Theorem 2, exhibiting another basic property of symmetric matrices. This is followed in Theorems 3 and 4 by similarity of matrices and its application to diagonalization. The second part of the section concerns the principal axes transformation of quadratic forms and its application to conic sections. The extension of these ideas and results to complex matrices and forms follows in the next section, the last one of this chapter.

SOLUTIONS TO PROBLEM SET 8.4

1.–5. In each of these problems the similar matrix  = P⁻¹AP has the same eigenvalues as A (Theorem 3), and each eigenvector y of  yields the eigenvector x = Py of A for the same eigenvalue; for example, in Prob. 3, y = [26, 6]^T gives x = Py = [4, 2]^T.
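The pattern of Probs. 1–5, namely that  = P⁻¹AP has the same spectrum as A and that x = Py carries eigenvectors of  back to eigenvectors of A, can be verified numerically. This sketch is an editorial addition; the matrices are illustrative, not those of the problem set.

```python
import numpy as np

A = np.array([[4.0, 2.0],
              [1.0, 3.0]])
P = np.array([[1.0, 1.0],
              [0.0, 2.0]])       # any invertible matrix will do

A_hat = np.linalg.inv(P) @ A @ P

# Similar matrices have the same spectrum (Theorem 3).
assert np.allclose(np.sort(np.linalg.eigvals(A_hat).real),
                   np.sort(np.linalg.eigvals(A).real))

# If A_hat y = lam y, then A (P y) = lam (P y).
vals, Y = np.linalg.eig(A_hat)
for lam, y in zip(vals, Y.T):
    x = P @ y
    assert np.allclose(A @ x, lam * x)
```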

6. Project. (a) This follows immediately from the product representation of the characteristic polynomial of A. (b) C = AB has the entries c_jk = Σ_l a_jl b_lk, hence

trace AB = Σ_j Σ_l a_jl b_lj.

Furthermore, trace BA = Σ_l Σ_j b_lj a_jl, involving the same terms as those in the double sum for trace AB. (c) By multiplications from the right and from the left we readily obtain the asserted similarity relation between à and Â. (d) Interchange the corresponding eigenvectors (columns) in the matrix X in (5).

9. Eigenvalues 2 and 6, with the corresponding eigenvectors as columns of the eigenvector matrix.

10. The eigenvalues of A are 1 and −1. With X a matrix of corresponding eigenvectors, diagonalization gives D = X⁻¹AX = [[1, 0], [0, −1]].

12. A has the eigenvalues 0 and 5. With X a matrix of corresponding eigenvectors and X⁻¹ its inverse, diagonalization gives D = X⁻¹AX = [[0, 0], [0, 5]].

13. Eigenvalues 2, 9, 6, with the corresponding eigenvectors as columns of the eigenvector matrix.
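Diagonalization as used in these problems can be sketched as follows (an editorial addition with a made-up matrix): with X the matrix of eigenvectors as columns, D = X⁻¹AX is diagonal, which also makes powers of A cheap to compute.

```python
import numpy as np

A = np.array([[5.0, 4.0],
              [1.0, 2.0]])       # illustrative; eigenvalues 6 and 1

vals, X = np.linalg.eig(A)       # columns of X are eigenvectors
D = np.linalg.inv(X) @ A @ X

# D is diagonal, with the eigenvalues of A on the diagonal.
assert np.allclose(D, np.diag(vals))

# Application: A^m = X D^m X^{-1}.
m = 5
assert np.allclose(np.linalg.matrix_power(A, m),
                   X @ np.diag(vals**m) @ np.linalg.inv(X))
```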

14. A has the eigenvalues 2, 4, 1. With X a matrix of corresponding eigenvectors and X⁻¹ its inverse, diagonalization gives

D = X⁻¹AX = diag(2, 4, 1).

15. Eigenvalues 5, 1, 3, with the corresponding eigenvectors as columns of the eigenvector matrix.

16. A has the eigenvalues −2, 2, 0. With X a matrix of corresponding eigenvectors and X⁻¹ its inverse, diagonalization gives

D = X⁻¹AX = diag(−2, 2, 0).

18. The symmetric coefficient matrix C of the quadratic form has the eigenvalues 5 and −5. Hence the transformed quadratic form is 5y₁² − 5y₂² = const.

This is a hyperbola. The matrix X whose columns are normalized eigenvectors of C gives the relation between y and x in the form x = Xy, here

x = (1/√10)[[3, −1], [1, 3]] y.

20. The symmetric coefficient matrix C has the eigenvalues 0 and 10. Hence the transformed form is

10y₂² − 10 = 0, thus y₂ = ±1.

This represents a pair of parallel straight lines. The matrix X whose columns are normalized eigenvectors of C again gives the relation between y and x in the form x = Xy.

22. The symmetric coefficient matrix is

C = [[4, 6], [6, 13]].

Its eigenvalues are 1 and 16. Hence the transformed form is y₁² + 16y₂² = const. This represents an ellipse. The matrix X whose columns are normalized eigenvectors of C gives the relation between y and x in the form

x = (1/√5)[[2, 1], [−1, 2]] y.

24. Transform Q(x) by (9) to the canonical form (10). Since the inverse transform y = X⁻¹x of (9) exists, there is a one-to-one correspondence between all x ≠ 0 and y ≠ 0. Hence the values of Q(x) for x ≠ 0 coincide with the values of (10) on the right. But the latter are obviously controlled by the signs of the eigenvalues in the three ways stated in the theorem. This completes the proof.

SECTION 8.5. Complex Matrices and Forms. Optional, page 346

Purpose. This section is devoted to the three most important classes of complex matrices and corresponding forms and eigenvalue theory.

Main Content, Important Concepts
Hermitian and skew-Hermitian matrices
Unitary matrices, unitary systems
Location of eigenvalues (Fig. 163)
Quadratic forms, their symmetric coefficient matrix
Hermitian and skew-Hermitian forms

Background Material. Section 8.3, which the present section generalizes. The prerequisites on complex numbers are very modest, so that students will need hardly any extra help in that respect.

Short Courses. This section can be omitted.

The importance of these matrices results from quantum mechanics as well as from mathematics itself (e.g., from unitary transformations, product representations of nonsingular matrices A = UH, U unitary, H Hermitian, etc.). The determinant of a unitary matrix (see Theorem 4) may be complex. For example,

A = ((1 + i)/√2) [[1, 0], [0, 1]]

is unitary and has det A = i.

Comments on Problems
Complex matrices appear in quantum mechanics; see Prob. 17, etc. Problems 13–20 give an impression of calculations for complex matrices. Normal matrices, defined in Prob. 18, play an important role in a more extended theory of complex matrices.

SOLUTIONS TO PROBLEM SET 8.5, page 351

1. Hermitian; eigenvalues 3 and 1, with the matrix of corresponding eigenvectors [[i, −i], [1, 1]].

2. Skew-Hermitian. Eigenvalues and eigenvectors are i, [−i, 2]^T and 2i, [2, −i]^T, respectively; note that the eigenvectors are orthogonal in the complex sense.

3. Neither Hermitian, skew-Hermitian, nor unitary; the eigenvalues are the complex-conjugate pair 4 ± i√2.

4. Skew-Hermitian as well as unitary; eigenvalues i and −i, eigenvectors [1, 1]^T and [1, −1]^T, respectively.

5. Neither Hermitian, skew-Hermitian, nor unitary; eigenvalues i, i, and −i.

7. Hermitian; eigenvalues 4, 0, and −4 (real, as they must be), with mutually orthogonal complex eigenvectors.

8. Eigenvectors are as follows. (Multiplication by a complex constant may change them drastically!) For A: [1 − 3i, 5]^T and [1 − 3i, −2]^T. For B: [2 − i, i]^T and [2 − i, 5i]^T. For C: [1, 1]^T and [1, −1]^T.

9. Skew-Hermitian; the value of the form x̄^T A x is pure imaginary.

10. The matrix is non-Hermitian;

x̄^T A x = [3, 2i][4 − i, 3 + 6i]^T = 3i.

11. Skew-Hermitian; x̄^T A x = 4i.

12. The matrix is Hermitian, and the form x̄^T A x yields a real value.

14. Using the conjugate transpose A^H = Ā^T, we have

(BA)^H = A^H B^H = A(−B) = −AB,

since A is Hermitian and B is skew-Hermitian. For the matrices in Example 2 this can be checked by direct computation of AB.

16. The inverse of a product UV of unitary matrices is

(UV)⁻¹ = V⁻¹U⁻¹ = V^H U^H = (UV)^H,

which proves that UV is unitary. We show that the inverse B = A⁻¹ of a unitary matrix A is unitary. We obtain

B^H = (A⁻¹)^H = (A^H)⁻¹ = (A⁻¹)⁻¹ = B⁻¹,

as had to be shown.

18. A^H A = A² = AA^H if A is Hermitian; A^H A = (−A)A = A(−A) = AA^H if A is skew-Hermitian; A^H A = A⁻¹A = I = AA⁻¹ = AA^H if A is unitary.

20. For instance,

[[0, i], [0, 0]]

is not normal. A normal matrix that is not Hermitian, skew-Hermitian, or unitary is obtained if we take a unitary matrix and multiply it by 2 or some other real factor different from 1.
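The complex analogues can be checked the same way as in Sec. 8.3. The sketch below (an editorial addition with made-up matrices) verifies the eigenvalue locations for a Hermitian, a skew-Hermitian, and a unitary matrix, and that all three are normal in the sense of Prob. 18.

```python
import numpy as np

H = np.array([[2.0, 1 - 1j],
              [1 + 1j, 3.0]])               # Hermitian: H equals its
                                            # conjugate transpose
K = 1j * H                                  # skew-Hermitian
U = (1 / np.sqrt(2)) * np.array([[1, 1j],
                                 [1j, 1]])  # unitary

assert np.allclose(np.linalg.eigvals(H).imag, 0.0)     # real spectrum
assert np.allclose(np.linalg.eigvals(K).real, 0.0)     # pure imaginary
assert np.allclose(np.abs(np.linalg.eigvals(U)), 1.0)  # on the unit circle

# All three are normal: A A^H = A^H A.
for A in (H, K, U):
    assert np.allclose(A @ A.conj().T, A.conj().T @ A)
```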

SOLUTIONS TO CHAPTER 8 REVIEW QUESTIONS AND PROBLEMS, page 352

11. Eigenvalues 1 and 2, with the corresponding eigenvectors as columns of the eigenvector matrix.

12. The eigenvalues are 1 and −1. Corresponding eigenvectors are [2, 3]^T and [1, 2]^T, respectively. Note that this basis is not orthogonal.

13. Eigenvalues 1/2 and 3/2, with the corresponding eigenvectors as columns of the eigenvector matrix.

14. One of the eigenvalues is 9. Its algebraic and geometric multiplicities are both 2. Corresponding linearly independent eigenvectors are [1, 0, 2]^T and [0, 2, 1]^T. The other eigenvalue is 4.5. A corresponding eigenvector is [2, 2, 1]^T.

15. Eigenvalues 2i, −2i, and 0; the eigenvectors belonging to ±2i are complex conjugates of each other.

16. The eigenvalues of A are 7 and 9. The similar matrix  = P⁻¹AP has the same eigenvalues.

17. Eigenvalues 5 and 1.

18. A has the eigenvalues 2, 1, and −2. With P⁻¹ the inverse of P, the similar matrix  = P⁻¹AP has the same eigenvalues.

19. Eigenvalues 2/5 and 1/20, with the corresponding eigenvectors as columns of the eigenvector matrix X; X⁻¹ then effects the diagonalization.

20. Eigenvalues are 26 and 36. The corresponding matrix of eigenvectors is

X = (1/√290)[[17, −1], [1, 17]];

note that the columns are orthogonal, so that X⁻¹ = X^T. Diagonalization gives X⁻¹AX = diag(26, 36).

22. The symmetric coefficient matrix is

C = [[9, 3], [3, 17]].

Its eigenvalues are 8 and 18; they are both positive real. The transformed form is

8y₁² + 18y₂² = const.

This is the canonical form; there is no y₁y₂-term. It represents an ellipse. The matrix X whose columns are normalized eigenvectors of C gives the relationship between y and x in the form

x = (1/√10)[[3, 1], [−1, 3]] y.

24. The symmetric coefficient matrix C has the eigenvalues 10 and −10. The transformed form is

10y₁² − 10y₂² = 10(y₁ + y₂)(y₁ − y₂) = 0.

It represents two perpendicular straight lines through the origin. The matrix X whose columns are normalized eigenvectors of C gives the relationship between x and y in the form

x = (1/√5)[[2, 1], [−1, 2]] y.
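The principal axes transformations of Probs. 20–24 follow a single recipe that can be sketched numerically (an editorial addition; the quadratic form below is made up): diagonalize the symmetric coefficient matrix with an orthonormal eigenvector matrix X and read off the conic type from the signs of the eigenvalues.

```python
import numpy as np

# Illustrative quadratic form Q(x) = 17 x1^2 - 30 x1 x2 + 17 x2^2.
C = np.array([[17.0, -15.0],
              [-15.0, 17.0]])   # symmetric coefficient matrix

vals, X = np.linalg.eigh(C)     # eigh returns orthonormal eigenvectors

# In principal axes y (with x = X y) the form is lam1 y1^2 + lam2 y2^2.
assert np.allclose(X.T @ C @ X, np.diag(vals))
assert np.allclose(X.T @ X, np.eye(2))   # X is orthogonal

# Both eigenvalues positive: Q(x) = const > 0 represents an ellipse.
assert np.allclose(vals, [2.0, 32.0])
```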


Problem Set 5 Due: In class Thursday, Oct. 18 Late papers will be accepted until 1:00 PM Friday. Math 312, Fall 2012 Jerry L. Kazdan Problem Set 5 Due: In class Thursday, Oct. 18 Late papers will be accepted until 1:00 PM Friday. In addition to the problems below, you should also know how to solve

More information

Section 6.1 - Inner Products and Norms

Section 6.1 - Inner Products and Norms Section 6.1 - Inner Products and Norms Definition. Let V be a vector space over F {R, C}. An inner product on V is a function that assigns, to every ordered pair of vectors x and y in V, a scalar in F,

More information

Linearly Independent Sets and Linearly Dependent Sets

Linearly Independent Sets and Linearly Dependent Sets These notes closely follow the presentation of the material given in David C. Lay s textbook Linear Algebra and its Applications (3rd edition). These notes are intended primarily for in-class presentation

More information

Linear Algebra Review. Vectors

Linear Algebra Review. Vectors Linear Algebra Review By Tim K. Marks UCSD Borrows heavily from: Jana Kosecka kosecka@cs.gmu.edu http://cs.gmu.edu/~kosecka/cs682.html Virginia de Sa Cogsci 8F Linear Algebra review UCSD Vectors The length

More information

Math 115A HW4 Solutions University of California, Los Angeles. 5 2i 6 + 4i. (5 2i)7i (6 + 4i)( 3 + i) = 35i + 14 ( 22 6i) = 36 + 41i.

Math 115A HW4 Solutions University of California, Los Angeles. 5 2i 6 + 4i. (5 2i)7i (6 + 4i)( 3 + i) = 35i + 14 ( 22 6i) = 36 + 41i. Math 5A HW4 Solutions September 5, 202 University of California, Los Angeles Problem 4..3b Calculate the determinant, 5 2i 6 + 4i 3 + i 7i Solution: The textbook s instructions give us, (5 2i)7i (6 + 4i)(

More information

Lecture 1: Schur s Unitary Triangularization Theorem

Lecture 1: Schur s Unitary Triangularization Theorem Lecture 1: Schur s Unitary Triangularization Theorem This lecture introduces the notion of unitary equivalence and presents Schur s theorem and some of its consequences It roughly corresponds to Sections

More information

1 Introduction to Matrices

1 Introduction to Matrices 1 Introduction to Matrices In this section, important definitions and results from matrix algebra that are useful in regression analysis are introduced. While all statements below regarding the columns

More information

Solving Systems of Linear Equations

Solving Systems of Linear Equations LECTURE 5 Solving Systems of Linear Equations Recall that we introduced the notion of matrices as a way of standardizing the expression of systems of linear equations In today s lecture I shall show how

More information

5. Orthogonal matrices

5. Orthogonal matrices L Vandenberghe EE133A (Spring 2016) 5 Orthogonal matrices matrices with orthonormal columns orthogonal matrices tall matrices with orthonormal columns complex matrices with orthonormal columns 5-1 Orthonormal

More information

LINEAR ALGEBRA W W L CHEN

LINEAR ALGEBRA W W L CHEN LINEAR ALGEBRA W W L CHEN c W W L Chen, 1997, 2008 This chapter is available free to all individuals, on understanding that it is not to be used for financial gain, and may be downloaded and/or photocopied,

More information

Inner Product Spaces

Inner Product Spaces Math 571 Inner Product Spaces 1. Preliminaries An inner product space is a vector space V along with a function, called an inner product which associates each pair of vectors u, v with a scalar u, v, and

More information

13 MATH FACTS 101. 2 a = 1. 7. The elements of a vector have a graphical interpretation, which is particularly easy to see in two or three dimensions.

13 MATH FACTS 101. 2 a = 1. 7. The elements of a vector have a graphical interpretation, which is particularly easy to see in two or three dimensions. 3 MATH FACTS 0 3 MATH FACTS 3. Vectors 3.. Definition We use the overhead arrow to denote a column vector, i.e., a linear segment with a direction. For example, in three-space, we write a vector in terms

More information

Geometry of Vectors. 1 Cartesian Coordinates. Carlo Tomasi

Geometry of Vectors. 1 Cartesian Coordinates. Carlo Tomasi Geometry of Vectors Carlo Tomasi This note explores the geometric meaning of norm, inner product, orthogonality, and projection for vectors. For vectors in three-dimensional space, we also examine the

More information

1 Symmetries of regular polyhedra

1 Symmetries of regular polyhedra 1230, notes 5 1 Symmetries of regular polyhedra Symmetry groups Recall: Group axioms: Suppose that (G, ) is a group and a, b, c are elements of G. Then (i) a b G (ii) (a b) c = a (b c) (iii) There is an

More information

University of Lille I PC first year list of exercises n 7. Review

University of Lille I PC first year list of exercises n 7. Review University of Lille I PC first year list of exercises n 7 Review Exercise Solve the following systems in 4 different ways (by substitution, by the Gauss method, by inverting the matrix of coefficients

More information

ISOMETRIES OF R n KEITH CONRAD

ISOMETRIES OF R n KEITH CONRAD ISOMETRIES OF R n KEITH CONRAD 1. Introduction An isometry of R n is a function h: R n R n that preserves the distance between vectors: h(v) h(w) = v w for all v and w in R n, where (x 1,..., x n ) = x

More information

Lecture L3 - Vectors, Matrices and Coordinate Transformations

Lecture L3 - Vectors, Matrices and Coordinate Transformations S. Widnall 16.07 Dynamics Fall 2009 Lecture notes based on J. Peraire Version 2.0 Lecture L3 - Vectors, Matrices and Coordinate Transformations By using vectors and defining appropriate operations between

More information

MATH 304 Linear Algebra Lecture 9: Subspaces of vector spaces (continued). Span. Spanning set.

MATH 304 Linear Algebra Lecture 9: Subspaces of vector spaces (continued). Span. Spanning set. MATH 304 Linear Algebra Lecture 9: Subspaces of vector spaces (continued). Span. Spanning set. Vector space A vector space is a set V equipped with two operations, addition V V (x,y) x + y V and scalar

More information

THREE DIMENSIONAL GEOMETRY

THREE DIMENSIONAL GEOMETRY Chapter 8 THREE DIMENSIONAL GEOMETRY 8.1 Introduction In this chapter we present a vector algebra approach to three dimensional geometry. The aim is to present standard properties of lines and planes,

More information

8 Square matrices continued: Determinants

8 Square matrices continued: Determinants 8 Square matrices continued: Determinants 8. Introduction Determinants give us important information about square matrices, and, as we ll soon see, are essential for the computation of eigenvalues. You

More information

Orthogonal Diagonalization of Symmetric Matrices

Orthogonal Diagonalization of Symmetric Matrices MATH10212 Linear Algebra Brief lecture notes 57 Gram Schmidt Process enables us to find an orthogonal basis of a subspace. Let u 1,..., u k be a basis of a subspace V of R n. We begin the process of finding

More information

Figure 1.1 Vector A and Vector F

Figure 1.1 Vector A and Vector F CHAPTER I VECTOR QUANTITIES Quantities are anything which can be measured, and stated with number. Quantities in physics are divided into two types; scalar and vector quantities. Scalar quantities have

More information

1 VECTOR SPACES AND SUBSPACES

1 VECTOR SPACES AND SUBSPACES 1 VECTOR SPACES AND SUBSPACES What is a vector? Many are familiar with the concept of a vector as: Something which has magnitude and direction. an ordered pair or triple. a description for quantities such

More information

Physics 235 Chapter 1. Chapter 1 Matrices, Vectors, and Vector Calculus

Physics 235 Chapter 1. Chapter 1 Matrices, Vectors, and Vector Calculus Chapter 1 Matrices, Vectors, and Vector Calculus In this chapter, we will focus on the mathematical tools required for the course. The main concepts that will be covered are: Coordinate transformations

More information

a 11 x 1 + a 12 x 2 + + a 1n x n = b 1 a 21 x 1 + a 22 x 2 + + a 2n x n = b 2.

a 11 x 1 + a 12 x 2 + + a 1n x n = b 1 a 21 x 1 + a 22 x 2 + + a 2n x n = b 2. Chapter 1 LINEAR EQUATIONS 1.1 Introduction to linear equations A linear equation in n unknowns x 1, x,, x n is an equation of the form a 1 x 1 + a x + + a n x n = b, where a 1, a,..., a n, b are given

More information

Recall that two vectors in are perpendicular or orthogonal provided that their dot

Recall that two vectors in are perpendicular or orthogonal provided that their dot Orthogonal Complements and Projections Recall that two vectors in are perpendicular or orthogonal provided that their dot product vanishes That is, if and only if Example 1 The vectors in are orthogonal

More information

Section 1.1. Introduction to R n

Section 1.1. Introduction to R n The Calculus of Functions of Several Variables Section. Introduction to R n Calculus is the study of functional relationships and how related quantities change with each other. In your first exposure to

More information

Linear Algebra I. Ronald van Luijk, 2012

Linear Algebra I. Ronald van Luijk, 2012 Linear Algebra I Ronald van Luijk, 2012 With many parts from Linear Algebra I by Michael Stoll, 2007 Contents 1. Vector spaces 3 1.1. Examples 3 1.2. Fields 4 1.3. The field of complex numbers. 6 1.4.

More information

Bindel, Spring 2012 Intro to Scientific Computing (CS 3220) Week 3: Wednesday, Feb 8

Bindel, Spring 2012 Intro to Scientific Computing (CS 3220) Week 3: Wednesday, Feb 8 Spaces and bases Week 3: Wednesday, Feb 8 I have two favorite vector spaces 1 : R n and the space P d of polynomials of degree at most d. For R n, we have a canonical basis: R n = span{e 1, e 2,..., e

More information

PYTHAGOREAN TRIPLES KEITH CONRAD

PYTHAGOREAN TRIPLES KEITH CONRAD PYTHAGOREAN TRIPLES KEITH CONRAD 1. Introduction A Pythagorean triple is a triple of positive integers (a, b, c) where a + b = c. Examples include (3, 4, 5), (5, 1, 13), and (8, 15, 17). Below is an ancient

More information

Section 5.3. Section 5.3. u m ] l jj. = l jj u j + + l mj u m. v j = [ u 1 u j. l mj

Section 5.3. Section 5.3. u m ] l jj. = l jj u j + + l mj u m. v j = [ u 1 u j. l mj Section 5. l j v j = [ u u j u m ] l jj = l jj u j + + l mj u m. l mj Section 5. 5.. Not orthogonal, the column vectors fail to be perpendicular to each other. 5..2 his matrix is orthogonal. Check that

More information

Review Jeopardy. Blue vs. Orange. Review Jeopardy

Review Jeopardy. Blue vs. Orange. Review Jeopardy Review Jeopardy Blue vs. Orange Review Jeopardy Jeopardy Round Lectures 0-3 Jeopardy Round $200 How could I measure how far apart (i.e. how different) two observations, y 1 and y 2, are from each other?

More information

NOTES ON LINEAR TRANSFORMATIONS

NOTES ON LINEAR TRANSFORMATIONS NOTES ON LINEAR TRANSFORMATIONS Definition 1. Let V and W be vector spaces. A function T : V W is a linear transformation from V to W if the following two properties hold. i T v + v = T v + T v for all

More information

5.3 The Cross Product in R 3

5.3 The Cross Product in R 3 53 The Cross Product in R 3 Definition 531 Let u = [u 1, u 2, u 3 ] and v = [v 1, v 2, v 3 ] Then the vector given by [u 2 v 3 u 3 v 2, u 3 v 1 u 1 v 3, u 1 v 2 u 2 v 1 ] is called the cross product (or

More information

Unified Lecture # 4 Vectors

Unified Lecture # 4 Vectors Fall 2005 Unified Lecture # 4 Vectors These notes were written by J. Peraire as a review of vectors for Dynamics 16.07. They have been adapted for Unified Engineering by R. Radovitzky. References [1] Feynmann,

More information

Factorization Theorems

Factorization Theorems Chapter 7 Factorization Theorems This chapter highlights a few of the many factorization theorems for matrices While some factorization results are relatively direct, others are iterative While some factorization

More information

Numerical Analysis Lecture Notes

Numerical Analysis Lecture Notes Numerical Analysis Lecture Notes Peter J. Olver 5. Inner Products and Norms The norm of a vector is a measure of its size. Besides the familiar Euclidean norm based on the dot product, there are a number

More information

Identifying second degree equations

Identifying second degree equations Chapter 7 Identifing second degree equations 7.1 The eigenvalue method In this section we appl eigenvalue methods to determine the geometrical nature of the second degree equation a 2 + 2h + b 2 + 2g +

More information

The Determinant: a Means to Calculate Volume

The Determinant: a Means to Calculate Volume The Determinant: a Means to Calculate Volume Bo Peng August 20, 2007 Abstract This paper gives a definition of the determinant and lists many of its well-known properties Volumes of parallelepipeds are

More information

7.4. The Inverse of a Matrix. Introduction. Prerequisites. Learning Style. Learning Outcomes

7.4. The Inverse of a Matrix. Introduction. Prerequisites. Learning Style. Learning Outcomes The Inverse of a Matrix 7.4 Introduction In number arithmetic every number a 0 has a reciprocal b written as a or such that a ba = ab =. Similarly a square matrix A may have an inverse B = A where AB =

More information

MAT188H1S Lec0101 Burbulla

MAT188H1S Lec0101 Burbulla Winter 206 Linear Transformations A linear transformation T : R m R n is a function that takes vectors in R m to vectors in R n such that and T (u + v) T (u) + T (v) T (k v) k T (v), for all vectors u

More information

MATH 423 Linear Algebra II Lecture 38: Generalized eigenvectors. Jordan canonical form (continued).

MATH 423 Linear Algebra II Lecture 38: Generalized eigenvectors. Jordan canonical form (continued). MATH 423 Linear Algebra II Lecture 38: Generalized eigenvectors Jordan canonical form (continued) Jordan canonical form A Jordan block is a square matrix of the form λ 1 0 0 0 0 λ 1 0 0 0 0 λ 0 0 J = 0

More information

Geometric Transformations

Geometric Transformations Geometric Transformations Definitions Def: f is a mapping (function) of a set A into a set B if for every element a of A there exists a unique element b of B that is paired with a; this pairing is denoted

More information

x1 x 2 x 3 y 1 y 2 y 3 x 1 y 2 x 2 y 1 0.

x1 x 2 x 3 y 1 y 2 y 3 x 1 y 2 x 2 y 1 0. Cross product 1 Chapter 7 Cross product We are getting ready to study integration in several variables. Until now we have been doing only differential calculus. One outcome of this study will be our ability

More information

Mathematics Course 111: Algebra I Part IV: Vector Spaces

Mathematics Course 111: Algebra I Part IV: Vector Spaces Mathematics Course 111: Algebra I Part IV: Vector Spaces D. R. Wilkins Academic Year 1996-7 9 Vector Spaces A vector space over some field K is an algebraic structure consisting of a set V on which are

More information

α = u v. In other words, Orthogonal Projection

α = u v. In other words, Orthogonal Projection Orthogonal Projection Given any nonzero vector v, it is possible to decompose an arbitrary vector u into a component that points in the direction of v and one that points in a direction orthogonal to v

More information

2x + y = 3. Since the second equation is precisely the same as the first equation, it is enough to find x and y satisfying the system

2x + y = 3. Since the second equation is precisely the same as the first equation, it is enough to find x and y satisfying the system 1. Systems of linear equations We are interested in the solutions to systems of linear equations. A linear equation is of the form 3x 5y + 2z + w = 3. The key thing is that we don t multiply the variables

More information

Matrix Algebra. Some Basic Matrix Laws. Before reading the text or the following notes glance at the following list of basic matrix algebra laws.

Matrix Algebra. Some Basic Matrix Laws. Before reading the text or the following notes glance at the following list of basic matrix algebra laws. Matrix Algebra A. Doerr Before reading the text or the following notes glance at the following list of basic matrix algebra laws. Some Basic Matrix Laws Assume the orders of the matrices are such that

More information

South Carolina College- and Career-Ready (SCCCR) Pre-Calculus

South Carolina College- and Career-Ready (SCCCR) Pre-Calculus South Carolina College- and Career-Ready (SCCCR) Pre-Calculus Key Concepts Arithmetic with Polynomials and Rational Expressions PC.AAPR.2 PC.AAPR.3 PC.AAPR.4 PC.AAPR.5 PC.AAPR.6 PC.AAPR.7 Standards Know

More information

3 Orthogonal Vectors and Matrices

3 Orthogonal Vectors and Matrices 3 Orthogonal Vectors and Matrices The linear algebra portion of this course focuses on three matrix factorizations: QR factorization, singular valued decomposition (SVD), and LU factorization The first

More information

Vectors Math 122 Calculus III D Joyce, Fall 2012

Vectors Math 122 Calculus III D Joyce, Fall 2012 Vectors Math 122 Calculus III D Joyce, Fall 2012 Vectors in the plane R 2. A vector v can be interpreted as an arro in the plane R 2 ith a certain length and a certain direction. The same vector can be

More information

The Method of Partial Fractions Math 121 Calculus II Spring 2015

The Method of Partial Fractions Math 121 Calculus II Spring 2015 Rational functions. as The Method of Partial Fractions Math 11 Calculus II Spring 015 Recall that a rational function is a quotient of two polynomials such f(x) g(x) = 3x5 + x 3 + 16x x 60. The method

More information

DRAFT. Further mathematics. GCE AS and A level subject content

DRAFT. Further mathematics. GCE AS and A level subject content Further mathematics GCE AS and A level subject content July 2014 s Introduction Purpose Aims and objectives Subject content Structure Background knowledge Overarching themes Use of technology Detailed

More information

LINEAR EQUATIONS IN TWO VARIABLES

LINEAR EQUATIONS IN TWO VARIABLES 66 MATHEMATICS CHAPTER 4 LINEAR EQUATIONS IN TWO VARIABLES The principal use of the Analytic Art is to bring Mathematical Problems to Equations and to exhibit those Equations in the most simple terms that

More information

Mean value theorem, Taylors Theorem, Maxima and Minima.

Mean value theorem, Taylors Theorem, Maxima and Minima. MA 001 Preparatory Mathematics I. Complex numbers as ordered pairs. Argand s diagram. Triangle inequality. De Moivre s Theorem. Algebra: Quadratic equations and express-ions. Permutations and Combinations.

More information

MATH10212 Linear Algebra. Systems of Linear Equations. Definition. An n-dimensional vector is a row or a column of n numbers (or letters): a 1.

MATH10212 Linear Algebra. Systems of Linear Equations. Definition. An n-dimensional vector is a row or a column of n numbers (or letters): a 1. MATH10212 Linear Algebra Textbook: D. Poole, Linear Algebra: A Modern Introduction. Thompson, 2006. ISBN 0-534-40596-7. Systems of Linear Equations Definition. An n-dimensional vector is a row or a column

More information

The Singular Value Decomposition in Symmetric (Löwdin) Orthogonalization and Data Compression

The Singular Value Decomposition in Symmetric (Löwdin) Orthogonalization and Data Compression The Singular Value Decomposition in Symmetric (Löwdin) Orthogonalization and Data Compression The SVD is the most generally applicable of the orthogonal-diagonal-orthogonal type matrix decompositions Every

More information

Vector Math Computer Graphics Scott D. Anderson

Vector Math Computer Graphics Scott D. Anderson Vector Math Computer Graphics Scott D. Anderson 1 Dot Product The notation v w means the dot product or scalar product or inner product of two vectors, v and w. In abstract mathematics, we can talk about

More information

Vector and Matrix Norms

Vector and Matrix Norms Chapter 1 Vector and Matrix Norms 11 Vector Spaces Let F be a field (such as the real numbers, R, or complex numbers, C) with elements called scalars A Vector Space, V, over the field F is a non-empty

More information

A linear combination is a sum of scalars times quantities. Such expressions arise quite frequently and have the form

A linear combination is a sum of scalars times quantities. Such expressions arise quite frequently and have the form Section 1.3 Matrix Products A linear combination is a sum of scalars times quantities. Such expressions arise quite frequently and have the form (scalar #1)(quantity #1) + (scalar #2)(quantity #2) +...

More information

MATH 551 - APPLIED MATRIX THEORY

MATH 551 - APPLIED MATRIX THEORY MATH 55 - APPLIED MATRIX THEORY FINAL TEST: SAMPLE with SOLUTIONS (25 points NAME: PROBLEM (3 points A web of 5 pages is described by a directed graph whose matrix is given by A Do the following ( points

More information

Numerical Analysis Lecture Notes

Numerical Analysis Lecture Notes Numerical Analysis Lecture Notes Peter J. Olver 6. Eigenvalues and Singular Values In this section, we collect together the basic facts about eigenvalues and eigenvectors. From a geometrical viewpoint,

More information

Mathematics (MAT) MAT 061 Basic Euclidean Geometry 3 Hours. MAT 051 Pre-Algebra 4 Hours

Mathematics (MAT) MAT 061 Basic Euclidean Geometry 3 Hours. MAT 051 Pre-Algebra 4 Hours MAT 051 Pre-Algebra Mathematics (MAT) MAT 051 is designed as a review of the basic operations of arithmetic and an introduction to algebra. The student must earn a grade of C or in order to enroll in MAT

More information

Solving Systems of Linear Equations

Solving Systems of Linear Equations LECTURE 5 Solving Systems of Linear Equations Recall that we introduced the notion of matrices as a way of standardizing the expression of systems of linear equations In today s lecture I shall show how

More information

Mathematics Pre-Test Sample Questions A. { 11, 7} B. { 7,0,7} C. { 7, 7} D. { 11, 11}

Mathematics Pre-Test Sample Questions A. { 11, 7} B. { 7,0,7} C. { 7, 7} D. { 11, 11} Mathematics Pre-Test Sample Questions 1. Which of the following sets is closed under division? I. {½, 1,, 4} II. {-1, 1} III. {-1, 0, 1} A. I only B. II only C. III only D. I and II. Which of the following

More information

A note on companion matrices

A note on companion matrices Linear Algebra and its Applications 372 (2003) 325 33 www.elsevier.com/locate/laa A note on companion matrices Miroslav Fiedler Academy of Sciences of the Czech Republic Institute of Computer Science Pod

More information

The Characteristic Polynomial

The Characteristic Polynomial Physics 116A Winter 2011 The Characteristic Polynomial 1 Coefficients of the characteristic polynomial Consider the eigenvalue problem for an n n matrix A, A v = λ v, v 0 (1) The solution to this problem

More information

December 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B. KITCHENS

December 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B. KITCHENS December 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B KITCHENS The equation 1 Lines in two-dimensional space (1) 2x y = 3 describes a line in two-dimensional space The coefficients of x and y in the equation

More information

Elementary Linear Algebra

Elementary Linear Algebra Elementary Linear Algebra Kuttler January, Saylor URL: http://wwwsaylororg/courses/ma/ Saylor URL: http://wwwsaylororg/courses/ma/ Contents Some Prerequisite Topics Sets And Set Notation Functions Graphs

More information

Lecture 14: Section 3.3

Lecture 14: Section 3.3 Lecture 14: Section 3.3 Shuanglin Shao October 23, 2013 Definition. Two nonzero vectors u and v in R n are said to be orthogonal (or perpendicular) if u v = 0. We will also agree that the zero vector in

More information

Introduction to Matrices for Engineers

Introduction to Matrices for Engineers Introduction to Matrices for Engineers C.T.J. Dodson, School of Mathematics, Manchester Universit 1 What is a Matrix? A matrix is a rectangular arra of elements, usuall numbers, e.g. 1 0-8 4 0-1 1 0 11

More information

Nonlinear Iterative Partial Least Squares Method

Nonlinear Iterative Partial Least Squares Method Numerical Methods for Determining Principal Component Analysis Abstract Factors Béchu, S., Richard-Plouet, M., Fernandez, V., Walton, J., and Fairley, N. (2016) Developments in numerical treatments for

More information

3. INNER PRODUCT SPACES

3. INNER PRODUCT SPACES . INNER PRODUCT SPACES.. Definition So far we have studied abstract vector spaces. These are a generalisation of the geometric spaces R and R. But these have more structure than just that of a vector space.

More information

Chapter 19. General Matrices. An n m matrix is an array. a 11 a 12 a 1m a 21 a 22 a 2m A = a n1 a n2 a nm. The matrix A has n row vectors

Chapter 19. General Matrices. An n m matrix is an array. a 11 a 12 a 1m a 21 a 22 a 2m A = a n1 a n2 a nm. The matrix A has n row vectors Chapter 9. General Matrices An n m matrix is an array a a a m a a a m... = [a ij]. a n a n a nm The matrix A has n row vectors and m column vectors row i (A) = [a i, a i,..., a im ] R m a j a j a nj col

More information