Eigenvalues and Eigenvectors
Chapter 6  Eigenvalues and Eigenvectors

6.1  Introduction to Eigenvalues

Linear equations Ax = b come from steady state problems. Eigenvalues have their greatest importance in dynamic problems. The solution of du/dt = Au is changing with time, growing or decaying or oscillating. We can't find it by elimination. This chapter enters a new part of linear algebra, based on Ax = λx. All matrices in this chapter are square.

A good model comes from the powers A, A², A³, … of a matrix. Suppose you need the hundredth power A¹⁰⁰. The starting matrix A becomes unrecognizable after a few steps, and A¹⁰⁰ is very close to [.6 .6; .4 .4]:

A = [.8 .3; .2 .7]   A² = [.70 .45; .30 .55]   A³ = [.650 .525; .350 .475]   A¹⁰⁰ ≈ [.6000 .6000; .4000 .4000]

A¹⁰⁰ was found by using the eigenvalues of A, not by multiplying 100 matrices. Those eigenvalues (here they are 1 and 1/2) are a new way to see into the heart of a matrix.

To explain eigenvalues, we first explain eigenvectors. Almost all vectors change direction when they are multiplied by A. Certain exceptional vectors x are in the same direction as Ax. Those are the eigenvectors. Multiply an eigenvector by A, and the vector Ax is a number λ times the original x.

The basic equation is Ax = λx. The number λ is an eigenvalue of A.

The eigenvalue λ tells whether the special vector x is stretched or shrunk or reversed or left unchanged, when it is multiplied by A. We may find λ = 2 or 1/2 or −1 or 1. The eigenvalue λ could be zero! Then Ax = 0x means that this eigenvector x is in the nullspace.

If A is the identity matrix, every vector has Ax = x. All vectors are eigenvectors of I. All eigenvalues are λ = 1. This is unusual to say the least. Most 2 by 2 matrices have two eigenvector directions and two eigenvalues. We will show that det(A − λI) = 0.
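The steady-state claim above is easy to check numerically. Here is a minimal plain-Python sketch (ours, not the book's; the text itself works in MATLAB) that multiplies out the hundredth power. The helper name matmul is our own.

```python
# Sketch: powers of A = [.8 .3; .2 .7] approach the steady state [.6 .6; .4 .4].
# 2x2 matrices are nested lists; no libraries needed.

def matmul(X, Y):
    """Product of two 2x2 matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[0.8, 0.3], [0.2, 0.7]]

P = [[1.0, 0.0], [0.0, 1.0]]      # start from the identity
for _ in range(100):
    P = matmul(P, A)              # after the loop, P holds A^100

print(P)    # each column is extremely close to (0.6, 0.4)
```

The decaying mode shrinks by (1/2) at every step, so after 100 steps it is far below floating-point visibility.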
This section will explain how to compute the x's and λ's. It can come early in the course because we only need the determinant of a 2 by 2 matrix. Let me use det(A − λI) = 0 to find the eigenvalues for this first example, and then derive it properly in equation (3).

Example 1  The matrix A has two eigenvalues λ = 1 and λ = 1/2. Look at det(A − λI):

A = [.8 .3; .2 .7]    det [.8−λ .3; .2 .7−λ] = λ² − (3/2)λ + 1/2 = (λ − 1)(λ − 1/2).

I factored the quadratic into (λ − 1) times (λ − 1/2), to see the two eigenvalues λ = 1 and λ = 1/2. For those numbers, the matrix A − λI becomes singular (zero determinant). The eigenvectors x₁ and x₂ are in the nullspaces of A − I and A − (1/2)I.

(A − I)x₁ = 0 is Ax₁ = x₁ and the first eigenvector is (.6, .4).
(A − (1/2)I)x₂ = 0 is Ax₂ = (1/2)x₂ and the second eigenvector is (1, −1):

x₁ = [.6; .4] and Ax₁ = [.8 .3; .2 .7][.6; .4] = [.6; .4]  (Ax = x means that λ₁ = 1)
x₂ = [1; −1] and Ax₂ = [.8 .3; .2 .7][1; −1] = [.5; −.5]  (this is (1/2)x₂ so λ₂ = 1/2).

If x₁ is multiplied again by A, we still get x₁. Every power of A will give Aⁿx₁ = x₁. Multiplying x₂ by A gave (1/2)x₂, and if we multiply again we get (1/2)² times x₂.

When A is squared, the eigenvectors stay the same. The eigenvalues are squared.

This pattern keeps going, because the eigenvectors stay in their own directions (Figure 6.1) and never get mixed. The eigenvectors of A¹⁰⁰ are the same x₁ and x₂. The eigenvalues of A¹⁰⁰ are 1¹⁰⁰ = 1 and (1/2)¹⁰⁰ = very small number.

Figure 6.1: The eigenvectors keep their directions. A² has eigenvalues 1² and (.5)².

Other vectors do change direction. But all other vectors are combinations of the two eigenvectors. The first column of A is the combination x₁ + (.2)x₂:

Separate into eigenvectors  [.8; .2] = x₁ + (.2)x₂ = [.6; .4] + [.2; −.2].  (1)
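As a quick numerical aside (our sketch, not part of the text; matvec is our helper name), Example 1 can be verified directly: each eigenvector comes back scaled by its eigenvalue, and applying A twice scales by λ².

```python
# Check Ax = (lambda)x for the two eigenvectors of A = [.8 .3; .2 .7].

def matvec(A, x):
    """2x2 matrix times 2-vector."""
    return [A[0][0]*x[0] + A[0][1]*x[1],
            A[1][0]*x[0] + A[1][1]*x[1]]

A = [[0.8, 0.3], [0.2, 0.7]]
x1 = [0.6, 0.4]      # eigenvector for lambda = 1
x2 = [1.0, -1.0]     # eigenvector for lambda = 1/2

print(matvec(A, x1))             # approximately [0.6, 0.4]   = 1 * x1
print(matvec(A, x2))             # approximately [0.5, -0.5]  = (1/2) * x2
print(matvec(A, matvec(A, x2)))  # approximately [0.25, -0.25] = (1/2)^2 * x2
```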
Multiplying by A gives (.7, .3), the first column of A². Do it separately for x₁ and (.2)x₂. Of course Ax₁ = x₁. And A multiplies x₂ by its eigenvalue 1/2:

Multiply each xᵢ by λᵢ  A[.8; .2] = [.7; .3] is x₁ + (1/2)(.2)x₂ = [.6; .4] + [.1; −.1].

Each eigenvector is multiplied by its eigenvalue, when we multiply by A. We didn't need these eigenvectors to find A². But it is the good way to do 99 multiplications. At every step x₁ is unchanged and x₂ is multiplied by (1/2), so we have (1/2)⁹⁹:

A⁹⁹[.8; .2] is really x₁ + (.2)(1/2)⁹⁹x₂ = [.6; .4] + very small vector.

This is the first column of A¹⁰⁰. The number we originally wrote as .6000 was not exact. We left out (.2)(1/2)⁹⁹ which wouldn't show up for 30 decimal places.

The eigenvector x₁ is a "steady state" that doesn't change (because λ₁ = 1). The eigenvector x₂ is a "decaying mode" that virtually disappears (because λ₂ = .5). The higher the power of A, the closer its columns approach the steady state.

We mention that this particular A is a Markov matrix. Its entries are positive and every column adds to 1. Those facts guarantee that the largest eigenvalue is λ = 1 (as we found). Its eigenvector x₁ = (.6, .4) is the steady state, which all columns of Aᵏ will approach. Section 8.3 shows how Markov matrices appear in applications like Google.

For projections we can spot the steady state (λ = 1) and the nullspace (λ = 0).

Example 2  The projection matrix P = [.5 .5; .5 .5] has eigenvalues λ = 1 and λ = 0.

Its eigenvectors are x₁ = (1, 1) and x₂ = (1, −1). For those vectors, Px₁ = x₁ (steady state) and Px₂ = 0 (nullspace). This example illustrates Markov matrices and singular matrices and (most important) symmetric matrices. All have special λ's and x's:

1. Each column of P = [.5 .5; .5 .5] adds to 1, so λ = 1 is an eigenvalue.
2. P is singular, so λ = 0 is an eigenvalue.
3. P is symmetric, so its eigenvectors (1, 1) and (1, −1) are perpendicular.

The only eigenvalues of a projection matrix are 0 and 1. The eigenvectors for λ = 0 (which means Px = 0x) fill up the nullspace. The eigenvectors for λ = 1 (which means Px = x) fill up the column space.
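A quick check of Example 2 (our sketch; matvec is our helper, not a library call). The arithmetic here is exact in floating point.

```python
# The projection P = [.5 .5; .5 .5]: eigenvalues 1 and 0.

def matvec(P, x):
    return [P[0][0]*x[0] + P[0][1]*x[1],
            P[1][0]*x[0] + P[1][1]*x[1]]

P = [[0.5, 0.5], [0.5, 0.5]]
x1 = [1.0, 1.0]    # column-space direction, lambda = 1
x2 = [1.0, -1.0]   # nullspace direction,    lambda = 0

print(matvec(P, x1))   # [1.0, 1.0] : steady state, Px = x
print(matvec(P, x2))   # [0.0, 0.0] : nullspace,    Px = 0
```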
The nullspace is projected to zero. The column space projects onto itself. The projection keeps the column space and destroys the nullspace:

Project each part  v = [1; −1] + [2; 2] projects onto Pv = [0; 0] + [2; 2].

Special properties of a matrix lead to special eigenvalues and eigenvectors. That is a major theme of this chapter (it is captured in a table at the very end).
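The split of v into a nullspace part plus a column-space part can be verified in one line (our sketch, reusing a small matvec helper of our own):

```python
# v = (1, -1) + (2, 2) = (3, 1). The projection kills the (1, -1) part.

def matvec(M, x):
    return [M[0][0]*x[0] + M[0][1]*x[1],
            M[1][0]*x[0] + M[1][1]*x[1]]

P = [[0.5, 0.5], [0.5, 0.5]]
v = [3.0, 1.0]            # nullspace part (1, -1) plus column-space part (2, 2)

print(matvec(P, v))       # [2.0, 2.0] : only the column-space part survives
```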
Projections have λ = 0 and 1. Permutations have all |λ| = 1. The next matrix R (a reflection and at the same time a permutation) is also special.

Example 3  The reflection matrix R = [0 1; 1 0] has eigenvalues 1 and −1.

The eigenvector (1, 1) is unchanged by R. The second eigenvector is (1, −1); its signs are reversed by R. A matrix with no negative entries can still have a negative eigenvalue! The eigenvectors for R are the same as for P, because reflection = 2(projection) − I:

R = 2P − I    [0 1; 1 0] = 2[.5 .5; .5 .5] − [1 0; 0 1].  (2)

Here is the point. If Px = λx then 2Px = 2λx. The eigenvalues are doubled when the matrix is doubled. Now subtract Ix = x. The result is (2P − I)x = (2λ − 1)x. When a matrix is shifted by I, each λ is shifted by 1. No change in eigenvectors.

Figure 6.2: Projections P have eigenvalues 1 and 0. Reflections R have λ = 1 and −1. A typical x changes direction, but not the eigenvectors x₁ and x₂.

Key idea  The eigenvalues of R and P are related exactly as the matrices are related:

The eigenvalues of R = 2P − I are 2(1) − 1 = 1 and 2(0) − 1 = −1.

The eigenvalues of R² are λ². In this case R² = I. Check: (1)² = 1 and (−1)² = 1.

The Equation for the Eigenvalues

For projections and reflections we found λ's and x's by geometry: Px = x, Px = 0, Rx = −x. Now we use determinants and linear algebra. This is the key calculation in the chapter; almost every application starts by solving Ax = λx.

First move λx to the left side. Write the equation Ax = λx as (A − λI)x = 0. The matrix A − λI times the eigenvector x is the zero vector. The eigenvectors make up the nullspace of A − λI. When we know an eigenvalue λ, we find an eigenvector by solving (A − λI)x = 0.

Eigenvalues first. If (A − λI)x = 0 has a nonzero solution, A − λI is not invertible. The determinant of A − λI must be zero. This is how to recognize an eigenvalue λ:
Eigenvalues  The number λ is an eigenvalue of A if and only if A − λI is singular:

det(A − λI) = 0.  (3)

This "characteristic equation" det(A − λI) = 0 involves only λ, not x. When A is n by n, the equation has degree n. Then A has n eigenvalues and each leads to x:

For each λ solve (A − λI)x = 0 or Ax = λx to find an eigenvector x.

Example 4  A = [1 2; 2 4] is already singular (zero determinant). Find its λ's and x's.

When A is singular, λ = 0 is one of the eigenvalues. The equation Ax = 0x has solutions. They are the eigenvectors for λ = 0. But det(A − λI) = 0 is the way to find all λ's and x's. Always subtract λI from A:

Subtract λ from the diagonal to find  A − λI = [1−λ 2; 2 4−λ].  (4)

Take the determinant "ad − bc" of this 2 by 2 matrix. From (1 − λ) times (4 − λ), the "ad" part is λ² − 5λ + 4. The "bc" part, not containing λ, is 2 times 2:

det [1−λ 2; 2 4−λ] = (1 − λ)(4 − λ) − (2)(2) = λ² − 5λ.  (5)

Set this determinant λ² − 5λ to zero. One solution is λ = 0 (as expected, since A is singular). Factoring into λ times (λ − 5), the other root is λ = 5:

det(A − λI) = λ² − 5λ = 0 yields the eigenvalues λ₁ = 0 and λ₂ = 5.

Now find the eigenvectors. Solve (A − λI)x = 0 separately for λ₁ = 0 and λ₂ = 5:

(A − 0I)x = [1 2; 2 4][y; z] = [0; 0] yields an eigenvector [y; z] = [2; −1] for λ₁ = 0.
(A − 5I)x = [−4 2; 2 −1][y; z] = [0; 0] yields an eigenvector [y; z] = [1; 2] for λ₂ = 5.

The matrices A − 0I and A − 5I are singular (because 0 and 5 are eigenvalues). The eigenvectors (2, −1) and (1, 2) are in the nullspaces: (A − λI)x = 0 is Ax = λx.

We need to emphasize: There is nothing exceptional about λ = 0. Like every other number, zero might be an eigenvalue and it might not. If A is singular, it is. The eigenvectors fill the nullspace: Ax = 0x = 0. If A is invertible, zero is not an eigenvalue. We shift A by a multiple of I to make it singular. In the example, the shifted matrix A − 5I is singular and 5 is the other eigenvalue.
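Example 4 can be replayed in code. This is a plain-Python sketch (eig2 and eigvec2 are our helper names, not library calls): the eigenvalues come from the quadratic formula on the trace and determinant, and an eigenvector comes from the nullspace of A − λI, whose rows lie along some direction (a, b), so (b, −a) is in the nullspace.

```python
import math

def eig2(A):
    """Real eigenvalues of a 2x2 matrix from lambda^2 - (trace)lambda + det = 0.
    Assumes a nonnegative discriminant (real eigenvalues)."""
    tr = A[0][0] + A[1][1]
    det = A[0][0]*A[1][1] - A[0][1]*A[1][0]
    disc = math.sqrt(tr*tr - 4*det)
    return (tr - disc) / 2, (tr + disc) / 2

def eigvec2(A, lam):
    """An eigenvector for eigenvalue lam: rows of A - lam*I are multiples
    of (a, b), so (b, -a) spans the nullspace."""
    a, b = A[0][0] - lam, A[0][1]
    if a == 0 and b == 0:          # first row vanished; use the second
        a, b = A[1][0], A[1][1] - lam
    return [b, -a]

A = [[1, 2], [2, 4]]      # the singular matrix of Example 4
print(eig2(A))            # (0.0, 5.0)
print(eigvec2(A, 0))      # [2, -1]
print(eigvec2(A, 5))      # [2, 4]  (a multiple of (1, 2))
```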
Summary  To solve the eigenvalue problem for an n by n matrix, follow these steps:

1. Compute the determinant of A − λI. With λ subtracted along the diagonal, this determinant starts with λⁿ or −λⁿ. It is a polynomial in λ of degree n.
2. Find the roots of this polynomial, by solving det(A − λI) = 0. The n roots are the n eigenvalues of A. They make A − λI singular.
3. For each eigenvalue λ, solve (A − λI)x = 0 to find an eigenvector x.

A note on the eigenvectors of 2 by 2 matrices. When A − λI is singular, both rows are multiples of a vector (a, b). The eigenvector is any multiple of (b, −a). The example had λ = 0 and λ = 5:

λ = 0: rows of A − 0I in the direction (1, 2); eigenvector in the direction (2, −1).
λ = 5: rows of A − 5I in the direction (−4, 2); eigenvector in the direction (2, 4).

Previously we wrote that last eigenvector as (1, 2). Both (1, 2) and (2, 4) are correct. There is a whole line of eigenvectors; any nonzero multiple of x is as good as x. MATLAB's eig(A) divides by the length, to make the eigenvector into a unit vector.

We end with a warning. Some 2 by 2 matrices have only one line of eigenvectors. This can only happen when two eigenvalues are equal. (On the other hand A = I has equal eigenvalues and plenty of eigenvectors.) Similarly some n by n matrices don't have n independent eigenvectors. Without n eigenvectors, we don't have a basis. We can't write every v as a combination of eigenvectors. In the language of the next section, we can't diagonalize a matrix without n independent eigenvectors.

Good News, Bad News

Bad news first: If you add a row of A to another row, or exchange rows, the eigenvalues usually change. Elimination does not preserve the λ's. The triangular U has its eigenvalues sitting along the diagonal; they are the pivots. But they are not the eigenvalues of A! Eigenvalues are changed when row 1 is added to row 2:

U = [1 3; 0 0] has λ = 0 and λ = 1;  A = [1 3; 2 6] has λ = 0 and λ = 7.

Good news second: The product λ₁ times λ₂ and the sum λ₁ + λ₂ can be found quickly from the matrix.
For this A, the product is 0 times 7. That agrees with the determinant (which is 0). The sum of eigenvalues is 0 + 7. That agrees with the sum down the main diagonal (the trace is 1 + 6). These quick checks always work:

The product of the n eigenvalues equals the determinant.
The sum of the n eigenvalues equals the sum of the n diagonal entries.
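These two checks are one line each in code. A sketch (ours) using the matrix A = [1 3; 2 6] from the bad-news example, whose eigenvalues are 0 and 7:

```python
# Quick checks: sum of eigenvalues = trace, product of eigenvalues = determinant.
A = [[1, 3], [2, 6]]
lam = [0, 7]                                   # the eigenvalues of this A

trace = A[0][0] + A[1][1]                      # 1 + 6 = 7
det = A[0][0]*A[1][1] - A[0][1]*A[1][0]        # 6 - 6 = 0

print(lam[0] + lam[1] == trace)   # True
print(lam[0] * lam[1] == det)     # True
```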
The sum of the entries on the main diagonal is called the trace of A:

λ₁ + λ₂ + ⋯ + λₙ = trace = a₁₁ + a₂₂ + ⋯ + aₙₙ.  (6)

Those checks are very useful. They are proved in Problems 16–17 and again in the next section. They don't remove the pain of computing λ's. But when the computation is wrong, they generally tell us so. To compute the correct λ's, go back to det(A − λI) = 0.

The determinant test makes the product of the λ's equal to the product of the pivots (assuming no row exchanges). But the sum of the λ's is not the sum of the pivots, as the example showed. The individual λ's have almost nothing to do with the pivots. In this new part of linear algebra, the key equation is really nonlinear: λ multiplies x.

Why do the eigenvalues of a triangular matrix lie on its diagonal?

Imaginary Eigenvalues

One more bit of news (not too terrible). The eigenvalues might not be real numbers.

Example 5  The 90° rotation Q = [0 −1; 1 0] has no real eigenvectors. Its eigenvalues are λ₁ = i and λ₂ = −i. Sum of λ's = trace = 0. Product = determinant = 1.

After a rotation, no vector Qx stays in the same direction as x (except x = 0 which is useless). There cannot be an eigenvector, unless we go to imaginary numbers. Which we do.

To see how i can help, look at Q² which is −I. If Q is rotation through 90°, then Q² is rotation through 180°. Its eigenvalues are −1 and −1. (Certainly −Ix = −x.) Squaring Q will square each λ, so we must have λ² = −1. The eigenvalues of the 90° rotation matrix Q are +i and −i, because i² = −1.

Those λ's come as usual from det(Q − λI) = 0. This equation gives λ² + 1 = 0. Its roots are i and −i. We meet the imaginary number i also in the eigenvectors:

Complex eigenvectors  [0 −1; 1 0][1; i] = −i[1; i]  and  [0 −1; 1 0][i; 1] = i[i; 1].

Somehow these complex vectors x₁ = (1, i) and x₂ = (i, 1) keep their direction as they are rotated. Don't ask me how. This example makes the all-important point that real matrices can easily have complex eigenvalues and eigenvectors. The particular eigenvalues i and −i also illustrate two special properties of Q:
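Python's built-in complex numbers make the rotation example easy to verify (a sketch of ours, not from the text; matvec is our helper):

```python
# The 90-degree rotation Q = [0 -1; 1 0] and its complex eigenvectors.
Q = [[0, -1], [1, 0]]

def matvec(M, x):
    return [M[0][0]*x[0] + M[0][1]*x[1],
            M[1][0]*x[0] + M[1][1]*x[1]]

x1 = [1, 1j]    # eigenvector for lambda = -i
x2 = [1j, 1]    # eigenvector for lambda = +i

print(matvec(Q, x1) == [-1j * c for c in x1])   # True: Q x1 = -i x1
print(matvec(Q, x2) == [1j * c for c in x2])    # True: Q x2 = +i x2
```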
1. Q is an orthogonal matrix, so the absolute value of each λ is |λ| = 1.
2. Q is a skew-symmetric matrix, so each λ is pure imaginary.
A symmetric matrix (Aᵀ = A) can be compared to a real number. A skew-symmetric matrix (Aᵀ = −A) can be compared to an imaginary number. An orthogonal matrix (AᵀA = I) can be compared to a complex number with |λ| = 1. For the eigenvalues those are more than analogies; they are theorems to be proved in Section 6.4. The eigenvectors for all these special matrices are perpendicular. Somehow (i, 1) and (1, i) are perpendicular (Chapter 10 explains the dot product of complex vectors).

Eigshow in MATLAB

There is a MATLAB demo (just type eigshow), displaying the eigenvalue problem for a 2 by 2 matrix. It starts with the unit vector x = (1, 0). The mouse makes this vector move around the unit circle. At the same time the screen shows Ax, in color and also moving. Possibly Ax is ahead of x. Possibly Ax is behind x. Sometimes Ax is parallel to x. At that parallel moment, Ax = λx (at x₁ and x₂ in the second figure).

The eigenvalue λ is the length of Ax, when the unit eigenvector x lines up. The built-in choices for A illustrate three possibilities: 0, 1, or 2 real directions where Ax crosses x.

0. There are no real eigenvectors. Ax stays behind or ahead of x. This means the eigenvalues and eigenvectors are complex, as they are for the rotation Q.
1. There is only one line of eigenvectors (unusual). The moving directions Ax and x touch but don't cross over. This happens for the last 2 by 2 matrix below.
2. There are eigenvectors in two independent directions. This is typical! Ax crosses x at the first eigenvector x₁, and it crosses back at the second eigenvector x₂. Then Ax and x cross again at −x₁ and −x₂.

You can mentally follow x and Ax for these five matrices. Under the matrices I will count their real eigenvectors. Can you see where Ax lines up with x?
When A is singular (rank one), its column space is a line. The vector Ax goes up and down that line while x circles around. One eigenvector x is along the line. Another eigenvector appears when Ax = 0. Zero is an eigenvalue of a singular matrix.

REVIEW OF THE KEY IDEAS

1. Ax = λx says that eigenvectors x keep the same direction when multiplied by A.
2. Ax = λx also says that det(A − λI) = 0. This determines n eigenvalues.
3. The eigenvalues of A² and A⁻¹ are λ² and λ⁻¹, with the same eigenvectors.
4. The sum of the λ's equals the sum down the main diagonal of A (the trace). The product of the λ's equals the determinant.
5. Projections P, reflections R, 90° rotations Q have special eigenvalues 1, 0, −1, i, −i. Singular matrices have λ = 0. Triangular matrices have λ's on their diagonal.

WORKED EXAMPLES

6.1 A  Find the eigenvalues and eigenvectors of A and A² and A⁻¹ and A + 4I:

A = [2 1; 1 2] and A² = [5 4; 4 5].

Check the trace λ₁ + λ₂ and the determinant λ₁λ₂ for A and also A².

Solution  The eigenvalues of A come from det(A − λI) = 0:

det(A − λI) = det [2−λ 1; 1 2−λ] = λ² − 4λ + 3 = 0.

This factors into (λ − 1)(λ − 3) = 0, so the eigenvalues of A are λ₁ = 1 and λ₂ = 3. For the trace, the sum 1 + 3 agrees with 2 + 2. The determinant 3 agrees with the product λ₁λ₂ = 3. The eigenvectors come separately by solving (A − λI)x = 0 which is Ax = λx:

λ = 1: (A − I)x = [1 1; 1 1][x; y] = [0; 0] gives the eigenvector x₁ = [1; −1].
λ = 3: (A − 3I)x = [−1 1; 1 −1][x; y] = [0; 0] gives the eigenvector x₂ = [1; 1].
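A numerical check of Worked Example 6.1 A (our sketch; matvec is our helper). The four related matrices share the eigenvectors while the eigenvalues transform as λ, λ², λ⁻¹, λ + 4:

```python
# A = [2 1; 1 2] with lambda = 1, 3 and eigenvectors (1, -1) and (1, 1).

def matvec(M, x):
    return [M[0][0]*x[0] + M[0][1]*x[1],
            M[1][0]*x[0] + M[1][1]*x[1]]

A    = [[2, 1], [1, 2]]
A2   = [[5, 4], [4, 5]]                 # A squared
Ainv = [[2/3, -1/3], [-1/3, 2/3]]       # inverse of A (det A = 3)
A4I  = [[6, 1], [1, 6]]                 # A + 4I

x1, x2 = [1, -1], [1, 1]

print(matvec(A, x1), matvec(A, x2))     # [1, -1] and [3, 3]: lambdas 1 and 3
print(matvec(A2, x2))                   # [9, 9] : eigenvalue 3^2 = 9
print(matvec(Ainv, x2))                 # approximately [1/3, 1/3]
print(matvec(A4I, x2))                  # [7, 7] : eigenvalue 3 + 4 = 7
```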
A² and A⁻¹ and A + 4I keep the same eigenvectors as A. Their eigenvalues are λ², λ⁻¹, and λ + 4:

A² has eigenvalues 1² = 1 and 3² = 9.  A⁻¹ has 1 and 1/3.  A + 4I has 1 + 4 = 5 and 3 + 4 = 7.

The trace of A² is 5 + 5 which agrees with 1 + 9. The determinant is 25 − 16 = 9.

Notes for later sections: A has orthogonal eigenvectors (Section 6.4 on symmetric matrices). A can be diagonalized since λ₁ ≠ λ₂ (Section 6.2). A is similar to any 2 by 2 matrix with eigenvalues 1 and 3 (Section 6.6). A is a positive definite matrix (Section 6.5) since A = Aᵀ and the λ's are positive.

6.1 B  Find the eigenvalues and eigenvectors of this 3 by 3 matrix A:

Symmetric matrix, singular matrix, trace 1 + 2 + 1 = 4:  A = [1 −1 0; −1 2 −1; 0 −1 1].

Solution  Since all rows of A add to zero, the vector x = (1, 1, 1) gives Ax = 0. This is an eigenvector for the eigenvalue λ = 0. To find λ₂ and λ₃ I will compute the 3 by 3 determinant:

det(A − λI) = (1 − λ)[(2 − λ)(1 − λ) − 1] − (1 − λ) = (1 − λ)[(2 − λ)(1 − λ) − 2] = (1 − λ)(−λ)(3 − λ).

That factor −λ confirms that λ = 0 is a root, and an eigenvalue of A. The other factors (1 − λ) and (3 − λ) give the other eigenvalues 1 and 3, adding to 4 (the trace). Each eigenvalue 0, 1, 3 corresponds to an eigenvector:

x₁ = [1; 1; 1] with Ax₁ = 0x₁,  x₂ = [1; 0; −1] with Ax₂ = 1x₂,  x₃ = [1; −2; 1] with Ax₃ = 3x₃.

I notice again that eigenvectors are perpendicular when A is symmetric. The 3 by 3 matrix produced a third-degree (cubic) polynomial for det(A − λI) = −λ³ + 4λ² − 3λ. We were lucky to find simple roots λ = 0, 1, 3. Normally we would use a command like eig(A), and the computation will never even use determinants (Section 9.3 shows a better way for large matrices).

The full command [S, D] = eig(A) will produce unit eigenvectors in the columns of the eigenvector matrix S. The first one happens to have three minus signs, reversed from (1, 1, 1) and divided by √3. The eigenvalues of A will be on the diagonal of the eigenvalue matrix (typed as D but soon called Λ).
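A check of Worked Example 6.1 B (our sketch, using our own matvec and dot helpers). Every eigenpair satisfies Ax = λx exactly in integer arithmetic, and the eigenvectors for different eigenvalues are perpendicular because A is symmetric:

```python
# The symmetric singular matrix with eigenvalues 0, 1, 3.

def matvec(M, x):
    return [sum(M[i][j] * x[j] for j in range(3)) for i in range(3)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

A = [[1, -1, 0],
     [-1, 2, -1],
     [0, -1, 1]]

pairs = [(0, [1, 1, 1]), (1, [1, 0, -1]), (3, [1, -2, 1])]
for lam, x in pairs:
    print(matvec(A, x) == [lam * c for c in x])    # True for each pair

# Perpendicular eigenvectors (symmetric A):
print(dot([1, 1, 1], [1, 0, -1]), dot([1, 1, 1], [1, -2, 1]))   # 0 0
```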
Problem Set 6.1

1  The example at the start of the chapter has powers of this matrix A:

A = [.8 .3; .2 .7] and A² = [.70 .45; .30 .55] and A¹⁰⁰ ≈ [.6 .6; .4 .4].

Find the eigenvalues of these matrices. All powers have the same eigenvectors.
(a) Show from A how a row exchange can produce different eigenvalues.
(b) Why is a zero eigenvalue not changed by the steps of elimination?

2  Find the eigenvalues and the eigenvectors of these two matrices:

A = [1 4; 2 3] and A + I = [2 4; 2 4].

A + I has the ___ eigenvectors as A. Its eigenvalues are ___ by 1.

3  Compute the eigenvalues and eigenvectors of A and A⁻¹. Check the trace!

A = [0 2; 1 1] and A⁻¹ = [−1/2 1; 1/2 0].

A⁻¹ has the ___ eigenvectors as A. When A has eigenvalues λ₁ and λ₂, its inverse has eigenvalues ___.

4  Compute the eigenvalues and eigenvectors of A and A²:

A = [−1 3; 2 0] and A² = [7 −3; −2 6].

A² has the same ___ as A. When A has eigenvalues λ₁ and λ₂, A² has eigenvalues ___. In this example, why is λ₁² + λ₂² = 13?

5  Find the eigenvalues of A and B (easy for triangular matrices) and A + B:

A = [3 0; 1 1] and B = [1 1; 0 3] and A + B = [4 1; 1 4].

Eigenvalues of A + B (are equal to)(are not equal to) eigenvalues of A plus eigenvalues of B.

6  Find the eigenvalues of A and B and AB and BA:

A = [1 0; 1 1] and B = [1 2; 0 1] and AB = [1 2; 1 3] and BA = [3 2; 1 1].

(a) Are the eigenvalues of AB equal to eigenvalues of A times eigenvalues of B?
(b) Are the eigenvalues of AB equal to the eigenvalues of BA?
7  Elimination produces A = LU. The eigenvalues of U are on its diagonal; they are the ___. The eigenvalues of L are on its diagonal; they are all ___. The eigenvalues of A are not the same as ___.

8  (a) If you know that x is an eigenvector, the way to find λ is to ___.
(b) If you know that λ is an eigenvalue, the way to find x is to ___.

9  What do you do to the equation Ax = λx, in order to prove (a), (b), and (c)?
(a) λ² is an eigenvalue of A², as in Problem 4.
(b) λ⁻¹ is an eigenvalue of A⁻¹, as in Problem 3.
(c) λ + 1 is an eigenvalue of A + I, as in Problem 2.

10  Find the eigenvalues and eigenvectors for both of these Markov matrices A and A^∞. Explain from those answers why A¹⁰⁰ is close to A^∞:

A = [.6 .2; .4 .8] and A^∞ = [1/3 1/3; 2/3 2/3].

11  Here is a strange fact about 2 by 2 matrices with eigenvalues λ₁ ≠ λ₂: The columns of A − λ₁I are multiples of the eigenvector x₂. Any idea why this should be?

12  Find three eigenvectors for this matrix P (projection matrices have λ = 1 and 0):

Projection matrix  P = [.2 .4 0; .4 .8 0; 0 0 1].

If two eigenvectors share the same λ, so do all their linear combinations. Find an eigenvector of P with no zero components.

13  From the unit vector u = (1/6, 1/6, 3/6, 5/6) construct the rank one projection matrix P = uuᵀ. This matrix has P² = P because uᵀu = 1.
(a) Pu = u comes from (uuᵀ)u = u(uᵀu). Then u is an eigenvector with λ = 1.
(b) If v is perpendicular to u show that Pv = 0. Then λ = 0.
(c) Find three independent eigenvectors of P all with eigenvalue λ = 0.

14  Solve det(Q − λI) = 0 by the quadratic formula to reach λ = cos θ ± i sin θ:

Q = [cos θ −sin θ; sin θ cos θ] rotates the xy plane by the angle θ. No real λ's.

Find the eigenvectors of Q by solving (Q − λI)x = 0. Use i² = −1.
15  Every permutation matrix leaves x = (1, 1, …, 1) unchanged. Then λ = 1. Find two more λ's (possibly complex) for these permutations, from det(P − λI) = 0:

P = [0 1 0; 0 0 1; 1 0 0] and P = [0 0 1; 0 1 0; 1 0 0].

16  The determinant of A equals the product λ₁λ₂ ⋯ λₙ. Start with the polynomial det(A − λI) separated into its n factors (always possible). Then set λ = 0:

det(A − λI) = (λ₁ − λ)(λ₂ − λ) ⋯ (λₙ − λ) so det A = ___.

Check this rule in Example 1 where the Markov matrix has λ = 1 and 1/2.

17  The sum of the diagonal entries (the trace) equals the sum of the eigenvalues:

A = [a b; c d] has det(A − λI) = λ² − (a + d)λ + ad − bc = 0.

The quadratic formula gives the eigenvalues λ = (a + d + √(…))/2 and λ = ___. Their sum is ___. If A has λ₁ = 3 and λ₂ = 4 then det(A − λI) = ___.

18  If A has λ₁ = 4 and λ₂ = 5 then det(A − λI) = (λ − 4)(λ − 5) = λ² − 9λ + 20. Find three matrices that have trace a + d = 9 and determinant 20 and λ = 4, 5.

19  A 3 by 3 matrix B is known to have eigenvalues 0, 1, 2. This information is enough to find three of these (give the answers where possible):
(a) the rank of B
(b) the determinant of BᵀB
(c) the eigenvalues of BᵀB
(d) the eigenvalues of (B² + I)⁻¹.

20  Choose the last rows of A and C to give eigenvalues 4, 7 and 1, 2, 3:

Companion matrices  A = [0 1; ∗ ∗] and C = [0 1 0; 0 0 1; ∗ ∗ ∗].

21  The eigenvalues of A equal the eigenvalues of Aᵀ. This is because det(A − λI) equals det(Aᵀ − λI). That is true because ___. Show by an example that the eigenvectors of A and Aᵀ are not the same.

22  Construct any 3 by 3 Markov matrix M: positive entries down each column add to 1. Show that Mᵀ(1, 1, 1) = (1, 1, 1). By Problem 21, λ = 1 is also an eigenvalue of M. Challenge: A 3 by 3 singular Markov matrix with trace 1/2 has what λ's?
23  Find three 2 by 2 matrices that have λ₁ = λ₂ = 0. The trace is zero and the determinant is zero. A might not be the zero matrix but check that A² = 0.

24  This matrix is singular with rank one. Find three λ's and three eigenvectors:

A = [1; 2; 1][2 1 2] = [2 1 2; 4 2 4; 2 1 2].

25  Suppose A and B have the same eigenvalues λ₁, …, λₙ with the same independent eigenvectors x₁, …, xₙ. Then A = B. Reason: Any vector x is a combination c₁x₁ + ⋯ + cₙxₙ. What is Ax? What is Bx?

26  The block B has eigenvalues 1, 2 and C has eigenvalues 3, 4 and D has eigenvalues 5, 7. Find the eigenvalues of the 4 by 4 matrix A:

A = [B C; 0 D].

27  Find the rank and the four eigenvalues of A and C:

A = [1 1 1 1; 1 1 1 1; 1 1 1 1; 1 1 1 1] and C = [1 0 1 0; 0 1 0 1; 1 0 1 0; 0 1 0 1].

28  Subtract I from the previous A. Find the λ's and then the determinants of:

B = A − I and C = I − A.

29  (Review) Find the eigenvalues of A, B, and C:

A = [1 2 3; 0 4 5; 0 0 6] and B = [0 0 1; 0 2 0; 3 0 0] and C = [2 2 2; 2 2 2; 2 2 2].

30  When a + b = c + d show that (1, 1) is an eigenvector and find both eigenvalues:

A = [a b; c d].
31  If we exchange rows 1 and 2 and columns 1 and 2, the eigenvalues don't change. Find eigenvectors of A and B for λ = 11. Rank one gives λ₂ = λ₃ = 0:

A = [1 2 1; 3 6 3; 4 8 4] and B = PAPᵀ = [6 3 3; 2 1 1; 8 4 4].

32  Suppose A has eigenvalues 0, 3, 5 with independent eigenvectors u, v, w.
(a) Give a basis for the nullspace and a basis for the column space.
(b) Find a particular solution to Ax = v + w. Find all solutions.
(c) Ax = u has no solution. If it did then ___ would be in the column space.

33  Suppose u, v are orthonormal vectors in R³, and A = uvᵀ. Compute A² = uvᵀuvᵀ to discover the eigenvalues of A. Check that the trace of A agrees with λ₁ + λ₂ + λ₃.

34  Find the eigenvalues of this permutation matrix P from det(P − λI) = 0. Which vectors are not changed by the permutation? They are eigenvectors for λ = 1. Can you find three more eigenvectors?

P = [0 0 0 1; 1 0 0 0; 0 1 0 0; 0 0 1 0].

Challenge Problems

35  There are six 3 by 3 permutation matrices P. What numbers can be the determinants of P? What numbers can be pivots? What numbers can be the trace of P? What four numbers can be eigenvalues of P, as in Problem 15?

36  Is there a real 2 by 2 matrix (other than I) with A³ = I? Its eigenvalues must satisfy λ³ = 1. They can be e^{2πi/3} and e^{−2πi/3}. What trace and determinant would this give? Construct a rotation matrix as A (which angle of rotation?).

37  (a) Find the eigenvalues and eigenvectors of A. They depend on c:

A = [.4 1−c; .6 c].

(b) Show that A has just one line of eigenvectors when c = 1.6.
(c) This is a Markov matrix when c = .8. Then Aⁿ will approach what matrix A^∞?
8 Square matrices continued: Determinants 8. Introduction Determinants give us important information about square matrices, and, as we ll soon see, are essential for the computation of eigenvalues. You
Inner Product Spaces and Orthogonality
Inner Product Spaces and Orthogonality week 3-4 Fall 2006 Dot product of R n The inner product or dot product of R n is a function, defined by u, v a b + a 2 b 2 + + a n b n for u a, a 2,, a n T, v b,
15.062 Data Mining: Algorithms and Applications Matrix Math Review
.6 Data Mining: Algorithms and Applications Matrix Math Review The purpose of this document is to give a brief review of selected linear algebra concepts that will be useful for the course and to develop
MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 2. x n. a 11 a 12 a 1n b 1 a 21 a 22 a 2n b 2 a 31 a 32 a 3n b 3. a m1 a m2 a mn b m
MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS 1. SYSTEMS OF EQUATIONS AND MATRICES 1.1. Representation of a linear system. The general system of m equations in n unknowns can be written a 11 x 1 + a 12 x 2 +
1 Introduction to Matrices
1 Introduction to Matrices In this section, important definitions and results from matrix algebra that are useful in regression analysis are introduced. While all statements below regarding the columns
ISOMETRIES OF R n KEITH CONRAD
ISOMETRIES OF R n KEITH CONRAD 1. Introduction An isometry of R n is a function h: R n R n that preserves the distance between vectors: h(v) h(w) = v w for all v and w in R n, where (x 1,..., x n ) = x
by the matrix A results in a vector which is a reflection of the given
Eigenvalues & Eigenvectors Example Suppose Then So, geometrically, multiplying a vector in by the matrix A results in a vector which is a reflection of the given vector about the y-axis We observe that
Introduction to Matrix Algebra
Psychology 7291: Multivariate Statistics (Carey) 8/27/98 Matrix Algebra - 1 Introduction to Matrix Algebra Definitions: A matrix is a collection of numbers ordered by rows and columns. It is customary
LINEAR ALGEBRA W W L CHEN
LINEAR ALGEBRA W W L CHEN c W W L Chen, 1997, 2008 This chapter is available free to all individuals, on understanding that it is not to be used for financial gain, and may be downloaded and/or photocopied,
December 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B. KITCHENS
December 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B KITCHENS The equation 1 Lines in two-dimensional space (1) 2x y = 3 describes a line in two-dimensional space The coefficients of x and y in the equation
Solution of Linear Systems
Chapter 3 Solution of Linear Systems In this chapter we study algorithms for possibly the most commonly occurring problem in scientific computing, the solution of linear systems of equations. We start
Review Jeopardy. Blue vs. Orange. Review Jeopardy
Review Jeopardy Blue vs. Orange Review Jeopardy Jeopardy Round Lectures 0-3 Jeopardy Round $200 How could I measure how far apart (i.e. how different) two observations, y 1 and y 2, are from each other?
Linear Algebra Notes for Marsden and Tromba Vector Calculus
Linear Algebra Notes for Marsden and Tromba Vector Calculus n-dimensional Euclidean Space and Matrices Definition of n space As was learned in Math b, a point in Euclidean three space can be thought of
Using row reduction to calculate the inverse and the determinant of a square matrix
Using row reduction to calculate the inverse and the determinant of a square matrix Notes for MATH 0290 Honors by Prof. Anna Vainchtein 1 Inverse of a square matrix An n n square matrix A is called invertible
Numerical Analysis Lecture Notes
Numerical Analysis Lecture Notes Peter J. Olver 6. Eigenvalues and Singular Values In this section, we collect together the basic facts about eigenvalues and eigenvectors. From a geometrical viewpoint,
Eigenvalues, Eigenvectors, Matrix Factoring, and Principal Components
Eigenvalues, Eigenvectors, Matrix Factoring, and Principal Components The eigenvalues and eigenvectors of a square matrix play a key role in some important operations in statistics. In particular, they
6. Cholesky factorization
6. Cholesky factorization EE103 (Fall 2011-12) triangular matrices forward and backward substitution the Cholesky factorization solving Ax = b with A positive definite inverse of a positive definite matrix
expression is written horizontally. The Last terms ((2)( 4)) because they are the last terms of the two polynomials. This is called the FOIL method.
A polynomial of degree n (in one variable, with real coefficients) is an expression of the form: a n x n + a n 1 x n 1 + a n 2 x n 2 + + a 2 x 2 + a 1 x + a 0 where a n, a n 1, a n 2, a 2, a 1, a 0 are
MATH 304 Linear Algebra Lecture 9: Subspaces of vector spaces (continued). Span. Spanning set.
MATH 304 Linear Algebra Lecture 9: Subspaces of vector spaces (continued). Span. Spanning set. Vector space A vector space is a set V equipped with two operations, addition V V (x,y) x + y V and scalar
Lecture 14: Section 3.3
Lecture 14: Section 3.3 Shuanglin Shao October 23, 2013 Definition. Two nonzero vectors u and v in R n are said to be orthogonal (or perpendicular) if u v = 0. We will also agree that the zero vector in
a 1 x + a 0 =0. (3) ax 2 + bx + c =0. (4)
ROOTS OF POLYNOMIAL EQUATIONS In this unit we discuss polynomial equations. A polynomial in x of degree n, where n 0 is an integer, is an expression of the form P n (x) =a n x n + a n 1 x n 1 + + a 1 x
Question 2: How do you solve a matrix equation using the matrix inverse?
Question : How do you solve a matrix equation using the matrix inverse? In the previous question, we wrote systems of equations as a matrix equation AX B. In this format, the matrix A contains the coefficients
SOLVING LINEAR SYSTEMS
SOLVING LINEAR SYSTEMS Linear systems Ax = b occur widely in applied mathematics They occur as direct formulations of real world problems; but more often, they occur as a part of the numerical analysis
JUST THE MATHS UNIT NUMBER 1.8. ALGEBRA 8 (Polynomials) A.J.Hobson
JUST THE MATHS UNIT NUMBER 1.8 ALGEBRA 8 (Polynomials) by A.J.Hobson 1.8.1 The factor theorem 1.8.2 Application to quadratic and cubic expressions 1.8.3 Cubic equations 1.8.4 Long division of polynomials
Problem Set 5 Due: In class Thursday, Oct. 18 Late papers will be accepted until 1:00 PM Friday.
Math 312, Fall 2012 Jerry L. Kazdan Problem Set 5 Due: In class Thursday, Oct. 18 Late papers will be accepted until 1:00 PM Friday. In addition to the problems below, you should also know how to solve
Lecture notes on linear algebra
Lecture notes on linear algebra David Lerner Department of Mathematics University of Kansas These are notes of a course given in Fall, 2007 and 2008 to the Honors sections of our elementary linear algebra
Section 1.1. Introduction to R n
The Calculus of Functions of Several Variables Section. Introduction to R n Calculus is the study of functional relationships and how related quantities change with each other. In your first exposure to
3.2 Sources, Sinks, Saddles, and Spirals
3.2. Sources, Sinks, Saddles, and Spirals 6 3.2 Sources, Sinks, Saddles, and Spirals The pictures in this section show solutions to Ay 00 C By 0 C Cy D 0. These are linear equations with constant coefficients
Notes on Determinant
ENGG2012B Advanced Engineering Mathematics Notes on Determinant Lecturer: Kenneth Shum Lecture 9-18/02/2013 The determinant of a system of linear equations determines whether the solution is unique, without
Linear Algebra: Determinants, Inverses, Rank
D Linear Algebra: Determinants, Inverses, Rank D 1 Appendix D: LINEAR ALGEBRA: DETERMINANTS, INVERSES, RANK TABLE OF CONTENTS Page D.1. Introduction D 3 D.2. Determinants D 3 D.2.1. Some Properties of
MATH 423 Linear Algebra II Lecture 38: Generalized eigenvectors. Jordan canonical form (continued).
MATH 423 Linear Algebra II Lecture 38: Generalized eigenvectors Jordan canonical form (continued) Jordan canonical form A Jordan block is a square matrix of the form λ 1 0 0 0 0 λ 1 0 0 0 0 λ 0 0 J = 0
1 VECTOR SPACES AND SUBSPACES
1 VECTOR SPACES AND SUBSPACES What is a vector? Many are familiar with the concept of a vector as: Something which has magnitude and direction. an ordered pair or triple. a description for quantities such
4: EIGENVALUES, EIGENVECTORS, DIAGONALIZATION
4: EIGENVALUES, EIGENVECTORS, DIAGONALIZATION STEVEN HEILMAN Contents 1. Review 1 2. Diagonal Matrices 1 3. Eigenvectors and Eigenvalues 2 4. Characteristic Polynomial 4 5. Diagonalizability 6 6. Appendix:
Math 215 HW #6 Solutions
Math 5 HW #6 Solutions Problem 34 Show that x y is orthogonal to x + y if and only if x = y Proof First, suppose x y is orthogonal to x + y Then since x, y = y, x In other words, = x y, x + y = (x y) T
University of Lille I PC first year list of exercises n 7. Review
University of Lille I PC first year list of exercises n 7 Review Exercise Solve the following systems in 4 different ways (by substitution, by the Gauss method, by inverting the matrix of coefficients
Notes on Symmetric Matrices
CPSC 536N: Randomized Algorithms 2011-12 Term 2 Notes on Symmetric Matrices Prof. Nick Harvey University of British Columbia 1 Symmetric Matrices We review some basic results concerning symmetric matrices.
DATA ANALYSIS II. Matrix Algorithms
DATA ANALYSIS II Matrix Algorithms Similarity Matrix Given a dataset D = {x i }, i=1,..,n consisting of n points in R d, let A denote the n n symmetric similarity matrix between the points, given as where
Applied Linear Algebra
Applied Linear Algebra OTTO BRETSCHER http://www.prenhall.com/bretscher Chapter 7 Eigenvalues and Eigenvectors Chia-Hui Chang Email: [email protected] National Central University, Taiwan 7.1 DYNAMICAL
Solving Linear Systems, Continued and The Inverse of a Matrix
, Continued and The of a Matrix Calculus III Summer 2013, Session II Monday, July 15, 2013 Agenda 1. The rank of a matrix 2. The inverse of a square matrix Gaussian Gaussian solves a linear system by reducing
What are the place values to the left of the decimal point and their associated powers of ten?
The verbal answers to all of the following questions should be memorized before completion of algebra. Answers that are not memorized will hinder your ability to succeed in geometry and algebra. (Everything
THREE DIMENSIONAL GEOMETRY
Chapter 8 THREE DIMENSIONAL GEOMETRY 8.1 Introduction In this chapter we present a vector algebra approach to three dimensional geometry. The aim is to present standard properties of lines and planes,
Lecture 2 Matrix Operations
Lecture 2 Matrix Operations transpose, sum & difference, scalar multiplication matrix multiplication, matrix-vector product matrix inverse 2 1 Matrix transpose transpose of m n matrix A, denoted A T or
3. INNER PRODUCT SPACES
. INNER PRODUCT SPACES.. Definition So far we have studied abstract vector spaces. These are a generalisation of the geometric spaces R and R. But these have more structure than just that of a vector space.
Matrices and Linear Algebra
Chapter 2 Matrices and Linear Algebra 2. Basics Definition 2... A matrix is an m n array of scalars from a given field F. The individual values in the matrix are called entries. Examples. 2 3 A = 2 4 2
MAT188H1S Lec0101 Burbulla
Winter 206 Linear Transformations A linear transformation T : R m R n is a function that takes vectors in R m to vectors in R n such that and T (u + v) T (u) + T (v) T (k v) k T (v), for all vectors u
26. Determinants I. 1. Prehistory
26. Determinants I 26.1 Prehistory 26.2 Definitions 26.3 Uniqueness and other properties 26.4 Existence Both as a careful review of a more pedestrian viewpoint, and as a transition to a coordinate-independent
18.06 Problem Set 4 Solution Due Wednesday, 11 March 2009 at 4 pm in 2-106. Total: 175 points.
806 Problem Set 4 Solution Due Wednesday, March 2009 at 4 pm in 2-06 Total: 75 points Problem : A is an m n matrix of rank r Suppose there are right-hand-sides b for which A x = b has no solution (a) What
Operation Count; Numerical Linear Algebra
10 Operation Count; Numerical Linear Algebra 10.1 Introduction Many computations are limited simply by the sheer number of required additions, multiplications, or function evaluations. If floating-point
Algebra Unpacked Content For the new Common Core standards that will be effective in all North Carolina schools in the 2012-13 school year.
This document is designed to help North Carolina educators teach the Common Core (Standard Course of Study). NCDPI staff are continually updating and improving these tools to better serve teachers. Algebra
Vector Math Computer Graphics Scott D. Anderson
Vector Math Computer Graphics Scott D. Anderson 1 Dot Product The notation v w means the dot product or scalar product or inner product of two vectors, v and w. In abstract mathematics, we can talk about
1.3 Algebraic Expressions
1.3 Algebraic Expressions A polynomial is an expression of the form: a n x n + a n 1 x n 1 +... + a 2 x 2 + a 1 x + a 0 The numbers a 1, a 2,..., a n are called coefficients. Each of the separate parts,
Vectors 2. The METRIC Project, Imperial College. Imperial College of Science Technology and Medicine, 1996.
Vectors 2 The METRIC Project, Imperial College. Imperial College of Science Technology and Medicine, 1996. Launch Mathematica. Type
Abstract: We describe the beautiful LU factorization of a square matrix (or how to write Gaussian elimination in terms of matrix multiplication).
MAT 2 (Badger, Spring 202) LU Factorization Selected Notes September 2, 202 Abstract: We describe the beautiful LU factorization of a square matrix (or how to write Gaussian elimination in terms of matrix
Linear Algebra I. Ronald van Luijk, 2012
Linear Algebra I Ronald van Luijk, 2012 With many parts from Linear Algebra I by Michael Stoll, 2007 Contents 1. Vector spaces 3 1.1. Examples 3 1.2. Fields 4 1.3. The field of complex numbers. 6 1.4.
Identifying second degree equations
Chapter 7 Identifing second degree equations 7.1 The eigenvalue method In this section we appl eigenvalue methods to determine the geometrical nature of the second degree equation a 2 + 2h + b 2 + 2g +
u = [ 2 4 5] has one row with three components (a 3 v = [2 4 5] has three rows separated by semicolons (a 3 w = 2:5 generates the row vector w = [ 2 3
MATLAB Tutorial You need a small numb e r of basic commands to start using MATLAB. This short tutorial describes those fundamental commands. You need to create vectors and matrices, to change them, and
9 Multiplication of Vectors: The Scalar or Dot Product
Arkansas Tech University MATH 934: Calculus III Dr. Marcel B Finan 9 Multiplication of Vectors: The Scalar or Dot Product Up to this point we have defined what vectors are and discussed basic notation
October 3rd, 2012. Linear Algebra & Properties of the Covariance Matrix
Linear Algebra & Properties of the Covariance Matrix October 3rd, 2012 Estimation of r and C Let rn 1, rn, t..., rn T be the historical return rates on the n th asset. rn 1 rṇ 2 r n =. r T n n = 1, 2,...,
11.1. Objectives. Component Form of a Vector. Component Form of a Vector. Component Form of a Vector. Vectors and the Geometry of Space
11 Vectors and the Geometry of Space 11.1 Vectors in the Plane Copyright Cengage Learning. All rights reserved. Copyright Cengage Learning. All rights reserved. 2 Objectives! Write the component form of
APPLICATIONS. are symmetric, but. are not.
CHAPTER III APPLICATIONS Real Symmetric Matrices The most common matrices we meet in applications are symmetric, that is, they are square matrices which are equal to their transposes In symbols, A t =
Solving Systems of Linear Equations
LECTURE 5 Solving Systems of Linear Equations Recall that we introduced the notion of matrices as a way of standardizing the expression of systems of linear equations In today s lecture I shall show how
Factorization Theorems
Chapter 7 Factorization Theorems This chapter highlights a few of the many factorization theorems for matrices While some factorization results are relatively direct, others are iterative While some factorization
x + y + z = 1 2x + 3y + 4z = 0 5x + 6y + 7z = 3
Math 24 FINAL EXAM (2/9/9 - SOLUTIONS ( Find the general solution to the system of equations 2 4 5 6 7 ( r 2 2r r 2 r 5r r x + y + z 2x + y + 4z 5x + 6y + 7z 2 2 2 2 So x z + y 2z 2 and z is free. ( r
3.2 The Factor Theorem and The Remainder Theorem
3. The Factor Theorem and The Remainder Theorem 57 3. The Factor Theorem and The Remainder Theorem Suppose we wish to find the zeros of f(x) = x 3 + 4x 5x 4. Setting f(x) = 0 results in the polynomial
GCE Mathematics (6360) Further Pure unit 4 (MFP4) Textbook
Version 36 klm GCE Mathematics (636) Further Pure unit 4 (MFP4) Textbook The Assessment and Qualifications Alliance (AQA) is a company limited by guarantee registered in England and Wales 364473 and a
Introduction to Algebraic Geometry. Bézout s Theorem and Inflection Points
Introduction to Algebraic Geometry Bézout s Theorem and Inflection Points 1. The resultant. Let K be a field. Then the polynomial ring K[x] is a unique factorisation domain (UFD). Another example of a
3 Orthogonal Vectors and Matrices
3 Orthogonal Vectors and Matrices The linear algebra portion of this course focuses on three matrix factorizations: QR factorization, singular valued decomposition (SVD), and LU factorization The first
Solution to Homework 2
Solution to Homework 2 Olena Bormashenko September 23, 2011 Section 1.4: 1(a)(b)(i)(k), 4, 5, 14; Section 1.5: 1(a)(b)(c)(d)(e)(n), 2(a)(c), 13, 16, 17, 18, 27 Section 1.4 1. Compute the following, if
