# Chapter III: Applications


## 1. Real Symmetric Matrices

The most common matrices we meet in applications are symmetric, that is, they are square matrices which are equal to their transposes; in symbols, A^t = A. For example,

    [ 0  1 ]                      [ 0  1 ]
    [ 1  0 ]   is symmetric, but  [-1  0 ]   is not.

Symmetric matrices are in many ways much simpler to deal with than general matrices. First, as we noted previously, it is not generally true that the roots of the characteristic equation of a matrix are necessarily real numbers, even if the matrix has only real entries. However, if A is a symmetric matrix with real entries, then the roots of its characteristic equation are all real.

Example. The characteristic equations of the two matrices above are λ² - 1 = 0 and λ² + 1 = 0 respectively. Notice the dramatic effect of a simple change of sign.

The reason for the reality of the roots (for a real symmetric matrix) is a bit subtle, and we will come back to it in later sections. The second important property of real symmetric matrices is that they are always diagonalizable, that is, there is always a basis for R^n consisting of eigenvectors for the matrix.
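The contrast above is easy to check numerically. A minimal sketch with NumPy, assuming the two matrices are the sign-flipped pair [0 1; 1 0] and [0 1; -1 0] from the example:

```python
import numpy as np

# A sketch of the example above, assuming the two matrices are the
# sign-flipped pair [0 1; 1 0] (symmetric) and [0 1; -1 0] (not symmetric).
S = np.array([[0.0, 1.0], [1.0, 0.0]])
N = np.array([[0.0, 1.0], [-1.0, 0.0]])

sym_eigs = np.linalg.eigvals(S)      # roots of lambda^2 - 1 = 0: +1 and -1
nonsym_eigs = np.linalg.eigvals(N)   # roots of lambda^2 + 1 = 0: +i and -i

all_real = np.allclose(sym_eigs.imag, 0.0)
any_complex = not np.allclose(nonsym_eigs.imag, 0.0)
```

The symmetric matrix has only real eigenvalues; the sign change produces a purely imaginary pair.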

Example. We previously found a basis for R² consisting of eigenvectors for a symmetric matrix A. If you look carefully, you will note that the eigenvectors v1 and v2 not only form a basis, but they are perpendicular to one another, i.e., v1 · v2 = 0.

The perpendicularity of the eigenvectors is no accident. It is always the case for a symmetric matrix, by the following reasoning. First, recall that the dot product of two column vectors u and v in R^n can be written as a row by column product:

    u · v = u^t v = [u1 u2 ... un] [v1; v2; ...; vn] = Σ (i = 1 to n) ui vi.

Suppose now that Au = λu and Av = μv, i.e., u and v are eigenvectors for A with corresponding eigenvalues λ and μ. Assume λ ≠ μ. Then

    (1)  u · (Av) = u^t (Av) = u^t (μv) = μ(u^t v) = μ(u · v).

On the other hand,

    (2)  (Au) · v = (Au)^t v = (λu)^t v = λ(u^t v) = λ(u · v).

However, since the matrix is symmetric, A^t = A, and

    (Au)^t v = (u^t A^t) v = (u^t A) v = u^t (Av).

The first of these expressions is what was calculated in (2) and the last was calculated in (1), so the two are equal, i.e., μ(u · v) = λ(u · v). If u · v ≠ 0, we can cancel the common factor to conclude that μ = λ, which is contrary to our assumption, so it must be true that u · v = 0, i.e., u ⊥ v. We summarize this as follows.

Eigenvectors for a real symmetric matrix which belong to different eigenvalues are necessarily perpendicular.

This fact has important consequences. Assume first that A is real and symmetric and that its eigenvalues are distinct. Then not only is there a basis consisting of eigenvectors, but the basis elements are also mutually perpendicular.
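The argument above can be confirmed numerically. A sketch using an arbitrary symmetric matrix with distinct eigenvalues (not necessarily the one from the example):

```python
import numpy as np

# Sketch: eigenvectors of a real symmetric matrix belonging to different
# eigenvalues come out perpendicular. The matrix is an arbitrary
# symmetric example, not necessarily the one in the text.
A = np.array([[2.0, 1.0], [1.0, 2.0]])
eigvals, eigvecs = np.linalg.eigh(A)   # eigh is the routine for symmetric matrices

u = eigvecs[:, 0]                      # eigenvector for the smaller eigenvalue
v = eigvecs[:, 1]                      # eigenvector for the larger eigenvalue
dot = u @ v                            # should vanish: u is perpendicular to v
```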

This is reminiscent of the familiar situation in R² and R³, where coordinate axes are almost always assumed to be mutually perpendicular. For arbitrary matrices, we may have to face the prospect of using skew axes, but the above remark tells us we can avoid this possibility in the symmetric case.

In two or three dimensions, we usually require our basis vectors to be unit vectors. There is no problem with that here. Namely, if u is not a unit vector, we can always obtain a unit vector by dividing u by its length |u|. Moreover, if u is an eigenvector for A with eigenvalue λ, then any nonzero multiple of u is also such an eigenvector; in particular, the unit vector u/|u| is.

Example, revisited. The eigenvectors v1 and v2 have the same length, so we replace them by the corresponding unit vectors u1 = v1/|v1| and u2 = v2/|v2|, which also constitute a basis for R².

There is some special terminology which is commonly used in linear algebra for the familiar concepts discussed above. Two vectors are said to be orthogonal if they are perpendicular. A unit vector is said to be normalized. The idea is that if we started with a non-unit vector, we would produce an equivalent unit vector by dividing it by its length. The latter process is called normalization. Finally, a basis for R^n consisting of mutually perpendicular unit vectors is called an orthonormal basis.

Exercises for Section 1

1. (a) Find a basis of eigenvectors for the given symmetric matrix A. (b) Check that the basis vectors are orthogonal, and normalize them to yield an orthonormal basis.

2. (a) Find a basis of eigenvectors for the given matrix A. (b) Are the basis vectors orthogonal to one another? If not, what might be the problem?

3. (a) Find a basis of eigenvectors for the given symmetric matrix A. (b) Check that the basis vectors are orthogonal, and normalize them to yield an orthonormal basis.

4. For the given symmetric matrix A, find an orthonormal basis of eigenvectors.

5. Let A be a symmetric n × n matrix, and let P be any n × n matrix. Show that P^t A P is also symmetric.
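The normalization step described in this section can be sketched as follows, with an illustrative vector:

```python
import numpy as np

# Sketch of normalization: divide a nonzero vector by its length to get
# the unit vector pointing the same way. The vector is illustrative.
v = np.array([1.0, 1.0])
u = v / np.linalg.norm(v)              # u = v / |v|
length = np.linalg.norm(u)             # now 1
```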

## 2. Repeated Eigenvalues, The Gram-Schmidt Process

We now consider the case in which one or more eigenvalues of a real symmetric matrix A is a repeated root of the characteristic equation. It turns out that we can still find an orthonormal basis of eigenvectors, but it is a bit more complicated.

Example. Consider

    A = [ -1   1   1 ]
        [  1  -1   1 ]
        [  1   1  -1 ]

The characteristic equation is

    det [ -1-λ    1      1   ]
        [   1   -1-λ     1   ]  =  -(λ³ + 3λ² - 4)  =  0.
        [   1     1    -1-λ  ]

Using the method suggested in a previous chapter, we may find the roots of this equation by trying the factors of the constant term. The roots are λ = 1, which has multiplicity 1, and λ = -2, which has multiplicity 2.

For λ = 1, we need to reduce

    A - I = [ -2   1   1 ]
            [  1  -2   1 ]
            [  1   1  -2 ]

The general solution is x1 = x3, x2 = x3 with x3 free. A basic eigenvector is

    v1 = (1, 1, 1),

but we should normalize this by dividing it by |v1| = √3. This gives

    u1 = (1/√3)(1, 1, 1).

For λ = -2, the situation is more complicated. Reduce

    A + 2I = [ 1  1  1 ]
             [ 1  1  1 ]
             [ 1  1  1 ]

which yields the general solution x1 = -x2 - x3 with x2, x3 free. This gives basic eigenvectors

    v2 = (-1, 1, 0),   v3 = (-1, 0, 1).

Note that, as the general theory predicts, v1 is perpendicular to both v2 and v3. (The eigenvalues are different.) Unfortunately, v2 and v3 are not perpendicular to each other. However, with a little effort, this is easy to remedy. All we have to do is pick another basis for the subspace spanned by {v2, v3}. The eigenvectors with eigenvalue -2 are exactly the non-zero vectors in this subspace, so any basis will do as well. Hence, we arrange to pick a basis consisting of mutually perpendicular vectors.

It is easy to construct the new basis. Indeed, we need only replace one of the two vectors. Keep v2, and let v3' = v3 - c v2, where c is chosen so that

    v3' · v2 = v3 · v2 - c (v2 · v2) = 0,   i.e., take c = (v3 · v2)/(v2 · v2).

(See the diagram to get some idea of the geometry behind this calculation.) We have

    v3' = v3 - (1/2) v2 = (-1/2, -1/2, 1).

We should also normalize this basis by choosing

    u2 = v2/|v2| = (1/√2)(-1, 1, 0),   u3 = v3'/|v3'| = (1/√6)(-1, -1, 2).

Putting this all together, we see that

    u1 = (1/√3)(1, 1, 1),   u2 = (1/√2)(-1, 1, 0),   u3 = (1/√6)(-1, -1, 2)

form an orthonormal basis for R³ consisting of eigenvectors for A.
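The repair step above can be sketched directly, using vectors like those in the example (the reconstruction of the example's entries is an assumption):

```python
import numpy as np

# Sketch of the repair step: make v3 perpendicular to v2 by subtracting
# the component of v3 along v2, with c = (v3 . v2)/(v2 . v2). The two
# vectors span a plane of eigenvectors, as in the example.
v2 = np.array([-1.0, 1.0, 0.0])
v3 = np.array([-1.0, 0.0, 1.0])

c = (v3 @ v2) / (v2 @ v2)
v3_new = v3 - c * v2                   # still in the span of {v2, v3}
dot = v3_new @ v2                      # now 0
```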

### The Gram-Schmidt Process

In the preceding example, we used a special case of a more general algorithm in order to construct an orthonormal basis of eigenvectors. The algorithm, called the Gram-Schmidt Process, works as follows. Suppose {v1, v2, ..., vk} is a linearly independent set spanning a certain subspace W of R^n. We construct an orthonormal basis for W as follows. Let

    v1' = v1
    v2' = v2 - [(v2 · v1')/(v1' · v1')] v1'
    v3' = v3 - [(v3 · v1')/(v1' · v1')] v1' - [(v3 · v2')/(v2' · v2')] v2'
    ...
    vk' = vk - Σ (j = 1 to k-1) [(vk · vj')/(vj' · vj')] vj'

It is not hard to see that each new vk' is perpendicular to those constructed before it. For example,

    v3' · v1' = v3 · v1' - [(v3 · v1')/(v1' · v1')] (v1' · v1') - [(v3 · v2')/(v2' · v2')] (v2' · v1').

However, we may suppose that we already know that v2' · v1' = 0 (from the previous stage of the construction), so the above becomes

    v3' · v1' = v3 · v1' - v3 · v1' = 0.

The same argument works at each stage. It is also not hard to see that at each stage, replacing vj by vj' in {v1', v2', ..., v(j-1)', vj} does not change the subspace spanned by the set. Hence, for j = k, we conclude that {v1', v2', ..., vk'} is a basis for W consisting of mutually perpendicular vectors. Finally, to complete the process, simply divide each vj' by its length:

    uj = vj'/|vj'|.

Then {u1, ..., uk} is an orthonormal basis for W.
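The formulas above translate directly into code. A minimal sketch in NumPy, with illustrative input vectors; it orthogonalizes in order and normalizes at the end, exactly as described:

```python
import numpy as np

# A minimal sketch of the Gram-Schmidt Process: orthogonalize in order,
# then normalize at the end. The input vectors are illustrative.
def gram_schmidt(vectors):
    ortho = []
    for v in vectors:
        w = v.astype(float).copy()
        for u in ortho:
            # subtract the component of v along each earlier vector
            w -= (v @ u) / (u @ u) * u
        ortho.append(w)
    return [w / np.linalg.norm(w) for w in ortho]

basis = gram_schmidt([np.array([1.0, 1.0, 0.0]),
                      np.array([1.0, 0.0, 1.0]),
                      np.array([0.0, 1.0, 1.0])])
Q = np.column_stack(basis)
err = np.abs(Q.T @ Q - np.eye(3)).max()   # orthonormal means Q^t Q = I
```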

Example. Consider the subspace of R⁴ spanned by three given vectors v1, v2, v3. Applying the process, we set v1' = v1, then subtract from v2 its component along v1' to get v2', then subtract from v3 its components along v1' and v2' to get v3'. Normalizing, we get the orthonormal basis u1 = v1'/|v1'|, u2 = v2'/|v2'|, u3 = v3'/|v3'| for the subspace.

### The Principal Axis Theorem

The Principal Axis Theorem asserts that the process outlined above for finding mutually perpendicular eigenvectors always works.

If A is a real symmetric n × n matrix, there is always an orthonormal basis for R^n consisting of eigenvectors for A.

Here is a summary of the method. If the roots of the characteristic equation are all different, then all we need to do is find an eigenvector for each eigenvalue and, if necessary, normalize it by dividing by its length. If there are repeated roots, then it will usually be necessary to apply the Gram-Schmidt process to the set of basic eigenvectors obtained for each repeated eigenvalue.
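Numerically, the content of the theorem is visible in `np.linalg.eigh`, which returns an orthonormal basis of eigenvectors even when eigenvalues repeat. A sketch with an arbitrary symmetric matrix having a repeated eigenvalue:

```python
import numpy as np

# Sketch: np.linalg.eigh returns an orthonormal basis of eigenvectors for
# a symmetric matrix even when an eigenvalue repeats. This arbitrary
# example has eigenvalues 2, 2, 4.
A = np.array([[2.0, 0.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 1.0, 3.0]])
w, P = np.linalg.eigh(A)

orthonormal_err = np.abs(P.T @ P - np.eye(3)).max()
recon_err = np.abs(A @ P - P * w).max()   # A v_j = lambda_j v_j, column by column
```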

Exercises for Section 2

1. Apply the Gram-Schmidt Process to each of the given sets of vectors.

2. Find an orthonormal basis of eigenvectors for the given symmetric matrix A.

3. Find an orthonormal basis of eigenvectors for the given matrix A. Hint: the indicated value is an eigenvalue.

4. Let {v1, v2, v3} be a linearly independent set. Suppose {v1', v2', v3'} is the set obtained (before normalizing) by the Gram-Schmidt Process. (a) Explain why v2' is not zero. (b) Explain why v3' is not zero. The generalization of this to an arbitrary linearly independent set is one reason the Gram-Schmidt Process works: the vectors produced by that process are mutually perpendicular provided they are non-zero, and so they form a linearly independent set. Since they are in the subspace W spanned by the original set of vectors, and there are just enough of them, they must form a basis for W.

## 3. Change of Coordinates

As we have noted previously, it is probably a good idea to use a special basis like an orthonormal basis of eigenvectors. Any problem associated with the matrix A is likely to take a particularly simple form when expressed relative to such a basis. To study this in greater detail, we need to talk a bit more about changes of coordinates. Although the theory is quite general, we shall concentrate on some simple examples.

In R^n, the entries in a column vector x may be thought of as the coordinates x1, x2, ..., xn of the vector with respect to the standard basis. To simplify the algebra, let's concentrate on one specific n, say n = 3. In that case, we may make the usual identifications e1 = i, e2 = j, e3 = k for the elements of the standard basis. Suppose {v1, v2, v3} is another basis. The coordinates of x with respect to the new basis, call them x1', x2', x3', are defined by the relation

    (1)  x = v1 x1' + v2 x2' + v3 x3' = [v1 v2 v3] [x1'; x2'; x3'] = [v1 v2 v3] x'.

One way to view this relation is as a system of equations in which the old coordinates

    x = [x1; x2; x3]

are given, and we want to solve for the new coordinates

    x' = [x1'; x2'; x3'].

The coefficient matrix of this system, P = [v1 v2 v3], is called the change of basis matrix. Its columns are the old coordinates of the new basis vectors. The relation (1) may be rewritten

    (2)  x = P x'

and it may also be interpreted as expressing the old coordinates of a vector in terms of its new coordinates. This seems backwards, but it is easy to turn it around. Since the columns of P are linearly independent, P is invertible, and we may write instead

    (3)  x' = P⁻¹ x

where we express the new coordinates in terms of the old coordinates.

Example. Suppose in R² we pick a new set of coordinate axes by rotating each of the old axes through angle θ in the counterclockwise direction. Call the old coordinates (x1, x2) and the new coordinates (x1', x2'). According to the above discussion, the columns of the change of basis matrix P come from the old coordinates of the new basis vectors, i.e., of unit vectors along the new axes. From the diagram, these are

    [cos θ]        [-sin θ]
    [sin θ]  and   [ cos θ].

(Figure: the old axes and the new axes, with the new axes rotated through θ.)

Hence,

    [x1]   [cos θ  -sin θ] [x1']
    [x2] = [sin θ   cos θ] [x2'].

The change of basis matrix is easy to invert in this case. (Use the special rule which applies to 2 × 2 matrices.)

    [cos θ  -sin θ]⁻¹                1                [ cos θ  sin θ]   [ cos θ  sin θ]
    [sin θ   cos θ]     =    ───────────────────     [-sin θ  cos θ] = [-sin θ  cos θ]
                              cos²θ + sin²θ

(You could also have obtained this by using the matrix for rotation through angle -θ.) Hence, we may express the new coordinates in terms of the old coordinates through the relation

    [x1']   [ cos θ  sin θ] [x1]
    [x2'] = [-sin θ  cos θ] [x2].

For example, suppose θ = π/6. Then the new coordinates of a point are obtained from its original coordinates by applying

    [ √3/2   1/2 ]
    [ -1/2   √3/2].

### Orthogonal Matrices

You may have noticed that the matrix P obtained in the example has the property P⁻¹ = P^t. This is no accident. It is a consequence of the fact that its columns are mutually perpendicular unit vectors. Indeed:

The columns of an n × n matrix form an orthonormal basis for R^n if and only if its inverse is its transpose. An n × n real matrix with this property is called orthogonal.
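The rotation example can be sketched numerically, checking that the inverse of the change of basis matrix is its transpose (the sample point is arbitrary):

```python
import numpy as np

# Sketch of the rotation example: the columns of P are unit vectors along
# the rotated axes, and P^{-1} = P^t. The sample point is arbitrary.
theta = np.pi / 6
P = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

x_old = np.array([1.0, 6.0])
x_new = P.T @ x_old                    # new coordinates from old ones
back = P @ x_new                       # old coordinates recovered
err = np.abs(back - x_old).max()
```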

Example. Let

    P = (1/5) [ 3  -4 ]
              [ 4   3 ].

The columns of P are

    u1 = (1/5) [3; 4],   u2 = (1/5) [-4; 3],

and it is easy to check that these are mutually perpendicular unit vectors in R². To see that P⁻¹ = P^t, it suffices to show that P^t P = I. Of course, it is easy to see that this is true by direct calculation, but it may be more informative to write it out as follows:

    P^t P = [(u1)^t] [u1 u2] = [u1 · u1   u1 · u2]
            [(u2)^t]           [u2 · u1   u2 · u2]

where the entries in the product are exhibited as row by column dot products. The off-diagonal entries are zero because the vectors are perpendicular, and the diagonal entries are ones because the vectors are unit vectors. The argument for n × n matrices is exactly the same, except that there are more entries.

Note. The terminology is very confusing. The definition of an orthogonal matrix requires that the columns be mutually perpendicular and also that they be unit vectors. Unfortunately, the terminology reminds us of the former condition but not of the latter. It would have been better if such matrices had been named "orthonormal matrices" rather than "orthogonal matrices", but that is not how it happened, and we don't have the option of changing the terminology at this late date.
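The dot-product argument can be checked directly. A sketch, where the 3-4-5 entries are an assumed example of perpendicular unit columns:

```python
import numpy as np

# Sketch: perpendicular unit columns force P^t P = I. The 3-4-5 entries
# are an assumed example, not necessarily the text's.
P = np.array([[3.0, -4.0],
              [4.0,  3.0]]) / 5.0

product = P.T @ P                      # dot products of the columns
err = np.abs(product - np.eye(2)).max()
```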

### The Principal Axis Theorem Again

As we have seen, given a real symmetric n × n matrix A, the Principal Axis Theorem assures us that we can always find an orthonormal basis {v1, v2, ..., vn} for R^n consisting of eigenvectors for A. Let

    P = [v1 v2 ... vn]

be the corresponding change of basis matrix. As in Chapter II, we have

    A v1 = v1 λ1 = [v1 v2 ... vn] [λ1; 0; ...; 0]
    A v2 = v2 λ2 = [v1 v2 ... vn] [0; λ2; ...; 0]
    ...
    A vn = vn λn = [v1 v2 ... vn] [0; 0; ...; λn]

where some eigenvalues λj for different eigenvectors might be repeated. These equations can be written in a single matrix equation

    A [v1 v2 ... vn] = [v1 v2 ... vn] D,   i.e.,   AP = PD,

where D is a diagonal matrix with the eigenvalues (possibly repeated) on the diagonal. This may also be written

    (4)  P⁻¹ A P = D.

Since we have insisted that the basic eigenvectors form an orthonormal basis, the change of basis matrix P is orthogonal, and we have P⁻¹ = P^t. Hence, (4) can be written in the alternate form

    (5)  P^t A P = D,   with P orthogonal.

Example. Let

    A = [7  4]
        [4  7].

The characteristic equation of A turns out to be λ² - 14λ + 33 = 0, so the eigenvalues are λ = 11 and λ = 3. Calculation shows that an orthonormal basis of eigenvectors is formed by

    u1 = (1/√2) [1; 1]  for λ = 11   and   u2 = (1/√2) [-1; 1]  for λ = 3.

Hence, we may take P to be the orthogonal matrix

    P = (1/√2) [1  -1]
               [1   1].

The reader should check in this case that

    P^t A P = [11  0]
              [ 0  3].
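A sketch verifying P^t A P = D numerically; the matrix is an assumed reading of the example's entries 7 and 4:

```python
import numpy as np

# Sketch: with P built from an orthonormal basis of eigenvectors,
# P^t A P is the diagonal matrix of eigenvalues. The matrix is an
# assumed reading of the example's entries 7 and 4.
A = np.array([[7.0, 4.0], [4.0, 7.0]])
eigvals, P = np.linalg.eigh(A)

D = P.T @ A @ P
off_diag = abs(D[0, 1]) + abs(D[1, 0])           # should vanish
diag_err = np.abs(np.diag(D) - eigvals).max()    # diagonal holds the eigenvalues
```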

## Appendix A: Proof of the Principal Axis Theorem

The following section outlines how the Principal Axis Theorem is proved, for the very few of you who may insist on seeing it. It is not necessary for what follows.

In view of the previous discussions, we can establish the Principal Axis Theorem by showing that there is an orthogonal n × n matrix P such that

    (6)  AP = PD,   or equivalently   P^t A P = D,

where D is a diagonal matrix with the eigenvalues of A (possibly repeated) on its diagonal.

The method is to proceed by induction on n. If n = 1 there really isn't anything to prove. (Take P = [1].) Suppose the theorem has been proved for (n-1) × (n-1) matrices. Let u1 be a unit eigenvector for A with eigenvalue λ1. Consider the subspace W consisting of all vectors perpendicular to u1. It is not hard to see that W is an (n-1)-dimensional subspace. Choose (by the Gram-Schmidt Process) an orthonormal basis {w2, w3, ..., wn} for W. Then {u1, w2, ..., wn} is an orthonormal basis for R^n, and

    A u1 = u1 λ1 = [u1 w2 ... wn] [λ1; 0; ...; 0],   where we set P1 = [u1 w2 ... wn].

This gives the first column of A P1, and we want to say something about its remaining columns A w2, A w3, ..., A wn. To this end, note that if w is any vector in W, then Aw is also a vector in W. For we have

    u1 · (Aw) = (u1)^t A w = (u1)^t A^t w = (A u1)^t w = λ1 ((u1)^t w) = λ1 (u1 · w) = 0,

which is to say, Aw is perpendicular to u1 if w is perpendicular to u1. It follows that each A wj is a linear combination just of w2, w3, ..., wn, i.e.,

    A wj = [u1 w2 ... wn] [0; *; ...; *]

where * denotes some unspecified entry. Putting this all together, we see that

    A P1 = P1 [λ1  0 ]
              [ 0  A']

where A' is an (n-1) × (n-1) matrix. P1 is orthogonal (since its columns form an orthonormal basis), so P1^t A P1 = [λ1 0; 0 A'], and it is not hard to derive from this the fact that P1^t A P1 is symmetric. Because of the structure of this matrix, that implies that A' is symmetric. Hence, by induction we may assume there is an (n-1) × (n-1) orthogonal matrix P2 such that A' P2 = P2 D' with D' diagonal. It follows that

    [λ1  0 ] [1  0 ]   [1  0 ] [λ1  0 ]
    [ 0  A'] [0  P2] = [0  P2] [ 0  D'],

so, setting

    P = P1 [1  0 ]          D = [λ1  0 ],
           [0  P2]   and        [ 0  D']

we get

    AP = A P1 [1 0; 0 P2] = P1 [λ1 0; 0 A'] [1 0; 0 P2] = P1 [1 0; 0 P2] [λ1 0; 0 D'] = PD.

Note that D is diagonal, and P is orthogonal, since a product of orthogonal matrices is orthogonal (see the Exercises). This completes the proof.

There is one subtle point involved in the above proof. We have to know that a real symmetric n × n matrix has at least one real eigenvalue. This follows from the fact, alluded to earlier, that the roots of the characteristic equation for such a matrix are necessarily real. Since the equation does have a root, that root is the desired eigenvalue.

Exercises for Section 3

1. Find the change of basis matrix for a rotation through (a) a given angle in the counterclockwise direction and (b) the same angle in the clockwise direction.

2. Let P(θ) be the matrix for rotation of axes through θ. Show that P(-θ) = P(θ)^t = P(θ)⁻¹.

3. An inclined plane makes a given angle with the horizontal. Change to a coordinate system with x1' axis parallel to the inclined plane and x2' axis perpendicular to it. Use the change of variables formula derived in the section to find the components of the gravitational acceleration vector -gj in the new coordinate system. Compare this with what you would get by direct geometric reasoning.

4. For the given symmetric matrix A, find an orthogonal matrix P such that P^t A P is diagonal. What are the diagonal entries?

5. For the given 3 × 3 symmetric matrix A, find an orthogonal matrix P such that P^t A P is diagonal. What are the diagonal entries?

6. Show that the product of two orthogonal matrices is orthogonal. How about the inverse of an orthogonal matrix?

7. The columns of an orthogonal matrix are mutually perpendicular unit vectors. Is the same thing true of the rows? Explain.

## 4. Classification of Conics and Quadrics

The Principal Axis Theorem derives its name from its relation to classifying conics, quadric surfaces, and their higher dimensional analogues. The general quadratic equation

    ax² + bxy + cy² + dx + ey = f

(with enough of the coefficients non-zero) defines a curve in the plane. Such a curve is generally an ellipse, a hyperbola, or a parabola, all of which are called conics, or two lines, which is considered a degenerate case. (See the Appendix to this section for a review.) Examples:

    x² + y² - 4 = 0
    x² - y² = 1
    x² - xy + y² = 1
    x² + xy - y² + x - y = 0

If the linear terms are not present (d = e = 0 and f ≠ 0), we call the curve a central conic. It turns out to be an ellipse or hyperbola (but its axes of symmetry may not be the coordinate axes) or a pair of lines in the degenerate case. Parabolas can't be obtained this way.

In this section, we shall show how to use linear algebra to classify such central conics and higher dimensional analogues, such as quadric surfaces in R³. Once you understand the central case, it is fairly easy to reduce the general case to that. (You just use completion of squares to get rid of the linear terms, in the same way that you identify a circle with center not at the origin from its equation.)

In order to apply the linear algebra we have studied, we adopt a more systematic notation, using subscripted variables x1, x2 instead of x, y. Consider the central conic defined by the equation

    f(x) = a11 x1² + 2 a12 x1 x2 + a22 x2² = C.

(The reason for the 2 will be clear shortly.) It is more useful to express the function f as follows:

    f(x) = (x1 a11 + x2 a21) x1 + (x1 a12 + x2 a22) x2
         = x1 (a11 x1 + a12 x2) + x2 (a21 x1 + a22 x2),

where we have introduced a21 = a12. The above expression may also be written in matrix form

    f(x) = Σ (j,k = 1 to 2) xj ajk xk = x^t A x,

where A is the symmetric matrix of coefficients. Note what has happened to the coefficients. The coefficients of the squares appear on the diagonal of A, while the coefficient of the cross term, b x1 x2, is divided into two equal parts. Half of it appears as b/2 in the 1,2 position (corresponding to the product x1 x2), while the other half appears as b/2 in the 2,1 position (corresponding to the product x2 x1, which of course equals x1 x2). So it is clear why the matrix is symmetric.

This may be generalized to n > 2 in a rather obvious manner. Let A be a real symmetric n × n matrix, and define

    f(x) = Σ (j,k = 1 to n) xj ajk xk = x^t A x.

For n = 3 this may be written explicitly

    f(x) = (x1 a11 + x2 a21 + x3 a31) x1 + (x1 a12 + x2 a22 + x3 a32) x2 + (x1 a13 + x2 a23 + x3 a33) x3
         = a11 x1² + a22 x2² + a33 x3² + 2 a12 x1 x2 + 2 a13 x1 x3 + 2 a23 x2 x3.

The rule for forming the matrix A from the equation for f(x) is the same as in the 2 × 2 case. The coefficients of the squares are put on the diagonal. The
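The rule can be sketched for n = 2 with an illustrative form f(x) = x1² + 4 x1 x2 + x2²: squares on the diagonal, the cross coefficient 4 split into 2 and 2.

```python
import numpy as np

# Sketch of the rule for n = 2 on an illustrative form
# f(x) = x1^2 + 4 x1 x2 + x2^2: squares on the diagonal,
# the cross coefficient 4 split into 2 and 2.
A = np.array([[1.0, 2.0],
              [2.0, 1.0]])

x = np.array([3.0, -1.0])                        # arbitrary point
quad_direct = x[0]**2 + 4*x[0]*x[1] + x[1]**2    # evaluate f directly
quad_matrix = x @ A @ x                          # evaluate as x^t A x
```

Both evaluations agree at every point, which is the content of f(x) = x^t A x.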

coefficient of a cross term involving xi xj is split in half, with one half put in the i,j position and the other half put in the j,i position.

The level set defined by f(x) = C is called a central hyperquadric. It should be visualized as an (n-1)-dimensional curved object in R^n. For n = 3 it will be an ellipsoid or a hyperboloid (of one or two sheets) or perhaps a degenerate quadric like a cone. (As in the case of conics, we must also allow linear terms to encompass paraboloids.)

If the above descriptions are accurate, we should expect the locus of the equation f(x) = C to have certain axes of symmetry, which we shall call its principal axes. It turns out that these axes are determined by an orthonormal basis of eigenvectors for the coefficient matrix A. To see this, suppose {u1, u2, ..., un} is such a basis and P = [u1 u2 ... un] is the corresponding orthogonal matrix. By the Principal Axis Theorem, P^t A P = D is diagonal, with the eigenvalues λ1, λ2, ..., λn of A appearing on the diagonal. Make the change of coordinates x = P x', where x represents the old coordinates and x' represents the new coordinates. Then

    f(x) = x^t A x = (P x')^t A (P x') = (x')^t P^t A P x' = (x')^t D x'.

Since D is diagonal, the quadratic expression on the right has no cross terms, i.e.,

    (x')^t D x' = λ1 (x1')² + λ2 (x2')² + ... + λn (xn')².

In the new coordinates, the equation takes the form

    λ1 (x1')² + λ2 (x2')² + ... + λn (xn')² = C,

and its graph is usually quite easy to describe.

Example. We shall investigate the conic

    f(x, y) = x² + 4xy + y² = 1.

First rewrite the equation

    [x y] [1  2] [x] = 1.
          [2  1] [y]

(Note how the 4 was split into two symmetrically placed 2s.) Next, find the eigenvalues of the coefficient matrix by solving

    det [1-λ   2 ] = (1 - λ)² - 4 = λ² - 2λ - 3 = 0.
        [ 2   1-λ]

This equation is easy to factor, and the roots are λ = 3 and λ = -1. For λ = 3, to find the eigenvectors, we need to solve (A - 3I) v = 0.

Reduction of the coefficient matrix A - 3I yields the general solution v1 = v2 with v2 free. A basic normalized eigenvector is

    u1 = (1/√2) [1; 1].

For λ = -1, a similar calculation (which you should make) yields the basic normalized eigenvector

    u2 = (1/√2) [-1; 1].

(Note that u1 ⊥ u2, as expected.) From this we can form the corresponding orthogonal matrix P and make the change of coordinates

    [x]     [x']
    [y] = P [y'],

and, according to the above analysis, the equation of the conic in the new coordinate system is

    3(x')² - (y')² = 1.

It is clear that this is a hyperbola with principal axes pointing along the new axes.

(Figure: the hyperbola, with the new x', y' axes rotated relative to the old x, y axes.)

Example. Consider the quadric surface defined by

    x1² + x2² + x3² - 2 x1 x3 = 1.

We take

    f(x) = x1² + x2² + x3² - 2 x1 x3 = [x1 x2 x3] [ 1  0 -1] [x1]
                                                  [ 0  1  0] [x2]
                                                  [-1  0  1] [x3].

Note how the coefficient -2 of x1 x3 was split into two equal parts, a -1 in the 1,3 position and a -1 in the 3,1 position. The coefficients of the other cross terms were zero. As usual, the coefficients of the squares were put on the diagonal. The characteristic equation of the coefficient matrix is

    det [1-λ   0   -1 ] = (1 - λ)³ - (1 - λ) = -(λ - 2)(λ - 1)λ = 0.
        [ 0   1-λ   0 ]
        [-1    0   1-λ]

Thus, the eigenvalues are λ = 2, 1, 0.

For λ = 2, reduce A - 2I to obtain the general solution x1 = -x3, x2 = 0 with x3 free. Thus,

    v1 = (-1, 0, 1)

is a basic eigenvector for λ = 2, and

    u1 = (1/√2)(-1, 0, 1)

is a basic unit eigenvector.

Similarly, for λ = 1, reduce A - I, which yields x1 = x3 = 0 with x2 free. Thus a basic unit eigenvector for λ = 1 is

    u2 = (0, 1, 0).

Finally, for λ = 0, reduce A. This yields x1 = x3, x2 = 0 with x3 free. Thus, a basic unit eigenvector for λ = 0 is

    u3 = (1/√2)(1, 0, 1).

The corresponding orthogonal change of basis matrix is

    P = [u1 u2 u3] = [-1/√2   0   1/√2]
                     [  0     1    0  ]
                     [ 1/√2   0   1/√2].

Moreover, putting x = P x', we can express the equation of the quadric surface in the new coordinate system:

    2(x1')² + (x2')² + 0(x3')² = 2(x1')² + (x2')² = 1.

Thus it is easy to see what this quadric surface is: an elliptical cylinder perpendicular to the x1', x2' plane. (This is one of the degenerate cases.) The three principal axes in this case are the two axes of the ellipse in the x1', x2' plane and the x3' axis, which is the central axis of the cylinder.

(Figure: the cylinder, tilted relative to the original axes; the new axes are labelled.)

Representing the graph in the new coordinates makes it easy to understand its geometry. Suppose, for example, that we want to find the points on the graph which are closest to the origin. These are the points at which the x1'-axis intersects the surface, i.e., the points with new coordinates x1' = ±1/√2, x2' = x3' = 0. If you want the coordinates of these points in the original coordinate system, use the change of coordinates formula x = P x'. Thus, the old coordinates of the minimum point with new coordinates (1/√2, 0, 0) are given by

    P [1/√2; 0; 0] = (-1/2, 0, 1/2).
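A sketch checking that the change of coordinates removes the cross terms; the matrix assumes the reconstruction f(x) = x1² + x2² + x3² - 2 x1 x3 of the quadric example:

```python
import numpy as np

# Sketch: in the coordinates x' = P^t x, a quadratic form loses its cross
# terms. A assumes the reconstruction f(x) = x1^2 + x2^2 + x3^2 - 2 x1 x3.
A = np.array([[ 1.0, 0.0, -1.0],
              [ 0.0, 1.0,  0.0],
              [-1.0, 0.0,  1.0]])
eigvals, P = np.linalg.eigh(A)

x = np.array([0.3, -1.2, 2.0])         # arbitrary point
xp = P.T @ x                            # new coordinates
f_old = x @ A @ x
f_new = (eigvals * xp**2).sum()         # lambda_1 x1'^2 + lambda_2 x2'^2 + ...
gap = abs(f_old - f_new)
```

The eigenvalue 0 is what makes the surface a cylinder: the form does not depend on the corresponding new coordinate.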

## Appendix: A Review of Conics and Quadrics

You are probably familiar with certain graphs when they arise in standard configurations. In two dimensions, the central conics have equations of the form

    ± x²/a² ± y²/b² = 1.

If both signs are +, the conic is an ellipse. If one sign is + and one is -, then the conic is a hyperbola. The + goes with the axis which crosses the hyperbola. Some examples are sketched below.

(Figure: an ellipse and a hyperbola.)

Two - signs result in an empty graph, i.e., there are no points satisfying the equation.

Parabolas arise from equations of the form

    y = px²   with p ≠ 0.

(Figure: parabolas.)

For x = py², the parabola opens along the positive or negative x-axis. There are also some degenerate cases. For example,

    x²/a² - y²/b² = 0

defines two lines which intersect at the origin.

In three dimensions, the central quadrics have equations of the form

    ± x²/a² ± y²/b² ± z²/c² = 1.

One important degenerate case is

    x²/a² + y²/b² - z²/c² = 0.

Its graph is a double cone with elliptical cross sections. Another would be

    ± x²/a² ± y²/b² = 1

with at least one + sign. Its graph is a cylinder perpendicular to the x, y-plane. The cross sections are ellipses or hyperbolas, depending on the combination of signs.

(Figure: a cone and a cylinder.)

Exercises for Section 4

1. Find the principal axes and classify the central conic x² + xy + y² = 1.

2. Identify the conic defined by x² + 4xy + y² = 4. Find its principal axes, and find the points closest and furthest (if any) from the origin.

3. Identify the conic defined by x² + 7xy + y² = 1. Find its principal axes, and find the points closest and furthest (if any) from the origin.

4. Find the principal axes and classify the central quadric defined by x² - y² + z² - 4xy - 4yz = 1.

5. (Optional) Classify the surface defined by x² + y² + z² + xy + yz - z = 0. Hint: This is not a central quadric. To classify it, first apply the methods of the section to the quadratic expression x² + y² + z² + xy + yz to find a new coordinate system in which this expression has the form λ1 x'² + λ2 y'² + λ3 z'². Use the change of coordinates formula to express z in terms of x', y', and z', and then complete squares to eliminate all linear terms. At this point, it should be clear what the surface is.

## 5. Conics and the Method of Lagrange Multipliers

There is another approach to finding the principal axes of a conic, quadric, or hyperquadric. Consider for example an ellipse in R² centered at the origin. One of the principal axes intersects the conic in the two points at greatest distance from the origin, and the other intersects it in the two points at least distance from the origin. Similarly, two of the three principal axes of a central ellipsoid in R³ may be obtained in this way. Thus, if we didn't know about eigenvalues and eigenvectors, we might try to find the principal axes by maximizing (or minimizing) the function giving the distance to the origin, subject to the quadratic equation defining the conic or quadric. In other words, we need to minimize a function given a constraint among the variables. Such problems are solved by the method of Lagrange multipliers, which you learned in your multidimensional calculus course. Here is a review of the method.

Suppose we want to maximize (minimize) the real valued function f(x) = f(x1, x2, ..., xn) subject to the constraint g(x) = g(x1, x2, ..., xn) = c. For n = 2, this has a simple geometric interpretation. The locus of the equation g(x1, x2) = c is a level curve of the function g, and we want to maximize (minimize) the function f on that curve. Similarly, for n = 3, the level set g(x1, x2, x3) = c is a surface in R³, and we want to maximize (minimize) f on that surface.

(Figure: for n = 2, a level curve in the plane; for n = 3, a level surface in space.)

Examples. Maximize f(x, y) = x + y on the ellipse g(x, y) = x² + 4y² = 1. (This is easy if you draw the picture.) Minimize f(x, y, z) = x² + xy + y² + xz - 4z on the sphere g(x, y, z) = x² + y² + z² = 1. Minimize f(x, y, z, t) = x + y + z - t on the hypersphere g(x, y, z, t) = x² + y² + z² + t² = 1.

We shall concentrate on the case of n = 3 variables, but the reasoning for any n is similar. We want to maximize (or minimize) f(x) on a level surface g(x) = c in R³, where as usual we abbreviate x = (x1, x2, x3). At any point x on the level surface at which such an extreme value is obtained, we must have

    (1)  ∇f(x) = λ ∇g(x)   for some scalar λ.

(Figure: at a maximum point, ∇f is parallel to ∇g; at other points it need not be.)

(1) is a necessary condition which must hold at the relevant points. (It doesn't by itself guarantee that there is a maximum or a minimum at the point. There could be no extreme value at all at the point.) In deriving this condition, we assume implicitly that the level surface is smooth and has a well defined normal vector ∇g, and that the function f is also smooth. If these conditions are violated at some point, that point could also be a candidate for a maximum or minimum.

Taking components, we obtain 3 scalar equations for the 4 variables x1, x2, x3, λ. We would not expect, even in the best of circumstances, to get a unique solution from this, but the defining equation for the level surface, g(x) = c, provides a 4th equation. We still won't generally get a unique solution, but we will usually get at most a finite number of possible solutions. Each of these can be examined further to see if f attains a maximum (or minimum) at that point in the level set. Notice that the variable λ plays an auxiliary role, since we really only want the coordinates of the point x. (In some applications, λ has some significance beyond that.) λ is called a Lagrange multiplier.

The method of Lagrange multipliers often leads to a set of equations which is difficult to solve. However, in the case of quadratic functions f, there is a typical pattern which emerges.

Example. Suppose we want to minimize the function f(x, y) = x² + 4xy + y² on the circle x² + y² = 1. For this problem n = 2, and the level set is a curve. Take g(x, y) = x² + y². Then

    ∇f = (2x + 4y, 4x + 2y),   ∇g = (2x, 2y),

and ∇f = λ ∇g yields the equations

    2x + 4y = λ(2x)
    4x + 2y = λ(2y)

to which we add

    x² + y² = 1.

After canceling a common factor of 2, the first two equations may be written in matrix form

    [1  2] [x]     [x]
    [2  1] [y] = λ [y]

which says that (x, y) is an eigenvector for the eigenvalue λ, and the equation x² + y² = 1 says it is a unit eigenvector. You should know how to solve such problems, and we leave it to you to make the required calculations. (See also the example in the previous section, where we made these calculations in another context.) The eigenvalues are λ = 3 and λ = -1.

For λ = 3, a basic unit eigenvector is

    u1 = (1/√2) [1; 1],

and every other eigenvector is of the form c u1. The latter will be a unit vector if and only if |c| = 1, i.e., c = ±1. We conclude that λ = 3 yields two solutions of the Lagrange multiplier problem: (1/√2, 1/√2) and (-1/√2, -1/√2). At each of these points,

    f(x, y) = x² + 4xy + y² = 3.

For λ = -1, we obtain the basic unit eigenvector

    u2 = (1/√2) [-1; 1],

and a similar analysis (which you should do) yields the two points (-1/√2, 1/√2) and (1/√2, -1/√2). At each of these points,

    f(x, y) = x² + 4xy + y² = -1.

(Figure: the unit circle with the two maximum points and the two minimum points marked.)

Hence, the function attains its maximum value at the first two points and its minimum value at the second two.
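The conclusion is easy to confirm numerically: sampling f(x, y) = x² + 4xy + y² around the unit circle, its extreme values approach the eigenvalues 3 and -1.

```python
import numpy as np

# Numerical sketch: sampling f(x, y) = x^2 + 4xy + y^2 around the unit
# circle, the extreme values approach the eigenvalues 3 and -1.
A = np.array([[1.0, 2.0], [2.0, 1.0]])
eigvals = np.linalg.eigvalsh(A)                      # [-1, 3]

angles = np.linspace(0.0, 2 * np.pi, 10001)
circle = np.stack([np.cos(angles), np.sin(angles)])  # unit vectors, one per column
values = (circle * (A @ circle)).sum(axis=0)         # x^t A x at each sample

max_gap = abs(values.max() - eigvals.max())
min_gap = abs(values.min() - eigvals.min())
```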

Example. Suppose we want to minimize the function g(x, y) = x² + y² (which is the square of the distance to the origin) on the conic

    f(x, y) = x² + 4xy + y² = 1.

Note that this is basically the same as the previous example except that the roles of the two functions are reversed. The Lagrange multiplier condition ∇g = λ∇f is the same as the condition ∇f = (1/λ)∇g, provided λ ≠ 0. (λ ≠ 0 in this case, since otherwise ∇g = 0, which yields x = y = 0. However, (0, 0) is not a point on the conic.) We just solved that problem and found the eigenvalues 1/λ = 3 or 1/λ = −1.

In this case, we don't need unit eigenvectors, so to avoid square roots we choose basic eigenvectors

    v₁ = [ 1 ; 1 ]   and   v₂ = [ 1 ; −1 ]

corresponding respectively to the eigenvalues 3 and −1. The endpoint of v₁ does not lie on the conic, but any other eigenvector for the eigenvalue 3 is of the form c v₁, so all we need to do is adjust c so that the point satisfies the equation f(x, y) = x² + 4xy + y² = 1. Substituting (x, y) = (c, c) yields 6c² = 1, or c = ±1/√6. Thus, we obtain the two points (1/√6, 1/√6) and (−1/√6, −1/√6).

For the eigenvalue −1, substituting (x, y) = (c, −c) in the equation yields −2c² = 1, which has no solutions. Thus, the only candidates for a minimum (or maximum) are the first pair of points: (1/√6, 1/√6) and (−1/√6, −1/√6). A simple calculation shows these are both 1/√3 units from the origin, but without further analysis, we can't tell if this is the maximum, the minimum, or neither. However, it is not hard to classify this conic (see the previous section) and discover that it is a hyperbola. Hence, the two points are minimum points.

The Rayleigh-Ritz Method. The Example above is typical of a certain class of Lagrange multiplier problems. Let A be a real symmetric n × n matrix, and consider the problem of maximizing (minimizing) the quadratic function f(x) = xᵗAx subject to the constraint g(x) = |x|² = 1. This is called the Rayleigh-Ritz problem. For n = 2 or n = 3, the level set |x|² = 1 is a circle or sphere, and for n > 3, it is called a hypersphere. Alternately, we could reverse the roles of the functions f and g, i.e.,
we could try to maximize (minimize) the square of the distance to the origin g(x) = |x|² on the level set f(x) = 1. Because the Lagrange multiplier condition in either case asserts that the two gradients ∇f and ∇g are parallel, these two problems are very closely related. The latter problem (finding the points on a conic, quadric, or hyperquadric furthest from, or closest to, the origin) is easier to visualize, but the former problem (maximizing or minimizing the quadratic function f on the hypersphere |x|² = 1) is easier to compute with.

Let's go about applying the Lagrange multiplier method to the Rayleigh-Ritz problem. The components of ∇g are easy:

    ∂g/∂x_i = 2x_i,   i = 1, 2, ..., n.

The calculation of ∇f is harder. First write

    f(x) = Σ_{j=1}^{n} x_j ( Σ_{k=1}^{n} a_{jk} x_k )

and then carefully apply the product rule together with a_{jk} = a_{kj}. The result is

    ∂f/∂x_i = 2 Σ_{j=1}^{n} a_{ij} x_j,   i = 1, 2, ..., n.

(Work this out explicitly in the cases n = 2 and n = 3 if you don't believe it.) Thus, the Lagrange multiplier condition ∇f = λ∇g yields the equations

    2 Σ_{j=1}^{n} a_{ij} x_j = λ(2x_i),   i = 1, 2, ..., n,

which may be rewritten in matrix form (after canceling the 2's)

    Ax = λx.

To this we must add the equation of the level set g(x) = |x|² = 1. Thus, any potential solution x is a unit eigenvector for the matrix A with eigenvalue λ. Note also that for such a unit eigenvector, we have

    f(x) = xᵗAx = xᵗ(λx) = λ xᵗx = λ |x|² = λ.

Thus the eigenvalue is the extreme value of the quadratic function at the point on the (hyper)sphere given by the unit eigenvector.

The upshot of this discussion is that for a real symmetric matrix A, the Rayleigh-Ritz problem is equivalent to the problem of finding an orthonormal basis of eigenvectors for A.

The Rayleigh-Ritz method may be used to show that the characteristic equation of a real symmetric matrix has only real roots. This was an issue left unresolved in our earlier discussions. Here is an outline of the argument. The hypersphere g(x) = |x|² = 1 is a closed bounded set in Rⁿ for any n. It follows from a basic theorem in analysis that any continuous function, in particular the quadratic function f(x), must attain both maximum and minimum values on the hypersphere. Hence, the Lagrange multiplier problem always has solutions, which by the above algebra amounts to the assertion that the real symmetric matrix A must have at least one real eigenvalue.

This suggests a general procedure for showing that all the eigenvalues are real. First find the largest eigenvalue by maximizing the quadratic function f(x) on the set |x|² = 1. Let x = u₁ be the corresponding eigenvector. Change coordinates by choosing an orthonormal basis starting with u₁. Then the additional basis elements will span the subspace perpendicular to u₁, and we may obtain a lower dimensional quadratic
function by restricting f to that subspace. We can now repeat the process to find the next smaller real eigenvalue. Continuing in this way, we will obtain an orthonormal basis of eigenvectors for A, and each of the corresponding eigenvalues will be real.
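The compactness argument above can be illustrated numerically: on the unit sphere, the quadratic form xᵗAx never leaves the interval between the smallest and largest eigenvalues. A small sketch (the matrix A here is an arbitrary symmetric example, not one from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

# An arbitrary real symmetric matrix (illustrative choice)
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

vals = np.linalg.eigvalsh(A)  # eigenvalues in ascending order

# Sample many unit vectors and evaluate the quadratic form at each;
# Rayleigh-Ritz says the values lie between min and max eigenvalue.
samples = rng.normal(size=(10000, 3))
samples /= np.linalg.norm(samples, axis=1, keepdims=True)
fs = np.einsum('ij,jk,ik->i', samples, A, samples)

print(vals.min() <= fs.min(), fs.max() <= vals.max())  # True True
```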

Exercises for Section 5

1. Find the maximum and minimum values of the function f(x, y) = x² + y² given the constraint x² + xy + y² = 1.

2. Find the maximum and/or minimum value of f(x, y, z) = x² − y² + z² − 4xy − 4yz subject to x² + y² + z² = 1.

3. (Optional) The derivation of the Lagrange multiplier condition ∇f = λ∇g assumes that ∇g ≠ 0, so there is a well-defined tangent plane at the potential maximum or minimum point. However, a maximum or minimum could occur at a point where ∇g = 0, so all such points should also be checked. (Similarly, either f or g might fail to be smooth at a maximum or minimum point.) With these remarks in mind, find where f(x, y, z) = x² + y² + z² attains its minimum value subject to the constraint g(x, y, z) = x² + y² − z² = 0.

4. Consider, as in the first Example of this section, the problem of maximizing f(x, y) = x² + 4xy + y² given the constraint x² + y² = 1. This is equivalent to maximizing F(x, y) = xy on the circle x² + y² = 1. (Why?) Draw a diagram showing the circle and selected level curves F(x, y) = c of the function F. Can you see why F(x, y) attains its maximum at (1/√2, 1/√2) and (−1/√2, −1/√2) without using any calculus?
Hint: consider how the level curves of F intersect the circle, and decide from that where F is increasing and where it is decreasing on the circle.

6 Normal Modes

Eigenvalues and eigenvectors are an essential tool in solving systems of linear differential equations. We leave an extended treatment of this subject for a course in differential equations, but it is instructive to consider an interesting class of vibration problems that have many important scientific and engineering applications.

We start with some elementary physics you may have encountered in a physics class. Imagine an experiment in which a small car is placed on a track and connected to a wall through a stiff spring. With the spring in its rest position, the car will just sit there forever, but if the car is pulled away from the wall a small distance and then released, it will oscillate back and forth about its rest position. If we assume the track is so well greased that we can ignore friction, this oscillation will in principle continue forever.

[Figure: a car of mass m on a track, attached to a wall by a spring with spring constant k; x measures the displacement from equilibrium.]

We want to describe this situation symbolically. Let x denote the displacement of the car from equilibrium, and suppose the car has mass m. Hooke's Law tells

us that there is a restoring force of the form F = −kx, where k is a constant called the spring constant. Newton's second law relating force and acceleration tells us

    m d²x/dt² = −kx.

This is also commonly written

    d²x/dt² + (k/m) x = 0.

You may have learned how to solve this differential equation in a previous course, but in this particular case, it is not really necessary. From the physical characteristics of the solution, we can pretty much guess what it should look like:

    x = A cos(ωt)

where A is the amplitude of the oscillation and ω is determined by the frequency or rapidity of the oscillation. It is usually called the angular frequency, and it is related to the actual frequency f by the equation ω = 2πf. A is determined by the size of the initial displacement; it gives the maximum displacement attained as the car oscillates. ω, however, is determined by the spring constant. To see how, just substitute x = A cos(ωt) into the differential equation. We get

    m(−ω²A cos(ωt)) = −kA cos(ωt),

which after canceling common factors yields

    mω² = k   or   ω = √(k/m).

The above discussion is a bit simplified. We could not only have initially displaced the car from rest, but we could also have given it an initial shove or velocity. In that case, the maximal displacement would be shifted in time. The way to describe this symbolically is

    x = A cos(ωt + δ)

where δ is called the phase shift. This complication does not change the basic character of the problem, since it is usually the fundamental vibration of the system that we are interested in, and that turns out to be the same if we include a possible phase shift.

We now want to generalize this to more than one mass connected by several springs. This may seem a bit bizarre, but it is just a model for situations commonly met in scientific applications. For example, in chemistry, one often needs to determine the basic vibrations of a complex molecule. The molecule consists of atoms connected by interatomic forces. As a first approximation, we may treat the atoms as point masses and
the forces between them as linear restoring forces from equilibrium positions. Thus the mass-spring model may tell us something useful about real problems.
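Before moving on, the single-oscillator solution can be checked numerically. The sketch below (with arbitrary illustrative values of m, k, and A) verifies by finite differences that x(t) = A cos(ωt) with ω = √(k/m) satisfies m x″ = −kx:

```python
import math

# Arbitrary illustrative values of mass, spring constant, and amplitude
m, k, A = 2.0, 8.0, 0.5
omega = math.sqrt(k / m)  # angular frequency, = 2.0 here

def x(t):
    return A * math.cos(omega * t)

h = 1e-5  # step for the central-difference second derivative
for t in (0.0, 0.3, 1.7):
    xpp = (x(t + h) - 2 * x(t) + x(t - h)) / h**2  # approximates x''(t)
    # the residual m x'' + k x should vanish up to discretization error
    print(abs(m * xpp + k * x(t)) < 1e-4)  # True for each t
```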

Example. Consider the configuration of masses and springs indicated below, where m is the common mass of the two particles and k is the common spring constant of the three springs.

[Figure: wall, spring k, mass m, spring k, mass m, spring k, wall; x₁ and x₂ are the displacements of the two masses.]

Look at the first mass. When it is displaced a distance x₁ to the right from equilibrium, it will be acted upon by two forces. Extension of the spring on the left will pull it back with force −kx₁. At the same time, the spring in the middle will push or pull it depending on whether it is compressed or stretched. If x₂ is the displacement of the second mass from equilibrium, the change in length of the second spring will be x₁ − x₂, so the force on the first mass will be −k(x₁ − x₂). This yields a total force of

    −kx₁ − k(x₁ − x₂) = −2kx₁ + kx₂.

A similar analysis works for the second mass. Thus, we obtain the system of differential equations

    m d²x₁/dt² = −2kx₁ + kx₂
    m d²x₂/dt² = kx₁ − 2kx₂.

The system may also be rewritten in matrix form

    m d²x/dt² = [ −2k    k ] x      where   x = [ x₁ ]
                [   k  −2k ]                    [ x₂ ].

Note that the matrix on the right is a symmetric matrix. This is always the case in such problems. It is an indirect consequence of Newton's third law, which asserts that the forces exerted by two masses on each other must be equal and opposite.

To solve this, we look for solutions of the form

    x₁ = v₁ cos(ωt)
    x₂ = v₂ cos(ωt).

In such a solution, the two particles oscillate with the same frequency but with possibly different amplitudes v₁ and v₂. Such a solution is called a normal mode.

General motions of the system can be quite a bit more complicated. First of all, we have to worry about possible phase shifts. More important, we also have to

allow for linear combinations of the normal modes in which there is a mixture of different frequencies. In this way the situation is similar to that of a musical instrument, which may produce a complex sound that can be analyzed in terms of basic frequencies or harmonics. We leave such complications for another course. Here we content ourselves with doing the first step, which is to find the fundamental oscillations or normal modes.

The trial solution may be rewritten in vector form

    x = v cos(ωt)

where ω and v are to be determined. Then

    d²x/dt² = −ω² v cos(ωt).

Hence, substituting in the equation of motion yields

    m(−ω² v cos(ωt)) = [ −2k    k ] v cos(ωt).
                       [   k  −2k ]

Now factor out the common scalar factor cos(ωt) to obtain

    −mω² v = [ −2k    k ] v.
             [   k  −2k ]

Note that the amplitude v is a vector in this case, so we cannot cancel it as we did in the case of a single particle. The above equation may now be rewritten

    [ −2   1 ] v = −(ω²m/k) v.
    [  1  −2 ]

This is a trifle messy, but if we abbreviate λ = −ω²m/k for the scalar on the right, we can write it

    [ −2   1 ] v = λv.
    [  1  −2 ]

This equation should look familiar. It says that v is an eigenvector for the matrix on the left, and that λ = −ω²m/k is the corresponding eigenvalue. However, we know how to solve such problems. First we find the eigenvalues by solving the characteristic equation. For each eigenvalue, we can find the corresponding frequency ω from ω² = −λk/m. Next, for each eigenvalue, we can determine basic eigenvectors as before.

In this example, the characteristic equation is

    det [ −2−λ    1   ] = (−2 − λ)² − 1 = λ² + 4λ + 4 − 1 = λ² + 4λ + 3 = (λ + 1)(λ + 3) = 0.
        [   1   −2−λ ]

Hence, the roots are λ₁ = −1 (ω₁ = √(k/m)) and λ₂ = −3 (ω₂ = √(3k/m)).

For λ₁ = −1 (ω₁ = √(k/m)), finding the eigenvectors results in reducing the matrix

    [ −2+1    1   ]   [ −1   1 ]
    [   1   −2+1 ] = [  1  −1 ].

Hence, the solution is v₁ = v₂ with v₂ free. A basic solution vector for the subspace of solutions is

    v₁ = [ 1 ; 1 ].

The corresponding normal mode has the form

    x = [ 1 ; 1 ] cos(√(k/m) t).

Note that x₁(t) = x₂(t) for all t, so the two particles move together in tandem with the same angular frequency √(k/m). This behavior of the particles is a consequence of the fact that the components of the basic vector v₁ are equal.

Similarly, for λ₂ = −3 (ω₂ = √(3k/m)), we have

    [ −2+3    1   ]   [ 1  1 ]
    [   1   −2+3 ] = [ 1  1 ].

The solution is v₁ = −v₂ with v₂ free, and a basic solution vector for the system is

    v₂ = [ 1 ; −1 ].

The corresponding normal mode is

    x = [ 1 ; −1 ] cos(√(3k/m) t).

Note that x₁(t) = −x₂(t) for all t, so the two masses move opposite to one another with the same amplitude and angular frequency √(3k/m).

Note that in the above example, we could have determined the two vectors v₁ and v₂ by inspection. As noted, the first corresponds to motion in which the particles move in tandem and the spring between them experiences no net change in length. The second corresponds to motion in which the particles move back and forth equal amounts in opposite directions but with the same frequency. This would have simplified the problem quite a lot, for if you know an eigenvector of a matrix, it is fairly simple to find the corresponding eigenvalue, and hence the angular frequency. In fact, it is often true that careful consideration of the physical arrangement of the particles, with particular attention to any symmetries that may be present, may suggest possible normal modes with little or no calculation.
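The eigenvalue computation for this example can be confirmed numerically (a sketch using NumPy; setting k = m = 1 is an arbitrary normalization for the check):

```python
import numpy as np

# Dimensionless coefficient matrix of the two-mass, three-spring chain
K = np.array([[-2.0,  1.0],
              [ 1.0, -2.0]])

lam, V = np.linalg.eigh(K)   # eigenvalues ascending: -3 and -1
print(lam)

# Angular frequencies omega = sqrt(-lam * k/m), here with k = m = 1,
# so omega = sqrt(3) and 1 for the two modes
omega = np.sqrt(-lam)
print(omega)

# Normalize each eigenvector column by its first entry to see the
# mode shapes: [1, -1] for lam = -3 and [1, 1] for lam = -1
print(np.round(V / V[0], 6))
```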

Relation to the Principal Axis Theorem. As noted above, normal mode problems typically result in systems of the form

    (7)   d²x/dt² = Ax

where A is a real symmetric matrix. (In the case that all the particles have the same mass, A = (1/m)K, where K is a symmetric matrix of spring constants. If the masses are different, the situation is a bit more complicated, but the problem may still be restated in the above form.)

If P is a matrix whose columns are the elements of a basis of eigenvectors for A, then we saw earlier that AP = PD, where D is a diagonal matrix with the eigenvalues on the diagonal. Assume we make the change of coordinates x = P x′. Then

    d²(P x′)/dt² = A P x′
    P d²x′/dt² = A P x′
    d²x′/dt² = P⁻¹ A P x′ = D x′.

However, since D is diagonal, this last equation may be written as n scalar equations

    d²x′ⱼ/dt² = λⱼ x′ⱼ,   j = 1, 2, ..., n.

In the original coordinates, the motions of the particles are coupled, since the motion of each particle may affect the motion of the other particles. In the new coordinate system, these motions are decoupled. The new coordinates are called normal coordinates. Each x′ⱼ may be thought of as the displacement of one of n fictitious particles, each of which oscillates independently of the others in one of n mutually perpendicular directions. The physical significance in terms of the original particles of each normal coordinate is a bit murky, but they presumably represent underlying structure of some importance.

Example, revisited. For the two-mass system considered above,

    d²x/dt² = (k/m) [ −2   1 ] x.
                    [  1  −2 ]

A basis of eigenvectors for the coefficient matrix is, as before,

    { v₁ = [ 1 ; 1 ],  v₂ = [ 1 ; −1 ] }.

If we divide the vectors by their lengths (and, to obtain a rotation, replace v₂ by −v₂, which is also an eigenvector), we obtain the orthonormal basis

    { (1/√2) [ 1 ; 1 ],  (1/√2) [ −1 ; 1 ] }.

This in turn leads to the change of coordinates matrix

    P = (1/√2) [ 1  −1 ]
               [ 1   1 ].

[Figure: the x₁, x₂ axes together with the rotated x′₁, x′₂ axes at angle π/4.]

If you look carefully, you will see this represents a rotation of the original x₁, x₂-axes through an angle π/4. However, this has nothing to do with the original geometry of the problem: x₁ and x₂ stand for displacements of two different particles along the same one-dimensional axis. The x₁, x₂ plane is a fictitious configuration space in which a single point represents a pair of particles. It is not absolutely clear what a rotation of axes means for this plane, but the new normal coordinates x′₁, x′₂ obtained thereby give us a formalism in which the normal modes appear as decoupled oscillations.

Exercises for Section 6

1. Determine the normal modes, including frequencies and relative motions, for the system

    m d²x₁/dt² = −k(x₁ − x₂) = −kx₁ + kx₂
    m d²x₂/dt² = k(x₁ − x₂) + k(x₃ − x₂) = kx₁ − 2kx₂ + kx₃
    m d²x₃/dt² = −k(x₃ − x₂) = kx₂ − kx₃.
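The decoupling described above can be checked numerically for the revisited Example (a sketch with k/m = 1; P is the rotation matrix built from orthonormal eigenvectors):

```python
import numpy as np

# Coefficient matrix of the two-mass system (with k/m = 1)
A = np.array([[-2.0,  1.0],
              [ 1.0, -2.0]])

s = 1 / np.sqrt(2)
P = np.array([[s, -s],
              [s,  s]])  # rotation through pi/4; columns are eigenvectors

# P is orthogonal, so P^{-1} = P^t; the conjugate is diagonal,
# giving the decoupled scalar equations x'_j'' = lambda_j x'_j
D = P.T @ A @ P
print(np.round(D, 10))  # diagonal entries -1 and -3, off-diagonal 0
```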


Chapter 1 DEGREE OF A CURVE Road Map The idea of degree is a fundamental concept, which will take us several chapters to explore in depth. We begin by explaining what an algebraic curve is, and offer two

### Midterm Solutions. mvr = ω f (I wheel + I bullet ) = ω f 2 MR2 + mr 2 ) ω f = v R. 1 + M 2m

Midterm Solutions I) A bullet of mass m moving at horizontal velocity v strikes and sticks to the rim of a wheel a solid disc) of mass M, radius R, anchored at its center but free to rotate i) Which of

### WHICH LINEAR-FRACTIONAL TRANSFORMATIONS INDUCE ROTATIONS OF THE SPHERE?

WHICH LINEAR-FRACTIONAL TRANSFORMATIONS INDUCE ROTATIONS OF THE SPHERE? JOEL H. SHAPIRO Abstract. These notes supplement the discussion of linear fractional mappings presented in a beginning graduate course

### Algebra Unpacked Content For the new Common Core standards that will be effective in all North Carolina schools in the 2012-13 school year.

This document is designed to help North Carolina educators teach the Common Core (Standard Course of Study). NCDPI staff are continually updating and improving these tools to better serve teachers. Algebra

### Review Sheet for Test 1

Review Sheet for Test 1 Math 261-00 2 6 2004 These problems are provided to help you study. The presence of a problem on this handout does not imply that there will be a similar problem on the test. And

### L 2 : x = s + 1, y = s, z = 4s + 4. 3. Suppose that C has coordinates (x, y, z). Then from the vector equality AC = BD, one has

The line L through the points A and B is parallel to the vector AB = 3, 2, and has parametric equations x = 3t + 2, y = 2t +, z = t Therefore, the intersection point of the line with the plane should satisfy:

### ASEN 3112 - Structures. MDOF Dynamic Systems. ASEN 3112 Lecture 1 Slide 1

19 MDOF Dynamic Systems ASEN 3112 Lecture 1 Slide 1 A Two-DOF Mass-Spring-Dashpot Dynamic System Consider the lumped-parameter, mass-spring-dashpot dynamic system shown in the Figure. It has two point

### Nonlinear Iterative Partial Least Squares Method

Numerical Methods for Determining Principal Component Analysis Abstract Factors Béchu, S., Richard-Plouet, M., Fernandez, V., Walton, J., and Fairley, N. (2016) Developments in numerical treatments for

### 1. True/False: Circle the correct answer. No justifications are needed in this exercise. (1 point each)

Math 33 AH : Solution to the Final Exam Honors Linear Algebra and Applications 1. True/False: Circle the correct answer. No justifications are needed in this exercise. (1 point each) (1) If A is an invertible

### Vectors, Gradient, Divergence and Curl.

Vectors, Gradient, Divergence and Curl. 1 Introduction A vector is determined by its length and direction. They are usually denoted with letters with arrows on the top a or in bold letter a. We will use

### South Carolina College- and Career-Ready (SCCCR) Pre-Calculus

South Carolina College- and Career-Ready (SCCCR) Pre-Calculus Key Concepts Arithmetic with Polynomials and Rational Expressions PC.AAPR.2 PC.AAPR.3 PC.AAPR.4 PC.AAPR.5 PC.AAPR.6 PC.AAPR.7 Standards Know

### SOLUTIONS. f x = 6x 2 6xy 24x, f y = 3x 2 6y. To find the critical points, we solve

SOLUTIONS Problem. Find the critical points of the function f(x, y = 2x 3 3x 2 y 2x 2 3y 2 and determine their type i.e. local min/local max/saddle point. Are there any global min/max? Partial derivatives

### 1 2 3 1 1 2 x = + x 2 + x 4 1 0 1

(d) If the vector b is the sum of the four columns of A, write down the complete solution to Ax = b. 1 2 3 1 1 2 x = + x 2 + x 4 1 0 0 1 0 1 2. (11 points) This problem finds the curve y = C + D 2 t which

### 2.1 Three Dimensional Curves and Surfaces

. Three Dimensional Curves and Surfaces.. Parametric Equation of a Line An line in two- or three-dimensional space can be uniquel specified b a point on the line and a vector parallel to the line. The

### 6 EXTENDING ALGEBRA. 6.0 Introduction. 6.1 The cubic equation. Objectives

6 EXTENDING ALGEBRA Chapter 6 Extending Algebra Objectives After studying this chapter you should understand techniques whereby equations of cubic degree and higher can be solved; be able to factorise

### Mechanics 1: Conservation of Energy and Momentum

Mechanics : Conservation of Energy and Momentum If a certain quantity associated with a system does not change in time. We say that it is conserved, and the system possesses a conservation law. Conservation

### Lecture L3 - Vectors, Matrices and Coordinate Transformations

S. Widnall 16.07 Dynamics Fall 2009 Lecture notes based on J. Peraire Version 2.0 Lecture L3 - Vectors, Matrices and Coordinate Transformations By using vectors and defining appropriate operations between

### SECOND DERIVATIVE TEST FOR CONSTRAINED EXTREMA

SECOND DERIVATIVE TEST FOR CONSTRAINED EXTREMA This handout presents the second derivative test for a local extrema of a Lagrange multiplier problem. The Section 1 presents a geometric motivation for the

### Section 1.4. Lines, Planes, and Hyperplanes. The Calculus of Functions of Several Variables

The Calculus of Functions of Several Variables Section 1.4 Lines, Planes, Hyperplanes In this section we will add to our basic geometric understing of R n by studying lines planes. If we do this carefully,

### Algebra 1 Course Title

Algebra 1 Course Title Course- wide 1. What patterns and methods are being used? Course- wide 1. Students will be adept at solving and graphing linear and quadratic equations 2. Students will be adept

### REVIEW EXERCISES DAVID J LOWRY

REVIEW EXERCISES DAVID J LOWRY Contents 1. Introduction 1 2. Elementary Functions 1 2.1. Factoring and Solving Quadratics 1 2.2. Polynomial Inequalities 3 2.3. Rational Functions 4 2.4. Exponentials and

### Chapter 11 Equilibrium

11.1 The First Condition of Equilibrium The first condition of equilibrium deals with the forces that cause possible translations of a body. The simplest way to define the translational equilibrium of

### Figure 1.1 Vector A and Vector F

CHAPTER I VECTOR QUANTITIES Quantities are anything which can be measured, and stated with number. Quantities in physics are divided into two types; scalar and vector quantities. Scalar quantities have

### Derive 5: The Easiest... Just Got Better!

Liverpool John Moores University, 1-15 July 000 Derive 5: The Easiest... Just Got Better! Michel Beaudin École de Technologie Supérieure, Canada Email; mbeaudin@seg.etsmtl.ca 1. Introduction Engineering

### Lecture 14: Section 3.3

Lecture 14: Section 3.3 Shuanglin Shao October 23, 2013 Definition. Two nonzero vectors u and v in R n are said to be orthogonal (or perpendicular) if u v = 0. We will also agree that the zero vector in

### Lecture L22-2D Rigid Body Dynamics: Work and Energy

J. Peraire, S. Widnall 6.07 Dynamics Fall 008 Version.0 Lecture L - D Rigid Body Dynamics: Work and Energy In this lecture, we will revisit the principle of work and energy introduced in lecture L-3 for

### 13.4 THE CROSS PRODUCT

710 Chapter Thirteen A FUNDAMENTAL TOOL: VECTORS 62. Use the following steps and the results of Problems 59 60 to show (without trigonometry) that the geometric and algebraic definitions of the dot product

### Recall that the gradient of a differentiable scalar field ϕ on an open set D in R n is given by the formula:

Chapter 7 Div, grad, and curl 7.1 The operator and the gradient: Recall that the gradient of a differentiable scalar field ϕ on an open set D in R n is given by the formula: ( ϕ ϕ =, ϕ,..., ϕ. (7.1 x 1

### Math 312 Homework 1 Solutions

Math 31 Homework 1 Solutions Last modified: July 15, 01 This homework is due on Thursday, July 1th, 01 at 1:10pm Please turn it in during class, or in my mailbox in the main math office (next to 4W1) Please

### MATH 240 Fall, Chapter 1: Linear Equations and Matrices

MATH 240 Fall, 2007 Chapter Summaries for Kolman / Hill, Elementary Linear Algebra, 9th Ed. written by Prof. J. Beachy Sections 1.1 1.5, 2.1 2.3, 4.2 4.9, 3.1 3.5, 5.3 5.5, 6.1 6.3, 6.5, 7.1 7.3 DEFINITIONS

### Math 53 Worksheet Solutions- Minmax and Lagrange

Math 5 Worksheet Solutions- Minmax and Lagrange. Find the local maximum and minimum values as well as the saddle point(s) of the function f(x, y) = e y (y x ). Solution. First we calculate the partial

### MATH 551 - APPLIED MATRIX THEORY

MATH 55 - APPLIED MATRIX THEORY FINAL TEST: SAMPLE with SOLUTIONS (25 points NAME: PROBLEM (3 points A web of 5 pages is described by a directed graph whose matrix is given by A Do the following ( points