# MT210 Notebook 4



MT210 Notebook 4, Fall semester, prepared by Professor Jenny Baglivo. © Copyright by Jenny A. Baglivo. All Rights Reserved.

Contents of MT210 Notebook 4:

- 4.1 Eigenvalues and Eigenvectors
  - 4.1.1 Definitions; Graphical Illustrations
  - 4.1.2 Eigenspaces, Characteristic Polynomials, Characteristic Equations
  - 4.1.3 Eigenanalysis and Powers; Eigenvector Bases; Special Cases
  - 4.1.4 Fundamental Theorem of Algebra, Complex Numbers and Eigenvalues
  - 4.1.5 Algebraic and Geometric Multiplicity; More About Eigenvector Bases
  - 4.1.6 Similar Matrices, Diagonalizable Matrices
  - 4.1.7 Applications: Population Projections and Stochastic Matrices
- 4.2 Orthogonality and Orthogonal Projections
  - 4.2.1 Inner Product, Length, Distance
  - 4.2.2 Properties of Inner Product
  - 4.2.3 Orthogonal, Orthogonal Sets, Orthogonal Complement
  - 4.2.4 Fundamental Theorem of Linear Algebra
  - 4.2.5 Orthogonal Spanning Sets
  - 4.2.6 Angles, Inner Products, and Orthogonal Projections
  - 4.2.7 Gram-Schmidt Orthogonalization Process
- 4.3 Least Squares Analysis
  - 4.3.1 Best Approximate Solutions; Normal Equations
  - 4.3.2 Application: Least Squares Analyses of Data
- Footnote: Eigenvalues, Eigenvectors and Least Squares Analysis


This notebook is concerned with further matrix concepts and their applications. In particular, we will study eigenvalues, eigenvectors, orthogonality and least squares. The notes correspond to material in Chapters 5 and 6 of the Lay textbook.

## 4.1 Eigenvalues and Eigenvectors

### 4.1.1 Definitions; Graphical Illustrations

Let A be a square matrix of order n, and let λ ("lambda") be a scalar.

1. λ is said to be an eigenvalue of A if Ax = λx for some nonzero vector x.
2. If x ≠ O satisfies Ax = λx, then x is an eigenvector of A with eigenvalue λ.

Note that if x is an eigenvector, then so is cx for each nonzero c ∈ R, since

A(cx) = c(Ax) = c(λx) = λ(cx).

Thus, the nonzero elements of Span{x} are all eigenvectors of A.

Eigenvalues are exceptional values and eigenvectors are exceptional vectors. The prefix "eigen" comes from the German language, meaning "owned by" or "peculiar to".

Example 1. Let A = [ … ]. Then

1. λ1 = . is an eigenvalue of A, with corresponding eigenvector e1.
2. λ2 = . is an eigenvalue of A, with corresponding eigenvector e2.

Consider the transformation with rule T(x) = Ax. Then

1. T(x) contracts points in the x-direction; in particular, T(e1) = λ1 e1.
2. T(x) expands points in the y-direction; in particular, T(e2) = λ2 e2.
3. T(x) maps the unit circle to the ellipse shown in the plot.
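The defining property Ax = λx is easy to check numerically. The sketch below uses NumPy with a hypothetical diagonal matrix that, like the example above, contracts in one coordinate direction and expands in the other (the entries here are illustrative, not the ones from the notes).

```python
import numpy as np

# Hypothetical diagonal matrix: contracts along e1, expands along e2.
A = np.array([[0.5, 0.0],
              [0.0, 1.5]])

eigenvalues, eigenvectors = np.linalg.eig(A)

# Verify the defining property A x = (lambda) x for each eigenpair.
for lam, x in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ x, lam * x)
```

For a diagonal matrix the eigenvalues are just the diagonal entries, so `eigenvalues` here contains 0.5 and 1.5, with the standard basis vectors as eigenvectors.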

Example 2. Let A = [ .6 . ; . .9 ]. Then

1. λ1 = . is an eigenvalue of A, with corresponding eigenvector v1.
2. λ2 = 1 is an eigenvalue of A, with corresponding eigenvector v2.

Consider the transformation with rule T(x) = Ax. Then

1. T(x) contracts points in the v1-direction; in particular, T(v1) = λ1 v1.
2. T(x) leaves points in the v2-direction fixed; in particular, T(v2) = v2.
3. T(x) maps the unit circle to the ellipse shown in the plot.

The complete analysis of Example 2 will be carried out in the next section.

Note that a square matrix of order n with values in the real numbers (aij ∈ R for all i, j) may not have eigenvalues λ ∈ R and eigenvectors x ∈ R^n. For example, matrices A corresponding to rotations around the origin,

A = [ cos θ  −sin θ ; sin θ  cos θ ], where θ ≠ mπ for every integer m,

leave no direction in 2-space fixed.

### 4.1.2 Eigenspaces, Characteristic Polynomials, Characteristic Equations

Let A be a square matrix of order n, and let λ be a scalar.

1. Eigenspace: If λ is an eigenvalue of A, then the eigenspace of λ is the set containing all x satisfying Ax = λx:

Eigenspace(λ) = {x : Ax = λx}.

The eigenspace of λ contains all eigenvectors with eigenvalue λ, together with the zero vector. Since

Ax = λx = λIn x  ⟺  Ax − λIn x = O  ⟺  (A − λIn)x = O,

we know that Eigenspace(λ) = Null(A − λIn). Further, since the eigenspace of λ must have positive dimension, the matrix (A − λIn) must be singular.

2. Characteristic Polynomial: The expression det(A − λI) is an nth degree polynomial in the variable λ, and is called the characteristic polynomial of A.

3. Characteristic Equation: The equation det(A − λI) = 0 is called the characteristic equation of A. To find the eigenvalues of A, we solve the characteristic equation for λ.

Example 2, continued. Let A = [ .6 . ; . .9 ], as above.

(1) Since A − λI = [ .6−λ . ; . .9−λ ], the characteristic polynomial is

det(A − λI) = (.6 − λ)(.9 − λ) − (.)(.) = λ² − 1.5λ + . .

Since det(A − λI) = (λ − .)(λ − 1), the solutions to the characteristic equation are the eigenvalues . and 1 listed earlier.

(2) Let λ = . . Row reducing the augmented matrix [ A − λI | O ] shows that the solutions of (A − λI)x = O have one free variable, so that

Eigenspace(.) = Null(A − .I) = Span{ [ … ] }.

Note that v1 ∈ Eigenspace(.).

(3) Let λ = 1. Row reducing [ A − I | O ] similarly gives

Eigenspace(1) = Null(A − I) = Span{ [ … ] },

and v2 ∈ Eigenspace(1).
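The recipe above (form det(A − λI), then solve the characteristic equation) can be sketched numerically. The matrix below is hypothetical, chosen only so that its diagonal matches the .6 and .9 visible in the example; `numpy.poly` returns the coefficients of the characteristic polynomial and `numpy.roots` solves the characteristic equation.

```python
import numpy as np

# Hypothetical matrix with diagonal entries .6 and .9 (off-diagonals assumed).
A = np.array([[0.6, 0.2],
              [0.2, 0.9]])

coeffs = np.poly(A)       # characteristic polynomial coefficients: [1, -1.5, 0.5]
roots = np.roots(coeffs)  # solutions of the characteristic equation

# The roots of the characteristic polynomial are exactly the eigenvalues.
assert np.allclose(np.sort(roots), np.sort(np.linalg.eigvals(A)))
```

Here the characteristic polynomial is λ² − 1.5λ + 0.5 = (λ − 1)(λ − 0.5), so the two roots are the eigenvalues 1 and 0.5.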

Problem 1. Let A = [ .8 . ; . .7 ]. Determine the eigenvalues of A, and write each eigenspace as the span of a set of vectors.

Problem 2. Let A = [ … ]. Determine the eigenvalues of A, and write each eigenspace as the span of a set of vectors.

Problem 3. Let A = [ .8 … ]. Determine the eigenvalues of A, and write each eigenspace as the span of a set of vectors.

### 4.1.3 Eigenanalysis and Powers; Eigenvector Bases; Special Cases

Conducting an eigenanalysis (that is, finding eigenvalues and eigenvectors) can be challenging. The following is an initial list of useful theorems for eigenanalysis:

1. Powers: If λ is an eigenvalue of A with eigenvector x and k is a positive integer, then x is an eigenvector of A^k with corresponding eigenvalue λ^k.

2. Bases and Powers: Let A be a square matrix of order n. Suppose that vi is an eigenvector of A with corresponding eigenvalue λi, for i = 1, 2, ..., n, and the set {v1, v2, ..., vn} is a basis for R^n. Then, for every vector x and positive integer k, A^k x can be computed quickly using the unique representation of x in the eigenvector basis {v1, v2, ..., vn}. Specifically, if x = c1 v1 + ⋯ + cn vn for unique constants ci, then

A^k x = A^k (c1 v1 + ⋯ + cn vn) = c1 (A^k v1) + ⋯ + cn (A^k vn) = c1 (λ1)^k v1 + ⋯ + cn (λn)^k vn.

3. Diagonal Matrices: Let A be a diagonal matrix of order n. Then ei is an eigenvector of A with eigenvalue aii, for i = 1, 2, ..., n. Thus, the standard basis for R^n is an eigenvector basis for the diagonal matrix A, and the eigenvalues are the diagonal elements of A.

4. Distinct Eigenvalues: Let A be a square matrix of order n. If A has n distinct eigenvalues, then A has an eigenvector basis. To construct an eigenvector basis, choose one nonzero vector from each eigenspace.

5. Triangular Matrices: Let A be a triangular matrix of order n. Then

det(A − λI) = (a11 − λ)(a22 − λ) ⋯ (ann − λ)

and the eigenvalues of A are a11, a22, ..., ann. (There may be repeats in the list.)
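The Powers theorem (item 1) can be verified numerically; the matrix below is a hypothetical example with eigenvalues 1 and 3.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])  # hypothetical matrix with eigenvalues 1 and 3

lams, V = np.linalg.eig(A)
k = 5
Ak = np.linalg.matrix_power(A, k)

# If A x = (lam) x, then A^k x = (lam)^k x.
for lam, x in zip(lams, V.T):
    assert np.allclose(Ak @ x, lam**k * x)
```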

General application: projections over time. A general application of eigenanalysis is to the analysis of projections over time. In this type of application,

1. x0 represents information at time 0,
2. x1 = A x0 represents information at time 1,
3. x2 = A x1 = A² x0 represents information at time 2, and so forth.

If A has an eigenvector basis, then information at time k is

xk = A^k x0 = c1 (λ1)^k v1 + ⋯ + cn (λn)^k vn, where x0 = c1 v1 + ⋯ + cn vn.

We will see an important application of this methodology in Section 4.1.7.

As a simple illustration, consider A = [ .6 . ; . .9 ] once again. Let v1 and v2 be the eigenvectors found earlier, λ1 = . , λ2 = 1, and x0 = c1 v1 + c2 v2. Now,

xk = A^k x0 = c1 (.)^k v1 + c2 (1)^k v2 → c2 v2 as k → ∞.

Thus, information at time k is approximately equal to the v2-component of information at time 0 when k is large.

Problem 4. Use the definitions of eigenvalue and eigenvector, and properties of matrices, to prove the following special case of the first theorem listed on the previous page: Let A be a square matrix of order n, and let x be an eigenvector of A with eigenvalue λ. Demonstrate that x is an eigenvector of A² with eigenvalue λ².
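The long-run behavior xk → c2 v2 can be sketched in NumPy. The matrix below is hypothetical but, like the example, has one eigenvalue equal to 1 and one of absolute value less than 1; because it is symmetric, the coefficient c2 can be computed with a dot product.

```python
import numpy as np

# Hypothetical symmetric matrix with eigenvalues 0.5 and 1.
A = np.array([[0.6, 0.2],
              [0.2, 0.9]])

lams, V = np.linalg.eig(A)
v2 = V[:, np.argmax(lams)]                # eigenvector for lambda = 1

x0 = np.array([1.0, 0.0])
xk = np.linalg.matrix_power(A, 50) @ x0   # x_k = A^k x_0

# The (0.5)^k term dies out, leaving only the v2-component of x0.
c2 = (x0 @ v2) / (v2 @ v2)
assert np.allclose(xk, c2 * v2, atol=1e-6)
```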

Problem 5. Let A be a square matrix of order n. Assume that A has n distinct eigenvalues, and let vi ∈ Eigenspace(λi) be a nonzero vector for each i. Show that {v1, v2, ..., vn} is a linearly independent set, thus forming a basis for R^n.

Problem 6. The following triangular matrices each have eigenvalues . , . , . :

(a) A = [ … ]    (b) A = [ … ]

In each case, write Eigenspace( ) and Eigenspace( ) as spans of sets of vectors.

### 4.1.4 Fundamental Theorem of Algebra, Complex Numbers and Eigenvalues

Let A be a square matrix of order n. To find the eigenvalues of A we need to solve the characteristic equation, which requires that we factor the characteristic polynomial det(A − λI). By the fundamental theorem of algebra, the characteristic polynomial can always be factored into n linear terms if we allow both real and complex numbers:

det(A − λI) = (λ1 − λ)(λ2 − λ) ⋯ (λn − λ), where each λi ∈ C.

The eigenvalues are λ1, λ2, ..., λn. In general, not all λi's are distinct.

For example, let A be a square matrix of order 3 whose characteristic polynomial factors as

det(A − λI) = ( . − λ)(λ² + 36) = ( . − λ)(6i − λ)(−6i − λ),

so that the eigenvalues of A are . , 6i and −6i. Further,

Eigenspace( . ) = Null(A − . I) = Span{ [ … ] },
Eigenspace(6i) = Null(A − 6iI) = Span{ [ … ] },
Eigenspace(−6i) = Null(A + 6iI) = Span{ [ … ] }.

Note that matrices with complex eigenvalues and complex eigenvectors are common in applied mathematics. Examples include population projection matrices (see Section 4.1.7).

### 4.1.5 Algebraic and Geometric Multiplicity; More About Eigenvector Bases

Let A be a square matrix of order n and let λ0 be an eigenvalue of A. Then

1. Algebraic Multiplicity: The algebraic multiplicity of λ0 is the number of times (λ0 − λ) appears as a factor of the characteristic polynomial.

2. Geometric Multiplicity: The geometric multiplicity of λ0 is the dimension of Eigenspace(λ0).

Note that, by the fundamental theorem of algebra, the sum of the algebraic multiplicities of the eigenvalues of A must be n.
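Complex eigenvalues appear automatically in NumPy when the characteristic polynomial has no real roots; the rotation matrix from the earlier note is a convenient test case (θ = π/3 is an arbitrary choice).

```python
import numpy as np

theta = np.pi / 3
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

lams = np.linalg.eigvals(R)

# A rotation by theta (not a multiple of pi) has the complex conjugate
# pair of eigenvalues cos(theta) +/- i sin(theta).
assert np.allclose(lams.real, np.cos(theta))
assert np.allclose(sorted(lams.imag), [-np.sin(theta), np.sin(theta)])
```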

Problem 6, continued. Fill in the table below with information for the triangular matrices from Problem 6: for each matrix, record the geometric multiplicity and the algebraic multiplicity of each eigenvalue.

More on finding eigenvector bases. Here are two additional theorems that are useful for doing eigenanalyses:

1. Algebraic and Geometric Multiplicity: Let A be a square matrix of order n and let λ0 be an eigenvalue of A. Then the algebraic and geometric multiplicities of λ0 must satisfy the following inequality:

Geometric Multiplicity of λ0 ≤ Algebraic Multiplicity of λ0.

(If the geometric multiplicity is strictly less than the algebraic multiplicity, then there is a deficiency of eigenvectors and we won't be able to find an eigenvector basis.)

2. Pooling Eigenspace Bases: If A has p distinct eigenvalues (λi for i = 1, 2, ..., p) and Bi is a basis for the eigenspace of λi for each i, then the set B1 ∪ B2 ∪ ⋯ ∪ Bp is a linearly independent set. (If you pool the bases, you get a linearly independent set.)

Problem 6, continued. Does either of the matrices in Problem 6 have an eigenvector basis for R³? If yes, explicitly write down a basis. If no, explain why.

### 4.1.6 Similar Matrices, Diagonalizable Matrices

Similar matrices: Let A and B be square matrices of order n. Then A and B are said to be similar if there exists an invertible matrix P satisfying A = PBP⁻¹ (equivalently, B = P⁻¹AP). If A and B are similar matrices, then

1. they have the same determinant, det(A) = det(B),
2. they have the same eigenvalues, and
3. their kth powers satisfy A^k = P B^k P⁻¹, for each positive integer k.

The factorization A = PBP⁻¹ is useful when B is easier to work with than A.

Diagonalizable matrices: Let A be a square matrix of order n. Then A is said to be diagonalizable if it is similar to a diagonal matrix; that is, if A = PDP⁻¹ where D is diagonal and P is invertible. The following theorem tells us exactly when A is diagonalizable.

Theorem (Diagonalization). Let A be a square matrix of order n. Then A is diagonalizable if and only if A has n linearly independent eigenvectors. In fact, A = PDP⁻¹ iff the columns of P are n linearly independent eigenvectors of A and the diagonal entries of D are the corresponding eigenvalues.

For example:

1. If A = [ … ], then A = PDP⁻¹ where P = [ … ] and D = [ … ].
2. If A is the matrix of order 3 with eigenvalues . , 6i, −6i from the last section, then A = PDP⁻¹, where the columns of P are the corresponding (complex) eigenvectors and D is the diagonal matrix with entries . , 6i, −6i.
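The factorization A = PDP⁻¹ and the power identity A^k = PD^kP⁻¹ can be checked directly; the matrix below is hypothetical.

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])   # hypothetical diagonalizable matrix (eigenvalues 2 and 5)

lams, P = np.linalg.eig(A)   # columns of P are eigenvectors
D = np.diag(lams)
P_inv = np.linalg.inv(P)

assert np.allclose(A, P @ D @ P_inv)                  # A = P D P^-1
k = 4
assert np.allclose(np.linalg.matrix_power(A, k),
                   P @ np.diag(lams**k) @ P_inv)      # A^k = P D^k P^-1
```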

16 Problem, continued. The square matrix A =.8 is diagonalizable.. Use the work you did on page 8 to find matrices P and D so that A = P DP. Diagonalization and Transformations. Suppose that A = P DP, where λ P = [ λ v v... v n and D =..., λ n and let B = {v, v,..., v n } be the basis whose elements are the columns of P. Then the action of A is to ( ): change from the standard basis of R n to basis B; ( ): operate as a diagonal matrix in basis B, and ( ): change back to the standard basis for interpretation. x = n i= c iv i A c c [x B = c n Ax = n i= λ ic i v i D [Ax B = λ c λ c 6. 7 λ nc n Similarly, the action of A k is to ( ): change from the standard basis of R n to basis B; ( ): operate as the k th power of a diagonal matrix in basis B, and ( ): change back to the standard basis for interpretation. x = n i= c iv i A k c c [x B = c n Ak x = n i= λk i c iv i D k [Ak x B = λ k c λ k c 6. 7 λ k nc n 6

### 4.1.7 Applications: Population Projections and Stochastic Matrices

This section contains two applications of eigenanalysis.

I. Population projections, and the northern spotted owl (Source: Lay text). Researchers used demographic data for the northern spotted owl to develop a stage-matrix model using three life stages (juvenile, subadult and adult). Their goal was to track the population growth/decline of the owl in a particular old growth forest in the Pacific northwest.

If ji is the number of juveniles, si is the number of subadults and ai is the number of adults in the population at time i, then xi = (ji, si, ai) is the population vector at time i, and the total population at time i is the sum of the components, ji + si + ai.

The matrix A allows you to project one year ahead. Given xi, the population vector at time i + 1 is xi+1 = A xi, with components

j(i+1) = (.) ai, s(i+1) = (.8) ji, a(i+1) = (.7) si + (.9) ai.

Matrices used in population problems are generally diagonalizable, with both real and complex eigenvalues. For this problem, we can write A = PDP⁻¹, where D = diag(.98, . + .6i, . − .6i) and P is the matrix of corresponding (complex) eigenvectors. The diagonal elements of D are the eigenvalues of A. Since (.98)^k → 0, (. + .6i)^k → 0 and (. − .6i)^k → 0 as k → ∞, we know that the population will eventually crash given any initial population vector.

The author tells us that, if the (2,1)-entry of the A matrix were .6 (the proportion that would be appropriate for this species in a different location), then the population would grow. The new matrix (the one with the new entry) would also be diagonalizable, with a dominant real eigenvalue and a pair of complex conjugate eigenvalues. Since the powers of the two complex eigenvalues tend to zero as k → ∞ while the powers of the dominant eigenvalue do not, the statement the author makes is correct.

I simulated population growth over a number of years, starting with a fixed initial total population (j0 + s0 + a0) and using both the true matrix and the matrix with the altered entry. The results are summarized in the plots below.

1. Left Plot: Using the true matrix, the total population declined over time.
2. Right Plot: Using the altered matrix, the total population increased over time with approximate geometric growth. Geometric growth kicks in when k is large enough so that the kth powers of the last two eigenvalues are close to zero.

II. Stochastic matrices, moving cars, and searching the internet. A probability vector is one whose entries are nonnegative real numbers with sum 1. A stochastic matrix is a square matrix whose columns are probability vectors.

Stochastic matrices are used to model population movement over time, where individuals move among n different locations. The following simple example considers the movement of rental cars over time.

Example. A car rental agency has three rental locations (1, 2, 3). A customer may rent a car from any of the three locations and return the car to any of the three locations. From past experience, management observes that:

1. Location 1: Cars rented from location 1 are returned to locations 1, 2, 3 with probabilities . , . and . , respectively;
2. Location 2: Cars rented from location 2 are returned to locations 1, 2, 3 with probabilities . , .8 and 0, respectively; and
3. Location 3: Cars rented from location 3 are returned to locations 1, 2, 3 with probabilities . , . and . , respectively.

Suppose that we would like to determine the probabilities that a car initially rented from a given location (either 1, 2 or 3) will be returned to locations 1, 2, 3 after k rental periods. Let A be the matrix whose columns are the probabilities listed above, let ai, bi and ci be the probabilities that the car is at locations 1, 2, 3 after i rental periods, and let xi = (ai, bi, ci).

The matrix A can be used to project one rental period; that is, xi+1 = A xi for each i. The starting location vectors x0 are the standard basis vectors: e1 for location 1, e2 for location 2, and e3 for location 3.

We can write A = PDP⁻¹, where D = diag(1, . , .), P = [ v1 v2 v3 ], and the first column of P has nonnegative entries with sum 1. Note that 1^k = 1, while (.)^k → 0 and (.)^k → 0 as k → ∞. If x0 corresponds to the location 1 vector, for example, then x0 is a combination of v1, v2 and v3 with first coefficient 1, and xk = A^k x0 ≈ v1 for large k. In fact, xk ≈ v1 after only a few time periods. (The table of ak, bk, ck values for k = 0, 1, 2, ... is omitted here.)
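A steady-state computation of this kind can be sketched as follows. The stochastic matrix below is hypothetical (illustrative probabilities, not the agency's); iterating xk+1 = A xk converges to the eigenvector for λ = 1, scaled to be a probability vector.

```python
import numpy as np

# Hypothetical stochastic matrix: each column is a probability vector.
A = np.array([[0.8, 0.2, 0.3],
              [0.1, 0.7, 0.3],
              [0.1, 0.1, 0.4]])
assert np.allclose(A.sum(axis=0), 1.0)

# Start a car at location 1 and project many rental periods.
x = np.array([1.0, 0.0, 0.0])
for _ in range(200):
    x = A @ x

# x has converged to the steady state: A x = x, and x is still a
# probability vector (column sums of A preserve the total).
assert np.allclose(A @ x, x, atol=1e-12)
assert np.isclose(x.sum(), 1.0)
```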

Similarly, if x0 corresponds to the location 2 vector, then x0 is a combination of v1, v2, v3 with first coefficient 1 and xk = A^k x0 ≈ v1 for large k; the same is true for the location 3 vector. Thus, if k is large, the probabilities that a car initially at any one of the three locations will be returned to locations 1, 2, 3 after k rental periods are (approximately) . , .6 and . .

Surfing the web. Now, imagine yourself surfing the web, starting from some initial location and randomly following hyperlinks. Assuming an appropriate A matrix can be created and analyzed as above, the probability that you will be at a given location after a sufficient number of steps can be determined. The designers of Google use the eventual probabilities to determine the order in which the results of a search are reported; specifically, webpages with higher probabilities are listed before those with lower probabilities. Their A matrix uses the hyperlink structure of the web and some proprietary information.

## 4.2 Orthogonality and Orthogonal Projections

### 4.2.1 Inner Product, Length, Distance

Let v and w be vectors in R^k. Then:

1. Inner product: The inner product (dot product) of v and w is the number

v·w = v^T w = v1 w1 + v2 w2 + ⋯ + vk wk.

2. Length: The length of v is the number

‖v‖ = √(v·v) = √(v1² + v2² + ⋯ + vk²).

3. Distance: The distance between v and w is the length of the difference vector v − w:

dist(v, w) = ‖v − w‖ = √((v1 − w1)² + (v2 − w2)² + ⋯ + (vk − wk)²).

Further, a unit vector is a vector of length one. If v ≠ O, then u = v/‖v‖ is the unit vector in the direction of v.

For example, if v = [ 8 … ] and w = [ … ], then

(1) v·w =
(2) The length of v is
(3) The unit vector in the direction of v is
(4) The distance between v and w is

### 4.2.2 Properties of Inner Product

Let u, v, w ∈ R^k and c ∈ R. Then

1. Commutative: v·w = w·v.
2. Scalars: (cv)·w = v·(cw) = c(v·w).
3. Distributive: (u + v)·w = (u·w) + (v·w) and w·(u + v) = (w·u) + (w·v).
4. Nonnegative: v·v ≥ 0. Further, v·v = 0 iff v = O.

Problem 1. Let v1, v2 and w be vectors in R^k, and suppose that v1·w = 0 and v2·w = 0. Use the properties of inner product to demonstrate that v·w = 0 for every v ∈ Span{v1, v2}.
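The inner product, length and distance computations above translate directly into NumPy; the vectors below are hypothetical.

```python
import numpy as np

v = np.array([1.0, 2.0, 2.0])
w = np.array([3.0, 0.0, 4.0])

assert np.isclose(v @ w, 11.0)                 # inner product 1*3 + 2*0 + 2*4
assert np.isclose(np.linalg.norm(v), 3.0)      # length sqrt(1 + 4 + 4)
assert np.isclose(np.linalg.norm(v - w),
                  np.sqrt(12.0))               # distance between v and w

u = v / np.linalg.norm(v)                      # unit vector in the direction of v
assert np.isclose(np.linalg.norm(u), 1.0)
```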

### 4.2.3 Orthogonal, Orthogonal Sets, Orthogonal Complement

The concept of orthogonality is important in applications.

1. Orthogonal: Let v and w be vectors in R^k. Then v and w are said to be orthogonal if their inner product is zero: v·w = 0.

2. Orthogonal Set: Let v1, v2, ..., vp be vectors in R^k. Then {v1, v2, ..., vp} is said to be an orthogonal set if vi ≠ O for all i, and vi·vj = 0 when i ≠ j.

3. Orthogonal Complement: Let V be a subspace of R^k. The orthogonal complement of V, denoted by V⊥ ("V-perp"), is the collection of all vectors orthogonal to V:

V⊥ = {w : w is orthogonal to each v ∈ V}.

Problem 2. Let v1 = [ a … ], v2 = [ … ] and v3 = [ b … ]. Find values of a, b so that {v1, v2, v3} is an orthogonal set.

Properties of orthogonal complements. Let V be a subspace of R^k and let V⊥ be its orthogonal complement. Then

1. Subspace: The orthogonal complement of V is a subspace of R^k. Further, V ∩ V⊥ = {O}, since O is the only vector satisfying x·x = 0.

2. Orthogonal Complement of V⊥: The orthogonal complement of V⊥ is V: (V⊥)⊥ = V.

3. Spanning Sets: Suppose that V = Span{v1, v2, ..., vp}. Then w ∈ V⊥ if and only if w·vi = 0 for i = 1, 2, ..., p.

4. Pooling Bases: If B1 is a basis for V and B2 is a basis for V⊥, then the union of the bases, B1 ∪ B2, is a basis for R^k.

To illustrate orthogonal complements, let V = Span{v} = Span{ [ … ] } and w = (w1, w2). Setting 0 = w·v and solving leaves one free variable, so that V⊥ = Span{ [ … ] }, as shown in the plot.

24 Problem. In each case, write V as a span. (a) V = Span 8 < :, 9 = ; (b) V = Span 8 < : 9 = ;

### 4.2.4 Fundamental Theorem of Linear Algebra

Let A be an m × n matrix and let A^T be its transpose. The following theorem, known as the fundamental theorem of linear algebra, gives important relationships among the four subspaces related to A and its transpose.

Fundamental Theorem of Linear Algebra. Let A be an m × n matrix. Then

1. Null(A) and Row(A) = Col(A^T) are orthogonal complements in R^n.
2. Null(A^T) and Row(A^T) = Col(A) are orthogonal complements in R^m.

Further, if rank(A) = r, then

3. dim(Col(A)) = dim(Col(A^T)) = r,
4. dim(Null(A)) = n − r, and
5. dim(Null(A^T)) = m − r.

Proof of first statement: It is instructive to demonstrate the first statement in the theorem. Write the rows of A as the transposes of the column vectors α1, α2, ..., αm, so that A^T = [ α1 α2 ⋯ αm ]. Now (complete the proof),

Ax = O  ⟺  (α1^T x, α2^T x, ..., αm^T x) = (α1·x, α2·x, ..., αm·x) = O  ⟺  …
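The orthogonality relationships in the theorem can be checked with the singular value decomposition, which produces bases for the four subspaces; the rank-1 matrix below is a hypothetical example.

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])        # hypothetical rank-1 matrix (m = 2, n = 3)

U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-10))             # rank(A) = 1
row_A = Vt[:r].T                       # basis for Row(A) = Col(A^T)
null_A = Vt[r:].T                      # basis for Null(A)

# Null(A) and Row(A) are orthogonal complements in R^n.
assert np.allclose(row_A.T @ null_A, 0.0)
assert r + null_A.shape[1] == A.shape[1]    # dim Row(A) + dim Null(A) = n
assert np.allclose(A @ null_A, 0.0, atol=1e-9)
```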

26 Problem. Find bases for Null(A), Col(A), Null(A T ) and Col(A T ), where A =

### 4.2.5 Orthogonal Spanning Sets

The following theorem tells us that a set of mutually orthogonal nonzero vectors is linearly independent. Further, the coordinates of a vector w with respect to a basis of mutually orthogonal nonzero vectors can be found quickly using dot products.

Theorem (Orthogonal Spanning Sets). Let {v1, v2, ..., vp} be an orthogonal set of vectors in R^k and let V = Span{v1, v2, ..., vp}. Then

1. {v1, v2, ..., vp} is a basis for V.
2. If w ∈ V, then w = c1 v1 + ⋯ + cp vp, where ci = (w·vi)/(vi·vi) for each i.

Problem 5. In each case, use dot products to find the coordinates of the vector w with respect to the given orthogonal basis.

(a) V = Span{v1, v2} = Span{ [ … ], [ … ] }, and w = [ … ].

(b) V = Span{v1, v2, v3} = Span{ [ … ], [ … ], [ … ] }, and w = [ … ].

### 4.2.6 Angles, Inner Products, and Orthogonal Projections

Angle between v and w. Let v and w be vectors in R² or R³ represented as directed line segments beginning at the origin. The angle between v and w, θ, is the smaller of the two angles at the origin determined by v and w. The angle θ lies in the interval [0, π]. The following plots illustrate angles satisfying 0 < θ < π/2 (left) and π/2 < θ < π (right).

Analytic geometry can be used to demonstrate that

v·w = ‖v‖ ‖w‖ cos(θ).

Note, in particular, that if θ = π/2, then v·w = 0.
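The coordinate formula ci = (w·vi)/(vi·vi) from the Orthogonal Spanning Sets theorem can be checked numerically; the orthogonal basis below is hypothetical.

```python
import numpy as np

# Hypothetical orthogonal basis for a plane in R^3.
v1 = np.array([1.0, 1.0, 0.0])
v2 = np.array([1.0, -1.0, 0.0])
assert np.isclose(v1 @ v2, 0.0)        # the set is orthogonal

w = 2 * v1 - 3 * v2                     # a vector known to lie in V = Span{v1, v2}

# Coordinates recovered with dot products, with no row reduction needed.
c1 = (w @ v1) / (v1 @ v1)
c2 = (w @ v2) / (v2 @ v2)
assert np.isclose(c1, 2.0) and np.isclose(c2, -3.0)
assert np.allclose(c1 * v1 + c2 * v2, w)
```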

Orthogonal projection. For vectors in R² or R³, the (orthogonal) projection of w onto v, denoted by proj_v(w), is the vector highlighted in each diagram below for angles 0 ≤ θ ≤ π:

1. Left Plot (0 ≤ θ < π/2): The projection of w onto v is the vector that points in the direction of v and whose length is ‖w‖ cos(θ).
2. Right Plot (π/2 < θ ≤ π): The projection of w onto v is the vector that points in the direction opposite to v and whose length is ‖w‖ cos(π − θ).

When θ = π/2, the projection of w onto v is the zero vector.

Geometry, trigonometry and the relationship v·w = ‖v‖ ‖w‖ cos(θ) can be used to demonstrate that the projection can be computed as follows:

proj_v(w) = ((w·v)/(v·v)) v.

That is, the projection is the scalar multiple cv, where c is the ratio (w·v)/(v·v).

For example, let v = [ … ] and w = [ 6 … ]. Then

(1) proj_v(w) =
(2) w − proj_v(w) =
(3) The inner product of (1) and (2) is
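The projection formula can be wrapped in a small helper; the vectors below are hypothetical.

```python
import numpy as np

def proj(w, v):
    """Orthogonal projection of w onto the line spanned by v."""
    return (w @ v) / (v @ v) * v

v = np.array([2.0, 1.0])
w = np.array([3.0, 4.0])

w_hat = proj(w, v)                      # ((w.v)/(v.v)) v = 2 v = (4, 2)
assert np.allclose(w_hat, [4.0, 2.0])

# The leftover piece w - w_hat is orthogonal to v.
assert np.isclose((w - w_hat) @ v, 0.0)
```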

Orthogonal projection in k-space. Let v and w be vectors in R^k, and let V = Span{v}. Then the (orthogonal) projection of w onto v (equivalently, the projection of w onto the subspace spanned by v) is defined as follows:

proj_v(w) = proj_V(w) = ((w·v)/(v·v)) v.

The projection, ŵ = proj_v(w) = proj_V(w), is an element of the vector space V and satisfies the following properties:

1. Minimum Distance: ŵ is the unique vector in V closest to w.
2. Orthogonality: ŵ is the unique vector in V for which w − ŵ is orthogonal to V.

Thus, we can find the distance between w and V by computing ‖w − ŵ‖.

Problem 6. In each case, find the distance between w and V = Span{v}.

(a) w = [ 8 … ] and V = Span{ [ … ] }    (b) w = [ 7 … ] and V = Span{ [ … ] }

Orthogonal projection onto V. Let V = Span{v1, v2, ..., vp} be the span of an orthogonal set of vectors (a set of mutually orthogonal nonzero vectors) in R^k, and let w ∈ R^k. Then the (orthogonal) projection of w onto V is defined as follows:

ŵ = proj_V(w) = c1 v1 + c2 v2 + ⋯ + cp vp, where ci = (w·vi)/(vi·vi) for each i.

This definition generalizes the case of projection onto the span of a single vector in R^k, and requires that the spanning set be an orthogonal set. Note also that

1. If Vi = Span{vi} for each i, then ŵ is the sum of the projections in each coordinate direction:

ŵ = proj_V(w) = proj_V1(w) + proj_V2(w) + ⋯ + proj_Vp(w).

2. If w ∈ V, then ŵ = w.

Properties of orthogonal projections are stated in the following theorem, and illustrated to the right. In the plot, the horizontal axis represents the vector space V and the vertical axis represents the orthogonal complement V⊥; w is decomposed into the part of w in V and the part in V⊥.

Theorem (Orthogonal Projections). Let V be a subspace of R^k, w be any vector in R^k, and ŵ be the projection of w onto V. Then

1. Orthogonal Decomposition: The difference (w − ŵ) is a vector in V⊥, and the sum w = ŵ + (w − ŵ) is the unique representation of w as the sum of a vector in V and a vector in V⊥. (Thus, we have an orthogonal decomposition of w into the part of the vector in V and the part of the vector in V⊥.)

2. Best Approximation: The vector ŵ is the closest point in V to w.

Problem 7. In each case, find ŵ and (w − ŵ). Note that each V has been written as the span of an orthogonal set.

(a) V = Span{ [ … ] }, w = [ … ]
(b) V = Span{ [ … ], [ … ] }, w = [ … ]
(c) V = Span{ [ … ], [ … ], [ … ] }, w = [ … ]

### 4.2.7 Gram-Schmidt Orthogonalization Process

Let V ⊆ R^k be a p-dimensional subspace and let {x1, x2, ..., xp} be a basis for V. The Gram-Schmidt orthogonalization process allows us to construct an orthogonal basis {v1, v2, ..., vp} for V starting with {x1, x2, ..., xp}. The method is as follows:

1. Let v1 = x1.
2. Let v2 = x2 − proj_V1(x2), where V1 = Span{v1} = Span{x1}.
3. Let v3 = x3 − proj_V2(x3), where V2 = Span{v1, v2} = Span{x1, x2}.
4. Let v4 = x4 − proj_V3(x4), where V3 = Span{v1, v2, v3} = Span{x1, x2, x3}.

And so forth. The final set, {v1, v2, ..., vp}, is an orthogonal basis for V.

Problem 8. In each case, find an orthogonal basis for V.

(a) V = Span{ [ … ], [ … ] }    (b) V = Span{ [ … ], [ … ], [ … ] }
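The steps above can be sketched as a short function (a minimal implementation, assuming the input columns are linearly independent):

```python
import numpy as np

def gram_schmidt(X):
    """Return an orthogonal basis for the column space of X.

    The columns of X are assumed to be linearly independent.
    """
    basis = []
    for x in X.T:
        v = x.astype(float).copy()
        for u in basis:
            v -= (v @ u) / (u @ u) * u   # subtract the projection onto u
        basis.append(v)
    return np.column_stack(basis)

X = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])              # hypothetical basis {x1, x2}
V = gram_schmidt(X)

# v1 = x1, and v2 = x2 - proj_{V1}(x2) is orthogonal to v1.
assert np.allclose(V[:, 0], [1.0, 1.0, 0.0])
assert np.isclose(V[:, 0] @ V[:, 1], 0.0)
```

Subtracting the projection onto each previously constructed v (rather than onto the original x's) is the "modified" Gram-Schmidt variant, which is numerically more stable but algebraically equivalent.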

## 4.3 Least Squares Analysis

### 4.3.1 Best Approximate Solutions; Normal Equations

Let A be an m × n coefficient matrix and assume that Ax = b is inconsistent. We propose to find approximate solutions to the system as follows:

(1) Find the projection b̂ of b onto Col(A), and
(2) Report solutions to the consistent system Ax = b̂.

Observation 1: Since b̂ is as close to b as possible, each approximate solution x makes

‖b − b̂‖ = ‖b − Ax‖

as small as possible. The difference vector is

b − Ax = ( b1 − (a11 x1 + a12 x2 + ⋯ + a1n xn), ..., bm − (am1 x1 + am2 x2 + ⋯ + amn xn) )

and the square of the length of the difference vector is

Σ (from i = 1 to m) ( bi − (ai1 x1 + ai2 x2 + ⋯ + ain xn) )².

Each approximate solution x minimizes the above sum of squared differences. For this reason, the approximate solutions are called least squares solutions.

Observation 2: Since the difference vector (b − b̂) = (b − Ax) lies in (Col(A))⊥, and the orthogonal complement of the column space of A is the null space of the transpose of A,

(Col(A))⊥ = Null(A^T)

by the Fundamental Theorem of Linear Algebra, we know that A^T(b − b̂) = A^T(b − Ax) = O.

Further,

A^T(b − Ax) = O  ⟺  A^T b − A^T Ax = O  ⟺  A^T Ax = A^T b.

Thus, least squares solutions can be found by solving the consistent system on the right (called the normal equation of the system). By using the normal equation, we do not need to find the projection of b onto the column space of A. The following theorem gives the properties of this process:

The Least Squares Theorem. Under the conditions above,

1. x is a least squares solution to Ax = b iff x is a solution to A^T Ax = A^T b.
2. A^T A is invertible iff the columns of A are linearly independent. Thus, there is a unique least squares solution iff the columns of A are linearly independent.

For example, consider the inconsistent system Ax = b where A = [ … ] and b = [ … ]. Computing A^T A and A^T b and solving the normal equation gives the unique least squares solution x = [ … ].

Problem 1. In each case, find the least squares solution(s) to Ax = b.

(a) A = [ … ] and b = [ … ].
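The normal-equation recipe can be checked against NumPy's built-in least squares solver; the inconsistent system below is hypothetical.

```python
import numpy as np

# Hypothetical inconsistent system: three equations, two unknowns.
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0, 2.0])

# Solve the normal equation A^T A x = A^T b.
x_normal = np.linalg.solve(A.T @ A, A.T @ b)

# np.linalg.lstsq minimizes ||b - Ax|| directly; the answers agree.
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)
assert np.allclose(x_normal, x_lstsq)

# The residual b - Ax is orthogonal to Col(A).
assert np.allclose(A.T @ (b - A @ x_normal), 0.0, atol=1e-12)
```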

(b) A = [ … ] and b = [ … ].

(c) A = [ … ] and b = [ … ].

### 4.3.2 Application: Least Squares Analyses of Data

The methodology from the last section can be applied to finding curves of best fit (as minimizing a sum of squared differences).

As a simple illustration, consider four data pairs (x1, y1), ..., (x4, y4). These points lie close to a straight line with equation ŷ = a + bx, as illustrated in the left plot below. The intercept and slope of the line can be found by the method of least squares. Specifically, we start with a system of four linear equations in the two unknowns a and b, convert the system to a matrix equation Ax = b, and find the least squares solution(s) by solving the normal equation A^T Ax = A^T b:

a + b x1 = y1, a + b x2 = y2, a + b x3 = y3, a + b x4 = y4,

that is,

[ 1 x1 ; 1 x2 ; 1 x3 ; 1 x4 ] [ a ; b ] = [ y1 ; y2 ; y3 ; y4 ].

Thus, the least squares regression line is ŷ = . + . x, as shown on the left above. Letting ŷi denote the fitted values and ei = yi − ŷi the residuals for i = 1, 2, 3, 4, a plot of the (ŷi, ei) pairs is shown on the right above.

The following pages contain several applications of this general strategy to real data.
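A line fit of this kind takes a few lines of NumPy; the data pairs below are hypothetical stand-ins for the four pairs in the illustration.

```python
import numpy as np

# Hypothetical data pairs (x_i, y_i).
x = np.array([0.0, 2.0, 8.0, 10.0])
y = np.array([1.0, 2.0, 4.0, 5.0])

# Design matrix for y_hat = a + b x: rows are (1, x_i).
A = np.column_stack([np.ones_like(x), x])
a, b = np.linalg.solve(A.T @ A, A.T @ y)   # normal equation A^T A (a, b) = A^T y

# The residuals are orthogonal to the columns of A.
e = y - (a + b * x)
assert np.allclose(A.T @ e, 0.0, atol=1e-10)
```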

Example: Olympic winning times (Source: Hand et al). Consider data pairs in which xi is the time in years since 1900 and yi is the Olympic winning time in seconds for men in the final round of the meter event. The data cover all Olympic games held between 1900 and 1988. (Note that Olympic games were not held in 1916, 1940, and 1944.) The twenty data pairs lie approximately on a straight line with equation ŷ = a + bx, whose intercept and slope can be estimated by the method of least squares. (The table of the twenty data pairs is omitted here.)

Setting up the matrix equation with rows (1, xi) and solving the normal equation A^T Ax = A^T b gives the least squares regression line ŷ = .898 − . x, as illustrated on the left above. Note that the origin of the plot is not (0, 0). The results suggest that the winning times have decreased at the rate of about . seconds per year during the 88 years of the study.

Letting ŷi denote the fitted values and ei = yi − ŷi for i = 1, 2, ..., 20, a plot of the (ŷi, ei) pairs is shown on the right above.

Example: Brain-body study (Source: Allison & Cicchetti, 1976). As part of a study on sleep in mammals, researchers collected information on the average body weight (in kilograms) and average brain weight (in grams) for different species. Let xi = ln(Average Body Weight) and yi = ln(Average Brain Weight) for each species i. The (xi, yi) pairs lie approximately on a line with equation ŷ = a + bx, whose intercept and slope can be estimated by the method of least squares. Starting with the normal equation A^T Ax = A^T b and solving gives the least squares regression line ŷ = . + .78 x.

As with the earlier examples, the left plot above shows the (xi, yi) pairs superimposed on the least squares regression line, and the right plot shows the (ŷi, ei) pairs.

It is instructive to examine the estimated relationship between average brain and body weights on their original scales. The graph of this relationship is shown to the right. The formula for this curve is (please complete):

Note that Man's brain weight is much larger than expected given the modest body weight, while the Asian Elephant has an enormous body weight and a correspondingly large brain weight.

40 Example: Timber yield study (Source: Hand et al., 1994). As part of a study designed to estimate the volume of a tree (and therefore its yield) given its diameter and height, data were collected on the volume (in cubic feet), diameter at 54 inches above the ground (in inches), and height (in feet) of 31 black cherry trees in the Allegheny National Forest. Let x_{1,i} = ln(Diameter_i), x_{2,i} = ln(Height_i) and y_i = ln(Volume_i) for i = 1, ..., 31. The (x_{1,i}, x_{2,i}, y_i) triples lie approximately on a plane with equation ŷ = a + bx_1 + cx_2, whose coefficients can be estimated using the method of least squares. Starting with the normal equation AᵀAx = Aᵀb and solving gives estimates of roughly â = −6.6, b̂ = 1.98 and ĉ = 1.12, so the least squares regression equation is approximately ŷ = −6.6 + 1.98x_1 + 1.12x_2. The regression equation is plotted on the left above, along with the (x_{1,i}, x_{2,i}, y_i) triples; triples lying under the surface appear slightly lighter in color. The right plot is a plot of the (ŷ_i, e_i) pairs. It is instructive to examine the estimated relationship among diameter, height and volume in their original scales. The graph of this relationship is shown to the right. The formula for this curve is (please complete). Any comments?
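With two predictors, the design matrix simply gains a column, and the same normal equation applies. The sketch below (not from the notes; the triples are invented, not the cherry-tree data) solves the fit both by the normal equation and by NumPy's library least squares routine, which avoids forming AᵀA explicitly:

```python
import numpy as np

# Invented (ln diameter, ln height, ln volume) triples -- not the cherry-tree data.
x1 = np.array([2.1, 2.2, 2.4, 2.5, 2.7, 2.9])
x2 = np.array([4.2, 4.3, 4.2, 4.4, 4.3, 4.5])
y  = np.array([2.4, 2.7, 3.0, 3.4, 3.7, 4.3])

# Design matrix with columns [1, x1, x2].
A = np.column_stack([np.ones_like(x1), x1, x2])

# Normal-equation solution of A^T A x = A^T y ...
coef_normal = np.linalg.solve(A.T @ A, A.T @ y)

# ... and the library least squares routine, which works directly with A.
coef_lstsq, *_ = np.linalg.lstsq(A, y, rcond=None)

y_hat = A @ coef_normal
e = y - y_hat          # residuals, as in the (y-hat_i, e_i) plots
```

A defining property of the least squares solution is that the residual vector is orthogonal to every column of A, i.e. Aᵀe = 0.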

41 Example: Body fat study (Source: Johnson, 1996). As part of a study to determine if the percentage of body fat can be predicted accurately using only a scale and measuring tape, data were collected on 252 men. Let x_{1,i}, x_{2,i}, x_{3,i} and x_{4,i} be the abdomen, wrist, hip and neck circumferences (in centimeters) of the i-th individual, and let y_i be the man's percent body fat measured using an accurate underwater technique. (A table giving the average, minimum and maximum values of the abdomen (x_1), wrist (x_2), hip (x_3), neck (x_4) and body fat (y) variables is omitted here.) Consider fitting a linear function of the form ŷ = a + bx_1 + cx_2 + dx_3 + ex_4, using the method of least squares to estimate the coefficients (a through e). Starting with the normal equation AᵀAx = Aᵀb and solving gives the least squares regression formula ŷ = â + b̂x_1 + ĉx_2 + d̂x_3 + êx_4. The (ŷ_i, e_i) pairs are shown on the left below. Two individuals have been highlighted in the plot:

1. the individual with the largest ŷ_i and smallest e_i, and
2. the individual with the largest e_i.

Any comments? If these two individuals are removed and a new least squares solution is computed, the pattern of errors does not change dramatically, as shown in the right plot above.
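The idea of flagging the individual with the largest residual and refitting without that observation can be sketched as follows. This is a toy single-predictor example with made-up data (not the body fat measurements), in which one point is deliberately placed far off the line:

```python
import numpy as np

# Made-up data; the point at index 4 is planted far off the underlying line.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 3.9, 6.2, 8.0, 15.0, 12.1])

def fit(xv, yv):
    """Least squares line fit via the normal equation; returns (coefficients, residuals)."""
    A = np.column_stack([np.ones_like(xv), xv])
    coef = np.linalg.solve(A.T @ A, A.T @ yv)
    return coef, yv - A @ coef

coef_all, e_all = fit(x, y)
worst = int(np.argmax(np.abs(e_all)))      # index of the largest |e_i|

# Refit with the flagged point removed and compare the error pattern.
mask = np.arange(len(x)) != worst
coef_sub, e_sub = fit(x[mask], y[mask])
```

Here the refit shrinks the largest residual considerably; in the body fat study, by contrast, the notes report that removing the two highlighted individuals does not change the error pattern dramatically.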

42 Footnote: Eigenvalues, Eigenvectors and Least Squares Analysis. Let M = AᵀA be the matrix used in the normal equation for finding least squares solutions to inconsistent systems. M is a symmetric matrix satisfying the following properties:

1. M is a diagonalizable matrix,
2. the eigenvalues of M are nonnegative real numbers,
3. M has an eigenvector basis that is an orthogonal set, and
4. M is invertible if and only if all of its eigenvalues are positive.

If M is invertible, then the least squares solution is unique. Further, as long as the smallest eigenvalue is not too close to zero, the computer will have no trouble finding the unique solution accurately.

Body fat study example, continued. Consider again the body fat study from the last section. M = AᵀA can be written as M = PDP⁻¹, where D is a diagonal matrix of eigenvalues and the columns of P are corresponding eigenvectors. The eigenvalues of M are written in decreasing order along the diagonal of D; all eigenvalues are comfortably greater than zero, implying that the computer had no trouble finding accurate least squares estimates of the coefficients of the prediction formula.

Finally, to improve the accuracy of least squares estimates in situations where eigenvalues may be close to zero (and M may be close to singular), practitioners use a singular value decomposition of M before trying to find the estimates.
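The four properties listed above can be checked numerically. A minimal NumPy sketch (not from the notes), using a random matrix A rather than the body fat design matrix; note that NumPy's symmetric eigensolver `eigh` returns the eigenvalues in increasing rather than decreasing order:

```python
import numpy as np

# Any real A will do here; M = A^T A is then symmetric positive semidefinite.
rng = np.random.default_rng(0)
A = rng.standard_normal((10, 4))
M = A.T @ A

# eigh handles symmetric matrices: eigenvalues come back in increasing order,
# and the columns of P form an orthonormal eigenvector basis with M = P D P^T.
eigvals, P = np.linalg.eigh(M)
D = np.diag(eigvals)

assert np.all(eigvals >= -1e-12)            # eigenvalues are nonnegative
assert np.allclose(P.T @ P, np.eye(4))      # eigenvector basis is orthonormal
assert np.allclose(P @ D @ P.T, M)          # M is diagonalizable: M = P D P^T

# M is invertible (unique least squares solution) iff the smallest eigenvalue
# is positive; the ratio of largest to smallest eigenvalue indicates how
# accurately the normal equations can be solved.
condition = eigvals[-1] / eigvals[0]
```

A large value of `condition` signals that M is close to singular, which is exactly the situation in which the singular value decomposition mentioned above is preferred.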
