
MT210 Notebook 4, Fall Semester, prepared by Professor Jenny Baglivo. Copyright by Jenny A. Baglivo. All Rights Reserved.

Contents

4 MT210 Notebook 4
4.1 Eigenvalues and Eigenvectors
4.1.1 Definitions; Graphical Illustrations
4.1.2 Eigenspaces, Characteristic Polynomials, Characteristic Equations
4.1.3 Eigenanalysis and Powers; Eigenvector Bases; Special Cases
4.1.4 Fundamental Theorem of Algebra, Complex Numbers and Eigenvalues
4.1.5 Algebraic and Geometric Multiplicity; More About Eigenvector Bases
4.1.6 Similar Matrices, Diagonalizable Matrices
4.1.7 Applications: Population Projections and Stochastic Matrices
4.2 Orthogonality and Orthogonal Projections
4.2.1 Inner Product, Length, Distance
4.2.2 Properties of Inner Product
4.2.3 Orthogonal, Orthogonal Sets, Orthogonal Complement
4.2.4 Fundamental Theorem of Linear Algebra
4.2.5 Orthogonal Spanning Sets
4.2.6 Angles, Inner Products, and Orthogonal Projections
4.2.7 Gram-Schmidt Orthogonalization Process
4.3 Least Squares Analysis
4.3.1 Best Approximate Solutions; Normal Equations
4.3.2 Application: Least Squares Analyses of Data
4.3.3 Footnote: Eigenvalues, Eigenvectors and Least Squares Analysis


MT210 Notebook 4

This notebook is concerned with further matrix concepts and their applications. In particular, we will study eigenvalues, eigenvectors, orthogonality and least squares. The notes correspond to material in Chapters 5 and 6 of the Lay textbook.

4.1 Eigenvalues and Eigenvectors

4.1.1 Definitions; Graphical Illustrations

Let A be a square matrix of order n, and let λ ("lambda") be a scalar.

1. λ is said to be an eigenvalue of A if Ax = λx for some nonzero vector x.
2. If x ≠ O satisfies Ax = λx, then x is an eigenvector of A with eigenvalue λ.

Note that if x is an eigenvector, then so is cx for each nonzero c ∈ R, since A(cx) = c(Ax) = c(λx) = λ(cx). Thus, the nonzero elements of Span{x} are all eigenvectors of A.

Eigenvalues are exceptional values and eigenvectors are exceptional vectors. The prefix "eigen" comes from the German language, meaning "owned by" or "peculiar to."

Example 1. Let A be a 2-by-2 diagonal matrix whose first diagonal entry a11 lies strictly between 0 and 1 and whose second diagonal entry a22 is greater than 1. Then

1. λ1 = a11 is an eigenvalue of A, with corresponding eigenvector e1;
2. λ2 = a22 is an eigenvalue of A, with corresponding eigenvector e2.

Consider the transformation with rule T(x) = Ax. Then

1. T(x) contracts points in the x-direction; in particular, T(e1) = a11 e1;
2. T(x) expands points in the y-direction; in particular, T(e2) = a22 e2;
3. T(x) maps the unit circle to the ellipse shown in the plot.
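To make the definition Ax = λx concrete, here is a minimal numerical sketch. It assumes NumPy is available, and the diagonal entries below are illustrative choices, not necessarily the values used in Example 1.

```python
import numpy as np

# Hypothetical diagonal matrix: contracts the x-direction, expands the y-direction.
A = np.array([[0.5, 0.0],
              [0.0, 1.5]])

e1 = np.array([1.0, 0.0])
e2 = np.array([0.0, 1.0])

# Check the eigenvalue/eigenvector definition: A x = lambda x for a nonzero x.
print(A @ e1)    # [0.5, 0.0] = 0.5 * e1, so lambda_1 = 0.5
print(A @ e2)    # [0.0, 1.5] = 1.5 * e2, so lambda_2 = 1.5

# Any nonzero multiple of an eigenvector is again an eigenvector (same eigenvalue).
c = -3.0
print(np.allclose(A @ (c * e1), 0.5 * (c * e1)))   # True
```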

Example 2. Let A = [ 0.6 0.1 ; 0.4 0.9 ]. Then

1. λ1 = 0.5 is an eigenvalue of A, with corresponding eigenvector v1 = (1/√2, −1/√2);
2. λ2 = 1 is an eigenvalue of A, with corresponding eigenvector v2 = (1/√17, 4/√17).

Consider the transformation with rule T(x) = Ax. Then

1. T(x) contracts points in the v1-direction; in particular, T(v1) = 0.5 v1;
2. T(x) leaves points in the v2-direction fixed; in particular, T(v2) = v2;
3. T(x) maps the unit circle to the ellipse shown in the plot.

The complete analysis of Example 2 will be carried out in the next section.

Note that a square matrix of order n with values in the real numbers (a_ij ∈ R for all i, j) may not have eigenvalues λ ∈ R and eigenvectors x ∈ R^n. For example, matrices A corresponding to rotations around the origin,

A = [ cos θ  −sin θ ; sin θ  cos θ ], where θ ≠ mπ for every integer m,

leave no direction in 2-space fixed.

4.1.2 Eigenspaces, Characteristic Polynomials, Characteristic Equations

Let A be a square matrix of order n, and let λ be a scalar.

1. Eigenspace: If λ is an eigenvalue of A, then the eigenspace of λ is the set containing all x satisfying Ax = λx:

Eigenspace(λ) = {x : Ax = λx}.

The eigenspace of λ contains all eigenvectors with eigenvalue λ and the zero vector. Since

Ax = λx = λ I_n x  ⟺  Ax − λ I_n x = O  ⟺  (A − λ I_n) x = O,

we know that Eigenspace(λ) = Null(A − λ I_n). Further, since the eigenspace of λ must have positive dimension, we know that the matrix (A − λ I_n) must be singular.

2. Characteristic Polynomial: The expression det(A − λI) is an nth degree polynomial in the variable λ, and is called the characteristic polynomial of A.

3. Characteristic Equation: The equation det(A − λI) = 0 is called the characteristic equation of A. To find the eigenvalues of A we solve the characteristic equation for λ.

Example 2, continued. Let A = [ 0.6 0.1 ; 0.4 0.9 ], as above.

(1) Since A − λI = [ 0.6 − λ  0.1 ; 0.4  0.9 − λ ], the characteristic polynomial is

det(A − λI) = (0.6 − λ)(0.9 − λ) − (0.1)(0.4) = 0.54 − 1.5λ + λ² − 0.04 = λ² − 1.5λ + 0.5.

Since det(A − λI) = (λ − 0.5)(λ − 1), the solutions to the characteristic equation are the eigenvalues 0.5 and 1 listed earlier.

(2) Let λ = 0.5. Since A − λI = [ 0.1 0.1 ; 0.4 0.4 ] and

[ A − λI | O ] = [ 0.1 0.1 0 ; 0.4 0.4 0 ] ~ [ 1 1 0 ; 0 0 0 ]  ⟹  x = x2 [ −1 ; 1 ], where x2 is free,

we know that

Eigenspace(0.5) = Null(A − 0.5 I) = Span{ [ −1 ; 1 ] }.

Note that v1 = (1/√2) [ 1 ; −1 ] ∈ Eigenspace(0.5).

(3) Let λ = 1. Since A − λI = [ −0.4 0.1 ; 0.4 −0.1 ] and

[ A − λI | O ] = [ −0.4 0.1 0 ; 0.4 −0.1 0 ] ~ [ 1 −1/4 0 ; 0 0 0 ]  ⟹  x = x2 [ 1/4 ; 1 ], where x2 is free,

we know that

Eigenspace(1) = Null(A − I) = Span{ [ 1/4 ; 1 ] } = Span{ [ 1 ; 4 ] }.

Note that v2 = (1/√17) [ 1 ; 4 ] ∈ Eigenspace(1).
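A quick numerical cross-check of the eigenvalue and eigenspace computation in Example 2 is sketched below, using the matrix entries as reconstructed above; NumPy is assumed.

```python
import numpy as np

# Matrix from Example 2 (entries as reconstructed above).
A = np.array([[0.6, 0.1],
              [0.4, 0.9]])

# Characteristic polynomial det(A - lambda I) = lambda^2 - trace*lambda + det.
trace, det = np.trace(A), np.linalg.det(A)
print(np.roots([1.0, -trace, det]))       # roots ~ 1.0 and 0.5

# Eigenvalues and unit eigenvectors directly.
eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)                        # 0.5 and 1.0, in some order
print(eigenvectors)                       # columns are unit eigenvectors

# Each column v spans the corresponding eigenspace: A v = lambda v.
for lam, v in zip(eigenvalues, eigenvectors.T):
    print(np.allclose(A @ v, lam * v))    # True, True
```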

Problem 1. Let A = [ 0.8 · ; · 0.7 ]. Determine the eigenvalues of A, and write each eigenspace as the span of a set of vectors.

Problem 2. Let A = [ · · ; · · ]. Determine the eigenvalues of A, and write each eigenspace as the span of a set of vectors.

Problem 3. Let A = [ 0.8 · · ]. Determine the eigenvalues of A, and write each eigenspace as the span of a set of vectors.

4.1.3 Eigenanalysis and Powers; Eigenvector Bases; Special Cases

Conducting an eigenanalysis (that is, finding eigenvalues and eigenvectors) can be challenging. The following is an initial list of useful theorems for eigenanalysis; a numerical sketch of the second theorem follows the list.

1. Powers: If λ is an eigenvalue of A with eigenvector x and k is a positive integer, then x is an eigenvector of A^k with corresponding eigenvalue λ^k.

2. Bases and Powers: Let A be a square matrix of order n. Suppose that v_i is an eigenvector of A with corresponding eigenvalue λ_i, for i = 1, 2, ..., n, and the set {v_1, v_2, ..., v_n} is a basis for R^n. Then, for every vector x and positive integer k, A^k x can be computed quickly using the unique representation of x in the eigenvector basis {v_1, v_2, ..., v_n}. Specifically, if x = c_1 v_1 + ⋯ + c_n v_n for unique constants c_i, then

A^k x = A^k (c_1 v_1 + ⋯ + c_n v_n) = c_1 (A^k v_1) + ⋯ + c_n (A^k v_n) = c_1 (λ_1)^k v_1 + ⋯ + c_n (λ_n)^k v_n.

3. Diagonal Matrices: Let A be a diagonal matrix of order n. Then e_i is an eigenvector of A with eigenvalue a_ii, for i = 1, 2, ..., n. Thus, the standard basis for R^n is an eigenvector basis for the diagonal matrix A, and the eigenvalues are the diagonal elements of A.

4. Distinct Eigenvalues: Let A be a square matrix of order n. If A has n distinct eigenvalues, then A has an eigenvector basis. To construct an eigenvector basis, choose one nonzero vector from each eigenspace.

5. Triangular Matrices: Let A be a triangular matrix of order n. Then

det(A − λI) = (a_11 − λ)(a_22 − λ) ⋯ (a_nn − λ)

and the eigenvalues of A are a_11, a_22, ..., a_nn. (There may be repeats in the list.)
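The sketch below checks the "Bases and Powers" identity numerically: A^k x computed from eigenvector-basis coordinates agrees with the direct matrix power. The matrix is the Example 2 matrix as reconstructed above, and the vector x is an arbitrary illustrative choice.

```python
import numpy as np

A = np.array([[0.6, 0.1],
              [0.4, 0.9]])           # Example 2 matrix (entries as reconstructed)
lam, V = np.linalg.eig(A)            # columns of V form an eigenvector basis of R^2

x = np.array([3.0, -2.0])            # arbitrary vector (illustrative choice)
c = np.linalg.solve(V, x)            # coordinates of x in the eigenvector basis

k = 10
# A^k x = c_1 lambda_1^k v_1 + c_2 lambda_2^k v_2
via_basis = V @ (c * lam**k)
direct = np.linalg.matrix_power(A, k) @ x
print(np.allclose(via_basis, direct))   # True
```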

General application: projections over time. A general application of eigenanalysis is to the analysis of projections over time. In this type of application,

1. x_0 represents information at time 0,
2. x_1 = A x_0 represents information at time 1,
3. x_2 = A x_1 = A² x_0 represents information at time 2, and so forth.

If A has an eigenvector basis, then information at time k is

x_k = A^k x_0 = c_1 (λ_1)^k v_1 + ⋯ + c_n (λ_n)^k v_n, where x_0 = c_1 v_1 + ⋯ + c_n v_n.

We will see an important application of this methodology in Section 4.1.7.

As a simple illustration, consider A = [ 0.6 0.1 ; 0.4 0.9 ] once again. With

v_1 = (1/√2) [ 1 ; −1 ], v_2 = (1/√17) [ 1 ; 4 ], λ_1 = 0.5, λ_2 = 1, and x_0 = c_1 v_1 + c_2 v_2,

x_k = A^k x_0 = c_1 (0.5)^k v_1 + c_2 (1)^k v_2 → c_2 v_2 as k → ∞.

Thus, information at time k is approximately equal to the v_2-component of information at time 0 when k is large.

Problem 4. Use the definitions of eigenvalue and eigenvector, and properties of matrices, to prove the following special case of the first theorem listed on the previous page: Let A be a square matrix of order n, and let x be an eigenvector of A with eigenvalue λ. Demonstrate that x is an eigenvector of A² with eigenvalue λ².

Problem 5. Let A be a square matrix of order n, assume that A has n distinct eigenvalues, and let v_i ∈ Eigenspace(λ_i) be a nonzero vector, for each i. Show that {v_1, v_2, ..., v_n} is a linearly independent set, thus forming a basis for R^n.

Problem 6. The following 3-by-3 triangular matrices each have the same three eigenvalues, one of which is repeated:

(a) A = ⋯ ;  (b) A = ⋯ .

In each case, write the eigenspace of each distinct eigenvalue as the span of a set of vectors.

4.1.4 Fundamental Theorem of Algebra, Complex Numbers and Eigenvalues

Let A be a square matrix of order n. To find the eigenvalues of A we need to solve the characteristic equation, which requires that we factor the characteristic polynomial, det(A − λI). By the fundamental theorem of algebra, the characteristic polynomial can always be factored into n linear terms if we allow both real and complex numbers:

det(A − λI) = (λ_1 − λ)(λ_2 − λ) ⋯ (λ_n − λ), where each λ_i ∈ C.

The eigenvalues are λ_1, λ_2, ..., λ_n. In general, not all λ_i's are distinct.

For example, consider a 3-by-3 matrix A whose characteristic polynomial reduces to

det(A − λI) = (c − λ)(λ² + 36) = (c − λ)(6i − λ)(−6i − λ)

for a real number c. Then det(A − λI) = 0 implies λ = c, 6i, −6i, and the eigenvalues of A are c, 6i and −6i. Further,

Eigenspace(c) = Null(A − cI), Eigenspace(6i) = Null(A − 6iI), Eigenspace(−6i) = Null(A + 6iI),

and each eigenspace is the span of a single vector; the eigenvectors for 6i and −6i have complex entries.

Note that matrices with complex eigenvalues and complex eigenvectors are common in applied mathematics. Examples include population projection matrices (see Section 4.1.7).

4.1.5 Algebraic and Geometric Multiplicity; More About Eigenvector Bases

Let A be a square matrix of order n and let λ_0 be an eigenvalue of A. Then

1. Algebraic Multiplicity: The algebraic multiplicity of λ_0 is the number of times (λ_0 − λ) appears as a factor of the characteristic polynomial.

2. Geometric Multiplicity: The geometric multiplicity of λ_0 is the dimension of Eigenspace(λ_0).

Note that, by the fundamental theorem of algebra, the sum of the algebraic multiplicities of the eigenvalues of A must be n.
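Complex eigenvalues are easy to see numerically. The sketch below uses the rotation matrices mentioned in Section 4.1.1; the angle is an arbitrary illustrative choice, and NumPy is assumed.

```python
import numpy as np

# A rotation of the plane (theta not a multiple of pi) has no real eigenvalues;
# its eigenvalues are the complex conjugate pair cos(theta) +/- i sin(theta).
theta = np.pi / 3
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)                      # approximately 0.5 + 0.866j and 0.5 - 0.866j
print(np.abs(eigenvalues))              # both have modulus 1

# The eigenvectors are complex as well; the defining relation still holds.
for lam, v in zip(eigenvalues, eigenvectors.T):
    print(np.allclose(A @ v, lam * v))  # True, True
```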

Problem 6, continued. Fill in the table below with information for the triangular matrices from the previous problem; for each matrix, record the geometric multiplicity and the algebraic multiplicity of each of its eigenvalues:

(a) Geometric multiplicity of λ = __ ; algebraic multiplicity of λ = __ (one row per eigenvalue).
(b) Geometric multiplicity of λ = __ ; algebraic multiplicity of λ = __ (one row per eigenvalue).

More on finding eigenvector bases. Here are two additional theorems that are useful for doing eigenanalyses:

1. Algebraic and Geometric Multiplicity: Let A be a square matrix of order n and let λ_0 be an eigenvalue of A. Then the algebraic and geometric multiplicities of λ_0 must satisfy the following inequalities:

1 ≤ Geometric Multiplicity of λ_0 ≤ Algebraic Multiplicity of λ_0.

(If the geometric multiplicity is strictly less than the algebraic multiplicity, then there is a deficiency of eigenvectors and we won't be able to find an eigenvector basis.)

2. Pooling Eigenspace Bases: If A has p distinct eigenvalues (λ_i for i = 1, 2, ..., p) and B_i is a basis for the eigenspace of λ_i for each i, then the set B_1 ∪ B_2 ∪ ⋯ ∪ B_p is a linearly independent set. (If you pool the bases, you get a linearly independent set.)

Problem 6, continued. Does either of the matrices in Problem 6 have an eigenvector basis for R³? If yes, explicitly write down a basis. If no, explain why.

4.1.6 Similar Matrices, Diagonalizable Matrices

Similar matrices: Let A and B be square matrices of order n. Then A and B are said to be similar if there exists an invertible matrix P satisfying A = P B P⁻¹ (and B = P⁻¹ A P). If A and B are similar matrices, then

1. they have the same determinant, det(A) = det(B),
2. they have the same eigenvalues, and
3. their kth powers satisfy A^k = P B^k P⁻¹, for each positive integer k.

The factorization A = P B P⁻¹ is useful when B is easier to work with than A.

Diagonalizable matrices: Let A be a square matrix of order n. Then A is said to be diagonalizable if it is similar to a diagonal matrix, that is, if A = P D P⁻¹ where D is diagonal and P is invertible. The following theorem tells us exactly when A is diagonalizable.

Theorem (Diagonalization). Let A be a square matrix of order n. Then A is diagonalizable if and only if A has n linearly independent eigenvectors. In fact, A = P D P⁻¹ iff the columns of P are n linearly independent eigenvectors of A and the diagonal entries of D are the corresponding eigenvalues.

For example,

1. the matrix of Example 2 satisfies A = P D P⁻¹ with P = [ 1 1 ; −1 4 ] (columns are eigenvectors for 0.5 and 1) and D = [ 0.5 0 ; 0 1 ];

2. the 3-by-3 matrix of Section 4.1.4 with eigenvalues c, 6i and −6i satisfies A = P D P⁻¹, where the columns of P are eigenvectors for c, 6i and −6i (the last two have complex entries) and D is the diagonal matrix with diagonal entries c, 6i, −6i.
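A numerical sketch of the diagonalization theorem, again using the Example 2 matrix as reconstructed above: the columns of P returned by the eigen-solver are linearly independent eigenvectors, D holds the corresponding eigenvalues, and A = P D P⁻¹ with A^k = P D^k P⁻¹.

```python
import numpy as np

A = np.array([[0.6, 0.1],
              [0.4, 0.9]])                 # Example 2 matrix (entries as reconstructed)

lam, P = np.linalg.eig(A)                  # columns of P: linearly independent eigenvectors
D = np.diag(lam)                           # diagonal matrix of corresponding eigenvalues

print(np.allclose(A, P @ D @ np.linalg.inv(P)))             # True: A = P D P^{-1}

k = 5
print(np.allclose(np.linalg.matrix_power(A, k),
                  P @ np.diag(lam**k) @ np.linalg.inv(P)))  # True: A^k = P D^k P^{-1}
```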

Problem 3, continued. The square matrix A = [ 0.8 · · ] from Problem 3 is diagonalizable. Use the work you did there to find matrices P and D so that A = P D P⁻¹.

Diagonalization and Transformations. Suppose that A = P D P⁻¹, where

P = [ v_1 v_2 ⋯ v_n ] and D = diag(λ_1, λ_2, ..., λ_n),

and let B = {v_1, v_2, ..., v_n} be the basis whose elements are the columns of P. Then the action of A is to (1) change from the standard basis of R^n to basis B; (2) operate as a diagonal matrix in basis B; and (3) change back to the standard basis for interpretation:

x = Σ_{i=1}^n c_i v_i  ⟶(A)⟶  Ax = Σ_{i=1}^n λ_i c_i v_i,
[x]_B = (c_1, c_2, ..., c_n)  ⟶(D)⟶  [Ax]_B = (λ_1 c_1, λ_2 c_2, ..., λ_n c_n).

Similarly, the action of A^k is to (1) change from the standard basis of R^n to basis B; (2) operate as the kth power of a diagonal matrix in basis B; and (3) change back to the standard basis for interpretation:

x = Σ_{i=1}^n c_i v_i  ⟶(A^k)⟶  A^k x = Σ_{i=1}^n λ_i^k c_i v_i,
[x]_B = (c_1, c_2, ..., c_n)  ⟶(D^k)⟶  [A^k x]_B = (λ_1^k c_1, λ_2^k c_2, ..., λ_n^k c_n).

4.1.7 Applications: Population Projections and Stochastic Matrices

This section contains two applications of eigenanalysis.

I. Population projections, and the northern spotted owl (Source: Lay text). Researchers used demographic data for the northern spotted owl to develop a stage-matrix model using three life stages (juvenile, subadult and adult). Their goal was to track the population growth/decline of the owl in a particular old growth forest in the Pacific northwest.

If j_i is the number of juveniles, s_i is the number of subadults and a_i is the number of adults in the population at time i, then x_i = (j_i, s_i, a_i) is the population vector at time i, and the total population at time i is the sum of the components (j_i + s_i + a_i).

The matrix that allows you to project one year ahead is

A = [ 0 0 0.33 ; 0.18 0 0 ; 0 0.71 0.94 ].

Given x_i, the population vector at time (i + 1) is

x_{i+1} = A x_i = ( 0.33 a_i , 0.18 j_i , 0.71 s_i + 0.94 a_i ) = ( j_{i+1} , s_{i+1} , a_{i+1} ).

Matrices used in population problems are generally diagonalizable, with both real and complex eigenvalues. For this problem, we can write A = P D P⁻¹, where the diagonal elements of D are the eigenvalues of A: one real eigenvalue of approximately 0.98, and a pair of complex conjugate eigenvalues of modulus less than 1. Since the kth powers of all three eigenvalues converge to 0 as k → ∞, we know that the population will eventually crash given any initial population vector.
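A sketch of the stage-matrix projection x_{i+1} = A x_i is given below. The entries are the stage-matrix values as reconstructed above from Lay's spotted-owl case study, and the initial population vector is an illustrative choice; NumPy is assumed.

```python
import numpy as np

# Stage matrix for (juvenile, subadult, adult), entries as reconstructed above.
A = np.array([[0.00, 0.00, 0.33],
              [0.18, 0.00, 0.00],
              [0.00, 0.71, 0.94]])

x = np.array([100.0, 100.0, 100.0])        # illustrative initial population vector

for year in range(50):
    x = A @ x                              # project one year: x_{i+1} = A x_i

print(x.sum())                             # total population after 50 years (it declines)
print(np.max(np.abs(np.linalg.eigvals(A))))   # dominant eigenvalue, roughly 0.98 < 1
```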

The author tells us that, if one entry of the A matrix were changed to the proportion that would be appropriate for this species in a different location, then the population would grow. The new matrix (the one with the altered entry) would be diagonalizable with one real eigenvalue greater than 1 and a pair of complex conjugate eigenvalues of modulus less than 1. Since the kth power of the dominant eigenvalue grows without bound while the kth powers of the other two eigenvalues converge to 0 as k → ∞, the statement the author makes is correct.

I simulated population growth over time, starting with a fixed initial population and using both the true matrix and the matrix with the altered entry. The results are summarized in the plot below.

1. Left Plot: Using the true matrix, the total population declined over time.
2. Right Plot: Using the altered matrix, the total population increased over time with approximately geometric growth. Geometric growth kicks in when k is large enough so that the kth powers of the last two eigenvalues are close to zero.

II. Stochastic matrices, moving cars, and searching the internet. A probability vector is one whose entries are nonnegative real numbers with sum 1. A stochastic matrix is a square matrix whose columns are probability vectors. Stochastic matrices are used to model population movement over time, where individuals move among n different locations. The following simple example considers the movement of rental cars over time.

Example. A car rental agency has three rental locations (1, 2, 3). A customer may rent a car from any of the three locations and return the car to any of the three locations. From past experience, management knows, for each rental location, the probabilities that a car rented there is returned to locations 1, 2 and 3.

Suppose that we would like to determine the probabilities that a car initially rented from a given location (either 1, 2 or 3) will be at locations 1, 2, 3 after k rental periods. Let A be the 3-by-3 stochastic matrix whose columns are the return probabilities listed above (one column per rental location), let a_i, b_i and c_i be the probabilities that the car is at locations 1, 2, 3 after i rental periods, and let x_i = (a_i, b_i, c_i) be the vector whose components are these probabilities.

The matrix A can be used to project one rental period. That is, x_{i+1} = A x_i for each i. The starting location vectors x_0 are (1, 0, 0) for location 1, (0, 1, 0) for location 2, and (0, 0, 1) for location 3.

We can write A = P D P⁻¹, where D = diag(1, λ_2, λ_3) with |λ_2| < 1 and |λ_3| < 1, P = [ v_1 v_2 v_3 ], and the first column v_1 of P has nonnegative terms with sum 1.

Note that 1^k = 1, while (λ_2)^k → 0 and (λ_3)^k → 0 as k → ∞. If x_0 corresponds to the location 1 vector, for example, then x_0 can be written as v_1 plus a combination of v_2 and v_3, so x_k = A^k x_0 ≈ v_1 for large k. In fact, x_k ≈ v_1 after only a few rental periods:

(Table: the components a_k, b_k, c_k of x_k for k = 0, 1, 2, ..., 11.)

Similarly, if x_0 corresponds to the location 2 vector, then x_0 is v_1 plus a combination of v_2 and v_3 and x_k = A^k x_0 ≈ v_1 for large k; the same is true if x_0 corresponds to the location 3 vector. Thus, if k is large, the probabilities that a car initially at any one of the three locations will be at locations 1, 2, 3 after k rental periods are approximately the components of v_1.

Surfing the web. Now, imagine yourself surfing the web starting from some initial location and randomly following hyperlinks. Assuming an appropriate A matrix can be created and analyzed as above, the probability that you will be at a given location after a sufficient number of steps can be determined. The designers of Google use the eventual probabilities to determine the order in which the results of a search are reported; specifically, webpages with higher probabilities are listed before those with lower probabilities. Their A matrix uses the hyperlink structure of the web and some proprietary information.

4.2 Orthogonality and Orthogonal Projections

4.2.1 Inner Product, Length, Distance

Let v and w be vectors in R^k. Then

1. Inner product: The inner product (dot product) of v and w is the number

v · w = v^T w = v_1 w_1 + v_2 w_2 + ⋯ + v_k w_k.

2. Length: The length of v is the number

‖v‖ = √(v · v) = √(v_1² + v_2² + ⋯ + v_k²).

3. Distance: The distance between v and w is the length of the difference vector v − w:

dist(v, w) = ‖v − w‖ = √((v_1 − w_1)² + (v_2 − w_2)² + ⋯ + (v_k − w_k)²).

Further, a unit vector is a vector of length one. If v ≠ O, then u = (1/‖v‖) v is the unit vector in the direction of v.

For example, if v = [ ⋯ ] and w = [ ⋯ ], then

(1) v · w =
(2) The length of v is
(3) The unit vector in the direction of v is
(4) The distance between v and w is

4.2.2 Properties of Inner Product

Let u, v, w ∈ R^k and c ∈ R. Then

1. Commutative: v · w = w · v.
2. Scalars: (cv) · w = v · (cw) = c(v · w).
3. Distributive: (u + v) · w = (u · w) + (v · w) and w · (u + v) = (w · u) + (w · v).
4. Nonnegative: v · v ≥ 0. Further, v · v = 0 iff v = O.

Problem 1. Let v_1, v_2 and w be vectors in R^k, and suppose that v_1 · w = 0 and v_2 · w = 0. Use the properties of inner product to demonstrate that v · w = 0 for every v ∈ Span{v_1, v_2}.
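The definitions in Section 4.2.1 translate directly into a few lines of numerical code. The vectors below are illustrative choices (not the ones in the fill-in example above), and NumPy is assumed.

```python
import numpy as np

v = np.array([1.0, 2.0, 2.0])
w = np.array([3.0, 0.0, -1.0])

print(v @ w)                          # inner product v . w = v^T w
print(np.sqrt(v @ v))                 # length ||v|| (same as np.linalg.norm(v))
print(v / np.linalg.norm(v))          # unit vector in the direction of v
print(np.linalg.norm(v - w))          # distance dist(v, w) = ||v - w||
```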

4.2.3 Orthogonal, Orthogonal Sets, Orthogonal Complement

The concept of orthogonality is important in applications.

1. Orthogonal: Let v and w be vectors in R^k. Then v and w are said to be orthogonal if their inner product is zero: v · w = 0.

2. Orthogonal Set: Let v_1, v_2, ..., v_p be vectors in R^k. Then {v_1, v_2, ..., v_p} is said to be an orthogonal set if v_i ≠ O for all i, and v_i · v_j = 0 when i ≠ j.

3. Orthogonal Complement: Let V be a subspace of R^k. The orthogonal complement of V, denoted by V⊥ ("V-perp"), is the collection of all vectors orthogonal to V:

V⊥ = {w : w is orthogonal to each v ∈ V}.

Problem 2. Let v_1 = [ a ⋯ ], v_2 = [ ⋯ ] and v_3 = [ ⋯ b ]. Find values of a, b so that {v_1, v_2, v_3} is an orthogonal set.

Properties of orthogonal complements. Let V be a subspace of R^k and let V⊥ be its orthogonal complement. Then

1. Subspace: The orthogonal complement of V is a subspace of R^k. Further, V ∩ V⊥ = {O}, since O is the only vector satisfying x · x = 0.

2. Orthogonal Complement of V⊥: The orthogonal complement of V⊥ is V: (V⊥)⊥ = V.

3. Spanning Sets: Suppose that V = Span{v_1, v_2, ..., v_p}. Then w ∈ V⊥ if and only if w · v_i = 0 for i = 1, 2, ..., p.

4. Pooling Bases: If B_1 is a basis for V and B_2 is a basis for V⊥, then the union of the bases, B_1 ∪ B_2, is a basis for R^k.

To illustrate orthogonal complements, let V = Span{v} be a line through the origin in R², and let w = (w_1, w_2). Setting 0 = w · v and solving for w_1 in terms of the free variable w_2 shows that V⊥ is also a line through the origin, spanned by a single vector orthogonal to v, as shown in the plot.

Problem 3. In each case, write V⊥ as a span.

(a) V = Span{ ⋯ , ⋯ }

(b) V = Span{ ⋯ }

4.2.4 Fundamental Theorem of Linear Algebra

Let A be an m × n matrix and let A^T be its transpose. The following theorem, known as the fundamental theorem of linear algebra, gives important relationships among the four subspaces related to A and its transpose.

Fundamental Theorem of Linear Algebra. Let A be an m × n matrix. Then

1. Null(A) and Row(A) = Col(A^T) are orthogonal complements in R^n.
2. Null(A^T) and Row(A^T) = Col(A) are orthogonal complements in R^m.

Further, if rank(A) = r, then

3. dim(Col(A)) = dim(Col(A^T)) = r,
4. dim(Null(A)) = n − r, and
5. dim(Null(A^T)) = m − r.

Proof of first statement: It is instructive to demonstrate the first statement in the theorem. Let

A^T = [ α_1 α_2 ⋯ α_m ] and A = [ α_1^T ; α_2^T ; ⋯ ; α_m^T ],

so that the α_i are the columns of A^T (equivalently, the rows of A written as column vectors). Now (complete the proof),

Ax = O  ⟺  ( α_1 · x, α_2 · x, ..., α_m · x ) = O  ⟺  ...
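Below is a numerical check of the two orthogonality statements in the theorem, using a small illustrative matrix. It assumes SciPy is available for its null_space helper; the matrix itself is an arbitrary rank-2 example.

```python
import numpy as np
from scipy.linalg import null_space

# Illustrative 3x4 matrix of rank 2 (third row = first row + second row).
A = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 1.0],
              [1.0, 3.0, 1.0, 2.0]])

N = null_space(A)          # columns: an orthonormal basis for Null(A), dimension n - r
print(np.linalg.matrix_rank(A), N.shape)   # 2, (4, 2)

# Every row of A is orthogonal to every vector in Null(A):
print(np.allclose(A @ N, 0))               # True: Row(A) is orthogonal to Null(A)

# Same statement for A^T: Col(A) is the orthogonal complement of Null(A^T).
M = null_space(A.T)
print(np.allclose(A.T @ M, 0))             # True
```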

Problem 4. Find bases for Null(A), Col(A), Null(A^T) and Col(A^T), where A = ⋯ .

4.2.5 Orthogonal Spanning Sets

The following theorem tells us that a set of mutually orthogonal nonzero vectors is linearly independent. Further, the coordinates of a vector w with respect to a basis of mutually orthogonal nonzero vectors can be found quickly using dot products.

Theorem (Orthogonal Spanning Sets). Let {v_1, v_2, ..., v_p} be an orthogonal set of vectors in R^k and let V = Span{v_1, v_2, ..., v_p}. Then

1. {v_1, v_2, ..., v_p} is a basis for V.
2. If w ∈ V, then w = c_1 v_1 + ⋯ + c_p v_p, where c_i = (w · v_i)/(v_i · v_i) for each i.

Problem 5. In each case, use dot products to find the coordinates of the vector w with respect to the given orthogonal basis.

(a) V = Span{v_1, v_2} = Span{ ⋯ , ⋯ }, and w = ⋯ .

(b) V = Span{v_1, v_2, v_3} = Span{ ⋯ , ⋯ , ⋯ }, and w = ⋯ .

4.2.6 Angles, Inner Products, and Orthogonal Projections

Angle between v and w. Let v and w be vectors in R² or R³ represented as directed line segments beginning at the origin. The angle between v and w, θ, is the smaller of the two angles at the origin determined by v and w. The angle θ lies in the interval [0, π]. The following plots illustrate angles satisfying 0 < θ < π/2 (left) and π/2 < θ < π (right).

Analytic geometry can be used to demonstrate that

v · w = ‖v‖ ‖w‖ cos(θ).

Note, in particular, that if θ = π/2, then v · w = 0.
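The identity v · w = ‖v‖ ‖w‖ cos(θ) gives a direct way to compute the angle numerically. A small sketch with illustrative vectors (NumPy assumed):

```python
import numpy as np

v = np.array([1.0, 0.0])
w = np.array([1.0, 1.0])       # illustrative vectors; the angle between them is pi/4

cos_theta = (v @ w) / (np.linalg.norm(v) * np.linalg.norm(w))
theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))   # clip guards against round-off
print(theta, np.pi / 4)        # both approximately 0.785398...
```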

Orthogonal projection. For vectors in R² or R³, the (orthogonal) projection of w onto v, denoted by proj_v(w), is the vector highlighted in each diagram below for angles 0 ≤ θ ≤ π:

1. Left Plot (0 ≤ θ < π/2): The projection of w onto v is the vector that points in the direction of v and whose length is ‖w‖ cos(θ).

2. Right Plot (π/2 < θ ≤ π): The projection of w onto v is the vector that points in the direction opposite to v and whose length is ‖w‖ cos(π − θ).

When θ = π/2, the projection of w onto v is the zero vector.

Geometry, trigonometry and the relationship v · w = ‖v‖ ‖w‖ cos(θ) can be used to demonstrate that the projection can be computed as follows:

proj_v(w) = ( (w · v)/(v · v) ) v.

That is, the projection is the scalar multiple cv, where c is the ratio (w · v)/(v · v).

For example, let v = ⋯ and w = ⋯ . Then

(1) proj_v(w) =
(2) w − proj_v(w) =
(3) The inner product of (1) and (2) is
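The projection formula is short enough to check numerically. The vectors below are illustrative choices, not the ones in the fill-in example above; NumPy is assumed.

```python
import numpy as np

def proj(w, v):
    """Orthogonal projection of w onto the line Span{v} (v must be nonzero)."""
    return ((w @ v) / (v @ v)) * v

v = np.array([2.0, 1.0])
w = np.array([6.0, 2.0])

w_hat = proj(w, v)
print(w_hat)               # the projection, a scalar multiple of v
print((w - w_hat) @ v)     # approximately 0: the residual is orthogonal to v
```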

Orthogonal projection in k-space. Let v and w be vectors in R^k, and let V = Span{v}. Then the (orthogonal) projection of w onto v (equivalently, the projection of w onto the subspace spanned by v) is defined as follows:

proj_v(w) = proj_V(w) = ( (w · v)/(v · v) ) v.

The projection, ŵ = proj_v(w) = proj_V(w), is an element of the vector space V and satisfies the following properties:

1. Minimum Distance: ŵ is the unique vector in V closest to w.
2. Orthogonality: ŵ is the unique vector in V for which w − ŵ is orthogonal to V.

Thus, we can find the distance between w and V by computing ‖w − ŵ‖.

Problem 6. In each case, find the distance between w and V = Span{v}.

(a) w = ⋯ and V = Span{ ⋯ }

(b) w = ⋯ and V = Span{ ⋯ }

Orthogonal projection onto V. Let V = Span{v_1, v_2, ..., v_p} be the span of an orthogonal set of vectors (a set of mutually orthogonal nonzero vectors) in R^k and let w ∈ R^k. Then the (orthogonal) projection of w onto V is defined as follows:

ŵ = proj_V(w) = c_1 v_1 + c_2 v_2 + ⋯ + c_p v_p, where c_i = (w · v_i)/(v_i · v_i) for each i.

This definition generalizes the case of projection onto the span of a single vector in R^k, and requires that the spanning set is an orthogonal set. Note also that

1. If V_i = Span{v_i} for each i, then ŵ is the sum of the projections in each coordinate direction:

ŵ = proj_V(w) = proj_{V_1}(w) + proj_{V_2}(w) + ⋯ + proj_{V_p}(w).

2. If w ∈ V, then ŵ = w.

Properties of orthogonal projections are stated in the following theorem, and illustrated to the right. In the plot, the horizontal axis represents the vector space V and the vertical axis represents the orthogonal complement V⊥; w is decomposed into the part of w in V and the part in V⊥.

Theorem (Orthogonal Projections). Let V be a subspace of R^k, w be any vector in R^k, and ŵ be the projection of w onto V. Then

1. Orthogonal Decomposition: The difference (w − ŵ) is a vector in V⊥, and the sum w = ŵ + (w − ŵ) is the unique representation of w as the sum of a vector in V and a vector in V⊥. (Thus, we have an orthogonal decomposition of w into the part of the vector in V and the part of the vector in V⊥.)

2. Best Approximation: The vector ŵ is the closest point in V to w.
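A minimal sketch of the orthogonal decomposition w = ŵ + (w − ŵ), using an illustrative orthogonal set in R³ (NumPy assumed). The function assumes the spanning vectors are mutually orthogonal and nonzero, exactly as the definition above requires.

```python
import numpy as np

def proj_onto_span(w, vs):
    """Projection of w onto Span(vs), where vs is a list of mutually orthogonal nonzero vectors."""
    return sum(((w @ v) / (v @ v)) * v for v in vs)

# Illustrative orthogonal set and a vector to decompose.
v1 = np.array([1.0, 1.0, 0.0])
v2 = np.array([1.0, -1.0, 0.0])
w  = np.array([3.0, 1.0, 4.0])

w_hat = proj_onto_span(w, [v1, v2])
print(w_hat)                       # part of w in V = Span{v1, v2}
print(w - w_hat)                   # part of w in the orthogonal complement of V
print(np.linalg.norm(w - w_hat))   # distance from w to V
```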

Problem 7. In each case, find ŵ and (w − ŵ). Note that each V has been written as the span of an orthogonal set.

(a) V = Span{ ⋯ }, w = ⋯

(b) V = Span{ ⋯ }, w = ⋯

(c) V = Span{ ⋯ }, w = ⋯

4.2.7 Gram-Schmidt Orthogonalization Process

Let V ⊆ R^k be a p-dimensional subspace and let {x_1, x_2, ..., x_p} be a basis for V. The Gram-Schmidt orthogonalization process allows us to construct an orthogonal basis {v_1, v_2, ..., v_p} for V starting with {x_1, x_2, ..., x_p}. The method is as follows:

1. Let v_1 = x_1.
2. Let v_2 = x_2 − proj_{V_1}(x_2), where V_1 = Span{v_1} = Span{x_1}.
3. Let v_3 = x_3 − proj_{V_2}(x_3), where V_2 = Span{v_1, v_2} = Span{x_1, x_2}.
4. Let v_4 = x_4 − proj_{V_3}(x_4), where V_3 = Span{v_1, v_2, v_3} = Span{x_1, x_2, x_3}.

And so forth. The final set, {v_1, v_2, ..., v_p}, is an orthogonal basis for V. A short implementation sketch follows Problem 8.

Problem 8. In each case, find an orthogonal basis for V.

(a) V = Span{ ⋯ }

(b) V = Span{ ⋯ }
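The sketch below implements the Gram-Schmidt process described above: each new vector has its projection onto the previously constructed orthogonal vectors subtracted off. The input basis is an illustrative choice; NumPy is assumed.

```python
import numpy as np

def gram_schmidt(xs):
    """Turn a list of linearly independent vectors into an orthogonal basis for their span."""
    vs = []
    for x in xs:
        v = x.astype(float).copy()
        for u in vs:                          # subtract the projection onto each
            v -= ((v @ u) / (u @ u)) * u      # previously constructed vector
        vs.append(v)
    return vs

# Illustrative basis of a 2-dimensional subspace of R^3.
x1 = np.array([1.0, 1.0, 1.0])
x2 = np.array([1.0, 0.0, 2.0])

v1, v2 = gram_schmidt([x1, x2])
print(v1, v2)
print(v1 @ v2)    # approximately 0: the new basis is orthogonal
```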

4.3 Least Squares Analysis

4.3.1 Best Approximate Solutions; Normal Equations

Let A be an m × n coefficient matrix and assume that Ax = b is inconsistent. We propose to find approximate solutions to the system as follows: (1) find the projection of b onto Col(A), call it b̂, and (2) report solutions to the consistent system Ax = b̂.

Observation 1: Since b̂ is as close to b as possible, each approximate solution x makes

‖b − b̂‖ = ‖b − Ax‖ as small as possible.

The difference vector is

b − Ax = ( b_1 − (a_{1,1} x_1 + a_{1,2} x_2 + ⋯ + a_{1,n} x_n), ..., b_m − (a_{m,1} x_1 + a_{m,2} x_2 + ⋯ + a_{m,n} x_n) ),

and the square of the length of the difference vector is

Σ_{i=1}^m ( b_i − (a_{i,1} x_1 + a_{i,2} x_2 + ⋯ + a_{i,n} x_n) )².

Each approximate solution x will minimize the above sum of squared differences. For this reason the approximate solutions are called least squares solutions.

Observation 2: Since the difference vector (b − b̂) = (b − Ax) lies in (Col(A))⊥, and the orthogonal complement of the column space of A is the null space of the transpose of A,

(Col(A))⊥ = Null(A^T)

by the Fundamental Theorem of Linear Algebra, we know that

A^T (b − b̂) = A^T (b − Ax) = O.

Further,

A^T (b − Ax) = O  ⟺  A^T b − A^T A x = O  ⟺  A^T A x = A^T b.

Thus, least squares solutions can be found by solving the consistent system on the right (called the normal equation of the system). By using the normal equation, we do not need to find the projection of b onto the column space of A. The following theorem gives the properties of this process:

The Least Squares Theorem. Under the conditions above,

1. x is a least squares solution to Ax = b iff x is a solution to A^T A x = A^T b.
2. A^T A is invertible iff the columns of A are linearly independent. Thus, there is a unique least squares solution iff the columns of A are linearly independent.

For example, consider the inconsistent system Ax = b where A = ⋯ and b = ⋯ . Row reducing the augmented matrix [ A^T A | A^T b ] shows that x = ⋯ is the unique least squares solution to the inconsistent system above.

Problem. In each case, find the least squares solution(s) to Ax = b.

(a) A = ⋯ and b = ⋯

(b) A = ⋯ and b = ⋯

(c) A = ⋯ and b = ⋯
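The normal-equation approach of this subsection is illustrated below on a small inconsistent system. The matrix and right-hand side are arbitrary illustrative choices (not the ones in the worked example or the problem above); NumPy is assumed.

```python
import numpy as np

# Illustrative inconsistent system.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
b = np.array([1.0, 0.0, 2.0])

# Solve the normal equation  A^T A x = A^T b.
x_hat = np.linalg.solve(A.T @ A, A.T @ b)
print(x_hat)

# Same answer from the library least-squares routine.
print(np.linalg.lstsq(A, b, rcond=None)[0])

# The residual b - A x_hat is orthogonal to Col(A):
print(A.T @ (b - A @ x_hat))      # approximately [0, 0]
```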

4.3.2 Application: Least Squares Analyses of Data

The methodology from the last section can be applied to finding curves of best fit (in the sense of minimizing a sum of squared differences).

As a simple illustration, consider four data pairs (x_i, y_i). These points lie close to a straight line with equation ŷ = a + bx, as illustrated in the left plot below. The intercept and slope of the line can be found by the method of least squares. Specifically, we start with a system of four linear equations in the two unknowns a and b, convert the system to a matrix equation Ax = b, and find the least squares solution(s) by solving the normal equation A^T A x = A^T b:

a + b x_1 = y_1
a + b x_2 = y_2
a + b x_3 = y_3
a + b x_4 = y_4
⟹  A [ a ; b ] = b, where the rows of A are (1, x_i) and the components of b are the y_i
⟹  A^T A [ a ; b ] = A^T b.

Solving the normal equation gives the fitted intercept and slope, and the least squares regression line is ŷ = a + bx, as shown on the left above.

Let ŷ_i = a + b x_i and e_i = y_i − ŷ_i for i = 1, 2, 3, 4. A plot of the (ŷ_i, e_i) pairs is shown on the right above.

The following pages contain several applications of this general strategy to real data.
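The line-fitting recipe above is a few lines of code. The data pairs below are illustrative; the pairs used in the notes' example are different. NumPy is assumed.

```python
import numpy as np

# Illustrative (x, y) data pairs.
x = np.array([0.0, 3.0, 8.0, 10.0])
y = np.array([1.0, 2.0, 4.0, 5.0])

# Build the design matrix for y-hat = a + b x and solve A^T A [a, b]^T = A^T y.
A = np.column_stack([np.ones_like(x), x])
a, b = np.linalg.solve(A.T @ A, A.T @ y)
print(a, b)                  # intercept and slope of the least squares line

y_hat = a + b * x
residuals = y - y_hat        # the e_i = y_i - y-hat_i plotted in the notes
print(residuals)
```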

Example: Olympic winning times (Source: Hand et al, 1994). Consider the following data pairs, where x_i is the time in years since 1900 and y_i is the Olympic winning time in seconds for men in the final round of the 100 meter event.

(Table: the twenty (x_i, y_i) pairs.)

The data cover all Olympic games held between 1900 and 1988. (Note that Olympic games were not held in 1916, 1940, and 1944.) The twenty data pairs lie approximately on a straight line with equation ŷ = a + bx, whose intercept and slope can be estimated by the method of least squares:

a + b x_1 = y_1, ..., a + b x_20 = y_20  ⟹  A [ a ; b ] = b  ⟹  A^T A [ a ; b ] = A^T b.

Thus, the least squares regression line is ŷ = a + bx with a negative fitted slope, as illustrated on the left above. Note that the origin of the plot is not (0, 0). The results suggest that the winning times decreased at an essentially constant rate during the 88 years of the study.

Let ŷ_i = a + b x_i and e_i = y_i − ŷ_i for i = 1, 2, ..., 20. A plot of the (ŷ_i, e_i) pairs is shown on the right above.

Example: Brain-body study (Source: Allison & Cicchetti, 1976). As part of a study on sleep in mammals, researchers collected information on the average body weight (in kilograms) and average brain weight (in grams) for different species. Let

x_i = ln(Average Body Weight_i) and y_i = ln(Average Brain Weight_i)

for each species i. The (x_i, y_i) pairs lie approximately on a line with equation ŷ = a + bx, whose intercept and slope can be estimated by the method of least squares. Starting with the normal equation A^T A x = A^T b, the fitted slope is approximately 0.78, so the least squares regression line is ŷ = a + 0.78x for the fitted intercept a.

As with the earlier examples, the left plot above shows the (x_i, y_i) pairs superimposed on the least squares regression line, and the right plot shows the (ŷ_i, e_i) pairs.

It is instructive to examine the estimated relationship between average brain and body weights on their original scales. The graph of this relationship is shown to the right. The formula for this curve is (please complete)

Note that

1. Man's brain weight is much larger than expected given the modest body weight.
2. The Asian Elephant has an enormous body weight and a correspondingly large brain weight.

Example: Timber yield study (Source: Hand et al, 1994). As part of a study designed to estimate the volume of a tree (and therefore its yield) given its diameter and height, data were collected on the volume (in cubic feet), diameter measured a fixed distance above the ground (in inches), and height (in feet) of a sample of black cherry trees in the Allegheny National Forest. Let

x_{1,i} = ln(Diameter_i), x_{2,i} = ln(Height_i) and y_i = ln(Volume_i)

for each tree i. The (x_{1,i}, x_{2,i}, y_i) triples lie approximately on a plane with equation ŷ = a + b x_1 + c x_2, whose coefficients can be estimated using the method of least squares. Starting with the normal equation A^T A x = A^T b produces the fitted coefficients a, b and c, and the least squares regression equation is ŷ = a + b x_1 + c x_2.

The regression equation is plotted on the left above, along with the (x_{1,i}, x_{2,i}, y_i) triples. Triples lying under the surface appear slightly lighter in color. The right plot shows the (ŷ_i, e_i) pairs.

It is instructive to examine the estimated relationship among diameter, height and volume in their original scales. The graph of this relationship is shown to the right. The formula for this curve is (please complete)

Any comments?

Example: Body fat study (Source: Johnson, 1996). As part of a study to determine if the percentage of body fat can be predicted accurately using only a scale and measuring tape, data were collected on a sample of men. Let x_{1,i}, x_{2,i}, x_{3,i} and x_{4,i} be the abdomen, wrist, hip and neck circumferences (in centimeters) of the ith individual, and let y_i be the man's percent body fat measured using an accurate underwater technique.

(Table: the average, minimum and maximum values of abdomen (x_1), wrist (x_2), hip (x_3), neck (x_4) and body fat (y).)

Consider fitting a linear function of the form ŷ = a + b x_1 + c x_2 + d x_3 + e x_4, using the method of least squares to estimate the coefficients (a through e). Starting with the normal equation A^T A x = A^T b produces the fitted coefficients of the prediction formula.

The (ŷ_i, e_i) pairs are shown on the left below. Individuals with

1. the largest ŷ_i and smallest e_i, and
2. the largest e_i

have been highlighted in the plot, along with their (Abdomen, Wrist, Hip, Neck, Body Fat) measurements. Any comments?

If these two individuals are removed and a new least squares solution is computed, the pattern of errors does not change dramatically, as shown in the right plot above.

4.3.3 Footnote: Eigenvalues, Eigenvectors and Least Squares Analysis

Let M = A^T A be the matrix used in the normal equation for finding least squares solutions to inconsistent systems. M is a symmetric matrix satisfying the following properties:

1. M is a diagonalizable matrix,
2. the eigenvalues of M are nonnegative real numbers,
3. M has an eigenvector basis that is an orthogonal set, and
4. M is invertible if and only if all eigenvalues are positive.

If M is invertible, then the least squares solution is unique. Further, as long as the smallest eigenvalue is not too close to zero, the computer will have no trouble finding the unique solution accurately.

Body fat study example, continued. Consider again the body fat study from the last section. M = A^T A can be written as M = P D P⁻¹, where the eigenvalues of M are written in decreasing order along the diagonal of D. All eigenvalues are comfortably greater than zero, implying that the computer had no trouble finding accurate least squares estimates of the coefficients of the prediction formula.

Finally, to improve the accuracy of least squares estimates in situations where eigenvalues may be close to zero (and M may be close to singular), practitioners use a singular value decomposition before trying to find the estimates.
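A small numerical illustration of this footnote is sketched below: the eigenvalues of M = A^T A are nonnegative, they are the squares of the singular values of A, and an SVD-based solver remains a sound way to compute least squares estimates. The design matrix and right-hand side are illustrative choices; NumPy is assumed.

```python
import numpy as np

# Illustrative design matrix (intercept column plus one predictor) and response.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([1.0, 0.0, 2.0, 3.0])

M = A.T @ A                                 # symmetric matrix from the normal equation
eigenvalues = np.linalg.eigvalsh(M)         # real and nonnegative; all positive iff M invertible
print(eigenvalues)

# The singular values of A squared are the same numbers (possibly in a different order).
print(np.linalg.svd(A, compute_uv=False) ** 2)

# NumPy's lstsq is SVD-based, which stays accurate even when the smallest
# eigenvalue of M is close to zero.
print(np.linalg.lstsq(A, b, rcond=None)[0])
```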


MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 2. x n. a 11 a 12 a 1n b 1 a 21 a 22 a 2n b 2 a 31 a 32 a 3n b 3. a m1 a m2 a mn b m MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS 1. SYSTEMS OF EQUATIONS AND MATRICES 1.1. Representation of a linear system. The general system of m equations in n unknowns can be written a 11 x 1 + a 12 x 2 +

More information

Session 7 Bivariate Data and Analysis

Session 7 Bivariate Data and Analysis Session 7 Bivariate Data and Analysis Key Terms for This Session Previously Introduced mean standard deviation New in This Session association bivariate analysis contingency table co-variation least squares

More information

Matrix Representations of Linear Transformations and Changes of Coordinates

Matrix Representations of Linear Transformations and Changes of Coordinates Matrix Representations of Linear Transformations and Changes of Coordinates 01 Subspaces and Bases 011 Definitions A subspace V of R n is a subset of R n that contains the zero element and is closed under

More information

3.3. Solving Polynomial Equations. Introduction. Prerequisites. Learning Outcomes

3.3. Solving Polynomial Equations. Introduction. Prerequisites. Learning Outcomes Solving Polynomial Equations 3.3 Introduction Linear and quadratic equations, dealt within Sections 3.1 and 3.2, are members of a class of equations, called polynomial equations. These have the general

More information

11.1. Objectives. Component Form of a Vector. Component Form of a Vector. Component Form of a Vector. Vectors and the Geometry of Space

11.1. Objectives. Component Form of a Vector. Component Form of a Vector. Component Form of a Vector. Vectors and the Geometry of Space 11 Vectors and the Geometry of Space 11.1 Vectors in the Plane Copyright Cengage Learning. All rights reserved. Copyright Cengage Learning. All rights reserved. 2 Objectives! Write the component form of

More information

SF2940: Probability theory Lecture 8: Multivariate Normal Distribution

SF2940: Probability theory Lecture 8: Multivariate Normal Distribution SF2940: Probability theory Lecture 8: Multivariate Normal Distribution Timo Koski 24.09.2015 Timo Koski Matematisk statistik 24.09.2015 1 / 1 Learning outcomes Random vectors, mean vector, covariance matrix,

More information

9 Multiplication of Vectors: The Scalar or Dot Product

9 Multiplication of Vectors: The Scalar or Dot Product Arkansas Tech University MATH 934: Calculus III Dr. Marcel B Finan 9 Multiplication of Vectors: The Scalar or Dot Product Up to this point we have defined what vectors are and discussed basic notation

More information

Nonlinear Iterative Partial Least Squares Method

Nonlinear Iterative Partial Least Squares Method Numerical Methods for Determining Principal Component Analysis Abstract Factors Béchu, S., Richard-Plouet, M., Fernandez, V., Walton, J., and Fairley, N. (2016) Developments in numerical treatments for

More information

Lecture L3 - Vectors, Matrices and Coordinate Transformations

Lecture L3 - Vectors, Matrices and Coordinate Transformations S. Widnall 16.07 Dynamics Fall 2009 Lecture notes based on J. Peraire Version 2.0 Lecture L3 - Vectors, Matrices and Coordinate Transformations By using vectors and defining appropriate operations between

More information

Geometry of Vectors. 1 Cartesian Coordinates. Carlo Tomasi

Geometry of Vectors. 1 Cartesian Coordinates. Carlo Tomasi Geometry of Vectors Carlo Tomasi This note explores the geometric meaning of norm, inner product, orthogonality, and projection for vectors. For vectors in three-dimensional space, we also examine the

More information

BX in ( u, v) basis in two ways. On the one hand, AN = u+

BX in ( u, v) basis in two ways. On the one hand, AN = u+ 1. Let f(x) = 1 x +1. Find f (6) () (the value of the sixth derivative of the function f(x) at zero). Answer: 7. We expand the given function into a Taylor series at the point x = : f(x) = 1 x + x 4 x

More information

Dot product and vector projections (Sect. 12.3) There are two main ways to introduce the dot product

Dot product and vector projections (Sect. 12.3) There are two main ways to introduce the dot product Dot product and vector projections (Sect. 12.3) Two definitions for the dot product. Geometric definition of dot product. Orthogonal vectors. Dot product and orthogonal projections. Properties of the dot

More information

EQUATIONS and INEQUALITIES

EQUATIONS and INEQUALITIES EQUATIONS and INEQUALITIES Linear Equations and Slope 1. Slope a. Calculate the slope of a line given two points b. Calculate the slope of a line parallel to a given line. c. Calculate the slope of a line

More information

Notes on Determinant

Notes on Determinant ENGG2012B Advanced Engineering Mathematics Notes on Determinant Lecturer: Kenneth Shum Lecture 9-18/02/2013 The determinant of a system of linear equations determines whether the solution is unique, without

More information

MATH1231 Algebra, 2015 Chapter 7: Linear maps

MATH1231 Algebra, 2015 Chapter 7: Linear maps MATH1231 Algebra, 2015 Chapter 7: Linear maps A/Prof. Daniel Chan School of Mathematics and Statistics University of New South Wales danielc@unsw.edu.au Daniel Chan (UNSW) MATH1231 Algebra 1 / 43 Chapter

More information

APPLICATIONS. are symmetric, but. are not.

APPLICATIONS. are symmetric, but. are not. CHAPTER III APPLICATIONS Real Symmetric Matrices The most common matrices we meet in applications are symmetric, that is, they are square matrices which are equal to their transposes In symbols, A t =

More information

160 CHAPTER 4. VECTOR SPACES

160 CHAPTER 4. VECTOR SPACES 160 CHAPTER 4. VECTOR SPACES 4. Rank and Nullity In this section, we look at relationships between the row space, column space, null space of a matrix and its transpose. We will derive fundamental results

More information

SF2940: Probability theory Lecture 8: Multivariate Normal Distribution

SF2940: Probability theory Lecture 8: Multivariate Normal Distribution SF2940: Probability theory Lecture 8: Multivariate Normal Distribution Timo Koski 24.09.2014 Timo Koski () Mathematisk statistik 24.09.2014 1 / 75 Learning outcomes Random vectors, mean vector, covariance

More information

A linear combination is a sum of scalars times quantities. Such expressions arise quite frequently and have the form

A linear combination is a sum of scalars times quantities. Such expressions arise quite frequently and have the form Section 1.3 Matrix Products A linear combination is a sum of scalars times quantities. Such expressions arise quite frequently and have the form (scalar #1)(quantity #1) + (scalar #2)(quantity #2) +...

More information

GRADES 7, 8, AND 9 BIG IDEAS

GRADES 7, 8, AND 9 BIG IDEAS Table 1: Strand A: BIG IDEAS: MATH: NUMBER Introduce perfect squares, square roots, and all applications Introduce rational numbers (positive and negative) Introduce the meaning of negative exponents for

More information

One advantage of this algebraic approach is that we can write down

One advantage of this algebraic approach is that we can write down . Vectors and the dot product A vector v in R 3 is an arrow. It has a direction and a length (aka the magnitude), but the position is not important. Given a coordinate axis, where the x-axis points out

More information

Linear Algebra I. Ronald van Luijk, 2012

Linear Algebra I. Ronald van Luijk, 2012 Linear Algebra I Ronald van Luijk, 2012 With many parts from Linear Algebra I by Michael Stoll, 2007 Contents 1. Vector spaces 3 1.1. Examples 3 1.2. Fields 4 1.3. The field of complex numbers. 6 1.4.

More information

3. Let A and B be two n n orthogonal matrices. Then prove that AB and BA are both orthogonal matrices. Prove a similar result for unitary matrices.

3. Let A and B be two n n orthogonal matrices. Then prove that AB and BA are both orthogonal matrices. Prove a similar result for unitary matrices. Exercise 1 1. Let A be an n n orthogonal matrix. Then prove that (a) the rows of A form an orthonormal basis of R n. (b) the columns of A form an orthonormal basis of R n. (c) for any two vectors x,y R

More information

Mathematics Pre-Test Sample Questions A. { 11, 7} B. { 7,0,7} C. { 7, 7} D. { 11, 11}

Mathematics Pre-Test Sample Questions A. { 11, 7} B. { 7,0,7} C. { 7, 7} D. { 11, 11} Mathematics Pre-Test Sample Questions 1. Which of the following sets is closed under division? I. {½, 1,, 4} II. {-1, 1} III. {-1, 0, 1} A. I only B. II only C. III only D. I and II. Which of the following

More information

Solutions to old Exam 1 problems

Solutions to old Exam 1 problems Solutions to old Exam 1 problems Hi students! I am putting this old version of my review for the first midterm review, place and time to be announced. Check for updates on the web site as to which sections

More information

The Singular Value Decomposition in Symmetric (Löwdin) Orthogonalization and Data Compression

The Singular Value Decomposition in Symmetric (Löwdin) Orthogonalization and Data Compression The Singular Value Decomposition in Symmetric (Löwdin) Orthogonalization and Data Compression The SVD is the most generally applicable of the orthogonal-diagonal-orthogonal type matrix decompositions Every

More information

Similar matrices and Jordan form

Similar matrices and Jordan form Similar matrices and Jordan form We ve nearly covered the entire heart of linear algebra once we ve finished singular value decompositions we ll have seen all the most central topics. A T A is positive

More information