Vector and Matrix Norms


Chapter 1: Vector and Matrix Norms

1.1 Vector Spaces

Let F be a field (such as the real numbers, R, or the complex numbers, C) with elements called scalars. A vector space, V, over the field F is a nonempty set of objects (called vectors) on which two binary operations, (vector) addition and (scalar) multiplication, are defined and satisfy the axioms below.

Addition is a rule which associates a vector in V with each pair of vectors x, y ∈ V; that member is called the sum x + y.

Scalar multiplication is a rule which associates a vector in V with each scalar α ∈ F and each vector x ∈ V; it is called the scalar multiple αx.

For V to be called a vector space, the two operations above must satisfy the following axioms for all x, y, w ∈ V:

1. Vector addition is commutative: x + y = y + x;
2. Vector addition is associative: x + (y + w) = (x + y) + w;
3. Vector addition has an identity: there exists a vector in V, called the zero vector 0, such that x + 0 = x;
4. Vector addition has an inverse: for each x ∈ V there exists an inverse (or negative) element (−x) ∈ V such that x + (−x) = 0;
5. Distributivity holds for scalar multiplication over vector addition: α(x + y) = αx + αy for all α ∈ F;
6. Distributivity holds for scalar multiplication over field addition: (α + β)x = αx + βx for all α, β ∈ F;
7. Scalar multiplication is compatible with field multiplication: α(βx) = (αβ)x for all α, β ∈ F;
8. Scalar multiplication has an identity element: 1x = x, where 1 is the multiplicative identity of F.

Any collection of objects together with two operations which satisfy the above axioms is a vector space. In the context of numerical analysis a vector space is often called a linear space.

Example 1.1.1. An obvious example of a linear space is R^3 with the usual definitions of addition of 3D vectors and scalar multiplication (Fig. 1.1).

[Figure 1.1: A pictorial example of some vectors belonging to the linear space R^3.]

Example 1.1.2. Another example of a linear space is the set of all continuous functions f(x) on (−∞, ∞), with the usual definitions of addition of functions and scalar multiplication of functions, usually denoted by C(−∞, ∞). If f(x) and g(x) ∈ C(−∞, ∞), then f + g and αf(x) are also continuous on (−∞, ∞), and the axioms are easily verified.

Example 1.1.3. A subspace of C(−∞, ∞) is P_n: the space of polynomials of degree at most n. [We will meet this vector space later in the course.] Note, it must be of degree at most n so that addition (and subtraction) produce members of P_n; e.g. (x − x^2) ∈ P_2, x^2 ∈ P_2 and (x − x^2) + x^2 = x ∈ P_2.

1.2 The Basis and Dimension of a Vector Space

If V is a vector space and S = {x_1, x_2, ..., x_n} is a set of vectors in V such that
- S is a linearly independent set (a set of vectors is linearly independent if none of them can be written as a linear combination of finitely many other vectors in the set), and
- S spans V (a spanning set of V is a set of vectors for which every vector in V can be written as a linear combination of the vectors in the set),

then S is a basis for V, and the dimension of V is n.

Example 1.2.1. A basis for Example 1.1.1, where V = R^3, is i = (1, 0, 0), j = (0, 1, 0) and k = (0, 0, 1), and the dimension of the space is 3.

Example 1.2.2. Example 1.1.2 has infinite dimension, as no finite set spans C(−∞, ∞).

Example 1.2.3. A basis for Example 1.1.3, where V = P_n, is the set of monomials 1, x, x^2, ..., x^n. Since this basis has n + 1 members, the dimension of the space is n + 1.

1.3 Normed Linear Spaces

1.3.1 Vector Norms

We require some method of measuring the magnitude of a vector for error analysis. We generalise the concept of the length of a vector in 3D to n dimensions by defining a norm. Given a vector/linear space V, a norm, denoted by ‖x‖ for x ∈ V, is a real number such that

‖x‖ > 0 for x ≠ 0, and ‖0‖ = 0,   (1.1)
‖αx‖ = |α| ‖x‖ for all α ∈ R,   (1.2)
‖x + y‖ ≤ ‖x‖ + ‖y‖.   (1.3)

The norm is a measure of the size of the vector x: Equation (1.1) requires the size to be positive, Equation (1.2) requires the size to scale as the vector is scaled, and Equation (1.3) is known as the triangle inequality and has its origins in the notion of distance in R^3. Any mapping of an n-dimensional vector space onto a subset of R that satisfies these three requirements can be called a norm. The space together with a defined norm is called a normed linear space.

Example 1.3.1. For the vector space V = R^n with x ∈ V given by x = (x_1, x_2, ..., x_n), an obvious
definition of a norm is

‖x‖ = (x_1^2 + x_2^2 + ... + x_n^2)^{1/2}.

This is just a generalisation of the normed linear space V = R^3 with the norm defined as the magnitude of the vector x ∈ R^3.

Example 1.3.2. Another norm on V = R^n is

‖x‖ = max_{1≤i≤n} |x_i|,

and it is easy to verify that the three axioms are obeyed (see Tutorial Sheet 1).

Example 1.3.3. Let V = C[a, b], the space of all continuous functions on the interval [a, b], and define

‖f‖ = { ∫_a^b (f(x))^2 dx }^{1/2}.

This is also a normed linear space.

1.3.2 Inner Product Spaces

One of the most familiar norms is the magnitude of a vector x = (x_1, x_2, x_3) ∈ R^3 (Example 1.3.1). This norm is commonly denoted |x| and equals

‖x‖ = |x| = (x_1^2 + x_2^2 + x_3^2)^{1/2} = (x · x)^{1/2},

i.e. it is expressible through the dot product of x with itself. The dot product is an example of an inner product, and some other norms are also induced by inner products. In 1.3.1 we presented examples of normed linear spaces. Two of these can be obtained from the alternative route of inner product spaces, and it is useful to observe this fact as it gives access to an important result, namely the Cauchy-Schwarz inequality. For such spaces

‖x‖ = ⟨x, x⟩^{1/2},

where

⟨x, y⟩ = Σ_{i=1}^n x_i y_i in Example 1.3.1, and ⟨f, g⟩ = ∫_a^b f(x) g(x) dx in Example 1.3.3.

Hence, inner products give rise to norms, but not all norms can be cast in terms of inner products. The formal definition of an inner product is given on Tutorial Sheet 1, and it is easy to show that the above two examples satisfy the stated requirements. The Cauchy-Schwarz inequality is given by

⟨x, y⟩^2 ≤ ⟨x, x⟩ ⟨y, y⟩
and can be used to confirm the triangle inequality for norms induced by inner products. For example, given ‖x‖ = ⟨x, x⟩^{1/2},

‖x + y‖^2 = ⟨x + y, x + y⟩ = ⟨x, x⟩ + ⟨y, y⟩ + 2⟨x, y⟩
          ≤ ⟨x, x⟩ + ⟨y, y⟩ + 2⟨x, x⟩^{1/2}⟨y, y⟩^{1/2}   (using the Cauchy-Schwarz inequality)

so

‖x + y‖^2 ≤ (‖x‖ + ‖y‖)^2,   (1.4)

and hence the triangle inequality holds for all such norms. A particular example is given on Tutorial Sheet 1.

1.3.3 Commonly Used Norms and Normed Linear Spaces

In theory, any mapping of an n-dimensional space onto the real numbers that satisfies (1.1)-(1.3) is a norm. In practice we are only concerned with a small number of normed linear spaces, and the most frequently used are the following:

1. The linear space R^n (Euclidean space), where x = (x_1, x_2, ..., x_i, ..., x_n), together with the norm

‖x‖_p = ( Σ_{i=1}^n |x_i|^p )^{1/p},  p ≥ 1,

is known as the L_p normed linear space. The most common are the one-norm, L_1, and the two-norm, L_2, linear spaces, where p = 1 and p = 2 respectively.

2. The other standard norm for the space R^n is the infinity, or maximum, norm given by

‖x‖_∞ = max_{1≤i≤n} |x_i|.

The vector space R^n together with the infinity norm is commonly denoted L_∞.

Example 1.3.4. Consider the vector x = (3, −1, 2, 0, 4), which belongs to the vector space R^5. Determine its (i) one-norm, (ii) two-norm and (iii) infinity-norm.

(i) One-norm: ‖x‖_1 = 3 + 1 + 2 + 0 + 4 = 10.
(ii) Two-norm: ‖x‖_2 = (9 + 1 + 4 + 0 + 16)^{1/2} = √30.
(iii) Infinity-norm: ‖x‖_∞ = max(|3|, |−1|, |2|, |0|, |4|) = 4.

Note: each is a different way of measuring the size of a vector in R^n.

For C[a, b] (continuous functions) the corresponding norms are

‖f‖_p = ( ∫_a^b |f(x)|^p dx )^{1/p},  p ≥ 1,
with p = 1 and p = 2 the usual cases, plus

‖f‖_∞ = max_{a≤x≤b} |f(x)|.

1.4 Subordinate Matrix Norms

We are interested in analysing methods for solving linear systems, so we need to be able to measure the size of vectors in R^n and any associated matrices in a compatible way. To achieve this end, we define a subordinate matrix norm. For the normed linear space {R^n, ‖x‖}, where ‖x‖ is some norm, we define the norm of an n × n matrix A which is subordinate to the vector norm ‖x‖ as

‖A‖ = max_{x≠0} ( ‖Ax‖ / ‖x‖ ).

Note, Ax is a vector (x ∈ R^n implies Ax ∈ R^n), so ‖A‖ is the largest value of the vector norm of Ax normalised over all nonzero vectors x. Not surprisingly, the three requirements of a vector norm (1.1)-(1.3) are properties of ‖A‖. There are two further properties which are a consequence of the definition of ‖A‖. Hence, subordinate matrix norms satisfy the following five rules:

‖A‖ > 0 for A ≠ 0, and ‖0‖ = 0, where 0 is the n × n zero matrix,   (1.5)
‖αA‖ = |α| ‖A‖ for all α ∈ R,   (1.6)
‖A + B‖ ≤ ‖A‖ + ‖B‖,   (1.7)
‖Ax‖ ≤ ‖A‖ ‖x‖,   (1.8)
‖AB‖ ≤ ‖A‖ ‖B‖.   (1.9)

These five rules, together with the three for vector norms (1.1)-(1.3), provide the means for an analysis of a linear system. First, let us justify the above results.

Rule 1 (1.5): For an n × n matrix A with A ≠ 0, there exists x ∈ R^n with x ≠ 0 such that the vector Ax ≠ 0. So both ‖Ax‖ > 0 and ‖x‖ > 0, thus ‖A‖ > 0. Also, if A = 0 then Ax = 0 for all x and hence ‖A‖ = 0.

Rule 2 (1.6): Since Ax is a vector and therefore satisfies (1.2), we can show that

‖αA‖ = max_{x≠0} { ‖αAx‖ / ‖x‖ } = max_{x≠0} { |α| ‖Ax‖ / ‖x‖ } = |α| max_{x≠0} { ‖Ax‖ / ‖x‖ } = |α| ‖A‖.
Rule 3 (1.7):

‖A + B‖ = max_{x≠0} { ‖(A + B)x‖ / ‖x‖ } = ‖(A + B)x_m‖ / ‖x_m‖,

where x_m ∈ R^n maximises the right-hand side. Now, let us define Ax_m = y_m and Bx_m = z_m (y_m, z_m ∈ R^n), so that

‖(A + B)x_m‖ = ‖Ax_m + Bx_m‖ = ‖y_m + z_m‖ ≤ ‖y_m‖ + ‖z_m‖ = ‖Ax_m‖ + ‖Bx_m‖.

Therefore

‖A + B‖ ≤ ( ‖Ax_m‖ + ‖Bx_m‖ ) / ‖x_m‖ ≤ max_{x≠0} { ‖Ax‖ / ‖x‖ } + max_{x≠0} { ‖Bx‖ / ‖x‖ }.

Hence ‖A + B‖ ≤ ‖A‖ + ‖B‖.

Rule 4 (1.8): By definition, ‖A‖ ≥ ‖Ax‖ / ‖x‖ for any x ≠ 0. Hence ‖A‖ ‖x‖ ≥ ‖Ax‖ (including x = 0).

Rule 5 (1.9): Finally, consider ‖AB‖, where A and B are n × n matrices. Using a similar argument to that used in Rule 3, we have

‖AB‖ = max_{x≠0} { ‖ABx‖ / ‖x‖ } = ‖ABx_m‖ / ‖x_m‖,

where x_m ∈ R^n maximises the right-hand side. If we define Bx_m = z_m (z_m ∈ R^n) and use Rule 4 (1.8), then

‖ABx_m‖ = ‖Az_m‖ ≤ ‖A‖ ‖z_m‖ = ‖A‖ ‖Bx_m‖ ≤ ‖A‖ ‖B‖ ‖x_m‖.

Hence

‖AB‖ = ‖ABx_m‖ / ‖x_m‖ ≤ ‖A‖ ‖B‖ ‖x_m‖ / ‖x_m‖ = ‖A‖ ‖B‖.

1.4.1 Commonly Used Subordinate Matrix Norms

Clearly, the subordinate matrix norm is not easy to calculate directly from its definition, as in theory all vectors x ∈ R^n should be considered. Here we will consider the commonly used matrix norms and practical ways to calculate them.
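The vector norms of Example 1.3.4 and the matrix-norm rules (1.8) and (1.9) can be spot-checked numerically. The sketch below uses only the Python standard library; the random matrices are arbitrary illustrative choices (seeded for reproducibility), and a small tolerance absorbs floating-point rounding. This is a sanity check, not a proof.

```python
import math
import random

def vec_one_norm(x):
    # ||x||_1 = sum of absolute values
    return sum(abs(xi) for xi in x)

def vec_two_norm(x):
    # ||x||_2 = square root of the sum of squares
    return math.sqrt(sum(xi * xi for xi in x))

def vec_inf_norm(x):
    # ||x||_inf = largest absolute entry
    return max(abs(xi) for xi in x)

# Example 1.3.4: x = (3, -1, 2, 0, 4)
x134 = [3, -1, 2, 0, 4]
print(vec_one_norm(x134), vec_two_norm(x134), vec_inf_norm(x134))
# one-norm 10, two-norm sqrt(30), infinity-norm 4

def mat_one_norm(A):
    # subordinate 1-norm: maximum absolute column sum
    n = len(A)
    return max(sum(abs(A[i][j]) for i in range(n)) for j in range(n))

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

random.seed(0)
n = 4
A = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
B = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
v = [random.uniform(-1, 1) for _ in range(n)]

# Rule (1.8): ||Av|| <= ||A|| ||v||
assert vec_one_norm(matvec(A, v)) <= mat_one_norm(A) * vec_one_norm(v) + 1e-12

# Rule (1.9): ||AB|| <= ||A|| ||B||
assert mat_one_norm(matmul(A, B)) <= mat_one_norm(A) * mat_one_norm(B) + 1e-12
```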
The easiest matrix norms to compute are those subordinate to the L_1 and L_∞ vector norms. These are

‖A‖_1 = max_{x≠0} ( Σ_{i=1}^n |(Ax)_i| / Σ_{i=1}^n |x_i| ),

which is equivalent to the maximum column sum of absolute values of A, and

‖A‖_∞ = max_{x≠0} ( max_{1≤i≤n} |(Ax)_i| / max_{1≤i≤n} |x_i| ),

which is equivalent to the maximum row sum of absolute values of A.

Proof that ‖A‖_1 is equivalent to the maximum column sum of absolute values of A (the proof of the relation for ‖A‖_∞ is similar). The L_1 norm is ‖x‖_1 = Σ_i |x_i|. Let y = Ax (x, y ∈ R^n and A an n × n matrix), so that y_i = Σ_{j=1}^n a_ij x_j and

‖Ax‖_1 = ‖y‖_1 = Σ_{i=1}^n |y_i| = Σ_{i=1}^n | Σ_{j=1}^n a_ij x_j |.

Now

| Σ_{j=1}^n a_ij x_j | ≤ Σ_{j=1}^n |a_ij| |x_j|.

Thus

‖Ax‖_1 ≤ Σ_{i=1}^n Σ_{j=1}^n |a_ij| |x_j|.

Changing the order of summation then gives

‖Ax‖_1 ≤ Σ_{j=1}^n |x_j| { Σ_{i=1}^n |a_ij| },

where Σ_{i=1}^n |a_ij| is the sum of absolute values in column j. Each column has such a sum, S_j = Σ_{i=1}^n |a_ij|, 1 ≤ j ≤ n, and S_j ≤ S_m, where m is the column with the largest sum. So

‖Ax‖_1 ≤ Σ_{j=1}^n |x_j| S_j ≤ S_m Σ_{j=1}^n |x_j| = S_m ‖x‖_1,

and hence

‖Ax‖_1 / ‖x‖_1 ≤ S_m,
where S_m is the maximum column sum of absolute values. This is true for all nonzero x; hence ‖A‖_1 ≤ S_m.

We now need to determine whether S_m is the maximum value of ‖Ax‖_1 / ‖x‖_1, or whether it is merely an upper bound. In other words, is it possible to pick an x ∈ R^n for which S_m is reached? Try choosing the following:

x = [0, ..., 0, 1, 0, ..., 0]^T, where x_j = 0 for j ≠ m and x_j = 1 for j = m.

Substituting this particular x into Ax = y implies y_i = Σ_{j=1}^n a_ij x_j = a_im, and thus

‖Ax‖_1 = ‖y‖_1 = Σ_{i=1}^n |y_i| = Σ_{i=1}^n |a_im| = S_m.

So ‖Ax‖_1 / ‖x‖_1 = S_m / 1 = S_m. This means that the bound can be attained with a suitable x, and so

‖A‖_1 = max_{x≠0} ( ‖Ax‖_1 / ‖x‖_1 ) = S_m,

where S_m is the maximum column sum of absolute values. Hence, S_m is not just an upper bound, it is actually the maximum!

Example 1.4.1: Determine the matrix norm subordinate to (i) the one-norm and (ii) the infinity-norm for 3 × 3 matrices A and B, where the columns of absolute values of A sum to (8, 13, 5) and its rows of absolute values sum to (11, 8, 7), while B is symmetric with both column and row sums of absolute values (6, 5, 2).

‖A‖_1 = max(8, 13, 5) = 13
‖B‖_1 = max(6, 5, 2) = 6
‖A‖_∞ = max(11, 8, 7) = 11
‖B‖_∞ = max(6, 5, 2) = 6

Note, since B is symmetric, the maximum column sum of absolute values equals the maximum row sum of absolute values, i.e. ‖B‖_1 = ‖B‖_∞.
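The column-sum and row-sum formulas are straightforward to implement. The matrix below is a hypothetical stand-in chosen only so that its column sums (8, 13, 5) and row sums (11, 8, 7) match those quoted for A in Example 1.4.1; it is not the matrix from the notes.

```python
def mat_one_norm(A):
    # subordinate 1-norm: maximum column sum of absolute values
    rows, cols = len(A), len(A[0])
    return max(sum(abs(A[i][j]) for i in range(rows)) for j in range(cols))

def mat_inf_norm(A):
    # subordinate infinity-norm: maximum row sum of absolute values
    return max(sum(abs(a) for a in row) for row in A)

# hypothetical matrix with column sums 8, 13, 5 and row sums 11, 8, 7
A = [[3, 6, 2],
     [2, 4, 2],
     [3, 3, 1]]

print(mat_one_norm(A))  # max(8, 13, 5) = 13
print(mat_inf_norm(A))  # max(11, 8, 7) = 11
```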
1.5 Spectral Radius

A useful, and important, quantity associated with a matrix is its spectral radius. An n × n matrix A has n eigenvalues λ_i, i = 1, ..., n. The spectral radius of A, denoted by ρ(A), is defined as

ρ(A) = max_{1≤i≤n} |λ_i|.

Note that ρ(A) ≥ 0 for all A. Furthermore,

ρ(A) ≤ ‖A‖ for all subordinate matrix norms.

Proof. Let e_i be an eigenvector of A, so Ae_i = λ_i e_i, where λ_i is the associated eigenvalue. By definition e_i ≠ 0, so ‖e_i‖ > 0. Then

‖Ae_i‖ = ‖λ_i e_i‖ = |λ_i| ‖e_i‖,

and, by Rule 4, ‖Ae_i‖ ≤ ‖A‖ ‖e_i‖, so

|λ_i| ‖e_i‖ ≤ ‖A‖ ‖e_i‖, i.e. |λ_i| ≤ ‖A‖.

This result holds for every λ_i, and hence

ρ(A) = max_{1≤i≤n} |λ_i| ≤ ‖A‖.

Note, the spectral radius is not a norm! This is easily shown: one can choose 2 × 2 matrices A and B whose eigenvalues are {1, 0} and {0, 1} respectively, which gives

ρ(A) = max(|1|, |0|) = 1 and ρ(B) = max(|0|, |1|) = 1,

while the eigenvalues of A + B are {−1, 3}, so that

ρ(A + B) = max(|−1|, |3|) = 3 > ρ(A) + ρ(B) = 2.

So the spectral radius does not satisfy (1.3), rule 3 for a norm.
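Both facts, ρ(A) ≤ ‖A‖ and the failure of the triangle inequality for ρ, can be checked with small matrices. The particular 2 × 2 matrices below are my own hypothetical choices, constructed to have the eigenvalue sets quoted above ({1, 0}, {0, 1} and {3, −1}); the notes' original entries were not preserved.

```python
import math

def eig2(M):
    # eigenvalues of a 2x2 matrix via the quadratic formula
    (a, b), (c, d) = M
    tr, det = a + d, a * d - b * c
    disc = math.sqrt(tr * tr - 4 * det)  # assumes real eigenvalues
    return (tr + disc) / 2, (tr - disc) / 2

def rho(M):
    # spectral radius: largest eigenvalue in absolute value
    return max(abs(lam) for lam in eig2(M))

def mat_inf_norm(M):
    return max(sum(abs(a) for a in row) for row in M)

A = [[1, 2], [0, 0]]   # eigenvalues 1, 0
B = [[0, 0], [2, 1]]   # eigenvalues 0, 1
S = [[1, 2], [2, 1]]   # A + B, eigenvalues 3, -1

# rho(A) <= ||A|| for the subordinate infinity norm
assert rho(A) <= mat_inf_norm(A)

print(rho(A), rho(B), rho(S))  # 1.0 1.0 3.0: rho(A+B) > rho(A) + rho(B)
```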
1.6 The Condition Number and Ill-Conditioned Matrices

Suppose we are interested in solving Ax = b, where A is n × n and x, b are n × 1, and the elements of b are subject to small errors δb, so that instead of finding x we find x̃, where A x̃ = b + δb. How big is the disturbance in x in relation to the disturbance in b? (i.e. How sensitive is this system of linear equations?)

Since Ax = b and A x̃ = b + δb, we have A(x̃ − x) = δb, and hence the change in x is given by

x̃ − x = A^{−1} δb.

We are interested in the size of the errors, so let us take norms:

‖x̃ − x‖ = ‖A^{−1} δb‖ ≤ ‖A^{−1}‖ ‖δb‖

for some norm. This gives a measure of the actual change in the solution. However, a more useful measure is the relative change, so we also consider

‖b‖ = ‖Ax‖ ≤ ‖A‖ ‖x‖, so that 1/‖x‖ ≤ ‖A‖ / ‖b‖.

Hence, the relative change in x is bounded by

‖x̃ − x‖ / ‖x‖ ≤ ‖A‖ ‖A^{−1}‖ ‖δb‖ / ‖b‖,

i.e. the relative change in x is at worst the relative change in b times ‖A‖ ‖A^{−1}‖.

The quantity K(A) = ‖A‖ ‖A^{−1}‖ is called the condition number of A with respect to the norm chosen. The condition number gives an indication of the potential sensitivity of a linear system of equations. The smaller the condition number K(A), the smaller the possible change in x. If K(A) is very large, the solution x̃ is considered unreliable. We would like the condition number to be small, but it is easy to see that it will never be < 1.
Note, AA^{−1} = I and thus ‖AA^{−1}‖ = ‖I‖. Using Rule 5 (1.9), we find

‖I‖ = ‖AA^{−1}‖ ≤ ‖A‖ ‖A^{−1}‖ = K(A),

and, using the definition of the subordinate matrix norm,

‖I‖ = max_{x≠0} ( ‖Ix‖ / ‖x‖ ) = max_{x≠0} ( ‖x‖ / ‖x‖ ) = 1.

Hence K(A) ≥ 1. If K(A) is modest then we can be confident that small changes to b do not seriously affect the solution. However, if K(A) ≫ 1 (very large) then small changes to b may produce large changes in x. When this happens, we say the system is ill-conditioned. How large is large? That depends on the system. Remember, the condition number bounds the relative error:

‖x̃ − x‖ / ‖x‖ ≤ ‖A‖ ‖A^{−1}‖ ‖δb‖ / ‖b‖.

If the relative error ‖δb‖ / ‖b‖ = 10^{−6} and we wish to know x to within a relative error of 10^{−4}, then provided the condition number satisfies K(A) < 100 it would be considered small.

1.6.1 An Example of Using Norms to Solve a Linear System of Equations

Example 1.6.1: We will consider an innocent-looking set of equations that results in a large condition number. Consider the linear system Wx = b, where W is the Wilson matrix,

W = ( 10  7  8  7 )
    (  7  5  6  5 )
    (  8  6 10  9 )
    (  7  5  9 10 )

and the vector x = (1, 1, 1, 1)^T, so that

Wx = b = (32, 23, 33, 31)^T.

Now suppose we actually solve W x̃ = b + δb, where

δb = (0.1, −0.1, 0.1, −0.1)^T.
First, we must find the inverse of W:

W^{−1} = (  25 −41  10  −6 )
         ( −41  68 −17  10 )
         (  10 −17   5  −3 )
         (  −6  10  −3   2 )

Then from W x̃ = b + δb we find

x̃ = W^{−1} b + W^{−1} δb = (1, 1, 1, 1)^T + (8.2, −13.6, 3.5, −2.1)^T = (9.2, −12.6, 4.5, −1.1)^T.

It is clear that the system is sensitive to changes: a small change to b has had a very large effect on the solution. So the Wilson matrix is an example of an ill-conditioned matrix, and we would expect the condition number to be very large.

To evaluate the condition number K(W) for the Wilson matrix we need to select a particular norm. First, we select the 1-norm (maximum column sum of absolute values) and estimate the error. We have

‖W‖_1 = 33 and ‖W^{−1}‖_1 = 136,

and hence

K_1(W) = ‖W‖_1 ‖W^{−1}‖_1 = 33 × 136 = 4488,

which of course is considerably bigger than 1. Remember that

‖x̃ − x‖_1 / ‖x‖_1 ≤ K_1(W) ‖δb‖_1 / ‖b‖_1,

so that the relative error in x is at worst 4488 × (0.4 / 119) ≈ 15.1.

So far we have used the 1-norm, but it is more natural to look at the ∞-norm, which deals with maximum values. Since W is symmetric, ‖W‖_1 = ‖W‖_∞ and ‖W^{−1}‖_1 = ‖W^{−1}‖_∞, so

‖x̃ − x‖_∞ / ‖x‖_∞ ≤ K_∞(W) ‖δb‖_∞ / ‖b‖_∞ = 4488 × max(|0.1|, |−0.1|, |0.1|, |−0.1|) / max(|32|, |23|, |33|, |31|) = 4488 × 0.1 / 33 = 13.6.

This is exactly equal to the biggest error we found (|x̃_2 − x_2| = 13.6 with ‖x‖_∞ = 1), so it is a very realistic (reliable) estimate. For this example, the bound provides a good estimate of the scale of any change in the solution.
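The whole Wilson-matrix computation can be reproduced with exact rational arithmetic. The sketch below assumes the standard Wilson matrix entries (which are consistent with every norm and vector quoted above) and verifies the stated inverse before using it.

```python
from fractions import Fraction

W = [[10, 7, 8, 7],
     [7, 5, 6, 5],
     [8, 6, 10, 9],
     [7, 5, 9, 10]]

Winv = [[25, -41, 10, -6],    # claimed exact inverse of W
        [-41, 68, -17, 10],
        [10, -17, 5, -3],
        [-6, 10, -3, 2]]

def one_norm(A):
    # maximum absolute column sum
    n = len(A)
    return max(sum(abs(A[i][j]) for i in range(n)) for j in range(n))

# check that W * Winv really is the identity
prod = [[sum(W[i][k] * Winv[k][j] for k in range(4)) for j in range(4)]
        for i in range(4)]
assert prod == [[int(i == j) for j in range(4)] for i in range(4)]

K1 = one_norm(W) * one_norm(Winv)
print(one_norm(W), one_norm(Winv), K1)  # 33 136 4488

# perturbation of the solution: x~ - x = Winv * db
db = [Fraction(1, 10), Fraction(-1, 10), Fraction(1, 10), Fraction(-1, 10)]
dx = [sum(Winv[i][j] * db[j] for j in range(4)) for i in range(4)]
print([float(v) for v in dx])  # [8.2, -13.6, 3.5, -2.1]
```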
1.7 Some Useful Results for Determining the Condition Number

1.7.1 Finding Norms of Inverse Matrices

To estimate condition numbers of matrices we need to find norms of inverses. To find an inverse, we have to solve a linear system, and if the linear system is sensitive, how reliable is the value we get for A^{−1}? Ideally, we wish to estimate ‖A^{−1}‖ without needing to find A^{−1}. One possible result that might help is the following.

Suppose B is a nonsingular matrix, and write B = I + A. Then BB^{−1} = I gives

(I + A)(I + A)^{−1} = I,

or

(I + A)^{−1} + A(I + A)^{−1} = I,

so

(I + A)^{−1} = I − A(I + A)^{−1}.

Taking norms and using Rules 3 & 5, (1.7) and (1.9), we find

‖(I + A)^{−1}‖ = ‖I − A(I + A)^{−1}‖ ≤ ‖I‖ + ‖A‖ ‖(I + A)^{−1}‖,

so

‖(I + A)^{−1}‖ (1 − ‖A‖) ≤ 1.

If ‖A‖ < 1, then

‖B^{−1}‖ = ‖(I + A)^{−1}‖ ≤ 1 / (1 − ‖A‖), where B = I + A.

Example 1.7.1: Consider the matrix

B = (  1   1/4  1/4 )
    ( 1/4   1   1/4 )
    ( 1/4  1/4   1  )
Then

A = B − I = (  0   1/4  1/4 )
            ( 1/4   0   1/4 )
            ( 1/4  1/4   0  )

Using the infinity norm, ‖A‖_∞ = 1/2 < 1, so

‖B^{−1}‖_∞ ≤ 1 / (1 − 1/2) = 2, and ‖B‖_∞ = 1.5.

Hence

K_∞(B) = ‖B‖_∞ ‖B^{−1}‖_∞ ≤ 1.5 × 2, i.e. K_∞(B) ≤ 3,

which is quite modest.

1.7.2 Limits on the Condition Number

The spectral radius result states that subordinate matrix norms are always greater than (or equal to) the absolute values of the eigenvalues. This is true for all eigenvalues. Suppose

|λ_1| ≥ |λ_2| ≥ ... ≥ |λ_{n−1}| ≥ |λ_n| > 0,

i.e. there is a largest and a smallest eigenvalue. Then the spectral radius result, for all subordinate norms, implies that

‖A‖ ≥ |λ_1| = max_i |λ_i| = ρ(A).

Furthermore, if the eigenvalues of A are λ_i and A is nonsingular, then from Ae_i = λ_i e_i we have

(1/λ_i) e_i = A^{−1} e_i,

so the eigenvalues of A^{−1} are 1/λ_i (A nonsingular implies λ_i ≠ 0). Hence the spectral radius result implies

‖A^{−1}‖ ≥ max_i |1/λ_i| = 1/|λ_n|.

Thus, the condition number of A satisfies

K(A) = ‖A‖ ‖A^{−1}‖ ≥ |λ_1| / |λ_n|,

where |λ_1| / |λ_n| is the ratio of the largest to the smallest eigenvalue in absolute value. This ratio gives an indication of just how big the condition number can be, i.e. if the ratio is large, then K(A) will be large (and the matrix is ill-conditioned). Also, this result illustrates another simple fact, namely that scaling a matrix does not affect the condition number:

K(αA) = K(A).
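Example 1.7.1 is a quick numerical check. As an aside not taken from the notes, the exact inverse of this B is (4/3)I − (2/9)J (J the all-ones matrix), whose infinity norm 14/9 ≈ 1.56 sits comfortably inside the bound of 2; the code verifies that inverse by multiplication rather than assuming it.

```python
from fractions import Fraction as F

def inf_norm(M):
    # maximum absolute row sum
    return max(sum(abs(a) for a in row) for row in M)

B = [[F(1), F(1, 4), F(1, 4)],
     [F(1, 4), F(1), F(1, 4)],
     [F(1, 4), F(1, 4), F(1)]]
A = [[B[i][j] - int(i == j) for j in range(3)] for i in range(3)]

a = inf_norm(A)
assert a == F(1, 2)           # ||A||_inf = 1/2 < 1
bound = 1 / (1 - a)           # so ||B^{-1}||_inf <= 2
print(inf_norm(B) * bound)    # K_inf(B) <= 3/2 * 2 = 3

# check the exact inverse (4/3)I - (2/9)J by multiplying it out
Binv = [[F(10, 9) if i == j else F(-2, 9) for j in range(3)] for i in range(3)]
prod = [[sum(B[i][k] * Binv[k][j] for k in range(3)) for j in range(3)]
        for i in range(3)]
assert prod == [[int(i == j) for j in range(3)] for i in range(3)]
print(inf_norm(Binv))         # 14/9, inside the bound of 2
```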
1.8 Examples of Ill-Conditioned Matrices

Example 1.8.1: An almost singular matrix is ill-conditioned. Take a symmetric 2 × 2 matrix

A = ( 1  a )
    ( a  1 )

with the off-diagonal entry a chosen so that det(A) = 1 − a^2 = 10^{−7}, which is very small. The eigenvalues of A are given by

(1 − λ)^2 − a^2 = 0,

or, using the expansion (1 + x)^{1/2} ≈ 1 + x/2 − x^2/8 + ...,

1 − λ = ±a = ±(1 − 10^{−7})^{1/2} ≈ ±(1 − 0.5 × 10^{−7}).

Hence, we find eigenvalues λ_1 ≈ 2 and λ_2 ≈ 0.5 × 10^{−7}, and hence the condition number

K(A) ≥ λ_1 / λ_2 ≈ 4 × 10^7.

So, as you might expect, an almost singular matrix is ill-conditioned.

Example 1.8.2: The Hilbert matrix is notorious for being ill-conditioned. Consider the n × n Hilbert matrix

H = (  1    1/2   1/3  ...   1/n     )
    ( 1/2   1/3   1/4  ...           )
    ( 1/3   1/4   1/5  ...           )
    ( ...                   ...      )
    ( 1/n   ...       ...  1/(2n−1)  )

i.e. H_ij = 1/(i + j − 1). The condition numbers K_∞(H_{n×n}) grow extremely rapidly with n: already K_∞ = 748 for n = 3, and by n = 6 the condition number exceeds 10^7. Typical single-precision floating-point accuracy on a computer is approximately 10^{−7}. Apply Gaussian elimination to a 6 × 6 Hilbert matrix on a computer with single-precision arithmetic and the result is likely to be subject to significant error! Such error can be avoided with packages like
Maple, which allow you to perform calculations using exact arithmetic, i.e. all digits are carried in the calculations. However, error can still occur if the matrix is not specified properly: if the elements are not generated exactly, the wrong result can still be obtained.
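The growth of the Hilbert matrix's condition number in Example 1.8.2 can be demonstrated with exact rational arithmetic. The Gauss-Jordan routine below is a plain sketch without pivoting, which is safe here because Hilbert matrices are positive definite (all leading principal minors are nonzero).

```python
from fractions import Fraction

def hilbert(n):
    # H[i][j] = 1/(i + j + 1) with 0-based indices, i.e. H_ij = 1/(i + j - 1)
    return [[Fraction(1, i + j + 1) for j in range(n)] for i in range(n)]

def inverse(M):
    # Gauss-Jordan elimination in exact rational arithmetic (no pivoting)
    n = len(M)
    A = [list(row) + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(M)]
    for col in range(n):
        piv = A[col][col]
        A[col] = [a / piv for a in A[col]]
        for r in range(n):
            if r != col and A[r][col]:
                f = A[r][col]
                A[r] = [a - f * b for a, b in zip(A[r], A[col])]
    return [row[n:] for row in A]

def inf_norm(M):
    # maximum absolute row sum
    return max(sum(abs(a) for a in row) for row in M)

def hilbert_cond_inf(n):
    H = hilbert(n)
    return inf_norm(H) * inf_norm(inverse(H))

for n in (3, 4, 5, 6):
    print(n, float(hilbert_cond_inf(n)))  # 748 at n = 3, growing very fast
```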
I GROUPS: BASIC DEFINITIONS AND EXAMPLES Definition 1: An operation on a set G is a function : G G G Definition 2: A group is a set G which is equipped with an operation and a special element e G, called
More informationChapter 17. Orthogonal Matrices and Symmetries of Space
Chapter 17. Orthogonal Matrices and Symmetries of Space Take a random matrix, say 1 3 A = 4 5 6, 7 8 9 and compare the lengths of e 1 and Ae 1. The vector e 1 has length 1, while Ae 1 = (1, 4, 7) has length
More informationQuotient Rings and Field Extensions
Chapter 5 Quotient Rings and Field Extensions In this chapter we describe a method for producing field extension of a given field. If F is a field, then a field extension is a field K that contains F.
More informationReview Jeopardy. Blue vs. Orange. Review Jeopardy
Review Jeopardy Blue vs. Orange Review Jeopardy Jeopardy Round Lectures 03 Jeopardy Round $200 How could I measure how far apart (i.e. how different) two observations, y 1 and y 2, are from each other?
More informationExcel supplement: Chapter 7 Matrix and vector algebra
Excel supplement: Chapter 7 atrix and vector algebra any models in economics lead to large systems of linear equations. These problems are particularly suited for computers. The main purpose of this chapter
More information1 Sets and Set Notation.
LINEAR ALGEBRA MATH 27.6 SPRING 23 (COHEN) LECTURE NOTES Sets and Set Notation. Definition (Naive Definition of a Set). A set is any collection of objects, called the elements of that set. We will most
More informationNumerical Analysis Lecture Notes
Numerical Analysis Lecture Notes Peter J. Olver 6. Eigenvalues and Singular Values In this section, we collect together the basic facts about eigenvalues and eigenvectors. From a geometrical viewpoint,
More informationRecall that two vectors in are perpendicular or orthogonal provided that their dot
Orthogonal Complements and Projections Recall that two vectors in are perpendicular or orthogonal provided that their dot product vanishes That is, if and only if Example 1 The vectors in are orthogonal
More informationNotes on Orthogonal and Symmetric Matrices MENU, Winter 2013
Notes on Orthogonal and Symmetric Matrices MENU, Winter 201 These notes summarize the main properties and uses of orthogonal and symmetric matrices. We covered quite a bit of material regarding these topics,
More informationBrief Introduction to Vectors and Matrices
CHAPTER 1 Brief Introduction to Vectors and Matrices In this chapter, we will discuss some needed concepts found in introductory course in linear algebra. We will introduce matrix, vector, vectorvalued
More information7 Gaussian Elimination and LU Factorization
7 Gaussian Elimination and LU Factorization In this final section on matrix factorization methods for solving Ax = b we want to take a closer look at Gaussian elimination (probably the best known method
More informationVector Spaces. Chapter 2. 2.1 R 2 through R n
Chapter 2 Vector Spaces One of my favorite dictionaries (the one from Oxford) defines a vector as A quantity having direction as well as magnitude, denoted by a line drawn from its original to its final
More informationLINEAR ALGEBRA W W L CHEN
LINEAR ALGEBRA W W L CHEN c W W L Chen, 1997, 2008 This chapter is available free to all individuals, on understanding that it is not to be used for financial gain, and may be downloaded and/or photocopied,
More informationLeastSquares Intersection of Lines
LeastSquares Intersection of Lines Johannes Traa  UIUC 2013 This writeup derives the leastsquares solution for the intersection of lines. In the general case, a set of lines will not intersect at a
More informationLecture 5: Singular Value Decomposition SVD (1)
EEM3L1: Numerical and Analytical Techniques Lecture 5: Singular Value Decomposition SVD (1) EE3L1, slide 1, Version 4: 25Sep02 Motivation for SVD (1) SVD = Singular Value Decomposition Consider the system
More informationThese axioms must hold for all vectors ū, v, and w in V and all scalars c and d.
DEFINITION: A vector space is a nonempty set V of objects, called vectors, on which are defined two operations, called addition and multiplication by scalars (real numbers), subject to the following axioms
More informationSection 1.1. Introduction to R n
The Calculus of Functions of Several Variables Section. Introduction to R n Calculus is the study of functional relationships and how related quantities change with each other. In your first exposure to
More informationRecall the basic property of the transpose (for any A): v A t Aw = v w, v, w R n.
ORTHOGONAL MATRICES Informally, an orthogonal n n matrix is the ndimensional analogue of the rotation matrices R θ in R 2. When does a linear transformation of R 3 (or R n ) deserve to be called a rotation?
More informationFinite dimensional C algebras
Finite dimensional C algebras S. Sundar September 14, 2012 Throughout H, K stand for finite dimensional Hilbert spaces. 1 Spectral theorem for selfadjoint opertors Let A B(H) and let {ξ 1, ξ 2,, ξ n
More informationLinear Algebra: Determinants, Inverses, Rank
D Linear Algebra: Determinants, Inverses, Rank D 1 Appendix D: LINEAR ALGEBRA: DETERMINANTS, INVERSES, RANK TABLE OF CONTENTS Page D.1. Introduction D 3 D.2. Determinants D 3 D.2.1. Some Properties of
More informationBasics of Polynomial Theory
3 Basics of Polynomial Theory 3.1 Polynomial Equations In geodesy and geoinformatics, most observations are related to unknowns parameters through equations of algebraic (polynomial) type. In cases where
More informationα = u v. In other words, Orthogonal Projection
Orthogonal Projection Given any nonzero vector v, it is possible to decompose an arbitrary vector u into a component that points in the direction of v and one that points in a direction orthogonal to v
More informationLINEAR ALGEBRA. September 23, 2010
LINEAR ALGEBRA September 3, 00 Contents 0. LUdecomposition.................................... 0. Inverses and Transposes................................. 0.3 Column Spaces and NullSpaces.............................
More informationSolution to Homework 2
Solution to Homework 2 Olena Bormashenko September 23, 2011 Section 1.4: 1(a)(b)(i)(k), 4, 5, 14; Section 1.5: 1(a)(b)(c)(d)(e)(n), 2(a)(c), 13, 16, 17, 18, 27 Section 1.4 1. Compute the following, if
More informationElementary Linear Algebra
Elementary Linear Algebra Kuttler January, Saylor URL: http://wwwsaylororg/courses/ma/ Saylor URL: http://wwwsaylororg/courses/ma/ Contents Some Prerequisite Topics Sets And Set Notation Functions Graphs
More informationLinear Programming. March 14, 2014
Linear Programming March 1, 01 Parts of this introduction to linear programming were adapted from Chapter 9 of Introduction to Algorithms, Second Edition, by Cormen, Leiserson, Rivest and Stein [1]. 1
More informationNotes on metric spaces
Notes on metric spaces 1 Introduction The purpose of these notes is to quickly review some of the basic concepts from Real Analysis, Metric Spaces and some related results that will be used in this course.
More informationApplied Linear Algebra I Review page 1
Applied Linear Algebra Review 1 I. Determinants A. Definition of a determinant 1. Using sum a. Permutations i. Sign of a permutation ii. Cycle 2. Uniqueness of the determinant function in terms of properties
More informationMathematical Methods of Engineering Analysis
Mathematical Methods of Engineering Analysis Erhan Çinlar Robert J. Vanderbei February 2, 2000 Contents Sets and Functions 1 1 Sets................................... 1 Subsets.............................
More informationSYSTEMS OF EQUATIONS AND MATRICES WITH THE TI89. by Joseph Collison
SYSTEMS OF EQUATIONS AND MATRICES WITH THE TI89 by Joseph Collison Copyright 2000 by Joseph Collison All rights reserved Reproduction or translation of any part of this work beyond that permitted by Sections
More informationLEARNING OBJECTIVES FOR THIS CHAPTER
CHAPTER 2 American mathematician Paul Halmos (1916 2006), who in 1942 published the first modern linear algebra book. The title of Halmos s book was the same as the title of this chapter. FiniteDimensional
More informationMAT 242 Test 2 SOLUTIONS, FORM T
MAT 242 Test 2 SOLUTIONS, FORM T 5 3 5 3 3 3 3. Let v =, v 5 2 =, v 3 =, and v 5 4 =. 3 3 7 3 a. [ points] The set { v, v 2, v 3, v 4 } is linearly dependent. Find a nontrivial linear combination of these
More informationOperation Count; Numerical Linear Algebra
10 Operation Count; Numerical Linear Algebra 10.1 Introduction Many computations are limited simply by the sheer number of required additions, multiplications, or function evaluations. If floatingpoint
More information4. How many integers between 2004 and 4002 are perfect squares?
5 is 0% of what number? What is the value of + 3 4 + 99 00? (alternating signs) 3 A frog is at the bottom of a well 0 feet deep It climbs up 3 feet every day, but slides back feet each night If it started
More informationSolving Linear Systems of Equations. Gerald Recktenwald Portland State University Mechanical Engineering Department gerry@me.pdx.
Solving Linear Systems of Equations Gerald Recktenwald Portland State University Mechanical Engineering Department gerry@me.pdx.edu These slides are a supplement to the book Numerical Methods with Matlab:
More information2.3 Convex Constrained Optimization Problems
42 CHAPTER 2. FUNDAMENTAL CONCEPTS IN CONVEX OPTIMIZATION Theorem 15 Let f : R n R and h : R R. Consider g(x) = h(f(x)) for all x R n. The function g is convex if either of the following two conditions
More informationDecember 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B. KITCHENS
December 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B KITCHENS The equation 1 Lines in twodimensional space (1) 2x y = 3 describes a line in twodimensional space The coefficients of x and y in the equation
More informationLecture 5 Principal Minors and the Hessian
Lecture 5 Principal Minors and the Hessian Eivind Eriksen BI Norwegian School of Management Department of Economics October 01, 2010 Eivind Eriksen (BI Dept of Economics) Lecture 5 Principal Minors and
More informationA linear combination is a sum of scalars times quantities. Such expressions arise quite frequently and have the form
Section 1.3 Matrix Products A linear combination is a sum of scalars times quantities. Such expressions arise quite frequently and have the form (scalar #1)(quantity #1) + (scalar #2)(quantity #2) +...
More informationAlgebra I Vocabulary Cards
Algebra I Vocabulary Cards Table of Contents Expressions and Operations Natural Numbers Whole Numbers Integers Rational Numbers Irrational Numbers Real Numbers Absolute Value Order of Operations Expression
More informationSection 4.4 Inner Product Spaces
Section 4.4 Inner Product Spaces In our discussion of vector spaces the specific nature of F as a field, other than the fact that it is a field, has played virtually no role. In this section we no longer
More informationMatrix Algebra. Some Basic Matrix Laws. Before reading the text or the following notes glance at the following list of basic matrix algebra laws.
Matrix Algebra A. Doerr Before reading the text or the following notes glance at the following list of basic matrix algebra laws. Some Basic Matrix Laws Assume the orders of the matrices are such that
More informationUnified Lecture # 4 Vectors
Fall 2005 Unified Lecture # 4 Vectors These notes were written by J. Peraire as a review of vectors for Dynamics 16.07. They have been adapted for Unified Engineering by R. Radovitzky. References [1] Feynmann,
More information1 if 1 x 0 1 if 0 x 1
Chapter 3 Continuity In this chapter we begin by defining the fundamental notion of continuity for real valued functions of a single real variable. When trying to decide whether a given function is or
More informationTOPIC 4: DERIVATIVES
TOPIC 4: DERIVATIVES 1. The derivative of a function. Differentiation rules 1.1. The slope of a curve. The slope of a curve at a point P is a measure of the steepness of the curve. If Q is a point on the
More informationLS.6 Solution Matrices
LS.6 Solution Matrices In the literature, solutions to linear systems often are expressed using square matrices rather than vectors. You need to get used to the terminology. As before, we state the definitions
More information