# Vector and Matrix Norms


## 1.1 Vector Spaces

Let $F$ be a field (such as the real numbers, $\mathbb{R}$, or the complex numbers, $\mathbb{C}$) with elements called scalars. A **vector space** $V$ over the field $F$ is a non-empty set of objects (called vectors) on which two binary operations, (vector) addition and (scalar) multiplication, are defined and satisfy the axioms below.

- **Addition** is a rule which associates a vector in $V$ with each pair of vectors $x, y \in V$; that vector is called the sum $x + y$.
- **Scalar multiplication** is a rule which associates a vector in $V$ with each scalar $\alpha \in F$ and each vector $x \in V$; it is called the scalar multiple $\alpha x$.

For $V$ to be called a vector space, the two operations above must satisfy the following axioms for all $x, y, w \in V$:

1. Vector addition is commutative: $x + y = y + x$;
2. Vector addition is associative: $x + (y + w) = (x + y) + w$;
3. Vector addition has an identity: there is a vector in $V$, called the zero vector $0$, such that $x + 0 = x$;
4. Vector addition has an inverse: for each $x \in V$ there is an inverse (or negative) element $(-x) \in V$ such that $x + (-x) = 0$;
5. Distributivity holds for scalar multiplication over vector addition: $\alpha(x + y) = \alpha x + \alpha y$ for all $\alpha \in F$;
6. Distributivity holds for scalar multiplication over field addition: $(\alpha + \beta)x = \alpha x + \beta x$ for all $\alpha, \beta \in F$;
7. Scalar multiplication is compatible with field multiplication: $\alpha(\beta x) = (\alpha\beta)x$ for all $\alpha, \beta \in F$;
8. Scalar multiplication has an identity element: $1x = x$, where $1$ is the multiplicative identity of $F$.

Any collection of objects together with two operations which satisfy the above axioms is a vector space. In the context of numerical analysis a vector space is often called a **linear space**.

**Example 1.1.1** An obvious example of a linear space is $\mathbb{R}^3$ with the usual definitions of addition of 3D vectors and scalar multiplication (Fig. 1.1).

Figure 1.1: A pictorial example of some vectors belonging to the linear space $\mathbb{R}^3$.

**Example 1.1.2** Another example of a linear space is the set of all continuous functions $f(x)$ on $(-\infty, \infty)$, with the usual definitions for the addition of functions and the scalar multiplication of functions, usually denoted by $C(-\infty, \infty)$. If $f(x)$ and $g(x) \in C(-\infty, \infty)$ then $f + g$ and $\alpha f(x)$ are also continuous on $(-\infty, \infty)$, and the axioms are easily verified.

**Example 1.1.3** A subspace of $C(-\infty, \infty)$ is $P_n$: the space of polynomials of degree at most $n$. [We will meet this vector space later in the course.] Note, it must be of degree *at most* $n$ so that addition (and subtraction) produce members of $P_n$; e.g. $(x - x^2) \in P_2$, $x^2 \in P_2$ and $(x - x^2) + x^2 = x \in P_2$.

## 1.2 The Basis and Dimension of a Vector Space

If $V$ is a vector space and $S = \{x_1, x_2, \ldots, x_n\}$ is a set of vectors in $V$ such that

- $S$ is a **linearly independent** set (a set of vectors is linearly independent if none of them can be written as a linear combination of finitely many other vectors in the set), and
- $S$ **spans** $V$ (a spanning set is a set of vectors in $V$ for which every vector in $V$ can be written as a linear combination of the vectors in the set),

then $S$ is a **basis** for $V$, and the **dimension** of $V$ is $n$.

**Example 1.2.1** A basis for Example 1.1.1, where $V = \mathbb{R}^3$, is $\mathbf{i} = (1,0,0)$, $\mathbf{j} = (0,1,0)$ and $\mathbf{k} = (0,0,1)$, and the dimension of the space is 3.

**Example 1.2.2** Example 1.1.2 has infinite dimension, as no finite set spans $C(-\infty, \infty)$.

**Example 1.2.3** A basis for Example 1.1.3, where $V = P_n$, is the set of monomials $1, x, x^2, \ldots, x^n$. The dimension of the space is $n + 1$.

## 1.3 Normed Linear Spaces

### 1.3.1 Vector Norm

We require some method to measure the magnitude of a vector for error analysis. We generalise the concept of the length of a vector in 3D to $n$ dimensions by defining a norm. Given a vector/linear space $V$, a **norm**, denoted by $\|x\|$ for $x \in V$, is a real number such that

$$\|x\| > 0 \quad \text{for } x \neq 0, \quad (\|0\| = 0), \tag{1.1}$$
$$\|\alpha x\| = |\alpha|\,\|x\|, \quad \forall \alpha \in \mathbb{R}, \tag{1.2}$$
$$\|x + y\| \le \|x\| + \|y\|. \tag{1.3}$$

The norm is a measure of the size of the vector $x$: Equation (1.1) requires the size to be positive, Equation (1.2) requires the size to scale as the vector is scaled, and Equation (1.3) is known as the **triangle inequality** and has its origins in the notion of distance in $\mathbb{R}^3$. Any mapping of an $n$-dimensional vector space onto a subset of $\mathbb{R}$ that satisfies these three requirements can be called a norm. The space together with a defined norm is called a **Normed Linear Space**.

**Example 1.3.1** For the vector space $V = \mathbb{R}^n$ with $x \in V$ given by $x = (x_1, x_2, \ldots, x_n)$, an obvious definition of a norm is
$$\|x\| = \left(x_1^2 + x_2^2 + \cdots + x_n^2\right)^{1/2}.$$
This is just a generalisation of the normed linear space $V = \mathbb{R}^3$ with the norm defined as the magnitude of the vector $x \in \mathbb{R}^3$.

**Example 1.3.2** Another norm on $V = \mathbb{R}^n$ is
$$\|x\| = \max_{1 \le i \le n} |x_i|,$$
and it is easy to verify that the three axioms are obeyed (see Tutorial Sheet 1).

**Example 1.3.3** Let $V = C[a,b]$, the space of all continuous functions on the interval $[a,b]$, and define
$$\|f\| = \left\{\int_a^b (f(x))^2 \, dx\right\}^{1/2}.$$
This is also a normed linear space.

### 1.3.2 Inner Product Space

One of the most familiar norms is the magnitude of a vector $x = (x_1, x_2, x_3) \in \mathbb{R}^3$ (Example 1.3.1). This norm is commonly denoted $|x|$ and equals
$$|x| = \|x\| = \sqrt{x_1^2 + x_2^2 + x_3^2} = \sqrt{x \cdot x},$$
i.e. it is equal to the square root of the dot product of $x$ with itself. The dot product is an example of an **inner product**, and some other norms are also induced by inner products. In 1.3.1 we presented examples of normed linear spaces. Two of these can be obtained from the alternative route of **inner product spaces**, and it is useful to observe this fact as it gives access to an important result, namely the Cauchy-Schwarz inequality. For such spaces
$$\|x\| = \{\langle x, x \rangle\}^{1/2},$$
where
$$\langle x, y \rangle = \sum_{i=1}^n x_i y_i \quad \text{in Example 1.3.1,} \qquad \langle f, g \rangle = \int_a^b f(x)\,g(x)\, dx \quad \text{in Example 1.3.3.}$$
Hence, inner products give rise to norms, but not all norms can be cast in terms of inner products. The formal definition of an inner product is given on Tutorial Sheet 1, and it is easy to show that the above two examples satisfy the stated requirements. The Cauchy-Schwarz inequality is given by
$$\langle x, y \rangle^2 \le \langle x, x \rangle\,\langle y, y \rangle$$
and can be used to confirm the triangle inequality for norms. For example, given $\|x\| = \{\langle x, x\rangle\}^{1/2}$,
$$\|x+y\|^2 = \langle x+y,\, x+y\rangle = \langle x,x\rangle + \langle y,y\rangle + 2\langle x,y\rangle$$
$$\le \langle x,x\rangle + \langle y,y\rangle + 2\sqrt{\langle x,x\rangle\,\langle y,y\rangle} \quad \text{(using the C-S inequality)},$$
so
$$\|x+y\|^2 \le (\|x\| + \|y\|)^2, \tag{1.4}$$
and hence the triangle inequality holds for all such norms. A particular example is given on the tutorial sheet.

### 1.3.3 Commonly Used Norms and Normed Linear Spaces

In theory, any mapping of an $n$-dimensional space onto the real numbers that satisfies (1.1)-(1.3) is a norm. In practice we are only concerned with a small number of normed linear spaces, and the most frequently used are the following:

1. The linear space $\mathbb{R}^n$ (Euclidean space), where $x = (x_1, x_2, \ldots, x_i, \ldots, x_n)$, together with the norm
$$\|x\|_p = \left(\sum_{i=1}^n |x_i|^p\right)^{1/p}, \quad p \ge 1,$$
is known as the $L_p$ normed linear space. The most common are the one norm, $L_1$, and the two norm, $L_2$, linear spaces, where $p = 1$ and $p = 2$ respectively.

2. The other standard norm for the space $\mathbb{R}^n$ is the infinity, or maximum, norm given by
$$\|x\|_\infty = \max_{1 \le i \le n} |x_i|.$$
The vector space $\mathbb{R}^n$ together with the infinity norm is commonly denoted $L_\infty$.

**Example 1.3.4** Consider the vector $x = (3, -1, 2, 0, -4)$, which belongs to the vector space $\mathbb{R}^5$. Determine its (i) one norm, (ii) two norm and (iii) infinity norm.

(i) One norm: $\|x\|_1 = 3 + 1 + 2 + 0 + 4 = 10$.
(ii) Two norm: $\|x\|_2 = (9 + 1 + 4 + 0 + 16)^{1/2} = \sqrt{30}$.
(iii) Infinity norm: $\|x\|_\infty = \max(|3|, |{-1}|, |2|, |0|, |{-4}|) = 4$.

Note: each is a different way of measuring the size of a vector in $\mathbb{R}^n$.

For $C[a,b]$ (continuous functions) the corresponding norms are
$$\|f(x)\|_p = \left(\int_a^b |f(x)|^p\, dx\right)^{1/p}, \quad p \ge 1,$$
with $p = 1$ and $p = 2$ the usual cases, plus
$$\|f(x)\|_\infty = \max_{a \le x \le b} |f(x)|.$$

## 1.4 Sub-ordinate Matrix Norms

We are interested in analysing methods for solving linear systems, so we need to be able to measure the size of vectors in $\mathbb{R}^n$ and of any associated matrices in a compatible way. To achieve this end, we define a **sub-ordinate matrix norm**. For the normed linear space $\{\mathbb{R}^n, \|x\|\}$, where $\|x\|$ is some norm, we define the norm of the matrix $A_{n\times n}$ which is sub-ordinate to the vector norm $\|x\|$ as
$$\|A\| = \max_{x \neq 0} \left(\frac{\|Ax\|}{\|x\|}\right).$$
Note, $Ax$ is a vector ($x \in \mathbb{R}^n \Rightarrow Ax \in \mathbb{R}^n$), so $\|A\|$ is the largest value of the vector norm of $Ax$ normalised over all non-zero vectors $x$. Not surprisingly, the three requirements of a vector norm (1.1)-(1.3) carry over as properties of $\|A\|$. There are two further properties which are a consequence of the definition of $\|A\|$. Hence, sub-ordinate matrix norms satisfy the following five rules:

$$\|A\| > 0 \quad \text{for } A \neq 0, \quad (\|0\| = 0, \text{ where } 0 \text{ is the } n\times n \text{ zero matrix}), \tag{1.5}$$
$$\|\alpha A\| = |\alpha|\,\|A\|, \quad \forall \alpha \in \mathbb{R}, \tag{1.6}$$
$$\|A + B\| \le \|A\| + \|B\|, \tag{1.7}$$
$$\|Ax\| \le \|A\|\,\|x\|, \tag{1.8}$$
$$\|AB\| \le \|A\|\,\|B\|. \tag{1.9}$$

These five rules, together with the three for vector norms (1.1)-(1.3), provide the means for an analysis of a linear system. First, let us justify the above results.

**Rule 1 (1.5):** For a matrix $A_{n\times n}$ with $A \neq 0$, there exists $x \in \mathbb{R}^n$, $x \neq 0$, such that the vector $Ax \neq 0$. So both $\|Ax\| > 0$ and $\|x\| > 0$, thus $\|A\| > 0$. Also, if $A = 0$ then $Ax = 0$ for all $x$, and $\|A\| = 0$.

**Rule 2 (1.6):** Since $Ax$ is a vector and therefore satisfies (1.2), we can show that
$$\|\alpha A\| = \max_{x\neq 0}\left\{\frac{\|\alpha Ax\|}{\|x\|}\right\} = \max_{x\neq 0}\left\{\frac{|\alpha|\,\|Ax\|}{\|x\|}\right\} = |\alpha| \max_{x\neq 0}\left\{\frac{\|Ax\|}{\|x\|}\right\} = |\alpha|\,\|A\|.$$

**Rule 3 (1.7):**
$$\|A + B\| = \max_{x\neq 0}\left\{\frac{\|(A+B)x\|}{\|x\|}\right\} = \frac{\|(A+B)x_m\|}{\|x_m\|},$$
where $x_m$ ($x_m \in \mathbb{R}^n$) maximises the right-hand side. Now let us define $Ax_m = y_m$ and $Bx_m = z_m$ ($y_m, z_m \in \mathbb{R}^n$), so that
$$\|(A+B)x_m\| = \|Ax_m + Bx_m\| = \|y_m + z_m\| \le \|y_m\| + \|z_m\| = \|Ax_m\| + \|Bx_m\|.$$
Therefore
$$\|A+B\| \le \frac{\|Ax_m\| + \|Bx_m\|}{\|x_m\|} \le \max_{x\neq 0}\left\{\frac{\|Ax\|}{\|x\|}\right\} + \max_{x\neq 0}\left\{\frac{\|Bx\|}{\|x\|}\right\}.$$
Hence $\|A+B\| \le \|A\| + \|B\|$.

**Rule 4 (1.8):** By definition,
$$\|A\| \ge \frac{\|Ax\|}{\|x\|} \quad \text{for any } x \neq 0.$$
Hence $\|A\|\,\|x\| \ge \|Ax\|$ for all $x$ (including $x = 0$).

**Rule 5 (1.9):** Finally, consider $\|AB\|$, where $A$ and $B$ are $n\times n$ matrices. Using a similar argument to that used in Rule 3, we have
$$\|AB\| = \max_{x\neq 0}\left\{\frac{\|ABx\|}{\|x\|}\right\} = \frac{\|ABx_m\|}{\|x_m\|},$$
where $x_m$ ($x_m \in \mathbb{R}^n$) maximises the right-hand side. If we define $Bx_m = z_m$ ($z_m \in \mathbb{R}^n$) and use Rule 4 (1.8), then
$$\|ABx_m\| = \|Az_m\| \le \|A\|\,\|z_m\| = \|A\|\,\|Bx_m\| \le \|A\|\,\|B\|\,\|x_m\|.$$
Hence
$$\|AB\| = \frac{\|ABx_m\|}{\|x_m\|} \le \frac{\|A\|\,\|B\|\,\|x_m\|}{\|x_m\|} = \|A\|\,\|B\|.$$

### 1.4.1 Commonly Used Sub-ordinate Matrix Norms

Clearly, the sub-ordinate matrix norm is not easy to calculate directly from its definition, as in theory all vectors $x \in \mathbb{R}^n$ should be considered. Here we will consider the commonly used matrix norms and practical ways to calculate them.
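A quick numerical illustration of the definition: since $\|A\|$ is a maximum over all non-zero $x$, random sampling of the ratio $\|Ax\|_1/\|x\|_1$ can only give a lower-bound estimate of it. The Python sketch below (illustrative code, not part of the original notes) does this for a small example matrix; the estimate approaches, but never exceeds, the maximum column sum of absolute values, which 1.4.1 shows is the exact value of $\|A\|_1$.

```python
import random

def one_norm(x):
    # Vector 1-norm: sum of absolute values
    return sum(abs(xi) for xi in x)

def matvec(A, x):
    # Matrix-vector product Ax
    return [sum(aij * xj for aij, xj in zip(row, x)) for row in A]

def sampled_norm(A, trials=5000):
    # Lower-bound estimate of ||A||_1 = max_{x != 0} ||Ax||_1 / ||x||_1
    best = 0.0
    for _ in range(trials):
        x = [random.uniform(-1.0, 1.0) for _ in range(len(A))]
        if one_norm(x) > 0:
            best = max(best, one_norm(matvec(A, x)) / one_norm(x))
    return best

random.seed(1)
A = [[1.0, -2.0],
     [3.0,  0.5]]
est = sampled_norm(A)
exact = 4.0   # maximum column sum of absolute values: |1| + |3| = 4
print(est <= exact, exact - est < 1.0)
```

Sampling never reaches the true maximum exactly (that requires the special unit vector constructed in the proof below), which is why the closed-form column-sum formula is so useful in practice.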

The easiest matrix norms to compute are those sub-ordinate to the $L_1$ and $L_\infty$ vector norms. These are
$$\|A\|_1 = \max_{x\neq 0}\left(\frac{\sum_{i=1}^n |(Ax)_i|}{\sum_{i=1}^n |x_i|}\right),$$
which is equivalent to the maximum column sum of absolute values of $A$, and
$$\|A\|_\infty = \max_{x\neq 0}\left(\frac{\max_{1\le i\le n} |(Ax)_i|}{\max_{1\le i\le n} |x_i|}\right),$$
which is equivalent to the maximum row sum of absolute values of $A$.

Proof that $\|A\|_1$ is equivalent to the maximum column sum of absolute values of $A$ (the proof of the relation for $\|A\|_\infty$ is similar). The $L_1$ norm is $\|x\|_1 = \sum_{i=1}^n |x_i|$. Let $y = Ax$ ($x, y \in \mathbb{R}^n$ and $A$ an $n\times n$ matrix), so that $y_i = \sum_{j=1}^n a_{ij}x_j$ and
$$\|Ax\|_1 = \|y\|_1 = \sum_{i=1}^n |y_i| = \sum_{i=1}^n \left|\sum_{j=1}^n a_{ij}x_j\right|.$$
Now
$$\left|\sum_{j=1}^n a_{ij}x_j\right| \le \sum_{j=1}^n |a_{ij}|\,|x_j|.$$
Thus
$$\|Ax\|_1 \le \sum_{i=1}^n \sum_{j=1}^n |a_{ij}|\,|x_j|.$$
Changing the order of summation then gives
$$\|Ax\|_1 \le \sum_{j=1}^n |x_j| \left\{\sum_{i=1}^n |a_{ij}|\right\},$$
where $\sum_{i=1}^n |a_{ij}|$ is the sum of absolute values in column $j$. Each column has a sum, $S_j = \sum_{i=1}^n |a_{ij}|$, $1 \le j \le n$, and $S_j \le S_m$, where $m$ is the column with the largest sum. So
$$\|Ax\|_1 \le \sum_{j=1}^n |x_j|\, S_j \le S_m \sum_{j=1}^n |x_j| = S_m \|x\|_1,$$
and hence
$$\frac{\|Ax\|_1}{\|x\|_1} \le S_m,$$
where $S_m$ is the maximum column sum of absolute values. This is true for all non-zero $x$; hence $\|A\|_1 \le S_m$. We now need to determine whether $S_m$ is the maximum value of $\|Ax\|_1 / \|x\|_1$, or whether it is merely an upper bound. In other words, is it possible to pick an $x \in \mathbb{R}^n$ for which we reach $S_m$? Try the following choice:
$$x = [0, \ldots, 0, 1, 0, \ldots, 0]^T, \qquad x_j = \begin{cases} 0 & j \neq m, \\ 1 & j = m. \end{cases}$$
Now, substituting this particular $x$ into $Ax = y$ gives $y_i = \sum_{j=1}^n a_{ij}x_j = a_{im}$, and thus
$$\|Ax\|_1 = \|y\|_1 = \sum_{i=1}^n |y_i| = \sum_{i=1}^n |a_{im}| = S_m.$$
So
$$\frac{\|Ax\|_1}{\|x\|_1} = \frac{S_m}{1} = S_m.$$
This means the bound can be reached with a suitable $x$, and so
$$\|A\|_1 = \max_{x\neq 0}\left(\frac{\|Ax\|_1}{\|x\|_1}\right) = S_m,$$
where $S_m$ is the maximum column sum of absolute values. Hence, $S_m$ is not just an upper bound, it is actually the maximum!

**Example 1.4.1** Determine the matrix norm sub-ordinate to (i) the one norm and (ii) the infinity norm for two $3\times 3$ matrices: $A$, whose columns have absolute-value sums $8, 13, 5$ and whose rows have absolute-value sums $11, 8, 7$, and a symmetric matrix $B$, whose columns have absolute-value sums $6, 5, 2$. Then
$$\|A\|_1 = \max(8, 13, 5) = 13, \qquad \|B\|_1 = \max(6, 5, 2) = 6,$$
$$\|A\|_\infty = \max(11, 8, 7) = 11, \qquad \|B\|_\infty = \max(6, 5, 2) = 6.$$
Note, since $B$ is symmetric, the maximum column sum of absolute values equals the maximum row sum of absolute values, i.e. $\|B\|_1 = \|B\|_\infty$.
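The column-sum and row-sum formulas are straightforward to implement. A minimal sketch (our own example matrix, chosen for illustration, not the matrices of Example 1.4.1):

```python
def matrix_one_norm(A):
    # ||A||_1: maximum column sum of absolute values (Section 1.4.1)
    n = len(A)
    return max(sum(abs(A[i][j]) for i in range(n)) for j in range(n))

def matrix_inf_norm(A):
    # ||A||_inf: maximum row sum of absolute values
    return max(sum(abs(aij) for aij in row) for row in A)

A = [[ 2, -1,  0],
     [-4,  3,  1],
     [ 1,  5, -2]]
print(matrix_one_norm(A))   # column sums 7, 9, 3 -> 9
print(matrix_inf_norm(A))   # row sums 3, 8, 8 -> 8
```

For a symmetric matrix the two functions return the same value, matching the note about $B$ above.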

## 1.5 Spectral Radius

A useful and important quantity associated with matrices is the **spectral radius** of a matrix. An $n\times n$ matrix $A$ has $n$ eigenvalues $\lambda_i$, $i = 1, \ldots, n$. The spectral radius of $A$, denoted by $\rho(A)$, is defined as
$$\rho(A) = \max_{1\le i\le n} |\lambda_i|.$$
Note that $\rho(A) \ge 0$ for all $A \neq 0$. Furthermore,
$$\rho(A) \le \|A\|$$
for all sub-ordinate matrix norms.

*Proof.* Let $e_i$ be an eigenvector of $A$, so $Ae_i = \lambda_i e_i$, where $\lambda_i$ is the associated eigenvalue and $e_i \neq 0$. Then
$$\|Ae_i\| = \|\lambda_i e_i\| = |\lambda_i|\,\|e_i\|,$$
and so, since $\|Ae_i\| \le \|A\|\,\|e_i\|$,
$$|\lambda_i|\,\|e_i\| \le \|A\|\,\|e_i\|, \quad\text{or}\quad |\lambda_i| \le \|A\|.$$
This result holds for every $\lambda_i$, and hence
$$\rho(A) = \max_{1\le i\le n} |\lambda_i| \le \|A\|.$$

Note, the spectral radius is *not* a norm! This is easily shown. Let
$$A = \begin{pmatrix} 1 & 0 \\ 2 & 0 \end{pmatrix} \quad\text{and}\quad B = \begin{pmatrix} 0 & 2 \\ 0 & 1 \end{pmatrix},$$
which gives
$$\rho(A) = \max(|1|, |0|) = 1 \quad\text{and}\quad \rho(B) = \max(|0|, |1|) = 1.$$
However,
$$A + B = \begin{pmatrix} 1 & 2 \\ 2 & 1 \end{pmatrix} \quad\text{and}\quad \rho(A + B) = \max(|{-1}|, |3|) = 3,$$
hence $\rho(A + B) > \rho(A) + \rho(B)$. So the spectral radius does not satisfy (1.3), rule 3 for a norm.
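The counterexample can be verified numerically. The $2\times 2$ matrices below are a reconstruction consistent with the spectral radii quoted in the text (1, 1 and 3); the eigenvalues come from the quadratic characteristic polynomial (illustrative code, not part of the notes):

```python
def eigenvalues_2x2(M):
    # Roots of the characteristic polynomial of a 2x2 matrix;
    # assumes real eigenvalues (true for the matrices used here)
    (a, b), (c, d) = M
    tr, det = a + d, a * d - b * c
    disc = (tr * tr - 4 * det) ** 0.5
    return ((tr + disc) / 2, (tr - disc) / 2)

def spectral_radius_2x2(M):
    # rho(M) = max |lambda_i|
    return max(abs(lam) for lam in eigenvalues_2x2(M))

A  = [[1, 0], [2, 0]]
B  = [[0, 2], [0, 1]]
AB = [[1, 2], [2, 1]]     # A + B
print(spectral_radius_2x2(A), spectral_radius_2x2(B))   # 1.0 1.0
print(spectral_radius_2x2(AB))                          # 3.0 > 1.0 + 1.0
```

Since $3 > 1 + 1$, the triangle inequality fails, confirming that $\rho(\cdot)$ is not a norm.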

## 1.6 The Condition Number and Ill-Conditioned Matrices

Suppose we are interested in solving $Ax = b$, where $A$ is $n\times n$ and $x$, $b$ are $n\times 1$, and the elements of $b$ are subject to small errors $\delta b$, so that instead of finding $x$ we find $\tilde{x}$, where $A\tilde{x} = b + \delta b$. How big is the disturbance in $x$ in relation to the disturbance in $b$? (i.e. how sensitive is this system of linear equations?) Since $Ax = b$ and $A\tilde{x} = b + \delta b$, we have $A(\tilde{x} - x) = \delta b$, and hence the change in $x$ is given by
$$\tilde{x} - x = A^{-1}\delta b.$$
We are interested in the size of the errors, so let us take norms:
$$\|\tilde{x} - x\| = \|A^{-1}\delta b\| \le \|A^{-1}\|\,\|\delta b\|$$
for some norm. This gives a measure of the actual change in the solution. However, a more useful measure is the relative change, and so we also consider
$$\|b\| = \|Ax\| \le \|A\|\,\|x\|, \quad\text{so}\quad \frac{1}{\|x\|} \le \frac{\|A\|}{\|b\|}.$$
Hence, the relative change in $x$, given by
$$\frac{\|\tilde{x} - x\|}{\|x\|} \le \|A\|\,\|A^{-1}\|\,\frac{\|\delta b\|}{\|b\|},$$
is at worst the relative change in $b$ times $\|A\|\,\|A^{-1}\|$. The quantity
$$K(A) = \|A\|\,\|A^{-1}\|$$
is called the **condition number** of $A$ with respect to the norm chosen. The condition number gives an indication of the potential sensitivity of a linear system of equations. The smaller the condition number $K(A)$, the smaller the possible change in $x$. If $K(A)$ is very large, the solution $\tilde{x}$ is considered unreliable. We would like the condition number to be small, but it is easy to see that it can never be less than 1.

Note, $AA^{-1} = I$ and thus $\|AA^{-1}\| = \|I\|$. Using Rule 5 (1.9), we find
$$\|I\| = \|AA^{-1}\| \le \|A\|\,\|A^{-1}\| = K(A),$$
and using the definition of the sub-ordinate matrix norm,
$$\|I\| = \max_{x\neq 0}\left(\frac{\|Ix\|}{\|x\|}\right) = \max_{x\neq 0}\left(\frac{\|x\|}{\|x\|}\right) = 1.$$
Hence, $K(A) \ge 1$.

If $K(A)$ is modest then we can be confident that small changes to $b$ do not seriously affect the solution. However, if $K(A) \gg 1$ (very large) then small changes to $b$ may produce large changes in $x$. When this happens, we say the system is **ill-conditioned**. How large is large? This all depends on the system. Remember, the condition number bounds the relative error:
$$\frac{\|\tilde{x} - x\|}{\|x\|} \le K(A)\,\frac{\|\delta b\|}{\|b\|}.$$
If the relative error is $\|\delta b\| / \|b\| = 10^{-6}$ and we wish to know $x$ to within a relative error of $10^{-4}$, then provided the condition number satisfies $K(A) < 100$ it would be considered small.

### 1.6.1 An Example of Using Norms to Solve a Linear System of Equations

**Example 1.6.1** We will consider an innocent-looking set of equations that results in a large condition number. Consider the linear system $Wx = b$, where $W$ is the Wilson matrix,
$$W = \begin{pmatrix} 10 & 7 & 8 & 7 \\ 7 & 5 & 6 & 5 \\ 8 & 6 & 10 & 9 \\ 7 & 5 & 9 & 10 \end{pmatrix},$$
and the vector $x = (1, 1, 1, 1)^T$, so that
$$Wx = b = (32, 23, 33, 31)^T.$$
Now suppose we actually solve
$$W\tilde{x} = b + \delta b = (32.1, 22.9, 33.1, 30.9)^T, \quad\text{hence}\quad \delta b = (0.1, -0.1, 0.1, -0.1)^T.$$

First, we must find the inverse of $W$:
$$W^{-1} = \begin{pmatrix} 25 & -41 & 10 & -6 \\ -41 & 68 & -17 & 10 \\ 10 & -17 & 5 & -3 \\ -6 & 10 & -3 & 2 \end{pmatrix}.$$
Then from $W\tilde{x} = b + \delta b$, we find
$$\tilde{x} = W^{-1}b + W^{-1}\delta b = (9.2,\ -12.6,\ 4.5,\ -1.1)^T.$$
It is clear that the system is sensitive to changes: a small change to $b$ has had a very large effect on the solution. So the Wilson matrix is an example of an ill-conditioned matrix, and we would expect the condition number to be very large.

To evaluate the condition number $K(W)$ for the Wilson matrix we need to select a particular norm. First, we select the 1-norm (maximum column sum of absolute values) and estimate the error:
$$\|W\|_1 = 33 \quad\text{and}\quad \|W^{-1}\|_1 = 136,$$
and hence
$$K_1(W) = \|W\|_1\,\|W^{-1}\|_1 = 33 \times 136 = 4488,$$
which of course is considerably bigger than 1. Remember that
$$\frac{\|\tilde{x} - x\|_1}{\|x\|_1} \le K_1(W)\,\frac{\|\delta b\|_1}{\|b\|_1},$$
so that the relative error in $x$ is at most
$$4488 \times \left(\frac{0.4}{119}\right) \approx 15.1.$$
So far we have used the 1-norm, but it is more natural to look at the $\infty$-norm, which deals with maximum values. Since $W$ is symmetric, $\|W\|_1 = \|W\|_\infty$ and $\|W^{-1}\|_1 = \|W^{-1}\|_\infty$, so
$$\frac{\|\tilde{x} - x\|_\infty}{\|x\|_\infty} \le K_\infty(W)\,\frac{\|\delta b\|_\infty}{\|b\|_\infty} = 4488 \times \frac{\max(|0.1|, |{-0.1}|, |0.1|, |{-0.1}|)}{\max(|32|, |23|, |33|, |31|)} = 4488 \times \frac{0.1}{33} = 13.6.$$
This is exactly equal to the biggest error we found (the change in the second component, $|\tilde{x}_2 - x_2| = 13.6$), so it is a very realistic (reliable) estimate. For this example, the bound provides a good estimate of the scale of any change in the solution.
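The Wilson-matrix experiment can be reproduced end to end in exact rational arithmetic. The Gaussian-elimination solver below is our own helper (not part of the notes); the matrix is the standard Wilson matrix, whose norms match the values quoted above ($\|W\|_1 = 33$, $\|W^{-1}\|_1 = 136$):

```python
from fractions import Fraction

def solve(A, b):
    # Solve Ax = b by Gaussian elimination with partial pivoting,
    # using exact rational arithmetic so no rounding error intrudes.
    n = len(A)
    M = [[Fraction(A[i][j]) for j in range(n)] + [Fraction(b[i])]
         for i in range(n)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))   # pivot row
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [Fraction(0)] * n
    for i in reversed(range(n)):                            # back substitution
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

W  = [[10, 7, 8, 7], [7, 5, 6, 5], [8, 6, 10, 9], [7, 5, 9, 10]]
b  = [32, 23, 33, 31]
db = [Fraction(1, 10), Fraction(-1, 10), Fraction(1, 10), Fraction(-1, 10)]

x  = solve(W, b)
xt = solve(W, [bi + dbi for bi, dbi in zip(b, db)])
print([float(v) for v in x])    # [1.0, 1.0, 1.0, 1.0]
print([float(v) for v in xt])   # [9.2, -12.6, 4.5, -1.1]
```

A perturbation of size 0.1 in each component of $b$ changes the second solution component by 13.6, exactly as the $\infty$-norm bound predicts.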

## 1.7 Some Useful Results for Determining the Condition Number

### 1.7.1 Finding Norms of Inverse Matrices

To estimate condition numbers of matrices we need to find norms of inverses. To find an inverse we have to solve a linear system, and if the linear system is sensitive, how reliable is the value we get for $A^{-1}$? Ideally, we wish to estimate $\|A^{-1}\|$ without needing to find $A^{-1}$. One possible result that might help is the following. Suppose $B$ is a non-singular matrix, and write $B = I + A$. Then $BB^{-1} = I$ gives
$$(I + A)(I + A)^{-1} = I,$$
or
$$(I + A)^{-1} + A(I + A)^{-1} = I,$$
so
$$(I + A)^{-1} = I - A(I + A)^{-1}.$$
Taking norms and using Rules 3 & 5, (1.7) and (1.9), we find
$$\|(I + A)^{-1}\| = \|I - A(I + A)^{-1}\| \le \|I\| + \|A\|\,\|(I + A)^{-1}\|,$$
so
$$\|(I + A)^{-1}\|\,(1 - \|A\|) \le 1.$$
If $\|A\| < 1$ then
$$\|B^{-1}\| = \|(I + A)^{-1}\| \le \frac{1}{1 - \|A\|}, \quad\text{where } B = I + A.$$

**Example 1.7.1** Consider the matrix
$$B = \begin{pmatrix} 1 & 1/4 & 1/4 \\ 1/4 & 1 & 1/4 \\ 1/4 & 1/4 & 1 \end{pmatrix};$$
then
$$A = B - I = \begin{pmatrix} 0 & 1/4 & 1/4 \\ 1/4 & 0 & 1/4 \\ 1/4 & 1/4 & 0 \end{pmatrix}.$$
Using the infinity norm, $\|A\|_\infty = \tfrac{1}{2} < 1$, so
$$\|B^{-1}\|_\infty \le \frac{1}{1 - 1/2} = 2, \quad\text{and}\quad \|B\|_\infty = 1.5.$$
Hence
$$K_\infty(B) = \|B\|_\infty\,\|B^{-1}\|_\infty \le 1.5 \times 2, \quad\text{or}\quad K_\infty(B) \le 3,$$
which is quite modest.

### 1.7.2 Limits on the Condition Number

The spectral radius result states that sub-ordinate matrix norms are always greater than (or equal to) the absolute values of the eigenvalues. This is true for all eigenvalues. Suppose
$$|\lambda_1| \ge |\lambda_2| \ge \cdots \ge |\lambda_{n-1}| \ge |\lambda_n| > 0,$$
i.e. there is a largest and a smallest eigenvalue. Then the spectral radius result, for all sub-ordinate norms, implies that
$$\|A\| \ge |\lambda_1| = \max_i |\lambda_i| = \rho(A).$$
Furthermore, if the eigenvalues of $A$ are $\lambda_i$ and $A$ is non-singular, then from $Ae_i = \lambda_i e_i$ we have
$$\frac{1}{\lambda_i}\, e_i = A^{-1} e_i,$$
so the eigenvalues of $A^{-1}$ are $1/\lambda_i$ ($A$ non-singular $\Rightarrow \lambda_i \neq 0$). Hence the spectral radius result implies
$$\|A^{-1}\| \ge \max_i \frac{1}{|\lambda_i|} = \frac{1}{|\lambda_n|}.$$
Thus, the condition number for $A$ satisfies
$$K(A) = \|A\|\,\|A^{-1}\| \ge \frac{|\lambda_1|}{|\lambda_n|},$$
where $|\lambda_1|/|\lambda_n|$ is the ratio of the largest to the smallest eigenvalue in absolute value. This ratio gives an indication of just how big the condition number can be: if the ratio is large, then $K(A)$ will be large (and the matrix is ill-conditioned). This result also illustrates another simple fact, namely that scaling a matrix does not affect the condition number: $K(\alpha A) = K(A)$.
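Both results of this section can be checked for the matrix $B$ of Example 1.7.1 in exact rational arithmetic (the Gauss-Jordan helper below is our own, not from the notes):

```python
from fractions import Fraction

def inverse(M):
    # Gauss-Jordan inversion in exact rational arithmetic;
    # assumes the pivots are non-zero (true for this matrix)
    n = len(M)
    aug = [[Fraction(M[i][j]) for j in range(n)] +
           [Fraction(1 if i == j else 0) for j in range(n)] for i in range(n)]
    for k in range(n):
        piv = aug[k][k]
        aug[k] = [v / piv for v in aug[k]]
        for i in range(n):
            if i != k and aug[i][k] != 0:
                f = aug[i][k]
                aug[i] = [vi - f * vk for vi, vk in zip(aug[i], aug[k])]
    return [row[n:] for row in aug]

def inf_norm(M):
    # Maximum row sum of absolute values
    return max(sum(abs(v) for v in row) for row in M)

q = Fraction(1, 4)
B = [[1, q, q], [q, 1, q], [q, q, 1]]
Binv = inverse(B)
print(inf_norm(Binv))                  # 14/9, within the bound 1/(1 - 1/2) = 2
print(inf_norm(B) * inf_norm(Binv))    # exact K_inf(B) = 7/3, below the bound 3
```

The eigenvalues of $B$ are $3/2, 3/4, 3/4$, so the lower bound of 1.7.2 gives $K_\infty(B) \ge 2$; the exact value $7/3$ indeed lies between the lower bound 2 and the upper bound 3.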

## 1.8 Examples of Ill-Conditioned Matrices

**Example 1.8.1** An almost singular matrix is ill-conditioned. Take
$$A = \begin{pmatrix} 1 & a \\ a & 1 \end{pmatrix}, \qquad a = (1 - 10^{-7})^{1/2},$$
then $\det(A) = 1 - a^2 = 10^{-7}$, which is very small. The eigenvalues of $A$ are given by
$$(1 - \lambda)^2 - (1 - 10^{-7}) = 0,$$
or, using the expansion $(1 + x)^{1/2} \approx 1 + \tfrac{1}{2}x - \tfrac{1}{8}x^2 + \cdots$,
$$1 - \lambda = \pm(1 - 10^{-7})^{1/2} \approx \pm(1 - 0.5 \times 10^{-7}).$$
Hence, we find eigenvalues $\lambda_1 \approx 2$ and $\lambda_2 \approx 0.5 \times 10^{-7}$, and hence the condition number satisfies
$$K(A) \ge \frac{|\lambda_1|}{|\lambda_2|} \approx \frac{2}{0.5 \times 10^{-7}} = 4 \times 10^7.$$
So, as you might expect, an almost singular matrix is ill-conditioned.

**Example 1.8.2** The Hilbert matrix is notorious for being ill-conditioned. Consider the $n\times n$ Hilbert matrix
$$H = \begin{pmatrix} 1 & 1/2 & 1/3 & \cdots & 1/n \\ 1/2 & 1/3 & 1/4 & & \\ 1/3 & 1/4 & 1/5 & & \vdots \\ \vdots & & & \ddots & \\ 1/n & & \cdots & & 1/(2n-1) \end{pmatrix}_{n\times n}.$$
The condition number $K_\infty(H_{n\times n})$ grows extremely rapidly with $n$: it is already of order $10^7$ by $n = 6$. Typical single precision floating point accuracy on a computer is approximately $10^{-7}$. Apply Gaussian elimination to a $6\times 6$ Hilbert matrix on a computer with single precision arithmetic and the result is likely to be subject to significant error! Such error can be avoided with packages like Maple, which allow you to perform calculations using exact arithmetic, i.e. all digits are carried in the calculations. However, error can still occur if the matrix is not specified properly: if the elements are not generated exactly, the wrong result can still be obtained.
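The rapid growth of $K_\infty(H_{n\times n})$ can be reproduced with exact rational arithmetic, in the spirit of the Maple remark (illustrative Python using the standard-library `fractions` module, not part of the notes):

```python
from fractions import Fraction

def hilbert(n):
    # n x n Hilbert matrix: entry (i, j) is 1 / (i + j + 1), zero-indexed
    return [[Fraction(1, i + j + 1) for j in range(n)] for i in range(n)]

def inverse(M):
    # Gauss-Jordan inversion in exact rational arithmetic;
    # pivots are non-zero because Hilbert matrices are positive definite
    n = len(M)
    aug = [[Fraction(M[i][j]) for j in range(n)] +
           [Fraction(1 if i == j else 0) for j in range(n)] for i in range(n)]
    for k in range(n):
        piv = aug[k][k]
        aug[k] = [v / piv for v in aug[k]]
        for i in range(n):
            if i != k and aug[i][k] != 0:
                f = aug[i][k]
                aug[i] = [vi - f * vk for vi, vk in zip(aug[i], aug[k])]
    return [row[n:] for row in aug]

def inf_norm(M):
    # Maximum row sum of absolute values
    return max(sum(abs(v) for v in row) for row in M)

K = {}
for n in range(2, 7):
    H = hilbert(n)
    K[n] = inf_norm(H) * inf_norm(inverse(H))
    print(n, float(K[n]))
```

Because every intermediate value is an exact fraction, the computed condition numbers are exact; the growth from 27 at $n = 2$ to order $10^7$ at $n = 6$ shows why single precision elimination fails so badly on this matrix.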


Math 20 Chapter 5 Eigenvalues and Eigenvectors Eigenvalues and Eigenvectors. Definition: A scalar λ is called an eigenvalue of the n n matrix A is there is a nontrivial solution x of Ax = λx. Such an x

### Linear Algebra Notes

Linear Algebra Notes Chapter 19 KERNEL AND IMAGE OF A MATRIX Take an n m matrix a 11 a 12 a 1m a 21 a 22 a 2m a n1 a n2 a nm and think of it as a function A : R m R n The kernel of A is defined as Note

### Chapter 8. Matrices II: inverses. 8.1 What is an inverse?

Chapter 8 Matrices II: inverses We have learnt how to add subtract and multiply matrices but we have not defined division. The reason is that in general it cannot always be defined. In this chapter, we

### Lecture 3: Finding integer solutions to systems of linear equations

Lecture 3: Finding integer solutions to systems of linear equations Algorithmic Number Theory (Fall 2014) Rutgers University Swastik Kopparty Scribe: Abhishek Bhrushundi 1 Overview The goal of this lecture

### MATH 304 Linear Algebra Lecture 18: Rank and nullity of a matrix.

MATH 304 Linear Algebra Lecture 18: Rank and nullity of a matrix. Nullspace Let A = (a ij ) be an m n matrix. Definition. The nullspace of the matrix A, denoted N(A), is the set of all n-dimensional column

### Homework One Solutions. Keith Fratus

Homework One Solutions Keith Fratus June 8, 011 1 Problem One 1.1 Part a In this problem, we ll assume the fact that the sum of two complex numbers is another complex number, and also that the product

### ME128 Computer-Aided Mechanical Design Course Notes Introduction to Design Optimization

ME128 Computer-ided Mechanical Design Course Notes Introduction to Design Optimization 2. OPTIMIZTION Design optimization is rooted as a basic problem for design engineers. It is, of course, a rare situation

### Notes on Symmetric Matrices

CPSC 536N: Randomized Algorithms 2011-12 Term 2 Notes on Symmetric Matrices Prof. Nick Harvey University of British Columbia 1 Symmetric Matrices We review some basic results concerning symmetric matrices.

### University of Lille I PC first year list of exercises n 7. Review

University of Lille I PC first year list of exercises n 7 Review Exercise Solve the following systems in 4 different ways (by substitution, by the Gauss method, by inverting the matrix of coefficients

### DETERMINANTS. b 2. x 2

DETERMINANTS 1 Systems of two equations in two unknowns A system of two equations in two unknowns has the form a 11 x 1 + a 12 x 2 = b 1 a 21 x 1 + a 22 x 2 = b 2 This can be written more concisely in

### by the matrix A results in a vector which is a reflection of the given

Eigenvalues & Eigenvectors Example Suppose Then So, geometrically, multiplying a vector in by the matrix A results in a vector which is a reflection of the given vector about the y-axis We observe that

### 8 Square matrices continued: Determinants

8 Square matrices continued: Determinants 8. Introduction Determinants give us important information about square matrices, and, as we ll soon see, are essential for the computation of eigenvalues. You

### Row and column operations

Row and column operations It is often very useful to apply row and column operations to a matrix. Let us list what operations we re going to be using. 3 We ll illustrate these using the example matrix

### 2.1: MATRIX OPERATIONS

.: MATRIX OPERATIONS What are diagonal entries and the main diagonal of a matrix? What is a diagonal matrix? When are matrices equal? Scalar Multiplication 45 Matrix Addition Theorem (pg 0) Let A, B, and

### MATH 304 Linear Algebra Lecture 9: Subspaces of vector spaces (continued). Span. Spanning set.

MATH 304 Linear Algebra Lecture 9: Subspaces of vector spaces (continued). Span. Spanning set. Vector space A vector space is a set V equipped with two operations, addition V V (x,y) x + y V and scalar

### Solving Linear Systems, Continued and The Inverse of a Matrix

, Continued and The of a Matrix Calculus III Summer 2013, Session II Monday, July 15, 2013 Agenda 1. The rank of a matrix 2. The inverse of a square matrix Gaussian Gaussian solves a linear system by reducing

### 1. For each of the following matrices, determine whether it is in row echelon form, reduced row echelon form, or neither.

Math Exam - Practice Problem Solutions. For each of the following matrices, determine whether it is in row echelon form, reduced row echelon form, or neither. (a) 5 (c) Since each row has a leading that

### We call this set an n-dimensional parallelogram (with one vertex 0). We also refer to the vectors x 1,..., x n as the edges of P.

Volumes of parallelograms 1 Chapter 8 Volumes of parallelograms In the present short chapter we are going to discuss the elementary geometrical objects which we call parallelograms. These are going to

### Linear Algebra Test 2 Review by JC McNamara

Linear Algebra Test 2 Review by JC McNamara 2.3 Properties of determinants: det(a T ) = det(a) det(ka) = k n det(a) det(a + B) det(a) + det(b) (In some cases this is true but not always) A is invertible

### 4.5 Linear Dependence and Linear Independence

4.5 Linear Dependence and Linear Independence 267 32. {v 1, v 2 }, where v 1, v 2 are collinear vectors in R 3. 33. Prove that if S and S are subsets of a vector space V such that S is a subset of S, then

### Linear Systems. Singular and Nonsingular Matrices. Find x 1, x 2, x 3 such that the following three equations hold:

Linear Systems Example: Find x, x, x such that the following three equations hold: x + x + x = 4x + x + x = x + x + x = 6 We can write this using matrix-vector notation as 4 {{ A x x x {{ x = 6 {{ b General

### 1 Sets and Set Notation.

LINEAR ALGEBRA MATH 27.6 SPRING 23 (COHEN) LECTURE NOTES Sets and Set Notation. Definition (Naive Definition of a Set). A set is any collection of objects, called the elements of that set. We will most

### Determinants. Dr. Doreen De Leon Math 152, Fall 2015

Determinants Dr. Doreen De Leon Math 52, Fall 205 Determinant of a Matrix Elementary Matrices We will first discuss matrices that can be used to produce an elementary row operation on a given matrix A.

### The Inverse of a Matrix

The Inverse of a Matrix 7.4 Introduction In number arithmetic every number a ( 0) has a reciprocal b written as a or such that a ba = ab =. Some, but not all, square matrices have inverses. If a square

### Vector Spaces. Chapter 2. 2.1 R 2 through R n

Chapter 2 Vector Spaces One of my favorite dictionaries (the one from Oxford) defines a vector as A quantity having direction as well as magnitude, denoted by a line drawn from its original to its final

### Course 221: Analysis Academic year , First Semester

Course 221: Analysis Academic year 2007-08, First Semester David R. Wilkins Copyright c David R. Wilkins 1989 2007 Contents 1 Basic Theorems of Real Analysis 1 1.1 The Least Upper Bound Principle................

### Recall that two vectors in are perpendicular or orthogonal provided that their dot

Orthogonal Complements and Projections Recall that two vectors in are perpendicular or orthogonal provided that their dot product vanishes That is, if and only if Example 1 The vectors in are orthogonal

### Practical Numerical Training UKNum

Practical Numerical Training UKNum 7: Systems of linear equations C. Mordasini Max Planck Institute for Astronomy, Heidelberg Program: 1) Introduction 2) Gauss Elimination 3) Gauss with Pivoting 4) Determinants

### 2.1 Functions. 2.1 J.A.Beachy 1. from A Study Guide for Beginner s by J.A.Beachy, a supplement to Abstract Algebra by Beachy / Blair

2.1 J.A.Beachy 1 2.1 Functions from A Study Guide for Beginner s by J.A.Beachy, a supplement to Abstract Algebra by Beachy / Blair 21. The Vertical Line Test from calculus says that a curve in the xy-plane

### TOPIC 3: CONTINUITY OF FUNCTIONS

TOPIC 3: CONTINUITY OF FUNCTIONS. Absolute value We work in the field of real numbers, R. For the study of the properties of functions we need the concept of absolute value of a number. Definition.. Let

### MATH 423 Linear Algebra II Lecture 38: Generalized eigenvectors. Jordan canonical form (continued).

MATH 423 Linear Algebra II Lecture 38: Generalized eigenvectors Jordan canonical form (continued) Jordan canonical form A Jordan block is a square matrix of the form λ 1 0 0 0 0 λ 1 0 0 0 0 λ 0 0 J = 0

### Matrices in Statics and Mechanics

Matrices in Statics and Mechanics Casey Pearson 3/19/2012 Abstract The goal of this project is to show how linear algebra can be used to solve complex, multi-variable statics problems as well as illustrate

### Brief Introduction to Vectors and Matrices

CHAPTER 1 Brief Introduction to Vectors and Matrices In this chapter, we will discuss some needed concepts found in introductory course in linear algebra. We will introduce matrix, vector, vector-valued

### LINEAR ALGEBRA W W L CHEN

LINEAR ALGEBRA W W L CHEN c W W L Chen, 1997, 2008 This chapter is available free to all individuals, on understanding that it is not to be used for financial gain, and may be downloaded and/or photocopied,

### A matrix over a field F is a rectangular array of elements from F. The symbol

Chapter MATRICES Matrix arithmetic A matrix over a field F is a rectangular array of elements from F The symbol M m n (F) denotes the collection of all m n matrices over F Matrices will usually be denoted

### Sergei Silvestrov, Christopher Engström, Karl Lundengård, Johan Richter, Jonas Österberg. November 13, 2014

Sergei Silvestrov,, Karl Lundengård, Johan Richter, Jonas Österberg November 13, 2014 Analysis Todays lecture: Course overview. Repetition of matrices elementary operations. Repetition of solvability of

### Chapter 20. Vector Spaces and Bases

Chapter 20. Vector Spaces and Bases In this course, we have proceeded step-by-step through low-dimensional Linear Algebra. We have looked at lines, planes, hyperplanes, and have seen that there is no limit

### (a) The transpose of a lower triangular matrix is upper triangular, and the transpose of an upper triangular matrix is lower triangular.

Theorem.7.: (Properties of Triangular Matrices) (a) The transpose of a lower triangular matrix is upper triangular, and the transpose of an upper triangular matrix is lower triangular. (b) The product

### Chapter 17. Orthogonal Matrices and Symmetries of Space

Chapter 17. Orthogonal Matrices and Symmetries of Space Take a random matrix, say 1 3 A = 4 5 6, 7 8 9 and compare the lengths of e 1 and Ae 1. The vector e 1 has length 1, while Ae 1 = (1, 4, 7) has length

### Section 1.1. Introduction to R n

The Calculus of Functions of Several Variables Section. Introduction to R n Calculus is the study of functional relationships and how related quantities change with each other. In your first exposure to

### 3. Reaction Diffusion Equations Consider the following ODE model for population growth

3. Reaction Diffusion Equations Consider the following ODE model for population growth u t a u t u t, u 0 u 0 where u t denotes the population size at time t, and a u plays the role of the population dependent

### PROVING STATEMENTS IN LINEAR ALGEBRA

Mathematics V2010y Linear Algebra Spring 2007 PROVING STATEMENTS IN LINEAR ALGEBRA Linear algebra is different from calculus: you cannot understand it properly without some simple proofs. Knowing statements

### Recall the basic property of the transpose (for any A): v A t Aw = v w, v, w R n.

ORTHOGONAL MATRICES Informally, an orthogonal n n matrix is the n-dimensional analogue of the rotation matrices R θ in R 2. When does a linear transformation of R 3 (or R n ) deserve to be called a rotation?