MATH 110 Spring 2015 Homework 6 Solutions

Section 2.6

Let α denote the standard ordered basis for V = R^3, and let α* = {e_1*, e_2*, e_3*} denote the dual basis of α for V*. We first show that β* = {f_1, f_2, f_3} is a basis of V*, where

f_1 = e_1* - 2e_2*,    f_2 = e_1* + e_2* + e_3*,    f_3 = e_2* - 3e_3*.

Suppose a f_1 + b f_2 + c f_3 = 0. Then

0 = a(e_1* - 2e_2*) + b(e_1* + e_2* + e_3*) + c(e_2* - 3e_3*) = (a + b) e_1* + (-2a + b + c) e_2* + (b - 3c) e_3*.    (1)

Since α* = {e_1*, e_2*, e_3*} is a basis of V*, equation (1) forces a + b = -2a + b + c = b - 3c = 0. Solving this system gives a = b = c = 0, so {f_1, f_2, f_3} is a linearly independent subset of V*. Since dim(V*) = 3, β* = {f_1, f_2, f_3} is a basis.

Now we need to find a basis β = {g_1, g_2, g_3} of V = R^3 such that β* is the dual basis of β.

(Method 1) From the definition of the dual basis, for every (x, y, z) in R^3,

(x, y, z) = f_1(x, y, z) g_1 + f_2(x, y, z) g_2 + f_3(x, y, z) g_3 = (x - 2y) g_1 + (x + y + z) g_2 + (y - 3z) g_3.

Let G denote the matrix whose column vectors are g_1, g_2, g_3. The equation above is equivalent to the matrix equation G A = I, where A is the matrix whose rows are the coefficient vectors of f_1, f_2, f_3:

A =
[ 1  -2   0 ]
[ 1   1   1 ]
[ 0   1  -3 ]

Then G = A^{-1}, which is

G =
[  0.4   0.6   0.2 ]
[ -0.3   0.3   0.1 ]
[ -0.1   0.1  -0.3 ]

The elements of the basis β are the columns of the matrix G. More explicitly,

β = {(0.4, -0.3, -0.1), (0.6, 0.3, 0.1), (0.2, 0.1, -0.3)}.
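This computation is easy to sanity-check numerically. The sketch below is an illustrative check only, not part of the assigned solution: it builds A from the coefficient vectors of f_1, f_2, f_3, inverts it with NumPy, and verifies the dual-basis condition f_i(g_j) = δ_ij, i.e. AG = I. Method 2 below arrives at the same matrix.

# Illustrative numerical check (not part of the assigned solution).
import numpy as np

A = np.array([[1.0, -2.0,  0.0],    # f_1(x, y, z) = x - 2y
              [1.0,  1.0,  1.0],    # f_2(x, y, z) = x + y + z
              [0.0,  1.0, -3.0]])   # f_3(x, y, z) = y - 3z

G = np.linalg.inv(A)                # columns of G are g_1, g_2, g_3
print(G)
# Up to floating-point rounding:
# [[ 0.4  0.6  0.2]
#  [-0.3  0.3  0.1]
#  [-0.1  0.1 -0.3]]

# Dual-basis condition f_i(g_j) = delta_ij, i.e. A @ G = I (equivalently G A = I).
assert np.allclose(A @ G, np.eye(3))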

(Method 2) We describe a different method using Theorem 2.25 of the textbook. The change of coordinate matrix from β* to α*, [I_{V*}]^{α*}_{β*} (here I_{V*} denotes the identity transformation of V*), has as its columns the α*-coordinate vectors of f_1, f_2, f_3:

[I_{V*}]^{α*}_{β*} =
[  1   1   0 ]
[ -2   1   1 ]
[  0   1  -3 ]

Note that the identity transformation on V* is the transpose of the identity transformation on V, by the definition given with Theorem 2.25 of the textbook. Then by Theorem 2.25, the change of coordinate matrix from β* to α* is the transpose of the change of coordinate matrix from α to β, [I]^β_α. Therefore,

[I]^β_α = ([I_{V*}]^{α*}_{β*})^t =
[ 1  -2   0 ]
[ 1   1   1 ]
[ 0   1  -3 ]

Then the change of coordinate matrix from β to α, [I]^α_β = ([I]^β_α)^{-1}, is obtained by inverting the matrix above:

[I]^α_β =
[  0.4   0.6   0.2 ]
[ -0.3   0.3   0.1 ]
[ -0.1   0.1  -0.3 ]

The elements of the basis β are the columns of the matrix [I]^α_β. More explicitly,

β = {(0.4, -0.3, -0.1), (0.6, 0.3, 0.1), (0.2, 0.1, -0.3)}.

For the next exercise, f ∈ (R^2)* and T: R^2 → R^2 are given by f(x, y) = 2x + y and T(x, y) = (3x + 2y, x).

(a) T^t(f) ∈ (R^2)* is given by

T^t(f)(x, y) = (f ∘ T)(x, y) = f(3x + 2y, x) = 2(3x + 2y) + x = 7x + 4y.

(b) Write

[T^t]_{β*} = ([T^t(f_1)]_{β*}, [T^t(f_2)]_{β*}) =
[ a  b ]
[ c  d ]

so that T^t(f_1) = a f_1 + c f_2 and T^t(f_2) = b f_1 + d f_2. Let β = {e_1 = (1, 0), e_2 = (0, 1)} denote the standard ordered basis of R^2, with dual basis β* = {f_1, f_2}. Then

T^t(f_1)(1, 0) = f_1(T(1, 0)) = f_1(3, 1) = 3 = (a f_1 + c f_2)(1, 0) = a,
T^t(f_1)(0, 1) = f_1(T(0, 1)) = f_1(2, 0) = 2 = (a f_1 + c f_2)(0, 1) = c,
T^t(f_2)(1, 0) = f_2(T(1, 0)) = f_2(3, 1) = 1 = (b f_1 + d f_2)(1, 0) = b,
T^t(f_2)(0, 1) = f_2(T(0, 1)) = f_2(2, 0) = 0 = (b f_1 + d f_2)(0, 1) = d.

Therefore,

[T^t]_{β*} =
[ 3  1 ]
[ 2  0 ]

(c) [T]_β = ([T(1, 0)]_β, [T(0, 1)]_β) =
[ 3  2 ]
[ 1  0 ]

Then

([T]_β)^t =
[ 3  1 ]
[ 2  0 ]

This is the same as [T^t]_{β*}, as predicted by Theorem 2.25.
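These computations can be confirmed numerically. In the sketch below (illustrative only, not part of the assigned solution), a functional px + qy on R^2 is stored as its coefficient vector (p, q); since (f ∘ T)(v) = f(Tv), the coefficient vector of f ∘ T is ([T]_β)^t applied to that of f, which recovers both the formula 7x + 4y from part (a) and the matrix identity from part (c).

# Illustrative check of parts (a) and (c) (not part of the assigned solution).
import numpy as np

T = np.array([[3.0, 2.0],
              [1.0, 0.0]])          # [T]_beta, since T(x, y) = (3x + 2y, x)
f = np.array([2.0, 1.0])            # f(x, y) = 2x + y, as in part (a)

Ttf = T.T @ f                       # coefficient vector of f o T, since f(Tv) = (T^t f) . v
assert np.allclose(Ttf, [7.0, 4.0]) # part (a): T^t(f) is the functional 7x + 4y

# Pointwise check that (T^t f)(v) = f(T v) for a few vectors v.
for v in (np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([2.0, -5.0])):
    assert np.isclose(Ttf @ v, f @ (T @ v))

# Part (c): [T^t]_{beta*} equals the transpose of [T]_beta.
assert np.allclose(T.T, [[3.0, 1.0], [2.0, 0.0]])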

(a) We can check that f_i(p(x) + c q(x)) = p(c_i) + c q(c_i) = f_i(p(x)) + c f_i(q(x)), so each f_i is linear and lies in V*. We also know that dim(V*) = dim(V) = dim(P_n(F)) = n + 1, so it is enough to show that {f_0, f_1, ..., f_n} is linearly independent. Assume that Σ_{i=0}^n a_i f_i = 0 for some scalars a_i. Define the polynomials p_i(x) = Π_{j ≠ i} (x - c_j), so that p_i(c_i) ≠ 0 but p_i(c_j) = 0 for all j ≠ i. Now fix an i ∈ {0, 1, ..., n}. Then

0 = Σ_{k=0}^n a_k f_k(p_i) = a_i f_i(p_i) = a_i p_i(c_i).

Since p_i(c_i) ≠ 0, we get a_i = 0. Therefore a_i = 0 for all i, so {f_0, f_1, ..., f_n} is linearly independent and hence a basis of V*.

(b) By the Corollary after Theorem 2.26 there is an ordered basis β = {p_0, p_1, ..., p_n} for V such that {f_0, f_1, ..., f_n} defined in (a) is its dual basis. Then f_j(p_i) = p_i(c_j) = δ_{ij}. Since β is a basis, every polynomial in V can be written as a unique linear combination of elements of β. Fix an integer b ∈ {0, ..., n} and suppose a polynomial q has the property q(c_j) = δ_{bj}. Write q as a linear combination q = Σ_k a_k p_k. Plugging in c_j for j = 0, ..., n, we get a_b = 1 and a_j = 0 for all j ≠ b. Therefore q = p_b. Since b is arbitrary, the basis {p_0, ..., p_n} is unique.

(c) First, q(x) = Σ_{i=0}^n a_i p_i(x) has the property q(c_i) = a_i for 0 ≤ i ≤ n. Now suppose r(x) is another polynomial with this property. Express r(x) as a linear combination of the elements of the basis β defined in (b): r(x) = Σ_{i=0}^n b_i p_i(x). Plugging in c_j for j = 0, ..., n, we get a_i = b_i for all 0 ≤ i ≤ n. Therefore r(x) = q(x) = Σ_{i=0}^n a_i p_i(x).

(d) By the previous parts of this exercise, every polynomial p(x) can be written uniquely as a linear combination Σ_{i=0}^n z_i p_i(x) with z_i = p(c_i). Therefore

p(x) = Σ_{i=0}^n p(c_i) p_i(x).

(e) Since there are only finitely many terms in the summation above, the order of integration and summation can be exchanged. Therefore we get

∫_a^b p(t) dt = Σ_{i=0}^n p(c_i) ∫_a^b p_i(t) dt.
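The basis {p_0, ..., p_n} of part (b) consists of the Lagrange basis polynomials for the nodes c_0, ..., c_n, and part (d) is the Lagrange interpolation formula. A brief numerical sketch of that formula follows; it is illustrative only, and the nodes and test polynomial are arbitrary choices rather than data from the exercise.

# Illustrative check of p(x) = sum_i p(c_i) p_i(x), where p_i is the Lagrange
# basis polynomial satisfying p_i(c_j) = delta_ij (the basis of part (b)).
import numpy as np

c = [0.0, 1.0, 2.0, 3.0]               # distinct scalars c_0, ..., c_n (here n = 3)

def lagrange_basis(i, x, nodes=c):
    # p_i(x) = prod_{j != i} (x - c_j) / (c_i - c_j)
    value = 1.0
    for j, node in enumerate(nodes):
        if j != i:
            value *= (x - node) / (nodes[i] - node)
    return value

def p(x):
    return 2.0 * x**3 - x + 5.0        # any polynomial of degree <= n

for x in np.linspace(-1.0, 4.0, 7):
    reconstructed = sum(p(ci) * lagrange_basis(i, x) for i, ci in enumerate(c))
    assert np.isclose(reconstructed, p(x))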

(a) We check that if f, g ∈ S^0 and c ∈ F, then f + g and cf are elements of S^0, because (f + g)(x) = f(x) + g(x) = 0 and (cf)(x) = c f(x) = 0 for every x ∈ S. The zero functional is also an element of S^0. Hence S^0 is a subspace of V*.

(b) Let {v_1, v_2, ..., v_k} be a basis of W. Since x ∉ W, the set {v_1, v_2, ..., v_k, v_{k+1} = x} is linearly independent, and hence we can extend it to a basis {v_1, v_2, ..., v_n} of V. Define a linear functional f by f(v_i) = δ_{i, k+1} for i = 1, ..., n. Then f(v_i) = 0 for i = 1, ..., k, so f ∈ W^0, while f(x) = f(v_{k+1}) = 1 ≠ 0. Thus f is the desired functional.

(c) Let W be the subspace span(S). We first prove that W^0 = S^0. Every functional that is zero at every w ∈ W is in particular zero at every s ∈ S, so W^0 ⊆ S^0. On the other hand, if a linear functional f has the property that f(x) = 0 for all x ∈ S, then f(y) = 0 for all y ∈ W = span(S). Hence S^0 ⊆ W^0, and therefore W^0 = S^0. Since (W^0)^0 = (S^0)^0 and span(ψ(S)) = ψ(W) (because ψ is an isomorphism), it suffices to prove that (W^0)^0 = ψ(W). By Theorem 2.26, every element of (W^0)^0 ⊆ V** has the form x̂ for some x ∈ V. Let x̂ be an element of (W^0)^0, so that x̂(f) = f(x) = 0 for every f ∈ W^0. If x were not an element of W, then by part (b) there would exist some f ∈ W^0 with f(x) ≠ 0, a contradiction. So x ∈ W, hence x̂ ∈ ψ(W), and (W^0)^0 ⊆ ψ(W). For the converse, assume x̂ ∈ ψ(W), so that x ∈ W. Then for all f ∈ W^0 we have x̂(f) = f(x) = 0, so x̂ ∈ (W^0)^0 and ψ(W) ⊆ (W^0)^0. Therefore (W^0)^0 = ψ(W), and we get the desired conclusion.

(d) If W_1 = W_2, then clearly W_1^0 = W_2^0. For the converse, if W_1^0 = W_2^0, then by part (c) we have ψ(W_1) = (W_1^0)^0 = (W_2^0)^0 = ψ(W_2). Since ψ is an isomorphism, W_1 = ψ^{-1}(ψ(W_1)) = ψ^{-1}(ψ(W_2)) = W_2.

(e) If f is an element of (W_1 + W_2)^0, then f(w_1 + w_2) = 0 for all w_1 ∈ W_1 and w_2 ∈ W_2. In particular, f(w_1 + 0) = 0 and f(0 + w_2) = 0 for all w_1 ∈ W_1 and w_2 ∈ W_2, so f is an element of W_1^0 ∩ W_2^0. For the converse, if f is an element of W_1^0 ∩ W_2^0, then f(w_1 + w_2) = f(w_1) + f(w_2) = 0 for all w_1 ∈ W_1 and w_2 ∈ W_2, so f is an element of (W_1 + W_2)^0. Therefore (W_1 + W_2)^0 = W_1^0 ∩ W_2^0.

If T^t(f) = f ∘ T = 0, then f(y) = 0 for all y ∈ R(T), and hence f ∈ (R(T))^0. Conversely, if f ∈ (R(T))^0, then f(y) = 0 for all y ∈ R(T), and hence T^t(f)(x) = f(T(x)) = 0 for all x ∈ V. This means f ∈ N(T^t). Therefore N(T^t) = (R(T))^0.
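In matrix terms, the last statement says that a coefficient (row) vector f annihilates the range of T exactly when [T]^t f = 0: the annihilator of the column space is the null space of the transpose. A minimal numerical sketch, with an arbitrarily chosen matrix, follows.

# Illustrative matrix version of N(T^t) = (R(T))^0; the matrix below is an arbitrary example.
import numpy as np

M = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [1.0, 3.0]])          # a linear map T: R^2 -> R^3 in the standard bases

f = np.array([1.0, 1.0, -1.0])      # this functional kills both columns of M
assert np.allclose(f @ M, 0.0)      # so f lies in (R(T))^0 ...
assert np.allclose(M.T @ f, 0.0)    # ... equivalently, f lies in N(T^t)

g = np.array([1.0, 0.0, 0.0])       # g(first column) = 1 != 0, so g is not in (R(T))^0
assert not np.allclose(M.T @ g, 0.0)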

Section 3.3

1. (a) False. The system 0x = 1 has no solution.

(b) False. The system 0x = 0 has infinitely many solutions.

(c) True. It has at least the zero solution.

(d) False. The system 0 · (x_1, x_2, ..., x_n) = 0 has infinitely many solutions.

(e) False. The system 0 · (x_1, x_2, ..., x_n) = a, where a is a nonzero vector in R^n, has no solution.

(f) False. The system 0x = 1 has no solution, but the homogeneous system corresponding to it has infinitely many solutions.

(g) True. If Ax = 0 and A is an invertible n × n matrix, then x = A^{-1} 0 = 0.

(h) False. The system x = 1 has solution set {1}, which is not a subspace of F.

2(b,c)

(b) Subtracting the first equation from the second gives 3x_1 = x_3. Then from the first equation, x_2 = x_3 - x_1 = 2x_1. Therefore the solution set is {x_1 (1, 2, 3) : x_1 ∈ R}. The set {(1, 2, 3)} is a basis of the solution set, and the dimension is 1.

(c) Adding the two equations gives 3x_1 + 3x_2 = 0, so x_2 = -x_1. From the first equation, x_3 = x_1 + 2x_2 = x_1 - 2x_1 = -x_1. Therefore the solution set is {x_1 (1, -1, -1) : x_1 ∈ R}. The set {(1, -1, -1)} is a basis of the solution set, and the dimension is 1.

3(b,c) Pick one particular solution s_0 of the nonhomogeneous system. Then every solution of the nonhomogeneous system is the sum of s_0 and a solution of the corresponding homogeneous system of linear equations (see the sketch below).

(b) (2/3, 1/3, 0) is one particular solution of this nonhomogeneous system, so the solution set is {(2/3, 1/3, 0) + c (1, 2, 3) : c ∈ R}.

(c) (3, 0, 0) is one particular solution of this nonhomogeneous system, so the solution set is {(3, 0, 0) + c (1, -1, -1) : c ∈ R}.

Let A be the n × n zero matrix. Then Ax = 0 is a homogeneous system of n linear equations in n unknowns with infinitely many solutions.
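The principle used in 3(b,c), that the solution set of a consistent system Ax = b is one particular solution plus the solution set of Ax = 0, can be illustrated numerically. The system in the sketch below is a hypothetical example chosen for the demonstration, not the system from the textbook exercise.

# Hypothetical system (not the textbook one), chosen so that the corresponding
# homogeneous system has the one-dimensional solution set span{(1, 2, 3)}.
import numpy as np

A = np.array([[2.0, -1.0,  0.0],
              [3.0,  0.0, -1.0]])
b = np.array([1.0, 2.0])

x0 = np.array([1.0, 1.0, 1.0])      # one particular solution: A @ x0 == b
h = np.array([1.0, 2.0, 3.0])       # spans the homogeneous solution set: A @ h == 0
assert np.allclose(A @ x0, b) and np.allclose(A @ h, 0.0)

# Every vector x0 + c*h solves Ax = b, so the solution set is {x0 + c*h : c in R}.
for coeff in (-2.0, 0.0, 3.5):
    assert np.allclose(A @ (x0 + coeff * h), b)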

8(a) Solving a + b = 1, b - 2c = 3, a + 2c = -2, we find that a = 0, b = 1, c = -1 is a solution. Therefore v ∈ R(T).
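As a quick arithmetic check (illustrative only), the triple found in 8(a) satisfies all three equations; the coefficient matrix is singular, so this is just one of infinitely many solutions.

# Check that (a, b, c) = (0, 1, -1) satisfies a + b = 1, b - 2c = 3, a + 2c = -2.
import numpy as np

M = np.array([[1.0, 1.0,  0.0],     # a + b
              [0.0, 1.0, -2.0],     # b - 2c
              [1.0, 0.0,  2.0]])    # a + 2c
rhs = np.array([1.0, 3.0, -2.0])

assert np.allclose(M @ np.array([0.0, 1.0, -1.0]), rhs)
print(np.linalg.matrix_rank(M))     # 2: the system is consistent but not uniquely solvable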
