Name: Section Registered In:
Math 125 Exam 3, Version 1. April 24, total points possible.

1. (5pts) Use Cramer's Rule to solve
3x + 4y = 30
x - 2y = 8.
Be sure to show enough detail to demonstrate that you are solving this problem using Cramer's Rule.

Solution: As a matrix equation, this system is [3 4; 1 -2][x; y] = [30; 8]. Here A = [3 4; 1 -2] and det(A) = (3)(-2) - (4)(1) = -10. Let A1 = [30 4; 8 -2], so det(A1) = (30)(-2) - (4)(8) = -92. By Cramer's rule, x = det(A1)/det(A) = -92/-10 = 46/5. Let A2 = [3 30; 1 8], so det(A2) = (3)(8) - (30)(1) = -6. By Cramer's rule, y = det(A2)/det(A) = -6/-10 = 3/5. The solution point is (46/5, 3/5).
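As a quick cross-check of the arithmetic (an illustrative sketch, not part of the exam), the following Python/NumPy snippet applies Cramer's rule to the same system:

```python
import numpy as np

# Coefficient matrix and right-hand side for 3x + 4y = 30, x - 2y = 8
A = np.array([[3.0, 4.0],
              [1.0, -2.0]])
b = np.array([30.0, 8.0])

det_A = np.linalg.det(A)        # -10

A1 = A.copy(); A1[:, 0] = b     # replace the first column with b
A2 = A.copy(); A2[:, 1] = b     # replace the second column with b

x = np.linalg.det(A1) / det_A   # approximately 46/5 = 9.2
y = np.linalg.det(A2) / det_A   # approximately 3/5 = 0.6
print(x, y)
```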
2. (1pt each) Answer the following either True or False and justify your answer.

(a) If {v1, v2, v3} is a linearly independent set, then so is the set {kv1, kv2, kv3} for every nonzero scalar k.

Solution: TRUE. Since {v1, v2, v3} is linearly independent, the only way to write the zero vector as a linear combination of v1, v2, and v3 is 0v1 + 0v2 + 0v3 = 0. Consider writing the zero vector as a linear combination of {kv1, kv2, kv3}; that is, ask which c1, c2, and c3 satisfy c1(kv1) + c2(kv2) + c3(kv3) = 0. Dividing both sides of this equation by the nonzero scalar k results in c1v1 + c2v2 + c3v3 = 0. Since the set {v1, v2, v3} is linearly independent, we know that c1 = c2 = c3 = 0. Hence the set {kv1, kv2, kv3} is linearly independent.

(b) If Ax = b does not have any solutions, then b is not in the column space of A.

Solution: TRUE. By theorem, a solution to Ax = b exists if and only if the vector b can be written as a linear combination of the columns of A. An equivalent statement of the theorem is that a solution to Ax = b exists if and only if the vector b lies in the column space of A.

(c) There are three linearly independent vectors in R^2.

Solution: FALSE. One solution is to note that any basis of R^2 has only two vectors, since the dimension of R^2 is 2. Therefore any set of three vectors cannot be a basis, and the set has to be linearly dependent. Another solution is to remember that the homogeneous system [v1 v2 v3 | 0] formed from three vectors in R^2 has only two rows, and a homogeneous system with fewer equations than unknowns has to have parametric solutions; therefore the set of vectors is linearly dependent. Another solution would be to draw a picture similar to Example 1(c) in Lesson 4.5.

(d) Every nonzero subspace of R^n has a unique basis.

Solution: FALSE. Bases are not unique. The number of vectors in a basis of a subspace is unique, but the vectors chosen for the basis are not. If the subspace is n-dimensional, any selection of n linearly independent vectors from the subspace will span the subspace.

(e) Let A be a 3 × 3 matrix. Then the adjoint of A is [A11 A12 A13; A21 A22 A23; A31 A32 A33], where Aij is the (i, j) cofactor.

Solution: FALSE. The definition of the adjoint of A is adj A = [A11 A21 A31; A12 A22 A32; A13 A23 A33], that is, the transpose of the matrix of cofactors.
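To illustrate part (e), here is a short SymPy sketch (an illustration with an arbitrarily chosen matrix) showing that the adjoint is the transpose of the cofactor matrix, not the cofactor matrix itself:

```python
from sympy import Matrix, eye

# An arbitrary 3x3 example matrix, chosen only for illustration
A = Matrix([[2, 1, 0],
            [1, 3, 1],
            [0, 1, 4]])

C = A.cofactor_matrix()     # entry (i, j) is the (i, j) cofactor A_ij
adjA = A.adjugate()         # classical adjoint of A

print(adjA == C.T)                      # True: adj A is the transpose of the cofactor matrix
print(A * adjA == A.det() * eye(3))     # True: A * adj(A) = det(A) * I
```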
3. (a) (5pts) Define a subspace W of a vector space V.

Solution: A subset W of V is a subspace of V if it is closed under V's operations of vector addition and scalar multiplication.

(b) (5pts) Show that the set S of all vectors of the form (a, a + b, b), for all real numbers a and b, is a subspace of R^3.

Solution: One way to show that this is a subspace is to show that the set of vectors (a, a + b, b) is the span of a set of vectors. For all a and b, (a, a + b, b) = (a, a, 0) + (0, b, b) = a(1, 1, 0) + b(0, 1, 1). Therefore S = span{(1, 1, 0), (0, 1, 1)} by definition of a span. Since every span is a subspace, the set S is a subspace.

Another way to show that this is a subspace is to show that the set is closed under addition and scalar multiplication. Let u = (a1, a1 + b1, b1) and v = (a2, a2 + b2, b2) be two vectors in S.

Closed under addition: u + v = (a1 + a2, a1 + b1 + a2 + b2, b1 + b2) = ((a1 + a2), (a1 + a2) + (b1 + b2), (b1 + b2)), which is still a vector of the required form (with a = a1 + a2 and b = b1 + b2). Therefore, the set S is closed under addition.

Closed under scalar multiplication: let k be any real number. Then ku = k(a1, a1 + b1, b1) = (ka1, k(a1 + b1), kb1) = (ka1, ka1 + kb1, kb1), which is still a vector of the required form (with a = ka1 and b = kb1). Therefore, the set S is closed under scalar multiplication.
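The closure argument can be sanity-checked numerically. The sketch below (illustrative, with randomly chosen values) uses the fact that a vector (x, y, z) has the form (a, a + b, b) exactly when y = x + z:

```python
import numpy as np

rng = np.random.default_rng(0)

def in_S(v, tol=1e-12):
    """A vector (x, y, z) has the form (a, a+b, b) exactly when y == x + z."""
    x, y, z = v
    return abs(y - (x + z)) < tol

# Two random elements of S and a random scalar
a1, b1, a2, b2, k = rng.standard_normal(5)
u = np.array([a1, a1 + b1, b1])
v = np.array([a2, a2 + b2, b2])

print(in_S(u + v))     # True: closed under addition
print(in_S(k * u))     # True: closed under scalar multiplication

# S is also the span of (1, 1, 0) and (0, 1, 1)
print(in_S(a1 * np.array([1, 1, 0]) + b1 * np.array([0, 1, 1])))   # True
```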
4. (a) (3pts) Define a basis for a vector space V.

Solution: A set of vectors is a basis for a vector space V if the set of vectors is linearly independent and the span of the set of vectors is all of V.

(b) (5pts) Use a dependency table and find a basis for the column space of A.

Solution: First, we create the initial dependency table using matrix A, with the rows labeled e1, e2, e3 and the columns labeled a1, a2, a3, a4, a5. Moving the matrix into reduced row echelon form, the pivots fall under the columns a1, a3, and a5. We see that the set {a1, a3, a5} forms a basis for the column space.

(c) (2pts) What is the dimension of the column space of A? Explain your answer.

Solution: Since there are three vectors in the basis, we know that the dimension of the column space of A is three.
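Since the entries of A are not shown above, the sketch below illustrates the method on a hypothetical 3 × 5 matrix (my example, not the exam's A): SymPy's rref reports the pivot columns, and the corresponding columns of the original matrix form a basis for the column space.

```python
from sympy import Matrix

# Hypothetical 3x5 matrix standing in for the exam's A
A = Matrix([[1, 2, 0, 4, 1],
            [0, 0, 1, 3, 1],
            [1, 2, 1, 7, 3]])

R, pivot_cols = A.rref()      # reduced row echelon form and pivot column indices
print(pivot_cols)             # (0, 2, 4): columns a1, a3, a5 are the pivot columns

basis = [A.col(j) for j in pivot_cols]   # these columns of A form a basis of col(A)
print(len(basis))                        # 3: the dimension of the column space
```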
5. (5pts) Let u and v be vectors in a vector space. Under what conditions will it be true that span(u) = span(v)?

Solution: We know that the span of a single nonzero vector forms a line. If span(u) = span(v), then the spans define the same line. In other words, the line x = tu (for all t) has to be the same line as x = sv (for all s). Therefore, u and v have to be scalar multiples of each other. Another decent answer would be to discuss Example 1(e) from the lesson.

6. Let A be a 4 × 4 matrix with det A = 2.

(a) (2pts) Does A^(-1) exist? Explain your answer.

Solution: Since the determinant of A is nonzero, we know that A^(-1) exists.

(b) (3pts) Can the linear system Ax = b have more than one solution? Explain your answer.

Solution: No. Since A is invertible, we know that the reduced row echelon form of A is the identity matrix I. Therefore [A | b] has the reduced row echelon form [I | p], where p is the unique solution point of the linear system.
7. (a) (5pts) Define what it means for a vector to be the additive identity element of a vector space.

Solution: The additive identity element is the vector of a vector space that satisfies property 4 of the definition of a vector space. That is, it is the vector 0 such that for any other vector v in the vector space, v + 0 = 0 + v = v.

(b) (5pts) Show that it is not possible for a vector space to have two different zero vectors. That is, is it possible to have two different vectors 0_1 and 0_2 such that both vectors satisfy the fourth property of a vector space? Explain your reasoning.

Solution: This is Homework 6, Problem 4. Assume that there are two different zero vectors 0_1 and 0_2. By the definition of a zero vector, for any v in the vector space, v + 0_1 = v. Let v be the other zero, 0_2. Then 0_2 + 0_1 = 0_2. Similarly, for any v, v + 0_2 = v. Let v be the other zero, 0_1. Then 0_1 + 0_2 = 0_1. Since addition is commutative in a vector space, 0_2 + 0_1 = 0_1 + 0_2. Therefore 0_1 = 0_2, showing that every vector space has exactly one zero element.
8. (a) (4pts) What are the standard basis vectors for R^4?

Solution: e1 = (1, 0, 0, 0), e2 = (0, 1, 0, 0), e3 = (0, 0, 1, 0), and e4 = (0, 0, 0, 1).

(b) (3pts) Write the vector (3, -5, 9, 2) as a linear combination of the standard basis vectors.

Solution: (3, -5, 9, 2) = 3e1 - 5e2 + 9e3 + 2e4.

(c) (3pts) Explain why the standard basis vectors form a basis for R^4.

Solution: The standard basis vectors are a linearly independent set. The subspace of R^4 that is spanned by the standard basis vectors is four-dimensional, since there are four vectors in the set. Since R^4 itself is four-dimensional, the span of the standard basis vectors must be all of R^4.
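A one-line illustrative check of parts (b) and (c):

```python
import numpy as np

E = np.eye(4)                     # rows are e1, e2, e3, e4
coeffs = np.array([3, -5, 9, 2])

v = coeffs @ E                    # 3*e1 - 5*e2 + 9*e3 + 2*e4
print(v)                          # [ 3. -5.  9.  2.]
print(np.linalg.matrix_rank(E))   # 4: the standard basis vectors are independent and span R^4
```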
Name: Section Registered In:
Math 125 Exam 3, Version 2. April 24, total points possible.

1. (5pts) Determine a basis for the subspace 3x - 2y + 5z = 0 of R^3.

Solution: To find a basis, we need to convert this equation into a vector equation. Solve the equation for x: x = (2/3)y - (5/3)z. Now let y and z be parameters. Letting y = s and z = t, we find that any vector (x, y, z) that lies in the plane can be written as the vector equation (x, y, z) = ((2/3)s - (5/3)t, s, t) for all s and t. Separating the parameters, we find that (x, y, z) = s(2/3, 1, 0) + t(-5/3, 0, 1) for all s and t. Therefore, the set {(2/3, 1, 0), (-5/3, 0, 1)} is a basis for the plane.
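As an illustrative check (not required for the exam), the basis vectors found above satisfy the plane equation, and SymPy confirms that the plane is two-dimensional:

```python
from sympy import Matrix, Rational

# The plane 3x - 2y + 5z = 0 is the null space of the 1x3 matrix [3 -2 5]
N = Matrix([[3, -2, 5]])

b1 = Matrix([Rational(2, 3), 1, 0])
b2 = Matrix([Rational(-5, 3), 0, 1])

print(N * b1, N * b2)        # Matrix([[0]]) Matrix([[0]]): both vectors lie in the plane
print(len(N.nullspace()))    # 2: the plane is two-dimensional, so {b1, b2} is a basis
```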
2. (1pt each) Answer the following either True or False and justify your answer.

(a) If S is a finite set of vectors in a vector space V, then span S must be closed under vector addition and scalar multiplication.

Solution: TRUE. The span of a set of vectors is a subspace. By definition, a subspace is closed under vector addition and scalar multiplication.

(b) If span(S1) = span(S2), then S1 = S2.

Solution: FALSE. The spanning sets describe the same subspace, but that does not mean that the exact same vectors are in each spanning set. For example, if S1 and S2 were bases (which they do not have to be in this question), we know that even bases are not unique.

(c) If {v1, v2} is a linearly dependent set of nonzero vectors, then each vector is a scalar multiple of the other.

Solution: TRUE. By the definition of a linearly dependent set, there exist constants c1 and c2, not both zero, such that c1v1 + c2v2 = 0. Since both vectors are nonzero, both constants must in fact be nonzero, so either vector can be written as a scalar multiple of the other. For example, v1 = -(c2/c1)v2.

(d) No set of two vectors can span R^3.

Solution: TRUE. Since R^3 is 3-dimensional, a set of vectors that spans all of R^3 has to have at least 3 vectors.

(e) If A, B, and C are square matrices, then det(ABC) = det A det B det C.

Solution: TRUE. Recall that we have the theorem det(AB) = det A det B. To show this equation is true, we need to apply the theorem twice: det(ABC) = det((AB)C) = det(AB) det C = det A det B det C.
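A quick numerical illustration of part (e) with randomly chosen 3 × 3 matrices (a sketch, not a proof):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
C = rng.standard_normal((3, 3))

lhs = np.linalg.det(A @ B @ C)
rhs = np.linalg.det(A) * np.linalg.det(B) * np.linalg.det(C)
print(np.isclose(lhs, rhs))   # True (up to floating-point round-off)
```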
3. (a) (3pts) Define what it means for a vector v to be a linear combination of a set of vectors.

Solution: A vector v is a linear combination of a set of vectors if v can be written as a sum of scalar multiples of the vectors in the set. In other words, if v is a linear combination of the set {v1, v2, ..., vk}, then v = c1v1 + c2v2 + ... + ckvk for some constants ci.

(b) (5pts) In R^3, determine if b = (5, 1, -1) can be written as a linear combination of the vectors (1, 9, 1), (-1, 3, 1), and (1, 1, 1). If so, write the combination.

Solution: We are determining if there are constants c1, c2, and c3 such that c1(1, 9, 1) + c2(-1, 3, 1) + c3(1, 1, 1) = (5, 1, -1). Forming the augmented matrix and computing its reduced row echelon form, we find that the constants are c1 = 1, c2 = -3, and c3 = 1, so b can be written as the combination b = (1, 9, 1) - 3(-1, 3, 1) + (1, 1, 1).

(c) (2pts) Without doing any computations, explain whether b = (5, 1, -1) lies in the column space of the matrix whose columns are (1, 9, 1), (-1, 3, 1), and (1, 1, 1).

Solution: The column space of this matrix is span{(1, 9, 1), (-1, 3, 1), (1, 1, 1)}. Above we have shown that b is a linear combination of these vectors. Therefore, b lies in the column space.
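The coefficients in part (b) can be confirmed with SymPy (an illustrative sketch, assuming the signs of b and of the second vector as written above):

```python
from sympy import Matrix, linsolve, symbols

c1, c2, c3 = symbols('c1 c2 c3')

# Columns are the three given vectors; the right-hand side is b = (5, 1, -1)
A = Matrix([[1, -1, 1],
            [9,  3, 1],
            [1,  1, 1]])
b = Matrix([5, 1, -1])

print(linsolve((A, b), c1, c2, c3))   # {(1, -3, 1)}
```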
4. (a) (5pts) List 4 of the 10 properties in the definition of a real vector space.

Solution: Let V be the vector space, u and v vectors in V, and c and d any real numbers. Then any four of the following will work:
1. V is closed under vector addition.
2. Addition is commutative.
3. Addition is associative.
4. V has an additive identity element.
5. Every element has an additive inverse.
6. V is closed under scalar multiplication.
7. c(u + v) = cu + cv.
8. (c + d)u = cu + du.
9. c(du) = (cd)u.
10. 1u = u.

(b) (5pts) Under the normal operations of addition and scalar multiplication, show that the set of integers is not a real vector space.

Solution: The set of integers {..., -3, -2, -1, 0, 1, 2, 3, ...} is not closed under scalar multiplication. Let n be any nonzero integer. Then kn does not have to be an integer. For example, if k = π, then πn is not an integer.
5. (5pts) Under what conditions will two vectors in R^3 span a plane? A line? Clearly explain your answer.

Solution: Let v1 and v2 be vectors in R^3. If the two vectors are linearly independent, then the set {v1, v2} forms a basis for the subspace spanned by {v1, v2}. Two basis vectors mean that the subspace is 2-dimensional. Hence, if the two vectors are linearly independent, their span forms a plane. If v1 and v2 are linearly dependent, then span{v1, v2} = span{v1} = span{v2}, since v1 = kv2 for some constant k. The span of a single vector forms a line.

6. (5pts) Show that it is not possible for a vector u in a vector space to have two different negatives. That is, for the vector u, it is not possible to have two different vectors (-u)_1 and (-u)_2. Clearly justify your answer.

Solution: This is Homework 6, Problem 5. Assume u does have two different additive inverses. By the definition of inverses, we have the two equations u + (-u)_1 = 0 and u + (-u)_2 = 0. Therefore u + (-u)_1 = u + (-u)_2. Add one of u's negatives (it doesn't matter which) on the left of both sides of the equation. Then
(-u)_1 + u + (-u)_1 = (-u)_1 + u + (-u)_2,
[(-u)_1 + u] + (-u)_1 = [(-u)_1 + u] + (-u)_2 by associativity,
[0] + (-u)_1 = [0] + (-u)_2 by the properties of negatives,
(-u)_1 = (-u)_2 by the properties of zeros.
Therefore every element can have only one additive inverse.
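Problem 5 can also be phrased in terms of rank: span{v1, v2} is a plane exactly when the matrix [v1 v2] has rank 2, and a line when it has rank 1. A small illustrative sketch:

```python
import numpy as np

def span_type(v1, v2):
    """Classify span{v1, v2} in R^3 by the rank of the matrix [v1 v2]."""
    r = np.linalg.matrix_rank(np.column_stack([v1, v2]))
    return {2: "plane", 1: "line", 0: "origin only"}[r]

print(span_type([1, 0, 0], [0, 1, 0]))   # plane (linearly independent vectors)
print(span_type([1, 2, 3], [2, 4, 6]))   # line  (one vector is a scalar multiple of the other)
```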
7. Let v1 = (-2, 0, 1), v2 = (3, 2, 5), v3 = (6, 1, 1), and v4 = (7, 0, 2).

(a) (3pts) What vector space are these vectors elements of? Explain your answer.

Solution: Since every vector is a 3-tuple of numbers, the vectors are elements of R^3.

(b) (4pts) Show that the set {v1, v2, v3, v4} is linearly dependent.

Solution: One solution would be to cite the theorem that says that since we have more vectors (4) than the dimension of the vector space (3), the four vectors have to be linearly dependent. This is a fine answer to part (b), but it won't help us with (c). I will show dependency using the definition. We want to show that there exists a linear combination of the set, with not all scalars zero, that forms the zero vector. Solving for the coefficients amounts to solving the homogeneous linear system [v1 v2 v3 v4 | 0]. Forming the augmented matrix and computing its reduced row echelon form, we find that the system has parametric solutions, so the set is linearly dependent.

(c) (3pts) Find a dependency equation.

Solution: Solving for the coefficients in the reduced row echelon form, we get c1 = t, c2 = (3/29)t, c3 = (6/29)t, and c4 = t. Therefore, 0 = tv1 + (3/29)tv2 + (6/29)tv3 + tv4 for all t. To find a dependency equation, the easiest way is to let t = 1. This yields the dependency equation 0 = v1 + (3/29)v2 + (6/29)v3 + v4.
8. Let A be a 4 × 4 matrix with det A = 2.

(a) (3pts) If A is transformed into the reduced row echelon form matrix B, what is B? Explain your answer.

Solution: Since the determinant is nonzero, we know that the matrix A is invertible. Since A is invertible, we know that the reduced row echelon form of A is the identity matrix.

(b) (2pts) Describe the set of all solutions to the homogeneous system Ax = 0.

Solution: Since the matrix is invertible, we know that the system has a unique solution. That unique solution to the null space problem is the trivial solution (0, 0, 0, 0).
9. (5pts) Sink Inc. manufactures sinks from a steel alloy that is 76% iron, 16% nickel, and 8% chromium, and counter tops from an alloy that is 70% iron, 20% nickel, and 10% chromium. They have located two supplies of scrap metal and can save substantially on production costs if their products can be blended from these scraps. One scrap metal mixture contains 80% iron, 15% nickel, and 5% chromium, and the other scrap metal mixture contains 60% iron, 20% nickel, and 20% chromium. Determine if the products can be manufactured from a blend of these scrap mixtures; if so, find the blend.

Solution: Interpreting the metal compositions of the sources and the desired alloys as vectors, we can write the above information in the following form: S1 = (80, 15, 5), S2 = (60, 20, 20), A1 = (76, 16, 8), and A2 = (70, 20, 10). To determine if the alloys can be made from the sources, we look at the matrix [S1 S2 | A1 A2]. Putting this matrix into reduced row echelon form, we see that only the Alloy 1 column gives a consistent system. Therefore, from these two sources, we can only make Alloy 1. The conclusion: to make Alloy 1 (and subsequently the sinks), use a blend in which 80% of the metal comes from Source 1 and 20% from Source 2.
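The row reduction in problem 9 can be reproduced as follows (an illustrative sketch of the same computation):

```python
from sympy import Matrix

# Columns: Source 1, Source 2, Alloy 1, Alloy 2 (percent iron, nickel, chromium)
M = Matrix([[80, 60, 76, 70],
            [15, 20, 16, 20],
            [ 5, 20,  8, 10]])

R, pivots = M.rref()
print(R)
# The Alloy 1 column reduces to (4/5, 1/5, 0): 80% Source 1 + 20% Source 2.
# The Alloy 2 column becomes a pivot column, so Alloy 2 is not a blend of the two sources.
```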
