Math Practice Exam 2 with Some Solutions
(Note that the exam will NOT be this long.)

1 Definitions

1. (10 points) Let T : V → W be a transformation. Let A be a square matrix.

(a) Define: T is linear.
(b) Define the null space of T, null(T).
(c) Define the image of T, image(T).
(d) Define: T is one-to-one.
(e) Define: T is onto.
(f) Define: T is invertible.
(g) Define: T is an isomorphism.
(h) Define rank(T) and nullity(T).
(i) Define: A is invertible.

Solution: See your notes or textbook.
2 Linear Transformations, Null Spaces, and Images

2. (10 points) Let T : P_2(R) → P_2(R) be given by T(f(x)) = f(x) - x f'(x).

(a) Show T is linear.

Solution: Let a ∈ R and f(x), g(x) ∈ P_2(R). Then

T(a f(x) + g(x)) = [a f(x) + g(x)] - x [a f(x) + g(x)]'
                 = a f(x) + g(x) - a x f'(x) - x g'(x)
                 = a [f(x) - x f'(x)] + [g(x) - x g'(x)]
                 = a T(f(x)) + T(g(x)).

(b) Find a basis for the image of T.

Solution: We know that a generating set for the image of T is the image of the standard basis of P_2(R). Thus

image(T) = span({T(1), T(x), T(x^2)}) = span({1, x - x, x^2 - 2x^2}) = span({1, x^2}),

since x - x = 0 and x^2 - 2x^2 = -x^2. The vectors {1, x^2} are clearly linearly independent, so they also form a basis.

(c) Is T one-to-one? Is T onto? Justify your answer.

Solution: Since rank(T) = 2 and dim(P_2(R)) = 3, T is clearly not onto. Furthermore, the Dimension Theorem says that nullity(T) = 1, so T is not one-to-one either.
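As a quick sanity check on parts (b) and (c), here is a short computational sketch (not part of the original exam) that applies T to the standard basis, assembles the matrix of T in that basis, and confirms rank(T) = 2 and nullity(T) = 1; it assumes the Python library sympy.

```python
# Sketch (not in the original exam): verify Problem 2(b)-(c) with sympy,
# assuming T(f) = f - x*f' on P_2(R) with standard basis {1, x, x^2}.
import sympy as sp

x = sp.symbols('x')
basis = [sp.Integer(1), x, x**2]

def T(f):
    # T(f(x)) = f(x) - x*f'(x)
    return sp.expand(f - x*sp.diff(f, x))

images = [T(b) for b in basis]
print(images)                      # [1, 0, -x**2], so image(T) = span({1, x^2})

# Matrix of T in the standard basis: column j holds the coefficients of T(basis[j]).
A = sp.Matrix([[img.coeff(x, k) for img in images] for k in range(3)])
print(A)                           # Matrix([[1, 0, 0], [0, 0, 0], [0, 0, -1]])
print(A.rank(), 3 - A.rank())      # rank(T) = 2 and nullity(T) = 1, as claimed
```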
3. (10 points) Let T : R^3 → R^3 be the linear transformation defined by T(x, y, z) = (x + y, x - z, 2x + 3y + z).

(a) Show T is linear.
(b) Find a basis for null(T).
(c) Find a basis for image(T).
(d) State the Dimension Theorem and verify that T satisfies it.
(e) Is T one-to-one? Onto? Explain.

4. (10 points) Let V and W be finite-dimensional vector spaces and let T : V → W be linear.

(a) Prove that if dim(V) < dim(W), then T cannot be onto.

Solution: Suppose dim(V) < dim(W), and assume (by way of contradiction) that T is onto. Then image(T) = W, and thus rank(T) = dim(W). By the Dimension Theorem, we know

dim(V) = rank(T) + nullity(T) = dim(W) + nullity(T).

Since dim(V) < dim(W), this implies nullity(T) = dim(V) - dim(W) < 0, which is a contradiction, since nullity cannot be negative. Thus T is NOT onto.

(b) Prove that if dim(V) > dim(W), then T cannot be one-to-one.

Solution: Similar argument to (a). See if you can get it.
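For Problem 3 (and the Dimension Theorem argument of Problem 4), a similar sketch, again not part of the original exam and assuming sympy, builds the standard matrix of T from Problem 3 and asks for its null space and column space.

```python
# Sketch (not in the original exam): the standard matrix of
# T(x, y, z) = (x + y, x - z, 2x + 3y + z) from Problem 3.
import sympy as sp

A = sp.Matrix([[1, 1, 0],
               [1, 0, -1],
               [2, 3, 1]])

null_basis = A.nullspace()      # a basis for null(T): one vector spanning the null space
image_basis = A.columnspace()   # a basis for image(T): the pivot columns of A

print(len(null_basis), len(image_basis))        # nullity(T) = 1, rank(T) = 2
print(len(null_basis) + len(image_basis) == 3)  # True: rank + nullity = dim(R^3)
```

If the matrix above is set up correctly, this gives nullity(T) = 1 and rank(T) = 2, which is exactly what the Dimension Theorem in part (d) requires.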
5. (10 points) Let T : V → W be a linear transformation. Prove the following theorems.

(a) Theorem 2.1: The sets null(T) and image(T) are subspaces of V and W, respectively.
(b) Theorem 2.2: Let β be a basis of V. Then the set T(β) is a generating set for image(T).
(c) Theorem 2.4: T is one-to-one if and only if null(T) = {0}.

Solution: See your notes or textbook.

3 Matrix Representations and Change of Basis

6. (10 points) Consider the vector space V = P_1(R).

(a) Explain why you know that the set β = {1 + x, 1 - 2x} is a basis of V.

Solution: Since neither vector is a multiple of the other, β is linearly independent. Since the dimension of V is 2, β is a basis.

(b) Determine [p(x)]_β, where p(x) = 2x - 3 ∈ V.

Solution: Notice that p(x) = 2x - 3 = (-4/3)(1 + x) + (-5/3)(1 - 2x). Therefore [p(x)]_β = (-4/3, -5/3).

7. (10 points) Let T : P_2(R) → R^2 be given by T(f(x)) = (f(0), f'(1)).

(a) Show that T is linear.

Solution: Let f(x), g(x) ∈ P_2(R) and c ∈ R. Then

T(c f(x) + g(x)) = (c f(0) + g(0), c f'(1) + g'(1)) = c (f(0), f'(1)) + (g(0), g'(1)) = c T(f(x)) + T(g(x)).
(b) Determine the matrix of T with respect to the standard bases of P_2(R) and R^2.

Solution: First we recall that the standard basis of P_2(R) is β = {1, x, x^2} and that the standard basis of R^2 is γ = {(1, 0), (0, 1)}. Now we look at the image of each element of the basis β under T:

T(1) = (1, 0), T(x) = (0, 1), and T(x^2) = (0, 2).

Since we are using the standard basis of R^2, the columns of our matrix are the vectors we have just written. So our matrix is

[T]^γ_β = [ 1 0 0 ]
          [ 0 1 2 ]

8. (10 points) Let β and γ be the following standard ordered bases of M_2x2(R) and P_2(R), respectively:

β = { [ 1 0 ]   [ 0 1 ]   [ 0 0 ]   [ 0 0 ] }
      [ 0 0 ] , [ 0 0 ] , [ 1 0 ] , [ 0 1 ]

and γ = {1, x, x^2}. Compute [T]^β_γ if we define the linear transformation T : P_2(R) → M_2x2(R) by

T(f(x)) = [ f'(0)   2f(1)  ]
          [ 0       f''(3) ]

Solution: First we see that

T(1) = [ 0 2 ]
       [ 0 0 ]

so the first column of [T]^β_γ is the coordinate vector [T(1)]_β = (0, 2, 0, 0). Next,

T(x) = [ 1 2 ]
       [ 0 0 ]

so the second column of [T]^β_γ is the coordinate vector [T(x)]_β = (1, 2, 0, 0). Finally,

T(x^2) = [ 0 2 ]
         [ 0 2 ]

so the third column of [T]^β_γ is the coordinate vector [T(x^2)]_β = (0, 2, 0, 2). So in total we get

[T]^β_γ = [ 0 1 0 ]
          [ 2 2 2 ]
          [ 0 0 0 ]
          [ 0 0 2 ]
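To double-check the matrix found in Problem 8, here is a sketch (not part of the original exam, and assuming sympy) that applies T to each element of γ and reads off the entries of the resulting 2x2 matrix row by row; since β is the standard basis of M_2x2(R), those entries are exactly the coordinate vectors that form the columns of [T]^β_γ.

```python
# Sketch (not in the original exam): rebuild [T]^beta_gamma from Problem 8,
# where T(f) = [[f'(0), 2 f(1)], [0, f''(3)]] and gamma = {1, x, x^2}.
import sympy as sp

x = sp.symbols('x')
gamma = [sp.Integer(1), x, x**2]

def T(f):
    fp = sp.diff(f, x)                      # f'
    fpp = sp.diff(f, x, 2)                  # f''
    return sp.Matrix([[fp.subs(x, 0), 2*f.subs(x, 1)],
                      [0,             fpp.subs(x, 3)]])

# Reading a 2x2 matrix row by row gives its coordinates in the standard basis beta,
# so each column of [T]^beta_gamma is T(g) flattened.
cols = [list(T(g)) for g in gamma]
M = sp.Matrix(cols).T
print(M)    # Matrix([[0, 1, 0], [2, 2, 2], [0, 0, 0], [0, 0, 2]])
```

The same approach applied to the map T(f(x)) = (f(0), f'(1)) reproduces the 2x3 matrix found in Problem 7.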
9. (10 points) Let V, W, and Z be vector spaces, and let T : V → W and U : W → Z be linear transformations.

(a) Prove that if UT is one-to-one, then T is one-to-one.

Solution: Suppose UT is one-to-one. Then null(UT) = {0}. What is null(T)? Suppose x ∈ null(T); then T(x) = 0 and (UT)(x) = U(T(x)) = U(0) = 0. So x is in null(UT) also. But 0 is the only thing in null(UT), so x = 0, and we have shown that null(T) = {0}. Therefore T is one-to-one.

(b) Prove that if UT is onto, then U is onto.

Solution: Similar argument to (a). See if you can get it.

(c) Prove that if U and T are one-to-one and onto, then UT is also.

Solution: Similar argument to (a). See if you can get it.

10. (10 points) Let T : R^3 → R^2 be the linear transformation defined by T(x, y, z) = (x + y + z, x + 3y + 5z). Let β and γ be the standard bases for R^3 and R^2, respectively. Also consider another basis α = {(1, 1, 1), (2, 3, 4), (3, 4, 6)} for R^3.

(a) Compute the matrix representation [T]^γ_β.
(b) Compute the matrix representation [T]^γ_α.
(c) Compute Q, the change of coordinate matrix from β to α.
(d) Check that [T]^γ_α Q = [T]^γ_β.
(e) Let x = (1, 5, 7). What is [x]_β? Use this, together with Q, to find [x]_α.
Solution:

(a) Plugging the basis β into T and writing the results as linear combinations of the elements of γ, we get

[T]^γ_β = [ 1 1 1 ]
          [ 1 3 5 ]

(b) Plugging the basis α into T and writing the results as linear combinations of the elements of γ, we get

[T]^γ_α = [ 3  9 13 ]
          [ 9 31 45 ]

(c) To get the change of basis matrix, we must find the coordinate vectors of the elements of β with respect to α:

[(1, 0, 0)]_α = (2, -2, 1),  [(0, 1, 0)]_α = (0, 3, -2),  and  [(0, 0, 1)]_α = (-1, -1, 1).

Therefore the change of basis matrix is

Q = [  2  0 -1 ]
    [ -2  3 -1 ]
    [  1 -2  1 ]

(d) Multiplying out,

[T]^γ_α Q = [ 1 1 1 ] = [T]^γ_β,
            [ 1 3 5 ]

as required.

(e) x = (1, 5, 7), so [x]_β = (1, 5, 7). Then

[x]_α = Q [x]_β = (2 - 7, -2 + 15 - 7, 1 - 10 + 7) = (-5, 6, -2).
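Finally, a small sketch (again not part of the original exam, and assuming sympy) verifies the Problem 10 computations: the change-of-coordinates matrix Q is the inverse of the matrix whose columns are the α vectors, and both the identity [T]^γ_α Q = [T]^γ_β and the coordinate vector [x]_α can be checked directly.

```python
# Sketch (not in the original exam): verify Problem 10 with sympy.
import sympy as sp

def T(v):
    x, y, z = v
    return sp.Matrix([x + y + z, x + 3*y + 5*z])

alpha = [sp.Matrix([1, 1, 1]), sp.Matrix([2, 3, 4]), sp.Matrix([3, 4, 6])]
A = sp.Matrix.hstack(*alpha)                 # columns are the alpha vectors

# (a), (b): since gamma is the standard basis of R^2, the columns of each
# representation are just T applied to the corresponding basis vectors.
T_beta  = sp.Matrix.hstack(T([1, 0, 0]), T([0, 1, 0]), T([0, 0, 1]))
T_alpha = sp.Matrix.hstack(*[T(list(v)) for v in alpha])

# (c): Q converts beta-coordinates into alpha-coordinates, so Q = A^(-1).
Q = A.inv()
print(Q)                                     # Matrix([[2, 0, -1], [-2, 3, -1], [1, -2, 1]])

# (d): the change-of-basis identity.
print(T_alpha * Q == T_beta)                 # True

# (e): [x]_alpha for x = (1, 5, 7).
print(Q * sp.Matrix([1, 5, 7]))              # Matrix([[-5], [6], [-2]])
```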