Examination paper for TMA4115 Matematikk 3
Department of Mathematical Sciences

Examination paper for TMA4115 Matematikk 3

Academic contact during examination: Antoine Julien (a), Alexander Schmeding (b), Gereon Quick (c)
Phone: a , b , c
Examination date: 3 May 2016
Examination time (from–to): 09:00–13:00
Permitted examination support material: C: Simple calculator (Casio fx-82ES PLUS, Citizen SR-270X, Citizen SR-270X College, or Hewlett Packard HP30S), Rottmann: Matematisk formelsamling.
Other information: Give reasons for all answers, ensuring that it is clear how the answers have been reached. Each of the 10 problems has the same weight.
Language: English
Number of pages: 15
Number of pages enclosed: 0
Checked by: Date Signature
Problem 1

a) For $z = -1 + i\sqrt{3}$, compute $z^3$ and $z^6$.

b) Find all complex numbers $z$ with $z^3 = 8i$ and draw them in the complex plane.

a) The polar form of $-1 + i\sqrt{3}$ is $2e^{i2\pi/3}$. Hence $(-1 + i\sqrt{3})^3 = (2e^{i2\pi/3})^3 = 2^3 e^{i2\pi} = 8$ and $z^6 = z^3 \cdot z^3 = 64$.

b) The polar form of $8i$ is $8e^{i\pi/2} = 2^3 e^{i\pi/2}$. For $z = re^{i\theta}$, the equation $z^3 = 8i$ becomes $2^3 e^{i\pi/2} = (re^{i\theta})^3 = r^3 e^{i3\theta}$. This holds if and only if $r = 2$ and $\theta = \pi/6 + (2\pi/3)k$ for an integer $k$. It suffices to look at the integers $k = 0$, $k = 1$ and $k = 2$. These yield the solutions $z_0 = 2e^{i\pi/6} = \sqrt{3} + i$, $z_1 = 2e^{i5\pi/6} = -\sqrt{3} + i$ and $z_2 = 2e^{i3\pi/2} = -2i$.
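As a quick numerical sanity check (an addition, not part of the exam solution), the roots found above can be verified with a short NumPy sketch:

```python
import numpy as np

# The three claimed cube roots of 8i from part b).
roots = [np.sqrt(3) + 1j, -np.sqrt(3) + 1j, -2j]

for z in roots:
    # Each candidate should satisfy z**3 = 8i up to floating-point error.
    assert np.isclose(z**3, 8j), f"{z} is not a cube root of 8i"

# Part a): z = -1 + i*sqrt(3) has z**3 = 8 and z**6 = 64.
z = -1 + 1j * np.sqrt(3)
print(z**3, z**6)  # approximately (8+0j) and (64+0j)
```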
Problem 2

Consider the inhomogeneous differential equation

$y'' + 6y' + 9y = \cos t$    (1)

a) Find the general solution of the associated homogeneous equation.

b) Find a particular solution of (1).

c) Find the unique solution of (1) that satisfies $y(0) = y'(0) = 0$.

a) The homogeneous equation is $y'' + 6y' + 9y = 0$. The characteristic equation is $\lambda^2 + 6\lambda + 9 = 0$, which has the double root $\lambda = -3$. Therefore a fundamental system of solutions is $y_1 = e^{-3t}$ and $y_2 = te^{-3t}$. The general solution is $y_h = c_1 e^{-3t} + c_2 te^{-3t}$.

b) We can look for a particular solution using undetermined coefficients, trying
$y_p = a\cos t + b\sin t$, $y_p' = -a\sin t + b\cos t$, $y_p'' = -a\cos t - b\sin t$.
By substituting, we get that $y_p$ is a solution if and only if $(-a + 6b + 9a)\cos t + (-b - 6a + 9b)\sin t = \cos t$. Therefore we have the system $8a + 6b = 1$ and $-6a + 8b = 0$. The solution is $a = 4/50$ and $b = 3/50$. So $y_p = (1/50)(4\cos t + 3\sin t)$.

c) From the two previous parts we know that the general solution is $y = c_1 e^{-3t} + c_2 te^{-3t} + (1/50)(4\cos t + 3\sin t)$. Setting $y(0) = 0$ we get $c_1 = -4/50$. Differentiating, we get $y' = -3c_1 e^{-3t} - 3c_2 te^{-3t} + c_2 e^{-3t} + (1/50)(3\cos t - 4\sin t)$. Hence $y'(0) = 0$ means $-3c_1 + c_2 + 3/50 = 0$, that is $c_2 = -15/50$. In the end, the solution we want is

$y = \frac{1}{50}\left((-4 - 15t)e^{-3t} + 4\cos t + 3\sin t\right).$
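For readers who want to double-check part c), the following SymPy sketch (an addition, not part of the original solution) verifies the closed form against the equation and the initial conditions:

```python
import sympy as sp

t = sp.symbols('t')

# Closed-form solution claimed in part c).
y = sp.Rational(1, 50) * ((-4 - 15*t) * sp.exp(-3*t) + 4*sp.cos(t) + 3*sp.sin(t))

# The residual y'' + 6y' + 9y - cos(t) should simplify to zero...
residual = sp.diff(y, t, 2) + 6*sp.diff(y, t) + 9*y - sp.cos(t)
print(sp.simplify(residual))                   # 0

# ...and both initial conditions should hold.
print(y.subs(t, 0))                            # 0
print(sp.simplify(sp.diff(y, t).subs(t, 0)))   # 0
```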
Problem 3

Let $a$ be a real number and $A$ be the matrix

$A = \begin{pmatrix} 0 & a \\ -a & 0 \end{pmatrix}.$

a) Find a fundamental set of real solutions to the differential equation $x' = Ax$.

b) Solve the initial value problem $x' = Ax$, where $x(0) = \begin{pmatrix} 2 \\ 1 \end{pmatrix}$.

a) The characteristic equation is $\lambda^2 + a^2 = 0$, hence $\lambda = \pm ai$. We need an eigenvector corresponding to one of the eigenvalues. Take $\lambda = ai$. Then we must solve

$\begin{pmatrix} -ai & a \\ -a & -ai \end{pmatrix} v = 0.$

As we know the matrix is not invertible, we can look at the first row only, and we get $v_2 = iv_1$. So a possible eigenvector is $v = \begin{pmatrix} 1 \\ i \end{pmatrix}$.

A fundamental set of real solutions is then given by the functions $x_1(t) = \mathrm{Re}(y(t))$ and $x_2(t) = \mathrm{Im}(y(t))$, where $y(t) = ve^{\lambda t}$. We compute

$y = e^{iat}\begin{pmatrix} 1 \\ i \end{pmatrix} = (\cos at + i\sin at)\begin{pmatrix} 1 \\ i \end{pmatrix} = \begin{pmatrix} \cos at + i\sin at \\ -\sin at + i\cos at \end{pmatrix}.$

We therefore have $x_1(t) = \begin{pmatrix} \cos at \\ -\sin at \end{pmatrix}$ and $x_2(t) = \begin{pmatrix} \sin at \\ \cos at \end{pmatrix}$.

b) The solution to the initial value problem is then $x(t) = c_1 x_1 + c_2 x_2$. We find the constants from $x(0) = c_1 x_1(0) + c_2 x_2(0)$. This linear system is immediately solvable as $x_1(0) = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$ and $x_2(0) = \begin{pmatrix} 0 \\ 1 \end{pmatrix}$. We have $c_1 = 2$ and $c_2 = 1$, and the solution to the initial value problem is

$x(t) = \begin{pmatrix} 2\cos at + \sin at \\ -2\sin at + \cos at \end{pmatrix}.$
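As a hedged numerical cross-check (assuming SciPy is available; this is an addition to the solution), one can compare the closed form against the reference solution $e^{At}x(0)$ for an arbitrary test value of $a$:

```python
import numpy as np
from scipy.linalg import expm

a = 1.7  # arbitrary test value for the parameter a
A = np.array([[0.0, a], [-a, 0.0]])
x0 = np.array([2.0, 1.0])

for t in np.linspace(0.0, 5.0, 11):
    exact = expm(A * t) @ x0  # reference solution of x' = Ax, x(0) = x0
    claimed = np.array([2*np.cos(a*t) + np.sin(a*t),
                        -2*np.sin(a*t) + np.cos(a*t)])
    assert np.allclose(exact, claimed)
print("closed form matches the matrix exponential")
```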
Problem 4

Let $u = \begin{pmatrix} 1 \\ 2 \\ 1 \end{pmatrix}$, $v = \begin{pmatrix} 2 \\ 4 \\ 6 \end{pmatrix}$ and $w = \begin{pmatrix} 3 \\ 6 \\ -1 \end{pmatrix}$ be vectors in $\mathbb{R}^3$.

a) Write the vector $p = \begin{pmatrix} 2 \\ 4 \\ -10 \end{pmatrix}$ as a linear combination of $u$, $v$ and $w$.

b) Can you write the vector $q = \begin{pmatrix} 2 \\ 5 \\ 6 \end{pmatrix}$ as a linear combination of $u$, $v$ and $w$?

c) Are $u$, $v$, $w$ linearly independent?

d) What is the determinant of the matrix $A = \begin{pmatrix} 1 & 2 & 3 \\ 2 & 4 & 6 \\ 1 & 6 & -1 \end{pmatrix}$?

a) In order to find scalars $x_1, x_2, x_3$ with $x_1 u + x_2 v + x_3 w = p$, we need to solve the linear system $Ax = p$ with

$A = \begin{pmatrix} 1 & 2 & 3 \\ 2 & 4 & 6 \\ 1 & 6 & -1 \end{pmatrix}.$

We do this by row operations on the augmented matrix:

$\left(\begin{array}{ccc|c} 1 & 2 & 3 & 2 \\ 2 & 4 & 6 & 4 \\ 1 & 6 & -1 & -10 \end{array}\right) \sim \left(\begin{array}{ccc|c} 1 & 2 & 3 & 2 \\ 0 & 4 & -4 & -12 \\ 0 & 0 & 0 & 0 \end{array}\right).$

Hence we can choose $x_3$ as a free variable, for example $x_3 = 1$. Then we get $x_2 = -2$ and $x_1 = 3$. Hence $p = 3u - 2v + w$.

b) When we try the same for $q$, we get

$\left(\begin{array}{ccc|c} 1 & 2 & 3 & 2 \\ 2 & 4 & 6 & 5 \\ 1 & 6 & -1 & 6 \end{array}\right) \sim \left(\begin{array}{ccc|c} 1 & 2 & 3 & 2 \\ 0 & 0 & 0 & 1 \\ 0 & 4 & -4 & 4 \end{array}\right).$
The second row corresponds to the false assertion $0 = 1$. Hence there is no solution to $Ax = q$, and we cannot write $q$ as a linear combination of $u$, $v$ and $w$.

c) They are not linearly independent. There are several ways to see that. For example, we can deduce from the calculation in a) that $(-5)u + v + w = 0$; or we use the second or third argument in d).

d) The determinant of $A$ is 0. There are many ways to see that. For example, we have just observed that the columns of $A$ (which are $u$, $v$ and $w$) are linearly dependent; or we saw in a) that $A$ is row equivalent to a matrix with a row with only zero entries; or we showed in b) that the linear transformation $A : \mathbb{R}^3 \to \mathbb{R}^3$, $x \mapsto Ax$, is not onto. Hence $A$ is not invertible and $\det A = 0$.
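The results of parts a) through d) can be confirmed numerically; the sketch below is illustrative only, not part of the exam solution:

```python
import numpy as np

u = np.array([1, 2, 1])
v = np.array([2, 4, 6])
w = np.array([3, 6, -1])
A = np.column_stack([u, v, w])

# a) p = 3u - 2v + w
p = np.array([2, 4, -10])
assert np.array_equal(3*u - 2*v + w, p)

# b) Ax = q is inconsistent: the best least-squares fit leaves a nonzero residual.
q = np.array([2, 5, 6])
x, _, rank, _ = np.linalg.lstsq(A, q, rcond=None)
print(rank, np.linalg.norm(A @ x - q))  # rank 2, residual > 0

# c), d) dependency relation and zero determinant
assert np.array_equal(-5*u + v + w, np.zeros(3, dtype=int))
print(np.linalg.det(A))  # 0 (up to rounding)
```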
Problem 5

a) Find the inverse of the matrix

$A = \begin{pmatrix} 2 & 2 & 0 \\ 0 & 0 & 1 \\ 4 & 2 & 0 \end{pmatrix}.$

b) Let $T : \mathbb{R}^3 \to \mathbb{R}^3$ be the linear transformation defined by

$T\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 2x_1 + 2x_2 \\ x_3 \\ 4x_1 + 2x_2 \end{pmatrix}.$

Is $T$ one-to-one?

a) To find $A^{-1}$ we perform row operations on $[A \mid I]$ to reduce $A$ to the identity matrix. This yields

$A^{-1} = \begin{pmatrix} -1/2 & 0 & 1/2 \\ 1 & 0 & -1/2 \\ 0 & 1 & 0 \end{pmatrix}.$

b) $T$ is one-to-one. There are at least two ways to see this. First, we could observe that $A$ is the standard matrix of $T$ (recall: the columns of the standard matrix are the images of the standard basis vectors $e_1, e_2, e_3$ under $T$). Since $A$ is invertible, $A$ is, in particular, also one-to-one. Hence $T$ is one-to-one, since $0 = T(x) = Ax$ implies $x = 0$. Second, we could just set $T(x) = 0$. This can only be true if all three equations $2x_1 + 2x_2 = 0$, $x_3 = 0$ and $4x_1 + 2x_2 = 0$ hold. But this is true only if $x_1 = x_2 = x_3 = 0$. Hence $T(x) = 0$ implies $x = 0$ and $T$ is one-to-one.
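As a quick check of a), one could verify the computed inverse with NumPy (an addition, not part of the exam solution):

```python
import numpy as np

A = np.array([[2.0, 2.0, 0.0],
              [0.0, 0.0, 1.0],
              [4.0, 2.0, 0.0]])

A_inv = np.array([[-0.5, 0.0,  0.5],
                  [ 1.0, 0.0, -0.5],
                  [ 0.0, 1.0,  0.0]])

# The claimed inverse should agree with NumPy's, and A @ A_inv should be I.
assert np.allclose(np.linalg.inv(A), A_inv)
assert np.allclose(A @ A_inv, np.eye(3))
print("inverse verified")
```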
Problem 6

Let $A$ be a $4 \times 5$ matrix.

a) Bring $A$ into row echelon form.

b) Find a basis for Col($A$) and determine the rank of $A$.

c) Determine the dimension of Nul($A$).

d) Determine the dimensions of Row($A$) and of Nul($A^T$).

a) We use row operations to bring $A$ into echelon form; the resulting echelon form has pivots in the first three columns.

b) We observe that the first three columns of $A$ are pivot columns. Hence the rank of $A$ is 3, and the first three columns of $A$ form a basis of Col($A$).

c) The dimension of Nul($A$) is 2. There are several ways to see this. One way is to observe that $A$ has two non-pivot columns. Hence there are two free variables in the linear system corresponding to $A$. Another way is to use the Rank Theorem, which tells us:

number of columns of $A$ = dim Col($A$) + dim Nul($A$).

Hence dim Nul($A$) = 5 - 3 = 2.
d) The row echelon form of $A$ tells us that $A$ has three linearly independent rows. The dimension of the row space is therefore 3. The dimension of Nul($A^T$) can be computed using the Rank Theorem and the fact that Col($A^T$) = Row($A$):

number of rows of $A$ = dim Col($A^T$) + dim Nul($A^T$).

Hence dim Nul($A^T$) = 4 - 3 = 1.
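Since the transcription does not preserve the entries of $A$, the sketch below uses a hypothetical stand-in $4 \times 5$ matrix of rank 3 (an assumption, not the exam's matrix) to illustrate the same rank and nullity bookkeeping:

```python
import numpy as np

# Hypothetical stand-in: any 4x5 matrix of rank 3 shows the same dimension counts.
A = np.array([[1, 2, 0, 1, 3],
              [0, 1, 1, 2, 0],
              [0, 0, 1, 1, 1],
              [0, 0, 0, 0, 0]], dtype=float)
A[3] = A[0] + A[1]  # make row 4 a combination of rows 1 and 2, so the rank stays 3

rank = np.linalg.matrix_rank(A)
rows, cols = A.shape
print(rank)         # 3 = dim Col(A) = dim Row(A)
print(cols - rank)  # dim Nul(A)   = 5 - 3 = 2
print(rows - rank)  # dim Nul(A^T) = 4 - 3 = 1
```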
Problem 7

The temperature in Bymarka during the winter season can be either above, equal to or below 0 °C. Trondheim's ski club observes the following fluctuation of temperatures from one day to the next:

If the temperature has been above 0, there is a 70% chance that it will be above and a 10% chance that it will be below 0 the next day.
If the temperature has been equal to 0, there is a 10% chance that it will be above and a 10% chance that it will be below 0 the next day.
If the temperature has been below 0, there is a 10% chance that it will be above and a 70% chance that it will be below 0 the next day.

After many days of this pattern in the winter, for what temperature should a skier prepare his/her skis? (Give the probabilities for the three possible temperatures.)

The stochastic matrix which describes the probability of the temperature being above, equal to or below 0 is

$A = \begin{pmatrix} 0.7 & 0.1 & 0.1 \\ 0.2 & 0.8 & 0.2 \\ 0.1 & 0.1 & 0.7 \end{pmatrix}.$

We want to find the stationary vector: a probability vector (i.e. a vector whose entries are nonnegative and add up to 1) which satisfies $Av = v$. Thus we need to solve the system of linear equations with matrix $A - I$. After multiplying by 10, Gauss elimination applied to

$10(A - I) = \begin{pmatrix} -3 & 1 & 1 \\ 2 & -2 & 2 \\ 1 & 1 & -3 \end{pmatrix}$

shows that the solutions satisfy $x_2 = 2x_3$ and $x_1 = -x_2 + 3x_3 = x_3$. Choosing $x_3 = 0.25$ yields $x_1 = 0.25$ and $x_2 = 0.5$. Hence the stationary probability vector is

$v = \begin{pmatrix} 0.25 \\ 0.5 \\ 0.25 \end{pmatrix}.$
Thus the most likely case is that the temperature is 0 °C, with a 50% chance. The probability for a temperature above 0 °C is 25%, and the probability for a temperature below 0 °C is also 25%.
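A short NumPy sketch (an addition, not part of the exam solution) confirms both that $v$ is stationary and that repeated transitions converge to it from any starting distribution:

```python
import numpy as np

A = np.array([[0.7, 0.1, 0.1],
              [0.2, 0.8, 0.2],
              [0.1, 0.1, 0.7]])

v = np.array([0.25, 0.5, 0.25])
assert np.allclose(A @ v, v)  # v is a stationary vector

# Starting from "certainly above 0", repeated transitions approach v.
x = np.array([1.0, 0.0, 0.0])
for _ in range(100):
    x = A @ x
print(x)  # approximately [0.25, 0.5, 0.25]
```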
Problem 8

Find the equation $y = mx + c$ of the line that best fits the data points $(0,4)$, $(1,-1)$, $(2,1)$, $(3,-3)$ and $(4,-1)$.

We are looking for the least-squares solution of the system $Ax = b$ with

$A = \begin{pmatrix} 0 & 1 \\ 1 & 1 \\ 2 & 1 \\ 3 & 1 \\ 4 & 1 \end{pmatrix}, \quad x = \begin{pmatrix} m \\ c \end{pmatrix}, \quad b = \begin{pmatrix} 4 \\ -1 \\ 1 \\ -3 \\ -1 \end{pmatrix}.$

To find the solution we solve the system $A^T A x = A^T b$, which is

$\begin{pmatrix} 30 & 10 \\ 10 & 5 \end{pmatrix}\begin{pmatrix} m \\ c \end{pmatrix} = \begin{pmatrix} -12 \\ 0 \end{pmatrix}.$

Gauss elimination gives

$\left(\begin{array}{cc|c} 1 & 0 & -6/5 \\ 0 & 1 & 12/5 \end{array}\right).$

The solution is $m = -6/5$ and $c = 12/5$. Thus the best-fitting line is $y = -\frac{6}{5}x + \frac{12}{5}$.
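For comparison, NumPy's least-squares solver reproduces the same line; this check is an addition, not part of the original solution:

```python
import numpy as np

xs = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
ys = np.array([4.0, -1.0, 1.0, -3.0, -1.0])

# Design matrix with a slope column and an intercept column.
A = np.column_stack([xs, np.ones_like(xs)])
(m, c), *_ = np.linalg.lstsq(A, ys, rcond=None)
print(m, c)  # -1.2 and 2.4, i.e. m = -6/5 and c = 12/5
```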
Problem 9

Let $A$ be the matrix $A = \begin{pmatrix} 3 & 1 & 1 \\ 1 & 3 & 1 \\ 1 & 1 & 3 \end{pmatrix}$ and $u$ be the vector $u = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}$.

a) Verify that 2 is an eigenvalue of $A$ and that $u$ is an eigenvector of $A$ (possibly with an eigenvalue different from 2).

b) Find all the eigenvalues of $A$ and a basis for each eigenspace of $A$.

c) Is $A$ orthogonally diagonalizable? If so, orthogonally diagonalize $A$.

a) To check that $u$ is an eigenvector we just calculate

$Au = \begin{pmatrix} 3 & 1 & 1 \\ 1 & 3 & 1 \\ 1 & 1 & 3 \end{pmatrix}\begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} = \begin{pmatrix} 5 \\ 5 \\ 5 \end{pmatrix} = 5u.$

Hence $u$ is an eigenvector to the eigenvalue 5. To verify that 2 is an eigenvalue of $A$, we show via row reductions that $(A - 2I)x = 0$ has a nontrivial solution:

$A - 2I = \begin{pmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{pmatrix} \sim \begin{pmatrix} 1 & 1 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}.$

Hence not every column in $A - 2I$ is a pivot column, and $(A - 2I)x = 0$ has a nontrivial solution.

b) We have learned from a) that 5 and 2 are eigenvalues of $A$. Moreover, every vector in $\mathbb{R}^3$ of the form

$x = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} -x_2 - x_3 \\ x_2 \\ x_3 \end{pmatrix} = x_2\begin{pmatrix} -1 \\ 1 \\ 0 \end{pmatrix} + x_3\begin{pmatrix} -1 \\ 0 \\ 1 \end{pmatrix}$
is an eigenvector for the eigenvalue 2. Hence the two vectors

$v = \begin{pmatrix} -1 \\ 1 \\ 0 \end{pmatrix} \quad\text{and}\quad w = \begin{pmatrix} -1 \\ 0 \\ 1 \end{pmatrix}$

form a basis of the eigenspace to eigenvalue 2. Since the dimensions of all eigenspaces have to add up to 3, we see that there are no other eigenvalues, and the eigenspace to eigenvalue 5 has dimension 1, with $u$ as a basis vector.

c) It remains to orthonormalize the basis $\{u, v, w\}$. Since $A$ is a symmetric matrix, we know that $u$ is orthogonal to $v$ and $w$ (for $u$ corresponds to a different eigenvalue). Hence we just need to normalize $u$ and define a new basis vector

$v_1 := \frac{u}{\|u\|} = \frac{1}{\sqrt{3}}\begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}.$

Next we orthogonalize $v$ and $w$ using the Gram-Schmidt process. We keep $v$ and define a new vector $\tilde{w}$ which is orthogonal to $v$:

$\tilde{w} := w - \frac{w \cdot v}{v \cdot v}\,v = \begin{pmatrix} -1 \\ 0 \\ 1 \end{pmatrix} - \frac{1}{2}\begin{pmatrix} -1 \\ 1 \\ 0 \end{pmatrix} = \begin{pmatrix} -1/2 \\ -1/2 \\ 1 \end{pmatrix}.$

It remains to normalize $v$ and $\tilde{w}$ to get the new vectors $v_2$ and $v_3$:

$v_2 := \frac{v}{\|v\|} = \frac{1}{\sqrt{2}}\begin{pmatrix} -1 \\ 1 \\ 0 \end{pmatrix} \quad\text{and}\quad v_3 := \frac{\tilde{w}}{\|\tilde{w}\|} = \frac{2}{\sqrt{6}}\begin{pmatrix} -1/2 \\ -1/2 \\ 1 \end{pmatrix} = \begin{pmatrix} -1/\sqrt{6} \\ -1/\sqrt{6} \\ 2/\sqrt{6} \end{pmatrix}.$

Hence we can orthogonally diagonalize $A$ as $A = PDP^T$ with

$D = \begin{pmatrix} 5 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 2 \end{pmatrix}, \quad P = \begin{pmatrix} 1/\sqrt{3} & -1/\sqrt{2} & -1/\sqrt{6} \\ 1/\sqrt{3} & 1/\sqrt{2} & -1/\sqrt{6} \\ 1/\sqrt{3} & 0 & 2/\sqrt{6} \end{pmatrix}, \quad P^T = \begin{pmatrix} 1/\sqrt{3} & 1/\sqrt{3} & 1/\sqrt{3} \\ -1/\sqrt{2} & 1/\sqrt{2} & 0 \\ -1/\sqrt{6} & -1/\sqrt{6} & 2/\sqrt{6} \end{pmatrix}.$
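As a numerical cross-check (an addition to the solution), NumPy confirms that $P$ is orthogonal, that $PDP^T$ reconstructs $A$, and that the eigenvalues are 2, 2 and 5:

```python
import numpy as np

A = np.array([[3.0, 1.0, 1.0],
              [1.0, 3.0, 1.0],
              [1.0, 1.0, 3.0]])

s3, s2, s6 = np.sqrt(3), np.sqrt(2), np.sqrt(6)
P = np.array([[1/s3, -1/s2, -1/s6],
              [1/s3,  1/s2, -1/s6],
              [1/s3,  0.0,   2/s6]])
D = np.diag([5.0, 2.0, 2.0])

assert np.allclose(P @ P.T, np.eye(3))  # P is orthogonal
assert np.allclose(P @ D @ P.T, A)      # A = P D P^T
print(np.linalg.eigvalsh(A))            # [2. 2. 5.]
```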
Problem 10

Let $W \subseteq \mathbb{R}^n$ be a subspace and $W^\perp$ be its orthogonal complement.

a) Show that $W^\perp$ is a subspace of $\mathbb{R}^n$.

b) Let $w$ be a vector which lies both in $W$ and in $W^\perp$ (i.e. $w \in W \cap W^\perp$). Show that this implies $w = 0$.

c) Let $\{w_1, \ldots, w_r\}$ be a basis of $W$ and let $\{v_1, \ldots, v_s\}$ be a basis of $W^\perp$. Show that $\{w_1, \ldots, w_r, v_1, \ldots, v_s\}$ is a basis of $\mathbb{R}^n$.

a) Since the zero vector is orthogonal to every vector in $\mathbb{R}^n$, it is also an element of $W^\perp$. Let $u$, $v$ be arbitrary vectors in $W^\perp$, $w$ be an arbitrary vector in $W$, and $\lambda$ be any real number. Then we have

$(u + v) \cdot w = u \cdot w + v \cdot w = 0 + 0 = 0$, and $(\lambda u) \cdot w = \lambda(u \cdot w) = \lambda \cdot 0 = 0$.

Hence $u + v \in W^\perp$ and $\lambda u \in W^\perp$, and $W^\perp$ is indeed a subspace of $\mathbb{R}^n$.

b) Let $w$ be a vector which lies both in $W$ and $W^\perp$. Then $w \in W^\perp$ implies that $w$ is orthogonal to every vector in $W$ and, in particular, $w$ is orthogonal to itself. That means $w \cdot w = 0$, and hence $w$ must be the zero vector in $\mathbb{R}^n$.

c) We need to show that $\{w_1, \ldots, w_r, v_1, \ldots, v_s\}$ is linearly independent and that it spans $\mathbb{R}^n$. We start with linear independence. Let $\lambda_1, \ldots, \lambda_r, \mu_1, \ldots, \mu_s$ be real numbers such that

$\lambda_1 w_1 + \cdots + \lambda_r w_r + \mu_1 v_1 + \cdots + \mu_s v_s = 0.$

This is equivalent to

$\lambda_1 w_1 + \cdots + \lambda_r w_r = -(\mu_1 v_1 + \cdots + \mu_s v_s).$

But the vector $\lambda_1 w_1 + \cdots + \lambda_r w_r$ is an element of $W$, whereas the vector $-(\mu_1 v_1 + \cdots + \mu_s v_s)$ is an element of $W^\perp$. By b), this implies

$\lambda_1 w_1 + \cdots + \lambda_r w_r = 0 = \mu_1 v_1 + \cdots + \mu_s v_s.$
Since both sets $\{w_1, \ldots, w_r\}$ and $\{v_1, \ldots, v_s\}$ are linearly independent, this implies $\lambda_1 = \cdots = \lambda_r = 0$ and $\mu_1 = \cdots = \mu_s = 0$. This shows that $\{w_1, \ldots, w_r, v_1, \ldots, v_s\}$ is a linearly independent set of vectors.

It remains to show that $\{w_1, \ldots, w_r, v_1, \ldots, v_s\}$ spans $\mathbb{R}^n$. Let $y$ be an arbitrary vector in $\mathbb{R}^n$. We have learned that we can write $y$ as a sum $y = \mathrm{proj}_W y + z$ with $\mathrm{proj}_W y \in W$ and $z \in W^\perp$. By the assumptions, we can write $\mathrm{proj}_W y$ as a linear combination of $w_1, \ldots, w_r$ and $z$ as a linear combination of $v_1, \ldots, v_s$. Hence we can also write $y$ as a linear combination of $w_1, \ldots, w_r, v_1, \ldots, v_s$.
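The decomposition $y = \mathrm{proj}_W y + z$ used in the spanning argument can be illustrated numerically; the sketch below picks a random subspace $W$ of $\mathbb{R}^5$ (a hypothetical example, not from the exam):

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 5, 2

# A random subspace W of R^5, spanned by the columns of B.
B = rng.standard_normal((n, r))
Q, _ = np.linalg.qr(B)  # orthonormal basis of W

y = rng.standard_normal(n)
proj = Q @ (Q.T @ y)    # proj_W y, the component of y in W
z = y - proj            # the component of y in W-perp

assert np.allclose(proj + z, y)
assert np.allclose(B.T @ z, np.zeros(r))  # z is orthogonal to all of W
print("y = proj_W y + z with z in W-perp")
```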