Examination paper for TMA4115 Matematikk 3




Department of Mathematical Sciences

Examination paper for TMA4115 Matematikk 3

Academic contact during examination: Antoine Julien (a), Alexander Schmeding (b), Gereon Quick (c)
Phone: (a) 73 59 77 82, (b) 40 53 99 2, (c) 48 50 4 2
Examination date: 3 May 2016
Examination time (from-to): 09:00-13:00
Permitted examination support material: C: Simple calculator (Casio fx-82ES PLUS, Citizen SR-270X, Citizen SR-270X College, or Hewlett Packard HP30S), Rottmann: Matematisk formelsamling.
Other information: Give reasons for all answers, ensuring that it is clear how the answers have been reached. Each of the 10 problems has the same weight.
Language: English
Number of pages: 15
Number of pages enclosed: 0
Checked by: Date / Signature

Problem 1

a) For $z = -1 + i\sqrt{3}$, compute $z^3$ and $z^6$.

b) Find all complex numbers $z$ with $z^3 = 8i$ and draw them in the complex plane.

a) The polar form of $-1 + i\sqrt{3}$ is $2e^{i2\pi/3}$. Hence
$$(-1 + i\sqrt{3})^3 = (2e^{i2\pi/3})^3 = 2^3 e^{(i2\pi/3)\cdot 3} = 8e^{i2\pi} = 8,$$
and $z^6 = z^3 \cdot z^3 = 64$.

b) The polar form of $8i$ is $8e^{i\pi/2} = 2^3 e^{i\pi/2}$. For $z = re^{i\theta}$, the equation $z^3 = 8i$ becomes $2^3 e^{i\pi/2} = (re^{i\theta})^3 = r^3 e^{i3\theta}$. This holds if and only if $r = 2$ and $\theta = \pi/6 + (2\pi/3)k$ for an integer $k$. It suffices to look at the integers $k = 0$, $k = 1$ and $k = 2$. These yield the solutions $z_0 = 2e^{i\pi/6} = \sqrt{3} + i$, $z_1 = 2e^{i5\pi/6} = -\sqrt{3} + i$ and $z_2 = 2e^{i3\pi/2} = -2i$.
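These values are easy to double-check numerically. A minimal Python sketch (not something the exam itself asks for) that recomputes the three cube roots from the polar form:

```python
import cmath

# Cube roots of 8i, from the polar form 8i = 8 e^{i pi/2}:
# z_k = 2 e^{i(pi/6 + 2 pi k / 3)} for k = 0, 1, 2.
roots = [2 * cmath.exp(1j * (cmath.pi / 6 + 2 * cmath.pi * k / 3)) for k in range(3)]
for z in roots:
    print(z, z ** 3)          # each z**3 is approximately 8i

z = -1 + 1j * 3 ** 0.5        # the number from part a)
print(z ** 3, z ** 6)         # approximately 8 and 64
```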

Problem 2

Consider the inhomogeneous differential equation
$$y'' + 6y' + 9y = \cos t. \qquad (1)$$

a) Find the general solution of the associated homogeneous equation.

b) Find a particular solution of (1).

c) Find the unique solution of (1) that satisfies $y(0) = y'(0) = 0$.

a) The homogeneous equation is $y'' + 6y' + 9y = 0$. The characteristic equation is $\lambda^2 + 6\lambda + 9 = 0$, which has the double root $\lambda = -3$. Therefore a fundamental system of solutions is $y_1 = e^{-3t}$ and $y_2 = te^{-3t}$, and the general solution is $y_h = c_1 e^{-3t} + c_2 te^{-3t}$.

b) We can look for a particular solution using undetermined coefficients, trying
$$y_p = a\cos t + b\sin t, \qquad y_p' = -a\sin t + b\cos t, \qquad y_p'' = -a\cos t - b\sin t.$$
By substituting, we get that $y_p$ is a solution if and only if
$$(-a + 6b + 9a)\cos t + (-b - 6a + 9b)\sin t = \cos t.$$
Therefore we have the system $8a + 6b = 1$ and $-6a + 8b = 0$. The solution is $a = 4/50$ and $b = 3/50$, so $y_p = (1/50)(4\cos t + 3\sin t)$.

c) From the two previous parts we know that the general solution is $y = c_1 e^{-3t} + c_2 te^{-3t} + (1/50)(4\cos t + 3\sin t)$. Setting $y(0) = 0$ we get $c_1 = -4/50$. Differentiating, we get $y' = -3c_1 e^{-3t} - 3c_2 te^{-3t} + c_2 e^{-3t} + (1/50)(3\cos t - 4\sin t)$. Hence $y'(0) = 0$ means $-3c_1 + c_2 + 3/50 = 0$, that is $c_2 = -15/50$. In the end, the solution we want is
$$y = \frac{1}{50}\Big((-4 - 15t)e^{-3t} + 4\cos t + 3\sin t\Big).$$
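The final formula can be verified symbolically; a quick check with SymPy (assuming it is available) that the solution satisfies both the equation and the initial conditions:

```python
import sympy as sp

t = sp.symbols('t')
y = sp.Rational(1, 50) * ((-4 - 15 * t) * sp.exp(-3 * t)
                          + 4 * sp.cos(t) + 3 * sp.sin(t))

# Residual of the ODE y'' + 6y' + 9y = cos t, and the initial values.
residual = sp.diff(y, t, 2) + 6 * sp.diff(y, t) + 9 * y - sp.cos(t)
print(sp.simplify(residual))                    # 0, so (1) is satisfied
print(y.subs(t, 0), sp.diff(y, t).subs(t, 0))   # 0 0, so y(0) = y'(0) = 0
```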

Problem 3

Let $a$ be a real number and let $A$ be the matrix
$$A = \begin{pmatrix} 0 & a \\ -a & 0 \end{pmatrix}.$$

a) Find a fundamental set of real solutions to the differential equation $x' = Ax$.

b) Solve the initial value problem $x' = Ax$, where $x(0) = \begin{pmatrix} 2 \\ 1 \end{pmatrix}$.

a) The characteristic equation is $\lambda^2 + a^2 = 0$, hence $\lambda = \pm ai$. We need an eigenvector corresponding to one of the eigenvalues. Take $\lambda = ai$. Then we must solve
$$\begin{pmatrix} -ai & a \\ -a & -ai \end{pmatrix} v = 0.$$
Since we know the matrix is not invertible, we can look at the first row only, and we get $v_2 = iv_1$. So a possible eigenvector is $v = \begin{pmatrix} 1 \\ i \end{pmatrix}$. A fundamental set of real solutions is then given by the functions $x_1(t) = \operatorname{Re}(y(t))$ and $x_2(t) = \operatorname{Im}(y(t))$, where $y(t) = ve^{\lambda t}$. We compute
$$y = e^{ait}\begin{pmatrix} 1 \\ i \end{pmatrix} = (\cos at + i\sin at)\begin{pmatrix} 1 \\ i \end{pmatrix} = \begin{pmatrix} \cos at + i\sin at \\ -\sin at + i\cos at \end{pmatrix}.$$
We therefore have
$$x_1(t) = \begin{pmatrix} \cos at \\ -\sin at \end{pmatrix} \qquad\text{and}\qquad x_2(t) = \begin{pmatrix} \sin at \\ \cos at \end{pmatrix}.$$

b) The solution to the initial value problem is then $x(t) = c_1 x_1 + c_2 x_2$. We find the constants from $x(0) = c_1 x_1(0) + c_2 x_2(0)$. This linear system is immediately solvable, as $x_1(0) = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$ and $x_2(0) = \begin{pmatrix} 0 \\ 1 \end{pmatrix}$. We have $c_1 = 2$ and $c_2 = 1$, and the solution to the initial value problem is
$$x(t) = \begin{pmatrix} 2\cos at + \sin at \\ -2\sin at + \cos at \end{pmatrix}.$$
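Since $a$ is a free parameter, a numerical spot check has to fix a value for it; $a = 1.3$ below is an arbitrary illustrative choice, not part of the problem. The sketch compares a finite-difference derivative of the proposed solution with $Ax$:

```python
import numpy as np

a = 1.3  # arbitrary nonzero sample value of the parameter
A = np.array([[0.0, a], [-a, 0.0]])

def x(t):
    # proposed solution with x(0) = (2, 1)
    return np.array([2 * np.cos(a * t) + np.sin(a * t),
                     -2 * np.sin(a * t) + np.cos(a * t)])

h = 1e-6
for t in (0.0, 0.5, 2.0):
    dxdt = (x(t + h) - x(t - h)) / (2 * h)   # central difference
    print(np.max(np.abs(dxdt - A @ x(t))))   # ~1e-9: x' = Ax holds
print(x(0.0))                                # [2. 1.]
```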

Problem 4

Let $u = \begin{pmatrix} 1 \\ 2 \\ -1 \end{pmatrix}$, $v = \begin{pmatrix} 2 \\ 4 \\ -6 \end{pmatrix}$ and $w = \begin{pmatrix} 3 \\ 6 \\ 1 \end{pmatrix}$ be vectors in $\mathbb{R}^3$.

a) Write the vector $p = \begin{pmatrix} 2 \\ 4 \\ 10 \end{pmatrix}$ as a linear combination of $u$, $v$ and $w$.

b) Can you write the vector $q = \begin{pmatrix} 2 \\ 5 \\ -6 \end{pmatrix}$ as a linear combination of $u$, $v$ and $w$?

c) Are $u$, $v$, $w$ linearly independent?

d) What is the determinant of the matrix $A = \begin{pmatrix} 1 & 2 & 3 \\ 2 & 4 & 6 \\ -1 & -6 & 1 \end{pmatrix}$?

a) In order to find scalars $x_1, x_2, x_3$ with $x_1 u + x_2 v + x_3 w = p$, we need to solve the linear system $Ax = p$, where $A$ is the matrix above whose columns are $u$, $v$ and $w$. We do this by row operations on the augmented matrix:
$$\left(\begin{array}{ccc|c} 1 & 2 & 3 & 2 \\ 2 & 4 & 6 & 4 \\ -1 & -6 & 1 & 10 \end{array}\right) \sim \left(\begin{array}{ccc|c} 1 & 2 & 3 & 2 \\ 0 & 0 & 0 & 0 \\ 0 & -4 & 4 & 12 \end{array}\right) \sim \left(\begin{array}{ccc|c} 1 & 2 & 3 & 2 \\ 0 & 1 & -1 & -3 \\ 0 & 0 & 0 & 0 \end{array}\right).$$
Hence we can choose $x_3$ as a free variable, for example $x_3 = 1$. Then we get $x_2 = -2$ and $x_1 = 3$. Hence $p = 3u - 2v + w$.

b) When we try the same for $q$, we get
$$\left(\begin{array}{ccc|c} 1 & 2 & 3 & 2 \\ 2 & 4 & 6 & 5 \\ -1 & -6 & 1 & -6 \end{array}\right) \sim \left(\begin{array}{ccc|c} 1 & 2 & 3 & 2 \\ 0 & 0 & 0 & 1 \\ 0 & -4 & 4 & -4 \end{array}\right).$$
The second row corresponds to the false assertion $0 = 1$. Hence there is no solution to $Ax = q$, and we cannot write $q$ as a linear combination of $u$, $v$ and $w$.

c) They are not linearly independent. There are several ways to see that. For example, we can deduce from the calculation in a) that $(-5)u + v + w = 0$; or we can use the second or third argument in d).

d) The determinant of $A$ is 0. There are many ways to see that. For example, we have just observed that the columns of $A$ (which are $u$, $v$ and $w$) are linearly dependent; or we saw in a) that $A$ is row equivalent to a matrix with a row with only zero entries; or we showed in b) that the linear transformation $A\colon \mathbb{R}^3 \to \mathbb{R}^3$, $x \mapsto Ax$, is not onto. Hence $A$ is not invertible and $\det A = 0$.
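All four parts can be confirmed with a few NumPy calls; a small sanity-check sketch:

```python
import numpy as np

u = np.array([1, 2, -1])
v = np.array([2, 4, -6])
w = np.array([3, 6, 1])
A = np.column_stack([u, v, w])

print(np.allclose(3 * u - 2 * v + w, np.array([2, 4, 10])))  # True: part a)
q = np.array([2, 5, -6])
print(np.linalg.matrix_rank(A),                              # 2, so u, v, w are dependent ...
      np.linalg.matrix_rank(np.column_stack([A, q])))        # ... vs 3: Ax = q is inconsistent
print(np.isclose(np.linalg.det(A), 0))                       # True: det A = 0
```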

Problem 5

a) Find the inverse of the matrix $A = \begin{pmatrix} 2 & 2 & 0 \\ 0 & 0 & 1 \\ 4 & 2 & 0 \end{pmatrix}$.

b) Let $T\colon \mathbb{R}^3 \to \mathbb{R}^3$ be the linear transformation defined by
$$T\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 2x_1 + 2x_2 \\ x_3 \\ 4x_1 + 2x_2 \end{pmatrix}.$$
Is $T$ one-to-one?

a) To find $A^{-1}$ we perform row operations on $(A \mid I)$ to reduce $A$ to the identity matrix:
$$\left(\begin{array}{ccc|ccc} 2 & 2 & 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 & 1 & 0 \\ 4 & 2 & 0 & 0 & 0 & 1 \end{array}\right) \sim \left(\begin{array}{ccc|ccc} 2 & 2 & 0 & 1 & 0 & 0 \\ 0 & -2 & 0 & -2 & 0 & 1 \\ 0 & 0 & 1 & 0 & 1 & 0 \end{array}\right) \sim \left(\begin{array}{ccc|ccc} 1 & 0 & 0 & -1/2 & 0 & 1/2 \\ 0 & 1 & 0 & 1 & 0 & -1/2 \\ 0 & 0 & 1 & 0 & 1 & 0 \end{array}\right).$$
Hence
$$A^{-1} = \begin{pmatrix} -1/2 & 0 & 1/2 \\ 1 & 0 & -1/2 \\ 0 & 1 & 0 \end{pmatrix}.$$

b) $T$ is one-to-one. There are at least two ways to see this. First, we could observe that $A$ is the standard matrix of $T$ (recall: the columns of the standard matrix are the images of the standard basis vectors $e_1, e_2, e_3$ under $T$). Since $A$ is invertible, $0 = T(x) = Ax$ implies $x = 0$, so $T$ is one-to-one. Second, we could just set $T(x) = 0$. This can only be true if all three equations $2x_1 + 2x_2 = 0$, $x_3 = 0$ and $4x_1 + 2x_2 = 0$ hold. But this is true only if $x_1 = x_2 = x_3 = 0$. Hence $T(x) = 0$ implies $x = 0$ and $T$ is one-to-one.
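A short NumPy check of the inverse:

```python
import numpy as np

A = np.array([[2, 2, 0],
              [0, 0, 1],
              [4, 2, 0]], dtype=float)

A_inv = np.linalg.inv(A)
print(A_inv)                               # [[-0.5 0. 0.5] [1. 0. -0.5] [0. 1. 0.]]
print(np.allclose(A @ A_inv, np.eye(3)))   # True
```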

Problem 6

Let $A$ be the matrix
$$A = \begin{pmatrix} 1 & 2 & 0 & 3 & 1 \\ 2 & 4 & 1 & 5 & 4 \\ 3 & 6 & 1 & 8 & 5 \\ 5 & 4 & 8 & -1 & 1 \end{pmatrix}.$$

a) Bring $A$ into row echelon form.

b) Find a basis for $\operatorname{Col}(A)$ and determine the rank of $A$.

c) Determine the dimension of $\operatorname{Nul}(A)$.

d) Determine the dimensions of $\operatorname{Row}(A)$ and of $\operatorname{Nul}(A^T)$.

a) We use row operations to bring $A$ into echelon form:
$$A \sim \begin{pmatrix} 1 & 2 & 0 & 3 & 1 \\ 0 & 0 & 1 & -1 & 2 \\ 0 & 0 & 1 & -1 & 2 \\ 0 & -6 & 8 & -16 & -4 \end{pmatrix} \sim \begin{pmatrix} 1 & 2 & 0 & 3 & 1 \\ 0 & -6 & 8 & -16 & -4 \\ 0 & 0 & 1 & -1 & 2 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix}.$$

b) We observe that the first three columns of $A$ are pivot columns. Hence the rank of $A$ is 3, and the vectors
$$\begin{pmatrix} 1 \\ 2 \\ 3 \\ 5 \end{pmatrix}, \qquad \begin{pmatrix} 2 \\ 4 \\ 6 \\ 4 \end{pmatrix}, \qquad \begin{pmatrix} 0 \\ 1 \\ 1 \\ 8 \end{pmatrix}$$
form a basis of $\operatorname{Col}(A)$.

c) The dimension of $\operatorname{Nul}(A)$ is 2. There are several ways to see this. One way is to observe that $A$ has two non-pivot columns. Hence there are two free variables in the linear system corresponding to $A$. Another way is to use the Rank Theorem, which tells us:
$$\text{number of columns of } A = \dim\operatorname{Col}(A) + \dim\operatorname{Nul}(A).$$
Hence $\dim\operatorname{Nul}(A) = 5 - 3 = 2$.

d) The row echelon form of $A$ tells us that $A$ has three linearly independent rows. The dimension of the row space is therefore 3. The dimension of $\operatorname{Nul}(A^T)$ can be computed using the Rank Theorem and the fact that $\operatorname{Col}(A^T) = \operatorname{Row}(A)$:
$$\text{number of rows of } A = \dim\operatorname{Col}(A^T) + \dim\operatorname{Nul}(A^T).$$
Hence $\dim\operatorname{Nul}(A^T) = 4 - 3 = 1$.
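All four dimensions follow from the rank, so they can be read off numerically; a minimal NumPy sketch:

```python
import numpy as np

A = np.array([[1, 2, 0, 3, 1],
              [2, 4, 1, 5, 4],
              [3, 6, 1, 8, 5],
              [5, 4, 8, -1, 1]], dtype=float)

r = np.linalg.matrix_rank(A)
print(r)                 # 3 = rank = dim Col(A) = dim Row(A)
print(A.shape[1] - r)    # 2 = dim Nul(A)
print(A.shape[0] - r)    # 1 = dim Nul(A^T)
```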

Problem 7

The temperature in Bymarka during the winter season can be either above, equal to or below 0° Celsius. Trondheim's ski club observes the following fluctuation of temperatures from one day to the next: If the temperature has been above 0, there is a 70% chance that it will be above and a 10% chance that it will be below 0 the next day. If the temperature has been equal to 0, there is a 10% chance that it will be above and a 10% chance that it will be below 0 the next day. If the temperature has been below 0, there is a 10% chance that it will be above and a 70% chance that it will be below 0 the next day.

After many days of this pattern in the winter, for what temperature should a skier prepare his/her skis? (Give the probabilities for the three possible temperatures.)

The stochastic matrix which describes the probability of the temperature being above, equal to or below 0 is
$$A = \begin{pmatrix} 0.7 & 0.1 & 0.1 \\ 0.2 & 0.8 & 0.2 \\ 0.1 & 0.1 & 0.7 \end{pmatrix}.$$
We want to find the stationary vector: a probability vector (i.e. a vector whose entries are nonnegative and add up to 1) which satisfies $Av = v$. Thus we need to solve the system of linear equations with matrix $A - I$. After multiplying by 10, Gauss elimination gives
$$10(A - I) = \begin{pmatrix} -3 & 1 & 1 \\ 2 & -2 & 2 \\ 1 & 1 & -3 \end{pmatrix} \sim \begin{pmatrix} 1 & 1 & -3 \\ 0 & -4 & 8 \\ 0 & 0 & 0 \end{pmatrix}.$$
The solutions satisfy $x_2 = 2x_3$ and $x_1 = -x_2 + 3x_3 = x_3$. Choosing $x_3 = 0.25$ yields $x_1 = 0.25$ and $x_2 = 0.5$. Hence the stationary probability vector is
$$v = \begin{pmatrix} 0.25 \\ 0.5 \\ 0.25 \end{pmatrix}.$$
Thus the most likely case is that the temperature is 0°C, with a 50% chance. The probability of a temperature above 0°C is 25%, and the probability of a temperature below 0°C is also 25%.
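The stationary vector can equivalently be computed as an eigenvector of $A$ for the eigenvalue 1, rescaled into a probability vector; a quick NumPy sketch:

```python
import numpy as np

A = np.array([[0.7, 0.1, 0.1],
              [0.2, 0.8, 0.2],
              [0.1, 0.1, 0.7]])

vals, vecs = np.linalg.eig(A)
k = np.argmin(np.abs(vals - 1.0))   # index of the eigenvalue 1
v = np.real(vecs[:, k])
v = v / v.sum()                     # rescale so the entries add up to 1
print(v)                            # [0.25 0.5 0.25]
```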

Problem 8

Find the equation $y = mx + c$ of the line that best fits the data points $(0, 4)$, $(1, -1)$, $(2, 1)$, $(3, -3)$ and $(4, -1)$.

We are looking for the least squares solution of the system $Ax = b$ with
$$A = \begin{pmatrix} 0 & 1 \\ 1 & 1 \\ 2 & 1 \\ 3 & 1 \\ 4 & 1 \end{pmatrix}, \qquad x = \begin{pmatrix} m \\ c \end{pmatrix}, \qquad b = \begin{pmatrix} 4 \\ -1 \\ 1 \\ -3 \\ -1 \end{pmatrix}.$$
To find the solution we solve the system $A^T A x = A^T b$, which is
$$\begin{pmatrix} 30 & 10 \\ 10 & 5 \end{pmatrix}\begin{pmatrix} m \\ c \end{pmatrix} = \begin{pmatrix} -12 \\ 0 \end{pmatrix}.$$
Gauss elimination gives
$$\left(\begin{array}{cc|c} 30 & 10 & -12 \\ 10 & 5 & 0 \end{array}\right) \sim \left(\begin{array}{cc|c} 10 & 5 & 0 \\ 0 & -5 & -12 \end{array}\right).$$
The solution is $m = -6/5$ and $c = 12/5$. Thus the best fitting line is $y = -(6/5)x + 12/5$.
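NumPy's least-squares routine gives the same line directly, which makes for a convenient cross-check:

```python
import numpy as np

xs = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
ys = np.array([4.0, -1.0, 1.0, -3.0, -1.0])
A = np.column_stack([xs, np.ones_like(xs)])   # design matrix for y = m*x + c

(m, c), *_ = np.linalg.lstsq(A, ys, rcond=None)
print(m, c)   # -1.2 2.4, i.e. m = -6/5 and c = 12/5
```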

Problem 9

Let $A$ be the matrix $\begin{pmatrix} 3 & 1 & 1 \\ 1 & 3 & 1 \\ 1 & 1 & 3 \end{pmatrix}$ and let $u$ be the vector $\begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}$.

a) Verify that 2 is an eigenvalue of $A$ and that $u$ is an eigenvector of $A$ (possibly with an eigenvalue different from 2).

b) Find all the eigenvalues of $A$ and a basis for each eigenspace of $A$.

c) Is $A$ orthogonally diagonalizable? If so, orthogonally diagonalize $A$.

a) To check that $u$ is an eigenvector we just calculate
$$Au = \begin{pmatrix} 3 & 1 & 1 \\ 1 & 3 & 1 \\ 1 & 1 & 3 \end{pmatrix}\begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} = \begin{pmatrix} 5 \\ 5 \\ 5 \end{pmatrix} = 5u.$$
Hence $u$ is an eigenvector with eigenvalue 5. To verify that 2 is an eigenvalue of $A$ we show via row reductions that $(A - 2I)x = 0$ has a nontrivial solution:
$$A - 2I = \begin{pmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{pmatrix} \sim \begin{pmatrix} 1 & 1 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}.$$
Hence not every column of $A - 2I$ is a pivot column, and $(A - 2I)x = 0$ has a nontrivial solution.

b) We have learned from a) that 5 and 2 are eigenvalues of $A$. Moreover, every vector in $\mathbb{R}^3$ of the form
$$x = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = x_2\begin{pmatrix} -1 \\ 1 \\ 0 \end{pmatrix} + x_3\begin{pmatrix} -1 \\ 0 \\ 1 \end{pmatrix}, \qquad x_1 = -x_2 - x_3,$$
is an eigenvector for the eigenvalue 2. Hence the two vectors
$$v = \begin{pmatrix} -1 \\ 1 \\ 0 \end{pmatrix} \qquad\text{and}\qquad w = \begin{pmatrix} -1 \\ 0 \\ 1 \end{pmatrix}$$
form a basis of the eigenspace for the eigenvalue 2. Since the dimensions of all eigenspaces have to add up to 3, we see that there are no other eigenvalues, and the eigenspace for the eigenvalue 5 has dimension 1, with $u$ as a basis vector.

c) It remains to orthonormalize the basis $\{u, v, w\}$. Since $A$ is a symmetric matrix, we know that $u$ is orthogonal to $v$ and $w$ (for $u$ corresponds to a different eigenvalue). Hence we just need to normalize $u$ and define a new basis vector
$$v_1 := \frac{u}{\|u\|} = \frac{1}{\sqrt{3}}\begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}.$$
Next we orthogonalize $v$ and $w$ using the Gram-Schmidt process. We keep $v$ and define a new vector $\tilde{w}$ which is orthogonal to $v$:
$$\tilde{w} := w - \frac{w \cdot v}{v \cdot v}\,v = \begin{pmatrix} -1 \\ 0 \\ 1 \end{pmatrix} - \frac{1}{2}\begin{pmatrix} -1 \\ 1 \\ 0 \end{pmatrix} = \begin{pmatrix} -1/2 \\ -1/2 \\ 1 \end{pmatrix}.$$
It remains to normalize $v$ and $\tilde{w}$ to get the new vectors $v_2$ and $v_3$:
$$v_2 := \frac{v}{\|v\|} = \frac{1}{\sqrt{2}}\begin{pmatrix} -1 \\ 1 \\ 0 \end{pmatrix} = \begin{pmatrix} -1/\sqrt{2} \\ 1/\sqrt{2} \\ 0 \end{pmatrix}$$
and
$$v_3 := \frac{\tilde{w}}{\|\tilde{w}\|} = \frac{2}{\sqrt{6}}\begin{pmatrix} -1/2 \\ -1/2 \\ 1 \end{pmatrix} = \begin{pmatrix} -1/\sqrt{6} \\ -1/\sqrt{6} \\ 2/\sqrt{6} \end{pmatrix}.$$
Hence we can orthogonally diagonalize $A$ as $A = PDP^T$ with
$$D = \begin{pmatrix} 5 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 2 \end{pmatrix}, \qquad P = \begin{pmatrix} 1/\sqrt{3} & -1/\sqrt{2} & -1/\sqrt{6} \\ 1/\sqrt{3} & 1/\sqrt{2} & -1/\sqrt{6} \\ 1/\sqrt{3} & 0 & 2/\sqrt{6} \end{pmatrix}, \qquad P^T = \begin{pmatrix} 1/\sqrt{3} & 1/\sqrt{3} & 1/\sqrt{3} \\ -1/\sqrt{2} & 1/\sqrt{2} & 0 \\ -1/\sqrt{6} & -1/\sqrt{6} & 2/\sqrt{6} \end{pmatrix}.$$
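Since $A$ is symmetric, NumPy's eigh routine returns an orthonormal eigenbasis, which makes checking $A = PDP^T$ a one-liner:

```python
import numpy as np

A = np.array([[3, 1, 1],
              [1, 3, 1],
              [1, 1, 3]], dtype=float)

vals, P = np.linalg.eigh(A)                      # eigenvalues in ascending order
print(vals)                                      # [2. 2. 5.]
print(np.allclose(P.T @ P, np.eye(3)))           # True: P is orthogonal
print(np.allclose(P @ np.diag(vals) @ P.T, A))   # True: A = P D P^T
```

Note that eigh may pick a different orthonormal basis of the two-dimensional eigenspace for the eigenvalue 2 than the one constructed above; any such choice orthogonally diagonalizes $A$.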

Problem 10

Let $W \subseteq \mathbb{R}^n$ be a subspace and let $W^\perp$ be its orthogonal complement.

a) Show that $W^\perp$ is a subspace of $\mathbb{R}^n$.

b) Let $w$ be a vector which lies both in $W$ and in $W^\perp$ (i.e. $w \in W \cap W^\perp$). Show that this implies $w = 0$.

c) Let $\{w_1, \ldots, w_r\}$ be a basis of $W$ and let $\{v_1, \ldots, v_s\}$ be a basis of $W^\perp$. Show that $\{w_1, \ldots, w_r, v_1, \ldots, v_s\}$ is a basis of $\mathbb{R}^n$.

a) Since the zero vector is orthogonal to every vector in $\mathbb{R}^n$, it is also an element of $W^\perp$. Let $u, v$ be arbitrary vectors in $W^\perp$, let $w$ be an arbitrary vector in $W$, and let $\lambda$ be any real number. Then we have
$$(u + v)\cdot w = u\cdot w + v\cdot w = 0 + 0 = 0, \qquad\text{and}\qquad (\lambda u)\cdot w = \lambda(u\cdot w) = \lambda\cdot 0 = 0.$$
Hence $u + v \in W^\perp$ and $\lambda u \in W^\perp$, and $W^\perp$ is indeed a subspace of $\mathbb{R}^n$.

b) Let $w$ be a vector which lies both in $W$ and $W^\perp$. Then $w \in W^\perp$ implies that $w$ is orthogonal to every vector in $W$ and, in particular, $w$ is orthogonal to itself. That means $w \cdot w = 0$, and hence $w$ must be the zero vector in $\mathbb{R}^n$.

c) We need to show that $\{w_1, \ldots, w_r, v_1, \ldots, v_s\}$ is linearly independent and that it spans $\mathbb{R}^n$.

We start with linear independence. Let $\lambda_1, \ldots, \lambda_r, \mu_1, \ldots, \mu_s$ be real numbers such that
$$\lambda_1 w_1 + \ldots + \lambda_r w_r + \mu_1 v_1 + \ldots + \mu_s v_s = 0.$$
This is equivalent to
$$\lambda_1 w_1 + \ldots + \lambda_r w_r = -(\mu_1 v_1 + \ldots + \mu_s v_s).$$
But the vector $\lambda_1 w_1 + \ldots + \lambda_r w_r$ is an element of $W$, whereas the vector $-(\mu_1 v_1 + \ldots + \mu_s v_s)$ is an element of $W^\perp$. By b), this implies
$$\lambda_1 w_1 + \ldots + \lambda_r w_r = 0 = \mu_1 v_1 + \ldots + \mu_s v_s.$$
Since both sets $\{w_1, \ldots, w_r\}$ and $\{v_1, \ldots, v_s\}$ are linearly independent, this implies $\lambda_1 = \ldots = \lambda_r = 0$ and $\mu_1 = \ldots = \mu_s = 0$. This shows that $\{w_1, \ldots, w_r, v_1, \ldots, v_s\}$ is a linearly independent set of vectors.

It remains to show that $\{w_1, \ldots, w_r, v_1, \ldots, v_s\}$ spans $\mathbb{R}^n$. Let $y$ be an arbitrary vector in $\mathbb{R}^n$. We have learned that we can write $y$ as a sum $y = \operatorname{proj}_W y + z$ with $\operatorname{proj}_W y \in W$ and $z \in W^\perp$. By the assumptions, we can write $\operatorname{proj}_W y$ as a linear combination of $w_1, \ldots, w_r$ and $z$ as a linear combination of $v_1, \ldots, v_s$. Hence we can also write $y$ as a linear combination of $w_1, \ldots, w_r, v_1, \ldots, v_s$.
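The proof is abstract, but the statement of c) is easy to illustrate numerically. In the sketch below, $W$ is taken to be the column space of a matrix M chosen purely as an example in $\mathbb{R}^4$; the full SVD yields an orthonormal basis of $W$ (the first $r$ left singular vectors) and of $W^\perp$ (the remaining ones), and together they form a basis of $\mathbb{R}^4$:

```python
import numpy as np

# Example subspace W = Col(M) in R^4; M is an arbitrary illustrative choice.
M = np.array([[1.0, 0.0],
              [2.0, 1.0],
              [0.0, 1.0],
              [1.0, 3.0]])

U, s, _ = np.linalg.svd(M)               # full SVD: U is an orthogonal 4 x 4 matrix
r = int(np.sum(s > 1e-12))               # r = dim W
W_basis, Wperp_basis = U[:, :r], U[:, r:]

B = np.hstack([W_basis, Wperp_basis])    # r + (n - r) = n vectors
print(np.linalg.matrix_rank(B))          # 4: together they form a basis of R^4
print(np.allclose(W_basis.T @ Wperp_basis, 0))  # True: the two pieces are orthogonal
```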