CITY UNIVERSITY LONDON
BEng Degree in Computer Systems Engineering Part II
BSc Degree in Computer Systems Engineering Part III
PART 2 EXAMINATION




No:

ENGINEERING MATHEMATICS 2 (resit)    EX2005

Date: August 2004    Time: 3 hours

Attempt FIVE out of EIGHT questions

Question 1

(a) Using Venn diagrams (or otherwise) show that:

P(A ∪ B ∪ C) = P(A) + P(B) + P(C) - P(A ∩ B) - P(A ∩ C) - P(B ∩ C) + P(A ∩ B ∩ C)

where A, B and C are any three events in a sample space S.

(b) Define the conditional probability of an event A given an event B. Show that for two events A and B

P(A | B) = P(B | A)P(A) / [ P(B | A)P(A) + P(B | A′)P(A′) ]

where A′ is the complement of A; the denominator is the total probability formula for P(B).

(c) In answering a question on a multiple-choice test, a student either knows the answer or guesses. Let p be the probability that (s)he knows the answer and 1 - p the probability that (s)he guesses. Assume that a student who guesses at the answer will be correct with probability 1/m, where m is the number of multiple-choice alternatives. What is the conditional probability that a student knew the answer to a question, given that (s)he answered it correctly?

Question 2

(a) Define the mean and the variance of a continuous random variable X. Suppose that X has probability density function f_X(x) = kx for 0 ≤ x ≤ 1 and f_X(x) = 0 otherwise. Find:

(i) the value of k;
(ii) the mean of X;
(iii) the variance of X; and
(iv) the probability that X > 0.5.

(b) A discrete random variable X is called Poisson with parameter λ > 0 if its probability mass function is given by:

p_X(k) = P[X = k] = e^(-λ) λ^k / k!    (k = 0, 1, 2, ...)

(i) Show that p_X(k) defined above has the correct properties of a probability mass function.
(ii) Show that the mean value of X is E(X) = λ.

Hint: The Taylor series expansion of e^x is

e^x = 1 + x + x^2/2! + x^3/3! + ... = Σ_{i=0}^∞ x^i / i!

Question 3

(a) In a university, a total of 1232 students have taken a course in Spanish, 879 have taken a course in French and 114 have taken a course in Russian. Further, 103 have taken courses in both Spanish and French, 23 have taken courses in both Spanish and Russian, and 14 in both French and Russian.

(i) If 2092 students have taken at least one of Spanish, French and Russian, how many students have taken a course in all three languages?
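The count in part (i) follows from the three-set inclusion-exclusion formula of Question 1(a) applied to set sizes. A minimal numeric sketch (using the enrolment figures stated in the question, and solving the formula for the triple intersection):

```python
# Inclusion-exclusion for three sets:
# |S ∪ F ∪ R| = |S| + |F| + |R| - |S∩F| - |S∩R| - |F∩R| + |S∩F∩R|
spanish, french, russian = 1232, 879, 114   # single-language counts
sf, sr, fr = 103, 23, 14                    # pairwise counts
union = 2092                                # students with at least one language

# Rearranged for the unknown triple intersection |S∩F∩R|:
all_three = union - (spanish + french + russian) + (sf + sr + fr)
print(all_three)  # -> 7
```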

(ii) Use a Venn diagram to illustrate all the different subsets of students described above. [4 marks]

(b) Use a truth table to prove that the following statements are logically equivalent:

p ∧ (q ∨ r)    and    (p ∧ q) ∨ (p ∧ r)

Question 4

(a) Let A = {1, 2, 3, 4} be a set and define the relations R and S on A by:

R = {(1, 2), (1, 1), (1, 3), (2, 4), (3, 2)}
S = {(1, 4), (1, 3), (2, 3), (3, 1), (4, 1)}

(i) Determine the composite relation S ∘ R using the arrow diagram representations of R and S.
(ii) Give the matrix representations of S and R and use them to determine the composition S ∘ R.

(b) Let R be the set of real numbers, let A = R × R, and define the relation R on A by:

(a, b) R (c, d) if and only if a^2 + b^2 = c^2 + d^2

(i) Show that R is an equivalence relation. [4 marks]
(ii) Give a geometric interpretation of the elements in each equivalence class, and describe the set A/R of all equivalence classes.

Question 5

(a) Using Laplace transforms (or otherwise) solve the following differential equation:

d^2 y(t)/dt^2 + 2 dy(t)/dt + y(t) = e^(-t)

subject to the initial conditions:

y(0) = 1 and dy(0)/dt = 1

Check your solution by substituting into the differential equation and by verifying that the initial conditions are satisfied. A table of Laplace transforms is provided at the end of the paper.

(b) Show from first principles (i.e. without reference to the table of Laplace transforms) that:

L(cos ωt) = s / (s^2 + ω^2)
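The transform of cos ωt can be sanity-checked numerically by truncating the defining integral ∫_0^∞ e^(-st) cos(ωt) dt. A sketch using only the standard library; the values s = 2 and ω = 3 are arbitrary choices, not part of the question:

```python
import math

# Approximate L(cos wt) = ∫_0^∞ e^(-st) cos(wt) dt with the composite
# trapezoidal rule on a truncated interval [0, T].
s, w = 2.0, 3.0
T, n = 40.0, 400_000          # truncation point and number of steps
h = T / n

def integrand(t):
    return math.exp(-s * t) * math.cos(w * t)

total = 0.5 * (integrand(0.0) + integrand(T))
for k in range(1, n):
    total += integrand(k * h)
approx = total * h

exact = s / (s**2 + w**2)     # claimed closed form, = 2/13 here
print(approx, exact)          # the two values agree to several decimal places
```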

Indicate clearly the region of convergence of the transform, i.e. the range of values of s for which your result is valid.

Hint: Write cos ωt = (e^(jωt) + e^(-jωt))/2 and apply the definition of the transform.

(c) Consider the function f(t) = t sin ωt (t ≥ 0), f(t) = 0 (t < 0). Show by direct differentiation that

f''(t) = 2ω cos ωt - ω^2 f(t)

By using the properties of Laplace transforms of derivatives, show that

L(t sin ωt) = 2ωs / (s^2 + ω^2)^2

In your derivation you will need to use the Laplace transform of cos ωt that you obtained in part (b).

Question 6

(a) On the linear space R[0, 2π] (real-valued functions defined on the interval [0, 2π]) define the inner product of two functions f and g as:

⟨f, g⟩ = ∫_0^{2π} f(x) g(x) dx

A set S of functions in R[0, 2π] is said to be orthonormal if: (i) ⟨f, g⟩ = 0 for any two functions f and g in S such that f ≠ g, and (ii) ⟨f, f⟩ = 1 for every function f in S.

Prove that the set of functions

S = { 1/√(2π), (cos x)/√π, (sin x)/√π }

is orthonormal. [8 marks]

(b) Consider the periodic function f(t) with period T = 2π, defined as f(t) = t^2 in the interval -π < t ≤ π. The Fourier series expansion of f(t) is of the form:

f(t) = a_0/2 + Σ_{n=1}^∞ a_n cos nt + Σ_{n=1}^∞ b_n sin nt

where the a_n's and b_n's are unspecified coefficients.

Show that f(t) is an even function and, as a result, b_n = 0 for all n > 0 in the Fourier series expansion of f(t). [2 marks]

Calculate the a_n's (n ≥ 0) in closed form and hence show that:

f(t) = π^2/3 + Σ_{n=1}^∞ 4(-1)^n cos(nt) / n^2

Hint: You need to integrate by parts (twice). [8 marks]
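The closed-form Fourier series above can be checked numerically by comparing a truncated partial sum against t^2 at a sample point. A sketch; the choices t = 1 and N = 20000 terms are arbitrary:

```python
import math

# Partial sum of f(t) = pi^2/3 + sum_{n>=1} 4(-1)^n cos(nt)/n^2,
# compared with the target value f(t) = t^2.
t, N = 1.0, 20_000
partial = math.pi**2 / 3
for n in range(1, N + 1):
    partial += 4 * (-1)**n * math.cos(n * t) / n**2

print(partial, t**2)  # the partial sum is close to 1.0
```

The coefficients decay like 1/n^2, so the truncation error of the partial sum is bounded by roughly 4/N.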

Show that:

Σ_{n=1}^∞ (-1)^(n+1) / n^2 = 1/1^2 - 1/2^2 + 1/3^2 - ... = π^2/12    [2 marks]

Hint: Set t = 0 in your Fourier series expansion.

Question 7

(a) Define the following terms of linear algebra:

- A subspace of a vector space V.
- Direct sum of two subspaces.
- Linear independence of a list of vectors.
- Linear span of a list of vectors.
- Basis of a vector space V.
- Range and kernel of a linear transformation.

Give simple examples to illustrate your definitions.

(b) Show that:

(i) The intersection of two subspaces of a vector space V is a subspace of V.
(ii) A list of vectors containing two identical vectors is linearly dependent.

(c) Let S_{2×2} denote the set of 2 × 2 symmetric matrices with real entries. (A matrix A is called symmetric if A = A^T, where A^T is the transpose of A.) For a fixed 2 × 2 symmetric matrix A, define the transformation Π_A : S_{2×2} → S_{2×2} which maps 2 × 2 symmetric matrices X to 2 × 2 symmetric matrices Y according to the rule Y = A X A^T. Show that Π_A is a linear transformation. If

A = ( 1  1 )
    ( 1  1 )

find the range and kernel of Π_A, and hence verify the rank-nullity theorem. [8 marks]

Question 8

(a) We wish to perform the following three elementary operations on the rows of a 3 × 3 matrix A:

- Multiply the first row by 2.
- Interchange the first and third rows.
- Add twice the second row to the third row.

Write down the three elementary matrices by which A must be pre-multiplied to perform each operation. If the three operations must be performed in sequence, find the overall transformation matrix and its inverse.

(b) The matrix inversion lemma states that for four matrices A, B, C and D of compatible dimensions the following identity holds, provided the indicated inverses exist:

(A - B D^(-1) C)^(-1) = A^(-1) + A^(-1) B (D - C A^(-1) B)^(-1) C A^(-1)

By multiplying the matrix on the right-hand side of the above equation by A - B D^(-1) C (either from the left or the right), show that this identity is valid. What are the computational advantages of using this identity when A = I_n, B is a column vector and C is a row vector (and hence D is a scalar)?

(c) Give necessary and sufficient conditions for the linear system of equations Ax = b, where A ∈ R^(m×n) and x is the vector of unknowns, to have:

(i) at least one solution, and
(ii) exactly one solution. [4 marks]

(d) Find all solutions of the system of equations:

( 1  2  1 ) ( x )   ( 1 )
( 1  3  2 ) ( y ) = ( 4 )
( 0  1  1 ) ( z )   ( 3 )

Explain clearly why your result is consistent with the conditions you gave in part (c) above.
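In the special case of part (b) with A = I_n, B a column vector, C a row vector and D a scalar, the lemma reduces to a rank-one update (no n × n inverse is needed, only a scalar division). A small numeric sketch in plain Python, with arbitrary example vectors, verifying that the claimed inverse really is an inverse:

```python
# Check (I - b c^T / d)^(-1) = I + b c^T / (d - c^T b) for a 2x2 example,
# by confirming that the product of the two matrices is the identity.
b = [1.0, 2.0]        # column vector B (arbitrary example values)
c = [3.0, -1.0]       # row vector C (arbitrary example values)
d = 5.0               # scalar D (arbitrary, chosen with d != c^T b)

n = 2
I = [[float(i == j) for j in range(n)] for i in range(n)]

# M = A - B D^(-1) C with A = I
M = [[I[i][j] - b[i] * c[j] / d for j in range(n)] for i in range(n)]
s = sum(ci * bi for ci, bi in zip(c, b))   # C A^(-1) B reduces to the scalar c^T b
# Claimed inverse from the lemma
Minv = [[I[i][j] + b[i] * c[j] / (d - s) for j in range(n)] for i in range(n)]

# Product M * Minv should be (numerically) the identity matrix
P = [[sum(M[i][k] * Minv[k][j] for k in range(n)) for j in range(n)]
     for i in range(n)]
print(P)  # approximately [[1, 0], [0, 1]]
```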

Table of Laplace Transforms

f(t)               F(s)                  f(t)                        F(s)
δ(t)               1                     cos ωt                      s/(s^2 + ω^2)
1                  1/s                   sin ωt                      ω/(s^2 + ω^2)
t                  1/s^2                 cosh at                     s/(s^2 - a^2)
t^2                2/s^3                 sinh at                     a/(s^2 - a^2)
t^n                n!/s^(n+1)            e^(at) cos ωt               (s - a)/((s - a)^2 + ω^2)
e^(at)             1/(s - a)             e^(at) sin ωt               ω/((s - a)^2 + ω^2)
t e^(-at)          1/(s + a)^2           t^(n-1) e^(-at)/(n-1)!      1/(s + a)^n

External Examiners: Prof. P.M. Taylor, Prof. M. Cripps
Internal Examiner: Dr G. Halikias