COT4501 Spring 2012 Homework III

This assignment has six problems and they are equally weighted. The assignment is due in class on Tuesday, February 21, 2012. There are four regular problems and two computer problems (using MATLAB). For the computer problems, turn in your results (e.g., graphs, plots, simple analysis and so on) and also a printout of your (MATLAB) code.

Problem 1

1. Show that if the vector v ≠ 0, then the matrix

   H = I − 2vv^T/(v^T v)

is orthogonal and symmetric.

Solution: H is symmetric:

   H^T = I − 2(vv^T)^T/(v^T v) = I − 2vv^T/(v^T v) = H.

We have used the formula that for two matrices A, B, (AB)^T = B^T A^T. H is also orthogonal because, using symmetry,

   H^T H = H^2 = I − 4vv^T/(v^T v) + 4v(v^T v)v^T/(v^T v)^2 = I.

2. Let a be any nonzero vector. If v = a − α e_1, where α = ±‖a‖_2, show that Ha = α e_1.

Solution: By direct computation: H = I − 2vv^T/(v^T v), and

   v^T v = (a − α e_1)^T (a − α e_1) = a^T a − 2α a^T e_1 + α^2 e_1^T e_1 = 2α^2 − 2α a^T e_1,

since a^T a = ‖a‖_2^2 = α^2, and

   vv^T a = (a − α e_1)(a − α e_1)^T a = (a − α e_1)(α^2 − α a^T e_1).

Using the two results above, we have

   Ha = a − 2vv^T a/(v^T v) = a − 2(a − α e_1)(α^2 − α a^T e_1)/(2α^2 − 2α a^T e_1) = a − (a − α e_1) = α e_1.

Problem 2

Consider the vector a as an n × 1 matrix.

1. Write out its QR factorization, showing the matrices Q and R explicitly.

Solution: Q is simply the vector (one-column matrix) a/‖a‖_2, and R = ‖a‖_2.
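The two identities in Problem 1 (H symmetric and orthogonal, and Ha = αe_1 when v = a − αe_1) are easy to sanity-check numerically. Below is a minimal NumPy sketch; Python rather than the assignment's MATLAB, so it stands alone, and the test vector a is an arbitrary example, not part of the assignment:

```python
import numpy as np

def householder(v):
    """Householder reflector H = I - 2 v v^T / (v^T v)."""
    return np.eye(len(v)) - 2.0 * np.outer(v, v) / np.dot(v, v)

a = np.array([3.0, 4.0, 12.0])          # arbitrary example vector, ||a||_2 = 13
alpha = -np.linalg.norm(a)              # alpha = ±||a||_2 (minus sign avoids cancellation)
v = a - alpha * np.array([1.0, 0.0, 0.0])
H = householder(v)

print(np.allclose(H, H.T))              # True: H is symmetric
print(np.allclose(H @ H.T, np.eye(3)))  # True: H is orthogonal
print(H @ a)                            # ≈ [alpha, 0, 0] = [-13, 0, 0]
```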

2. What is the solution to the linear least squares problem ax ≅ b, where b is a given n-vector?

Solution: Let u = a/‖a‖_2, the Q from Part 1. The least squares problem minimizes the cost function ‖ax − b‖_2 = ‖QRx − b‖_2, which is minimized exactly when ‖Rx − Q^T b‖_2 is. The solution is simply given as (using Q and R from Part 1)

   x̂ = Q^T b/R = u^T b/‖a‖_2 = a^T b/(a^T a).

3. How do you interpret the above result geometrically?

Solution: We compute the solution x̂ (which is one single number) as the ratio between the projection of the vector b in the a-direction, which is the vector

   ((a^T b)/(a^T a)) a,

and the vector a. Notice that these two vectors are indeed parallel (both multiples of a) and their ratio is the solution x̂.

Problem 3

Consider the following matrix A:

   A = [ 1  1 ]
       [ ε  0 ]
       [ 0  ε ]

where ε is a positive number smaller than √ε_mach in a given floating-point system. In class, we have shown that the matrix A^T A is singular in floating-point arithmetic. For this problem, show that if A = QR is the reduced QR factorization for this matrix A, then R is not singular, even in floating-point arithmetic.

Solution: Using floating-point arithmetic, the first column a_1 of A has magnitude √(1 + ε^2) = 1. Therefore, the vector v_1 for the first Householder transformation is given by v_1 = a_1 + e_1 = [2, ε, 0]^T. Using this H_{v_1}, we have

   H_{v_1} a_1 = a_1 − 2v_1(v_1^T a_1)/(v_1^T v_1) = [−1, 0, 0]^T

(since v_1^T a_1 = 2 + ε^2 = 2 and v_1^T v_1 = 4 + ε^2 = 4), and similarly,

   H_{v_1} a_2 = a_2 − 2v_1(v_1^T a_2)/(v_1^T v_1) = [−1, −ε, ε]^T

(since v_1^T a_2 = 2). Therefore, after the first Householder transformation,

   H_{v_1} A = [ −1  −1 ]
               [  0  −ε ]
               [  0   ε ]

Now for the second Householder transformation, acting on the subvector [−ε, ε]^T of magnitude ε√2, we take

   v_2 = [−ε, ε]^T − ε√2 e_1 = [−ε(1 + √2), ε]^T.

Now we have

   H_{v_2} [−ε, ε]^T = [−ε, ε]^T − 2v_2(v_2^T [−ε, ε]^T)/(v_2^T v_2) = [ε√2, 0]^T,

since 2(v_2^T [−ε, ε]^T)/(v_2^T v_2) = 2ε^2(2 + √2)/(ε^2(4 + 2√2)) = 1. Therefore, after the second Householder transformation, we have

   H_{v_2} H_{v_1} A = [ −1  −1  ]
                       [  0  ε√2 ]
                       [  0   0  ]

and the R matrix is the non-singular matrix

   R = [ −1  −1  ]
       [  0  ε√2 ].

Since ε > 0, both diagonal entries −1 and ε√2 are nonzero, so R is not singular, even in floating-point arithmetic.

Problem 4

1. What are the eigenvalues and corresponding eigenvectors of the following matrix?

   [ 1  2  4 ]
   [ 0  2  1 ]
   [ 0  0  3 ]

Solution: Because the matrix is upper-triangular, the three eigenvalues are 1, 2 and 3 (the diagonal elements), with corresponding eigenvectors [1, 0, 0]^T, [2, 1, 0]^T and [3, 1, 1]^T.

2. What are the eigenvalues of the Householder transformation H = I − 2vv^T/(v^T v), where v is any nonzero vector?

Solution: We know that for any vector w orthogonal to v (v^T w = 0), we have Hw = w, and Hv = −v. There are n − 1 (we assume that v is an n-dimensional vector) linearly independent vectors that are orthogonal to v, and these are linearly independent eigenvectors of H with eigenvalue 1. v then furnishes the remaining eigenvector, with eigenvalue −1. Therefore, H has two eigenvalues, 1 and −1. Since we know that H is orthogonal from Problem 1, its eigenvalues (as complex numbers) must have magnitude 1.
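The point of Problem 3 can be reproduced in double precision. The following NumPy sketch (Python rather than MATLAB, so it is an independent check) uses ε = 1e-8, which is just below √ε_mach ≈ 1.49e-8 for IEEE doubles, so fl(1 + ε²) = 1 and the normal-equations matrix collapses to rank 1, while Householder QR still delivers a nonsingular R:

```python
import numpy as np

eps = 1e-8                        # 0 < eps < sqrt(eps_mach) ≈ 1.49e-8 for IEEE double
A = np.array([[1.0, 1.0],
              [eps, 0.0],
              [0.0, eps]])

AtA = A.T @ A                     # fl(1 + eps^2) = 1, so AtA = [[1, 1], [1, 1]] exactly
print(np.linalg.matrix_rank(AtA)) # 1: the normal-equations matrix is singular
Q, R = np.linalg.qr(A)            # reduced QR (Householder-based)
print(R[1, 1])                    # ≈ ±eps*sqrt(2), nonzero: R is nonsingular
```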

Computer Problem 1

1. Solve the following least squares problem using any method you like:

   [ 0.16  0.10 ]            [ 0.26 ]
   [ 0.17  0.11 ] [x1; x2] ≅ [ 0.28 ]
   [ 2.02  1.29 ]            [ 3.31 ]

2. Now solve the same least squares problem again, but this time use the slightly perturbed right-hand side

   b = [ 0.27 ]
       [ 0.25 ]
       [ 3.33 ].

3. Compare your results from parts 1 and 2. Can you explain this difference?

Solution:

function cp3_4 % nearly rank-deficient least squares problem
A = [0.16 0.10; 0.17 0.11; 2.02 1.29];
disp('(a)'); b_a = [0.26; 0.28; 3.31]; x_a = A\b_a
disp('(b)'); b_b = [0.27; 0.25; 3.33]; x_b = A\b_b
disp('(c)'); delx_over_x = norm(x_b-x_a)/norm(x_a)
condA = cond(A)
cos_theta = norm(A*x_a)/norm(b_a);
delb_over_b = norm(b_b-b_a)/norm(b_a)
bound = condA/cos_theta*delb_over_b % ||dx||/||x|| <= cond(A)*(1/cos(theta))*||db||/||b||

Computer Problem 2

A planet follows an elliptical orbit, which can be represented in a Cartesian (x, y) coordinate system by the equation

   a y^2 + b xy + c x + d y + e = x^2.

Use a library routine, or one of your own design, for linear least squares to determine the orbital parameters a, b, c, d, e given the following observations of the planet's position:

   x: 1.02  0.95  0.87  0.77  0.67  0.56  0.44  0.30  0.16  0.01
   y: 0.39  0.32  0.27  0.22  0.18  0.15  0.13  0.12  0.13  0.15

In addition to printing the values for the orbital parameters, plot the resulting orbit and the given data points in the (x, y) plane.
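The nearly rank-deficient system of Computer Problem 1 can be cross-checked with a short NumPy equivalent of the MATLAB script; the approximate values in the comments are what these inputs produce in double precision:

```python
import numpy as np

A = np.array([[0.16, 0.10],
              [0.17, 0.11],
              [2.02, 1.29]])
b_a = np.array([0.26, 0.28, 3.31])
b_b = np.array([0.27, 0.25, 3.33])   # slightly perturbed right-hand side

x_a = np.linalg.lstsq(A, b_a, rcond=None)[0]
x_b = np.linalg.lstsq(A, b_b, rcond=None)[0]

print(x_a)                           # ≈ [1, 1]: b_a lies exactly in the range of A
print(x_b)                           # ≈ [7.01, -8.40]: a drastically different solution
rel_change = np.linalg.norm(x_b - x_a) / np.linalg.norm(x_a)
print(rel_change)                    # ≈ 7.9, for only a ~1% relative change in b
print(np.linalg.cond(A))             # ≈ 1.1e3: A is ill-conditioned
```

The near-parallel columns of A are what let a tiny perturbation in b swing the solution this far.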

This least squares problem is nearly rank-deficient. To see what effect this has on the solution, use the following perturbed data

   x: 1.023  0.949  0.871  0.770  0.666  0.559  0.438  0.303  0.159  0.009
   y: 0.388  0.319  0.270  0.221  0.177  0.150  0.129  0.117  0.134  0.151

and solve the least squares problem with the perturbed data. Compare the new values for the parameters with those previously computed. What effect does this difference have on the plot of the orbit? Can you explain this behavior?

function cp3_5 % least squares fit to planetary orbit data
x = [1.02; .95; .87; .77; .67; .56; .44; .30; .16; .01];
y = [.39; .32; .27; .22; .18; .15; .13; .12; .13; .15];
A = [y.^2 x.*y x y ones(size(x))]; b = x.^2; disp('(a)');
figure(1); hold on;
title('Computer Problem 3.5(a) - Elliptical Orbit');
alpha = A\b

[xs, ys] = meshgrid(-1:.01:2, -1:.01:2);
contour(-1:.01:2, -1:.01:2, ...
    alpha(1)*ys.^2+alpha(2)*xs.*ys+...
    alpha(3)*xs+alpha(4)*ys+...
    alpha(5)-xs.^2, [0, 0], 'k-');
plot(x, y, 'bx'); disp('(b)');

figure(2); hold on;
title('Computer Problem 3.5(b) - Perturbed Orbit');
%x = x+(rand(size(x))*.01-.005);
%y = y+(rand(size(y))*.01-.005);
x = [1.023 .949 .871 .770 .666 .559 .438 .303 .159 ...
     .009]';
y = [.388 .319 .270 .221 .177 .150 .129 .117 .134 ...
     .151]';

A = [y.^2 x.*y x y ones(size(x))];
b = x.^2; alpha = A\b
[xs, ys] = meshgrid(-1:.01:2, -1:.01:2);
contour(-1:.01:2, -1:.01:2, ...
    alpha(1)*ys.^2+alpha(2)*xs.*ys+...
    alpha(3)*xs+alpha(4)*ys+alpha(5)-xs.^2, ...
    [0, 0], 'r-');

contour(-1:.01:2, -1:.01:2, ...
    alpha(1)*ys.^2+alpha(2)*xs.*ys+...
    alpha(3)*xs+alpha(4)*ys+alpha(5)-xs.^2, ...
    [0, 0], 'k-');
plot(x, y, 'bx');
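For reference, the unperturbed orbit fit can be reproduced in a few lines of NumPy (Python here just for a self-contained check; the columns follow the equation a y² + b xy + c x + d y + e = x² above), which also exposes the near rank-deficiency of the design matrix directly:

```python
import numpy as np

x = np.array([1.02, .95, .87, .77, .67, .56, .44, .30, .16, .01])
y = np.array([.39, .32, .27, .22, .18, .15, .13, .12, .13, .15])

# Design matrix for  a*y^2 + b*x*y + c*x + d*y + e = x^2
A = np.column_stack([y**2, x * y, x, y, np.ones_like(x)])
b = x**2

alpha, _, rank, sv = np.linalg.lstsq(A, b, rcond=None)
print(alpha)                          # fitted orbital parameters [a, b, c, d, e]
print(np.linalg.norm(A @ alpha - b))  # small residual: the data nearly fit one conic
print(sv[0] / sv[-1])                 # large condition number: parameters are ill-determined
```

The large singular-value ratio is why the small data perturbation in part (b) changes the fitted parameters, and the plotted orbit, so dramatically.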