# 1 Quiz on Linear Equations with Answers


(a) State the defining equations for linear transformations.

(i) L(u + v) = L(u) + L(v), for all vectors u and v.

(ii) L(Av) = AL(v), for all vectors v and numbers A.

Or combine these two equations as: L(Au + Bv) = AL(u) + BL(v), for all vectors u and v and numbers A and B.

(b) State and prove the theorem on homogeneous linear equations.

This is Theorem II.3.1. There are two statements of this theorem in the text.

Proof by calculation: Given L(v) = 0 and L(w) = 0, and u = Av + Bw. Then L(u) = L(Av + Bw) = AL(v) + BL(w) = A·0 + B·0 = 0. Thus L(u) = 0, for all A and B.

(c) State the algorithm implied by the theorem on homogeneous linear equations.

From the text, immediately after Theorem II.3.1:

Step 0. Observe/check that the equation is indeed a homogeneous linear equation.
Step 1. Find the simplest solutions.
Step 2. Take all their linear combinations.

(d) State the theorem on non-homogeneous linear equations. This is Theorem II.4.3.

(e) Given ẍ - x = 3.

(i) Using the Proposition on Recognizing Linear Transformations, prove that the relevant transformation is a linear transformation. State which parts of the Proposition on Recognizing Linear Transformations you used.

Set L(x) = ẍ - x, so the equation reads L(x) = 3. Review Example II.2.5; use either scheme.

(ii) Solve the equation. Indicate the line where the theorem on homogeneous linear equations is used. Indicate the line where the theorem on non-homogeneous linear equations is used.

Similar to Example II.4.8 and Example II.3.7. For a simple solution of the non-homogeneous linear equation, try x_c = C, where C is an unknown, to-be-found constant. Then L(C) = -C = 3, so x_c = C = -3.

The associated homogeneous equation is L(u) = ü - u = 0. Try u = e^(rt), where r is an unknown, to-be-found constant. Plug in and solve for r = ±1. Hence u₁ = e^t and u₂ = e^(-t) are two solutions of the associated homogeneous equation.

Using the theorem on homogeneous linear equations: u = Au₁ + Bu₂ = Ae^t + Be^(-t), for all A and B.

Using the theorem on non-homogeneous linear equations: x = -3 + Ae^t + Be^(-t), for all A and B.
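As a sanity check (not part of the quiz), and assuming the equation is ẍ - x = 3, consistent with the roots r = ±1 found for the associated homogeneous equation, the general solution x(t) = -3 + Ae^t + Be^(-t) can be spot-checked numerically; the constants A and B below are arbitrary sample values:

```python
import math

def x(t, A=2.0, B=-1.5):
    # general solution found above, with arbitrary sample constants A, B
    return -3 + A * math.exp(t) + B * math.exp(-t)

def second_derivative(f, t, h=1e-4):
    # central finite-difference approximation of f''(t)
    return (f(t + h) - 2 * f(t) + f(t - h)) / h**2

for t in [0.0, 0.5, 1.3]:
    lhs = second_derivative(x, t) - x(t)  # L(x) = x'' - x
    print(round(lhs, 4))                  # 3.0 at every t
```

Any other choice of A and B gives the same residual, which is exactly the content of the theorem on non-homogeneous linear equations.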

Exercise II.6.4. Is zero an eigenvalue? Why or why not?

Answer: For λ = 0, the main eigenvalue equation simplifies to L(X(x)) = X″(x) = 0. Integrating this equation twice yields: X(x) = Ax + B, for all A and B. Plugging in the boundary conditions, X(0) = 0 = X(π), yields: 0 = X(0) = B; this simplifies the function to X(x) = Ax. Then 0 = X(π) = Aπ, so A = 0, and hence X(x) = 0. But eigenfunctions cannot be zero, hence X(x) = 0 is not an eigenfunction. Without an eigenfunction, λ = 0 cannot be an eigenvalue.

Prove that there are no negative eigenvalues.

For λ < 0, we note: -λ > 0. The main eigenvalue equation is a homogeneous linear equation. The standard simple guess is X(x) = e^(rx). Plugging this in and then solving for r yields: r² = -λ, and hence r = ±√(-λ). Thus X₁ = e^(√(-λ)x) and X₂ = e^(-√(-λ)x) are two solutions. Then the Theorem on Homogeneous Linear Equations implies that X(x) = Ae^(√(-λ)x) + Be^(-√(-λ)x), for all A and B, are many solutions. Plugging in the boundary conditions, X(0) = 0 = X(π), yields: 0 = X(0) = A + B, so A = -B. Then 0 = X(π) = Ae^(√(-λ)π) + Be^(-√(-λ)π) = B(-e^(√(-λ)π) + e^(-√(-λ)π)). Since e^t is a strictly monotone increasing function, e^t ≠ e^(-t), except when t = 0. Hence e^(√(-λ)π) ≠ e^(-√(-λ)π), for all λ < 0. Hence B = 0. Thus A = 0 = B, and hence X(x) = 0. But eigenfunctions cannot be zero, hence X(x) = 0 is not an eigenfunction. Without an eigenfunction, no negative λ can be an eigenvalue.
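For contrast with the two negative results above, and assuming the main eigenvalue equation is X″ = -λX on [0, π] with X(0) = 0 = X(π) (as in the argument above), a small numeric sketch confirms that the positive values λ = n² do admit the eigenfunctions X(x) = sin(nx):

```python
import math

def second_derivative(f, x, h=1e-5):
    # central finite-difference approximation of f''(x)
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

for n in [1, 2, 3]:
    X = lambda x, n=n: math.sin(n * x)
    # boundary conditions: X(0) = 0 = X(pi)
    ok_boundary = abs(X(0)) < 1e-12 and abs(X(math.pi)) < 1e-12
    # eigenvalue equation: X'' + n^2 X = 0 on the interior
    ok_equation = abs(second_derivative(X, 1.0) + n**2 * X(1.0)) < 1e-3
    print(n, ok_boundary, ok_equation)  # n True True for n = 1, 2, 3
```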

## Solutions to some Exercises

Exercise III. Shortcuts: use a big zero instead of writing a block of many zeros. Observe that C = -B; then C² = B², C³ = -B³, etc. B⁶ = O = C⁶.

Exercise III. An answer is supplied by Proposition III.5.9.

Exercise III. Note: the last matrix is a 1×1 matrix, not a 2×1 matrix. One tip-off is the + sign at the end of the top line. The other tip-off is that multiplying the row (x, y, z) by the column (x, y, z)ᵀ produces a 1×1 matrix.

Exercise III. An answer is supplied in Ch. 7, Sec. 2, by Observation 2.3 (turn page) and by the first 4 lines of the Corollary (turn page).

Exercise III.3.7(c). By part (b), N = PMP⁻¹ is the product of invertible matrices, hence it is also invertible (General Product Rule for Inverses). Then, using the formula for the inverse of a product of invertible matrices, (ABC)⁻¹ = C⁻¹B⁻¹A⁻¹:

N = PMP⁻¹ implies N⁻¹ = (P⁻¹)⁻¹M⁻¹P⁻¹ = PM⁻¹P⁻¹.

Similarly, given M = P⁻¹NP, the formula for the inverse of a product of invertible matrices provides M⁻¹ = P⁻¹N⁻¹P.
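The two identities used here are easy to spot-check numerically; the matrices M and P below are arbitrary invertible examples, not the matrices of the exercise:

```python
import numpy as np

inv = np.linalg.inv
M = np.array([[2.0, 1.0], [1.0, 1.0]])
P = np.array([[1.0, 2.0], [0.0, 1.0]])
N = P @ M @ inv(P)

# General product rule for inverses: (ABC)^-1 = C^-1 B^-1 A^-1
A, B, C = P, M, inv(P)
print(np.allclose(inv(A @ B @ C), inv(C) @ inv(B) @ inv(A)))  # True

# Hence N = P M P^-1 implies N^-1 = P M^-1 P^-1
print(np.allclose(inv(N), P @ inv(M) @ inv(P)))  # True
```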

Exercise III. The eigenvectors associated with eigenvalue one, for the given matrix, are the multiples x·(1, 1)ᵀ, for all x ≠ 0. The useful steady-state vector, for a stochastic matrix, is the one that is a probability vector; here that is the vector (.5, .5)ᵀ.

Exercise III. The purpose of this exercise is to find which matrices commute with the diagonal matrix D = diag(1, 2, 3). In math, the word "which" means "all those which." The set of matrices which commute with D = diag(1, 2, 3) is the set of all diagonal matrices. That is, if MD = DM, then M is a 3×3 diagonal matrix.

(iii) A general rule is that, if D = diag(a, b, c), then Dⁿ = diag(aⁿ, bⁿ, cⁿ), for all n in Z⁺. For n = 0, one uses the definition: M⁰ = Iₙ, for all n×n matrices M. This formula is also valid for n = -1.

Exercise III. The matrices which commute with D₂ = diag(2, 2, 3) are the matrices

[ a  b  0 ]
[ c  d  0 ]
[ 0  0  e ],  for all a, b, c, d, e.

Exercise III.3.7(c). Given invertible matrices M and P, such that M = P⁻¹NP, prove that N is an invertible matrix. Multiply M = P⁻¹NP on the left by P and on the right by P⁻¹; this yields: PMP⁻¹ = PP⁻¹NPP⁻¹ = N. Thus N is the product of three invertible matrices; hence it is invertible.

Exercise III.4.9. The stochastic matrix is

M = [ .5  .7 ]
    [ .5  .3 ].

Yesterday's probability vector was (0, 1)ᵀ. Tomorrow's probability vector is M²(0, 1)ᵀ = (.56, .44)ᵀ.

Exercise III. After doing the matrix multiplication, both off-diagonal terms are: b(sin²θ - cos²θ) + (c - a) sinθ cosθ = 0. Either (i) use the double-angle formulas for sin 2θ and cos 2θ, and then solve for tan 2θ, or (ii) divide by sinθ cosθ, convert into a quadratic equation in tanθ (set x = tanθ), and solve for x. When a = c, the off-diagonal terms simplify to b(sin²θ - cos²θ) = 0, which implies that θ = π/4.
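The commuting rule and the power rule for diagonal matrices can be illustrated with a short numeric sketch; D is the diagonal matrix from the exercise, while M and M2 are arbitrary test matrices chosen here:

```python
import numpy as np

D = np.diag([1.0, 2.0, 3.0])
M = np.diag([5.0, -1.0, 7.0])  # any diagonal matrix commutes with D

print(np.allclose(M @ D, D @ M))  # True

# Power rule D^n = diag(a^n, b^n, c^n), checked for n = 3 and n = -1
print(np.allclose(np.linalg.matrix_power(D, 3), np.diag([1.0, 8.0, 27.0])))   # True
print(np.allclose(np.linalg.matrix_power(D, -1), np.diag([1.0, 0.5, 1/3])))   # True

# A non-diagonal matrix generally fails to commute with D
M2 = np.array([[1.0, 1.0, 0.0],
               [0.0, 1.0, 0.0],
               [0.0, 0.0, 1.0]])
print(np.allclose(M2 @ D, D @ M2))  # False
```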

Exercise III.4.24 and 17. See the proof of Proposition VI.3.2.

Rule III.6.4(i). To prove: (A⁻¹)ᵀ = (Aᵀ)⁻¹.

Remark. We will use the defining equation for inverse matrices, which says that two matrices, M and N, are inverse matrices when MN = I = NM. Verbally: two matrices are inverse matrices when their product (both ways) is the identity matrix. We will also use the Product Rule for transposes.

Remark. There are two basic ways to prove that two matrices are inverse matrices; the following two proofs are good examples.

Proof #1. We start with the given information that the matrix A is invertible; we translate this fact into equations, namely the defining equations: AA⁻¹ = I = A⁻¹A. We now manipulate these equations, in order to obtain the defining equations for (A⁻¹)ᵀ and Aᵀ to be inverse matrices:

I = Iᵀ = (A⁻¹A)ᵀ = Aᵀ(A⁻¹)ᵀ.

Similarly:

I = Iᵀ = (AA⁻¹)ᵀ = (A⁻¹)ᵀAᵀ.

Thus the product (both ways) of (A⁻¹)ᵀ and Aᵀ is the identity matrix; hence (A⁻¹)ᵀ = (Aᵀ)⁻¹.

Proof #2. The equation (A⁻¹)ᵀ = (Aᵀ)⁻¹ means that (A⁻¹)ᵀ and Aᵀ are inverse matrices. To prove this, we check that their product (both ways) is the identity matrix:

(A⁻¹)ᵀAᵀ = (AA⁻¹)ᵀ = Iᵀ = I
Aᵀ(A⁻¹)ᵀ = (A⁻¹A)ᵀ = Iᵀ = I

hence (A⁻¹)ᵀ = (Aᵀ)⁻¹.

Exercise III.6.14(b), and a key part of the proof of Theorem III.7.5. Given P = M(MᵀM)⁻¹Mᵀ, with MᵀM an invertible matrix, but M not an invertible matrix. Prove that Pᵀ = P.

Proof. Pᵀ = [M(MᵀM)⁻¹Mᵀ]ᵀ. What is this? This is the transpose of a product of matrices. So we use the Product Rule (III.6.4(f)) for transposes. Thus:

Pᵀ = [Mᵀ]ᵀ[(MᵀM)⁻¹]ᵀMᵀ = M[(MᵀM)ᵀ]⁻¹Mᵀ, and (MᵀM)ᵀ = Mᵀ[Mᵀ]ᵀ = MᵀM,

so Pᵀ = M(MᵀM)⁻¹Mᵀ = P.
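Both rules on this page can be spot-checked numerically; A below is an arbitrary invertible matrix, and M is an arbitrary 3×2 matrix (so M is not invertible, but MᵀM is):

```python
import numpy as np

inv = np.linalg.inv

# Rule III.6.4(i): (A^-1)^T = (A^T)^-1
A = np.array([[2.0, 1.0], [3.0, 4.0]])
print(np.allclose(inv(A).T, inv(A.T)))  # True

# Exercise III.6.14(b): P = M (M^T M)^-1 M^T is symmetric
M = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 2.0]])  # 3x2: not square
P = M @ inv(M.T @ M) @ M.T
print(np.allclose(P.T, P))  # True
```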

We have also used the rules [Mᵀ]ᵀ = M and (A⁻¹)ᵀ = (Aᵀ)⁻¹.

Exercise III.6A.8. Let S be an invertible symmetric matrix. Check that S⁻¹ is also symmetric.

Proof. (S⁻¹)ᵀ = (Sᵀ)⁻¹ = S⁻¹ (using the inverse rule for transposes). Thus S⁻¹ is also symmetric.

Exercise III.6A.11. Given unknown 7×7 matrices S, D, and P, where P is an invertible matrix and S is a symmetric matrix, such that S = PᵀDP. Let v in R⁷ be a coordinate vector. Let w = Pv. Given z = vᵀSv, show that z = wᵀDw.

Proof. z = vᵀSv = vᵀPᵀDPv = wᵀDw, since vᵀPᵀ = (Pv)ᵀ = wᵀ.
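A numeric sketch of both facts; the matrices below are small arbitrary examples (2×2 rather than the 7×7 of the exercise), chosen so that P is invertible and S = PᵀDP:

```python
import numpy as np

inv = np.linalg.inv

# An invertible symmetric matrix has a symmetric inverse
S0 = np.array([[2.0, 1.0], [1.0, 3.0]])
print(np.allclose(inv(S0), inv(S0).T))  # True

# Change of variables: if S = P^T D P and w = P v, then v^T S v = w^T D w
D = np.diag([1.0, -2.0])
P = np.array([[1.0, 1.0], [0.0, 1.0]])
S = P.T @ D @ P              # symmetric by construction
v = np.array([3.0, -1.0])
w = P @ v
print(np.isclose(v @ S @ v, w @ D @ w))  # True
```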

## Answers to some exercises for Ch. IV

Exercise IV.3A.2. ‖M⁻¹E‖ ≤ ‖M⁻¹‖ · ‖E‖ < 1. Therefore, M + E is an invertible matrix.

Exercise IV.3A.3. ‖M⁻¹E‖ ≤ ‖M⁻¹‖ · ‖E‖, and here ‖M⁻¹‖ · ‖E‖ > 1. Therefore, the lemma provides no info on the invertibility of the matrix M + E. Remember, E is unknown, hence M⁻¹E is unknown and cannot be calculated; only bounds on ‖E‖ are known. Therefore, one needs to use ‖M⁻¹E‖ ≤ ‖M⁻¹‖ · ‖E‖. Using one specific sample E and calculating ‖M⁻¹E‖ directly will produce a wrong answer.

Exercise IV.3B.2. d = .42 < 1. Hence M + E is invertible.

Exercise IV.3B.3. d = .1 < 1. Hence M + E is invertible.

Part (i). According to Corollary IV.3B.2, ‖v‖ ≤ 1/9. For this matrix and vector, the error bound provided by Theorem IV.3B.1 is much lower than the one provided by the corollary.

Part (ii). According to both Theorem IV.3B.1 and Corollary IV.3B.2, ‖v‖ ≤ 1/9. For this matrix and vector, the two error bounds are the same.

Exercise IV.3B.5. (Not superposition)

Part (a). ‖v_a‖ ≤ .056.

Part (b). d = .12 < 1. Hence M + E is invertible. ‖v_b‖ ≤ .668.

Part (c). ‖v_c‖ ≤ .7287.

Part (d). .056 + .668 = .724 < .7287, the bound for v_c. Thus the error bound due to the two errors together is larger than the sum of the two error bounds due to the individual errors. This is to be expected, since the total error due to two individual errors is often larger than the sum. Superposition does not occur for errors or their bounds.
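The criterion behind these exercises, that ‖M⁻¹E‖ ≤ ‖M⁻¹‖ · ‖E‖ < 1 forces M + E to be invertible, can be illustrated numerically; M and E below are arbitrary sample matrices, not the ones from the exercises:

```python
import numpy as np

M = np.array([[4.0, 1.0], [0.0, 2.0]])
E = np.array([[0.1, -0.2], [0.05, 0.1]])

# operator norm (largest singular value)
norm = lambda X: np.linalg.norm(X, 2)

d = norm(np.linalg.inv(M)) * norm(E)  # computable upper bound for ||M^-1 E||
print(norm(np.linalg.inv(M) @ E) <= d + 1e-12)  # True: ||M^-1 E|| <= ||M^-1|| ||E||
print(d < 1)                                    # True: the lemma applies here

# Hence M + E should be invertible: its determinant is nonzero
print(abs(np.linalg.det(M + E)) > 1e-9)  # True
```

Note that only the bound d is usable in the exercises: E itself is unknown there, so ‖M⁻¹E‖ cannot be computed directly as it is in this toy example.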

Exercise IV.3B.6. (Iterative improvement)

Part (b). The improved bound v_6b provides a significant reduction in the bound on the first coordinate, but no reduction in the bound on the second coordinate (in comparison with the results of Exercise IV.3B.5 part (b)).

Part (c). The improved bound v_6c again provides a significant reduction in the bound on the first coordinate, but no reduction in the bound on the second coordinate (in comparison with the results of Exercise IV.3B.5 part (c)). Actually, the bound on the second coordinate is slightly higher; but since both pairs of bounds are valid, we may combine them to obtain a combined bound on v_6c with first coordinate .4858.

### 1 Eigenvalues and Eigenvectors

Math 20 Chapter 5 Eigenvalues and Eigenvectors Eigenvalues and Eigenvectors. Definition: A scalar λ is called an eigenvalue of the n n matrix A is there is a nontrivial solution x of Ax = λx. Such an x

### MATH 304 Linear Algebra Lecture 9: Subspaces of vector spaces (continued). Span. Spanning set.

MATH 304 Linear Algebra Lecture 9: Subspaces of vector spaces (continued). Span. Spanning set. Vector space A vector space is a set V equipped with two operations, addition V V (x,y) x + y V and scalar

### MATH 551 - APPLIED MATRIX THEORY

MATH 55 - APPLIED MATRIX THEORY FINAL TEST: SAMPLE with SOLUTIONS (25 points NAME: PROBLEM (3 points A web of 5 pages is described by a directed graph whose matrix is given by A Do the following ( points

### Similarity and Diagonalization. Similar Matrices

MATH022 Linear Algebra Brief lecture notes 48 Similarity and Diagonalization Similar Matrices Let A and B be n n matrices. We say that A is similar to B if there is an invertible n n matrix P such that

### MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 2. x n. a 11 a 12 a 1n b 1 a 21 a 22 a 2n b 2 a 31 a 32 a 3n b 3. a m1 a m2 a mn b m

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS 1. SYSTEMS OF EQUATIONS AND MATRICES 1.1. Representation of a linear system. The general system of m equations in n unknowns can be written a 11 x 1 + a 12 x 2 +

### [1] Diagonal factorization

8.03 LA.6: Diagonalization and Orthogonal Matrices [ Diagonal factorization [2 Solving systems of first order differential equations [3 Symmetric and Orthonormal Matrices [ Diagonal factorization Recall:

### Presentation 3: Eigenvalues and Eigenvectors of a Matrix

Colleen Kirksey, Beth Van Schoyck, Dennis Bowers MATH 280: Problem Solving November 18, 2011 Presentation 3: Eigenvalues and Eigenvectors of a Matrix Order of Presentation: 1. Definitions of Eigenvalues

### Chapter 17. Orthogonal Matrices and Symmetries of Space

Chapter 17. Orthogonal Matrices and Symmetries of Space Take a random matrix, say 1 3 A = 4 5 6, 7 8 9 and compare the lengths of e 1 and Ae 1. The vector e 1 has length 1, while Ae 1 = (1, 4, 7) has length

### 160 CHAPTER 4. VECTOR SPACES

160 CHAPTER 4. VECTOR SPACES 4. Rank and Nullity In this section, we look at relationships between the row space, column space, null space of a matrix and its transpose. We will derive fundamental results

### MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS Systems of Equations and Matrices Representation of a linear system The general system of m equations in n unknowns can be written a x + a 2 x 2 + + a n x n b a

### B such that AB = I and BA = I. (We say B is an inverse of A.) Definition A square matrix A is invertible (or nonsingular) if matrix

Matrix inverses Recall... Definition A square matrix A is invertible (or nonsingular) if matrix B such that AB = and BA =. (We say B is an inverse of A.) Remark Not all square matrices are invertible.

### Solutions to Linear Algebra Practice Problems

Solutions to Linear Algebra Practice Problems. Find all solutions to the following systems of linear equations. (a) x x + x 5 x x x + x + x 5 (b) x + x + x x + x + x x + x + 8x Answer: (a) We create the

### Au = = = 3u. Aw = = = 2w. so the action of A on u and w is very easy to picture: it simply amounts to a stretching by 3 and 2, respectively.

Chapter 7 Eigenvalues and Eigenvectors In this last chapter of our exploration of Linear Algebra we will revisit eigenvalues and eigenvectors of matrices, concepts that were already introduced in Geometry

### Chapter 6. Orthogonality

6.3 Orthogonal Matrices 1 Chapter 6. Orthogonality 6.3 Orthogonal Matrices Definition 6.4. An n n matrix A is orthogonal if A T A = I. Note. We will see that the columns of an orthogonal matrix must be

### Inner products on R n, and more

Inner products on R n, and more Peyam Ryan Tabrizian Friday, April 12th, 2013 1 Introduction You might be wondering: Are there inner products on R n that are not the usual dot product x y = x 1 y 1 + +

### We know a formula for and some properties of the determinant. Now we see how the determinant can be used.

Cramer s rule, inverse matrix, and volume We know a formula for and some properties of the determinant. Now we see how the determinant can be used. Formula for A We know: a b d b =. c d ad bc c a Can we

### Similar matrices and Jordan form

Similar matrices and Jordan form We ve nearly covered the entire heart of linear algebra once we ve finished singular value decompositions we ll have seen all the most central topics. A T A is positive

### Systems of Linear Equations

Systems of Linear Equations Beifang Chen Systems of linear equations Linear systems A linear equation in variables x, x,, x n is an equation of the form a x + a x + + a n x n = b, where a, a,, a n and

### DETERMINANTS. b 2. x 2

DETERMINANTS 1 Systems of two equations in two unknowns A system of two equations in two unknowns has the form a 11 x 1 + a 12 x 2 = b 1 a 21 x 1 + a 22 x 2 = b 2 This can be written more concisely in

### C 1 x(t) = e ta C = e C n. 2! A2 + t3

Matrix Exponential Fundamental Matrix Solution Objective: Solve dt A x with an n n constant coefficient matrix A x (t) Here the unknown is the vector function x(t) x n (t) General Solution Formula in Matrix

### Cofactor Expansion: Cramer s Rule

Cofactor Expansion: Cramer s Rule MATH 322, Linear Algebra I J. Robert Buchanan Department of Mathematics Spring 2015 Introduction Today we will focus on developing: an efficient method for calculating

### Diagonalisation. Chapter 3. Introduction. Eigenvalues and eigenvectors. Reading. Definitions

Chapter 3 Diagonalisation Eigenvalues and eigenvectors, diagonalisation of a matrix, orthogonal diagonalisation fo symmetric matrices Reading As in the previous chapter, there is no specific essential

### Linear Dependence Tests

Linear Dependence Tests The book omits a few key tests for checking the linear dependence of vectors. These short notes discuss these tests, as well as the reasoning behind them. Our first test checks

### 13 MATH FACTS 101. 2 a = 1. 7. The elements of a vector have a graphical interpretation, which is particularly easy to see in two or three dimensions.

3 MATH FACTS 0 3 MATH FACTS 3. Vectors 3.. Definition We use the overhead arrow to denote a column vector, i.e., a linear segment with a direction. For example, in three-space, we write a vector in terms

### Linear Maps. Isaiah Lankham, Bruno Nachtergaele, Anne Schilling (February 5, 2007)

MAT067 University of California, Davis Winter 2007 Linear Maps Isaiah Lankham, Bruno Nachtergaele, Anne Schilling (February 5, 2007) As we have discussed in the lecture on What is Linear Algebra? one of

### Introduction to Matrix Algebra I

Appendix A Introduction to Matrix Algebra I Today we will begin the course with a discussion of matrix algebra. Why are we studying this? We will use matrix algebra to derive the linear regression model

### Matrix Methods for Linear Systems of Differential Equations

Matrix Methods for Linear Systems of Differential Equations We now present an application of matrix methods to linear systems of differential equations. We shall follow the development given in Chapter

### Basic Terminology for Systems of Equations in a Nutshell. E. L. Lady. 3x 1 7x 2 +4x 3 =0 5x 1 +8x 2 12x 3 =0.

Basic Terminology for Systems of Equations in a Nutshell E L Lady A system of linear equations is something like the following: x 7x +4x =0 5x +8x x = Note that the number of equations is not required

### NOTES ON LINEAR TRANSFORMATIONS

NOTES ON LINEAR TRANSFORMATIONS Definition 1. Let V and W be vector spaces. A function T : V W is a linear transformation from V to W if the following two properties hold. i T v + v = T v + T v for all

### Factorization Theorems

Chapter 7 Factorization Theorems This chapter highlights a few of the many factorization theorems for matrices While some factorization results are relatively direct, others are iterative While some factorization

### 15.062 Data Mining: Algorithms and Applications Matrix Math Review

.6 Data Mining: Algorithms and Applications Matrix Math Review The purpose of this document is to give a brief review of selected linear algebra concepts that will be useful for the course and to develop

### 5.3 The Cross Product in R 3

53 The Cross Product in R 3 Definition 531 Let u = [u 1, u 2, u 3 ] and v = [v 1, v 2, v 3 ] Then the vector given by [u 2 v 3 u 3 v 2, u 3 v 1 u 1 v 3, u 1 v 2 u 2 v 1 ] is called the cross product (or

### Continued Fractions and the Euclidean Algorithm

Continued Fractions and the Euclidean Algorithm Lecture notes prepared for MATH 326, Spring 997 Department of Mathematics and Statistics University at Albany William F Hammond Table of Contents Introduction

### MATH10212 Linear Algebra. Systems of Linear Equations. Definition. An n-dimensional vector is a row or a column of n numbers (or letters): a 1.

MATH10212 Linear Algebra Textbook: D. Poole, Linear Algebra: A Modern Introduction. Thompson, 2006. ISBN 0-534-40596-7. Systems of Linear Equations Definition. An n-dimensional vector is a row or a column

### Matrix Norms. Tom Lyche. September 28, Centre of Mathematics for Applications, Department of Informatics, University of Oslo

Matrix Norms Tom Lyche Centre of Mathematics for Applications, Department of Informatics, University of Oslo September 28, 2009 Matrix Norms We consider matrix norms on (C m,n, C). All results holds for

### Linear Algebra Notes for Marsden and Tromba Vector Calculus

Linear Algebra Notes for Marsden and Tromba Vector Calculus n-dimensional Euclidean Space and Matrices Definition of n space As was learned in Math b, a point in Euclidean three space can be thought of

### 8 Square matrices continued: Determinants

8 Square matrices continued: Determinants 8. Introduction Determinants give us important information about square matrices, and, as we ll soon see, are essential for the computation of eigenvalues. You

### Solution to Homework 2

Solution to Homework 2 Olena Bormashenko September 23, 2011 Section 1.4: 1(a)(b)(i)(k), 4, 5, 14; Section 1.5: 1(a)(b)(c)(d)(e)(n), 2(a)(c), 13, 16, 17, 18, 27 Section 1.4 1. Compute the following, if

### Diagonal, Symmetric and Triangular Matrices

Contents 1 Diagonal, Symmetric Triangular Matrices 2 Diagonal Matrices 2.1 Products, Powers Inverses of Diagonal Matrices 2.1.1 Theorem (Powers of Matrices) 2.2 Multiplying Matrices on the Left Right by

### L1-2. Special Matrix Operations: Permutations, Transpose, Inverse, Augmentation 12 Aug 2014

L1-2. Special Matrix Operations: Permutations, Transpose, Inverse, Augmentation 12 Aug 2014 Unfortunately, no one can be told what the Matrix is. You have to see it for yourself. -- Morpheus Primary concepts:

### MATH 304 Linear Algebra Lecture 18: Rank and nullity of a matrix.

MATH 304 Linear Algebra Lecture 18: Rank and nullity of a matrix. Nullspace Let A = (a ij ) be an m n matrix. Definition. The nullspace of the matrix A, denoted N(A), is the set of all n-dimensional column

### 88 CHAPTER 2. VECTOR FUNCTIONS. . First, we need to compute T (s). a By definition, r (s) T (s) = 1 a sin s a. sin s a, cos s a

88 CHAPTER. VECTOR FUNCTIONS.4 Curvature.4.1 Definitions and Examples The notion of curvature measures how sharply a curve bends. We would expect the curvature to be 0 for a straight line, to be very small

### The Inverse of a Matrix

The Inverse of a Matrix 7.4 Introduction In number arithmetic every number a ( 0) has a reciprocal b written as a or such that a ba = ab =. Some, but not all, square matrices have inverses. If a square

### The basic unit in matrix algebra is a matrix, generally expressed as: a 11 a 12. a 13 A = a 21 a 22 a 23

(copyright by Scott M Lynch, February 2003) Brief Matrix Algebra Review (Soc 504) Matrix algebra is a form of mathematics that allows compact notation for, and mathematical manipulation of, high-dimensional

### Lecture 6. Inverse of Matrix

Lecture 6 Inverse of Matrix Recall that any linear system can be written as a matrix equation In one dimension case, ie, A is 1 1, then can be easily solved as A x b Ax b x b A 1 A b A 1 b provided that

### 2.1 Functions. 2.1 J.A.Beachy 1. from A Study Guide for Beginner s by J.A.Beachy, a supplement to Abstract Algebra by Beachy / Blair

2.1 J.A.Beachy 1 2.1 Functions from A Study Guide for Beginner s by J.A.Beachy, a supplement to Abstract Algebra by Beachy / Blair 21. The Vertical Line Test from calculus says that a curve in the xy-plane

### Solutions to Linear Algebra Practice Problems 1. form (because the leading 1 in the third row is not to the right of the

Solutions to Linear Algebra Practice Problems. Determine which of the following augmented matrices are in row echelon from, row reduced echelon form or neither. Also determine which variables are free

### Name: Section Registered In:

Name: Section Registered In: Math 125 Exam 3 Version 1 April 24, 2006 60 total points possible 1. (5pts) Use Cramer s Rule to solve 3x + 4y = 30 x 2y = 8. Be sure to show enough detail that shows you are

### Solving Linear Diophantine Matrix Equations Using the Smith Normal Form (More or Less)

Solving Linear Diophantine Matrix Equations Using the Smith Normal Form (More or Less) Raymond N. Greenwell 1 and Stanley Kertzner 2 1 Department of Mathematics, Hofstra University, Hempstead, NY 11549

### Class Meeting # 1: Introduction to PDEs

MATH 18.152 COURSE NOTES - CLASS MEETING # 1 18.152 Introduction to PDEs, Fall 2011 Professor: Jared Speck Class Meeting # 1: Introduction to PDEs 1. What is a PDE? We will be studying functions u = u(x

### 22 Matrix exponent. Equal eigenvalues

22 Matrix exponent. Equal eigenvalues 22. Matrix exponent Consider a first order differential equation of the form y = ay, a R, with the initial condition y) = y. Of course, we know that the solution to

### Math 313 Lecture #10 2.2: The Inverse of a Matrix

Math 1 Lecture #10 2.2: The Inverse of a Matrix Matrix algebra provides tools for creating many useful formulas just like real number algebra does. For example, a real number a is invertible if there is

### LS.6 Solution Matrices

LS.6 Solution Matrices In the literature, solutions to linear systems often are expressed using square matrices rather than vectors. You need to get used to the terminology. As before, we state the definitions

### 10.3 POWER METHOD FOR APPROXIMATING EIGENVALUES

58 CHAPTER NUMERICAL METHODS. POWER METHOD FOR APPROXIMATING EIGENVALUES In Chapter 7 you saw that the eigenvalues of an n n matrix A are obtained by solving its characteristic equation n c nn c nn...

### The Characteristic Polynomial

Physics 116A Winter 2011 The Characteristic Polynomial 1 Coefficients of the characteristic polynomial Consider the eigenvalue problem for an n n matrix A, A v = λ v, v 0 (1) The solution to this problem

### Polynomials and the Fast Fourier Transform (FFT) Battle Plan

Polynomials and the Fast Fourier Transform (FFT) Algorithm Design and Analysis (Wee 7) 1 Polynomials Battle Plan Algorithms to add, multiply and evaluate polynomials Coefficient and point-value representation

### Solution. Area(OABC) = Area(OAB) + Area(OBC) = 1 2 det( [ 5 2 1 2. Question 2. Let A = (a) Calculate the nullspace of the matrix A.

Solutions to Math 30 Take-home prelim Question. Find the area of the quadrilateral OABC on the figure below, coordinates given in brackets. [See pp. 60 63 of the book.] y C(, 4) B(, ) A(5, ) O x Area(OABC)

### 1 VECTOR SPACES AND SUBSPACES

1 VECTOR SPACES AND SUBSPACES What is a vector? Many are familiar with the concept of a vector as: Something which has magnitude and direction. an ordered pair or triple. a description for quantities such

### Notes on Determinant

ENGG2012B Advanced Engineering Mathematics Notes on Determinant Lecturer: Kenneth Shum Lecture 9-18/02/2013 The determinant of a system of linear equations determines whether the solution is unique, without

### Math 115A HW4 Solutions University of California, Los Angeles. 5 2i 6 + 4i. (5 2i)7i (6 + 4i)( 3 + i) = 35i + 14 ( 22 6i) = 36 + 41i.

Math 5A HW4 Solutions September 5, 202 University of California, Los Angeles Problem 4..3b Calculate the determinant, 5 2i 6 + 4i 3 + i 7i Solution: The textbook s instructions give us, (5 2i)7i (6 + 4i)(

### 9 MATRICES AND TRANSFORMATIONS

9 MATRICES AND TRANSFORMATIONS Chapter 9 Matrices and Transformations Objectives After studying this chapter you should be able to handle matrix (and vector) algebra with confidence, and understand the

### Solving Linear Systems, Continued and The Inverse of a Matrix

, Continued and The of a Matrix Calculus III Summer 2013, Session II Monday, July 15, 2013 Agenda 1. The rank of a matrix 2. The inverse of a square matrix Gaussian Gaussian solves a linear system by reducing

### Vectors, Gradient, Divergence and Curl.

Vectors, Gradient, Divergence and Curl. 1 Introduction A vector is determined by its length and direction. They are usually denoted with letters with arrows on the top a or in bold letter a. We will use

### geometric transforms

geometric transforms 1 linear algebra review 2 matrices matrix and vector notation use column for vectors m 11 =[ ] M = [ m ij ] m 21 m 12 m 22 =[ ] v v 1 v = [ ] T v 1 v 2 2 3 matrix operations addition

### Vector Math Computer Graphics Scott D. Anderson

Vector Math Computer Graphics Scott D. Anderson 1 Dot Product The notation v w means the dot product or scalar product or inner product of two vectors, v and w. In abstract mathematics, we can talk about

### Inner Product Spaces and Orthogonality

Inner Product Spaces and Orthogonality week 3-4 Fall 2006 Dot product of R n The inner product or dot product of R n is a function, defined by u, v a b + a 2 b 2 + + a n b n for u a, a 2,, a n T, v b,

### 4.5 Linear Dependence and Linear Independence

4.5 Linear Dependence and Linear Independence 267 32. {v 1, v 2 }, where v 1, v 2 are collinear vectors in R 3. 33. Prove that if S and S are subsets of a vector space V such that S is a subset of S, then

### Inner Product Spaces

Math 571 Inner Product Spaces 1. Preliminaries An inner product space is a vector space V along with a function, called an inner product which associates each pair of vectors u, v with a scalar u, v, and

### December 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B. KITCHENS

December 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B KITCHENS The equation 1 Lines in two-dimensional space (1) 2x y = 3 describes a line in two-dimensional space The coefficients of x and y in the equation

### Eigenvalues, Eigenvectors, Matrix Factoring, and Principal Components

Eigenvalues, Eigenvectors, Matrix Factoring, and Principal Components The eigenvalues and eigenvectors of a square matrix play a key role in some important operations in statistics. In particular, they

### Introduction to Matrices

Introduction to Matrices Tom Davis tomrdavis@earthlinknet 1 Definitions A matrix (plural: matrices) is simply a rectangular array of things For now, we ll assume the things are numbers, but as you go on

### Notes on Orthogonal and Symmetric Matrices MENU, Winter 2013

Notes on Orthogonal and Symmetric Matrices MENU, Winter 201 These notes summarize the main properties and uses of orthogonal and symmetric matrices. We covered quite a bit of material regarding these topics,

### Eigenvalues and eigenvectors of a matrix

Eigenvalues and eigenvectors of a matrix Definition: If A is an n n matrix and there exists a real number λ and a non-zero column vector V such that AV = λv then λ is called an eigenvalue of A and V is

### Orthogonal Projections

Orthogonal Projections and Reflections (with exercises) by D. Klain Version.. Corrections and comments are welcome! Orthogonal Projections Let X,..., X k be a family of linearly independent (column) vectors

### 7 Gaussian Elimination and LU Factorization

7 Gaussian Elimination and LU Factorization In this final section on matrix factorization methods for solving Ax = b we want to take a closer look at Gaussian elimination (probably the best known method

### Lecture 1: Schur s Unitary Triangularization Theorem

Lecture 1: Schur s Unitary Triangularization Theorem This lecture introduces the notion of unitary equivalence and presents Schur s theorem and some of its consequences It roughly corresponds to Sections

### LINEAR SYSTEMS. Consider the following example of a linear system:

LINEAR SYSTEMS Consider the following example of a linear system: Its unique solution is x +2x 2 +3x 3 = 5 x + x 3 = 3 3x + x 2 +3x 3 = 3 x =, x 2 =0, x 3 = 2 In general we want to solve n equations in

### Second Order Linear Nonhomogeneous Differential Equations; Method of Undetermined Coefficients. y + p(t) y + q(t) y = g(t), g(t) 0.

Second Order Linear Nonhomogeneous Differential Equations; Method of Undetermined Coefficients We will now turn our attention to nonhomogeneous second order linear equations, equations with the standard

### Section 2.1. Section 2.2. Exercise 6: We have to compute the product AB in two ways, where , B =. 2 1 3 5 A =

Section 2.1 Exercise 6: We have to compute the product AB in two ways, where 4 2 A = 3 0 1 3, B =. 2 1 3 5 Solution 1. Let b 1 = (1, 2) and b 2 = (3, 1) be the columns of B. Then Ab 1 = (0, 3, 13) and

### 4.6 Null Space, Column Space, Row Space

Null Space, Column Space, Row Space. In applications of linear algebra, subspaces of R^n typically arise in one of two situations: 1) as the set of solutions of a linear

### 3.8 Finding Antiderivatives; Divergence and Curl of a Vector Field

3.8 Finding Antiderivatives; Divergence and Curl of a Vector Field. Overview: The antiderivative in one variable calculus is an important

### Solutions HW 9.4.2 and 9.4.4: Write the given system in matrix form x' = Ax + f

Solutions HW 9.4.2: Write the given system in matrix form x' = Ax + f.

r'(t) = 2r(t) + sin t
θ'(t) = r(t) - θ(t) + 1

We write this as

[ r'(t) ; θ'(t) ] = [ 2 0 ; 1 -1 ] [ r(t) ; θ(t) ] + [ sin(t) ; 1 ]

9.4.4: Write the given system
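The matrix form can be checked against the scalar equations; the sketch below takes the second equation as θ' = r − θ + 1, which is an assumed reading of the garbled sign in the extraction above:

```python
import numpy as np

# Matrix form of r' = 2r + sin t, theta' = r - theta + 1
# (the sign on theta is an assumption; the source is garbled).
A = np.array([[2.0, 0.0],
              [1.0, -1.0]])

def f(t):
    return np.array([np.sin(t), 1.0])

def rhs_matrix(t, x):
    return A @ x + f(t)                  # x' = A x + f(t)

def rhs_scalar(t, x):
    r, theta = x
    return np.array([2 * r + np.sin(t), r - theta + 1])

# The two formulations agree at an arbitrary state and time.
t, x = 0.7, np.array([1.5, -0.3])
assert np.allclose(rhs_matrix(t, x), rhs_scalar(t, x))
```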

### 5.1 Commutative rings; Integral Domains

5.1 Commutative rings; Integral Domains, from A Study Guide for Beginners by J.A. Beachy, a supplement to Abstract Algebra by Beachy / Blair. 23. Let R be a commutative ring. Prove the following

### Examination paper for TMA4115 Matematikk 3

Department of Mathematical Sciences. Examination paper for TMA4115 Matematikk 3. Academic contact during examination: Antoine Julien a, Alexander Schmeding b, Gereon Quick c. Phone: a 73 59 77 82, b 40 53 99

### Vector Spaces II: Finite Dimensional Linear Algebra

John Nachbar, September 2, 2014. Vector Spaces II: Finite Dimensional Linear Algebra. 1 Definitions and Basic Theorems. For basic properties and notation for R^N, see the notes Vector Spaces I. Definition

### MATH 110 Spring 2015 Homework 6 Solutions

MATH 110 Spring 2015 Homework 6 Solutions. Section 2.6, 2.6.4: Let α denote the standard basis for V = R^3. Let α* = {e_1*, e_2*, e_3*} denote the dual basis of α for V*. We would first like to show that β =

### 1. LINEAR EQUATIONS. A linear equation in n unknowns x_1, x_2, ..., x_n is an equation of the form

1. LINEAR EQUATIONS. A linear equation in n unknowns x_1, x_2, ..., x_n is an equation of the form a_1 x_1 + a_2 x_2 + ... + a_n x_n = b, where a_1, a_2, ..., a_n, b are given real numbers. For example, with x and

### Abstract: We describe the beautiful LU factorization of a square matrix (or how to write Gaussian elimination in terms of matrix multiplication).

MAT 2 (Badger, Spring 202) LU Factorization Selected Notes September 2, 202 Abstract: We describe the beautiful LU factorization of a square matrix (or how to write Gaussian elimination in terms of matrix

### The Hadamard Product

The Hadamard Product, Elizabeth Million, April 12, 2007. 1 Introduction and Basic Results. As inexperienced mathematicians we may have once thought that the natural definition for matrix multiplication would
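The Hadamard product is the entrywise product, in contrast to ordinary row-by-column matrix multiplication; a minimal sketch (the matrices are illustrative):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[5, 6],
              [7, 8]])

hadamard = A * B          # entrywise: (A ∘ B)[i, j] = A[i, j] * B[i, j]
matmul = A @ B            # the usual row-by-column product

assert hadamard.tolist() == [[5, 12], [21, 32]]
assert matmul.tolist() == [[19, 22], [43, 50]]   # the two products differ
```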

### MATH 55 SOLUTIONS TO PRACTICE FINAL EXAM

MATH 55 SOLUTIONS TO PRACTICE FINAL EXAM. 1. Compute lim_{x→2} (x^2 - 4)/(x^2 - 5x + 6). When x ≠ 2, (x^2 - 4)/(x^2 - 5x + 6) = ((x - 2)(x + 2))/((x - 2)(x - 3)) = (x + 2)/(x - 3). So lim_{x→2} (x^2 - 4)/(x^2 - 5x + 6) = (2 + 2)/(2 - 3) = -4. 2. Compute lim (x^2 - 9)/(x + sin
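The cancellation argument can be confirmed numerically; since (2 + 2)/(2 − 3) = −4, the limit is −4 (the minus sign is easily lost in extraction):

```python
# lim_{x->2} (x^2 - 4)/(x^2 - 5x + 6): the common factor (x - 2)
# cancels, leaving (x + 2)/(x - 3), which equals -4 at x = 2.
def f(x):
    return (x**2 - 4) / (x**2 - 5*x + 6)

# Approaching x = 2 numerically, the values tend to -4.
for h in (1e-3, 1e-5, 1e-7):
    assert abs(f(2 + h) - (-4)) < 1e-2
```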

### Equations, Inequalities & Partial Fractions

Contents — Equations, Inequalities & Partial Fractions: .1 Solving Linear Equations; .2 Solving Quadratic Equations; .3 Solving Polynomial Equations; .4 Solving Simultaneous Linear Equations; .5 Solving Inequalities

### Solving simultaneous equations using the inverse matrix

Solving simultaneous equations using the inverse matrix 8.2 Introduction The power of matrix algebra is seen in the representation of a system of simultaneous linear equations as a matrix equation. Matrix
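Writing the system as Ax = b and applying the inverse gives x = A⁻¹b; a minimal sketch with an illustrative invertible 2×2 system:

```python
import numpy as np

# An illustrative system: 2x + y = 5, x + 3y = 10.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])

x = np.linalg.inv(A) @ b          # x = A^{-1} b
assert np.allclose(A @ x, b)
assert np.allclose(x, [1.0, 3.0])

# In practice np.linalg.solve is preferred: it factorizes A rather
# than forming the inverse explicitly, which is cheaper and more stable.
assert np.allclose(x, np.linalg.solve(A, b))
```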

### Matrix Calculations: Applications of Eigenvalues and Eigenvectors; Inner Products

Matrix Calculations: Applications of Eigenvalues and Eigenvectors; Inner Products. H. Geuvers, Institute for Computing and Information Sciences, Intelligent Systems. Version: spring 2015

### 3. Let A and B be two n × n orthogonal matrices. Then prove that AB and BA are both orthogonal matrices. Prove a similar result for unitary matrices.

Exercise 1. 1. Let A be an n × n orthogonal matrix. Then prove that (a) the rows of A form an orthonormal basis of R^n, (b) the columns of A form an orthonormal basis of R^n, (c) for any two vectors x, y ∈ R
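The claim that products of orthogonal matrices are orthogonal is quick to check numerically; rotation matrices serve as illustrative orthogonal matrices here:

```python
import numpy as np

# 2x2 rotation matrices are orthogonal: R^T R = I.
def rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def is_orthogonal(M):
    return np.allclose(M.T @ M, np.eye(M.shape[0]))

A, B = rotation(0.4), rotation(1.1)
assert is_orthogonal(A) and is_orthogonal(B)

# The products AB and BA are again orthogonal.
assert is_orthogonal(A @ B) and is_orthogonal(B @ A)
```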

### Introduction to Matrix Algebra

Psychology 7291: Multivariate Statistics (Carey) 8/27/98 Matrix Algebra - 1 Introduction to Matrix Algebra Definitions: A matrix is a collection of numbers ordered by rows and columns. It is customary

### (d) If the vector b is the sum of the four columns of A, write down the complete solution to Ax = b

(d) If the vector b is the sum of the four columns of A, write down the complete solution to Ax = b. Since b is the sum of all four columns, x = (1, 1, 1, 1) is a particular solution, and the complete solution is x = (1, 1, 1, 1) + x_2 s_1 + x_4 s_2, where s_1 and s_2 are the special (nullspace) solutions for the free variables x_2 and x_4. 2. (11 points) This problem finds the curve y = C + D·2^t which

### 8 Hyperbolic Systems of First-Order Equations

8 Hyperbolic Systems of First-Order Equations. Ref: Evans, Sec. 7.3. 8.1 Definitions and Examples. Let U : R^n × (0, ∞) → R^m, let A_i(x, t) be an m × m matrix for i = 1, ..., n, and let F : R^n × (0, ∞) → R^m. Consider the system U_t + A

### v × w is orthogonal to both v and w; the three vectors v, w and v × w form a right-handed set of vectors.

3. Cross product. Definition 3.1. Let v and w be two vectors in R^3. The cross product of v and w, denoted v × w, is the vector defined as follows: the length of v × w is the area of the parallelogram with
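Both defining properties — orthogonality to v and w, and length equal to the parallelogram's area — can be checked numerically; a minimal sketch with illustrative vectors:

```python
import numpy as np

v = np.array([1.0, 2.0, 0.0])
w = np.array([0.0, 1.0, 3.0])

cross = np.cross(v, w)

assert np.isclose(cross @ v, 0.0)     # orthogonal to v
assert np.isclose(cross @ w, 0.0)     # orthogonal to w

# |v x w|^2 = |v|^2 |w|^2 - (v . w)^2  (Lagrange's identity),
# which is the squared area of the parallelogram spanned by v and w.
area_sq = (v @ v) * (w @ w) - (v @ w) ** 2
assert np.isclose(cross @ cross, area_sq)
```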