MATRICES WITH DISPLACEMENT STRUCTURE: A SURVEY
PLAMEN KOEV

Date: July 1999.

Abstract. In the following survey we look at structured matrices with what is referred to as low displacement rank. Matrices like Cauchy, Vandermonde, polynomial Vandermonde, Chebyshev-Vandermonde, Toeplitz, Hankel, and others depend on only O(n) parameters instead of n^2. This suggests that linear systems of these types should be solvable with some degree of effort less than O(n^3); the same should also extend to the LU factorization and to inversion. Also, the inverses of (say) Vandermonde matrices do not have Vandermonde structure, yet they should have similar properties when it comes to solving linear equations. The property that describes the above structured matrices, their inverses, and their Schur complements is that they have low displacement rank. Exploiting the displacement structure of a matrix allows us to obtain O(n^2) algorithms for solving Ax = b, for obtaining the LU factorization, and for inverting matrices with low displacement rank. The present survey does not contain any new results and is entirely based on the excellent papers by Vadim Olshevsky and Thomas Kailath noted in the references. Our task was to provide an outline of the main results for matrices with low displacement rank and to provide the reader with an insight into the underlying logic of this theory.

Let the matrices F, A ∈ C^{n×n} be given, and let R ∈ C^{n×n} be a matrix satisfying a Sylvester-type equation

    ∇_{F,A}(R) = F R − R A = G B

for some rectangular matrices G ∈ C^{n×α}, B ∈ C^{α×n}, where the number α is small in comparison to n. The pair of matrices (G, B) above is referred to as an {F,A}-generator of R, and the smallest possible inner size α among all {F,A}-generators is called the {F,A}-displacement rank of R. This is the so-called Toeplitz-like displacement operator. The Hankel-like displacement operator is defined as

    Δ_{F,A}(R) = R − F R A = G B.

Basic classes of structured matrices:

- Toeplitz-like: F = Z_1, A = Z_{−1};
- Toeplitz-plus-Hankel-like: F = Y_{00}, A = Y_{11};
- Cauchy-like: F = diag(c_1, ..., c_n), A = diag(d_1, ..., d_n);
- Vandermonde-like: F = diag(1/x_1, ..., 1/x_n), A = Z_1^T;
- Chebyshev-Vandermonde-like: F = diag(x_1, ..., x_n), A = Y_{γδ}.
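These definitions are easy to check numerically. The sketch below is an illustration of my own (not from the survey) and assumes NumPy; it forms the displacement ∇_{F,A}(R) = F R − R A for a Cauchy and for a Vandermonde matrix, with the pairs {F, A} from the list above, and verifies that the numerical rank of the result is one.

```python
import numpy as np

def num_rank(M, tol=1e-8):
    """Numerical rank via singular values."""
    return int((np.linalg.svd(M, compute_uv=False) > tol).sum())

n = 6
rng = np.random.default_rng(0)

# Cauchy: F = diag(c), A = diag(d); displacement rank 1.
c = rng.standard_normal(n)
d = rng.standard_normal(n) + 10.0        # keep c_i - d_j away from zero
C = 1.0 / (c[:, None] - d[None, :])      # C[i, j] = 1 / (c_i - d_j)
print(num_rank(np.diag(c) @ C - C @ np.diag(d)))       # -> 1

# Vandermonde: F = diag(1/x), A = Z_1^T; displacement rank 1.
x = rng.uniform(0.5, 1.5, n)
V = x[:, None] ** np.arange(n)           # V[i, j] = x_i^j
Z1 = np.diag(np.ones(n - 1), -1)         # lower shift ...
Z1[0, -1] = 1.0                          # ... made 1-circulant
print(num_rank(np.diag(1.0 / x) @ V - V @ Z1.T))       # -> 1
```

The analogous check, with the appropriate pair {F, A}, works for each of the other classes in the list.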
Here Z_φ is the lower shift φ-circulant matrix and Y_{γδ} is the symmetric tridiagonal matrix

    Z_φ = [ 0, 0, ..., 0, φ;  1, 0, ..., 0, 0;  ...;  0, 0, ..., 1, 0 ],
    Y_{γδ} = Z_0 + Z_0^T + γ e_1 e_1^T + δ e_n e_n^T,

where e_1 and e_n are the first and the last columns of the identity matrix. (Block and full matrices are written row-wise throughout, [a, b; c, d], and [x; y] stacks x above y.)

Example (Cauchy matrix). The Cauchy matrix C = [ 1/(c_i − d_j) ]_{i,j=1}^n satisfies

    diag(c_1, ..., c_n) C − C diag(d_1, ..., d_n) = [ (c_i − d_j)/(c_i − d_j) ]_{i,j=1}^n = [ 1, ..., 1; ...; 1, ..., 1 ] = (1, ..., 1)^T (1, ..., 1).

Therefore the displacement rank of a Cauchy matrix is one.

The Displacement Structure is Inherited During Inversion.
If F R − R A = G B, then multiplying through by R^{−1} on the left and on the right gives

    A R^{−1} − R^{−1} F = −(R^{−1} G)(B R^{−1}),

so R^{−1} has a similar displacement structure, and its {A,F}-displacement rank is the same as the {F,A}-displacement rank of R.

Similarity Transformations Preserve the Displacement Rank.
If F R − R A = G B and R′ = T_1 R T_2, then F′ R′ − R′ A′ = G′ B′, where

    F′ = T_1 F T_1^{−1},    A′ = T_2^{−1} A T_2,    G′ = T_1 G,    B′ = B T_2.

This allows us to transform a structured matrix from one class to another.
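The inheritance under inversion is easy to confirm numerically. The following check is my own (it assumes NumPy): starting from a Cauchy matrix with its rank-one generator, it verifies that R^{−1} satisfies the displacement equation above with the {A,F}-generator (−R^{−1}G, B R^{−1}).

```python
import numpy as np

n = 5
c = np.arange(1, n + 1, dtype=float)       # c = (1, ..., n)
d = -c                                     # d = (-1, ..., -n), so c_i - d_j != 0
F, A = np.diag(c), np.diag(d)
R = 1.0 / (c[:, None] - d[None, :])        # Cauchy matrix
G, B = np.ones((n, 1)), np.ones((1, n))    # F R - R A = G B with alpha = 1

Rinv = np.linalg.inv(R)
lhs = A @ Rinv - Rinv @ F                  # {A, F}-displacement of R^{-1}
rhs = -(Rinv @ G) @ (B @ Rinv)             # generator of R^{-1}
print(np.allclose(lhs, rhs))               # -> True
```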
The Displacement Structure is Inherited During Schur Complementation.

Lemma. Let the matrix

    R_1 = [ d_1, u_1; l_1, R^{(1)} ]

(with d_1 a scalar, u_1 a row, and l_1 a column) satisfy the Sylvester-type displacement equation

    ∇_{F_1,A_1}(R_1) = F_1 R_1 − R_1 A_1 = G_1 B_1,    F_1 = [ f_1, 0; ∗, F_2 ],    A_1 = [ a_1, ∗; 0, A_2 ],

where G_1 ∈ C^{n×α} and B_1 ∈ C^{α×n}. If d_1 ≠ 0, then the Schur complement R_2 = R^{(1)} − l_1 u_1 / d_1 satisfies the displacement equation F_2 R_2 − R_2 A_2 = G_2 B_2, where

    [ 0; G_2 ] = G_1 − (1/d_1) [ d_1; l_1 ] g_1,    [ 0, B_2 ] = B_1 − (1/d_1) b_1 [ d_1, u_1 ],

and g_1 and b_1 are the first row of G_1 and the first column of B_1, respectively.

Proof. From the standard Schur complementation formula

    R_1 = [ 1, 0; l_1/d_1, I ] [ d_1, 0; 0, R_2 ] [ 1, u_1/d_1; 0, I ],

substituting into F_1 R_1 − R_1 A_1 = G_1 B_1 and multiplying on the left by [ 1, 0; −l_1/d_1, I ] and on the right by [ 1, −u_1/d_1; 0, I ], we get

    ( [ 1, 0; −l_1/d_1, I ] F_1 [ 1, 0; l_1/d_1, I ] ) [ d_1, 0; 0, R_2 ] − [ d_1, 0; 0, R_2 ] ( [ 1, u_1/d_1; 0, I ] A_1 [ 1, −u_1/d_1; 0, I ] ) = ( [ 1, 0; −l_1/d_1, I ] G_1 ) ( B_1 [ 1, −u_1/d_1; 0, I ] ).

Since F_1 is lower triangular and A_1 is upper triangular, the (2,2) blocks of the two conjugated matrices are still F_2 and A_2. Equating the (2,2) block entries, one obtains the desired result. ∎

Note: the requirement that F be lower and A be upper triangular is essential; otherwise the above (and the fast algorithm below) does not work. What this means is that one step of Gaussian elimination must leave the trailing blocks F_2 and A_2 unchanged, so that the Schur complement satisfies a displacement equation of exactly the same form.

Fast Gaussian Elimination for a Structured Matrix.
Recover from the generator the first column and the first row of

    R = [ d_1, u_1; l_1, R^{(1)} ].

Note: this must take O(1) flops per entry, or the cost of the algorithm goes beyond O(n^2). Now one has the first column (1/d_1)[ d_1; l_1 ] of L and the first row [ d_1, u_1 ] of U in the LU factorization of R. Compute a generator of the Schur complement of R using

    [ 0; G_2 ] = G_1 − (1/d_1) [ d_1; l_1 ] g_1,    [ 0, B_2 ] = B_1 − (1/d_1) b_1 [ d_1, u_1 ],

where g_1 and b_1 are the first row of G_1 and the first column of B_1, respectively, and recurse on the Schur complement.
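For the Cauchy-like case, F = diag(c) and A = diag(d), each Schur complement is itself Cauchy-like and its entries are available from the current generator as g_i b_j / (c_i − d_j), so the whole recursion takes O(αn^2) flops. The sketch below is a minimal illustration of this scheme under my own naming (it is not from the survey); it assumes NumPy, works on the generator alone without ever forming R, and assumes all pivots are nonzero (no pivoting yet).

```python
import numpy as np

def fast_ge_cauchy(c, d, G, B):
    """LU factorization of the Cauchy-like R with diag(c) R - R diag(d) = G B,
    computed from the generator (G, B) alone in O(alpha n^2) flops.
    Assumes real data and nonzero pivots (no pivoting)."""
    n = len(c)
    G, B = G.astype(float), B.astype(float)      # working copies
    L, U = np.eye(n), np.zeros((n, n))
    for k in range(n):
        # First column and first row of the current Schur complement,
        # recovered in O(1) flops per entry: entries are g_i b_j / (c_i - d_j).
        col = (G[k:] @ B[:, k]) / (c[k:] - d[k])
        row = (G[k] @ B[:, k:]) / (c[k] - d[k:])
        piv = col[0]                             # the pivot d_1 of this step
        L[k:, k] = col / piv
        U[k, k:] = row
        # Generator update from the lemma:
        #   [0; G_2] = G_1 - (1/d_1) [d_1; l_1] g_1
        #   [0, B_2] = B_1 - (1/d_1) b_1 [d_1, u_1]
        G[k:] = G[k:] - np.outer(col / piv, G[k])
        B[:, k:] = B[:, k:] - np.outer(B[:, k], row) / piv
    return L, U
```

Each step touches only the generator, about α(n − k) entries, which is where the O(n^2) total comes from; classical Gaussian elimination updates the full trailing submatrix instead.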
Example. Consider the Sylvester-type displacement equation for a 3×3 Cauchy-like matrix R_1,

    diag(f_1, f_2, f_3) R_1 − R_1 diag(a_1, a_2, a_3) = G_1 B_1,    G_1 ∈ C^{3×2},    B_1 ∈ C^{2×3}.

(If diag(c_i) R − R diag(d_i) = G B, then r_ij = g_i b_j / (c_i − d_j), where g_i and b_j are the i-th row of G and the j-th column of B, respectively.) The first column and the first row of R_1 = [ d_1, u_1; l_1, R^{(1)} ] can therefore be recovered from the generator in O(1) flops per entry; this gives the first column (1/d_1)[ d_1; l_1 ] of L and the first row [ d_1, u_1 ] of U in the LU decomposition R_1 = L U. The generators of the Schur complement R_2 are

    [ 0; G_2 ] = G_1 − (1/d_1) [ d_1; l_1 ] g_1,    [ 0, B_2 ] = B_1 − (1/d_1) b_1 [ d_1, u_1 ],

and R_2 satisfies the displacement equation

    diag(f_2, f_3) R_2 − R_2 diag(a_2, a_3) = G_2 B_2,

so R_2 is again Cauchy-like, with its entries available directly from (G_2, B_2). We continue the same way: the second column of L and the second row of U are the (scaled) first column and first row of R_2; the generators (G_3, B_3) of the 1×1 Schur complement R_3 follow from the same update formulas; and the displacement equation f_3 R_3 − R_3 a_3 = G_3 B_3 yields the last pivot R_3 = u_{33}. After these three steps the complete LU decomposition R_1 = L U has been assembled without ever forming R_1 itself.
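To see the recursion run on concrete data, here is a small self-contained check; the numbers are mine, not the survey's, and fast_ge_cauchy is the sketch given above.

```python
import numpy as np
# (uses fast_ge_cauchy from the sketch above)

n = 4
c = np.array([4.0, 5.0, 6.0, 7.0])
d = np.array([1.0, 2.0, 3.0, 0.0])
G = np.column_stack([np.ones(n), c])        # generator of inner size alpha = 2
B = np.vstack([d, np.ones(n)])
R = (G @ B) / (c[:, None] - d[None, :])     # r_ij = g_i b_j / (c_i - d_j)

L, U = fast_ge_cauchy(c, d, G, B)
print(np.allclose(L @ U, R))                                   # -> True
print(np.allclose(np.tril(L), L), np.allclose(np.triu(U), U))  # -> True True
```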
Pivoting for Matrices with Displacement Structure.
Partial pivoting may be applied to matrices with displacement structure that satisfy the displacement equation F R − R A = G B with F a diagonal matrix. After a row interchange the matrix R̂ = P R satisfies the same displacement equation, with the diagonal matrix F replaced by another diagonal matrix F̂ = P F P^T and with G replaced by Ĝ = P G. Indeed,

    F R − R A = G B

implies

    (P F P^T)(P R) − (P R) A = (P G) B.
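A short numerical confirmation of this fact (my own check, assuming NumPy): permuting the rows of a Cauchy-like R only permutes the diagonal of F and the rows of G.

```python
import numpy as np

n = 5
rng = np.random.default_rng(1)
c = rng.standard_normal(n)
d = rng.standard_normal(n) + 5.0                 # keep c_i - d_j nonzero
G = rng.standard_normal((n, 2))
B = rng.standard_normal((2, n))
R = (G @ B) / (c[:, None] - d[None, :])          # Cauchy-like, alpha = 2

perm = rng.permutation(n)
PR = R[perm]                                     # P R: rows interchanged
lhs = np.diag(c[perm]) @ PR - PR @ np.diag(d)    # P F P^T = diag(c[perm])
rhs = G[perm] @ B                                # G-hat = P G, B unchanged
print(np.allclose(lhs, rhs))                     # -> True
```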
Fast GEPP Algorithm for Structured Matrices.
Recover from the generator the first column of

    R = [ d_1, u_1; l_1, R^{(1)} ].

Note: how this is done will depend on the form of the matrices F and A. The procedure was specified above for a Cauchy-like matrix; procedures exist for the recovery of the matrix R from its displacement equation for all the basic classes of matrices with displacement structure.

Next, determine the position, say (k,1), of the entry with maximal magnitude in the first column. Let P_1 be the permutation of the first and the k-th entries. Interchange the first and the k-th diagonal entries of F, and interchange the first and the k-th rows in the matrix G. Then recover the first row of P_1 R from the generator. Now one has the first column (1/d_1)[ d_1; l_1 ] of L and the first row [ d_1, u_1 ] of U in the LU factorization of P_1 R. Compute a generator of the Schur complement R_2 of P_1 R using

    [ 0; G_2 ] = G_1 − (1/d_1) [ d_1; l_1 ] g_1,    [ 0, B_2 ] = B_1 − (1/d_1) b_1 [ d_1, u_1 ],

where g_1 and b_1 are the first row of G_1 and the first column of B_1, respectively. Proceeding recursively, one finally obtains the factorization R = P L U, where P = P_1 ⋯ P_{n−1} and P_k is the permutation used at the k-th step of the recursion.

Fast Inversion for Matrices with Displacement Structure.
For the next paragraphs we will assume that we know how to solve R x = b in O(n^2) operations when the nonsingular matrix R satisfies F R − R A = G B. To do this we can either use fast Gaussian elimination, or first transform R into a Cauchy-like matrix (we will see later how) and use fast GEPP.

From F R − R A = G B we obtain

    A R^{−1} − R^{−1} F = −(R^{−1} G)(B R^{−1}),

thus R^{−1} satisfies a very similar displacement equation. If we know the {A,F}-generator {−R^{−1} G, B R^{−1}} of R^{−1}, then we can recover R^{−1} from this displacement equation in O(n^2) time. Note that algorithms exist for the recovery of the matrix R from the displacement equation F R − R A = G B for most classes (actually, for all the famous classes: Toeplitz, Chebyshev-Vandermonde, etc.) of matrices with low displacement rank in O(1) operations per entry, i.e., in O(n^2) operations for the entire matrix.

We can compute R^{−1} G and B R^{−1} in O(n^2) time as follows. First compute R^{−1} G by solving the α linear systems R x = g_i, i = 1, ..., α, where the g_i are the columns of G and α is the displacement rank of R. Since solving R x = b takes O(n^2) time and α is small in comparison with n, we obtain R^{−1} G in O(n^2) time. Then compute B R^{−1} as B R^{−1} = (R^{−T} B^T)^T, where R^{−T} B^T is the solution of the α systems R^T x = b_i^T, i = 1, ..., α, with b_i the i-th row of B. Each of those systems can also be solved in O(n^2) time, because if R = L U then R^{−T} = U^{−T} L^{−T}. If the matrix R was first transformed into another type of structured matrix (say, into a Cauchy-like matrix from a Toeplitz-like one, in order to apply GEPP), so that T_1 R T_2 = L U, then (T_1 R T_2)^{−T} = U^{−T} L^{−T}, and we can still solve R^T x = b in O(n^2) time because, as we will see later, the matrices T_1 and T_2 are diagonal matrices, fast trigonometric transforms, or products thereof.

Having obtained the generators of R^{−1} in O(n^2) time, we can then recover the matrix R^{−1} itself from the generators and the displacement equation in O(n^2) time. The total time required for the inversion of R is therefore O(n^2).
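For the Cauchy-like case this procedure takes only a few lines on top of the fast_ge_cauchy sketch from earlier. The helper below is my own illustration of the scheme (the name is mine); it assumes SciPy for the triangular solves, each pair of which costs O(n^2).

```python
import numpy as np
from scipy.linalg import solve_triangular
# (uses fast_ge_cauchy from the earlier sketch)

def fast_inverse_generators(c, d, G, B):
    """{A, F}-generator of R^{-1} for diag(c) R - R diag(d) = G B:
    one fast LU plus 2*alpha pairs of triangular solves, O(alpha n^2) total."""
    L, U = fast_ge_cauchy(c, d, G, B)
    # R^{-1} G: solve R x = g_i for every column g_i of G.
    RinvG = solve_triangular(U, solve_triangular(L, G, lower=True))
    # B R^{-1} = (R^{-T} B^T)^T: solve R^T x = b_i^T, using R^{-T} = U^{-T} L^{-T}.
    BRinv = solve_triangular(L.T, solve_triangular(U.T, B.T, lower=True)).T
    return -RinvG, BRinv

n = 4
c = np.array([4.0, 5.0, 6.0, 7.0])
d = np.array([1.0, 2.0, 3.0, 0.0])
G = np.column_stack([np.ones(n), c])
B = np.vstack([d, np.ones(n)])
R = (G @ B) / (c[:, None] - d[None, :])

Gi, Bi = fast_inverse_generators(c, d, G, B)
# R^{-1} is Cauchy-like: diag(d) R^{-1} - R^{-1} diag(c) = Gi Bi, so recover it
# entrywise from its generator, again in O(1) flops per entry.
Rinv = (Gi @ Bi) / (d[:, None] - c[None, :])
print(np.allclose(Rinv @ R, np.eye(n)))        # -> True
```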
Transformation of Toeplitz-like matrices into Cauchy-like matrices.
As described earlier, we need to be able to convert the other classes of structured matrices into Cauchy-like matrices before we can apply partial pivoting.

If R is a Toeplitz matrix, then Z_1 R − R Z_{−1} = G B, where the rank of G B is not greater than 2. Matrices that satisfy Z_1 R − R Z_{−1} = G B with G B of low rank are referred to as Toeplitz-like matrices. Here is how Toeplitz-like matrices are transformed into Cauchy-like matrices. Consider the (normalized) Discrete Fourier Transform matrix

    F = (1/√n) [ e^{(2πi/n)(k−1)(j−1)} ]_{k,j=1}^n

and the diagonal matrices

    D_1 = diag(1, e^{2πi/n}, ..., e^{2πi(n−1)/n}),
    D_{−1} = diag(e^{πi/n}, e^{3πi/n}, ..., e^{(2n−1)πi/n}),
    D_0 = diag(1, e^{πi/n}, ..., e^{(n−1)πi/n}).

The following factorizations are well known:

    Z_1 = F^∗ D_1 F,    Z_{−1} = D_0^∗ F^∗ D_{−1} F D_0.

Substituting the above into Z_1 R − R Z_{−1} = G B, one obtains

    D_1 (F R D_0^∗ F^∗) − (F R D_0^∗ F^∗) D_{−1} = (F G)(B D_0^∗ F^∗),

i.e., F R D_0^∗ F^∗ is a Cauchy-like matrix. After applying fast GEPP we obtain F R D_0^∗ F^∗ = P L U, and hence the factorization R = F^∗ P L U F D_0. Solving R x = b will then require O(n^2) operations: the application of two (normalized) DFTs, one diagonal scaling, a permutation, and one forward and one backward substitution.

Transformation of Vandermonde-like into Cauchy-like matrices.
The Vandermonde matrix V = [ x_i^{j−1} ]_{i,j=1}^n satisfies the displacement equation

    D_{1/x} V − V Z_1^T = [ (1 − x_1^n)/x_1, ..., (1 − x_n^n)/x_n ]^T [ 1, 0, ..., 0 ],

where D_{1/x} = diag(1/x_1, ..., 1/x_n). By analogy, we shall refer to any matrix R with low {D_{1/x}, Z_1^T}-displacement rank as a Vandermonde-like matrix. If D_{1/x} R − R Z_1^T = G B, then R F is a Cauchy-like matrix:

    D_{1/x} (R F) − (R F) D_1 = G (B F).
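The Toeplitz case of this reduction is easy to verify numerically. The check below is mine (assuming NumPy); it builds a random Toeplitz matrix, forms F R D_0^∗ F^∗ with explicit transform matrices, and confirms that its {D_1, D_{−1}}-displacement rank is 2, i.e., that the result is Cauchy-like.

```python
import numpy as np

n = 8
rng = np.random.default_rng(2)
t = rng.standard_normal(2 * n - 1)
R = np.array([[t[n - 1 + i - j] for j in range(n)] for i in range(n)])  # Toeplitz

k = np.arange(n)
omega = np.exp(2j * np.pi / n)
F = omega ** np.outer(k, k) / np.sqrt(n)             # normalized DFT matrix
D1 = np.diag(omega ** k)                             # eigenvalues of Z_1
Dm1 = np.diag(np.exp(1j * np.pi * (2 * k + 1) / n))  # eigenvalues of Z_{-1}
D0 = np.diag(np.exp(1j * np.pi * k / n))

C = F @ R @ D0.conj() @ F.conj().T                   # the Cauchy-like image of R
disp = D1 @ C - C @ Dm1
svals = np.linalg.svd(disp, compute_uv=False)
print(int((svals > 1e-8).sum()))                     # -> 2
```

In a real implementation the products with F and F^∗ would of course be applied as FFTs in O(n log n) time rather than as explicit matrix products.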
Toeplitz-plus-Hankel-like matrices.
Let T = [ t_{i−j} ]_{i,j=1}^n be a Toeplitz matrix and let H = [ h_{i+j} ]_{i,j=1}^n be a Hankel matrix. We have

    rank ( Y_{00} (T + H) − (T + H) Y_{11} ) ≤ 4.

Matrices with low {Y_{00}, Y_{11}}-displacement rank are referred to as Toeplitz-plus-Hankel-like matrices.

The matrix Y_{γδ} with γ, δ ∈ {−1, 1} or γδ = 0 can be diagonalized by fast trigonometric transform matrices. In particular,

    Y_{00} = S D_S S,    Y_{11} = C D_C C^T,

where

    C = √(2/n) [ q_j cos( (2k−1)(j−1)π/(2n) ) ]_{k,j=1}^n,    S = √(2/(n+1)) [ sin( kjπ/(n+1) ) ]_{k,j=1}^n

are the (normalized) Discrete Cosine Transform-II and Discrete Sine Transform-I matrices, respectively (q_1 = 1/√2, q_2 = ... = q_n = 1), and

    D_C = diag( 2, 2 cos(π/n), ..., 2 cos((n−1)π/n) ),    D_S = diag( 2 cos(π/(n+1)), ..., 2 cos(nπ/(n+1)) ).

If R is a Toeplitz-plus-Hankel-like matrix, then S R C is a Cauchy-like matrix: the equation Y_{00} R − R Y_{11} = G B yields

    D_S (S R C) − (S R C) D_C = (S G)(B C).
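Both diagonalizations, and the resulting reduction to Cauchy-like form, can be verified directly; the check below is my own (assuming NumPy).

```python
import numpy as np

n = 8
k = np.arange(1, n + 1)

# DST-I and DCT-II matrices and the eigenvalue matrices D_S and D_C.
S = np.sqrt(2.0 / (n + 1)) * np.sin(np.outer(k, k) * np.pi / (n + 1))
q = np.r_[1.0 / np.sqrt(2.0), np.ones(n - 1)]
C = np.sqrt(2.0 / n) * q * np.cos(np.outer(2 * k - 1, k - 1) * np.pi / (2 * n))
DS = np.diag(2.0 * np.cos(k * np.pi / (n + 1)))
DC = np.diag(2.0 * np.cos((k - 1) * np.pi / n))

Z0 = np.diag(np.ones(n - 1), -1)                 # lower shift
Y00 = Z0 + Z0.T
Y11 = Y00 + np.diag(np.eye(n)[0] + np.eye(n)[-1])

print(np.allclose(S @ DS @ S, Y00))              # -> True (S symmetric, S @ S = I)
print(np.allclose(C @ DC @ C.T, Y11))            # -> True (C orthogonal)

# S R C is Cauchy-like for a Toeplitz-plus-Hankel matrix R.
rng = np.random.default_rng(3)
t = rng.standard_normal(2 * n)
R = np.array([[t[i - j + n - 1] + t[i + j] for j in range(n)] for i in range(n)])
SRC = S @ R @ C
disp = DS @ SRC - SRC @ DC
print(int((np.linalg.svd(disp, compute_uv=False) > 1e-8).sum()))  # at most 4
```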
Chebyshev-Vandermonde Matrices.
Let T_0, T_1, ... and U_0, U_1, ... be the Chebyshev polynomials of the first and of the second kind, respectively. For nonzero x_1, ..., x_n, the matrices

    V_T(x) = [ T_{j−1}(x_i) ]_{i,j=1}^n    and    V_U(x) = [ U_{j−1}(x_i) ]_{i,j=1}^n

are referred to as Chebyshev-Vandermonde matrices. The Chebyshev polynomials satisfy the relations

    T_0(x) = 1,    T_1(x) = x,    T_n(x) = 2x T_{n−1}(x) − T_{n−2}(x),
    U_0(x) = 1,    U_1(x) = 2x,    U_n(x) = 2x U_{n−1}(x) − U_{n−2}(x).

Consider F = D_x = diag(x_1, ..., x_n) and A = W, where W is the matrix that encodes the three-term recurrences above,

    W = (1/2) ( Z_0 + Z_0^T + e_1 e_2^T ),

with Z_0, as above, the lower shift. Let D_0 = diag(1, 2, ..., 2). The Chebyshev-Vandermonde matrices then satisfy

    D_x (V_T(x) D_0) − (V_T(x) D_0) W = [ T_n(x_1), ..., T_n(x_n) ]^T e_n^T,
    D_x V_U(x) − V_U(x) W = (1/2) [ U_n(x_1), ..., U_n(x_n) ]^T e_n^T − (1/2) [ 1, ..., 1 ]^T e_2^T.

By analogy, we will refer to matrices with small {D_x, W}-displacement rank as Chebyshev-Vandermonde-like.

Alternatively, one can prove that the Chebyshev-Vandermonde-like matrices have low {D_x, Y_{11}}-, {D_x, Y_{00}}-, or {D_x, Z_1 + Z_1^T}-displacement rank. All the displacement operators above describe in fact the same class of matrices: a matrix that has low rank with respect to one displacement operator will have low displacement rank with respect to the other operators as well (but not necessarily the same rank). If a matrix R has low {D_x, Y_{11}}-, {D_x, Y_{00}}-, or {D_x, Z_1 + Z_1^T}-displacement rank, then R C, R S, or R F, respectively, is a Cauchy-like matrix, where S, C, and F are the appropriate discrete trigonometric (and Fourier) transform matrices described earlier in the text.

References

[1] I. Gohberg, T. Kailath, and V. Olshevsky, Fast Gaussian elimination with partial pivoting for matrices with displacement structure, Math. Comp. 64 (1995), pp. 1557–1576.
[2] T. Kailath and V. Olshevsky, Displacement structure approach to Chebyshev-Vandermonde and related matrices, Integral Equations Operator Theory 22 (1995), pp. 65–92.
[3] T. Kailath and V. Olshevsky, Displacement-structure approach to polynomial Vandermonde and related matrices, Linear Algebra Appl. 261 (1997), pp. 49–90.