Lecture 4: Partitioned Matrices and Determinants




Elementary row operations

Recall the elementary operations on the rows of a matrix, each equivalent to premultiplying by an elementary matrix $E$:
(1) multiplying row $i$ by a nonzero scalar $\alpha$, denoted by $E_i(\alpha)$,
(2) adding $\beta$ times row $j$ to row $i$, denoted by $E_{ij}(\beta)$ (here $\beta$ is any scalar), and
(3) interchanging rows $i$ and $j$, denoted by $E_{ij}$ (here $i \neq j$),
called elementary row operations of types 1, 2 and 3, respectively. Illustrations for $m = 4$:
$$E_2(\alpha) = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \alpha & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}, \quad E_{42}(\beta) = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & \beta & 0 & 1 \end{bmatrix}, \quad E_{13} = \begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}.$$
Q. Calculate the determinants of these matrices.
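As a quick numerical check of the question above (a sketch using NumPy; the helper names are ours, not from the lecture), the answers are $\alpha$ for type 1, $1$ for type 2, and $-1$ for type 3:

```python
import numpy as np

def E_type1(m, i, alpha):
    """E_i(alpha): multiply row i by alpha (1-based indices)."""
    E = np.eye(m)
    E[i - 1, i - 1] = alpha
    return E

def E_type2(m, i, j, beta):
    """E_ij(beta): add beta times row j to row i."""
    E = np.eye(m)
    E[i - 1, j - 1] = beta
    return E

def E_type3(m, i, j):
    """E_ij: interchange rows i and j."""
    E = np.eye(m)
    E[[i - 1, j - 1]] = E[[j - 1, i - 1]]
    return E

# The three illustrations for m = 4 from the slide.
print(np.linalg.det(E_type1(4, 2, 5.0)))    # det E_2(alpha) = alpha (here 5)
print(np.linalg.det(E_type2(4, 4, 2, 7.0))) # det E_42(beta) = 1
print(np.linalg.det(E_type3(4, 1, 3)))      # det E_13 = -1
```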

Determinants

Definition (Cullen and Gale). The determinant is the function $\det : \mathbb{C}^{n\times n} \to \mathbb{C}$ such that
(a) $\det(E_i(\alpha)) = \alpha$, for all $\alpha \in \mathbb{C}$, $i \in \overline{1,n}$, and
(b) $\det(AB) = \det(A)\det(B)$, for all $A, B \in \mathbb{C}^{n\times n}$.

Q. Prove that this definition is equivalent to the one you know.
Q. Explain why $\det E_{ij}(\beta) = 1$, or equivalently, why $\det(E_{ij}(\beta)A) = \det A$ for all $A$.

The Binet–Cauchy formula. If $A \in \mathbb{C}^{k\times n}$, $B \in \mathbb{C}^{n\times k}$, then
$$\det(AB) = \sum_{I \in Q_{k,n}} \det A_I \, \det B_I,$$
where $A_I$ is the $k \times k$ submatrix of $A$ formed by the columns indexed by $I$, and $B_I$ the $k \times k$ submatrix of $B$ formed by the corresponding rows. Here $Q_{k,n}$ is the set of increasing sequences of $k$ elements from $\overline{1,n}$; for example,
$$Q_{2,3} = \{\{1,2\}, \{1,3\}, \{2,3\}\}.$$
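The Binet–Cauchy formula can be checked numerically on a small random example (a NumPy sketch; `binet_cauchy` is our name for the right-hand side):

```python
import numpy as np
from itertools import combinations

def binet_cauchy(A, B):
    """Right-hand side: sum over I in Q_{k,n} of det(A_I) * det(B_I)."""
    k, n = A.shape
    total = 0.0
    for I in combinations(range(n), k):
        total += np.linalg.det(A[:, list(I)]) * np.linalg.det(B[list(I), :])
    return total

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 4))
B = rng.standard_normal((4, 2))
lhs = np.linalg.det(A @ B)
rhs = binet_cauchy(A, B)
print(lhs, rhs)  # the two values agree up to rounding
```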

Bordered matrices

Theorem (Blattner). Let $A \in \mathbb{C}_r^{m\times n}$ and let the matrices $U$ and $V$ satisfy
(a) $U \in \mathbb{C}_{m-r}^{m\times(m-r)}$ and the columns of $U$ are a basis for $N(A^*)$,
(b) $V \in \mathbb{C}_{n-r}^{n\times(n-r)}$ and the columns of $V$ are a basis for $N(A)$.
Then the matrix
$$\begin{bmatrix} A & U \\ V^* & O \end{bmatrix} \tag{1}$$
is nonsingular and its inverse is
$$\begin{bmatrix} A^\dagger & (V^*)^\dagger \\ U^\dagger & O \end{bmatrix}. \tag{2}$$

The Cramer rule

Given a matrix $A$ and a vector $b$, $A[j \leftarrow b]$ denotes the matrix obtained from $A$ by replacing the $j$-th column by $b$.

Theorem (Cramer). Let $A \in \mathbb{C}^{n\times n}$ be nonsingular. Then for any $b \in \mathbb{C}^n$, the solution $x = [x_j]$ of
$$Ax = b \tag{1}$$
is given by
$$x_j = \frac{\det A[j \leftarrow b]}{\det A}, \quad j \in \overline{1,n}.$$

Proof (Robinson). Write $Ax = b$ as
$$A \, I_n[j \leftarrow x] = A[j \leftarrow b], \quad j \in \overline{1,n},$$
and take determinants:
$$\det A \, \det I_n[j \leftarrow x] = \det A[j \leftarrow b].$$
Since $\det I_n[j \leftarrow x] = x_j$ (expand along the $j$-th row, whose only nonzero entry is $x_j$ in position $(j,j)$), the result follows. ∎
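A direct implementation of the rule makes the column-replacement mechanics concrete (a NumPy sketch; `cramer_solve` is our name):

```python
import numpy as np

def cramer_solve(A, b):
    """Solve Ax = b by Cramer's rule: x_j = det(A[j <- b]) / det(A)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    n = A.shape[0]
    dA = np.linalg.det(A)
    x = np.empty(n)
    for j in range(n):
        Aj = A.copy()
        Aj[:, j] = b              # replace the j-th column by b
        x[j] = np.linalg.det(Aj) / dA
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])
x = cramer_solve(A, b)
print(x)  # [0.8 1.4], same as np.linalg.solve(A, b)
```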

Bordered matrices (cont'd)

Proof. Recall: $A \in \mathbb{C}_r^{m\times n}$, and the matrices $U$ and $V$ satisfy
(a) $U \in \mathbb{C}_{m-r}^{m\times(m-r)}$ and the columns of $U$ are a basis for $N(A^*)$,
(b) $V \in \mathbb{C}_{n-r}^{n\times(n-r)}$ and the columns of $V$ are a basis for $N(A)$.
Compute the block product
$$\begin{bmatrix} A & U \\ V^* & O \end{bmatrix}\begin{bmatrix} A^\dagger & (V^*)^\dagger \\ U^\dagger & O \end{bmatrix} = \begin{bmatrix} AA^\dagger + UU^\dagger & A(V^*)^\dagger \\ V^*A^\dagger & V^*(V^*)^\dagger \end{bmatrix}. \tag{1}$$
(2) $R(U) = N(A^*) = R(A)^\perp \implies AA^\dagger + UU^\dagger = P_{R(A)} + P_{N(A^*)} = I_m$.
(3) $V^*A^\dagger = V^*A^\dagger A A^\dagger = (A^\dagger A V)^* A^\dagger = O$, since $A^\dagger A$ is Hermitian and $A^\dagger A V = A^\dagger(AV) = O$.
(4) $A(V^*)^\dagger = A\,V(V^*V)^{-1} = O$, since $AV = O$; and $V^*(V^*)^\dagger = V^*V(V^*V)^{-1} = I_{n-r}$.
Hence the product (1) is $I_{m+n-r}$, proving the inverse formula. ∎

A special case

Corollary. Let $A \in \mathbb{C}_r^{m\times n}$ and let the matrices $U \in \mathbb{C}^{m\times(m-r)}$ and $V \in \mathbb{C}^{n\times(n-r)}$ satisfy
$$AV = O, \quad V^*V = I_{n-r}, \quad A^*U = O, \quad U^*U = I_{m-r}.$$
Then the matrix
$$\begin{bmatrix} A & U \\ V^* & O \end{bmatrix}$$
is nonsingular and its inverse is
$$\begin{bmatrix} A^\dagger & V \\ U^* & O \end{bmatrix}.$$
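This special case is easy to verify numerically: orthonormal bases for $N(A^*)$ and $N(A)$ can be read off the full SVD of a rank-deficient $A$ (a NumPy sketch, real-valued for simplicity so that ${}^*$ is just transpose):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, r = 5, 4, 2
# Random rank-r matrix A.
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))

# Orthonormal bases: columns of U span N(A*), columns of V span N(A).
W, s, Zh = np.linalg.svd(A)
U = W[:, r:]        # m x (m - r)
V = Zh[r:, :].T     # n x (n - r)

# Bordered matrix [[A, U], [V*, O]] and its claimed inverse [[A+, V], [U*, O]].
B = np.block([[A, U], [V.T, np.zeros((n - r, m - r))]])
Binv = np.block([[np.linalg.pinv(A), V], [U.T, np.zeros((m - r, n - r))]])
print(np.allclose(B @ Binv, np.eye(m + n - r)))  # True
```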

Corollary. Let $A \in \mathbb{C}_r^{m\times n}$, $b \in \mathbb{C}^m$, and let the matrices $U$ and $V$ satisfy
(a) $U \in \mathbb{C}_{m-r}^{m\times(m-r)}$ and the columns of $U$ are a basis for $N(A^*)$,
(b) $V \in \mathbb{C}_{n-r}^{n\times(n-r)}$ and the columns of $V$ are a basis for $N(A)$.
Consider the linear equation
$$Ax = b. \tag{1}$$
The solution $(x, y)$ of
$$\begin{bmatrix} A & U \\ V^* & O \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} b \\ 0 \end{bmatrix} \tag{2}$$
satisfies
$x = A^\dagger b$, the minimal-norm least-squares solution (MNLSS) of (1), and
$Uy = P_{N(A^*)}b$, the residual of (1).
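So one nonsingular solve of the bordered system delivers both the MNLSS and the residual. A numerical check (NumPy sketch, real case, orthonormal bases from the SVD):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, r = 5, 4, 2
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
b = rng.standard_normal(m)

W, s, Zh = np.linalg.svd(A)
U = W[:, r:]      # basis of N(A*)
V = Zh[r:, :].T   # basis of N(A)

# Solve the bordered (nonsingular) system [[A, U], [V*, O]] [x; y] = [b; 0].
B = np.block([[A, U], [V.T, np.zeros((n - r, m - r))]])
xy = np.linalg.solve(B, np.concatenate([b, np.zeros(n - r)]))
x, y = xy[:n], xy[n:]

print(np.allclose(x, np.linalg.pinv(A) @ b))  # x is the MNLSS of Ax = b
print(np.allclose(U @ y, b - A @ x))          # Uy is the residual
```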

Corollary. Let $A \in \mathbb{C}_r^{m\times n}$, $b \in \mathbb{C}^m$, and let the matrices $U$ and $V$ satisfy
(a) $U \in \mathbb{C}_{m-r}^{m\times(m-r)}$ and the columns of $U$ are a basis for $N(A^*)$,
(b) $V \in \mathbb{C}_{n-r}^{n\times(n-r)}$ and the columns of $V$ are a basis for $N(A)$.
Consider the linear equation
$$Ax = b. \tag{1}$$
The minimal-norm least-squares solution $x = [x_j]$ of (1) is given by
$$x_j = \frac{\det \begin{bmatrix} A[j \leftarrow b] & U \\ V^*[j \leftarrow 0] & O \end{bmatrix}}{\det \begin{bmatrix} A & U \\ V^* & O \end{bmatrix}}, \quad j \in \overline{1,n}.$$
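This is Cramer's rule applied to the bordered system (2) of the previous corollary: replacing the $j$-th column of the bordered matrix by $[b; 0]$ replaces the $j$-th column of $A$ by $b$ and the $j$-th column of $V^*$ by $0$. A numerical check (NumPy sketch, real case):

```python
import numpy as np

rng = np.random.default_rng(7)
m, n, r = 4, 3, 2
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
b = rng.standard_normal(m)

W, s, Zh = np.linalg.svd(A)
U = W[:, r:]      # basis of N(A*)
V = Zh[r:, :].T   # basis of N(A)

M = np.block([[A, U], [V.T, np.zeros((n - r, m - r))]])
rhs = np.concatenate([b, np.zeros(n - r)])
dM = np.linalg.det(M)

# x_j = det(M with j-th column replaced by [b; 0]) / det(M), j = 1..n.
x = np.empty(n)
for j in range(n):
    Mj = M.copy()
    Mj[:, j] = rhs
    x[j] = np.linalg.det(Mj) / dM

print(np.allclose(x, np.linalg.pinv(A) @ b))  # True: x is the MNLSS
```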

Greville's method

Let $A \in \mathbb{C}^{m\times n}$, and let $A_k = A[:, \overline{1,k}] \in \mathbb{C}^{m\times k}$ be partitioned as
$$A_k = \begin{bmatrix} A_{k-1} & a_k \end{bmatrix}. \tag{1}$$
Let the vectors $d_k$ and $c_k$ be defined by
$$d_k := A_{k-1}^\dagger a_k, \tag{2}$$
$$c_k := a_k - A_{k-1} d_k = P_{N(A_{k-1}^*)}\, a_k. \tag{3}$$
Theorem (Greville).
$$\begin{bmatrix} A_{k-1} & a_k \end{bmatrix}^\dagger = \begin{bmatrix} A_{k-1}^\dagger - d_k b_k \\ b_k \end{bmatrix}, \tag{4}$$
where
$$b_k = \begin{cases} c_k^\dagger, & \text{if } c_k \neq 0, \\ (1 + d_k^* d_k)^{-1} d_k^* A_{k-1}^\dagger, & \text{if } c_k = 0. \end{cases}$$
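The recursion (4) yields $A^\dagger$ one column of $A$ at a time. A minimal real-valued NumPy sketch (`greville_pinv` is our name; `tol` decides the $c_k = 0$ branch numerically):

```python
import numpy as np

def greville_pinv(A, tol=1e-10):
    """Moore-Penrose inverse by Greville's column-recursive method."""
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    a1 = A[:, :1]
    # Initialize with the pseudoinverse of the first column.
    s = float(a1.T @ a1)
    Aplus = a1.T / s if s > tol else np.zeros((1, m))
    for k in range(1, n):
        ak = A[:, k:k+1]
        dk = Aplus @ ak                    # d_k = A_{k-1}^+ a_k
        ck = ak - A[:, :k] @ dk            # c_k = a_k - A_{k-1} d_k
        if np.linalg.norm(ck) > tol:
            bk = ck.T / float(ck.T @ ck)   # b_k = c_k^+
        else:
            bk = dk.T @ Aplus / (1.0 + float(dk.T @ dk))
        Aplus = np.vstack([Aplus - dk @ bk, bk])
    return Aplus

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0],
              [1.0, 0.0, 1.0]])             # full column rank
B = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])             # rank 2: exercises the c_k = 0 branch
print(np.allclose(greville_pinv(A), np.linalg.pinv(A)))  # True
print(np.allclose(greville_pinv(B), np.linalg.pinv(B)))  # True
```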

Schur complement

Let $A$ be partitioned as
$$A = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}, \quad A_{11} \text{ nonsingular},$$
and consider the homogeneous equations
$$A_{11}x_1 + A_{12}x_2 = 0,$$
$$A_{21}x_1 + A_{22}x_2 = 0.$$
Eliminating $x_1$, we get the equation for $x_2$:
$$(A_{22} - A_{21}A_{11}^{-1}A_{12})\,x_2 = 0.$$
The Schur complement of $A_{11}$ in $A$, denoted $A/A_{11}$, is
$$A/A_{11} := A_{22} - A_{21}A_{11}^{-1}A_{12}.$$

Schur complement (cont'd)

Let
$$A = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}, \quad A_{11} \text{ nonsingular}, \qquad A/A_{11} := A_{22} - A_{21}A_{11}^{-1}A_{12}.$$
(a) If $A$ is square, its determinant is $\det A = \det A_{11}\,\det(A/A_{11})$.
(b) The quotient property. If $A_{11}$ is further partitioned as
$$A_{11} = \begin{bmatrix} E & F \\ G & H \end{bmatrix}, \quad E \text{ nonsingular},$$
then $A/A_{11} = (A/E)/(A_{11}/E)$.
(c) $\operatorname{rank} A = \operatorname{rank} A_{11} \iff A/A_{11} = O$.
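Properties (a) and (b) are easy to check numerically for leading blocks (a NumPy sketch; `schur` is our helper name):

```python
import numpy as np

def schur(A, k):
    """Schur complement A/A11 of the leading k x k block A11 in A."""
    A11, A12 = A[:k, :k], A[:k, k:]
    A21, A22 = A[k:, :k], A[k:, k:]
    return A22 - A21 @ np.linalg.solve(A11, A12)

rng = np.random.default_rng(3)
A = rng.standard_normal((5, 5))
k, j = 2, 1   # A11 is the leading 2x2 block, E its leading 1x1 block

# (a) det A = det A11 * det(A/A11)
lhs_a = np.linalg.det(A)
rhs_a = np.linalg.det(A[:k, :k]) * np.linalg.det(schur(A, k))

# (b) quotient property: A/A11 = (A/E)/(A11/E).
# The leading (k-j) x (k-j) block of A/E is exactly A11/E.
lhs_b = schur(A, k)
rhs_b = schur(schur(A, j), k - j)
print(np.isclose(lhs_a, rhs_a), np.allclose(lhs_b, rhs_b))
```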

Schur complement (cont'd)

Let the equation $Ax = b$ be partitioned as
$$\begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} b_1 \\ b_2 \end{bmatrix}. \tag{1}$$
Then (1) is consistent if and only if
$$(A/A_{11})\,x_2 = b_2 - A_{21}A_{11}^{-1}b_1 \tag{2a}$$
is consistent, in which case a solution is completed by
$$x_1 = A_{11}^{-1}(b_1 - A_{12}x_2). \tag{2b}$$
Proof. Solve the top of (1) for $x_1 = A_{11}^{-1}(b_1 - A_{12}x_2)$ and substitute in the bottom to get
$$(A_{22} - A_{21}A_{11}^{-1}A_{12})\,x_2 = b_2 - A_{21}A_{11}^{-1}b_1. \qquad ∎$$
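Steps (2a) and (2b) translate directly into a two-stage block solver (a NumPy sketch for the nonsingular case; `block_solve` is our name):

```python
import numpy as np

def block_solve(A, b, k):
    """Solve Ax = b (A nonsingular) via the Schur complement of the leading k x k block."""
    A11, A12 = A[:k, :k], A[:k, k:]
    A21, A22 = A[k:, :k], A[k:, k:]
    b1, b2 = b[:k], b[k:]
    S = A22 - A21 @ np.linalg.solve(A11, A12)                      # A/A11
    x2 = np.linalg.solve(S, b2 - A21 @ np.linalg.solve(A11, b1))   # (2a)
    x1 = np.linalg.solve(A11, b1 - A12 @ x2)                       # (2b)
    return np.concatenate([x1, x2])

rng = np.random.default_rng(4)
A = rng.standard_normal((6, 6))
b = rng.standard_normal(6)
print(np.allclose(block_solve(A, b, 3), np.linalg.solve(A, b)))  # True
```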

Basic solutions

Let $A \in \mathbb{C}_n^{m\times n}$, $b \in \mathbb{C}^m$, and consider the equation
$$Ax = b. \tag{1}$$
Let $\mathcal{I}(A)$ be the index set of maximal full-rank (nonsingular) submatrices,
$$\mathcal{I}(A) = \{I \in Q_{n,m} : \operatorname{rank} A[I,:] = n\}.$$
For each $I \in \mathcal{I}(A)$, the $I$-th basic solution of (1) is the vector
$$x_I = A[I,:]^{-1}\, b[I],$$
the solution of the subsystem
$$A[I,:]\,x = b[I].$$
There are at most $\binom{m}{n}$ basic solutions.

Let $A \in \mathbb{C}_n^{m\times n}$, $b \in \mathbb{C}^m$. Then the least-squares solution (LSS) $x$ of the equation
$$Ax = b \tag{1}$$
is unique, $x = A^\dagger b$, and is a convex combination of the basic solutions:
$$x = \sum_{I \in \mathcal{I}(A)} \lambda_I\, A[I,:]^{-1} b[I], \qquad \sum_{I \in \mathcal{I}(A)} \lambda_I = 1, \quad \lambda_I \geq 0 \ \forall I.$$
That $x$ is such a combination is perhaps not surprising; the real surprise is that the weights $\lambda_I$ are proportional to the squares of the determinants of $A[I,:]$:
$$\lambda_I \propto \det{}^2 A[I,:].$$
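The weighted combination can be verified against the pseudoinverse on a random full-column-rank example (a NumPy sketch, real case, so $\det^2$ is the ordinary square):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(5)
m, n = 5, 3
A = rng.standard_normal((m, n))   # full column rank (almost surely)
b = rng.standard_normal(m)

# Basic solutions with weights proportional to det^2 of the n x n submatrices.
terms, weights = [], []
for I in combinations(range(m), n):
    AI = A[list(I), :]
    d = np.linalg.det(AI)
    if abs(d) > 1e-12:            # keep only nonsingular submatrices, I in I(A)
        terms.append(np.linalg.solve(AI, b[list(I)]))
        weights.append(d**2)
weights = np.array(weights) / sum(weights)
x_combo = sum(w * t for w, t in zip(weights, terms))

print(np.allclose(x_combo, np.linalg.pinv(A) @ b))  # True: the LSS of Ax = b
```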

The Moore–Penrose inverse and basic inverses

Theorem. Let $A \in \mathbb{C}_n^{m\times n}$. Then
$$A^\dagger = \sum_{I \in \mathcal{I}(A)} \lambda_I\, A[I,:]^{-1}, \qquad \lambda_I = \frac{\det^2 A[I,:]}{\sum_{K \in \mathcal{I}(A)} \det^2 A[K,:]},$$
where each $A[I,:]^{-1}$ is regarded as an $n \times m$ matrix by inserting zero columns in the positions outside $I$, so that the dimensions agree.

Theorem. Let $A \in \mathbb{C}_r^{m\times n}$, and let $\mathcal{N}(A)$ be the index set of maximal nonsingular submatrices,
$$\mathcal{N}(A) := \{(I, J) : \operatorname{rank} A[I, J] = r\}.$$
Then $A^\dagger$ is a convex combination
$$A^\dagger = \sum_{(I,J) \in \mathcal{N}(A)} \lambda_{IJ}\, A[I,J]^{-1}, \qquad \lambda_{IJ} = \frac{\det^2 A[I,J]}{\sum_{(K,L) \in \mathcal{N}(A)} \det^2 A[K,L]},$$
with each $A[I,J]^{-1}$ likewise embedded as an $n \times m$ matrix, zero outside the rows indexed by $J$ and the columns indexed by $I$.
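The first theorem can be checked directly; the zero-column embedding is our reading of the theorem's notation, needed so that the $n \times n$ inverses add up to an $n \times m$ matrix (a NumPy sketch, real full-column-rank case):

```python
import numpy as np
from itertools import combinations

def pinv_from_basic_inverses(A, tol=1e-12):
    """A+ of a full-column-rank A as a det^2-weighted combination of basic inverses.
    Each A[I,:]^{-1} is embedded into an n x m matrix (zero columns outside I)."""
    m, n = A.shape
    acc = np.zeros((n, m))
    total = 0.0
    for I in combinations(range(m), n):
        AI = A[list(I), :]
        d = np.linalg.det(AI)
        if abs(d) > tol:
            padded = np.zeros((n, m))
            padded[:, list(I)] = np.linalg.inv(AI)
            acc += d**2 * padded
            total += d**2
    return acc / total

rng = np.random.default_rng(6)
A = rng.standard_normal((5, 3))
print(np.allclose(pinv_from_basic_inverses(A), np.linalg.pinv(A)))  # True
```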