2. Matrices, systems of linear equations

Irena Sýkorová

This teaching text was supported by the Operační program Praha - Adaptabilita.

Basic properties of matrices

Let us have a square matrix A = (a_ij) of order n. The matrix A is called symmetric if A = Aᵀ, i.e. a_ij = a_ji for i, j = 1, ..., n.

A matrix A is called positive definite if all its leading principal minors are positive, i.e. D_1 > 0, ..., D_n > 0, where

      | a_11 a_12 ... a_1i |
D_i = | a_21 a_22 ... a_2i |
      | ...                |
      | a_i1 a_i2 ... a_ii |

A matrix A is called diagonally dominant if the absolute value of the term on the diagonal is greater than or equal to the sum of the absolute values of the other terms, either for all rows or for all columns, i.e.

|a_ii| ≥ Σ_{j≠i} |a_ij|   for i = 1, ..., n.

A matrix is strictly diagonally dominant if the inequalities are strict.

Example 1: Determine whether the matrix A is symmetric, positive definite, diagonally dominant, where

    (  4  1  0 )
A = ( -2  3  2 )
    (  0  2  3 )

The matrix A is not symmetric, because a_12 ≠ a_21.

For the definiteness we compute the minors:

D_1 = 4 (> 0),   D_2 = | 4 1; -2 3 | = 12 + 2 = 14 (> 0),   D_3 = det A = 20 + 6 = 26 (> 0).

All the minors are positive, thus the matrix A is positive definite.

For the diagonal dominance we compare the terms on the diagonal with the sums of the absolute values of the remaining terms, first for the rows:

4 > |1| + |0|
3 < |-2| + |2|
3 > |0| + |2|

The condition is not satisfied for the second row, so let us try the columns:

4 > |-2| + |0|
3 ≥ |1| + |2|
3 > |0| + |2|

The condition holds for all the columns, thus the matrix A is diagonally dominant, but it is not strictly diagonally dominant (the inequality in the second column is not strict).

MATLAB — proof that A is symmetric, i.e. A = Aᵀ, or A − Aᵀ = O (zero matrix):
>> A - A'
or comparison, with result 1 (A is symmetric) or 0 (A is not symmetric):
>> isequal(A, A')
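The diagonal dominance can be tested in one line as well; a possible check (our addition, not in the original commands), using the equivalence of |a_ii| ≥ Σ_{j≠i} |a_ij| with 2|a_ii| ≥ Σ_j |a_ij|:
>> all( 2*abs(diag(A)) >= sum(abs(A),2) )    % dominance by rows
>> all( 2*abs(diag(A)) >= sum(abs(A),1)' )   % dominance by columns
Each command returns 1 exactly when the condition holds for every row (or every column).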

proof that the matrix A is positive definite — all minors are positive:
>> A(1,1)
>> det(A(1:2,1:2))
>> det(A(1:3,1:3))
>> ...
>> det(A)

Eigenvalues of matrices

Let us have a square matrix A = (a_ij) of order n. If for a number λ (real or, in general, complex) and a non-zero vector x̄ the following condition is satisfied,

A x̄ = λ x̄,   (1)

the number λ is called an eigenvalue of the matrix A and the vector x̄ is an eigenvector of A assigned to λ. The equation (1) can be transformed (J is the unit (identity) matrix of order n):

A x̄ = λ x̄,   A x̄ − λ x̄ = ō,   (A − λJ) x̄ = ō.   (2)

This is the matrix form of a homogeneous system of n linear equations with n unknowns, with the matrix A − λJ. We look for a non-zero vector x̄, i.e. a non-zero solution of the system, which exists iff the matrix of the system is singular, i.e.

det(A − λJ) = 0.   (3)

The matrix A − λJ is called the characteristic matrix, the polynomial p(λ) = det(A − λJ) is the characteristic polynomial (of degree n), and the equation (3) is the characteristic equation. We find the roots of a polynomial of n-th degree, and thus any matrix A has n (real or complex, possibly multiple) eigenvalues.

The spectral radius of a matrix A is the greatest of the absolute values of its eigenvalues,

ρ(A) = max{ |λ_i|, where λ_i is an eigenvalue of A }.

If we know an eigenvalue λ, then its eigenvectors are all the non-zero solutions of the system (2).

Example 2: Find the eigenvalues and the spectral radius of the matrix

A = ( 1  2 )
    ( 3  2 )

First form the characteristic matrix

A − λJ = ( 1  2 ) − λ ( 1  0 ) = ( 1−λ   2  )
         ( 3  2 )     ( 0  1 )   (  3   2−λ )

then the characteristic polynomial

det(A − λJ) = (1 − λ)(2 − λ) − 6 = λ² − 3λ − 4.

The eigenvalues are the roots of the characteristic polynomial, λ² − 3λ − 4 = 0, thus the eigenvalues of A are λ_1 = 4, λ_2 = −1.
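Example 2 can also be checked through the characteristic polynomial itself; a minimal MATLAB sketch (these commands are our addition):
>> A = [1 2; 3 2];
>> p = poly(A)           % coefficients of the characteristic polynomial: 1 -3 -4
>> roots(p)              % its roots, i.e. the eigenvalues: 4 and -1
>> max(abs(roots(p)))    % spectral radius: 4
Note that MATLAB's poly(A) expands det(λJ − A) = (−1)ⁿ det(A − λJ), so the polynomial may differ from p(λ) above by sign, but its roots — the eigenvalues — are the same.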

The spectral radius is the greatest of the absolute values of the eigenvalues:

max{ |λ_1|, |λ_2| } = max{ |4|, |−1| } = max{4, 1} = 4.

The spectral radius is ρ(A) = 4.

Remark:
a) If the given matrix is symmetric, then all its eigenvalues are real.
b) A matrix is symmetric and positive definite iff all its eigenvalues are positive.

MATLAB — eigenvalues of A:
>> eig(A)
spectral radius of A (maximum of the absolute values of the eigenvalues):
>> max(abs(eig(A)))

Norm of a matrix

A norm of a matrix is a mapping ‖·‖ : R^{n×n} → R that satisfies the following conditions:
(i) ‖A‖ > 0 for every non-zero square matrix, and ‖A‖ = 0 only for A = O,
(ii) ‖cA‖ = |c| ‖A‖ for every real number c,
(iii) ‖A + B‖ ≤ ‖A‖ + ‖B‖ for every pair of matrices of the same type,
(iv) ‖AB‖ ≤ ‖A‖ ‖B‖ for every pair of matrices of the same type.

We say that a norm of a matrix is consistent with a norm of a vector if for each matrix A and vector x̄ we have ‖A x̄‖ ≤ ‖A‖ ‖x̄‖. Let ‖·‖ be a norm of a vector; then the norm

‖A‖ = sup_{x̄ ≠ ō} ‖A x̄‖ / ‖x̄‖ = sup_{‖x̄‖ = 1} ‖A x̄‖   (4)

is generated by ‖·‖. A norm of a matrix generated by a norm of a vector is consistent with this norm.

Examples of matrix norms:
a) ‖A‖_∞ = max_{1≤i≤n} Σ_{j=1}^{n} |a_ij|, row sum norm, infinity norm,
b) ‖A‖_1 = max_{1≤j≤n} Σ_{i=1}^{n} |a_ij|, column sum norm, 1-norm,
c) ‖A‖_F = ( Σ_{i=1}^{n} Σ_{j=1}^{n} a_ij² )^{1/2}, Frobenius (Schur) norm,
d) ‖A‖_2 = √(ρ(AᵀA)), spectral (Euclidean) norm.

Example 3: Compute all the norms of the matrix

A = ( 1  -5 )
    ( 2   6 )

a) ‖A‖_∞ = max{ |1| + |−5|, |2| + |6| } = max{6, 8} = 8.

b) ‖A‖_1 = max{ |1| + |2|, |−5| + |6| } = max{3, 11} = 11.

c) ‖A‖_F = √(1² + 2² + (−5)² + 6²) = √(1 + 4 + 25 + 36) = √66 ≈ 8.124.

d) First compute

AᵀA = (  1  2 ) ( 1  -5 ) = ( 5   7 )
      ( -5  6 ) ( 2   6 )   ( 7  61 )

then find its eigenvalues:

(5 − λ)(61 − λ) − 49 = 0,   λ² − 66λ + 256 = 0,   λ_1 ≈ 61.8617,   λ_2 ≈ 4.1383.

Then ρ(AᵀA) ≈ 61.8617 and ‖A‖_2 = √61.8617 ≈ 7.865.
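The spectral norm in d) can be verified directly from the definition ‖A‖_2 = √(ρ(AᵀA)); a minimal MATLAB check (these commands are our addition):
>> A = [1 -5; 2 6];
>> e = eig(A'*A)      % eigenvalues of A'A: 4.1383 and 61.8617
>> sqrt(max(e))       % spectral norm, about 7.865, the same as norm(A,2)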

MATLAB — norms of a matrix A:
row sum norm:
>> norm(A,inf)
>> max(sum(abs(A')))
column sum norm:
>> norm(A,1)
>> max(sum(abs(A)))
Frobenius norm:
>> norm(A,'fro')
>> sqrt(sum(diag(A' * A)))
spectral norm:
>> norm(A,2)
>> norm(A)

Remark: The Frobenius norm of a matrix is not generated by the Euclidean norm of a vector; in fact, for the unit matrix J of order n we have ‖J‖_F = √n, whereas any norm generated by (4) gives ‖J‖ = 1. The Euclidean norm of a vector generates the spectral norm of a matrix. If a norm of a matrix is consistent with a norm of a vector, the following condition holds: ρ(A) ≤ ‖A‖.

Conditionality of a matrix

Example 4: Let us have the linear system

2x + 6y = 8
2x + 6.0001y = 8.0001

with the solution x = 1, y = 1. Consider a small change in the coefficients (of order 10⁻⁴), and get the system

2x + 6y = 8
2x + 5.9999y = 8.0002

with the solution x = 10, y = −2. Thus a very small change in the coefficients and the right-hand sides implies a big change in the solution.

A matrix is called well conditioned if relatively small changes in the coefficients and right-hand sides imply relatively small changes in the solution. A matrix is called badly conditioned if relatively small changes in the coefficients and right-hand sides imply relatively big changes in the solution.
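Example 4 can be reproduced numerically; a minimal MATLAB sketch (the matrices below restate the two systems of the example, the commands are our addition):
>> A = [2 6; 2 6.0001]; b = [8; 8.0001];
>> A \ b        % solution [1; 1]
>> B = [2 6; 2 5.9999]; c = [8; 8.0002];
>> B \ c        % solution [10; -2]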

Consider a linear system

A x̄ = b̄   (5)

with a regular (non-singular, invertible) matrix A. Denote by x̄ the exact solution of the system A x̄ = b̄ and by x̄_c the exact solution of the changed system (A + δA) x̄ = b̄ + δb̄; then the estimate of the relative error of the solution,

‖x̄_c − x̄‖ / ‖x̄‖,

depends directly on the product C(A) = ‖A‖ ‖A⁻¹‖, where the norm of the matrix is generated by a norm of a vector. The number C(A) is called the condition number of the matrix A. The bigger it is, the bigger the estimate of the relative error is.

The matrix

A = ( 2  6      )
    ( 2  6.0001 )

from the foregoing example has a big condition number, C(A) ≈ 4 · 10⁵.

MATLAB — condition number of a matrix A:
>> cond(A)

Direct methods for solving linear systems

We will solve a linear system A x̄ = b̄, where A = (a_ij) is a real regular matrix of order n and b̄ = (b_1, b_2, ..., b_n)ᵀ is the column vector of right-hand sides.

Direct methods: Gaussian elimination, Jordan (Gauss-Jordan) elimination, the method with the inverse matrix, Cramer's rule, ...

Method with the inverse matrix: when we know A⁻¹, we have x̄ = A⁻¹ b̄, but sometimes finding A⁻¹ is difficult.

Gaussian elimination: We transform the augmented matrix of the given system (A | b̄), using elementary row operations, to (U | ȳ), where U is an upper triangular matrix — the forward process. Then we solve the system U x̄ = ȳ, computing the coordinates of x̄ from the bottom up — the backward process.

Example 5: Solve the system by using Gaussian elimination:

x_1 + 2x_2 + 2x_3 = 2
2x_1 + 2x_2 + 3x_3 = 5
x_1 + x_2 = 1.   (6)

Solution: We transform the augmented matrix of the system into upper triangular form. In the k-th step we multiply the k-th row by a constant and add this row to the others to eliminate the terms under the diagonal (the constants are written on the right-hand side of the rows). If the term on the diagonal is zero, we have to exchange the rows.

( 1  2  2 | 2 )          ( 1  2  2 |  2 )          ( 1  2  2   |  2   )
( 2  2  3 | 5 ) (-2)   ~ ( 0 -2 -1 |  1 )        ~ ( 0 -2 -1   |  1   )
( 1  1  0 | 1 ) (-1)     ( 0 -1 -2 | -1 ) (-1/2)   ( 0  0 -3/2 | -3/2 )

From here we easily get x_3 = 1, x_2 = −1, x_1 = 2.
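The forward and backward processes can also be written out as a short MATLAB function; the following is a minimal sketch under the assumption that no row exchanges are needed (the name gauss_elim is ours, and a zero pivot would break it):

function x = gauss_elim(A, b)
% forward process: reduce the augmented system to upper triangular form
n = length(b);
for k = 1:n-1
    for j = k+1:n
        m = A(j,k) / A(k,k);          % multiplier; assumes A(k,k) is non-zero
        A(j,:) = A(j,:) - m*A(k,:);   % eliminate the term under the diagonal
        b(j)   = b(j)   - m*b(k);
    end
end
% backward process: compute the coordinates of x from the bottom up
x = zeros(n,1);
for k = n:-1:1
    x(k) = (b(k) - A(k,k+1:n)*x(k+1:n)) / A(k,k);
end
end

For the system (6), the call
>> gauss_elim([1 2 2; 2 2 3; 1 1 0], [2; 5; 1])
returns the vector (2, −1, 1)ᵀ, in agreement with Example 5.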

In Gauss-Jordan elimination we transform A into the unit matrix and the solution is in the right-hand column.

Elementary row operations can be done by multiplying the given matrix by special regular matrices from the left. If we add an m-multiple of the i-th row to the j-th row of a matrix M, we get the product V M (not MV), where V is the unit matrix with m instead of v_ji. If j > i, then V is lower triangular.

Example 6: In the matrix M add the 4-multiple of the 1st row to the 3rd row (i = 1, j = 3, m = 4).

M = ( 1  0  3 )    V = ( 1  0  0 )
    ( 2  2  2 )        ( 0  1  0 )
    ( 4  5  2 )        ( 4  0  1 )

V M = ( 1  0  0 ) ( 1  0  3 )   ( 1  0  3 )
      ( 0  1  0 ) ( 2  2  2 ) = ( 2  2  2 )
      ( 4  0  1 ) ( 4  5  2 )   ( 8  5 14 )

If we exchange the i-th and j-th rows in M, we get the product P M (not MP), where P is the unit matrix with the i-th and j-th rows exchanged. The matrix P is called a permutation matrix.

Example 6: In M exchange the first and second rows (i = 1, j = 2).

M = ( 1  0  3 )    P = ( 0  1  0 )
    ( 2  2  2 )        ( 1  0  0 )
    ( 4  5  2 )        ( 0  0  1 )

P M = ( 0  1  0 ) ( 1  0  3 )   ( 2  2  2 )
      ( 1  0  0 ) ( 2  2  2 ) = ( 1  0  3 )
      ( 0  0  1 ) ( 4  5  2 )   ( 4  5  2 )

We will use the LU decomposition (LU factorization) of a matrix, which means that we express a matrix A in the form of the product A = LU, where L is a lower triangular matrix with ones on the diagonal and U is an upper triangular matrix. The system (5) is then solved in two steps — two systems with triangular matrices:

L ȳ = b̄,   U x̄ = ȳ.

The triangular matrices L and U can be found by using elementary row operations on A, symbolically:

(A | J) ~ ... ~ (U | L′),

where L′ is lower triangular with L′ A = U. We may only add a real multiple of a row to a row below it. From L′ A = U we get A = (L′)⁻¹ U = LU. We can interpret this process as the (step-by-step) multiplication of A by matrices V_k:

L′ A = V_k ··· V_1 A = U,
A = (L′)⁻¹ U = (V_k ··· V_1)⁻¹ U = (V_1⁻¹ ··· V_k⁻¹) U = LU.

Every L has ones on the diagonal and, below the diagonal, the coefficients from the Gaussian elimination with opposite signs.

Example 7: Solve the system (6) by using the LU decomposition. The matrix of (6) can be decomposed as

         ( 1   0   0 ) ( 1  2  2   )   ( 1  2  2 )
A = LU = ( 2   1   0 ) ( 0 -2 -1   ) = ( 2  2  3 )
         ( 1  1/2  1 ) ( 0  0 -3/2 )   ( 1  1  0 )

Then we solve L ȳ = b̄:

( 1   0   0 ) ( y_1 )   ( 2 )
( 2   1   0 ) ( y_2 ) = ( 5 )
( 1  1/2  1 ) ( y_3 )   ( 1 )

Now we get from the first row y_1 = 2; after substituting we get y_2 = 1 from the second row, and from the third row y_3 = −3/2, thus ȳ = (2, 1, −3/2)ᵀ.

Now we solve the system U x̄ = ȳ, which is the backward process of the Gaussian elimination:

( 1  2  2   ) ( x_1 )   (  2   )
( 0 -2 -1   ) ( x_2 ) = (  1   )
( 0  0 -3/2 ) ( x_3 )   ( -3/2 )

and obtain x̄ = (2, −1, 1)ᵀ.

Even if A is regular, we can get zeros on the diagonal. Then we have to exchange rows first and decompose the obtained matrix: P A = LU. The solution of such a system:

L ȳ = P b̄,   U x̄ = ȳ.

MATLAB — direct methods for solving a linear system (A is the matrix of the system, b the column vector of right-hand sides):
Matlab command:
>> x = A \ b
with the inverse matrix:
>> A1 = inv(A)
>> x = A1 * b
with the LU decomposition (solve Ly = Pb, Ux = y, where P is a permutation matrix):
>> [L, U, P] = lu(A)
>> x = U \ ( L \ (P*b))
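The factorization in Example 7 can be checked in MATLAB; a minimal sketch using the matrices computed above (the commands are our addition):
>> A = [1 2 2; 2 2 3; 1 1 0]; b = [2; 5; 1];
>> L = [1 0 0; 2 1 0; 1 0.5 1];
>> U = [1 2 2; 0 -2 -1; 0 0 -1.5];
>> L*U - A       % zero matrix, so indeed A = LU
>> y = L \ b     % forward substitution: [2; 1; -1.5]
>> x = U \ y     % backward substitution: [2; -1; 1]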

Example 7: Solve the system (6) by using LU decomposition. The matrix (6) can be decomposed Then we solve Lȳ = b A = LU = 1 0 0 1 0 1 0 1 = 1 3. 1 1 3 1 0 0 1 1 0 1 0 0 1 0 y 1 y = 5, 1 1 1 y 3 1 now we get from the first row y 1 =, and after substituting we get y = 1 from the second row, and from the third row get y 3 = 3, thus ȳ = Now solve the system U x = ȳ, which is the backward process of the Gaussian elimination: 1 3 1 0 1 x 1 x = 1 3 3 0 0 x 3. x 1 x = 1 x 3 1 Even if A is regular we can have zeros on the diagonal. We have to exchange rows first, and then decompose the obtained matrix: P A = LU. The solution of such a system: Lȳ = P b U x = ȳ. MATLAB direct methods for solving linear system (A matrix of szstem, b column vector of right sides): Matlab command: >> x = A \ b with inverse matrix: >> A1 = inv(a) >> x = A1 * b with LU decomposition (solve Ly = P b, Ux = y, where P is permutation matrix): >> [L, U, P] = lu(a) >> x = U \ ( L \ (P*b)) 8