Lecture 13: Review of Matrix Theory I. Dr. Radhakant Padhi, Asst. Professor, Dept. of Aerospace Engineering, Indian Institute of Science, Bangalore


Definition and Examples

Definition: A matrix is a collection of elements (numbers) arranged in rows and columns.

Examples:

A_{1x3} = [1 2 3],   B_{3x1} = [1; 2; 3],   C_{2x3} = [1 2 3; 4 5 6],
D_{3x2} = [1 2; 3 4; 5 6],   E_{3x3} = [1 4 6; 4 2 5; 6 5 3]

Definitions

Symmetric matrix: A^T = A
Singular matrix: |A| = 0
Inverse of a matrix: B is the inverse of A iff AB = BA = I; A^{-1} = adj(A)/|A|
Orthogonal matrix: A^T A = A A^T = I

Example: A(θ) = [cos θ  -sin θ; sin θ  cos θ]

Result: The columns of an orthogonal matrix are orthonormal.
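
The rotation-matrix example can be checked numerically. A minimal Python/NumPy sketch (the angle value is an arbitrary choice for illustration):

    # Numerical check of the rotation-matrix example (illustrative angle).
    import numpy as np

    theta = 0.3                        # arbitrary angle in radians
    A = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])

    # Orthogonality: A^T A = A A^T = I
    print(np.allclose(A.T @ A, np.eye(2)))   # True
    print(np.allclose(A @ A.T, np.eye(2)))   # True

    # Columns are orthonormal: unit length, mutually perpendicular
    print(np.allclose(np.linalg.norm(A, axis=0), 1.0))  # True
    print(np.isclose(A[:, 0] @ A[:, 1], 0.0))           # True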

Eigenvalues and Eigenvectors

Matrices also act as linear operators, with stretching and rotation operations:

Y = A X,  where  X = [x_1; x_2],  Y = [y_1; y_2]

[Figure: the operator A maps the point P (position vector X) to the point Q (position vector Y).]

Eigenvalues and Eigenvectors

Question: Can we find a direction (vector) along which the matrix acts only as a stretching operator?

Answer: If such a solution exists, then

A X = λ X,  i.e.  (A - λ I) X = 0

For a nontrivial solution:  |A - λ I| = 0

Utility: Stability and control, model reduction, principal component analysis, etc.
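
For a concrete instance, these conditions can be verified with a short NumPy sketch (the 2x2 matrix is an illustrative choice, not from the slides):

    # Eigenvalue/eigenvector computation for a sample matrix.
    import numpy as np

    A = np.array([[4.0, 1.0],
                  [2.0, 3.0]])

    lam, V = np.linalg.eig(A)        # lam: eigenvalues, V: columns are eigenvectors
    print(lam)                       # [5. 2.]

    # Verify A X = lambda X for the first eigenpair
    x = V[:, 0]
    print(np.allclose(A @ x, lam[0] * x))    # True

    # Verify |A - lambda I| = 0 at each eigenvalue
    for l in lam:
        print(np.isclose(np.linalg.det(A - l * np.eye(2)), 0.0))   # True, True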

Terminology

Terminology             | Definition                      | Property of eigenvalues
Positive definite       | A > 0:  X^T A X > 0  ∀ X ≠ 0    | λ_i > 0  ∀ i
Positive semi-definite  | A ≥ 0:  X^T A X ≥ 0  ∀ X ≠ 0    | λ_i ≥ 0  ∀ i
Negative definite       | A < 0:  X^T A X < 0  ∀ X ≠ 0    | λ_i < 0  ∀ i
Negative semi-definite  | A ≤ 0:  X^T A X ≤ 0  ∀ X ≠ 0    | λ_i ≤ 0  ∀ i
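
A small Python sketch of this classification via the eigenvalue column, assuming a symmetric input matrix; the helper name definiteness and the test matrices are illustrative choices:

    # Classify a symmetric matrix by the signs of its eigenvalues.
    import numpy as np

    def definiteness(A, tol=1e-12):
        lam = np.linalg.eigvalsh(A)          # eigenvalues of a symmetric matrix
        if np.all(lam > tol):   return "positive definite"
        if np.all(lam >= -tol): return "positive semi-definite"
        if np.all(lam < -tol):  return "negative definite"
        if np.all(lam <= tol):  return "negative semi-definite"
        return "indefinite"

    print(definiteness(np.array([[2.0, 0.0], [0.0, 3.0]])))   # positive definite
    print(definiteness(np.array([[1.0, 1.0], [1.0, 1.0]])))   # positive semi-definite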

Eigenvalues and Eigenvectors: Some Useful Properties

If λ_1, λ_2, …, λ_n are the eigenvalues of A_{nxn}, then for any positive integer m, λ_1^m, λ_2^m, …, λ_n^m are the eigenvalues of A^m.

If A is a nonsingular matrix with eigenvalues λ_1, λ_2, …, λ_n, then 1/λ_1, 1/λ_2, …, 1/λ_n are the eigenvalues of A^{-1}.

For a triangular matrix, the eigenvalues are the diagonal elements.
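
The first two properties are easy to verify numerically; a NumPy sketch with an illustrative 2x2 matrix:

    # Verify the eigenvalue properties for powers and inverses.
    import numpy as np

    A = np.array([[4.0, 1.0],
                  [2.0, 3.0]])
    lam = np.linalg.eigvals(A)               # eigenvalues of A: [5, 2]

    # Eigenvalues of A^3 are lam**3
    lam_cubed = np.linalg.eigvals(np.linalg.matrix_power(A, 3))
    print(np.allclose(sorted(lam_cubed), sorted(lam**3)))    # True

    # Eigenvalues of A^{-1} are 1/lam
    lam_inv = np.linalg.eigvals(np.linalg.inv(A))
    print(np.allclose(sorted(lam_inv), sorted(1.0/lam)))     # True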

Eigenvalues and Eigenvectors: Some Useful Properties

If an n x n matrix A is symmetric, its eigenvalues are all REAL. Moreover, it has n linearly independent eigenvectors.

If A_{nxn} has n real eigenvalues and n real orthogonal eigenvectors, then the matrix is symmetric. A^T A and A A^T are always positive semi-definite.

If A is a positive definite symmetric matrix, then every principal sub-matrix of A is also symmetric and positive definite. In particular, the diagonal elements of A are positive.
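
A quick numerical spot-check of two of these claims (random test matrix; the seed is arbitrary):

    # Symmetric => real eigenvalues; A^T A is positive semi-definite.
    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((4, 4))          # generic test matrix
    S = A + A.T                              # symmetric by construction

    print(np.allclose(np.linalg.eigvals(S).imag, 0.0))    # True: eigenvalues real
    print(np.all(np.linalg.eigvalsh(A.T @ A) >= -1e-12))  # True: A^T A is PSD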

Generalized Eigenvectors

If an eigenvalue is repeated p times, there may or may not be p linearly independent eigenvectors corresponding to it.

In case linearly independent eigenvectors cannot be found, generalized eigenvectors are the next option.

Example: Suppose A_{3x3} has eigenvalues λ_1, λ_2, λ_2. Then eigenvectors V_1, V_2 and generalized eigenvector V_3 can be found as follows (see the numerical sketch below):

(a) (A - λ_1 I) V_1 = 0
(b) (A - λ_2 I) V_2 = 0
(c) (A - λ_2 I) V_3 = V_2
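
A numerical sketch of steps (b)-(c) for an illustrative defective 2x2 matrix (eigenvalue 3 repeated, only one ordinary eigenvector); a least-squares solve is used since A - λI is singular:

    # Generalized eigenvector for a defective matrix.
    import numpy as np

    A = np.array([[3.0, 1.0],
                  [0.0, 3.0]])           # eigenvalue 3 repeated
    lam = 3.0
    N = A - lam * np.eye(2)

    V2 = np.array([1.0, 0.0])            # ordinary eigenvector: (A - lam I) V2 = 0
    print(np.allclose(N @ V2, 0.0))      # True

    # Generalized eigenvector: solve (A - lam I) V3 = V2
    V3, *_ = np.linalg.lstsq(N, V2, rcond=None)
    print(np.allclose(N @ V3, V2))       # True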

Vector Norm

A vector norm is a real-valued function with the following properties:

(a) ||X|| > 0, and ||X|| = 0 only if X = 0
(b) ||α X|| = |α| ||X||
(c) ||X + Y|| ≤ ||X|| + ||Y||  ∀ X, Y

Vector Norm

||X||_1 = |x_1| + |x_2| + … + |x_n|                       (l_1 norm)
||X||_2 = (|x_1|^2 + |x_2|^2 + … + |x_n|^2)^{1/2}         (l_2 norm)
||X||_3 = (|x_1|^3 + |x_2|^3 + … + |x_n|^3)^{1/3}         (l_3 norm)
||X||_p = (|x_1|^p + |x_2|^p + … + |x_n|^p)^{1/p}         (l_p norm)
||X||_∞ = lim_{p→∞} (|x_1|^p + … + |x_n|^p)^{1/p} = max_i |x_i|   (l_∞ norm)
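
These p-norms map directly onto NumPy's norm function; a short sketch with an illustrative vector:

    # Vector p-norms, matching the definitions above.
    import numpy as np

    X = np.array([3.0, -4.0, 12.0])

    print(np.linalg.norm(X, 1))        # l1 norm: 19.0
    print(np.linalg.norm(X, 2))        # l2 norm: 13.0
    print(np.linalg.norm(X, 3))        # l3 norm
    print(np.linalg.norm(X, np.inf))   # l-infinity norm: 12.0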

Matrix/Operator/Induced Norm

Definition:

||A|| = max_{X ≠ 0} ( ||A X|| / ||X|| ) = max_{||X|| = 1} ||A X||

Properties:

(a) ||A|| > 0, and ||A|| = 0 only if A = 0
(b) ||α A|| = |α| ||A||
(c) ||A + B|| ≤ ||A|| + ||B||
(d) ||A B|| ≤ ||A|| ||B||

Matrix/Operator/Induced Norm

1-Norm:  ||A||_1 = max_{1 ≤ j ≤ n} Σ_{i=1}^{n} |a_ij|    (largest absolute column sum)
2-Norm:  ||A||_2 = σ_max(A)                              (largest singular value)
∞-Norm:  ||A||_∞ = max_{1 ≤ i ≤ n} Σ_{j=1}^{n} |a_ij|    (largest absolute row sum)
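
The three induced norms can be computed and cross-checked against the column-sum, singular-value, and row-sum formulas; an illustrative NumPy sketch:

    # Induced matrix norms, checked against the formulas above.
    import numpy as np

    A = np.array([[1.0, -2.0],
                  [3.0,  4.0]])

    print(np.linalg.norm(A, 1))                    # max column sum: 6.0
    print(np.linalg.norm(A, np.inf))               # max row sum: 7.0
    print(np.linalg.norm(A, 2))                    # largest singular value
    print(np.linalg.svd(A, compute_uv=False)[0])   # same value, via the SVD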

Matrix/Operator/Induced Norm

Frobenius norm:

||A||_F = ( Σ_{i=1}^{m} Σ_{j=1}^{n} |a_ij|^2 )^{1/2}

It holds good for non-square matrices as well, and is used frequently in the neural network and adaptive control literature.
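
A sketch computing the Frobenius norm of a non-square example matrix, cross-checked against the entry-wise definition:

    # Frobenius norm of a non-square matrix.
    import numpy as np

    A = np.array([[1.0, 2.0, 2.0],
                  [2.0, 1.0, 2.0]])

    print(np.linalg.norm(A, 'fro'))    # sqrt(18) ≈ 4.243
    print(np.sqrt(np.sum(A**2)))       # same value, from the definition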

Spectral Radius

For A_{nxn} with eigenvalues λ_1, λ_2, …, λ_n, the spectral radius ρ(A) is defined as

ρ(A) = max_{1 ≤ i ≤ n} |λ_i|

Result:  ||A||_2 = [ρ(A^T A)]^{1/2}

If A is symmetric, then

||A||_2 = [ρ(A^T A)]^{1/2} = [ρ(A^2)]^{1/2} = [(ρ(A))^2]^{1/2} = ρ(A)
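
A numerical check of the result for an illustrative symmetric matrix, with a small helper (hypothetical name spectral_radius) built directly from the definition:

    # Spectral radius and its relation to the 2-norm.
    import numpy as np

    def spectral_radius(A):
        return np.max(np.abs(np.linalg.eigvals(A)))

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])                    # symmetric

    print(np.linalg.norm(A, 2))                   # 3.0
    print(np.sqrt(spectral_radius(A.T @ A)))      # 3.0, the ||A||_2 formula
    print(spectral_radius(A))                     # 3.0, equal since A is symmetric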

Least Square Solutions

System:  A X = b,  where A ∈ R^{mxn}, X ∈ R^n, b ∈ R^m

Case 1: m = n and |A| ≠ 0  (No. of equations = No. of variables)

Unique solution:  X = A^{-1} b
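
A sketch of Case 1 with illustrative numbers; np.linalg.solve is used rather than forming A^{-1} explicitly, which is numerically preferable:

    # Case 1: square nonsingular system, unique solution.
    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 3.0]])
    b = np.array([3.0, 5.0])

    X = np.linalg.solve(A, b)          # solves A X = b directly
    print(X)                           # [0.8 1.4]
    print(np.allclose(A @ X, b))       # True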

Least Square Solutions

Case 2: m < n  (under-constrained problem) (No. of equations < No. of variables)

In this case, there are infinitely many solutions. One way to get a meaningful solution is to formulate the following optimization problem:

Minimize  J = ||X||,  subject to  A X = b

Solution:  X = A^+ b,  where  A^+ = A^T (A A^T)^{-1}  (right pseudo-inverse)

This solution WILL satisfy the equation A X = b exactly.
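
A sketch of Case 2 for an illustrative 1x3 system, forming the right pseudo-inverse from the formula above and cross-checking against NumPy's built-in pinv:

    # Case 2 (m < n): minimum-norm solution via the right pseudo-inverse.
    import numpy as np

    A = np.array([[1.0, 2.0, 3.0]])    # 1 equation, 3 unknowns
    b = np.array([6.0])

    A_plus = A.T @ np.linalg.inv(A @ A.T)     # A^T (A A^T)^{-1}
    X = A_plus @ b
    print(X)                                  # minimum-norm solution
    print(np.allclose(A @ X, b))              # True: constraint satisfied exactly
    print(np.allclose(A_plus, np.linalg.pinv(A)))   # matches NumPy's pinv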

Least Square Solutions

Case 3: m > n  (over-constrained problem) (No. of equations > No. of variables)

In this case, there is in general no exact solution. However, one way to get a meaningful (error-minimizing) solution is to formulate the following optimization problem:

Minimize:  J = ||A X - b||

Solution:  X = A^+ b,  where  A^+ = (A^T A)^{-1} A^T  (left pseudo-inverse)

This solution need not satisfy the equation A X = b exactly.
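
A sketch of Case 3 for an illustrative inconsistent 3x2 system, forming the left pseudo-inverse from the formula above:

    # Case 3 (m > n): least-squares solution via the left pseudo-inverse.
    import numpy as np

    A = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [1.0, 1.0]])         # 3 equations, 2 unknowns
    b = np.array([1.0, 1.0, 0.0])      # inconsistent right-hand side

    A_plus = np.linalg.inv(A.T @ A) @ A.T     # (A^T A)^{-1} A^T
    X = A_plus @ b
    print(X)                                  # error-minimizing solution [1/3 1/3]
    print(np.allclose(A @ X, b))              # False: a residual remains
    print(np.linalg.norm(A @ X - b))          # the minimized residual norm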

Generalized/Pseudo Inverse

Left pseudo-inverse:   A^+ = (A^T A)^{-1} A^T
Right pseudo-inverse:  A^+ = A^T (A A^T)^{-1}

Properties:

(a) A A^+ A = A
(b) A^+ A A^+ = A^+
(c) (A^T)^+ = (A^+)^T
(d) (A^+)^+ = A
(e) A^+ = A^{-1}, if A is square and |A| ≠ 0
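
Properties (a)-(d) can be spot-checked numerically for a random full-rank matrix (sketch; the seed is arbitrary):

    # Checking the pseudo-inverse properties numerically.
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 2))          # tall test matrix
    A_plus = np.linalg.pinv(A)

    print(np.allclose(A @ A_plus @ A, A))               # (a) A A^+ A = A
    print(np.allclose(A_plus @ A @ A_plus, A_plus))     # (b) A^+ A A^+ = A^+
    print(np.allclose(np.linalg.pinv(A.T), A_plus.T))   # (c) (A^T)^+ = (A^+)^T
    print(np.allclose(np.linalg.pinv(A_plus), A))       # (d) (A^+)^+ = A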
