# ALGEBRAIC EIGENVALUE PROBLEM


ALGEBRAIC EIGENVALUE PROBLEM

BY J. H. WILKINSON, M.A. (Cantab.), Sc.D.

CLARENDON PRESS · OXFORD · 1965

## Contents

### 1. THEORETICAL BACKGROUND

- Introduction 1
- Definitions 2
- Eigenvalues and eigenvectors of the transposed matrix 3
- Distinct eigenvalues 4
- Similarity transformations 6
- Multiple eigenvalues and canonical forms for general matrices 7
- Defective system of eigenvectors 9
- The Jordan (classical) canonical form 10
- The elementary divisors 12
- Companion matrix of the characteristic polynomial of A 12
- Non-derogatory matrices
- The Frobenius (rational) canonical form
- Relationship between the Jordan and Frobenius canonical forms 16
- Equivalence transformations 17
- Lambda matrices 18
- Elementary operations 19
- Smith's canonical form 19
- The highest common factor of k-rowed minors of a λ-matrix 22
- Invariant factors of (A − λI)
- The triangular canonical form
- Hermitian and symmetric matrices 24
- Elementary properties of Hermitian matrices 25
- Complex symmetric matrices 26
- Reduction to triangular form by unitary transformations 27
- Quadratic forms 27
- Necessary and sufficient conditions for positive definiteness 28
- Differential equations with constant coefficients
- Solutions corresponding to non-linear elementary divisors
- Differential equations of higher order 32
- Second-order equations of special form 34
- Explicit solution of By = Ay 35
- Equations of the form (AB − λI)x = 0 35
- The minimum polynomial of a vector 36
- The minimum polynomial of a matrix 37
- Cayley-Hamilton theorem 38
- Relation between minimum polynomial and canonical forms 39
- Principal vectors
- Elementary similarity transformations
- Properties of elementary matrices
- Reduction to triangular canonical form by elementary similarity transformations 45
- Elementary unitary transformations 46
- Elementary unitary Hermitian matrices
- Reduction to triangular form by elementary unitary transformations 50
- Normal matrices 51
- Commuting matrices 52
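The contents above open with similarity transformations and canonical forms. As a small illustration (not taken from the book), the following pure-Python sketch checks the invariance that underlies those sections: a similarity transformation B = S·A·S⁻¹ leaves the trace and determinant of A, and hence the coefficients of its characteristic polynomial, unchanged. The matrices A and S here are arbitrary examples chosen so the arithmetic stays exactly integral.

```python
# Illustration: a similarity transformation B = S * A * inv(S) preserves the
# trace and determinant of a 3x3 matrix (both appear among the coefficients of
# the characteristic polynomial, which is invariant under similarity).

def matmul(X, Y):
    """Multiply two 3x3 matrices given as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def trace(X):
    return sum(X[i][i] for i in range(3))

def det3(X):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    return (X[0][0] * (X[1][1] * X[2][2] - X[1][2] * X[2][1])
          - X[0][1] * (X[1][0] * X[2][2] - X[1][2] * X[2][0])
          + X[0][2] * (X[1][0] * X[2][1] - X[1][1] * X[2][0]))

A = [[2, 1, 0],
     [1, 3, 1],
     [0, 1, 4]]

# Unit lower-triangular S and its exact inverse (det S = 1, all entries integral).
S     = [[1, 0, 0], [1, 1, 0], [0, 1, 1]]
S_inv = [[1, 0, 0], [-1, 1, 0], [1, -1, 1]]

B = matmul(S, matmul(A, S_inv))   # the similar matrix B = S A S^{-1}
```

Because S is unit lower triangular, every entry of B is an integer and the invariance can be checked exactly.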

- Eigenvalues of AB 54
- Vector and matrix norms 55
- Subordinate matrix norms 56
- The Euclidean and spectral norms 57
- Norms and limits 58
- Avoiding use of infinite matrix series

### 2. PERTURBATION THEORY

- Introduction 62
- Ostrowski's theorem on continuity of the eigenvalues 63
- Algebraic functions 64
- … 65
- Perturbation theory for simple eigenvalues 66
- Perturbation of corresponding eigenvectors 67
- Matrix with linear elementary divisors 68
- First-order perturbations of eigenvalues 68
- First-order perturbations of eigenvectors 69
- Higher-order perturbations 70
- Multiple eigenvalues 70
- Gerschgorin's theorems 71
- Perturbation theory based on Gerschgorin's theorems 72
- Case 1: perturbation of a simple eigenvalue λ₁ of a matrix having linear elementary divisors 72
- Case 2: perturbation of a multiple eigenvalue λ₁ of a matrix having linear elementary divisors 75
- Case 3: perturbation of a simple eigenvalue of a matrix having one or more non-linear elementary divisors 77
- Case 4: perturbations of the eigenvalues corresponding to a non-linear elementary divisor of a non-derogatory matrix 79
- Case 5: perturbations of eigenvalues λᵢ when there is more than one divisor involving (λᵢ − λ) and at least one of them is non-linear 80
- Perturbations corresponding to the general distribution of non-linear divisors 81
- Perturbation theory for the eigenvectors from Jordan canonical form 81
- Perturbations of eigenvectors corresponding to a multiple eigenvalue (linear elementary divisors) 83
- Limitations of perturbation theory 84
- Relationships between the sᵢ 85
- The condition of a computing problem 86
- Condition numbers 86
- Spectral condition number of A with respect to its eigenproblem 87
- Properties of spectral condition number 88
- Invariant properties of condition numbers 89
- Very ill-conditioned matrices 90
- Perturbation theory for real symmetric matrices 93
- Unsymmetric perturbations 93
- Symmetric perturbations 94
- Classical techniques 94
- Symmetric matrix of rank unity 97
- Extremal properties of eigenvalues 98
- Minimax characterization of eigenvalues 99
- Eigenvalues of the sum of two symmetric matrices 101
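The perturbation-theory sections above lean heavily on Gerschgorin's theorems. As a minimal sketch (not from the book), the following computes the Gerschgorin discs of a small hypothetical symmetric matrix whose eigenvalues are known exactly, and checks that each eigenvalue lies in the union of the discs.

```python
# Illustration of Gerschgorin's theorem: every eigenvalue of A lies in at least
# one disc centred at a diagonal entry a_ii with radius sum_{j != i} |a_ij|.
# The example matrix is symmetric tridiagonal with eigenvalues 2 and 2 ± sqrt(2).
import math

A = [[2, 1, 0],
     [1, 2, 1],
     [0, 1, 2]]

def gerschgorin_discs(M):
    """Return one (centre, radius) pair per row of M."""
    n = len(M)
    return [(M[i][i], sum(abs(M[i][j]) for j in range(n) if j != i))
            for i in range(n)]

discs = gerschgorin_discs(A)                  # [(2, 1), (2, 2), (2, 1)]
eigenvalues = [2 - math.sqrt(2), 2.0, 2 + math.sqrt(2)]

# Every eigenvalue lies in the union of the discs.
covered = all(any(abs(lam - c) <= r for c, r in discs) for lam in eigenvalues)
```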

- Practical applications 102
- Further applications of minimax principle
- Separation theorem
- The Wielandt-Hoffman theorem

### 3. ERROR ANALYSIS

- Introduction 110
- Fixed-point operations 110
- Accumulation of inner-products 111
- Floating-point operations 112
- Simplified expressions for error bounds 113
- Error bounds for some basic floating-point computations 114
- Bounds for norms of the error matrices 115
- Accumulation of inner-products in floating-point arithmetic 116
- Error bounds for some basic fl₂( ) computations 117
- Computation of square roots 118
- Block-floating vectors and matrices 119
- Fundamental limitations of t-digit computation 120
- Eigenvalue techniques based on reduction by similarity transformations 123
- Error analysis of methods based on elementary non-unitary transformations 124
- Error analysis of methods based on elementary unitary transformations 126
- Superiority of the unitary transformation 128
- Real symmetric matrices 129
- Limitations of unitary transformations 129
- Error analysis of floating-point computation of plane rotations 131
- Multiplication by a plane rotation 133
- Multiplication by a sequence of plane rotations 134
- Error in product of approximate plane rotations 139
- Errors in similarity transforms 140
- Symmetric matrices 141
- Plane rotations in fixed-point arithmetic 143
- Alternative computation of sin θ and cos θ 145
- Pre-multiplication by an approximate fixed-point rotation 145
- Multiplication by a sequence of plane rotations (fixed-point) 147
- The computed product of an approximate set of plane rotations 148
- Errors in similarity transformations 148
- General comments on the error bounds 151
- Elementary Hermitian matrices in floating-point 152
- Error analysis of the computation of an elementary Hermitian matrix
- Pre-multiplication by an approximate elementary Hermitian matrix
- Multiplication by a sequence of approximate elementary Hermitians 160
- Non-unitary elementary matrices analogous to plane rotations 162
- Non-unitary elementary matrices analogous to elementary Hermitian matrices
- Pre-multiplication by a sequence of non-unitary matrices
- A priori error bounds
- Departure from normality
- Simple examples 169
- A posteriori bounds 170
- A posteriori bounds for normal matrices 170

- Rayleigh quotient 172
- Error in Rayleigh quotient
- Hermitian matrices
- Pathologically close eigenvalues
- Non-normal matrices
- Error analysis for a complete eigensystem 180
- Conditions limiting attainable accuracy
- Non-linear elementary divisors 182
- Approximate invariant subspaces 184
- Almost normal matrices

### 4. SOLUTION OF LINEAR ALGEBRAIC EQUATIONS

- Introduction 189
- Perturbation theory 189
- Condition numbers 191
- Equilibrated matrices 192
- Simple practical examples 193
- Condition of matrix of eigenvectors 193
- Explicit solution 194
- General comments on condition of matrices 195
- Relation of ill-conditioning to near-singularity 196
- Limitations imposed by t-digit arithmetic 197
- Algorithms for solving linear equations 198
- Gaussian elimination 200
- Triangular decomposition 201
- Structure of triangular decomposition matrices 201
- Explicit expressions for elements of the triangles 202
- Breakdown of Gaussian elimination 204
- Numerical stability 205
- Significance of the interchanges
- Error analysis of Gaussian elimination 209
- Upper bounds for the perturbation matrices using fixed-point arithmetic 211
- Upper bound for elements of reduced matrices 212
- Complete pivoting 212
- Practical procedure with partial pivoting 214
- Floating-point error analysis 214
- Floating-point decomposition without pivoting 215
- Loss of significant figures 217
- A popular fallacy 217
- Matrices of special form 218
- Gaussian elimination on a high-speed computer 220
- Solutions corresponding to different right-hand sides 221
- Direct triangular decomposition 221
- Relations between Gaussian elimination and direct triangular decomposition 223
- Examples of failure and non-uniqueness of decomposition 224
- Triangular decomposition with row interchanges 225
- Error analysis of triangular decomposition 227
- Evaluation of determinants 228
- Cholesky decomposition 229
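The linear-equations material above centres on Gaussian elimination and the significance of the interchanges. A minimal pure-Python sketch of elimination with partial pivoting (a plain restatement of the standard algorithm, not Wilkinson's analysed version) applied to a small hypothetical system:

```python
# Sketch of Gaussian elimination with partial pivoting: solve A x = b.

def solve(A, b):
    """Solve A x = b by elimination with row interchanges; A and b are copied."""
    n = len(A)
    M = [row[:] for row in A]
    x = b[:]
    for k in range(n - 1):
        # Partial pivoting: bring the largest remaining entry of column k up.
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        x[k], x[p] = x[p], x[k]
        for i in range(k + 1, n):
            m = M[i][k] / M[k][k]
            for j in range(k, n):
                M[i][j] -= m * M[k][j]
            x[i] -= m * x[k]
    # Back-substitution on the resulting upper-triangular system.
    for k in range(n - 1, -1, -1):
        x[k] = (x[k] - sum(M[k][j] * x[j] for j in range(k + 1, n))) / M[k][k]
    return x

A = [[2.0, 1.0, 0.0],
     [1.0, 2.0, 1.0],
     [0.0, 1.0, 2.0]]
b = [3.0, 4.0, 3.0]   # the exact solution is x = [1, 1, 1]
x = solve(A, b)
```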

- Symmetric matrices which are not positive definite 230
- Error analysis of Cholesky decomposition in fixed-point arithmetic 231
- An ill-conditioned matrix 233
- Triangularization using elementary Hermitian matrices 233
- Error analysis of Householder triangularization 236
- Triangularization by elementary stabilized matrices of the type M′ 236
- Evaluation of determinants of leading principal minors 237
- Triangularization by plane rotations 239
- Error analysis of Givens reduction 240
- Uniqueness of orthogonal triangularization 241
- Schmidt orthogonalization 242
- Comparison of the methods of triangularization 244
- Back-substitution 247
- High accuracy of computed solutions of triangular sets of equations 249
- Solution of a general set of equations 251
- Computation of the inverse of a general matrix 252
- Accuracy of computed solutions 253
- Ill-conditioned matrices which give no small pivots 254
- Iterative improvements of approximate solution 255
- Effect of rounding errors on the iterative process
- The iterative procedure in fixed-point computation
- Simple example of iterative procedure 258
- General comments on the iterative procedure 260
- Related iterative procedures 261
- Limitations of the iterative procedure 261
- Rigorous justification of the iterative method

### 5. HERMITIAN MATRICES

- Introduction
- The classical Jacobi method for real symmetric matrices
- Rate of convergence
- Convergence to fixed diagonal matrix
- Serial Jacobi method
- The Gerschgorin discs
- Ultimate quadratic convergence of Jacobi methods 270
- Close and multiple eigenvalues
- Calculation of cos θ and sin θ
- Simpler determination of the angles of rotation
- The threshold Jacobi method
- Calculation of the eigenvectors
- Error analysis of the Jacobi method 279
- Accuracy of the computed eigenvectors 280
- Error bounds for fixed-point computation 281
- Organizational problems 282
- Givens' method
- Givens' process on a computer with a two-level store
- Floating-point error analysis of Givens' process 286
- Fixed-point error analysis
- Householder's method 290
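The Hermitian-matrix chapter above opens with the classical Jacobi method: repeatedly annihilate the largest off-diagonal element of a real symmetric matrix with a plane rotation. The following pure-Python sketch restates that idea on a small hypothetical matrix; it is an illustration only, not Wilkinson's carefully analysed formulation.

```python
# Sketch of the classical Jacobi method: each step applies a plane rotation
# J^T A J chosen to zero the currently largest off-diagonal element.
import math

def jacobi_eigenvalues(A, max_rotations=100):
    n = len(A)
    A = [row[:] for row in A]
    for _ in range(max_rotations):
        # Locate the off-diagonal element of largest modulus.
        p, q = max(((i, j) for i in range(n) for j in range(i + 1, n)),
                   key=lambda ij: abs(A[ij[0]][ij[1]]))
        if abs(A[p][q]) < 1e-14:
            break
        # Angle chosen so the rotated (p, q) element vanishes:
        # tan(2*theta) = 2*a_pq / (a_pp - a_qq).
        theta = 0.5 * math.atan2(2 * A[p][q], A[p][p] - A[q][q])
        c, s = math.cos(theta), math.sin(theta)
        for k in range(n):           # post-multiply: columns p and q
            Akp, Akq = A[k][p], A[k][q]
            A[k][p] = c * Akp + s * Akq
            A[k][q] = -s * Akp + c * Akq
        for k in range(n):           # pre-multiply: rows p and q
            Apk, Aqk = A[p][k], A[q][k]
            A[p][k] = c * Apk + s * Aqk
            A[q][k] = -s * Apk + c * Aqk
    return sorted(A[i][i] for i in range(n))

A = [[2, 1, 0],
     [1, 2, 1],
     [0, 1, 2]]
eigs = jacobi_eigenvalues(A)   # exact eigenvalues: 2 - sqrt(2), 2, 2 + sqrt(2)
```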

- Taking advantage of symmetry 292
- Storage considerations
- Householder's process on a computer with a two-level store
- Householder's method in fixed-point arithmetic
- Error analyses of Householder's method
- Eigenvalues of a symmetric tri-diagonal matrix
- Sturm sequence property 300
- Method of bisection 302
- Numerical stability of the bisection method 302
- General comments on the bisection method
- Small eigenvalues 307
- Close eigenvalues and small βᵢ 308
- Fixed-point computation of the eigenvalues 312
- Computation of the eigenvectors of a tri-diagonal form
- Instability of the explicit expression for the eigenvectors 319
- Inverse iteration 321
- Choice of initial vector
- Error analysis
- Close eigenvalues and small βᵢ 327
- Independent vectors corresponding to coincident eigenvalues 328
- Alternative method for computing the eigenvectors 330
- Comments on the eigenproblem for tri-diagonal matrices
- Completion of the Givens and Householder methods 333
- Comparison of methods 334
- Quasi-symmetric tri-diagonal matrices 335
- Calculation of the eigenvectors 336
- Equations of the form Ax = λBx and ABx = λx 337
- Simultaneous reduction of A and B to diagonal form
- Tri-diagonal A and B 340
- Complex Hermitian matrices 342

### 6. REDUCTION OF A GENERAL MATRIX TO CONDENSED FORM

- Introduction
- Givens' method
- Householder's method 347
- Storage considerations 350
- Error analysis 350
- Relationship between the Givens and Householder methods 351
- Elementary stabilized transformations 353
- Significance of the permutations
- Direct reduction to Hessenberg form
- Incorporation of interchanges
- Error analysis
- Related error analyses
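Among the headings above are the Sturm sequence property and the method of bisection for a symmetric tridiagonal matrix. A minimal sketch (not from the book): the number of negative values in the sequence dᵢ = (aᵢ − x) − bᵢ₋₁²/dᵢ₋₁ counts the eigenvalues below x, and bisection on that count isolates any single eigenvalue. The matrix here is a small hypothetical example with known eigenvalues.

```python
# Sturm-sequence bisection for tridiag(a, b): diagonal a, off-diagonal b.
import math

def count_below(a, b, x):
    """Number of eigenvalues of tridiag(a, b) that are smaller than x."""
    count, d = 0, 1.0
    for i in range(len(a)):
        off = b[i - 1] ** 2 if i > 0 else 0.0
        d = (a[i] - x) - off / d
        if d == 0.0:
            d = -1e-30     # nudge an exact zero so the next division is safe
        if d < 0:
            count += 1
    return count

def bisect_eigenvalue(a, b, k, lo, hi, tol=1e-12):
    """Find the k-th smallest eigenvalue (k = 1, 2, ...) in [lo, hi] by bisection."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if count_below(a, b, mid) >= k:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

a = [2.0, 2.0, 2.0]     # diagonal
b = [1.0, 1.0]          # off-diagonal; eigenvalues are 2 - sqrt(2), 2, 2 + sqrt(2)
smallest = bisect_eigenvalue(a, b, 1, 0.0, 4.0)
```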

- Poor determination of the Hessenberg matrix
- Reduction to Hessenberg form using stabilized matrices of the type M′
- The method of Krylov
- Gaussian elimination by columns
- Practical difficulties
- Condition of C for some standard distributions of eigenvalues
- Initial vectors of grade less than n 374
- Practical experience 376
- Generalized Hessenberg processes 377
- Failure of the generalized Hessenberg process 378
- The Hessenberg method 379
- Practical procedure 380
- Relation between the Hessenberg method and earlier methods 381
- The method of Arnoldi 382
- Practical considerations 383
- Significance of re-orthogonalization 385
- The method of Lanczos 388
- Failure of procedure
- The practical Lanczos process
- General comments on the unsymmetric Lanczos process 394
- The symmetric Lanczos process 394
- Reduction of a Hessenberg matrix to a more compact form 395
- Reduction of a lower Hessenberg matrix to tri-diagonal form 396
- The use of interchanges 397
- Effect of a small pivotal element 398
- Error analysis 399
- The Hessenberg process applied to a lower Hessenberg matrix 402
- Relationship between the Hessenberg process and the Lanczos process 402
- Reduction of a general matrix to tri-diagonal form 403
- Comparison with Lanczos method 404
- Re-examination of reduction to tri-diagonal form 404
- Reduction from upper Hessenberg form to Frobenius form 405
- Effect of small pivot
- General comments on the stability
- Specialized upper Hessenberg form
- Direct determination of the characteristic polynomial 410

### 7. EIGENVALUES OF MATRICES OF CONDENSED FORMS

- Introduction 413
- Explicit polynomial form 413
- Condition numbers of explicit polynomials 416
- Some typical distributions of zeros 417
- Final assessment of Krylov's method
- General comments on explicit polynomials
- Tri-diagonal matrices
- Determinants of Hessenberg matrices
- Effect of rounding errors 427
- Floating-point accumulation 428
- Evaluation by orthogonal transformations 429
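The headings above include the method of Krylov for the characteristic polynomial. The idea can be sketched in a few lines of pure Python (an illustration on a benign hypothetical matrix; the chapter's point is that this is often badly conditioned): express Aⁿb in terms of b, Ab, …, Aⁿ⁻¹b, and read the characteristic-polynomial coefficients off that relation.

```python
# Sketch of Krylov's method: solve C x = A^n b, where the columns of C are the
# Krylov vectors b, Ab, ..., A^{n-1} b; then the monic characteristic polynomial
# is lambda^n - x_{n-1} lambda^{n-1} - ... - x_0.

def mat_vec(A, v):
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]

def solve(A, b):
    """Gaussian elimination with partial pivoting for the small Krylov system."""
    n = len(A)
    M = [A[i][:] + [b[i]] for i in range(n)]
    for k in range(n - 1):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            m = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= m * M[k][j]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][j] * x[j] for j in range(k + 1, n))) / M[k][k]
    return x

A = [[2, 1, 0],
     [1, 2, 1],
     [0, 1, 2]]
n = len(A)

# Build the Krylov vectors b, Ab, A^2 b, A^3 b.
krylov = [[1.0, 0.0, 0.0]]
for _ in range(n):
    krylov.append(mat_vec(A, krylov[-1]))

C = [[krylov[j][i] for j in range(n)] for i in range(n)]   # columns b, Ab, A^2 b
x = solve(C, krylov[n])          # A^3 b = x0*b + x1*Ab + x2*A^2 b

# Coefficients of lambda^3 + c2*lambda^2 + c1*lambda + c0.
char_poly = [1.0, -x[2], -x[1], -x[0]]
```

For this matrix the characteristic polynomial is λ³ − 6λ² + 10λ − 4.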

- Evaluation of determinants of general matrices 431
- The generalized eigenvalue problem
- Indirect determinations of the characteristic polynomial
- Le Verrier's method
- Iterative methods based on interpolation
- Asymptotic rate of convergence
- Multiple zeros
- Inversion of the functional relationship 439
- The method of bisection 440
- Newton's method 441
- Comparison of Newton's method with interpolation 442
- Methods giving cubic convergence 443
- Laguerre's method
- Complex zeros
- Complex conjugate zeros 447
- Bairstow's method 449
- The generalized Bairstow method 450
- Practical considerations
- Effect of rounding errors on asymptotic convergence
- The method of bisection 453
- Successive linear interpolation 455
- Multiple and pathologically close eigenvalues 457
- Other interpolation methods 458
- Methods involving the use of a derivative 459
- Criterion for acceptance of a zero
- Effect of rounding errors
- Suppression of computed zeros 464
- Deflation for Hessenberg matrices 465
- Deflation of tri-diagonal matrices 468
- Deflation by rotations or stabilized elementary transformations 469
- Stability of the deflation 472
- General comments on deflation
- Suppression of computed zeros
- Suppression of computed quadratic factors
- General comments on the methods of suppression
- Asymptotic rates of convergence 478
- Convergence in the large
- Complex zeros
- Recommendations 482
- Complex matrices
- Matrices containing an independent parameter

### 8. THE LR AND QR ALGORITHMS

- Introduction 485
- Real matrices with complex eigenvalues
- The LR algorithm
- Proof of the convergence of the Aₛ 489
- Positive definite Hermitian matrices 493
- Complex conjugate eigenvalues
- Introduction of interchanges
- Convergence of the modified process

- Preliminary reduction of original matrix 501
- Invariance of upper Hessenberg form
- Simultaneous row and column operations
- Acceleration of convergence
- Incorporation of shifts of origin
- Choice of shift of origin
- Deflation of the matrix
- Practical experience of convergence 510
- Improved shift strategy 511
- Complex conjugate eigenvalues 512
- Criticisms of the modified LR algorithm 515
- The QR algorithm
- Convergence of the QR algorithm
- Formal proof of convergence
- Disorder of the eigenvalues
- Eigenvalues of equal modulus
- Alternative proof for the LR technique
- Practical application of the QR algorithm 523
- Shifts of origin 524
- Decomposition of Aₛ
- Practical procedure
- Avoiding complex conjugate shifts
- Double QR step using elementary Hermitians 532
- Computational details
- Decomposition of Aₛ
- Double-shift technique for LR
- Assessment of LR and QR algorithms
- Multiple eigenvalues 540
- Special use of the deflation process 543
- Symmetric matrices 544
- Relationship between LR and QR algorithms 545
- Convergence of the Cholesky LR algorithm 546
- Cubic convergence of the QR algorithm 548
- Shift of origin in Cholesky LR 549
- Failure of the Cholesky decomposition 550
- Cubically convergent LR process 551
- Band matrices
- QR decomposition of a band matrix
- Error analysis 561
- Unsymmetric band matrices
- Simultaneous decomposition and recombination in QR algorithm
- Reduction of band width

### 9. ITERATIVE METHODS

- Introduction 570
- The power method 570
- Direct iteration with a single vector 571
- Shift of origin 572
- Effect of rounding errors 573
- Variation of p 576
- Ad hoc choice of p 577
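The LR and QR material above rests on one simple iteration: factor Aₛ = QₛRₛ and form Aₛ₊₁ = RₛQₛ. The following pure-Python sketch runs the basic unshifted QR algorithm on a small hypothetical symmetric matrix (shifts of origin, treated at length in the chapter, would accelerate convergence dramatically); the QR factorisation here is plain Gram-Schmidt, adequate only for this tiny well-conditioned example.

```python
# Sketch of the unshifted QR algorithm: A_{s+1} = R_s Q_s where A_s = Q_s R_s.
import math

def qr_decompose(A):
    """Gram-Schmidt QR of a small square matrix (columns assumed independent).
    Q is returned as a list of orthonormal columns."""
    n = len(A)
    cols = [[A[i][j] for i in range(n)] for j in range(n)]
    Q, R = [], [[0.0] * n for _ in range(n)]
    for j in range(n):
        v = cols[j][:]
        for k in range(j):
            R[k][j] = sum(Q[k][i] * cols[j][i] for i in range(n))
            v = [v[i] - R[k][j] * Q[k][i] for i in range(n)]
        R[j][j] = math.sqrt(sum(t * t for t in v))
        Q.append([t / R[j][j] for t in v])
    return Q, R

def qr_step(A):
    Q, R = qr_decompose(A)
    n = len(A)
    # A_{s+1} = R Q  (Q is stored by columns, so matrix entry Q_{k,j} is Q[j][k]).
    return [[sum(R[i][k] * Q[j][k] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[2.0, 1.0],
     [1.0, 1.0]]          # eigenvalues (3 ± sqrt(5)) / 2
for _ in range(60):
    A = qr_step(A)
eigs = sorted(A[i][i] for i in range(2))
```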

- Aitken's acceleration technique 578
- Complex conjugate eigenvalues
- Calculation of the complex eigenvector
- Shift of origin
- Non-linear divisors
- Simultaneous determination of several eigenvalues 583
- Complex matrices 584
- Deflation 584
- Deflation based on similarity transformations 585
- Deflation using invariant subspaces 587
- Deflation using stabilized elementary transformations 587
- Deflation using unitary transformations 589
- Numerical stability 590
- Stability of unitary transformations
- Deflation by non-similarity transformations
- General reduction using invariant subspaces
- Practical application 601
- Treppen-iteration 602
- Accurate determination of complex conjugate eigenvalues 604
- Very close eigenvalues 606
- Orthogonalization techniques 606
- Analogue of treppen-iteration using orthogonalization 607
- Bi-iteration 609
- Richardson's purification process
- Matrix squaring 615
- Numerical stability 616
- Use of Chebyshev polynomials 617
- General assessment of methods based on direct iteration 618
- Inverse iteration
- Error analysis of inverse iteration
- General comments on the analysis 621
- Further refinement of eigenvectors 622
- Non-linear elementary divisors 626
- Inverse iteration with Hessenberg matrices 626
- Degenerate cases
- Inverse iteration with band matrices
- Complex conjugate eigenvectors 629
- Error analysis
- The generalized eigenvalue problem 633
- Variation of approximate eigenvalues 635
- Refinement of eigensystems
- Refinement of the eigenvectors
- Complex conjugate eigenvalues
- Coincident and pathologically close eigenvalues 644
- Comments on the ACE programmes 646

BIBLIOGRAPHY 649

INDEX 657
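The iterative-methods chapter above closes with inverse iteration and the Rayleigh quotient. A minimal pure-Python sketch (a hypothetical 2×2 example, not from the book): with a shift μ close to an eigenvalue, repeatedly solving (A − μI)y = x and normalising drives x toward the corresponding eigenvector, and the Rayleigh quotient xᵀAx then estimates the eigenvalue.

```python
# Sketch of inverse iteration with a fixed shift mu.
import math

def solve(A, b):
    """Small dense solver (Gaussian elimination with partial pivoting)."""
    n = len(A)
    M = [A[i][:] + [b[i]] for i in range(n)]
    for k in range(n - 1):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            m = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= m * M[k][j]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][j] * x[j] for j in range(k + 1, n))) / M[k][k]
    return x

A = [[2.0, 1.0],
     [1.0, 2.0]]          # eigenvalues 1 and 3, eigenvectors (1, -1) and (1, 1)
mu = 2.9                  # shift near the eigenvalue 3
shifted = [[A[i][j] - mu * (i == j) for j in range(2)] for i in range(2)]

x = [1.0, 0.0]
for _ in range(20):
    y = solve(shifted, x)           # (A - mu*I) y = x
    norm = math.sqrt(sum(t * t for t in y))
    x = [t / norm for t in y]       # normalise; x tends to the eigenvector

# Rayleigh quotient x^T A x (x has unit length) as the eigenvalue estimate.
rayleigh = sum(x[i] * sum(A[i][j] * x[j] for j in range(2)) for i in range(2))
```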

### Applied Linear Algebra I Review page 1

Applied Linear Algebra Review 1 I. Determinants A. Definition of a determinant 1. Using sum a. Permutations i. Sign of a permutation ii. Cycle 2. Uniqueness of the determinant function in terms of properties

### AN INTRODUCTION TO NUMERICAL METHODS AND ANALYSIS

AN INTRODUCTION TO NUMERICAL METHODS AND ANALYSIS Revised Edition James Epperson Mathematical Reviews BICENTENNIAL 0, 1 8 0 7 z ewiley wu 2007 r71 BICENTENNIAL WILEY-INTERSCIENCE A John Wiley & Sons, Inc.,

### Numerical Methods I Eigenvalue Problems

Numerical Methods I Eigenvalue Problems Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 Course G63.2010.001 / G22.2420-001, Fall 2010 September 30th, 2010 A. Donev (Courant Institute)

### ADVANCED LINEAR ALGEBRA FOR ENGINEERS WITH MATLAB. Sohail A. Dianat. Rochester Institute of Technology, New York, U.S.A. Eli S.

ADVANCED LINEAR ALGEBRA FOR ENGINEERS WITH MATLAB Sohail A. Dianat Rochester Institute of Technology, New York, U.S.A. Eli S. Saber Rochester Institute of Technology, New York, U.S.A. (g) CRC Press Taylor

### Scientific Computing: An Introductory Survey

Scientific Computing: An Introductory Survey Chapter 3 Linear Least Squares Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign Copyright c 2002. Reproduction

### Advanced Higher Mathematics Course Assessment Specification (C747 77)

Advanced Higher Mathematics Course Assessment Specification (C747 77) Valid from August 2015 This edition: April 2016, version 2.4 This specification may be reproduced in whole or in part for educational

### 10.3 POWER METHOD FOR APPROXIMATING EIGENVALUES

55 CHAPTER NUMERICAL METHODS. POWER METHOD FOR APPROXIMATING EIGENVALUES In Chapter 7 we saw that the eigenvalues of an n n matrix A are obtained by solving its characteristic equation n c n n c n n...

### Mean value theorem, Taylors Theorem, Maxima and Minima.

MA 001 Preparatory Mathematics I. Complex numbers as ordered pairs. Argand s diagram. Triangle inequality. De Moivre s Theorem. Algebra: Quadratic equations and express-ions. Permutations and Combinations.

### Numerical Recipes in C

2008 AGI-Information Management Consultants May be used for personal purporses only or by libraries associated to dandelon.com network. Numerical Recipes in C The Art of Scientific Computing Second Edition

### Numerical Methods I Solving Linear Systems: Sparse Matrices, Iterative Methods and Non-Square Systems

Numerical Methods I Solving Linear Systems: Sparse Matrices, Iterative Methods and Non-Square Systems Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 Course G63.2010.001 / G22.2420-001,

### Linear Algebraic and Equations

Next: Optimization Up: Numerical Analysis for Chemical Previous: Roots of Equations Subsections Gauss Elimination Solving Small Numbers of Equations Naive Gauss Elimination Pitfalls of Elimination Methods

### Solving Sets of Equations. 150 B.C.E., 九章算術 Carl Friedrich Gauss,

Solving Sets of Equations 5 B.C.E., 九章算術 Carl Friedrich Gauss, 777-855 Gaussian-Jordan Elimination In Gauss-Jordan elimination, matrix is reduced to diagonal rather than triangular form Row combinations

### Chapter 6. Orthogonality

6.3 Orthogonal Matrices 1 Chapter 6. Orthogonality 6.3 Orthogonal Matrices Definition 6.4. An n n matrix A is orthogonal if A T A = I. Note. We will see that the columns of an orthogonal matrix must be

### Iterative Methods for Computing Eigenvalues and Eigenvectors

The Waterloo Mathematics Review 9 Iterative Methods for Computing Eigenvalues and Eigenvectors Maysum Panju University of Waterloo mhpanju@math.uwaterloo.ca Abstract: We examine some numerical iterative

### 7 Gaussian Elimination and LU Factorization

7 Gaussian Elimination and LU Factorization In this final section on matrix factorization methods for solving Ax = b we want to take a closer look at Gaussian elimination (probably the best known method

### Matrix Norms. Tom Lyche. September 28, Centre of Mathematics for Applications, Department of Informatics, University of Oslo

Matrix Norms Tom Lyche Centre of Mathematics for Applications, Department of Informatics, University of Oslo September 28, 2009 Matrix Norms We consider matrix norms on (C m,n, C). All results holds for

### Numerical Analysis An Introduction

Walter Gautschi Numerical Analysis An Introduction 1997 Birkhauser Boston Basel Berlin CONTENTS PREFACE xi CHAPTER 0. PROLOGUE 1 0.1. Overview 1 0.2. Numerical analysis software 3 0.3. Textbooks and monographs

### Advanced Techniques for Mobile Robotics Compact Course on Linear Algebra. Wolfram Burgard, Cyrill Stachniss, Kai Arras, Maren Bennewitz

Advanced Techniques for Mobile Robotics Compact Course on Linear Algebra Wolfram Burgard, Cyrill Stachniss, Kai Arras, Maren Bennewitz Vectors Arrays of numbers Vectors represent a point in a n dimensional

### Inner Product Spaces and Orthogonality

Inner Product Spaces and Orthogonality week 3-4 Fall 2006 Dot product of R n The inner product or dot product of R n is a function, defined by u, v a b + a 2 b 2 + + a n b n for u a, a 2,, a n T, v b,

### Bindel, Fall 2012 Matrix Computations (CS 6210) Week 8: Friday, Oct 12

Why eigenvalues? Week 8: Friday, Oct 12 I spend a lot of time thinking about eigenvalue problems. In part, this is because I look for problems that can be solved via eigenvalues. But I might have fewer

### LINEAR ALGEBRA. September 23, 2010

LINEAR ALGEBRA September 3, 00 Contents 0. LU-decomposition.................................... 0. Inverses and Transposes................................. 0.3 Column Spaces and NullSpaces.............................

### Numerical Analysis Introduction. Student Audience. Prerequisites. Technology.

Numerical Analysis Douglas Faires, Youngstown State University, (Chair, 2012-2013) Elizabeth Yanik, Emporia State University, (Chair, 2013-2015) Graeme Fairweather, Executive Editor, Mathematical Reviews,

### Bindel, Spring 2012 Intro to Scientific Computing (CS 3220) Week 3: Wednesday, Feb 8

Spaces and bases Week 3: Wednesday, Feb 8 I have two favorite vector spaces 1 : R n and the space P d of polynomials of degree at most d. For R n, we have a canonical basis: R n = span{e 1, e 2,..., e

### 10.3 POWER METHOD FOR APPROXIMATING EIGENVALUES

58 CHAPTER NUMERICAL METHODS. POWER METHOD FOR APPROXIMATING EIGENVALUES In Chapter 7 you saw that the eigenvalues of an n n matrix A are obtained by solving its characteristic equation n c nn c nn...

### Advanced Computational Techniques in Engineering

Course Details Course Code: 5MWCI432 Level: M Tech CSE Semester: Semester 4 Instructor: Prof Dr S P Ghrera Advanced Computational Techniques in Engineering Outline Course Description: Data Distributions,

### Factorization Theorems

Chapter 7 Factorization Theorems This chapter highlights a few of the many factorization theorems for matrices While some factorization results are relatively direct, others are iterative While some factorization

### MAT Solving Linear Systems Using Matrices and Row Operations

MAT 171 8.5 Solving Linear Systems Using Matrices and Row Operations A. Introduction to Matrices Identifying the Size and Entries of a Matrix B. The Augmented Matrix of a System of Equations Forming Augmented

### MATHEMATICAL METHODS FOURIER SERIES

MATHEMATICAL METHODS FOURIER SERIES I YEAR B.Tech By Mr. Y. Prabhaker Reddy Asst. Professor of Mathematics Guru Nanak Engineering College Ibrahimpatnam, Hyderabad. SYLLABUS OF MATHEMATICAL METHODS (as

### Eigenvalues and eigenvectors of a matrix

Eigenvalues and eigenvectors of a matrix Definition: If A is an n n matrix and there exists a real number λ and a non-zero column vector V such that AV = λv then λ is called an eigenvalue of A and V is

### Linear Algebra Review. Vectors

Linear Algebra Review By Tim K. Marks UCSD Borrows heavily from: Jana Kosecka kosecka@cs.gmu.edu http://cs.gmu.edu/~kosecka/cs682.html Virginia de Sa Cogsci 8F Linear Algebra review UCSD Vectors The length

### Numerical Recipes in C++

Numerical Recipes in C++ The Art of Scientific Computing Second Edition William H. Press Los Alamos National Laboratory Saul A. Teukolsky Department of Physics, Cornell University William T. Vetterling

### Orthogonal Bases and the QR Algorithm

Orthogonal Bases and the QR Algorithm Orthogonal Bases by Peter J Olver University of Minnesota Throughout, we work in the Euclidean vector space V = R n, the space of column vectors with n real entries

### Nonlinear Iterative Partial Least Squares Method

Numerical Methods for Determining Principal Component Analysis Abstract Factors Béchu, S., Richard-Plouet, M., Fernandez, V., Walton, J., and Fairley, N. (2016) Developments in numerical treatments for

### Numerical Analysis Lecture Notes

Numerical Analysis Lecture Notes Peter J. Olver 6. Eigenvalues and Singular Values In this section, we collect together the basic facts about eigenvalues and eigenvectors. From a geometrical viewpoint,

### Contents. Gbur, Gregory J. Mathematical methods for optical physics and engineering digitalisiert durch: IDS Basel Bern

Preface page xv 1 Vector algebra 1 1.1 Preliminaries 1 1.2 Coordinate System invariance 4 1.3 Vector multiplication 9 1.4 Useful products of vectors 12 1.5 Linear vector Spaces 13 1.6 Focus: periodic media

### for each i =;2; ::; n. Example 2. Theorem of Linear Independence or Orthogonal set of vectors. An orthogonal set of vectors not containing 0 is LI. De

Session 9 Approximating Eigenvalues Ernesto Gutierrez-Miravete Fall, 200 Linear Algebra Eigenvalues Recall that for any matrix A, the zeros of the characteristic polynomial pèçè = detèa, çiè are the eigenvalues

### 10.2 ITERATIVE METHODS FOR SOLVING LINEAR SYSTEMS. The Jacobi Method

578 CHAPTER 1 NUMERICAL METHODS 1. ITERATIVE METHODS FOR SOLVING LINEAR SYSTEMS As a numerical technique, Gaussian elimination is rather unusual because it is direct. That is, a solution is obtained after

### 1.5 Elementary Matrices and a Method for Finding the Inverse

.5 Elementary Matrices and a Method for Finding the Inverse Definition A n n matrix is called an elementary matrix if it can be obtained from I n by performing a single elementary row operation Reminder:

### The Characteristic Polynomial

Physics 116A Winter 2011 The Characteristic Polynomial 1 Coefficients of the characteristic polynomial Consider the eigenvalue problem for an n n matrix A, A v = λ v, v 0 (1) The solution to this problem

### Similar matrices and Jordan form

Similar matrices and Jordan form We ve nearly covered the entire heart of linear algebra once we ve finished singular value decompositions we ll have seen all the most central topics. A T A is positive

### MATH 240 Fall, Chapter 1: Linear Equations and Matrices

MATH 240 Fall, 2007 Chapter Summaries for Kolman / Hill, Elementary Linear Algebra, 9th Ed. written by Prof. J. Beachy Sections 1.1 1.5, 2.1 2.3, 4.2 4.9, 3.1 3.5, 5.3 5.5, 6.1 6.3, 6.5, 7.1 7.3 DEFINITIONS

### Eigenvectors. Chapter Motivation Statistics. = ( x i ˆv) ˆv since v v = v 2

Chapter 5 Eigenvectors We turn our attention now to a nonlinear problem about matrices: Finding their eigenvalues and eigenvectors. Eigenvectors x and their corresponding eigenvalues λ of a square matrix

### Orthogonal Diagonalization of Symmetric Matrices

MATH10212 Linear Algebra Brief lecture notes 57 Gram Schmidt Process enables us to find an orthogonal basis of a subspace. Let u 1,..., u k be a basis of a subspace V of R n. We begin the process of finding

### Using the Singular Value Decomposition

Using the Singular Value Decomposition Emmett J. Ientilucci Chester F. Carlson Center for Imaging Science Rochester Institute of Technology emmett@cis.rit.edu May 9, 003 Abstract This report introduces

### Elementary Linear Algebra

Elementary Linear Algebra Kuttler January, Saylor URL: http://wwwsaylororg/courses/ma/ Saylor URL: http://wwwsaylororg/courses/ma/ Contents Some Prerequisite Topics Sets And Set Notation Functions Graphs

### [1] Diagonal factorization

8.03 LA.6: Diagonalization and Orthogonal Matrices [ Diagonal factorization [2 Solving systems of first order differential equations [3 Symmetric and Orthonormal Matrices [ Diagonal factorization Recall:

### e-issn: p-issn: X Page 67

International Journal of Computer & Communication Engineering Research (IJCCER) Volume 2 - Issue 2 March 2014 Performance Comparison of Gauss and Gauss-Jordan Yadanar Mon 1, Lai Lai Win Kyi 2 1,2 Information

### 2.5 Gaussian Elimination

page 150 150 CHAPTER 2 Matrices and Systems of Linear Equations 37 10 the linear algebra package of Maple, the three elementary 20 23 1 row operations are 12 1 swaprow(a,i,j): permute rows i and j 3 3

### Linear Systems COS 323

Linear Systems COS 323 Last time: Constrained Optimization Linear constrained optimization Linear programming (LP) Simplex method for LP General optimization With equality constraints: Lagrange multipliers

### MATH 423 Linear Algebra II Lecture 38: Generalized eigenvectors. Jordan canonical form (continued).

MATH 423 Linear Algebra II Lecture 38: Generalized eigenvectors Jordan canonical form (continued) Jordan canonical form A Jordan block is a square matrix of the form λ 1 0 0 0 0 λ 1 0 0 0 0 λ 0 0 J = 0

### Mathematics Notes for Class 12 chapter 3. Matrices

1 P a g e Mathematics Notes for Class 12 chapter 3. Matrices A matrix is a rectangular arrangement of numbers (real or complex) which may be represented as matrix is enclosed by [ ] or ( ) or Compact form

### Lecture 1: Schur's Unitary Triangularization Theorem

Lecture 1: Schur's Unitary Triangularization Theorem. This lecture introduces the notion of unitary equivalence and presents Schur's theorem and some of its consequences. It roughly corresponds to Sections

### Solving Systems of Linear Equations

LECTURE 5. Solving Systems of Linear Equations. Recall that we introduced the notion of matrices as a way of standardizing the expression of systems of linear equations. In today's lecture I shall show how
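As a concrete instance of what such a lecture typically derives, here is Gaussian elimination with partial pivoting and back substitution in plain Python (a sketch assuming a square nonsingular system; not code from the lecture):

```python
def solve(A, b):
    """Solve Ax = b by Gaussian elimination with partial pivoting,
    then back substitution (for small dense nonsingular systems)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix [A | b]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))  # pivot row
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):  # back substitution
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

print(solve([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0]))  # ≈ [0.8, 1.4]
```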

### Linear Least Squares

Linear Least Squares Suppose we are given a set of data points {(x i,f i )}, i = 1,...,n. These could be measurements from an experiment or obtained simply by evaluating a function at some points. One
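For the simplest case, fitting a line f ≈ a + b·x to the data points, the least-squares coefficients solve the 2×2 normal equations; a minimal sketch (the function name is chosen here, not taken from the text):

```python
def fit_line(pts):
    """Least-squares line f = a + b*x through points (x_i, f_i),
    via the closed-form solution of the 2x2 normal equations."""
    n = len(pts)
    sx = sum(x for x, _ in pts)
    sf = sum(f for _, f in pts)
    sxx = sum(x * x for x, _ in pts)
    sxf = sum(x * f for x, f in pts)
    det = n * sxx - sx * sx          # determinant of the normal matrix
    a = (sxx * sf - sx * sxf) / det  # intercept
    b = (n * sxf - sx * sf) / det    # slope
    return a, b

print(fit_line([(0, 1), (1, 3), (2, 5)]))  # (1.0, 2.0): the data lie on f = 1 + 2x
```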

### Notes on Orthogonal and Symmetric Matrices MENU, Winter 2013

Notes on Orthogonal and Symmetric Matrices, MENU, Winter 2013. These notes summarize the main properties and uses of orthogonal and symmetric matrices. We covered quite a bit of material regarding these topics,

### Generalized Inverse of Matrices and its Applications

Generalized Inverse of Matrices and its Applications C. RADHAKRISHNA RAO, Sc.D., F.N.A., F.R.S. Director, Research and Training School Indian Statistical Institute SUJIT KUMAR MITRA, Ph.D. Professor of

### PURE MATHEMATICS AM 27

AM Syllabus (2015): Pure Mathematics AM 27. SYLLABUS. Pure Mathematics AM 27 Syllabus (Available in September). Paper I (3 hrs) + Paper II (3 hrs)

### PURE MATHEMATICS AM 27

AM SYLLABUS (2013): PURE MATHEMATICS AM 27. SYLLABUS. Pure Mathematics AM 27 Syllabus (Available in September). Paper I (3 hrs) + Paper II (3 hrs). 1. AIMS: To prepare students for further studies in Mathematics and

### 15.062 Data Mining: Algorithms and Applications Matrix Math Review

15.062 Data Mining: Algorithms and Applications. Matrix Math Review. The purpose of this document is to give a brief review of selected linear algebra concepts that will be useful for the course and to develop

### by the matrix A results in a vector which is a reflection of the given

Eigenvalues & Eigenvectors. Example: Suppose … Then … So, geometrically, multiplying a vector in R^2 by the matrix A results in a vector which is a reflection of the given vector about the y-axis. We observe that
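The excerpt's matrix is elided; the standard reflection of R^2 about the y-axis is A = [[-1, 0], [0, 1]], which makes the eigenvectors easy to check by hand (an illustrative sketch assuming that matrix):

```python
# Reflection of R^2 about the y-axis (the excerpt's matrix is elided;
# this is the standard choice): flips the x-coordinate, keeps y.
A = [[-1, 0], [0, 1]]

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

# e1 = (1, 0) is flipped: A e1 = -e1, so e1 is an eigenvector for -1.
# e2 = (0, 1) lies on the mirror: A e2 = e2, eigenvalue +1.
print(matvec(A, [1, 0]), matvec(A, [0, 1]))  # [-1, 0] [0, 1]
```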

### Stability of invariant subspaces of matrices. Stability of invariant subspaces André Ran 1

Stability of invariant subspaces of matrices. André Ran, Leiba Rodman. LQ-optimal control in discrete time. System is given with a cost function x(t + 1)

### Summary of week 8 (Lectures 22, 23 and 24)

WEEK 8. Summary of week 8 (Lectures 22, 23 and 24). This week we completed our discussion of Chapter 5 of [VST]. Recall that if V and W are inner product spaces, then a linear map T : V → W is called an isometry

### Eigenvalues and Eigenvectors

Chapter 6. Eigenvalues and Eigenvectors. 6.1 Introduction to Eigenvalues. Linear equations Ax = b come from steady state problems. Eigenvalues have their greatest importance in dynamic problems. The solution

### Construction of the Real Line 2 Is Every Real Number Rational? 3 Problems Algebra of the Real Numbers 7

About the Author v Preface to the Instructor xiii WileyPLUS xviii Acknowledgments xix Preface to the Student xxi 1 The Real Numbers 1 1.1 The Real Line 2 Construction of the Real Line 2 Is Every Real Number

### WHICH LINEAR-FRACTIONAL TRANSFORMATIONS INDUCE ROTATIONS OF THE SPHERE?

WHICH LINEAR-FRACTIONAL TRANSFORMATIONS INDUCE ROTATIONS OF THE SPHERE? JOEL H. SHAPIRO Abstract. These notes supplement the discussion of linear fractional mappings presented in a beginning graduate course

### Numerical Methods for Civil Engineers. Linear Equations

Numerical Methods for Civil Engineers Lecture Notes CE 311K Daene C. McKinney Introduction to Computer Methods Department of Civil, Architectural and Environmental Engineering The University of Texas at

### since by using a computer we are limited to the use of elementary arithmetic operations

4. Interpolation and Approximation. Most functions cannot be evaluated exactly: √x, e^x, ln x, trigonometric functions, since by using a computer we are limited to the use of elementary arithmetic operations

### 13 MATH FACTS. The elements of a vector have a graphical interpretation, which is particularly easy to see in two or three dimensions.

13 MATH FACTS 101. 13.1 Vectors. 13.1.1 Definition. We use the overhead arrow to denote a column vector, i.e., a linear segment with a direction. For example, in three-space, we write a vector in terms

### 5. Orthogonal matrices

L Vandenberghe EE133A (Spring 2016) 5 Orthogonal matrices matrices with orthonormal columns orthogonal matrices tall matrices with orthonormal columns complex matrices with orthonormal columns 5-1 Orthonormal
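The defining property of a matrix Q with orthonormal columns is QᵀQ = I, which is easy to verify numerically for a rotation matrix (an illustrative sketch, not code from the slides):

```python
import math

def transpose(M):
    return [list(col) for col in zip(*M)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

# A 2x2 rotation matrix has orthonormal columns, so Q^T Q = I.
t = 0.7
Q = [[math.cos(t), -math.sin(t)],
     [math.sin(t),  math.cos(t)]]
QtQ = matmul(transpose(Q), Q)
print([[round(x, 12) for x in row] for row in QtQ])  # the 2x2 identity
```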

### Lecture 5 - Triangular Factorizations & Operation Counts

LU Factorization Lecture 5 - Triangular Factorizations & Operation Counts We have seen that the process of GE essentially factors a matrix A into LU Now we want to see how this factorization allows us
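The factorization step of GE can be sketched as Doolittle-style LU without pivoting (a minimal illustration assuming nonzero pivots; not the lecture's own code):

```python
def lu(A):
    """Doolittle LU factorization without pivoting: L is unit lower
    triangular (it stores the elimination multipliers), U is upper
    triangular, and A = L @ U."""
    n = len(A)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    U = [row[:] for row in A]
    for k in range(n):
        for i in range(k + 1, n):
            L[i][k] = U[i][k] / U[k][k]       # multiplier for row i
            for j in range(k, n):
                U[i][j] -= L[i][k] * U[k][j]  # eliminate below the pivot
    return L, U

L, U = lu([[4.0, 3.0], [6.0, 3.0]])
print(L, U)  # [[1.0, 0.0], [1.5, 1.0]] [[4.0, 3.0], [0.0, -1.5]]
```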

### MATHEMATICS (MATH) 3. Provides experiences that enable graduates to find employment in sciencerelated

194 / Department of Natural Sciences and Mathematics MATHEMATICS (MATH) The Mathematics Program: 1. Provides challenging experiences in Mathematics, Physics, and Physical Science, which prepare graduates

### Scientific Computing: An Introductory Survey

Scientific Computing: An Introductory Survey Chapter 10 Boundary Value Problems for Ordinary Differential Equations Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign

### Notes on Determinant

ENGG2012B Advanced Engineering Mathematics Notes on Determinant Lecturer: Kenneth Shum Lecture 9-18/02/2013 The determinant of a system of linear equations determines whether the solution is unique, without
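Cofactor expansion along the first row is the textbook definition of the determinant and is easy to state in code, though it costs O(n!) and is only practical for small matrices (an illustrative sketch, not from the notes):

```python
def det(M):
    """Determinant by cofactor expansion along the first row.
    Exponential cost: intended only for small matrices."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        # Minor: delete row 0 and column j.
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

print(det([[1, 2], [3, 4]]))                     # -2
print(det([[2, 0, 0], [0, 3, 0], [0, 0, 4]]))    # 24
```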

### NUMERICAL METHODS FOR LARGE EIGENVALUE PROBLEMS

NUMERICAL METHODS FOR LARGE EIGENVALUE PROBLEMS, Second edition. Yousef Saad. Copyright © 2011 by the Society for Industrial and Applied Mathematics. Contents: Preface to Classics Edition xiii; Preface xv; 1

### We seek a factorization of a square matrix A into the product of two matrices which yields an

LU Decompositions We seek a factorization of a square matrix A into the product of two matrices which yields an efficient method for solving the system where A is the coefficient matrix, x is our variable

### Linear Systems. Singular and Nonsingular Matrices. Find x 1, x 2, x 3 such that the following three equations hold:

Linear Systems. Example: Find x_1, x_2, x_3 such that the following three equations hold: x_1 + x_2 + x_3 = …, 4x_1 + x_2 + x_3 = …, x_1 + x_2 + x_3 = 6. We can write this using matrix-vector notation as Ax = b. General

### Error Analysis of Elimination Methods for Equality Constrained Quadratic Programming Problems

Draft of the paper in: Numerical Methods and Error Bounds (G. Alefeld, J. Herzberger, eds.), Akademie Verlag, Berlin, 1996, 07-2. ISBN: 3-05-50696-3. Series: Mathematical Research, Vol. 89. (Proceedings of

### Mathematics (MAT) MAT 061 Basic Euclidean Geometry 3 Hours. MAT 051 Pre-Algebra 4 Hours

MAT 051 Pre-Algebra. Mathematics (MAT). MAT 051 is designed as a review of the basic operations of arithmetic and an introduction to algebra. The student must earn a grade of C or better in order to enroll in MAT

### MATH 304 Linear Algebra Lecture 9: Subspaces of vector spaces (continued). Span. Spanning set.

MATH 304 Linear Algebra Lecture 9: Subspaces of vector spaces (continued). Span. Spanning set. Vector space A vector space is a set V equipped with two operations, addition V V (x,y) x + y V and scalar

### Kronecker Products

This electronic version is for personal use and may not be duplicated or distributed. Chapter 13: Kronecker Products. 13.1 Definition and Examples. Definition 13.1. Let A ∈ R^(m×n), B ∈ R^(p×q). Then the Kronecker product
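The block structure [a_ij B] of the Kronecker product, an (mp) × (nq) matrix, translates directly into index arithmetic (a minimal sketch; `kron` is a name chosen here):

```python
def kron(A, B):
    """Kronecker product A ⊗ B: the block matrix [a_ij * B],
    of size (m*p) x (n*q)."""
    p, q = len(B), len(B[0])
    return [[A[i // p][j // q] * B[i % p][j % q]
             for j in range(len(A[0]) * q)]
            for i in range(len(A) * p)]

K = kron([[1, 2]], [[0, 1], [1, 0]])
print(K)  # [[0, 1, 0, 2], [1, 0, 2, 0]]
```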

### Syllabus for the course «Introduction to Advanced Mathematics» (Введение в высшую математику)

Government of Russian Federation Federal State Autonomous Educational Institution of High Professional Education «National Research University Higher School of Economics» National Research University Higher

### Arithmetic and Algebra of Matrices

Arithmetic and Algebra of Matrices Math 572: Algebra for Middle School Teachers The University of Montana 1 The Real Numbers 2 Classroom Connection: Systems of Linear Equations 3 Rational Numbers 4 Irrational

### Section 6.1 - Inner Products and Norms

Section 6.1 - Inner Products and Norms. Definition. Let V be a vector space over F ∈ {R, C}. An inner product on V is a function that assigns, to every ordered pair of vectors x and y in V, a scalar in F,

### Lecture Notes to Accompany. Scientific Computing An Introductory Survey. by Michael T. Heath. Chapter 10

Lecture Notes to Accompany Scientific Computing An Introductory Survey Second Edition by Michael T. Heath Chapter 10 Boundary Value Problems for Ordinary Differential Equations Copyright c 2001. Reproduction

### Numerical Linear Algebra Chap. 4: Perturbation and Regularisation

Numerical Linear Algebra Chap. 4: Perturbation and Regularisation Heinrich Voss voss@tu-harburg.de Hamburg University of Technology Institute of Numerical Simulation TUHH Heinrich Voss Numerical Linear

### The Inverse of a Matrix

The Inverse of a Matrix. 7.4 Introduction. In number arithmetic every number a (≠ 0) has a reciprocal b, written as a^(-1) or 1/a, such that ba = ab = 1. Some, but not all, square matrices have inverses. If a square
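The matrix analogue of the reciprocal can be computed by Gauss-Jordan elimination on the augmented matrix [A | I] (a minimal sketch with partial pivoting; not the unit's own method):

```python
def inverse(A):
    """Invert a square matrix by Gauss-Jordan elimination on [A | I]
    with partial pivoting; raises ValueError if A is singular."""
    n = len(A)
    M = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))  # pivot row
        if M[p][k] == 0:
            raise ValueError("matrix is singular")
        M[k], M[p] = M[p], M[k]
        piv = M[k][k]
        M[k] = [x / piv for x in M[k]]        # scale pivot row to 1
        for i in range(n):
            if i != k:
                f = M[i][k]
                M[i] = [x - f * y for x, y in zip(M[i], M[k])]
    return [row[n:] for row in M]             # right half is A^(-1)

Inv = inverse([[4.0, 7.0], [2.0, 6.0]])
print([[round(v, 6) for v in row] for row in Inv])  # [[0.6, -0.7], [-0.2, 0.4]]
```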

### The Second Undergraduate Level Course in Linear Algebra

The Second Undergraduate Level Course in Linear Algebra. University of Massachusetts Dartmouth. Joint Mathematics Meetings, New Orleans, January 7, 2011. Outline of Talk: Linear Algebra Curriculum Study Group

### Continued Fractions and the Euclidean Algorithm

Continued Fractions and the Euclidean Algorithm. Lecture notes prepared for MATH 326, Spring 1997. Department of Mathematics and Statistics, University at Albany. William F. Hammond. Table of Contents: Introduction
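The connection the title names is direct: the partial quotients of p/q are exactly the successive quotients produced by the Euclidean algorithm on (p, q) (an illustrative sketch, not from the notes):

```python
from math import gcd

def continued_fraction(p, q):
    """Partial quotients of p/q: each step records the quotient and
    continues with (q, p mod q), exactly as in Euclid's algorithm."""
    terms = []
    while q:
        terms.append(p // q)
        p, q = q, p % q
    return terms

print(continued_fraction(45, 16))  # [2, 1, 4, 3]: 45/16 = 2 + 1/(1 + 1/(4 + 1/3))
print(gcd(45, 16))                 # 1
```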

### Similarity and Diagonalization. Similar Matrices

MATH10212 Linear Algebra Brief lecture notes 48. Similarity and Diagonalization. Similar Matrices. Let A and B be n × n matrices. We say that A is similar to B if there is an invertible n × n matrix P such that

### 1. True/False: Circle the correct answer. No justifications are needed in this exercise. (1 point each)

Math 33 AH : Solution to the Final Exam Honors Linear Algebra and Applications 1. True/False: Circle the correct answer. No justifications are needed in this exercise. (1 point each) (1) If A is an invertible

### Module 4: Solving Linear Algebraic Equations

Module 4: Solving Linear Algebraic Equations. Section 5: Iterative Solution Techniques. By this approach, we start with some initial guess for the solution, and
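Jacobi iteration is the simplest technique of this kind: starting from a zero guess, each equation is repeatedly solved for its own unknown using the previous iterate (a sketch assuming strict diagonal dominance; not the module's own code):

```python
def jacobi(A, b, iters=50):
    """Jacobi iteration: one sweep solves equation i for x_i using
    only the previous iterate; converges for strictly diagonally
    dominant A."""
    n = len(A)
    x = [0.0] * n
    for _ in range(iters):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

x = jacobi([[4.0, 1.0], [1.0, 3.0]], [9.0, 5.0])
print([round(v, 6) for v in x])  # converges to the exact solution [2.0, 1.0]
```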

### SCHOOL OF MATHEMATICS MATHEMATICS FOR PART I ENGINEERING. Self Study Course

SCHOOL OF MATHEMATICS MATHEMATICS FOR PART I ENGINEERING Self Study Course MODULE 17 MATRICES II Module Topics 1. Inverse of matrix using cofactors 2. Sets of linear equations 3. Solution of sets of linear

### 7. LU factorization. factor-solve method. LU factorization. solving Ax = b with A nonsingular. the inverse of a nonsingular matrix

7. LU factorization EE103 (Fall 2011-12) factor-solve method LU factorization solving Ax = b with A nonsingular the inverse of a nonsingular matrix LU factorization algorithm effect of rounding error sparse

### CS3220 Lecture Notes: QR factorization and orthogonal transformations

CS3220 Lecture Notes: QR factorization and orthogonal transformations. Steve Marschner, Cornell University, 11 March 2009. In this lecture I'll talk about orthogonal matrices and their properties, discuss

### Vector and Matrix Norms

Chapter 1. Vector and Matrix Norms. 1.1 Vector Spaces. Let F be a field (such as the real numbers, R, or complex numbers, C) with elements called scalars. A Vector Space, V, over the field F is a non-empty

### Diagonal, Symmetric and Triangular Matrices

Contents. 1 Diagonal, Symmetric and Triangular Matrices. 2 Diagonal Matrices. 2.1 Products, Powers and Inverses of Diagonal Matrices. 2.1.1 Theorem (Powers of Matrices). 2.2 Multiplying Matrices on the Left and Right by

### Solutions to Linear Algebra Practice Problems

Solutions to Linear Algebra Practice Problems. 1. Find all solutions to the following systems of linear equations. (a) … (b) … Answer: (a) We create the