2.29 Numerical Fluid Mechanics Spring 2015 Lecture 7

REVIEW Lecture 6:
Direct methods for solving linear algebraic equations
- LU decomposition/factorization: separates the time-consuming elimination of A from the manipulations of b or B
- Derivation, assuming no pivoting is needed; number of operations: same as for Gauss elimination
- Pivoting: use a pivot-element pointer vector
- Variations: Doolittle and Crout decompositions; matrix inverse
Error analysis for linear systems
- Matrix norms
- Condition number for perturbed RHS and LHS
Special matrices: introduction

TODAY (Lecture 7): Systems of Linear Equations III
Direct methods
- Gauss elimination; LU decomposition/factorization; error analysis for linear systems
- Special matrices: LU decompositions
  - Tri-diagonal systems: Thomas algorithm
  - General banded matrices: algorithm, pivoting and modes of storage
  - Sparse and banded matrices
  - Symmetric, positive-definite matrices: definitions and properties, Cholesky decomposition
Iterative methods
- Jacobi's method
- Gauss-Seidel iteration
- Convergence

Reading Assignment
- Chapter 11 of Chapra and Canale, Numerical Methods for Engineers (2006/2010/2014 editions).
- Any chapter on solving linear systems of equations in the CFD references provided. For example: Chapter 5 of J. H. Ferziger and M. Peric, Computational Methods for Fluid Dynamics. Springer, NY, 3rd edition, 2002.

Special Matrices
Certain matrices have particular structures that can be exploited, i.e. used to reduce the number of operations and the memory needs.
- Banded matrices: a square banded matrix has all elements equal to zero, except for a band around the main diagonal. Such matrices are frequent in engineering and differential equations:
  - Tri-diagonal matrices; wider bands for higher-order schemes
  - Gauss elimination or LU decomposition is inefficient here because, if pivoting is not necessary, all elements outside of the band remain zero (but direct GE/LU would manipulate these zero elements anyway)
- Symmetric matrices
- Iterative methods: employ an initial guess, then iterate to refine the solution; can be subject to round-off errors

Special Matrices: Tri-diagonal Systems
Example: forced vibration of a string. Let Y(x,t) be the displacement under a distributed load f(x,t) (e.g. a travelling pulse), and consider the case of a harmonic excitation $f(x,t) = -f(x)\cos(\omega t)$. Applying Newton's law leads to the wave equation
$$Y_{tt} - c^2 Y_{xx} = f(x,t).$$
With separation of variables, $Y(x,t) = \phi(t)\, y(x)$, one obtains the differential equation for the modal amplitude, eq. (1) below:
$$y''(x) + k^2 y(x) = \frac{f(x)}{c^2}, \qquad k = \frac{\omega}{c} \qquad (1)$$
Boundary conditions: $y(0) = y(L) = 0$ (string fixed at both ends).

Special Matrices: Tri-diagonal Systems
Forced vibration of a string, continued: harmonic excitation $f(x,t) = -f(x)\cos(\omega t)$.
Applying second-order finite differences ($O(h^2)$) to the amplitude equation (1) gives the discrete difference equations
$$\frac{y_{i-1} - 2y_i + y_{i+1}}{h^2} + k^2 y_i = \frac{f_i}{c^2}, \qquad i = 1, \dots, n,$$
with boundary conditions $y_0 = y_{n+1} = 0$. In matrix form, this is a tridiagonal system with diagonal entries $k^2 h^2 - 2$ and off-diagonal entries 1 (see the sketch below).
If $kh \le 1$ or $kh \ge 3$, the matrix is symmetric and negative or positive definite, respectively: no pivoting is needed.
Note: for $0 < kh \le 1$ (negative definite), write $A' = -A$ and $b' = -b$ to render the matrix positive definite.
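A minimal numpy sketch of assembling and solving this system (all parameter values and the forcing profile are hypothetical, and a dense solve is used only for clarity; the Thomas algorithm below is the efficient choice):

```python
import numpy as np

# Sketch: finite-difference system for y'' + k^2 y = f(x)/c^2, y(0) = y(L) = 0.
L_str, c, omega = 1.0, 1.0, 10.0           # hypothetical length, wave speed, frequency
k = omega / c
n = 99                                     # number of interior grid points
h = L_str / (n + 1)
x = np.linspace(h, L_str - h, n)           # interior nodes x_1 .. x_n
f = np.exp(-100.0 * (x - 0.5 * L_str)**2)  # hypothetical forcing profile f(x)

# Tridiagonal matrix: diagonal k^2 h^2 - 2, off-diagonals 1 (h^2 moved to the RHS)
A = (np.diag((k**2 * h**2 - 2.0) * np.ones(n))
     + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1))
b = h**2 * f / c**2

y = np.linalg.solve(A, b)                  # dense solve, O(n^3); for illustration only
```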

Special Matrices: Tri-diagonal Systems
General tri-diagonal systems (bandwidth of 3):
$$\begin{pmatrix} b_1 & c_1 & & \\ a_2 & b_2 & c_2 & \\ & \ddots & \ddots & \ddots \\ & & a_n & b_n \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} = \begin{pmatrix} d_1 \\ d_2 \\ \vdots \\ d_n \end{pmatrix}$$
LU decomposition: three steps for the LU scheme:
1. Decomposition (Gauss elimination)
2. Forward substitution
3. Backward substitution

Special Matrices: Tri-diagonal Systems
Thomas Algorithm
By identification with the general LU decomposition, one obtains (see the sketch below):
1. Factorization/decomposition: $b'_1 = b_1$; for $i = 2, \dots, n$: $\; m_i = a_i / b'_{i-1}$, $\; b'_i = b_i - m_i c_{i-1}$
2. Forward substitution: $d'_1 = d_1$; for $i = 2, \dots, n$: $\; d'_i = d_i - m_i d'_{i-1}$
3. Back substitution: $x_n = d'_n / b'_n$; for $i = n-1, \dots, 1$: $\; x_i = (d'_i - c_i x_{i+1}) / b'_i$
Number of operations (Thomas algorithm):
- LU factorization: 3(n-1) operations
- Forward substitution: 2(n-1) operations
- Back substitution: 3(n-1)+1 operations
- Total: ~8(n-1), i.e. O(n) operations
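A short Python sketch of the three steps (the factorization and forward substitution are fused into one loop; this assumes no pivoting is required, e.g. a diagonally dominant or definite system):

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system in O(n): sub-diagonal a (a[0] unused),
    diagonal b, super-diagonal c (c[-1] unused), right-hand side d."""
    n = len(b)
    bp = np.array(b, dtype=float)   # b' in the factorization
    dp = np.array(d, dtype=float)   # d' in the forward substitution
    # 1. Factorization fused with 2. forward substitution
    for i in range(1, n):
        m = a[i] / bp[i - 1]
        bp[i] -= m * c[i - 1]
        dp[i] -= m * dp[i - 1]
    # 3. Back substitution
    x = np.empty(n)
    x[-1] = dp[-1] / bp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (dp[i] - c[i] * x[i + 1]) / bp[i]
    return x
```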

Special Matrices: General, Banded Matrix
- General banded matrix (p, q): p super-diagonals and q sub-diagonals; the bandwidth is w = p + q + 1.
- Banded symmetric matrix (p = q = b): the bandwidth is w = 2b + 1, where b is called the half-bandwidth.

Special Matrices: General, Banded Matrix
LU Decomposition via Gaussian Elimination
If no pivoting is needed, the zeros outside the band are preserved:
$$m_{ij} = \frac{a_{ij}^{(j)}}{a_{jj}^{(j)}}, \qquad m_{ij} = 0 \;\text{ if } j > i \text{ (as in the general case) or } i - j > q \text{ (banded)};$$
$$u_{ij} = a_{ij}^{(i)}, \qquad u_{ij} = 0 \;\text{ if } i > j \text{ (as in the general case) or } j - i > p \text{ (banded)},$$
where $a_{ij}^{(k+1)} = a_{ij}^{(k)} - m_{ik}\, a_{kj}^{(k)}$ are the Gauss-elimination updates.

Special Matrices: General, Banded Matrix
LU Decomposition via Gaussian Elimination with Partial Pivoting (by rows)
Consider pivoting two rows, i.e. swapping a row from within the q sub-diagonals up to the pivot row. Then the bandwidth of L remains unchanged,
$$m_{ij} = 0 \;\text{ if } j > i \text{ or } i - j > q,$$
but the band of U widens to p + q super-diagonals,
$$u_{ij} = 0 \;\text{ if } i > j \text{ or } j - i > p + q,$$
so the storage bandwidth becomes w = p + 2q + 1.

Special Matrices: General, Banded Matrix
Compact Storage
Instead of storing the full $n \times n$ matrix ($n^2$ elements), store the matrix by diagonals (each of length at most n): entry $a_{ij}$ is mapped to column $j' = j - i + q$ of an $n \times (p + 2q + 1)$ array. The extra q columns are needed for pivoting only.
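This diagonal-ordered layout is what library banded solvers consume; a minimal sketch with scipy.linalg.solve_banded (the tridiagonal values here are hypothetical):

```python
import numpy as np
from scipy.linalg import solve_banded

# Compact storage for a tridiagonal matrix (p = q = 1): solve_banded expects
# an (q + p + 1, n) array ab with ab[p + i - j, j] = A[i, j].
n = 5
ab = np.zeros((3, n))
ab[0, 1:] = 1.0          # super-diagonal (first entry unused)
ab[1, :] = -2.0          # main diagonal
ab[2, :-1] = 1.0         # sub-diagonal (last entry unused)
b = np.ones(n)

x = solve_banded((1, 1), ab, b)   # (q, p) = (1, 1) sub- and super-diagonals
```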

Special Matrices: Sparse and Banded Matrices
Skyline Systems (typically for symmetric matrices)
Skyline storage keeps, for each column, only the entries from the first nonzero entry down to the diagonal, packed into a single array, together with a pointer array marking where each column starts. [Figure: skyline storage layout with example pointer array.]
Skyline storage is applicable when no pivoting is needed, e.g. for banded, symmetric, and positive-definite matrices, as arise in FEM and FD methods. Skyline solvers are usually based on Cholesky factorization (which preserves the skyline).

Special Matrices: Symmetric (Positive-Definite) Matrices
Symmetric coefficient matrices: if there is no pivoting, the matrix remains symmetric after each step of Gauss elimination / LU decomposition.
Proof: show that if $a_{ij}^{(k)} = a_{ji}^{(k)}$ then $a_{ij}^{(k+1)} = a_{ji}^{(k+1)}$, using
$$a_{ij}^{(k+1)} = a_{ij}^{(k)} - m_{ik}\, a_{kj}^{(k)}, \qquad m_{ik} = \frac{a_{ki}^{(k)}}{a_{kk}^{(k)}}, \qquad i = k+1, \dots, n, \;\; j = i, i+1, \dots, n.$$
Gauss elimination can therefore use only the upper triangular portion of A (see the sketch below): about half the total number of operations of full GE.
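A minimal sketch of this symmetric elimination, referencing only the upper triangle (assuming pivoting is not needed, e.g. for an SPD matrix):

```python
import numpy as np

def symmetric_eliminate(A):
    """Gauss elimination for a symmetric matrix using only its upper
    triangle; returns U. Multipliers m_ik = a_ki / a_kk are read from the
    upper triangle, since a_ki = a_ik at every step."""
    U = np.triu(np.asarray(A, dtype=float))
    n = U.shape[0]
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = U[k, i] / U[k, k]       # a_ki^(k) = a_ik^(k) by symmetry
            U[i, i:] -= m * U[k, i:]    # update only columns j >= i
    return U
```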

Special Matrices: Symmetric, Positive-Definite Matrices
1. Sylvester criterion: a symmetric matrix A is positive definite if and only if det(A_k) > 0 for k = 1, 2, ..., n, where A_k is the matrix of the first k rows/columns. Symmetric positive-definite matrices are frequent in engineering (a small check is sketched below).
2. For a symmetric positive-definite A, one thus has the following properties:
a) The maximum elements of A are on the main diagonal.
b) No pivoting is needed.
c) The elimination is stable: $|a_{ij}^{(k+1)}| \le 2 \max_{i,j} |a_{ij}^{(k)}|$, i.e. no significant element growth. To show this, use $\big(a_{kj}^{(k)}\big)^2 \le a_{kk}^{(k)}\, a_{jj}^{(k)}$ in the elimination formula $a_{ij}^{(k+1)} = a_{ij}^{(k)} - m_{ik}\, a_{kj}^{(k)}$, $m_{ik} = a_{ki}^{(k)} / a_{kk}^{(k)}$.
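A direct (if inefficient) check of the Sylvester criterion in numpy; in practice one would instead attempt a Cholesky factorization, which succeeds exactly when A is SPD (the test matrix is hypothetical):

```python
import numpy as np

def is_spd_sylvester(A):
    """Sylvester criterion: symmetric A is positive definite iff all
    leading principal minors det(A_k) are strictly positive."""
    A = np.asarray(A, dtype=float)
    return all(np.linalg.det(A[:k, :k]) > 0.0
               for k in range(1, A.shape[0] + 1))

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
print(is_spd_sylvester(A))   # True for this matrix
```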

Special Matrices: Symmetric, Positive-Definite Matrices
Cholesky Factorization
For a symmetric (or Hermitian) positive-definite matrix, the general GE becomes the Cholesky factorization $A = L L^T$ (in the complex case $A = L L^H$, where $L^H$ is the complex conjugate transpose of L), with
$$l_{kk} = \Big(a_{kk} - \sum_{j=1}^{k-1} l_{kj}^2\Big)^{1/2}, \qquad l_{ik} = \frac{1}{l_{kk}}\Big(a_{ik} - \sum_{j=1}^{k-1} l_{ij}\, l_{kj}\Big), \quad i = k+1, \dots, n.$$
No pivoting is needed.
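A short sketch of these column formulas for a real SPD matrix:

```python
import numpy as np

def cholesky(A):
    """Cholesky factorization A = L L^T of a real SPD matrix, column by
    column, following the formulas above. No pivoting is needed."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    L = np.zeros_like(A)
    for k in range(n):
        L[k, k] = np.sqrt(A[k, k] - L[k, :k] @ L[k, :k])
        for i in range(k + 1, n):
            L[i, k] = (A[i, k] - L[i, :k] @ L[k, :k]) / L[k, k]
    return L
```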

Linear Systems of Equations: Iterative Methods
For sparse, large, full-bandwidth systems (frequent in practice), iterative methods are efficient. They are analogous to the iterative (open) methods obtained for roots of equations: fixed-point, Newton-Raphson, secant.
Example of an iteration equation, from $A\mathbf{x} = \mathbf{b}$:
$$\mathbf{x}^{(k+1)} = \mathbf{x}^{(k)} - \big(A\mathbf{x}^{(k)} - \mathbf{b}\big) = (I - A)\,\mathbf{x}^{(k)} + \mathbf{b}.$$
General stationary iteration formula (see the Jacobi sketch below):
$$\mathbf{x}^{(k+1)} = B\,\mathbf{x}^{(k)} + \mathbf{c}, \qquad k = 0, 1, 2, \dots$$
(ps: B and c could be functions of k, giving a non-stationary iteration).
Compatibility condition for the solution of $A\mathbf{x} = \mathbf{b}$ to be a fixed point: writing $\mathbf{c} = C\mathbf{b}$, one needs $A^{-1}\mathbf{b} = B A^{-1}\mathbf{b} + C\mathbf{b}$ for all $\mathbf{b}$, i.e.
$$(I - B)A^{-1} = C \quad\text{or}\quad B = I - CA.$$
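As one concrete stationary scheme, a minimal Jacobi-iteration sketch (the splitting B = I - D^{-1} A, c = D^{-1} b, with D the diagonal of A; the tolerance and iteration cap are arbitrary choices):

```python
import numpy as np

def jacobi(A, b, x0=None, tol=1e-10, maxit=500):
    """Jacobi iteration x_{k+1} = B x_k + c with B = I - D^{-1} A and
    c = D^{-1} b, where D = diag(A). Converges when rho(B) < 1,
    e.g. for diagonally dominant A."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    d = np.diag(A)
    R = A - np.diag(d)                    # off-diagonal part of A
    x = np.zeros_like(b) if x0 is None else np.array(x0, dtype=float)
    for _ in range(maxit):
        x_new = (b - R @ x) / d           # componentwise Jacobi update
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new
        x = x_new
    return x
```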

Linear Systems of Equations: Iterative Methods
Convergence
Convergence analysis in iteration-matrix form: define the error $\mathbf{e}^{(k)} = \mathbf{x}^{(k)} - \mathbf{x}$. Subtracting the fixed-point relation $\mathbf{x} = B\mathbf{x} + \mathbf{c}$ from $\mathbf{x}^{(k+1)} = B\mathbf{x}^{(k)} + \mathbf{c}$ gives
$$\mathbf{e}^{(k+1)} = B\,\mathbf{e}^{(k)} \quad\Rightarrow\quad \mathbf{e}^{(k)} = B^k\,\mathbf{e}^{(0)}.$$
Sufficient condition for convergence: $\|B\| < 1$, since then $\|\mathbf{e}^{(k)}\| \le \|B\|^k\, \|\mathbf{e}^{(0)}\| \to 0$.

$\|B\| < 1$ for a chosen matrix norm; the infinity norm is often used in practice.
- Maximum column sum: $\|B\|_1 = \max_j \sum_i |b_{ij}|$
- Maximum row sum: $\|B\|_\infty = \max_i \sum_j |b_{ij}|$
- The Frobenius norm (also called the Euclidean norm), $\|B\|_F = \big(\sum_{i,j} b_{ij}^2\big)^{1/2}$, which for matrices differs from:
- The l-2 norm (also called the spectral norm), $\|B\|_2 = \sigma_{\max}(B) = \sqrt{\rho(B^T B)}$
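These four norms are available directly in numpy (the matrix below is a hypothetical iteration matrix):

```python
import numpy as np

B = np.array([[0.5, -0.2],
              [0.1,  0.4]])
print(np.linalg.norm(B, 1))       # maximum column sum
print(np.linalg.norm(B, np.inf))  # maximum row sum
print(np.linalg.norm(B, 'fro'))   # Frobenius (Euclidean) norm
print(np.linalg.norm(B, 2))       # spectral norm: largest singular value
```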

Linear Systems of Equations: Iterative Methods
Convergence: Necessary and Sufficient Condition
Necessary and sufficient condition for convergence: the spectral radius of B is smaller than one,
$$\rho(B) = \max_{i=1,\dots,n} |\lambda_i| < 1, \qquad \lambda_i = \text{eigenvalues of } B_{n \times n}$$
(proof: use the eigendecomposition of B). Since $\rho(B) \le \|B\|$ for every induced norm, $\rho(B) < 1$ also guarantees that $\|B\| < 1$ holds in some matrix norm.
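A quick numerical check of this condition for the Jacobi iteration matrix of a hypothetical SPD system:

```python
import numpy as np

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
B = np.eye(3) - np.diag(1.0 / np.diag(A)) @ A   # Jacobi: B = I - D^{-1} A
rho = np.max(np.abs(np.linalg.eigvals(B)))
print(rho, rho < 1)   # rho(B) < 1 => the Jacobi iteration converges for this A
```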

MIT OpenCourseWare
http://ocw.mit.edu
2.29 Numerical Fluid Mechanics
Spring 2015
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.