CME 302: NUMERICAL LINEAR ALGEBRA FALL 2005/06 LECTURE 8



GENE H. GOLUB

Date: September 20, 2011, version 1.0.

1. Positive Definite Matrices

A matrix $A$ is positive definite if $x^T A x > 0$ for all nonzero $x$. A positive definite matrix has real and positive eigenvalues, and its leading principal submatrices all have positive determinants. From the definition, it is easy to see that all diagonal elements are positive.

To solve the system $Ax = b$ where $A$ is positive definite, we can compute the Cholesky decomposition $A = F^T F$, where $F$ is upper triangular. This decomposition exists if and only if $A$ is symmetric and positive definite. In fact, attempting to compute the Cholesky decomposition of $A$ is an efficient method for checking whether $A$ is symmetric positive definite.

It is important to distinguish the Cholesky decomposition from the square root factorization. A square root of a matrix $A$ is defined as a matrix $S$ such that $S^2 = SS = A$. Note that the matrix $F$ in $A = F^T F$ is not the square root of $A$, since it does not hold that $F^2 = A$ unless $A$ is a diagonal matrix. The square root of a symmetric positive definite $A$ can be computed by using the fact that $A$ has an eigendecomposition $A = U \Lambda U^T$, where $\Lambda$ is a diagonal matrix whose diagonal elements are the positive eigenvalues of $A$ and $U$ is an orthogonal matrix whose columns are the eigenvectors of $A$. It follows that
\[ A = U \Lambda U^T = (U \Lambda^{1/2} U^T)(U \Lambda^{1/2} U^T) = SS, \]
and so $S = U \Lambda^{1/2} U^T$ is a square root of $A$.

2. The Cholesky Decomposition

The Cholesky decomposition can be computed directly from the matrix equation $A = F^T F$. Examining this equation on an element-by-element basis yields the equations
\[ a_{11} = f_{11}^2, \qquad a_{1j} = f_{11} f_{1j}, \qquad a_{kk} = f_{1k}^2 + f_{2k}^2 + \cdots + f_{kk}^2, \qquad a_{kj} = f_{1k} f_{1j} + \cdots + f_{kk} f_{kj}, \]
and the resulting algorithm, which runs for $k = 1, \ldots, n$:
\[ f_{kk} = \Bigl( a_{kk} - \sum_{j=1}^{k-1} f_{jk}^2 \Bigr)^{1/2}, \qquad f_{kj} = \Bigl( a_{kj} - \sum_{l=1}^{k-1} f_{lk} f_{lj} \Bigr) \Big/ f_{kk}, \quad j = k+1, \ldots, n. \]
This algorithm requires roughly half as many operations as Gaussian elimination. So if $A$ is symmetric positive definite, then we can compute the decomposition $A = F^T F$, known as the Cholesky decomposition.
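The element-wise equations above translate directly into a short program. The following is a minimal sketch in Python with NumPy (my own illustration, not part of the original notes; the function name cholesky_upper is arbitrary). It builds the upper triangular factor $F$ one row at a time and raises an error as soon as a nonpositive pivot appears, which is precisely the test for symmetric positive definiteness mentioned in Section 1.

    import numpy as np

    def cholesky_upper(A):
        """Return upper triangular F with A = F^T F; raise if A is not SPD."""
        A = np.asarray(A, dtype=float)
        n = A.shape[0]
        F = np.zeros((n, n))
        for k in range(n):
            # f_kk = (a_kk - sum_{j=1}^{k-1} f_jk^2)^(1/2)
            pivot = A[k, k] - np.dot(F[:k, k], F[:k, k])
            if pivot <= 0.0:
                raise np.linalg.LinAlgError("matrix is not symmetric positive definite")
            F[k, k] = np.sqrt(pivot)
            # f_kj = (a_kj - sum_{l=1}^{k-1} f_lk f_lj) / f_kk,  j = k+1, ..., n
            for j in range(k + 1, n):
                F[k, j] = (A[k, j] - np.dot(F[:k, k], F[:k, j])) / F[k, k]
        return F

For example, cholesky_upper([[4.0, 2.0], [2.0, 3.0]]) returns a factor with $F^T F$ equal to the input, while passing an indefinite matrix raises the error, illustrating the check described above.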

In fact, there are several ways to write $A = GG^T$ for some matrix $G$, since
\[ A = F F^T = F Q Q^T F^T = (FQ)(FQ)^T = GG^T \]
for any orthogonal matrix $Q$; but for the Cholesky decomposition we require that $F$ be lower triangular, with positive diagonal elements. (Writing $A = FF^T$ with $F$ lower triangular is simply the transpose of the convention $A = F^T F$ with $F$ upper triangular used above.) We can compute $F$ by examining the matrix equation $A = FF^T$ on an element-by-element basis, writing
\[
\begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & & \vdots \\ \vdots & & \ddots & \\ a_{n1} & \cdots & & a_{nn} \end{pmatrix}
=
\begin{pmatrix} f_{11} & & & \\ f_{21} & f_{22} & & \\ \vdots & & \ddots & \\ f_{n1} & f_{n2} & \cdots & f_{nn} \end{pmatrix}
\begin{pmatrix} f_{11} & f_{21} & \cdots & f_{n1} \\ & f_{22} & & f_{n2} \\ & & \ddots & \vdots \\ & & & f_{nn} \end{pmatrix}.
\]
From the above matrix multiplication we see that $f_{11}^2 = a_{11}$, from which it follows that $f_{11} = \sqrt{a_{11}}$. From the relationship $f_{11} f_{i1} = a_{i1}$ and the fact that we already know $f_{11}$, we obtain
\[ f_{i1} = \frac{a_{i1}}{f_{11}}, \quad i = 2, \ldots, n. \]
Proceeding to the second column of $F$, we see that $f_{21}^2 + f_{22}^2 = a_{22}$. Since we already know $f_{21}$, we have $f_{22} = \sqrt{a_{22} - f_{21}^2}$. Next, we use the relation $f_{21} f_{i1} + f_{22} f_{i2} = a_{i2}$ to compute
\[ f_{i2} = \frac{a_{i2} - f_{21} f_{i1}}{f_{22}}. \]
In general, we can use the relationship $a_{ij} = f_i^T f_j$ to compute $f_{ij}$, where $f_i$ is the $i$th column of $F^T$ (that is, the $i$th row of $F$).

Another method for computing the Cholesky decomposition is to compute
\[ f_1 = \frac{1}{\sqrt{a_{11}}}\, a_1, \]
where $a_i$ is the $i$th column of $A$. Then we set $A^{(1)} = A$ and compute
\[ A^{(2)} = A^{(1)} - f_1 f_1^T = \begin{pmatrix} 0 & 0 \\ 0 & \tilde{A}_2 \end{pmatrix}. \]
Note that
\[ A^{(1)} = B \begin{pmatrix} 1 & 0 \\ 0 & \tilde{A}_2 \end{pmatrix} B^T, \]
where $B$ is the identity matrix with its first column replaced by $f_1$. Writing $C = B^{-1}$, we see that $\tilde{A}_2$ is positive definite, since
\[ \begin{pmatrix} 1 & 0 \\ 0 & \tilde{A}_2 \end{pmatrix} = C A C^T \]
is positive definite. So we may repeat the process on $\tilde{A}_2$. We partition the matrix $A^{(2)}$ into columns, writing $A^{(2)} = \begin{pmatrix} 0 & a_2^{(2)} & a_3^{(2)} & \cdots & a_n^{(2)} \end{pmatrix}$, and compute
\[ f_2 = \frac{1}{\sqrt{a_{22}^{(2)}}}\, a_2^{(2)}. \]
We then compute $A^{(3)} = A^{(2)} - f_2 f_2^T$, and so on.
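The column-at-a-time procedure just described, repeatedly subtracting the rank-one matrix $f_k f_k^T$, can also be written compactly. Here is a minimal NumPy sketch (my own illustration, not from the notes), using the convention $A = FF^T$ with $F$ lower triangular:

    import numpy as np

    def cholesky_outer_product(A):
        """Return lower triangular F with A = F F^T via rank-one updates."""
        Ak = np.array(A, dtype=float)   # working copy, plays the role of A^(k)
        n = Ak.shape[0]
        F = np.zeros((n, n))
        for k in range(n):
            if Ak[k, k] <= 0.0:
                raise np.linalg.LinAlgError("matrix is not symmetric positive definite")
            # f_k = a_k^(k) / sqrt(a_kk^(k)): scaled k-th column of the current residual
            F[:, k] = Ak[:, k] / np.sqrt(Ak[k, k])
            # A^(k+1) = A^(k) - f_k f_k^T zeroes out row k and column k of the residual
            Ak -= np.outer(F[:, k], F[:, k])
        return F

Because each update annihilates one more row and column of the residual, the vectors $f_1, f_2, \ldots$ computed this way are exactly the columns of the lower triangular Cholesky factor.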

Note that
\[ a_{kk} = f_{k1}^2 + f_{k2}^2 + \cdots + f_{kk}^2, \]
which implies that $f_{ki}^2 \le a_{kk}$. In other words, the elements of $F$ are bounded. We also have the relationship
\[ \det A = \det F \det F^T = (\det F)^2 = f_{11}^2 f_{22}^2 \cdots f_{nn}^2. \]

Is the Cholesky decomposition unique? Employing a similar approach to the one used to prove the uniqueness of the LU decomposition, we assume that $A$ has two Cholesky decompositions, $A = F_1 F_1^T = F_2 F_2^T$. Then $F_2^{-1} F_1 = F_2^T F_1^{-T}$, but since $F_1$ and $F_2$ are lower triangular, both matrices must be diagonal. Let $F_2^{-1} F_1 = D = F_2^T F_1^{-T}$. So $F_1 = F_2 D$, and thus $F_1^T = D F_2^T$, and we get $D^{-1} = F_2^T F_1^{-T}$. In other words, $D^{-1} = D$, or $D^2 = I$. Hence $D$ must have diagonal elements equal to $\pm 1$. Since we require that the diagonal elements be positive, it follows that the decomposition is unique.

In computing the Cholesky decomposition, no row interchanges are necessary because $A$ is positive definite, so the number of operations required to compute $F$ is approximately $n^3/3$.

A variant of the Cholesky decomposition is known as the square-root-free Cholesky decomposition, which has the form $A = LDL^T$, where $L$ is a unit lower triangular matrix and $D$ is a diagonal matrix with positive diagonal elements. This is a special case of the $A = LDM^T$ factorization previously discussed. The $LDL^T$ and Cholesky decompositions are related by $F = L D^{1/2}$.

3. Banded Matrices

A banded matrix has all of its nonzero elements contained within a band consisting of select diagonals. Specifically, a matrix $A$ that has upper bandwidth $p$ and lower bandwidth $q$ has the form
\[
A = \begin{pmatrix}
a_{11} & \cdots & a_{1,p+1} & & \\
\vdots & a_{22} & & a_{2,p+2} & \\
a_{q+1,1} & & a_{q+1,q+1} & & \ddots \\
 & \ddots & & \ddots & \\
 & & & & a_{nn}
\end{pmatrix},
\]
that is, $a_{ij} = 0$ whenever $j - i > p$ or $i - j > q$. Matrices of this form arise frequently from the discretization of partial differential equations. The simplest banded matrix is a tridiagonal matrix, which has upper bandwidth 1 and lower bandwidth 1. Such a matrix can be stored using only three vectors instead of a two-dimensional array. Computing the LU decomposition of a tridiagonal matrix without pivoting requires only $O(n)$ operations, and produces bidiagonal $L$ and $U$. When pivoting is used, this desirable structure is lost, and the process as a whole is more expensive in terms of computation time and storage space.
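To make the tridiagonal case concrete, here is a minimal sketch (my own NumPy illustration, not part of the notes) of the $O(n)$ LU decomposition of a tridiagonal matrix stored as three vectors. No pivoting is performed, and the resulting factors are bidiagonal, so they fit in the same amount of storage.

    import numpy as np

    def tridiagonal_lu(sub, diag, sup):
        """LU decomposition, without pivoting, of the tridiagonal matrix with
        subdiagonal `sub`, diagonal `diag`, and superdiagonal `sup`.

        Returns (l, u): L is unit lower bidiagonal with subdiagonal l, and
        U is upper bidiagonal with diagonal u and superdiagonal `sup` (unchanged).
        The cost is O(n) operations.
        """
        n = len(diag)
        l = np.zeros(n - 1)
        u = np.zeros(n)
        u[0] = diag[0]
        for i in range(1, n):
            l[i - 1] = sub[i - 1] / u[i - 1]        # multiplier eliminating the (i, i-1) entry
            u[i] = diag[i] - l[i - 1] * sup[i - 1]  # updated pivot
        return l, u

Forward and back substitution with these bidiagonal factors then solve $Ax = b$ in $O(n)$ operations as well. The loop divides by $u_{i-1}$ without any pivoting, which is safe, for example, when the matrix is symmetric positive definite or diagonally dominant.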

Various applications, such as the solution of partial differential equations in two or more space dimensions, yield symmetric block tridiagonal matrices, which have a block Cholesky decomposition:
\[
\begin{pmatrix}
A_1 & B_2^T & & \\
B_2 & A_2 & \ddots & \\
 & \ddots & \ddots & B_n^T \\
 & & B_n & A_n
\end{pmatrix}
=
\begin{pmatrix}
F_1 & & & \\
G_2 & F_2 & & \\
 & \ddots & \ddots & \\
 & & G_n & F_n
\end{pmatrix}
\begin{pmatrix}
F_1^T & G_2^T & & \\
 & F_2^T & \ddots & \\
 & & \ddots & G_n^T \\
 & & & F_n^T
\end{pmatrix}.
\]
From the above matrix equation, we determine that
\[ A_1 = F_1 F_1^T, \qquad B_2 = G_2 F_1^T, \]
from which it follows that we can compute the Cholesky decomposition of $A_1$ to obtain $F_1$, and then compute $G_2 = B_2 (F_1^T)^{-1}$. Next, we use the relationship $A_2 = G_2 G_2^T + F_2 F_2^T$ to obtain
\[ F_2 F_2^T = A_2 - G_2 G_2^T = A_2 - B_2 (F_1^T)^{-1} F_1^{-1} B_2^T = A_2 - B_2 A_1^{-1} B_2^T. \]
It is interesting to note that in the case of $n = 2$, the matrix $A_2 - B_2 A_1^{-1} B_2^T$ is known as the Schur complement of $A_1$.

Continuing with the block tridiagonal case with $n = 2$, suppose that we wish to compute the factorization
\[
\begin{pmatrix} A & B^T \\ B & 0 \end{pmatrix}
=
\begin{pmatrix} F \\ G \end{pmatrix}
\begin{pmatrix} F^T & G^T \end{pmatrix}
+
\begin{pmatrix} 0 & 0 \\ 0 & X \end{pmatrix}.
\]
It is easy to see that $X = -B A^{-1} B^T$, but this matrix is negative definite. Therefore, we cannot compute a block Cholesky decomposition, but we can achieve the factorization
\[
\begin{pmatrix} A & B^T \\ B & 0 \end{pmatrix}
=
\begin{pmatrix} F & 0 \\ G & K \end{pmatrix}
\begin{pmatrix} F^T & G^T \\ 0 & -K^T \end{pmatrix},
\]
where $K$ is the Cholesky factor of the positive definite matrix $B A^{-1} B^T$.

4. Parallelism of Gaussian Elimination

Suppose that we wish to perform Gaussian elimination on the matrix $A = \begin{pmatrix} a_1 & a_2 & \cdots & a_n \end{pmatrix}$, partitioned by columns. During the first step of the elimination, we compute
\[ P^{(1)} \Pi_1 A = \begin{pmatrix} P^{(1)} \Pi_1 a_1 & P^{(1)} \Pi_1 a_2 & \cdots & P^{(1)} \Pi_1 a_n \end{pmatrix}. \]
Clearly we can work on each column independently, leading to a parallel algorithm. As the elimination proceeds, we obtain less benefit from parallelism, since fewer columns are being modified at each step.

5. Error Analysis of Gaussian Elimination

Suppose that we wish to solve the system $Ax = b$. Our computed solution $\bar{x}$ satisfies a perturbed system $(A + \Delta)\bar{x} = b$. It can be shown that
\[
\frac{\|x - \bar{x}\|}{\|x\|}
\le \frac{\|A^{-1}\| \|\Delta\|}{1 - \|A^{-1}\| \|\Delta\|}
= \frac{\|A\| \|A^{-1}\| \, \|\Delta\| / \|A\|}{1 - \|A\| \|A^{-1}\| \, \|\Delta\| / \|A\|}
= \frac{\kappa(A)\, r}{1 - \kappa(A)\, r},
\]
where $\kappa(A) = \|A\| \|A^{-1}\|$ is the condition number of $A$ and $r = \|\Delta\| / \|A\|$. The condition number has the following properties:
- $\kappa(\alpha A) = \kappa(A)$, where $\alpha$ is a nonzero scalar;
- $\kappa(I) = 1$;
- $\kappa(Q) = 1$ when $Q^T Q = I$ (in the 2-norm).
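The bound above is easy to probe numerically. The following minimal sketch (my own NumPy illustration, not part of the notes) perturbs a well-conditioned matrix by a small $\Delta$, solves the perturbed system, and compares the observed relative error in the 2-norm against $\kappa(A)r/(1 - \kappa(A)r)$:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 50
    A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned test matrix
    x = rng.standard_normal(n)
    b = A @ x

    Delta = 1e-8 * rng.standard_normal((n, n))        # small perturbation of A
    x_bar = np.linalg.solve(A + Delta, b)             # computed solution of (A + Delta) x_bar = b

    kappa = np.linalg.cond(A, 2)                      # kappa(A) = ||A|| ||A^{-1}|| in the 2-norm
    r = np.linalg.norm(Delta, 2) / np.linalg.norm(A, 2)

    observed = np.linalg.norm(x - x_bar) / np.linalg.norm(x)
    bound = kappa * r / (1 - kappa * r)
    print(observed, bound)   # the observed relative error stays below the bound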

The perturbation matrix $\Delta$ is typically a function of the algorithm used to solve $Ax = b$. In this section, we will consider the case of Gaussian elimination and perform a detailed error analysis, illustrating the analysis originally carried out by J. H. Wilkinson. The process of solving $Ax = b$ consists of three stages:

(1) Factoring $A = LU$, resulting in an approximate LU decomposition $A + E = \bar{L}\bar{U}$.
(2) Solving $Ly = b$, or, numerically, computing $y$ such that $(\bar{L} + \delta\bar{L})(y + \delta y) = b$.
(3) Solving $Ux = y$, or, numerically, computing $x$ such that $(\bar{U} + \delta\bar{U})(x + \delta x) = y + \delta y$.

Combining these stages, we see that
\[
\begin{aligned}
b &= (\bar{L} + \delta\bar{L})(\bar{U} + \delta\bar{U})(x + \delta x) \\
  &= (\bar{L}\bar{U} + \delta\bar{L}\,\bar{U} + \bar{L}\,\delta\bar{U} + \delta\bar{L}\,\delta\bar{U})(x + \delta x) \\
  &= (A + E + \delta\bar{L}\,\bar{U} + \bar{L}\,\delta\bar{U} + \delta\bar{L}\,\delta\bar{U})(x + \delta x) \\
  &= (A + \Delta)(x + \delta x),
\end{aligned}
\]
where $\Delta = E + \delta\bar{L}\,\bar{U} + \bar{L}\,\delta\bar{U} + \delta\bar{L}\,\delta\bar{U}$.

Department of Computer Science, Gates Building 2B, Room 280, Stanford, CA 94305-9025
E-mail address: golub@stanford.edu