Numerical Linear Algebra, Chap. 4: Perturbation and Regularisation

Heinrich Voss
Hamburg University of Technology, Institute of Numerical Simulation (TUHH)
Sensitivity of linear systems

Consider the linear system of equations

    $Ax = b$,    (1)

where $A \in \mathbb{R}^{n \times n}$ is a nonsingular matrix, and a perturbed system

    $(A + \Delta A)(x + \Delta x) = b + \Delta b$.    (2)

Our aim is to examine how perturbations of A and of b affect the solution of the system.
Remarks

Small perturbations always have to be kept in mind when solving practical problems, since

- the data A and/or b may be obtained from measurements and are therefore erroneous;
- on a computer, the representation of the data as floating point numbers always introduces errors.

Hence one always has to assume that one solves a perturbed linear system instead of the given one. Usually, however, the perturbations are quite small.
Perturbation lemma

Lemma. Let $B \in \mathbb{R}^{n \times n}$, and assume that for some vector norm and the associated matrix norm the inequality $\|B\| < 1$ is satisfied. Then the matrix $I - B$ is nonsingular, and it holds that

    $\|(I - B)^{-1}\| \le \dfrac{1}{1 - \|B\|}$.
Proof

For every $x \in \mathbb{R}^n$, $x \ne 0$,

    $\|(I - B)x\| \ge \|x\| - \|Bx\| \ge \|x\| - \|B\|\,\|x\| = (1 - \|B\|)\,\|x\| > 0$.

Therefore the linear system $(I - B)x = 0$ has the unique solution $x = 0$, and $I - B$ is nonsingular.

The estimate of the norm of the inverse of $I - B$ follows from

    $1 = \|(I - B)^{-1}(I - B)\| = \|(I - B)^{-1} - (I - B)^{-1}B\| \ge \|(I - B)^{-1}\| - \|(I - B)^{-1}B\| \ge \|(I - B)^{-1}\| - \|(I - B)^{-1}\|\,\|B\| = (1 - \|B\|)\,\|(I - B)^{-1}\|$.
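The lemma can be checked numerically. A minimal sketch in NumPy; the matrix B below is an arbitrary random test matrix of my own choosing (not from the slides), scaled so that $\|B\|_2 = 0.9 < 1$:

```python
import numpy as np

# Numerical check of the perturbation lemma: if ||B|| < 1, then I - B is
# nonsingular and ||(I - B)^{-1}|| <= 1 / (1 - ||B||).
rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
B *= 0.9 / np.linalg.norm(B, 2)          # scale so that ||B||_2 = 0.9

norm_B = np.linalg.norm(B, 2)
norm_inv = np.linalg.norm(np.linalg.inv(np.eye(5) - B), 2)
bound = 1.0 / (1.0 - norm_B)
print(norm_inv, "<=", bound)
```

Here `np.linalg.norm(., 2)` is the spectral norm, i.e. the matrix norm subordinate to the Euclidean vector norm.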
Corollary

Let $A \in \mathbb{R}^{n \times n}$ be a nonsingular matrix, and let $\Delta A \in \mathbb{R}^{n \times n}$. Assume that $\|A^{-1}\|\,\|\Delta A\| < 1$ for a matrix norm which is subordinate to some vector norm. Then $A + \Delta A$ is nonsingular, and it holds that

    $\|(A + \Delta A)^{-1}\| \le \dfrac{\|A^{-1}\|}{1 - \|A^{-1}\Delta A\|} \le \dfrac{\|A^{-1}\|}{1 - \|A^{-1}\|\,\|\Delta A\|}$.
Proof

The existence of $(A + \Delta A)^{-1}$ follows from the perturbation lemma, since

    $\|A^{-1}\Delta A\| \le \|A^{-1}\|\,\|\Delta A\| < 1$

and $A + \Delta A = A(I + A^{-1}\Delta A)$. Hence

    $\|(A + \Delta A)^{-1}\| = \|(I + A^{-1}\Delta A)^{-1} A^{-1}\| \le \|(I + A^{-1}\Delta A)^{-1}\|\,\|A^{-1}\| \le \dfrac{\|A^{-1}\|}{1 - \|A^{-1}\Delta A\|} \le \dfrac{\|A^{-1}\|}{1 - \|A^{-1}\|\,\|\Delta A\|}$.
Remark

The Corollary demonstrates that, for a nonsingular matrix A, the perturbed matrix $A + \Delta A$ is also nonsingular if the perturbation $\Delta A$ is sufficiently small.
Perturbed linear system

We consider the perturbed linear system $(A + \Delta A)(x + \Delta x) = b + \Delta b$, and we assume that the perturbation $\Delta A$ is so small that the condition of the Corollary is satisfied. Then $A + \Delta A$ is nonsingular.

Solving for $\Delta x$, one obtains the absolute error caused by the perturbations of A and b:

    $\Delta x = (A + \Delta A)^{-1}(\Delta b - \Delta A\,x) = (I + A^{-1}\Delta A)^{-1} A^{-1} (\Delta b - \Delta A\,x)$.

Hence, with an arbitrary vector norm and the subordinate matrix norm, we obtain

    $\|\Delta x\| \le \|(I + A^{-1}\Delta A)^{-1}\|\,\|A^{-1}\|\,\big(\|\Delta b\| + \|\Delta A\|\,\|x\|\big)$.
Perturbed linear system ct.

For $b \ne 0$, and as a consequence $x \ne 0$, it holds for the relative error $\|\Delta x\|/\|x\|$ that

    $\dfrac{\|\Delta x\|}{\|x\|} \le \|(I + A^{-1}\Delta A)^{-1}\|\,\|A^{-1}\| \left( \dfrac{\|\Delta b\|}{\|x\|} + \|\Delta A\| \right)$,    (3)

and the Corollary yields

    $\dfrac{\|\Delta x\|}{\|x\|} \le \dfrac{\|A^{-1}\|}{1 - \|A^{-1}\|\,\|\Delta A\|} \left( \dfrac{\|\Delta b\|}{\|x\|} + \|\Delta A\| \right) \le \dfrac{\|A^{-1}\|\,\|A\|}{1 - \|A^{-1}\|\,\|\Delta A\|} \left( \dfrac{\|\Delta A\|}{\|A\|} + \dfrac{\|\Delta b\|}{\|b\|} \right)$.    (4)

Hence, for small perturbations (such that the denominator does not deviate much from 1), the relative error $\|\Delta b\|/\|b\|$ of the right-hand side and the relative error $\|\Delta A\|/\|A\|$ of the system matrix are amplified by the factor $\|A^{-1}\|\,\|A\|$. This amplification factor is called the condition of the matrix A.
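The amplification in (4) can be observed directly. A sketch, perturbing only the right-hand side of an ill-conditioned system; the test matrix here is my own choice, not from the slides:

```python
import numpy as np

# Observe ||dx|| / ||x|| <= kappa(A) * ||db|| / ||b||  (case Delta A = 0)
# on an ill-conditioned 2x2 test matrix.
A = np.array([[1.0, 1.0],
              [1.0, 1.001]])
x = np.array([1.0, 1.0])
b = A @ x

db = np.array([0.0, 1e-6])               # tiny perturbation of b only
x_pert = np.linalg.solve(A, b + db)

kappa = np.linalg.cond(A, np.inf)
rel_err = np.linalg.norm(x_pert - x, np.inf) / np.linalg.norm(x, np.inf)
rel_pert = np.linalg.norm(db, np.inf) / np.linalg.norm(b, np.inf)
print(rel_err, rel_pert, kappa)          # amplification rel_err/rel_pert ~ kappa
```

The observed amplification factor is on the order of $\kappa_\infty(A) \approx 4 \cdot 10^3$, while the perturbation itself is of order $10^{-6}$.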
Definition

Let $A \in \mathbb{C}^{n \times n}$ be a nonsingular matrix, and let $\|\cdot\|$ be a matrix norm on $\mathbb{C}^{n \times n}$ which is subordinate to some vector norm. Then

    $\kappa(A) := \|A^{-1}\|\,\|A\|$

is called the condition of the matrix A (or of the linear system of equations (1)) corresponding to the norm $\|\cdot\|$.

Remark

For every nonsingular matrix A and every subordinate norm it holds that $\kappa(A) \ge 1$, because

    $1 = \|I\| = \|A A^{-1}\| \le \|A\|\,\|A^{-1}\| = \kappa(A)$.
Theorem

Let $A, \Delta A \in \mathbb{R}^{n \times n}$ and $b, \Delta b \in \mathbb{R}^n$, $b \ne 0$, such that A is nonsingular, and assume that $\|A^{-1}\|\,\|\Delta A\| < 1$ for some matrix norm which is subordinate to some vector norm. Let x and $x + \Delta x$ be the solutions of the linear system (1) and of the perturbed system (2), respectively. Then the following estimate of the relative error holds:

    $\dfrac{\|\Delta x\|}{\|x\|} \le \dfrac{\kappa(A)}{1 - \kappa(A)\,\dfrac{\|\Delta A\|}{\|A\|}} \left( \dfrac{\|\Delta A\|}{\|A\|} + \dfrac{\|\Delta b\|}{\|b\|} \right)$,

where $\kappa(A) := \|A\|\,\|A^{-1}\|$ denotes the condition of A.
Remark

Assume that the mantissa length (i.e. the number of leading digits in the floating point representation) of our computer is $\ell$. Then the relative input error of A and b is about $5 \cdot 10^{-\ell}$.

If $\kappa(A) = 10^\gamma$, then (not considering the rounding errors which occur in the numerical method for solving the linear system) we have to expect a relative error of approximately $5 \cdot 10^{\gamma - \ell}$ in a numerical solution of the linear system $Ax = b$.

Roughly speaking, when solving a linear system numerically we lose $\gamma$ digits if the order of magnitude of the condition of the system matrix A is $10^\gamma$. This loss of accuracy has nothing to do with the algorithm of choice; it is inherent in the problem.
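This "losing $\gamma$ digits" rule can be illustrated with Hilbert matrices, whose condition numbers grow rapidly with the dimension. A sketch (the Hilbert-matrix example is mine, not from the slides):

```python
import numpy as np

# Solve H x = H @ ones(n) numerically and compare with the exact solution
# x = ones(n); the error grows roughly like kappa(H) * machine epsilon.
def hilbert(n):
    """Hilbert matrix H[i, j] = 1 / (i + j - 1), 1-based indices."""
    i = np.arange(1, n + 1)
    return 1.0 / (i[:, None] + i[None, :] - 1.0)

for n in (4, 8, 12):
    H = hilbert(n)
    x = np.ones(n)
    x_num = np.linalg.solve(H, H @ x)
    gamma = np.log10(np.linalg.cond(H))
    rel_err = np.linalg.norm(x_num - x, np.inf)
    print(f"n={n:2d}  log10 kappa = {gamma:5.1f}  error = {rel_err:.1e}")
```

With roughly 16 decimal digits of double precision, $n = 12$ (condition around $10^{16}$) leaves essentially no correct digits.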
Example

Consider the linear system of equations

    $\begin{pmatrix} 1 & 1 \\ 1 & 0.999 \end{pmatrix} x = \begin{pmatrix} 2 \\ 1.999 \end{pmatrix}$,

which obviously has the solution $x = (1, 1)^T$. For $x + \Delta x := (5, -3.002)^T$ it holds that

    $A(x + \Delta x) = \begin{pmatrix} 1.998 \\ 2.001002 \end{pmatrix} =: b + \Delta b$.

Hence

    $\dfrac{\|\Delta b\|_\infty}{\|b\|_\infty} = 1.001 \cdot 10^{-3}$   and   $\dfrac{\|\Delta x\|_\infty}{\|x\|_\infty} = 4.002$,

Example ct.

and it follows for the condition that $\kappa_\infty(A) \ge 4.002 / (1.001 \cdot 10^{-3}) \approx 3998$. Indeed,

    $A^{-1} = \begin{pmatrix} -999 & 1000 \\ 1000 & -1000 \end{pmatrix}$,

and therefore $\kappa_\infty(A) = \|A\|_\infty \,\|A^{-1}\|_\infty = 2 \cdot 2000 = 4000$.

This example demonstrates that the estimate of the relative error of the solution of a perturbed system is sharp.
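A quick numerical check of an example of this kind; the garbled slide entries are assumed here to be $A = \begin{pmatrix} 1 & 1 \\ 1 & 0.999 \end{pmatrix}$, an assumption reconstructed from the inverse with entries $\pm 999$, $\pm 1000$:

```python
import numpy as np

# Assumed reconstruction of the 2x2 example: A = [[1, 1], [1, 0.999]].
A = np.array([[1.0, 1.0],
              [1.0, 0.999]])
x = np.array([1.0, 1.0])
b = A @ x                                 # b = (2, 1.999)^T

x_pert = np.array([5.0, -3.002])          # the perturbed "solution"
b_pert = A @ x_pert                       # the corresponding right-hand side

rel_dx = np.linalg.norm(x_pert - x, np.inf) / np.linalg.norm(x, np.inf)
rel_db = np.linalg.norm(b_pert - b, np.inf) / np.linalg.norm(b, np.inf)
kappa = np.linalg.cond(A, np.inf)
print(rel_dx, rel_db, kappa)              # rel_dx / rel_db is close to kappa
```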
Geometric condition

The following theorem contains a geometric characterisation of the condition number. It says that the relative distance of a nonsingular matrix to the closest singular matrix, with respect to the Euclidean norm, is the reciprocal of the condition number.

Theorem

Let $A \in \mathbb{R}^{n \times n}$ be nonsingular. Then it holds that

    $\min \left\{ \dfrac{\|\Delta A\|_2}{\|A\|_2} : A + \Delta A \text{ singular} \right\} = \dfrac{1}{\kappa_2(A)}$.
Proof

It suffices to prove that

    $\min\{ \|\Delta A\|_2 : A + \Delta A \text{ singular} \} = 1 / \|A^{-1}\|_2$.

That the minimum is at least $1/\|A^{-1}\|_2$ follows from the perturbation lemma: for $\|\Delta A\|_2 < 1/\|A^{-1}\|_2$ it holds that

    $1 > \|\Delta A\|_2 \,\|A^{-1}\|_2 \ge \|A^{-1}\Delta A\|_2$.

Hence $I + A^{-1}\Delta A$ is nonsingular, and by

    $A + \Delta A = A(I + A^{-1}\Delta A)$

the matrix $A + \Delta A$ is invertible as well.
Proof ct.

We now construct a matrix $\Delta A$ such that $A + \Delta A$ is singular and $\|\Delta A\|_2 = 1/\|A^{-1}\|_2$, which demonstrates that the minimum is less than or equal to $1/\|A^{-1}\|_2$.

From

    $\|A^{-1}\|_2 = \max_{x \ne 0} \dfrac{\|A^{-1}x\|_2}{\|x\|_2}$

it follows that there exists x satisfying $\|x\|_2 = 1$ and $\|A^{-1}\|_2 = \|A^{-1}x\|_2 > 0$. With this x we define

    $y := \dfrac{A^{-1}x}{\|A^{-1}x\|_2} = \dfrac{A^{-1}x}{\|A^{-1}\|_2}$   and   $\Delta A := -\dfrac{x y^T}{\|A^{-1}\|_2}$.

Then it holds that $\|y\|_2 = 1$ and

    $\|\Delta A\|_2 = \max_{z \ne 0} \dfrac{\|x y^T z\|_2}{\|A^{-1}\|_2\,\|z\|_2} = \max_{z \ne 0} \dfrac{|y^T z|}{\|z\|_2} \cdot \dfrac{\|x\|_2}{\|A^{-1}\|_2} = \dfrac{1}{\|A^{-1}\|_2}$,

where the maximum is attained for $z = y$. From

    $(A + \Delta A)y = Ay - \dfrac{x y^T y}{\|A^{-1}\|_2} = \dfrac{x}{\|A^{-1}\|_2} - \dfrac{x}{\|A^{-1}\|_2} = 0$

we obtain the singularity of $A + \Delta A$.
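Using the SVD (introduced below), the minimiser can be written down explicitly: deleting the smallest singular value of A yields a singular matrix at distance $\sigma_n = 1/\|A^{-1}\|_2$. A sketch on a random test matrix (my own example):

```python
import numpy as np

# The singular matrix nearest to A in the 2-norm: subtract the rank-one
# term sigma_n * u_n * v_n^T belonging to the smallest singular value.
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))

U, s, Vt = np.linalg.svd(A)
A_sing = A - s[-1] * np.outer(U[:, -1], Vt[-1, :])   # rank-deficient

dist = np.linalg.norm(A - A_sing, 2)                 # equals sigma_n
print(dist, 1.0 / np.linalg.norm(np.linalg.inv(A), 2))
```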
Least squares problems

Theorem

Let $A = U \Sigma V^T$ be the singular value decomposition of $A \in \mathbb{R}^{m \times n}$, where $\sigma_1 \ge \sigma_2 \ge \dots \ge \sigma_r > \sigma_{r+1} = \dots = \sigma_{\min(m,n)} = 0$. Then it holds that

(i) $\operatorname{rank}(A) = r$,
(ii) $\operatorname{null}(A) := \{x \in \mathbb{R}^n : Ax = 0\} = \operatorname{span}\{v_{r+1}, \dots, v_n\}$,
(iii) $\operatorname{range}(A) := \{Ax : x \in \mathbb{R}^n\} = \operatorname{span}\{u_1, \dots, u_r\}$,
(iv) $A = \sum_{i=1}^r \sigma_i u_i v_i^T = U_r \Sigma_r V_r^T$ with $U_r = (u_1, \dots, u_r)$, $V_r = (v_1, \dots, v_r)$, $\Sigma_r = \operatorname{diag}(\sigma_1, \dots, \sigma_r)$,
(v) $\|A\|_S^2 := \sum_{i=1}^m \sum_{j=1}^n a_{ij}^2 = \sum_{i=1}^r \sigma_i^2$,
(vi) $\|A\|_2 = \sigma_1$.
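The statements of the theorem are easy to verify numerically. A sketch on a random rectangular test matrix (my own example, not from the slides), checking (iv), (v) and (vi):

```python
import numpy as np

# SVD of a 6x4 test matrix; with full_matrices=False, U is 6x4, s has
# length 4, and Vt is 4x4, so A = U @ diag(s) @ Vt.
rng = np.random.default_rng(2)
A = rng.standard_normal((6, 4))

U, s, Vt = np.linalg.svd(A, full_matrices=False)

frob = np.linalg.norm(A, 'fro')          # ||A||_S in the slides' notation
spec = np.linalg.norm(A, 2)              # spectral norm
print(frob, np.sqrt(np.sum(s**2)))       # (v): the two values agree
print(spec, s[0])                        # (vi): the two values agree
```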
Proof

(i): Multiplication by the nonsingular matrices $U^T$ and $V$ does not change the rank of A. Therefore $\operatorname{rank}(A) = \operatorname{rank}(\Sigma) = r$.

(ii): From $V^T v_i = e_i$ it follows that $A v_i = U \Sigma V^T v_i = U \Sigma e_i = 0$ for $i = r+1, \dots, n$. Hence $v_{r+1}, \dots, v_n \in \operatorname{null}(A)$. Since $\dim \operatorname{null}(A) = n - r$, these vectors form a basis of $\operatorname{null}(A)$.
Proof ct.

(iii): From $A = U \Sigma V^T$ we obtain

    $\operatorname{range}(A) = U \operatorname{range}(\Sigma) = U \operatorname{span}(e_1, \dots, e_r) = \operatorname{span}(u_1, \dots, u_r)$.

(iv): Block matrix multiplication yields

    $A = U \Sigma V^T = (u_1, \dots, u_m)\, \Sigma \begin{pmatrix} v_1^T \\ \vdots \\ v_n^T \end{pmatrix} = \sum_{i=1}^r \sigma_i u_i v_i^T$.
Proof ct.

(v): Let $A = (a_1, \dots, a_n)$. Multiplication by the orthogonal matrix $U^T$ does not change the Euclidean length of a vector. Hence

    $\|A\|_S^2 = \sum_{i=1}^n \|a_i\|_2^2 = \sum_{i=1}^n \|U^T a_i\|_2^2 = \|U^T A\|_S^2$.

Similarly, multiplying the rows of $U^T A$ by the orthogonal matrix V from the right does not change their length, from which we get

    $\|A\|_S^2 = \|U^T A V\|_S^2 = \|\Sigma\|_S^2 = \sum_{i=1}^r \sigma_i^2$.

(vi): $\|A\|_2 \le \sigma_1$ (cf. the proof of the existence theorem of the SVD). Moreover,

    $\|A\|_2 = \max\{\|Ax\|_2 : \|x\|_2 = 1\} \ge \|A v_1\|_2 = \sigma_1$,

and therefore $\|A\|_2 = \sigma_1$.
Condition of a matrix

Let $A = U \Sigma V^T$ be the SVD of a nonsingular matrix A. Then $A^{-1} = V \Sigma^{-1} U^T$ is the SVD of $A^{-1}$, from which we get

    $\|A\|_2 = \sigma_1$   and   $\|A^{-1}\|_2 = \dfrac{1}{\sigma_n}$.

Hence the condition of A with respect to the Euclidean norm is

    $\kappa_2(A) = \dfrac{\sigma_1}{\sigma_n}$.
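This is exactly how the condition number is computed in practice; a minimal sketch on a random test matrix (my own example):

```python
import numpy as np

# kappa_2(A) = sigma_1 / sigma_n, computed from the singular values.
rng = np.random.default_rng(3)
A = rng.standard_normal((5, 5))

s = np.linalg.svd(A, compute_uv=False)
kappa2 = s[0] / s[-1]
print(kappa2)
```

NumPy's `np.linalg.cond(A, 2)` returns the same ratio.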
Remark

Let $A \in \mathbb{R}^{n \times n}$ have eigenvalues $\mu_1, \dots, \mu_n$. Then it follows from $A x_i = \mu_i x_i$ that

    $|\mu_i|^2 = \dfrac{(A x_i)^H A x_i}{x_i^H x_i} = \dfrac{x_i^H A^T A x_i}{x_i^H x_i}$.

Rayleigh's principle yields

    $\lambda_{\min} \le \dfrac{x^H A^T A x}{x^H x} \le \lambda_{\max}$   for all $x \in \mathbb{C}^n$, $x \ne 0$,

where $\lambda_{\min} = \sigma_n^2$ and $\lambda_{\max} = \sigma_1^2$ are the minimal and maximal eigenvalues of $A^T A$, respectively. Hence

    $\sigma_n \le |\mu_i| \le \sigma_1$   for every i.

For symmetric A it holds that $\sigma_1 = |\mu_1|$ and $\sigma_n = |\mu_n|$ (with the eigenvalues ordered by modulus). For nonsymmetric matrices this is in general not the case.
Numerical computation

The singular values of A are the square roots of the eigenvalues of $A^T A$. Hence, in principle, the SVD of A can be determined with any eigensolver. To this end, however, one would have to form $A^T A$ and $A A^T$ explicitly, which is costly and which deteriorates the condition number considerably.

In practice one uses the algorithm of Golub and Reinsch (1971), which applies the QR algorithm for computing the eigenvalues of $A^T A$ implicitly, avoiding the explicit computation of $A^T A$ and $A A^T$.
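The deterioration of the condition number is easy to see: since the eigenvalues of $A^T A$ are $\sigma_i^2$, one has $\kappa_2(A^T A) = \kappa_2(A)^2$. A sketch on a random test matrix (my own example):

```python
import numpy as np

# kappa_2(A^T A) equals kappa_2(A)^2: forming the normal-equations matrix
# squares the condition number.
rng = np.random.default_rng(4)
A = rng.standard_normal((50, 5))

kA = np.linalg.cond(A, 2)
kAtA = np.linalg.cond(A.T @ A, 2)
print(kA, kAtA)   # kAtA agrees with kA**2 up to rounding
```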
Data compression

The singular value decomposition can be used for data compression. This is based upon the following theorem.

Theorem

Let $A = U \Sigma V^T$ be the singular value decomposition of $A \in \mathbb{R}^{m \times n}$, and let $U = (u_1, \dots, u_m)$ and $V = (v_1, \dots, v_n)$. Then for $k < n$

    $A_k := \sum_{j=1}^k \sigma_j u_j v_j^T$

is the best approximation of A of rank k with respect to the spectral norm, and it holds that

    $\|A - A_k\|_2 = \sigma_{k+1}$.
Proof

It holds that

    $\|A - A_k\|_2 = \Big\| \sum_{j=k+1}^n \sigma_j u_j v_j^T \Big\|_2 = \|U \operatorname{diag}\{0, \dots, 0, \sigma_{k+1}, \dots, \sigma_n\}\, V^T\|_2 = \sigma_{k+1}$,

and it remains to show that there does not exist a matrix of rank k whose distance to A is less than $\sigma_{k+1}$.

Let B be any matrix with $\operatorname{rank}(B) = k$. Then the dimension of the null space of B is $n - k$. The dimension of $\operatorname{span}\{v_1, \dots, v_{k+1}\}$ is $k + 1$, and therefore the intersection of these two spaces contains a nontrivial vector w with $\|w\|_2 = 1$. Hence

    $\|A - B\|_2^2 \ge \|(A - B)w\|_2^2 = \|Aw\|_2^2 = \|U \Sigma V^T w\|_2^2 = \|\Sigma (V^T w)\|_2^2 \ge \sigma_{k+1}^2 \,\|V^T w\|_2^2 = \sigma_{k+1}^2$.
Data compression ct.

Let $A \in \mathbb{R}^{m \times n}$ be a matrix whose elements $a_{ij}$ are colour values of the pixels of a picture. If $A = U \Sigma V^T$ is the singular value decomposition of A, then

    $A_k = \sum_{j=1}^k \sigma_j u_j v_j^T, \quad k = 1, \dots, \min(m, n),$

is an approximation to A. Storing $A_k$ requires only $k(m + n + 1)$ memory cells (the vectors $u_j$, $v_j$ and the values $\sigma_j$), i.e. a fraction $k(m + n + 1)/(mn)$ of the $mn$ cells required for A.

Notice that using the SVD in this manner is a very simple way of data compression. There are algorithms in image processing which are much less costly.
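A sketch of the compression scheme; a synthetic random "image" is used here in place of real pixel data:

```python
import numpy as np

# Rank-k compression A_k = sum_{j<=k} sigma_j u_j v_j^T of a 64x64 "image".
rng = np.random.default_rng(5)
A = rng.random((64, 64))

U, s, Vt = np.linalg.svd(A, full_matrices=False)

def compress(k):
    """Best rank-k approximation, built from the k leading singular triplets."""
    return (U[:, :k] * s[:k]) @ Vt[:k, :]

k = 10
A_k = compress(k)
storage_ratio = k * (64 + 64 + 1) / (64 * 64)    # k(m+n+1) cells vs. m*n
err = np.linalg.norm(A - A_k, 2)                  # equals sigma_{k+1}
print(storage_ratio, err, s[k])
```

With $k = 10$ the compressed representation needs about 31% of the original storage, and the approximation error in the spectral norm is exactly $\sigma_{11}$.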
Example

Figure: Original
Figure: Compression k = 5 (2.6% of the original storage)
Figure: Compression k = 10 (5.3%)
Figure: Compression k = 20 (10.5%)
Pseudoinverse

Consider the linear least squares problem: Let $A \in \mathbb{R}^{m \times n}$ and $b \in \mathbb{R}^m$ with $m \ge n$. Find $x \in \mathbb{R}^n$ such that

    $\|Ax - b\|_2 = \min!$    (1)

We examine this problem taking advantage of the singular value decomposition. In the following we denote by

    $\sigma_1 \ge \sigma_2 \ge \dots \ge \sigma_r > \sigma_{r+1} = \dots = \sigma_n = 0$

the singular values of A; $A = U \Sigma V^T$ is the singular value decomposition of A, and $u_j$ and $v_j$ are the left and right singular vectors, respectively, i.e. the columns of U and V.
Pseudoinverse ct.

Theorem

Let $c := U^T b \in \mathbb{R}^m$. The set of solutions of the linear least squares problem (1) is

    $L = x^\dagger + \operatorname{null}(A)$,    (2)

where $x^\dagger$ is the following particular solution of (1):

    $x^\dagger := \sum_{i=1}^r \dfrac{c_i}{\sigma_i}\, v_i$.    (3)
Proof: Multiplying a vector by an orthogonal matrix does not change its length. Hence, with z := V^T x it holds that

‖Ax − b‖₂² = ‖U^T(Ax − b)‖₂² = ‖ΣV^T x − U^T b‖₂² = ‖Σz − c‖₂²
           = ‖(σ_1 z_1 − c_1, ..., σ_r z_r − c_r, −c_{r+1}, ..., −c_m)^T‖₂².

Therefore, the solution of problem (1) reads z_i := c_i / σ_i for i = 1, ..., r, and z_i ∈ R arbitrary for i = r + 1, ..., n, i.e.

x = Σ_{i=1}^r (c_i / σ_i) v_i + Σ_{i=r+1}^n z_i v_i,   z_i ∈ R, i = r + 1, ..., n. (4)

Since the trailing n − r columns of V span the null space of A, the set L of solutions of problem (1) has the form (2), (3).
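The theorem can be checked numerically. The following sketch assumes NumPy is available; the rank-deficient matrix A and vector b are made up for the illustration:

```python
import numpy as np

# Hypothetical 4x3 matrix of rank 2, so null(A) is one-dimensional.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
b = np.array([1.0, 2.0, 0.5, 0.25])

U, s, Vt = np.linalg.svd(A)              # A = U diag(s) V^T (full SVD)
r = int(np.sum(s > 1e-10 * s[0]))        # numerical rank
c = U.T @ b                              # c := U^T b

# Particular solution (3): sum of (c_i / sigma_i) v_i over i = 1..r.
x_part = Vt[:r].T @ (c[:r] / s[:r])

# Every element of L = x_part + null(A) attains the same minimal residual;
# the trailing rows of Vt span null(A).
z = np.array([0.7])                      # arbitrary null-space coefficient
x_other = x_part + Vt[r:].T @ z
res_part = np.linalg.norm(A @ x_part - b)
res_other = np.linalg.norm(A @ x_other - b)
```

Both residuals agree, while x_part has the smaller norm; it coincides with the minimum-norm solution that `np.linalg.lstsq` returns.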
Pseudonormal solution

This theorem demonstrates again that the linear least squares problem (1) has a unique solution if and only if r = rank(A) = n. We can enforce uniqueness also in the case r < n by additionally requiring that the Euclidean norm of the solution be minimal.

Definition. Let L be the solution set of the linear least squares problem (1). x̃ ∈ L is called the pseudonormal solution of (1) if ‖x̃‖₂ ≤ ‖x‖₂ for every x ∈ L.
Pseudonormal solution ct.

The representation (4) of the general solution of (1) shows that x̃ in (3) is the pseudonormal solution of (1): since the v_i are orthonormal and x̃ is orthogonal to v_i for i > r,

‖x̃ + Σ_{i=r+1}^n z_i v_i‖₂² = ‖x̃‖₂² + Σ_{i=r+1}^n z_i² ‖v_i‖₂² ≥ ‖x̃‖₂².

The pseudonormal solution is unique, and x̃ obviously is the only solution of (1) with x̃ ⊥ null(A). Hence, we obtained:

Theorem. There exists a unique pseudonormal solution x̃ of problem (1), which is characterized by x̃ ⊥ null(A), x̃ ∈ L.
Pseudoinverse

For every A ∈ R^(m,n),

R^m ∋ b ↦ x̃ ∈ R^n:  ‖Ax̃ − b‖₂ ≤ ‖Ax − b‖₂ for all x ∈ R^n, ‖x̃‖₂ minimal,

defines a mapping which obviously is linear (cf. the representation of x̃ in (3)). Therefore, it can be represented by a matrix A† ∈ R^(n,m).

Definition. For A ∈ R^(m,n), the matrix A† ∈ R^(n,m) such that x̃ := A† b is the pseudonormal solution of the linear least squares problem (1) for every b ∈ R^m is called the pseudoinverse (or Moore-Penrose inverse) of A.
Pseudoinverse ct.

If rank(A) = n and m ≥ n, then the least squares problem (1) is uniquely solvable, and it follows from the normal equations that the solution is x = (A^T A)^{-1} A^T b. Hence, in this case

A† = (A^T A)^{-1} A^T.

If n = m and A is nonsingular, then A† = A^{-1}. Hence, the pseudoinverse coincides with the usual inverse whenever the latter exists, i.e. the pseudoinverse is a consistent extension of the inverse.
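Both special cases are easy to verify numerically. A sketch with NumPy, using made-up random matrices that are full rank almost surely:

```python
import numpy as np

rng = np.random.default_rng(1)

# Full column rank, m > n: pseudoinverse via the normal equations.
A = rng.standard_normal((6, 3))
pinv_via_normal_eq = np.linalg.solve(A.T @ A, A.T)   # (A^T A)^{-1} A^T

# Square nonsingular: pseudoinverse equals the ordinary inverse.
B = rng.standard_normal((4, 4))
pinv_B = np.linalg.pinv(B)
inv_B = np.linalg.inv(B)
```

For well-conditioned A both expressions agree with `np.linalg.pinv(A)`; numerically, of course, the normal-equations route squares the condition number (see the condition section below in spirit, though this code does not rely on it).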
Pseudoinverse ct.

Theorem. Let A ∈ R^(m,n) and let A = UΣV^T be its singular value decomposition, Σ = (σ_i δ_ij)_{i,j}. Then it holds that

(i) Σ† = (τ_i δ_ij)_{j,i}, where τ_i = 1/σ_i if σ_i ≠ 0 and τ_i = 0 if σ_i = 0;

(ii) A† = V Σ† U^T.
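Part (ii) translates directly into code. A minimal sketch with NumPy, using a made-up rank-deficient matrix so that the zero-singular-value branch of (i) is exercised:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 3))   # rank 2, 5x3

U, s, Vt = np.linalg.svd(A, full_matrices=False)
tol = 1e-10 * s[0]                                   # threshold for "zero"
tau = np.array([1.0 / si if si > tol else 0.0 for si in s])  # entries of Sigma^dagger
A_dagger = Vt.T @ np.diag(tau) @ U.T                 # A^dagger = V Sigma^dagger U^T
```

The result matches `np.linalg.pinv` (with the same cutoff) and satisfies the Moore-Penrose identity A A† A = A.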
Pseudoinverse ct.

Remark. The explicit representation of the pseudoinverse is needed only for theoretical considerations and is never computed explicitly (similarly to the inverse of a nonsingular matrix).

Corollary. For every matrix A ∈ R^(m,n) it holds that (A†)† = A and (A†)^T = (A^T)†.

A† has the well-known properties of the inverse A^{-1} of a nonsingular matrix A, with the only exception that in general (AB)† ≠ B† A†.
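The two corollary identities can be spot-checked numerically; a sketch with NumPy on a made-up wide matrix:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 6))
pinv = np.linalg.pinv

double_dagger = pinv(pinv(A))        # (A^dagger)^dagger, expected to equal A
dagger_of_T = pinv(A.T)              # (A^T)^dagger
T_of_dagger = pinv(A).T              # (A^dagger)^T, expected to coincide
```

A spot check like this is of course not a proof, but it catches sign and transposition errors quickly.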
Example

Let

A = ( 1 1 ; 0 0 ) = I · ( √2 0 ; 0 0 ) · (1/√2) ( 1 1 ; 1 −1 ),

which is a singular value decomposition of A (rows separated by semicolons). Its pseudoinverse is

A† = (1/√2) ( 1 1 ; 1 −1 ) · ( 1/√2 0 ; 0 0 ) · I = (1/2) ( 1 0 ; 1 0 ).

Then A² = A and (A†)² = (1/2) A†, i.e. (A²)† = A† ≠ (A†)².
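This counterexample is small enough to check directly; a sketch with NumPy:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 0.0]])
A_dagger = np.linalg.pinv(A)          # equals (1/2) * [[1, 0], [1, 0]]

A_sq = A @ A                          # A^2 = A, hence (A^2)^dagger = A^dagger
dagger_sq = A_dagger @ A_dagger       # (A^dagger)^2 = (1/2) A^dagger
```

Since A² = A but (A†)² = (1/2)A†, the two pseudoinverse expressions differ, confirming that (AB)† ≠ B†A† in general even for B = A.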
Perturbation of least squares problems

Consider the linear least squares problem

‖Ax − b‖₂ = min! (1)

with A ∈ R^(m,n), rank(A) = r, and a perturbed problem

‖A(x + Δx) − (b + Δb)‖₂ = min!, (2)

where we incorporate only perturbations of the right-hand side b, but not of the system matrix A.

Let x = A† b and x + Δx = A† (b + Δb) be the pseudonormal solutions of (1) and (2), respectively. Then Δx = A† Δb, and from ‖A†‖₂ = 1/σ_r it follows that

‖Δx‖₂ ≤ ‖A†‖₂ ‖Δb‖₂ = (1/σ_r) ‖Δb‖₂.
Perturbation of least squares problems ct.

It holds that

‖x‖₂² = Σ_{i=1}^r c_i² / σ_i² ≥ (1/σ_1²) Σ_{i=1}^r c_i² = (1/σ_1²) ‖Σ_{i=1}^r c_i u_i‖₂².

Obviously, Σ_{i=1}^r c_i u_i is the projection of b onto the range of A. Therefore it follows for the relative error that

‖Δx‖₂ / ‖x‖₂ ≤ (σ_1 / σ_r) · ‖Δb‖₂ / ‖P_range(A) b‖₂. (3)

This inequality specifies how a relative error in the right-hand side of a linear least squares problem affects the solution of the problem.
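Inequality (3) can be illustrated numerically. A sketch with NumPy and made-up random data (a small random perturbation of b only, as in the derivation above):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((8, 3))
b = rng.standard_normal(8)
db = 1e-6 * rng.standard_normal(8)    # perturbation of the right-hand side only

U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_dagger = np.linalg.pinv(A)
x = A_dagger @ b
dx = A_dagger @ db                    # Delta x = A^dagger Delta b
Pb = U @ (U.T @ b)                    # projection of b onto range(A)

rel_err = np.linalg.norm(dx) / np.linalg.norm(x)
bound = (s[0] / s[-1]) * np.linalg.norm(db) / np.linalg.norm(Pb)
```

The observed relative error never exceeds the right-hand side of (3); for generic perturbations it is usually well below it.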
Condition

Definition. For A ∈ R^(m,n) let A = UΣV^T be the singular value decomposition, and let rank(A) = r. Then

κ₂(A) := σ_1 / σ_r

is called the condition number of A.

If A ∈ R^(n,n) is nonsingular, then this definition coincides with the one given before for square matrices with respect to the Euclidean norm.

Moreover, κ₂(A^T A) = κ₂(A)². Hence, the normal equations of a linear least squares problem are much worse conditioned than the system matrix of the problem.
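The squaring of the condition number in the normal equations is easy to observe. A sketch with NumPy on a made-up random matrix:

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((10, 4))
s = np.linalg.svd(A, compute_uv=False)

kappa_A = s[0] / s[-1]                 # kappa_2(A) = sigma_1 / sigma_r
kappa_AtA = np.linalg.cond(A.T @ A)    # 2-norm condition number of A^T A
```

The computed values satisfy κ₂(A^T A) = κ₂(A)² up to rounding, which is why forming A^T A is avoided for ill-conditioned least squares problems.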
Perturbed least squares problems

For perturbations of the system matrix the following theorem holds.

Theorem. Assume that A ∈ R^(m,n), m ≥ n, is not rank deficient, i.e. rank(A) = n. Let x be the solution of the least squares problem (1) and x̃ be the solution of the perturbed problem

‖(A + ΔA)x − (b + Δb)‖₂ = min!, (4)

where

ε := max( ‖ΔA‖₂/‖A‖₂, ‖Δb‖₂/‖b‖₂ ) < 1/κ₂(A) = σ_n(A)/σ_1(A). (5)

Then it holds that

‖x̃ − x‖₂ / ‖x‖₂ ≤ ε ( 2κ₂(A)/cos θ + tan θ · κ₂²(A) ) + O(ε²), (6)

where θ is the angle between b and its projection onto range(A).
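A numerical sanity check of bound (6), sketched with NumPy; the matrix, right-hand side, and perturbation level ε are made up, and the perturbations are scaled so that both relative perturbations equal ε as in (5):

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((7, 3))
b = rng.standard_normal(7)

s = np.linalg.svd(A, compute_uv=False)
kappa = s[0] / s[-1]

x = np.linalg.lstsq(A, b, rcond=None)[0]
Pb = A @ x                               # projection of b onto range(A)
cos_t = np.linalg.norm(Pb) / np.linalg.norm(b)
tan_t = np.linalg.norm(b - Pb) / np.linalg.norm(Pb)

eps = 1e-8
dA = rng.standard_normal(A.shape)
dA *= eps * np.linalg.norm(A, 2) / np.linalg.norm(dA, 2)   # ||dA||/||A|| = eps
db = rng.standard_normal(7)
db *= eps * np.linalg.norm(b) / np.linalg.norm(db)         # ||db||/||b|| = eps

x_tilde = np.linalg.lstsq(A + dA, b + db, rcond=None)[0]
rel_err = np.linalg.norm(x_tilde - x) / np.linalg.norm(x)
bound = eps * (2 * kappa / cos_t + tan_t * kappa**2)
```

With ε far below 1/κ₂(A), the O(ε²) term is negligible and the observed relative error stays below the first-order bound.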
Regularization: Example

Consider the orthogonal projection of a given function f : [0,1] → R onto the space Π_{n−1} of polynomials of degree at most n − 1 with respect to the scalar product

⟨f, g⟩ := ∫₀¹ f(x) g(x) dx.

Choosing the basis {1, x, ..., x^{n−1}} one obtains the linear system

Ay = b (1)

with

A = (a_ij)_{i,j=1,...,n},  a_ij := 1/(i + j − 1), (2)

the so-called Hilbert matrix, and b ∈ R^n, b_i := ⟨f, x^{i−1}⟩.
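The Hilbert matrix (2) takes two lines to build, and its notorious ill-conditioning is immediately visible. A minimal sketch with NumPy (SciPy users could equivalently call `scipy.linalg.hilbert`):

```python
import numpy as np

def hilbert(n):
    """Hilbert matrix a_ij = 1/(i + j - 1) with 1-based i, j, cf. (2)."""
    i, j = np.indices((n, n))
    return 1.0 / (i + j + 1.0)   # 0-based indices: (i+1) + (j+1) - 1 = i + j + 1

H4 = hilbert(4)
cond6 = np.linalg.cond(hilbert(6))   # already > 10^6 for n = 6
```

The condition number grows roughly exponentially with n, which explains the rapid loss of accuracy in the experiments below.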
Example ct.

For dimensions n = 10, n = 20 and n = 40 we choose the right-hand side of (1) such that y = (1, ..., 1)^T is the unique solution, and we solve the resulting system by the known methods. The LU factorization with column pivoting (in MATLAB A\b), the Cholesky factorization, the QR decomposition of A and the singular value decomposition of A yield the following errors with respect to the Euclidean norm:

                   n = 10      n = 20                       n = 40
LU factorization   5.24 E−4    …                            … E+2
Cholesky           7.15 E−4    numerically not pos. def.
QR decomposition   1.41 E−4    …                            … E+3
SVD                8.24 E−4    …                            … E+2
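The n = 10 experiment can be reproduced in a few lines. The sketch below (NumPy assumed) runs all four solvers; the exact error magnitudes depend on the BLAS/LAPACK build, but all land around 10⁻³–10⁻⁴ even though the residuals are at machine-precision level:

```python
import numpy as np

n = 10
i, j = np.indices((n, n))
A = 1.0 / (i + j + 1.0)              # Hilbert matrix
y_true = np.ones(n)
b = A @ y_true                       # right-hand side with known solution

y_lu = np.linalg.solve(A, b)                          # LU with partial pivoting
L = np.linalg.cholesky(A)                             # A = L L^T (succeeds for n = 10)
y_chol = np.linalg.solve(L.T, np.linalg.solve(L, b))
Q, R = np.linalg.qr(A)
y_qr = np.linalg.solve(R, Q.T @ b)
U, s, Vt = np.linalg.svd(A)
y_svd = Vt.T @ ((U.T @ b) / s)

errors = {name: np.linalg.norm(y - y_true)
          for name, y in [("LU", y_lu), ("Cholesky", y_chol),
                          ("QR", y_qr), ("SVD", y_svd)]}
residual_lu = np.linalg.norm(A @ y_lu - b)
```

Note the contrast: the forward errors are many orders of magnitude larger than the residual, exactly as the condition number predicts.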
Example ct.

A similar behavior is observed for the least squares problem. For n = 10, n = 20 and n = 40 and m = n + 10 we consider the least squares problem ‖Ax − b‖₂ = min! with the Hilbert matrix A ∈ R^(m,n), where b is chosen such that x = (1, ..., 1)^T solves the problem with residual Ax − b = 0.

                   n = 10     n = 20     n = 40
Normal equations   2.91 E…    …          … E+2
QR factorization   1.93 E…    …          … E+1
SVD                4.67 E…    …          … E+2
Regularization

For badly conditioned least squares problems or linear systems the following approach can yield reliable solutions: determine the singular value decomposition A = UΣV^T of A, and define

Σ_τ† := diag(η_i δ_ji),  η_i := 1/σ_i if σ_i ≥ τ, and η_i := 0 otherwise,

where τ > 0 is a given threshold, and

A_τ† := V Σ_τ† U^T,  x_τ := A_τ† b.

A_τ† is called the effective pseudoinverse of A. This method of approximately solving Ax = b is called regularization by truncation.
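Regularization by truncation is a few lines of code. The sketch below (NumPy assumed) applies it to a made-up noisy Hilbert system; the noise level 10⁻⁸ and the threshold τ = 10⁻⁶ are illustrative choices, not prescriptions from the lecture:

```python
import numpy as np

def truncated_solve(A, b, tau):
    """x_tau := A_tau^dagger b, discarding singular values sigma_i < tau."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    keep = s >= tau
    c = U.T @ b
    return Vt[keep].T @ (c[keep] / s[keep])

# Badly conditioned demo problem: Hilbert system with a noisy right-hand side.
n = 10
i, j = np.indices((n, n))
A = 1.0 / (i + j + 1.0)
y_true = np.ones(n)
rng = np.random.default_rng(7)
b = A @ y_true + 1e-8 * rng.standard_normal(n)   # simulated measurement noise

err_plain = np.linalg.norm(np.linalg.solve(A, b) - y_true)
err_reg = np.linalg.norm(truncated_solve(A, b, tau=1e-6) - y_true)
```

The plain solve amplifies the noise by 1/σ_n and is useless, while the truncated solution trades a small bias (the discarded components of y) for a bounded noise amplification of at most 1/τ.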
Orthogonal Complements and Projections Recall that two vectors in are perpendicular or orthogonal provided that their dot product vanishes That is, if and only if Example 1 The vectors in are orthogonal
More informationBindel, Spring 2012 Intro to Scientific Computing (CS 3220) Week 3: Wednesday, Feb 8
Spaces and bases Week 3: Wednesday, Feb 8 I have two favorite vector spaces 1 : R n and the space P d of polynomials of degree at most d. For R n, we have a canonical basis: R n = span{e 1, e 2,..., e
More informationLecture Notes: Matrix Inverse. 1 Inverse Definition
Lecture Notes: Matrix Inverse Yufei Tao Department of Computer Science and Engineering Chinese University of Hong Kong taoyf@cse.cuhk.edu.hk Inverse Definition We use I to represent identity matrices,
More informationLectures notes on orthogonal matrices (with exercises) 92.222  Linear Algebra II  Spring 2004 by D. Klain
Lectures notes on orthogonal matrices (with exercises) 92.222  Linear Algebra II  Spring 2004 by D. Klain 1. Orthogonal matrices and orthonormal sets An n n realvalued matrix A is said to be an orthogonal
More information1 VECTOR SPACES AND SUBSPACES
1 VECTOR SPACES AND SUBSPACES What is a vector? Many are familiar with the concept of a vector as: Something which has magnitude and direction. an ordered pair or triple. a description for quantities such
More informationDiagonalisation. Chapter 3. Introduction. Eigenvalues and eigenvectors. Reading. Definitions
Chapter 3 Diagonalisation Eigenvalues and eigenvectors, diagonalisation of a matrix, orthogonal diagonalisation fo symmetric matrices Reading As in the previous chapter, there is no specific essential
More informationINTRODUCTORY LINEAR ALGEBRA WITH APPLICATIONS B. KOLMAN, D. R. HILL
SOLUTIONS OF THEORETICAL EXERCISES selected from INTRODUCTORY LINEAR ALGEBRA WITH APPLICATIONS B. KOLMAN, D. R. HILL Eighth Edition, Prentice Hall, 2005. Dr. Grigore CĂLUGĂREANU Department of Mathematics
More informationLINEAR ALGEBRA & MATRICES
LINEAR ALGEBRA & MATRICES C.T. Abdallah, M. Ariola, F. Amato January 21, 2006 Contents 1 Review of Matrix Algebra 2 1.1 Determinants, Minors, and Cofactors................ 4 1.2 Rank, Trace, and Inverse......................
More informationLecture Topic: LowRank Approximations
Lecture Topic: LowRank Approximations LowRank Approximations We have seen principal component analysis. The extraction of the first principle eigenvalue could be seen as an approximation of the original
More informationLinear Algebra Notes for Marsden and Tromba Vector Calculus
Linear Algebra Notes for Marsden and Tromba Vector Calculus ndimensional Euclidean Space and Matrices Definition of n space As was learned in Math b, a point in Euclidean three space can be thought of
More informationNumerical Analysis Lecture Notes
Numerical Analysis Lecture Notes Peter J. Olver 5. Inner Products and Norms The norm of a vector is a measure of its size. Besides the familiar Euclidean norm based on the dot product, there are a number
More information10.3 POWER METHOD FOR APPROXIMATING EIGENVALUES
58 CHAPTER NUMERICAL METHODS. POWER METHOD FOR APPROXIMATING EIGENVALUES In Chapter 7 you saw that the eigenvalues of an n n matrix A are obtained by solving its characteristic equation n c nn c nn...
More informationDETERMINANTS. b 2. x 2
DETERMINANTS 1 Systems of two equations in two unknowns A system of two equations in two unknowns has the form a 11 x 1 + a 12 x 2 = b 1 a 21 x 1 + a 22 x 2 = b 2 This can be written more concisely in
More informationx1 x 2 x 3 y 1 y 2 y 3 x 1 y 2 x 2 y 1 0.
Cross product 1 Chapter 7 Cross product We are getting ready to study integration in several variables. Until now we have been doing only differential calculus. One outcome of this study will be our ability
More informationLINEAR ALGEBRA. September 23, 2010
LINEAR ALGEBRA September 3, 00 Contents 0. LUdecomposition.................................... 0. Inverses and Transposes................................. 0.3 Column Spaces and NullSpaces.............................
More information6. Cholesky factorization
6. Cholesky factorization EE103 (Fall 201112) triangular matrices forward and backward substitution the Cholesky factorization solving Ax = b with A positive definite inverse of a positive definite matrix
More information1. LINEAR EQUATIONS. A linear equation in n unknowns x 1, x 2,, x n is an equation of the form
1. LINEAR EQUATIONS A linear equation in n unknowns x 1, x 2,, x n is an equation of the form a 1 x 1 + a 2 x 2 + + a n x n = b, where a 1, a 2,..., a n, b are given real numbers. For example, with x and
More informationThe determinant of a skewsymmetric matrix is a square. This can be seen in small cases by direct calculation: 0 a. 12 a. a 13 a 24 a 14 a 23 a 14
4 Symplectic groups In this and the next two sections, we begin the study of the groups preserving reflexive sesquilinear forms or quadratic forms. We begin with the symplectic groups, associated with
More informationLINEAR ALGEBRA W W L CHEN
LINEAR ALGEBRA W W L CHEN c W W L Chen, 1997, 2008 This chapter is available free to all individuals, on understanding that it is not to be used for financial gain, and may be downloaded and/or photocopied,
More informationThe Inverse of a Square Matrix
These notes closely follow the presentation of the material given in David C Lay s textbook Linear Algebra and its Applications (3rd edition) These notes are intended primarily for inclass presentation
More information18.06 Problem Set 4 Solution Due Wednesday, 11 March 2009 at 4 pm in 2106. Total: 175 points.
806 Problem Set 4 Solution Due Wednesday, March 2009 at 4 pm in 206 Total: 75 points Problem : A is an m n matrix of rank r Suppose there are righthandsides b for which A x = b has no solution (a) What
More informationMATH 423 Linear Algebra II Lecture 38: Generalized eigenvectors. Jordan canonical form (continued).
MATH 423 Linear Algebra II Lecture 38: Generalized eigenvectors Jordan canonical form (continued) Jordan canonical form A Jordan block is a square matrix of the form λ 1 0 0 0 0 λ 1 0 0 0 0 λ 0 0 J = 0
More informationMATH10212 Linear Algebra. Systems of Linear Equations. Definition. An ndimensional vector is a row or a column of n numbers (or letters): a 1.
MATH10212 Linear Algebra Textbook: D. Poole, Linear Algebra: A Modern Introduction. Thompson, 2006. ISBN 0534405967. Systems of Linear Equations Definition. An ndimensional vector is a row or a column
More informationMATH 551  APPLIED MATRIX THEORY
MATH 55  APPLIED MATRIX THEORY FINAL TEST: SAMPLE with SOLUTIONS (25 points NAME: PROBLEM (3 points A web of 5 pages is described by a directed graph whose matrix is given by A Do the following ( points
More information1 Sets and Set Notation.
LINEAR ALGEBRA MATH 27.6 SPRING 23 (COHEN) LECTURE NOTES Sets and Set Notation. Definition (Naive Definition of a Set). A set is any collection of objects, called the elements of that set. We will most
More informationLinear Algebra: Determinants, Inverses, Rank
D Linear Algebra: Determinants, Inverses, Rank D 1 Appendix D: LINEAR ALGEBRA: DETERMINANTS, INVERSES, RANK TABLE OF CONTENTS Page D.1. Introduction D 3 D.2. Determinants D 3 D.2.1. Some Properties of
More informationMethods for Finding Bases
Methods for Finding Bases Bases for the subspaces of a matrix Rowreduction methods can be used to find bases. Let us now look at an example illustrating how to obtain bases for the row space, null space,
More information160 CHAPTER 4. VECTOR SPACES
160 CHAPTER 4. VECTOR SPACES 4. Rank and Nullity In this section, we look at relationships between the row space, column space, null space of a matrix and its transpose. We will derive fundamental results
More informationWe seek a factorization of a square matrix A into the product of two matrices which yields an
LU Decompositions We seek a factorization of a square matrix A into the product of two matrices which yields an efficient method for solving the system where A is the coefficient matrix, x is our variable
More informationLecture 4: Partitioned Matrices and Determinants
Lecture 4: Partitioned Matrices and Determinants 1 Elementary row operations Recall the elementary operations on the rows of a matrix, equivalent to premultiplying by an elementary matrix E: (1) multiplying
More informationConstrained Least Squares
Constrained Least Squares Authors: G.H. Golub and C.F. Van Loan Chapter 12 in Matrix Computations, 3rd Edition, 1996, pp.580587 CICN may05/1 Background The least squares problem: min Ax b 2 x Sometimes,
More informationRecall the basic property of the transpose (for any A): v A t Aw = v w, v, w R n.
ORTHOGONAL MATRICES Informally, an orthogonal n n matrix is the ndimensional analogue of the rotation matrices R θ in R 2. When does a linear transformation of R 3 (or R n ) deserve to be called a rotation?
More informationWHICH LINEARFRACTIONAL TRANSFORMATIONS INDUCE ROTATIONS OF THE SPHERE?
WHICH LINEARFRACTIONAL TRANSFORMATIONS INDUCE ROTATIONS OF THE SPHERE? JOEL H. SHAPIRO Abstract. These notes supplement the discussion of linear fractional mappings presented in a beginning graduate course
More informationLinear Codes. In the V[n,q] setting, the terms word and vector are interchangeable.
Linear Codes Linear Codes In the V[n,q] setting, an important class of codes are the linear codes, these codes are the ones whose code words form a subvector space of V[n,q]. If the subspace of V[n,q]
More informationMATH1231 Algebra, 2015 Chapter 7: Linear maps
MATH1231 Algebra, 2015 Chapter 7: Linear maps A/Prof. Daniel Chan School of Mathematics and Statistics University of New South Wales danielc@unsw.edu.au Daniel Chan (UNSW) MATH1231 Algebra 1 / 43 Chapter
More informationLinear Dependence Tests
Linear Dependence Tests The book omits a few key tests for checking the linear dependence of vectors. These short notes discuss these tests, as well as the reasoning behind them. Our first test checks
More informationA note on companion matrices
Linear Algebra and its Applications 372 (2003) 325 33 www.elsevier.com/locate/laa A note on companion matrices Miroslav Fiedler Academy of Sciences of the Czech Republic Institute of Computer Science Pod
More informationThe Characteristic Polynomial
Physics 116A Winter 2011 The Characteristic Polynomial 1 Coefficients of the characteristic polynomial Consider the eigenvalue problem for an n n matrix A, A v = λ v, v 0 (1) The solution to this problem
More informationThe Matrix Elements of a 3 3 Orthogonal Matrix Revisited
Physics 116A Winter 2011 The Matrix Elements of a 3 3 Orthogonal Matrix Revisited 1. Introduction In a class handout entitled, ThreeDimensional Proper and Improper Rotation Matrices, I provided a derivation
More information