A matrix-free preconditioner for sparse symmetric positive definite systems and least-squares problems

1 A matrix-free preconditioner for sparse symmetric positive definite systems and least-squares problems

Stefania Bellavia
Dipartimento di Ingegneria Industriale, Università degli Studi di Firenze
Joint work with Jacek Gondzio and Benedetta Morini

Work carried out within the INdAM-GNCS 2012 project "Numerical methods and software for the preconditioning of linear systems in the solution of PDEs and optimization problems"

Algebra Lineare Numerica e sue Applicazioni, Rome, Jan 2013

2 Introduction: The Problem

Consider systems of the form $Hx = b$, with $H \in \mathbb{R}^{m \times m}$ SPD.
Special interest in the case $H = A \Theta A^T$, with $A \in \mathbb{R}^{m \times n}$ sparse and $\Theta \in \mathbb{R}^{n \times n}$ diagonal SPD.
They arise in at least two prominent applications in the area of optimization:
- Newton-like methods for weighted least-squares problems,
- interior point methods.
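
When H is given only through A and Θ, its action on a vector can be applied without ever forming H. A minimal MATLAB sketch (the handle name Hmv and the vector theta holding the diagonal of Θ are illustrative):

```matlab
% Matrix-free action of H = A*Theta*A' on a vector: two sparse mat-vecs
% and one diagonal scaling; H itself is never formed.
Hmv = @(v) A * (theta .* (A' * v));
```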

3 Introduction

- We assume that H is too large and/or too difficult to be formed and solved directly.
- We solve the system with an iterative Conjugate Gradient (CG)-like approach.
- We are interested in preconditioning H with a reliable algorithm that does not require forming the whole matrix H at once (matrix-free).
- We are also interested in solving sequences of linear systems arising in optimization methods.

4 Introduction: Preconditioning H

- Incomplete Cholesky (IC) factorizations are matrix-free in the sense that the columns of H can be computed one at a time, and then discarded.
- They are breakdown-free when H is an H-matrix.
- IC factorizations relying on drop tolerances to reduce fill-in have unpredictable memory requirements.
- Alternative approaches with predictable memory requirements depend on the entries of H, [Jones, Plassmann, ACM Trans. Math. Software 1995], [Lin, Moré, SISC 1999]. E.g., let $n_k$ be the number of nonzeros in the strict lower triangular part of the kth column of H, and retain the $n_k + p$ largest elements in the strict lower triangular part of the kth column of the factor, for some fixed $p > 0$ (a sketch of this rule follows).
- High storage requirements if H is dense.
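
A minimal sketch of this fixed-fill dropping rule, assuming the kth column of the incomplete factor has already been computed as a dense vector lk (the names lk and p are illustrative):

```matlab
% Keep only the nk+p largest-magnitude entries of the k-th factor column,
% where nk counts the nonzeros in the strict lower part of H(:,k).
nk  = nnz(H(k+1:end, k));            % fill allowance taken from H itself
[~, idx] = maxk(abs(lk), nk + p);    % positions of the nk+p largest entries (maxk: R2017b+)
lk_kept      = zeros(size(lk));
lk_kept(idx) = lk(idx);              % every other entry is dropped
```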

5 Introduction: Preconditioning H

- Approximate Inverse preconditioners form factorized sparse approximations of $H^{-1}$.
- The Stabilized Approximate Inverse preconditioner (SAINV) by [Benzi, Cullum, Tuma, SISC 2000] is based on a modified Gram-Schmidt process.
- It is matrix-free, i.e. it employs H multiplicatively and may work entirely with $A^T$.
- It preserves sparsity in the factors by dropping small elements.
- In exact arithmetic, it is applicable to any SPD matrix without breakdowns.
- The underlying assumption is that most entries of $H^{-1}$ are small in magnitude.
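
A rough, dense O(m^2) sketch of the H-orthogonalization idea behind AINV-type preconditioners, assuming Hmv applies H to a vector and tau is a drop tolerance; this is an illustration only, not the Sparselab SAINV code:

```matlab
% H-orthogonalize the columns of the identity: on exit Z is unit upper
% triangular with Z'*H*Z ~ diag(d), so H^{-1} ~ Z*diag(1./d)*Z'.
Z = eye(m);
d = zeros(m, 1);
for i = 1:m
    hz   = Hmv(Z(:, i));                % H is used multiplicatively only
    d(i) = Z(:, i)' * hz;               % stabilized pivot d_i = z_i' H z_i > 0
    for j = i+1:m
        Z(:, j) = Z(:, j) - ((Z(:, j)' * hz) / d(i)) * Z(:, i);
        Z(abs(Z(:, j)) < tau, j) = 0;   % drop small entries to keep Z sparse
    end
end
```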

6 Introduction: Properties of our preconditioner

- Limited memory: memory bounded by O(m) rather than O(nnz(H)).
- Matrix-free: only the action of H on a vector is needed.
- Only a small number $k \ll m$ of general matrix-vector products is required.
- The diagonal of H, or an approximation of it, is needed: we expect that in many practical applications the diagonal of H can be computed or estimated at low cost.

PARTIAL CHOLESKY + DEFLATED CG

8 LMP Preconditioner: The preconditioner

Partial Cholesky factorization limited to a small number k of columns of H, plus a diagonal approximation of the Schur complement, [Gondzio, COAP 2011].

1. Choose $k \ll m$. Consider the formal partition of H
$$H = \begin{bmatrix} H_{11} & H_{21}^T \\ H_{21} & H_{22} \end{bmatrix}, \qquad H_{11} \in \mathbb{R}^{k \times k},\; H_{21} \in \mathbb{R}^{(m-k) \times k},\; H_{22} \in \mathbb{R}^{(m-k) \times (m-k)}.$$
2. Form the first k columns of H, i.e. $H_{11}$, $H_{21}$.

9 The preconditioner (ctd.)

3. Compute the Cholesky factorization of H limited to the block column $\begin{bmatrix} H_{11} \\ H_{21} \end{bmatrix}$, producing $\begin{bmatrix} L_{11} \\ L_{21} \end{bmatrix}$:
- Compute the LDL^T factorization $H_{11} = L_{11} Q_{11} L_{11}^T$ (discard $H_{11}$).
- Solve $L_{11} Q_{11} L_{21}^T = H_{21}^T$ for $L_{21}$, i.e. $L_{21} = H_{21} L_{11}^{-T} Q_{11}^{-1}$ (discard $H_{21}$).

It follows that
$$H = \begin{bmatrix} L_{11} & \\ L_{21} & I_{m-k} \end{bmatrix} \begin{bmatrix} Q_{11} & \\ & S \end{bmatrix} \begin{bmatrix} L_{11}^T & L_{21}^T \\ & I_{m-k} \end{bmatrix},$$
where $S = H_{22} - H_{21} H_{11}^{-1} H_{21}^T$ is the Schur complement of $H_{11}$ in $H$.

11 The preconditioner (ctd.)

4. Set $Q_{22} = \mathrm{diag}(S) = \mathrm{diag}(H_{22}) - \mathrm{diag}(L_{21} Q_{11} L_{21}^T)$ and
$$P = \underbrace{\begin{bmatrix} L_{11} & \\ L_{21} & I_{m-k} \end{bmatrix}}_{L}\; \underbrace{\begin{bmatrix} Q_{11} & \\ & Q_{22} \end{bmatrix}}_{Q}\; \underbrace{\begin{bmatrix} L_{11}^T & L_{21}^T \\ & I_{m-k} \end{bmatrix}}_{L^T}.$$

The algorithm for constructing P has some good properties (a MATLAB sketch follows):
- it cannot break down in exact arithmetic;
- it has predictable memory requirements, $\mathrm{nnz}(L) = O(km)$.
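
A minimal MATLAB sketch of steps 1-4, assuming Hmv applies H to a vector and dH holds the diagonal of H; the function and variable names are illustrative, not the authors' code:

```matlab
% Build the limited-memory partial Cholesky preconditioner P = L*Q*L'.
function [L11, L21, q11, q22] = lmp_build(Hmv, dH, k)
    m  = length(dH);
    Hk = zeros(m, k);
    for j = 1:k                       % first k columns of H, one mat-vec each
        e = zeros(m, 1); e(j) = 1;
        Hk(:, j) = Hmv(e);
    end
    H11 = Hk(1:k, :);                 % leading k-by-k block
    H21 = Hk(k+1:end, :);             % subdiagonal block
    Lc  = chol(H11, 'lower');         % H11 = Lc*Lc' (no breakdown: H11 is SPD)
    q11 = diag(Lc).^2;
    L11 = Lc ./ sqrt(q11)';           % unit lower triangular: H11 = L11*diag(q11)*L11'
    L21 = (H21 / L11') ./ q11';       % L21 = H21 * L11^{-T} * Q11^{-1}
    q22 = dH(k+1:end) - (L21.^2) * q11;  % diag(H22) - diag(L21*Q11*L21')
end
```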

12 The preconditioner: Storage and computational cost

- The complete diagonal of H is required. If it is not available and $H = A \Theta A^T$:
$$(H)_{ii} = \|\Theta^{1/2} A^T e_i\|_2^2, \qquad i = 1, \ldots, m.$$
Storage: one (sparse) vector $A^T e_i$ at a time and a vector for the diagonal of H.
- The first k columns of H are computed and stored: $H e_i$, $i = 1, \ldots, k$. The additional cost of this step is k products of H times a vector.
- The products $H e_i$ are cheap if H (or A) is sparse. The k products $H e_i$ are expected to be cheaper than the products $Hv$ required by PCG, where the vectors v involved are typically dense.
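
A minimal sketch of this diagonal computation, with theta the diagonal of Θ as a vector; in MATLAB the row-at-a-time loop collapses into one sparse expression:

```matlab
% (H)_ii = ||Theta^{1/2} A' e_i||_2^2 = sum_j A(i,j)^2 * theta(j).
% Elementwise squaring keeps A sparse, so H is never formed.
dH = (A.^2) * theta;
```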

13 The preconditioner: Factorized form of P^{-1}

From
$$P = \begin{bmatrix} L_{11} & \\ L_{21} & I_{m-k} \end{bmatrix} \begin{bmatrix} Q_{11} & \\ & Q_{22} \end{bmatrix} \begin{bmatrix} L_{11}^T & L_{21}^T \\ & I_{m-k} \end{bmatrix}$$
it follows that
$$P^{-1} = \begin{bmatrix} L_{11}^{-T} & -L_{11}^{-T} L_{21}^T \\ 0 & I_{m-k} \end{bmatrix} \begin{bmatrix} Q_{11}^{-1} & \\ & Q_{22}^{-1} \end{bmatrix} \begin{bmatrix} L_{11}^{-1} & 0 \\ -L_{21} L_{11}^{-1} & I_{m-k} \end{bmatrix},$$
i.e. a factorized sparse approximation of $H^{-1}$.

Letting
$$R^T = \begin{bmatrix} L_{11} & \\ L_{21} & I_{m-k} \end{bmatrix} \begin{bmatrix} Q_{11}^{1/2} & \\ & Q_{22}^{1/2} \end{bmatrix},$$
we have $P = R^T R$, and $P^{-1} H$ is similar to the block diagonal matrix
$$\begin{bmatrix} I_k & 0 \\ 0 & Q_{22}^{-1} S \end{bmatrix}.$$
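
A minimal sketch of applying P^{-1} to a vector through the factors: two triangular solves with L11, two products with L21 and one diagonal scaling. Names follow the lmp_build sketch above:

```matlab
% y = P^{-1} x with P = L*Q*L': solve L w = x, scale by Q^{-1}, solve L' y = z.
function y = lmp_apply(L11, L21, q11, q22, x)
    k  = length(q11);
    w1 = L11 \ x(1:k);                    % forward solve with L11
    w2 = x(k+1:end) - L21 * w1;           % second block of L w = x
    y2 = w2 ./ q22;                       % apply Q22^{-1}; this is also y's second block
    y1 = L11' \ (w1 ./ q11 - L21' * y2);  % backward solve with L11'
    y  = [y1; y2];
end
```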

15 The preconditioner: Spectral analysis of P^{-1}H

- k eigenvalues of $P^{-1}H$ are equal to 1.
- The other eigenvalues are eigenvalues of $Q_{22}^{-1} S$, and
$$\lambda(Q_{22}^{-1} S) \ge \frac{\lambda_{\min}(S)}{\lambda_{\max}(Q_{22})} \ge \frac{\lambda_{\min}(H)}{\lambda_{\max}(\mathrm{diag}(S))}, \qquad
\lambda(Q_{22}^{-1} S) \le \frac{\lambda_{\max}(S)}{\lambda_{\min}(Q_{22})} \le \frac{\lambda_{\max}(H_{22})}{\lambda_{\min}(\mathrm{diag}(S))}.$$

16 The preconditioner: Reordering of H

A greedy heuristic technique acts on the largest eigenvalues of H (a sketch follows).
- Since H is SPD, $\lambda_{\max}(H) \le \mathrm{tr}(H) = \mathrm{tr}(H_{11}) + \mathrm{tr}(H_{22})$.
- If $Q_{22} = I$, then $P^{-1}H$ is similar to $\begin{bmatrix} I_k & 0 \\ 0 & S \end{bmatrix}$, and
$$\lambda_{\max}(P^{-1}H) \le \mathrm{tr}\left( \begin{bmatrix} I_k & 0 \\ 0 & S \end{bmatrix} \right) = k + \mathrm{tr}(S).$$
- Permuting rows and columns of H so that $H_{11}$ contains the k largest elements of $\mathrm{diag}(H)$ would imply $k + \mathrm{tr}(S) \ll \mathrm{tr}(H)$, and hence a large reduction in the value of $\lambda_{\max}(P^{-1}H)$ with respect to $\lambda_{\max}(H)$.
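
A minimal sketch of the permutation step, under the assumption that the heuristic simply brings the k largest diagonal entries of H to the front:

```matlab
% Order the unknowns so that H11 captures the k largest diagonal entries.
[~, perm] = sort(dH, 'descend');    % dH = diag(H), computed as above
% With H = A*Theta*A', permuting H amounts to permuting the rows of A:
Ap  = A(perm, :);
Hmv = @(v) Ap * (theta .* (Ap' * v));
```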

17 Deflated CG: Handling small eigenvalues

- Applying the greedy technique requires no extra storage.
- In most cases, the greedy reordering takes care of the largest eigenvalues of H, and $\kappa_2(R^{-T} H R^{-1})$ is reduced considerably with respect to $\kappa_2(H)$.
- On the other hand, the smallest eigenvalues of H are only slightly modified, or are moved towards the origin.
- When the convergence of the CG (or CG-like) method is hampered by a small number of eigenvalues of $P^{-1}H$ close to zero, the Preconditioned Deflated-CG (or CG-like) algorithm can be useful, [Saad, Yeung, Erhel, Guyomarc'h, SISC 2000].

18 Deflated CG: Preconditioned Deflated-CG

Let the eigenvalues of $P^{-1}H$ be labeled in increasing order: $\lambda_1(P^{-1}H) \le \cdots \le \lambda_m(P^{-1}H)$.

Ideal case: inject l exact eigenvectors of $P^{-1}H$, associated to $\lambda_1(P^{-1}H), \ldots, \lambda_l(P^{-1}H)$, into the Krylov subspace:
$$\|x - x_j\|_H \le 2 \left( \frac{\sqrt{\mu} - 1}{\sqrt{\mu} + 1} \right)^j \|x - x_0\|_H, \qquad \mu = \frac{\lambda_m(P^{-1}H)}{\lambda_{l+1}(P^{-1}H)}.$$

Therefore, convergence of the CG method is improved if a few eigenvalues are close to the origin and well separated from the others. If the l eigenvectors of $P^{-1}H$ are numerically approximated, one can expect $\mu \approx \lambda_m(P^{-1}H)/\lambda_{l+1}(P^{-1}H)$.

19 Deflated CG: Preconditioned Deflated-CG (ctd.)

- Apply Deflated-CG to the split-preconditioned system
$$R^{-T} H R^{-1} y = R^{-T} b, \qquad x = R^{-1} y,$$
using a few eigenvectors associated to the smallest eigenvalues of $R^{-T} H R^{-1}$.
- Symmetric Lanczos processes for sparse symmetric eigenvalue problems require products of $R^{-T} H R^{-1}$ times a vector. Each product has the cost of one PCG iteration (see the sketch below).
- To amortize the cost of approximating eigenvectors, Preconditioned Deflated-CG is suitable for solving systems with multiple right-hand sides and sequences of slowly varying linear systems.
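
A minimal sketch of one product with $R^{-T} H R^{-1}$ through the factors ($R^T = L Q^{1/2}$ as above); the cost is one mat-vec with H plus one preconditioner application, exactly as in a PCG iteration:

```matlab
% w = R^{-T} * H * (R^{-1} * v), with R = Q^{1/2} L' so that P = R'*R.
function w = precop(L11, L21, q11, q22, Hmv, v)
    k  = length(q11);
    v  = v ./ [sqrt(q11); sqrt(q22)];  % Q^{-1/2} v
    u2 = v(k+1:end);
    u1 = L11' \ (v(1:k) - L21' * u2);  % u = L^{-T} Q^{-1/2} v = R^{-1} v
    t  = Hmv([u1; u2]);                % one product with H
    w1 = L11 \ t(1:k);
    w2 = t(k+1:end) - L21 * w1;        % [w1; w2] = L^{-1} t
    w  = [w1; w2] ./ [sqrt(q11); sqrt(q22)];  % w = Q^{-1/2} L^{-1} t
end
```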

20 Numerical results: Numerical experiments

- We implemented the preconditioner in Matlab ($\epsilon_m \approx 2.2 \times 10^{-16}$).
- Initial guess for PCG: $x_0 = (0, \ldots, 0)^T$.
- Stopping criterion: $\|H x_j - b\|_2 \le \mathrm{tol}\,\|b\|_2$. A failure is declared after 1000 iterations.
- $H = A A^T$, with 35 matrices A from the University of Florida Sparse Matrix Collection, groups LPnetlib and Meszaros (Linear Programming problems); $m \ge 1090$, with varying densities dens(A) and dens(H).
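
A minimal sketch of one solve in this setting, assuming MATLAB's pcg with the preconditioner passed as a function handle; the tolerance value is a stand-in:

```matlab
% PCG on H x = b with H = A*A', preconditioned by the LMP factors.
Hmv = @(v) A * (A' * v);               % matrix-free action of H (Theta = I)
tol = 1e-6;                            % hypothetical tolerance
[x, flag, relres, iter] = pcg(Hmv, b, tol, 1000, ...
                              @(v) lmp_apply(L11, L21, q11, q22, v));
```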

21 Numerical results: Numerical experiments (ctd.)

Experiments with the SAINV preconditioner:
- $H^{-1} \approx Z D^{-1} Z^T$, where Z is unit upper triangular and D is diagonal.
- Code from the Sparselab package developed by M. Tuma.
- First drop tolerance tested: $10^{-1}$. In case of failure, the tolerance is progressively reduced by a factor of 10.

22 Numerical results: Cost comparison

Table: Cost of the construction and application of LMP and SAINV.

LMP construction:
- m sparse-to-sparse products $\Theta^{1/2}(A^T e_i)$
- k sparse-to-sparse products $A \Theta (A^T e_i)$
- m-k backsolves with $L_{11}$
- m-k scalar products in $\mathbb{R}^k$

LMP application:
- 2 backsolves with $L_{11}$
- 1 mat-vec product with $Q^{-1}$
- m-k scalar products in $\mathbb{R}^k$
- k scalar products in $\mathbb{R}^{m-k}$

SAINV construction:
- m sparse-to-sparse products $A \Theta (A^T v)$

SAINV application:
- 2 mat-vec products with $Z$
- 1 mat-vec product with $D^{-1}$

23 Numerical results: Comparison between LMP(50) and LMP(100)

LMP(100) outperforms LMP(50) in terms of PCG iterations.

[Figure: performance profile π_s(τ), execution time, LMP(50) vs LMP(100).]

24 Numerical results: Comparison between LMP(50) and SAINV

SAINV solved 21 systems. Performance profiles on the tests successfully solved by all preconditioners:

[Figure: performance profile π_s(τ), CG iterations, LMP(50) vs SAINV.]
[Figure: performance profile π_s(τ), execution time, LMP(50) vs SAINV.]

25 Numerical results: Preconditioner density

[Figure: density of H and of the factors L (LMP) and Z (SAINV).]
[Figure: density of the factors L.]

26 Numerical results: Experiments with Preconditioned Deflated-CG

- A few eigenvectors of $R^{-T} H R^{-1}$ are computed by the Matlab package PROPACK [R.M. Larsen, 1998]: the symmetric Lanczos algorithm with partial reorthogonalization is applied.
- A loose accuracy for the convergence criterion, $10^{-1}$, is fixed, along with a specified maximum dimension, DIM_L, of the Lanczos basis allowed. The number of matrix-vector products is then at most DIM_L.
- In the Preconditioned Deflated-CG we injected the estimated eigenvectors. If convergence was not achieved, the vectors associated with eigenvalues smaller than a prescribed tolerance are selected.
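
A minimal sketch of this eigenvector estimation, substituting MATLAB's built-in Lanczos-based eigs for PROPACK (an assumption: PROPACK's own interface differs):

```matlab
% Estimate l eigenvectors of R^{-T} H R^{-1} for its smallest eigenvalues.
l          = 5;
opts.issym = true;                   % operator is symmetric: Lanczos is used
opts.tol   = 1e-1;                   % loose accuracy, as in the experiments
opts.p     = 50;                     % maximum Lanczos basis dimension DIM_L
opfun = @(v) precop(L11, L21, q11, q22, Hmv, v);    % see the sketch above
[V, Lam] = eigs(opfun, m, l, 'smallestabs', opts);  % 'smallestabs': R2017b+
```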

27 Numerical results: Solution of a single system

Preconditioner formed with k = 50. Number of small eigenvalues estimated: 5. Maximum dimension of the Lanczos basis: 50.

             H                       P^{-1}H                Prec Defl-CG   Prec CG
Test name    λ_max     λ_min         λ_max     λ_min        IT_L           IT_L
lp d2q06c    1.27e6    6.37e-4       6.48e0    3.39e
lp pilot     1.10e5    1.55e-2       1.22e1    2.58e
lp pilot87   1.01e6    1.52e-2       2.22e1    2.01e
lp stocfor2  1.60e6    1.98e-3       7.71e0    1.17e
lpi bgindy   8.97e3    4.07e-2       5.55e0    8.29e
ge           1.89e8    4.90e-5       1.21e1    8.78e
nl           8.26e4    7.00e-3       7.30e0    1.61e
scrs8-2c     1.85e3    3.49e-5       5.39e1    8.32e

28 Numerical results: Sequences of normal equations from least-squares problems

- Sequences of normal equations arise in the solution of constrained and unconstrained least-squares problems. If the coefficient matrices vary slowly, a preconditioner-freezing strategy for LMP coupled with Deflated-CGLS can be used.
- We solved the Nonnegative Linear Least-Squares (NNLS) problem
$$\min_{x \ge 0} \frac{1}{2} \|Bx - d\|_2^2, \qquad B \text{ full rank},$$
by the interior Newton-like method [Bellavia, Macconi, Morini, NLAA 2006]. The trial step at the jth nonlinear iteration solves
$$\min_{p \in \mathbb{R}^n} \left\| \begin{pmatrix} B S_j \\ W_j \end{pmatrix} p + \begin{pmatrix} B x_j - d \\ 0 \end{pmatrix} \right\|_2^2.$$

29 Numerical results: LMP in NNLS

- The matrix of the normal equation is $H_j = A_j A_j^T$, with $A_j = (S_j B^T \;\; W_j)$, $j = 0, 1, \ldots$, where $S_j$ and $W_j$ are diagonal matrices with entries in $(0, 1]$ and $[0, 1]$ respectively.
- We solve the sequence of linear systems with a frozen preconditioner: for a seed matrix, say $H_0$, we form the LMP preconditioner and compute l approximate eigenvectors associated to the smallest eigenvalues.
- We reuse the preconditioner and the eigenvectors throughout the nonlinear iterations until the preconditioner deteriorates, i.e. the limit of CGLS iterations is reached. Then, the LMP preconditioner and the l eigenvectors are refreshed for the current matrix.
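
A minimal sketch of the matrix-free action of $H_j$ used here, with s and w holding the diagonals of $S_j$ and $W_j$ (names illustrative):

```matlab
% H_j = A_j*A_j' with A_j = [S_j*B', W_j], so H_j v = S_j B' B S_j v + W_j^2 v.
Hjmv = @(v) s .* (B' * (B * (s .* v))) + (w.^2) .* v;
% The frozen LMP factors built for H_0 are reused across j until the CGLS
% iteration limit signals that the preconditioner has deteriorated.
```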

30 Numerical results

LMP(100), 5 small eigenvalues estimated, Lanczos basis dimension: 50.

             Prec Defl-CGLS         Prec CGLS
Test         IT_NL(R)   IT_L        IT_NL(R)   IT_L       Savings in mat-vec prod. (%)
lp pilot87   27(1)                  (1)
lp ken
lp ken
lp ken
lp pds
lp pds
lp truss
deter
deter
deter
fxm          (3)                    (2)
ge           35(3)                  (3)
nl           28(5)                  (6)
scrs8-2c     *

31 Numerical results: Final comments

Work in progress: we are using the LMP preconditioner in the solution of linear systems arising in electrostatic and electromagnetic problems, in cooperation with A. Tamburrino and S. Ventre, University of Cassino.
The matrix H is SPD and can be decomposed as $H = H_{far} + H_{near}$, where
- $H_{near}$ is available and includes the diagonal of H;
- $H_{far}$ is not available, but the action of $H_{far}$ on a vector can be computed (approximately).

References:
- S. Bellavia, J. Gondzio, B. Morini, A matrix-free preconditioner for sparse symmetric positive definite systems and least-squares problems, SISC, in press.
- J. Gondzio, Interior point methods 25 years later, EJOR (2012).
