Direct methods for sparse matrices

1 Direct methods for sparse matrices. Iain S. Duff, STFC Rutherford Appleton Laboratory, Oxfordshire, UK, and CERFACS, Toulouse, France. CEA-EDF-INRIA Schools, Sophia Antipolis, March 30 - April.

2 Introduction. Lecture 1: Sparse matrices and Gaussian elimination.

3 Introduction. Wish to solve Ax = b where A is LARGE and S P A R S E.

4 Introduction. By LARGE we mean matrices of large order n; what counts as large is a function of time (the slide tabulates typical values of n(t) by year, growing into the millions). The meaning of sparse is not so simple. SPARSE ... the number of entries is roughly kn, with k ranging from 2 up to about log n.

5 Introduction. NUMERICAL APPLICATIONS:
Stiff ODEs ... BDF ... sparse Jacobian
Linear programming ... simplex ... interior point
Optimization / nonlinear equations
Elliptic partial differential equations
Eigensystem solution
Two-point boundary value problems
Least squares calculations

6 Introduction. APPLICATION AREAS: physics (CFD, lattice gauge calculations, atomic spectra), chemistry (quantum chemistry, chemical engineering), civil engineering (structural analysis), electrical engineering (power systems, circuit simulation, device simulation), geography (geodesy), demography (migration), economics (economic modelling), and also the behavioural sciences, politics, psychology, business administration and operations research (industrial relations, trading, social dominance, bureaucracy, linear programming).

7 Sparse matrices. There follow pictures of sparse matrices from various applications, to illustrate the different structures that sparse matrices can have.

8 Sparse matrices. Thermal simulation: SHERMAN2.

9 Sparse matrices. Weather matrix: FS.

10 Sparse matrices. Dynamic calculation in structures: BCSSTM13.

11 Sparse matrices. Power systems: BCSPWR07.

12 Sparse matrices. Simulation of computing systems: GRE 1107.

13 Sparse matrices. Chemical engineering: WEST0381.

14 Sparse matrices. Economic modelling: ORANI678.

15 Sparse matrices. STANDARD SETS OF SPARSE MATRICES. Original set: the Harwell-Boeing Sparse Matrix Collection, available through its web page. Extended sets of test matrices are available from Tim Davis's collection and from Matrix Market.

16 Sparse matrices. A large and increasing collection is maintained by the GRID-TLSE Project. Also see the report: The Rutherford-Boeing Sparse Matrix Collection, Iain S. Duff, Roger G. Grimes and John G. Lewis, Report RAL-TR, Rutherford Appleton Laboratory.

17 Solving sparse systems. Ax = b has the solution x = A^(-1) b, BUT this is notational only: do not use, or even think of, the inverse of A. For sparse A, A^(-1) is usually dense. Examples: tridiagonal and arrowhead matrices.
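A small NumPy check of this point, using an arbitrary nonsingular tridiagonal matrix chosen only for illustration: the matrix has O(n) nonzeros but its inverse is completely full.

    import numpy as np

    # Arbitrary nonsingular tridiagonal matrix of order 6 (16 nonzero entries).
    n = 6
    A = (np.diag(4.0 * np.ones(n))
         + np.diag(-1.0 * np.ones(n - 1), 1)
         + np.diag(-1.0 * np.ones(n - 1), -1))

    Ainv = np.linalg.inv(A)
    print(np.count_nonzero(A))     # 16 nonzeros in A
    print(np.count_nonzero(Ainv))  # 36: every entry of the inverse is nonzero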

18 Solving sparse systems. DIRECT METHOD: Gaussian elimination, PAQ = LU. The permutations P and Q are chosen to preserve sparsity and maintain stability. L: lower triangular (sparse). U: upper triangular (sparse). SOLVE Ax = b by Ly = Pb and then U Q^T x = y.
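As an illustration of this factorize-then-solve pattern (not of any code discussed in these slides), SciPy's sparse LU, which wraps SuperLU and applies row and column permutations internally, can be used as follows; the test matrix is random and purely illustrative.

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    # Random sparse test matrix, made diagonally dominant so it is safely nonsingular.
    n = 100
    A = (sp.random(n, n, density=0.05, random_state=0) + 10.0 * sp.eye(n)).tocsc()
    b = np.ones(n)

    lu = spla.splu(A)   # computes a factorization P A Q = L U, with orderings chosen internally
    x = lu.solve(b)     # triangular solves: L y = P b, then U Q^T x = y
    print(np.linalg.norm(A @ x - b))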

19 Direct methods. DIRECT METHODS:
Easy to package
High accuracy
Method of choice in many applications
Not dramatically affected by conditioning
Reasonably independent of structure
However:
High time and storage requirements
Typically limited in the order n that can be handled
So:
Use on a subproblem
Use as a preconditioner

20 Hybrid methods. COMBINING DIRECT AND ITERATIVE METHODS (can be thought of as sophisticated preconditioning):
Multigrid: using a direct method as the coarse-grid solver
Domain decomposition: using a direct method on local subdomains and a direct preconditioner on the interface
Block iterative methods: direct solver on the sub-blocks
Partial factorization as a preconditioner
Factorization of a nearby problem as a preconditioner (sketched below)
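The last idea, factorization of a nearby problem used as a preconditioner, can be sketched with SciPy; here the "nearby" matrix is just a shifted copy of A, chosen only for illustration.

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    n = 200
    A = (sp.random(n, n, density=0.02, random_state=1) + 20.0 * sp.eye(n)).tocsc()
    B = (A + 0.01 * sp.eye(n)).tocsc()   # stand-in for a "nearby", easier problem
    b = np.ones(n)

    lu = spla.splu(B)                                  # direct factorization of the nearby matrix
    M = spla.LinearOperator((n, n), matvec=lu.solve)   # apply its inverse as a preconditioner

    x, info = spla.gmres(A, b, M=M)
    print(info, np.linalg.norm(A @ x - b))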

21 Matrix storage schemes. For efficient solution of sparse equations we must:
Only store the nonzeros (or, exceptionally, a few zeros as well)
Only perform arithmetic with nonzeros
Preserve sparsity during the computation
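One common way of storing only the nonzeros is compressed sparse row (CSR) format; the slides do not prescribe a particular scheme, so this is just one illustration.

    import numpy as np
    import scipy.sparse as sp

    A = np.array([[4., 0., 1.],
                  [0., 3., 0.],
                  [2., 0., 5.]])

    S = sp.csr_matrix(A)   # only the 5 nonzeros are stored
    print(S.data)          # [4. 1. 3. 2. 5.]  values, row by row
    print(S.indices)       # [0 2 1 0 2]       column index of each stored value
    print(S.indptr)        # [0 2 3 5]         row i occupies data[indptr[i]:indptr[i+1]]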

22 Direct methods. Direct sparse matrix codes: a paradigm for large-scale scientific computing.
1. Much non-numeric processing
2. A significant amount of data handling
3. Sometimes out-of-core
4. Sometimes much time outwith the innermost loop
5. The innermost loop is sometimes quite complicated

23 Direct codes. TYPICAL INNERMOST LOOP:

C     Update one row of the reduced matrix by a multiple (AU) of the pivot row,
C     tracking the largest entry (BIG) and counting entries below the drop
C     tolerance (TOL).
      DO 590 JJ = J1, J2
         J = ICN(JJ)
         IF (IQ(J).GT.0) GO TO 590
         IOP = IOP + 1
         PIVROW = IJPOS - IQ(J)
         A(JJ) = A(JJ) + AU*A(PIVROW)
         IF (LBIG) BIG = DMAX1(DABS(A(JJ)), BIG)
         IF (DABS(A(JJ)).LT.TOL) IDROP = IDROP + 1
         ICN(PIVROW) = -ICN(PIVROW)
  590 CONTINUE

24 Addition of sparse vectors: a + b. Vector a has entries a_1, a_2, a_3, a_4 in positions j_1, j_2, j_3, j_4; vector b has entries b_1, b_2, b_3, b_4 in positions j_5, j_2, j_6, j_4. a is stored as the value list a_1 a_2 a_3 a_4 with index list j_1 j_2 j_3 j_4; b is stored as the value list b_1 b_2 b_3 b_4 with index list j_5 j_2 j_6 j_4.

25 Addition of sparse vectors: a + b. Expand a into a full vector W, then:
Scan the index list of vector b
Add the value of each b entry to the corresponding component of W
Merge the index lists of a and b
Collect the result vector from the components of W corresponding to this merged list
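A minimal Python sketch of this scheme (function and variable names are mine): a is scattered into a full work vector W, b is added into it, and the result is gathered over the merged index list.

    import numpy as np

    def sparse_add(a_val, a_ind, b_val, b_ind, n):
        """Return (indices, values) of a + b, both given as value/index lists."""
        W = np.zeros(n)                       # full work vector of length n
        W[a_ind] = a_val                      # expand a into W
        for j, bj in zip(b_ind, b_val):       # scan index list of b
            W[j] += bj                        # add b entries into W
        a_set = set(a_ind)
        merged = list(a_ind) + [j for j in b_ind if j not in a_set]   # merge index lists
        return merged, [W[j] for j in merged]  # gather result (W can then be reset to zero)

    # The index lists need not be sorted.
    idx, val = sparse_add([1.0, 2.0], [4, 1], [5.0, -2.0], [1, 0], n=6)
    print(idx, val)   # [4, 1, 0] [1.0, 7.0, -2.0]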

26 Addition of sparse vectors. Benefits of the scheme:
No scans of length n
No need to hold indices in order
No need to scan two sparse vectors in parallel
However:
It requires a work vector of length n
In identifying fill-in (that is, computing the pattern of a + b), there can be confusion between explicitly held zeros and entries not in the sparsity pattern

27 Addition of sparse vectors. EXERCISE. Design an algorithm to add two sparse vectors without requiring a real vector of length n and which does not confuse explicitly held zeros with entries not in the sparsity pattern. The algorithm should also adapt efficiently to the case where different multiples of a single vector are added to a sequence of different vectors (as in Gaussian elimination). You are allowed temporarily to have two copies of the pivot row, and you can assume that there exists an integer vector of length n all of whose entries are positive; however, any work in restoring this vector to its original form must be included. You should show that this algorithm also satisfies the requirements that the vectors need not be ordered and that scans of length n are avoided. You should show the complexity of the algorithm, probably by coding it and running it on vectors of different length and density; it should be independent of n and linear in the number of nonzero entries.

28 The ordering problem. In using Gaussian elimination to solve Ax = b, we perform the decomposition PAQ = LU, where P and Q (just P, with Q = P^T, if A is symmetric) are permutation matrices chosen to:
1. preserve sparsity and reduce work in the decomposition and solution
2. enable the use of efficient data structures
3. ensure stability
4. take advantage of structure

29 Ordering. Benefits of sparsity ordering on a matrix of order 2021 with 7353 entries: the slide tabulates total storage (Kwords), flops (10^6) and time (seconds on a CRAY J90) for three procedures: treating the system as dense, storing and operating only on nonzero entries, and using sparsity pivoting.

30 Ordering. Two categories: LOCAL ordering and GLOBAL ordering.

31 Ordering for symmetric matrices. (Pictures showing the pattern of the L/U factors for two different orderings of the same matrix.)

32 Minimum degree (Tinney S2). At each stage choose the diagonal entry with the least number of entries in its row of the reduced matrix. (Minimizing the fill-in exactly is NP-complete.) Usually it is a good heuristic and gives a good ordering. However, being a local algorithm, it does not take full account of the global underlying structure of the problem. Its performance can be unpredictable and it is sensitive to how ties are broken. The code for an efficient implementation can be remarkably complicated. Often the most time-consuming part of the algorithm is the degree update, hence Approximate Minimum Degree (AMD).
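A naive (and deliberately slow) sketch of the minimum degree idea on the elimination graph of a symmetric pattern; production codes such as MMD and AMD use quotient graphs and approximate degrees, which this toy version ignores.

    import numpy as np

    def naive_minimum_degree(A):
        """Toy minimum degree ordering for a symmetric 0/1 pattern given as a dense array."""
        n = A.shape[0]
        adj = [set(np.nonzero(A[i])[0]) - {i} for i in range(n)]
        remaining = set(range(n))
        order = []
        while remaining:
            # choose the node of least current degree (ties broken arbitrarily)
            p = min(remaining, key=lambda i: len(adj[i] & remaining))
            order.append(p)
            remaining.remove(p)
            nbrs = adj[p] & remaining
            for i in nbrs:              # eliminating p joins its neighbours: this is the fill-in
                adj[i] |= nbrs - {i}
        return order

    # 5-point stencil pattern on a tiny 3x3 grid, purely for illustration.
    k = 3
    A = np.eye(k * k)
    for i in range(k * k):
        r, c = divmod(i, k)
        if c + 1 < k:
            A[i, i + 1] = A[i + 1, i] = 1
        if r + 1 < k:
            A[i, i + k] = A[i + k, i] = 1
    print(naive_minimum_degree(A))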

33 Local ordering for unsymmetric matrices. (Picture of a candidate pivot whose row has r_i = 3 entries and whose column has c_j = 4 entries in the reduced matrix.) Minimize (r_i - 1)(c_j - 1): the MARKOWITZ ORDERING (Markowitz 1957). Choose a nonzero satisfying a numerical criterion with the best Markowitz count.

34 Threshold pivoting for stability. a_ij is admissible as a pivot if |a_ij| >= u max_k |a_kj|, with 0 < u <= 1. So choose as pivot at each stage the entry with lowest Markowitz cost satisfying the above inequality. This is called threshold pivoting.
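A dense, purely illustrative sketch of one pivot-selection step combining the Markowitz cost with the threshold test above; the names and the example matrix are mine.

    import numpy as np

    def choose_pivot(A, u=0.1):
        """Return (i, j) with smallest (r_i-1)(c_j-1) among entries passing the threshold test."""
        nz = A != 0
        r = nz.sum(axis=1)                 # r_i: entries in row i
        c = nz.sum(axis=0)                 # c_j: entries in column j
        colmax = np.abs(A).max(axis=0)     # max_k |a_kj| for each column j
        best, best_cost = None, None
        for i, j in zip(*np.nonzero(A)):
            if abs(A[i, j]) >= u * colmax[j]:        # threshold stability test
                cost = (r[i] - 1) * (c[j] - 1)       # Markowitz cost
                if best is None or cost < best_cost:
                    best, best_cost = (int(i), int(j)), cost
        return best

    A = np.array([[0.01, 2.0, 0.0],
                  [3.0,  0.0, 1.0],
                  [0.0,  4.0, 5.0]])
    print(choose_pivot(A, u=0.1))   # the small entry 0.01 in position (0, 0) is rejected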

35 Threshold pivoting for stability. Growth at one step is bounded by
max_i |a_ij^(k+1)| <= (1 + 1/u) max_i |a_ij^(k)|
and the overall growth by
max_i |a_ij^(k)| <= (1 + 1/u)^(p_j) max_i |a_ij|,
where p_j is the number of off-diagonal entries in the jth column of U. If H = L̂Û - A, then
|h_ij| <= 5.01 ε n max_k |a_ij^(k)|,
where ε is the relative machine precision. This means that, through the threshold parameter u, we can directly influence the backward error of the factorization.

36 Threshold pivoting for stability. Effect of changing the threshold parameter: for a matrix of order 541 with 4285 entries, the slide tabulates the number of entries in the factors and the error in the solution over a range of threshold values.

37 Efficient implementation of Markowitz. (Picture of the partially eliminated matrix: the computed parts of U and L bordering the reduced matrix A^(k).) If you search all of A^(k) each time, the complexity is O(nτ) >= O(n^2). We want an O(1) search every time:
Search rows/columns in order of increasing sparsity
Can limit the search to a small number of rows/columns

38 Local ordering for unsymmetric matrices. (PAQ)^T (PAQ) = Q^T A^T P^T P A Q = Q^T (A^T A) Q. So choose the column ordering Q using the minimum degree algorithm on A^T A. COLAMD (used by Matlab) does this without forming A^T A. The row ordering can be chosen later for stability.
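SciPy's SuperLU interface exposes this kind of column preordering: permc_spec="COLAMD" requests column approximate minimum degree, and "MMD_ATA" a minimum degree ordering on A^T A. This is SuperLU rather than anything described in these slides, but it lets one compare the fill-in produced by different column orderings.

    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    A = (sp.random(300, 300, density=0.02, random_state=2) + 10.0 * sp.eye(300)).tocsc()

    for spec in ("NATURAL", "MMD_ATA", "COLAMD"):
        lu = spla.splu(A, permc_spec=spec)   # column ordering chosen before factorization
        print(spec, lu.L.nnz + lu.U.nnz)     # total entries in the factors (fill-in)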

39 Global ordering for symmetric matrices. PARTITIONING, the divide and conquer paradigm:
(Nested) dissection
Domain decomposition
Substructuring

40 Global ordering for symmetric matrices. Computational graph. (Picture of the graph associated with a sparse matrix.)

41 Global ordering for symmetric matrices. Dissection: choose the last pivots first. Dissect the problem into two parts; the resulting matrix has doubly bordered block diagonal form, the two parts being coupled only through the separator (picture on the slide). Hence to nested dissection.

42 Nested dissection. Nested dissection was originally proposed by Alan George in his thesis. Although good on regular grids and in terms of complexity, it was not used for general symmetric matrices until the 1990s. The key to achieving a good ordering on general problems is the bisection algorithm.
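A toy recursive dissection ordering on a regular k x k grid illustrates the "last pivots first" idea: at every level the two halves are numbered first and the separator last. This is a sketch of the principle only, not a substitute for graph-partitioning software.

    def nested_dissection(rows, cols):
        """Grid points (r, c) in elimination order: two halves first, separator last."""
        if len(rows) <= 2 or len(cols) <= 2:
            return [(r, c) for r in rows for c in cols]
        if len(rows) >= len(cols):                  # split across the longer dimension
            m = len(rows) // 2
            halves = nested_dissection(rows[:m], cols) + nested_dissection(rows[m + 1:], cols)
            sep = [(rows[m], c) for c in cols]      # separator row, eliminated last
        else:
            m = len(cols) // 2
            halves = nested_dissection(rows, cols[:m]) + nested_dissection(rows, cols[m + 1:])
            sep = [(r, cols[m]) for r in rows]      # separator column, eliminated last
        return halves + sep

    k = 7
    order = nested_dissection(list(range(k)), list(range(k)))
    print(len(order))   # 49: every grid point appears exactly once
    print(order[-k:])   # the top-level separator comes last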

43 Spectral bisection (Pothen, Simon, Liou 1990). The Laplacian matrix L of a matrix A has
l_ii = degree of node i,
l_ij = 0 if a_ij = 0, i != j,
l_ij = -1 if a_ij != 0, i != j.
If x is such that x_i = +1 for i in partition C and x_i = -1 for i in partition D, then
x^T L x = sum over pairs with a_ij != 0 of (x_i - x_j)^2 = 4 times the number of cut edges.

44 Spectral bisection (Pothen, Simon, Liou 1990). So the graph bisection problem is:
minimize x^T L x subject to x_i = ±1 and sum_i x_i = 0.
The continuous relaxation is to minimize x^T L x / x^T x over all vectors orthogonal to the vector of all ones. So we search for the Fiedler vector of L: the eigenvector corresponding to the second smallest eigenvalue.
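A dense-matrix sketch of spectral bisection: build the Laplacian defined above, take the eigenvector of the second smallest eigenvalue, and split the vertices by the sign of its components. The path graph used as the example is mine.

    import numpy as np

    def spectral_bisection(A):
        """Split the vertices of the graph of a symmetric pattern A by the Fiedler vector."""
        adj = (A != 0).astype(float)
        np.fill_diagonal(adj, 0.0)
        L = np.diag(adj.sum(axis=1)) - adj     # l_ii = degree, l_ij = -1 on edges
        w, V = np.linalg.eigh(L)               # eigenvalues in ascending order
        fiedler = V[:, 1]                      # eigenvector of the second smallest eigenvalue
        return np.nonzero(fiedler >= 0)[0], np.nonzero(fiedler < 0)[0]

    # Path graph on 6 vertices: the split should be {0, 1, 2} against {3, 4, 5}.
    A = np.eye(6)
    for i in range(5):
        A[i, i + 1] = A[i + 1, i] = 1
    print(spectral_bisection(A))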

45 Comparison of AMD and nested dissection. A nested dissection ordering can be generated by the graph partitioning software METIS (Karypis and Kumar). For large problems this often beats a local ordering such as AMD (Amestoy, Davis, and Duff). The matrices are from the PARASOL test set and the runs are from a very early version of MUMPS (more about this code later). The slide tabulates entries in the factors (10^6) and operations in the factorization (10^9) for AMD and ND orderings on the matrices MIXTANK, INVEXTR1 and BBMAT.

46 Sparse complexity. COMPLEXITY of LU factorization:
Dense matrices of order n: (2/3) n^3 + O(n^2) flops, n^2 storage
Band matrix (order n, bandwidth k): 2 k^2 n work, nk storage
Five-diagonal matrix (on a k x k grid): O(k^3) work and O(k^2 log k) storage
Tridiagonal and arrowhead matrices: O(n) work and storage
Target: O(n) + O(τ) for a sparse matrix of order n with τ entries

47 MA48. An example of a code that uses Markowitz ordering and threshold pivoting is the HSL code MA48.

48 Features of MA48.
Uses Markowitz ordering with threshold pivoting
Factorizes rectangular and singular systems
Block triangularization: done at a higher level; internal routines work only on a single block
Switch to full (dense) code at all phases of the solution
Three factorize entries: only the pivot order is input; a fast factorize uses the structures generated in the first factorize; can factorize only the later columns of the matrix
Iterative refinement and an error estimator in the Solve phase
Can employ a drop tolerance
Low storage in the analyse phase

49 Sparse HSL code MA48. Comparison between a sparse code (MA48 from HSL) and a dense code (DGESV from LAPACK): the slide tabulates, for the matrices KALLMAN, FS, GRE, WEST, ORANI, LNS and LHR07C (with their order and number of entries), times for analyse/factorize/solve in seconds on a Compaq/HP Alpha DS 20 workstation.

50 MA48: why look at other codes?
MA48 does not naturally vectorize or parallelize (short vector lengths)
There is heavy use of indirect addressing in MA48 (viz. references of the form W(IND(I))); even with hardware support, indirect addressing is 2-4 times slower: more memory traffic and non-localization of data
MA48 can perform poorly when there is significant fill-in
We would like to take more advantage of the Level 3 BLAS
