HSL and its out-of-core solver
1 HSL and its out-of-core solver
Jennifer A. Scott
Prague, November 2006 (p. 1/37)
2 Sparse systems
Problem: we wish to solve Ax = b, where A is LARGE.
Informal definition: A is sparse if many entries are zero and it is worthwhile to exploit these zeros.
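To make "exploit these zeros" concrete, here is a tiny illustrative sketch (pure Python, not an HSL data structure): store only the nonzero entries of A, so that storage and a matrix-vector product scale with the nonzero count rather than with n^2.

```python
# Illustrative sparse storage: keep only the nonzeros of A as
# (row, col) -> value, and form y = A*x by touching just those entries.
nonzeros = {(0, 0): 4.0, (0, 2): 1.0, (1, 1): 3.0, (2, 0): 1.0, (2, 2): 2.0}
n = 3
x = [1.0, 2.0, 3.0]

y = [0.0] * n
for (i, j), a_ij in nonzeros.items():   # visit nonzero entries only
    y[i] += a_ij * x[j]
# y now holds A*x, computed with O(nz) work
```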
3 Sparse matrices
Many application areas in science, engineering, and finance lead to sparse systems:
- computational fluid dynamics
- chemical engineering
- circuit simulation
- economic modelling
- fluid flow
- oceanography
- linear programming
- structural engineering
- ...
But all have different patterns and characteristics.
4 Circuit simulation [sparsity pattern]
5 Reservoir modelling [sparsity pattern, nz = 3474]
6 Economic modelling [sparsity pattern, nz = 7682]
7 Structural engineering [sparsity pattern]
8 Acoustics [sparsity pattern]
9 Chemical engineering [sparsity pattern]
10 Linear programming [sparsity pattern, nz = 4841]
11 Direct methods
Direct methods involve explicit factorization, e.g. PAQ = LU, where
- L, U are lower and upper triangular matrices
- P, Q are permutation matrices
The solution process is completed by triangular solves: Ly = Pb and Uz = y, then x = Qz.
If A is sparse, it is crucial to try to ensure L and U are sparse.
Suppose A is n x n with nz nonzeros. Gaussian elimination for a dense problem requires O(n^2) storage and O(n^3) flops, hence is infeasible for large n. The target complexity for sparse matrix computations is O(n) + O(nz).
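The factor-solve process above can be sketched in a few lines. This is an illustrative toy (pure Python, dense storage, no pivoting on a well-conditioned matrix), not the HSL implementation:

```python
# Toy dense LU factorization (Doolittle, no pivoting) followed by the
# two triangular solves Ly = b and Ux = y.
def lu_factor(A):
    n = len(A)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    U = [row[:] for row in A]
    for k in range(n):
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]          # elimination multiplier
            L[i][k] = m
            for j in range(k, n):
                U[i][j] -= m * U[k][j]
    return L, U

def solve_lower(L, b):                     # forward substitution, Ly = b
    y = []
    for i in range(len(b)):
        y.append(b[i] - sum(L[i][j] * y[j] for j in range(i)))
    return y

def solve_upper(U, y):                     # backward substitution, Ux = y
    n = len(y)
    x = [0.0] * n
    for i in reversed(range(n)):
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (y[i] - s) / U[i][i]
    return x

A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
L, U = lu_factor(A)
x = solve_upper(U, solve_lower(L, b))
```

A sparse direct solver performs exactly these steps, but with permutations and data structures chosen so that L and U stay sparse.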
12 Direct solvers
Most sparse direct solvers have a number of phases, typically:
- ORDER: preorder the matrix to exploit structure
- ANALYSE: analyse the matrix structure to produce data structures for the factorization
- FACTORIZE: perform the numerical factorization
- SOLVE: use the factors to solve one or more systems
Writing an efficient direct solver is non-trivial, so let someone else do it!
13 Mathematical software libraries
Benefits of using high-quality mathematical software libraries include:
- a shorter application development cycle, cutting time-to-market and gaining competitive advantage
- reduced overall development costs
- more time to focus on specialist aspects of applications
- improved application accuracy and robustness
- fully supported and maintained software
14 HSL
HSL began as the Harwell Subroutine Library: a collection of portable, fully documented and tested Fortran packages, primarily written and developed by the Numerical Analysis Group at RAL.
Each package performs a basic numerical task (e.g. solve a linear system, find eigenvalues) and has been designed to be incorporated into programs.
Particular strengths:
- sparse matrix computations
- optimization
- large-scale system solution
HSL has an international reputation for reliability and efficiency. It is used by academics and commercial organisations and has been incorporated into a large number of commercial products.
15 Development of HSL
HSL is both revolutionary and evolutionary.
Revolutionary: some codes are radically different in technique and algorithm design, including
- MA18: first sparse direct code (1971)
- MA27: first multifrontal code (1982)
Evolutionary: some codes evolve (major algorithm developments, language changes, added functionality...), e.g.
- MA18 -> MA28 -> MA48 (unsymmetric sparse systems)
- MA17 -> MA27 -> MA57 (symmetric sparse systems)
16 Organisation of HSL
Since 2000, HSL has been divided into the main HSL library and the HSL Archive.
- The HSL Archive consists of older packages that have been superseded either by improved HSL packages (e.g. MA28 superseded by MA48, and MA27 by MA57) or by public domain libraries such as LAPACK.
- The HSL Archive is free to all for non-commercial use, but its use is not supported.
- There is a new release of HSL every 2-3 years... currently HSL 2004.
- HSL is marketed by HyproTech UK (part of AspenTech).
17 The latest HSL sparse solver
Problem sizes constantly grow larger:
- 40 years ago, "large" might have meant order 10^2
- today, order > 10^7 is not unusual
For direct methods, storage requirements grow more rapidly than problem size.
Possible options:
- an iterative method... but what preconditioner?
- combine iterative and direct methods?
- buy a bigger machine... but expensive and inflexible
- use an out-of-core solver
An out-of-core solver holds the matrix factors in files and may also hold the matrix data and some work arrays in files.
18 Out-of-core solvers
The idea of out-of-core solvers is not new: band and frontal solvers developed in the 1970s and 1980s held the matrix data and factors out-of-core; for example, MA32 in HSL (superseded in the 1990s by MA42). 30 years ago, John Reid developed a Cholesky out-of-core multifrontal code, TREESOLV, for element applications.
More recent codes include:
- BCSEXT-LIB (Boeing)
- Oblio (Dobrian and Pothen)
- TAUCS (Toledo and students)
Our new out-of-core solver is HSL_MA77.
19 Key features of HSL_MA77
- HSL_MA77 is designed to solve LARGE sparse symmetric systems
- The matrix data, the matrix factor, and the main work space are (optionally) held in files
- The first release is for positive definite problems (Cholesky, A = LL^T); the next release will also handle indefinite problems
- The matrix A may be either in assembled form or a sum of element matrices,
    A = sum_{k=1}^{m} A^(k),
  where each A^(k) has nonzeros in a small number of rows and columns and corresponds to the matrix from element k
- Reverse communication interface, with input by rows or by elements
- HSL_MA77 implements a multifrontal algorithm
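As a reminder of the positive definite kernel, here is a minimal dense Cholesky sketch (pure Python, illustrative only; the real solver factorizes sparse matrices held out-of-core):

```python
import math

# Toy dense Cholesky A = L * L^T for a small positive definite matrix.
def cholesky(A):
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for j in range(n):
        s = A[j][j] - sum(L[j][k] ** 2 for k in range(j))
        L[j][j] = math.sqrt(s)            # requires A positive definite
        for i in range(j + 1, n):
            s = A[i][j] - sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = s / L[j][j]
    return L

A = [[4.0, 2.0, 0.0], [2.0, 5.0, 1.0], [0.0, 1.0, 3.0]]
L = cholesky(A)                           # lower triangular factor
```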
20 Basic multifrontal algorithm
Assume that A is a sum of element matrices. The basic multifrontal algorithm may be described as follows. Given a pivot sequence:
  do for each pivot
    assemble all elements that contain the pivot into a dense matrix
    eliminate the pivot and any other variables that are found only here
    treat the reduced matrix as a new generated element
  end do
21 Multifrontal method: the assembly tree
- Each leaf node represents an original element.
- Each non-leaf node represents a set of eliminations and the corresponding generated element.
22 Multifrontal method: at each non-leaf node
The frontal matrix has the block form
    F = ( F_11    F_12 )
        ( F_12^T  F_22 )
Pivots can only be chosen from the F_11 block, since F_22 is NOT fully summed. The eliminations replace F_22 by the generated element
    F_22 <- F_22 - F_12^T F_11^{-1} F_12
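The generated-element update can be sketched numerically. A toy example with made-up numbers and a single 1x1 pivot block F_11 (pure Python; the real code does this blockwise with Level 3 BLAS):

```python
# Forming the generated element (Schur complement)
#     S = F22 - F12^T * F11^{-1} * F12
# for a tiny front whose fully summed block F11 is 1 x 1.
F11 = 4.0                        # fully summed pivot block
F12 = [2.0, 8.0]                 # coupling to the not-fully-summed part
F22 = [[5.0, 1.0], [1.0, 20.0]]  # not fully summed: no pivots here

S = [[F22[i][j] - F12[i] * F12[j] / F11 for j in range(2)]
     for i in range(2)]
# S is passed up the assembly tree as a new generated element
```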
23 Summary of the multifrontal method
- Pass elements from children to parent
- At the parent, perform the ASSEMBLY into a dense matrix
- Then perform the ELIMINATIONS using dense Gaussian elimination (allows Level 3 BLAS: TRSM and GEMM)
24 Language
HSL is a Fortran library. HSL_MA77 is written in Fortran 95, PLUS we use allocatable structure components and dummy arguments (part of Fortran 2003, implemented by current compilers).
Advantages of using allocatables:
- more efficient than using pointers:
  - pointers must allow for the array being associated with an array section (e.g. a(i,:)) that is not a contiguous part of its parent
  - optimization of a loop involving a pointer may be inhibited by the possibility that its target is also accessed in another way in the loop
- avoids the memory-leakage dangers of pointers
25 Language (continued)
Other features of F95 that are important in the design of HSL_MA77:
- Automatic and allocatable arrays significantly reduce the complexity of the code and the user interface (especially in the indefinite case)
- We selectively use long (64-bit) integers (selected_int_kind(18))
- The multifrontal algorithm can be naturally formulated using recursive procedures:

      ...
      call factor (root)
      ...
      recursive subroutine factor (node)
        ! Loop over children of node
        do i = 1, number_children
          call factor (child(i))
        end do
        ! Assemble frontal matrix and partially factorize
        ...
      end subroutine factor
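The same recursion can be mirrored as a runnable sketch in Python (node names and the `children` map are made-up illustration data): each node is factorized only after all of its children, i.e. the tree is visited in postorder.

```python
# Hypothetical assembly tree: visit children before their parent
# (postorder), which is the order in which fronts can be eliminated.
children = {"root": ["a", "b"], "a": ["leaf1", "leaf2"], "b": [],
            "leaf1": [], "leaf2": []}
order = []

def factor(node):
    for child in children[node]:   # recurse into all children first
        factor(child)
    order.append(node)             # then "eliminate" at this node

factor("root")
# order lists leaves before their parents: leaf1, leaf2, a, b, root
```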
26 Virtual memory management
- Essential to our code design is our virtual memory management system
- This was part of the original TREESOLV package
- A separate package, HSL_OF01, handles all i/o:
  - it provides read/write facilities for one or more direct access files through a single in-core buffer (work array)
  - the aim is to avoid actual input-output operations whenever possible
  - each set of data is accessed as a virtual array, i.e. as if it were a very long array
  - any contiguous section of the virtual array may be read or written
  - each virtual array is associated with a primary file; if it is too large for a single file, one or more secondary files are used
27 Virtual memory management
[diagram: one in-core buffer serving virtual arrays backed by superfiles main_file (with secondaries main_file1, main_file2) and temp_file]
In this example, two superfiles are associated with the buffer: the first superfile has two secondaries, the second has none.
28 Use of the buffer
- The buffer is divided into fixed-length pages
- The most recently accessed pages of the virtual array are held in the buffer
- For each page in the buffer, we store:
  - the unit number of its primary file
  - the page number within the corresponding virtual array
- Required page(s) are found using a simple hash function
We aim to minimise the number of i/o operations by:
- first using wanted pages that are already in the buffer
- if the buffer is full, freeing the least recently accessed page
- only writing a page to file if it has changed since entry into the buffer
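The buffering policy on this slide can be sketched as a tiny least-recently-used page cache with write-back of dirty pages only (pure Python, illustrative; not HSL_OF01 -- a dict stands in for the file):

```python
from collections import OrderedDict

# Toy LRU page buffer: pages are kept in access order; on eviction a
# page is written back to "file" only if it changed while buffered.
class PageBuffer:
    def __init__(self, npages, file):
        self.npages = npages
        self.file = file                  # page number -> page data
        self.cache = OrderedDict()        # page number -> (data, dirty)
        self.writes = 0                   # count of actual write-backs

    def _evict_if_full(self):
        if len(self.cache) > self.npages:
            page, (data, dirty) = self.cache.popitem(last=False)
            if dirty:                     # unchanged pages are dropped
                self.file[page] = data
                self.writes += 1

    def read(self, page):
        if page in self.cache:            # already buffered: no i/o
            self.cache.move_to_end(page)  # mark most recently used
            return self.cache[page][0]
        data = self.file.get(page, 0)     # "i/o": fetch from file
        self.cache[page] = (data, False)
        self._evict_if_full()
        return data

    def write(self, page, data):
        self.cache[page] = (data, True)   # dirty: must reach file later
        self.cache.move_to_end(page)
        self._evict_if_full()

file = {0: 10, 1: 11, 2: 12}
buf = PageBuffer(npages=2, file=file)
buf.write(0, 99)      # page 0 dirty in buffer
buf.read(1)           # page 1 clean in buffer
buf.read(2)           # buffer full: page 0 evicted and written back
```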
29 Advantages
Advantages of this approach for developing sparse solvers:
- All i/o is isolated... this assists with code design, development, debugging, and maintenance
- The user is shielded from i/o but can control where files are written and can save data for future solves
- i/o is not needed if the user has supplied a long enough buffer
- HSL_OF01 can be used in the development of other solvers
30 Use of HSL_OF01 within HSL_MA77
- HSL_MA77 has an integer buffer and a real buffer
- The integer buffer is associated with a file that holds the integer data for the input matrix and the matrix factor
- The real buffer is associated with two files:
  - one holds the real data for the input matrix and the matrix factor
  - the other is used for the multifrontal stack
- The indefinite case will use two further files (to hold the integer and real data associated with delayed pivots)
- The user must supply pathnames and filenames for all the files
NOTE: we include an option for the files to be replaced by in-core arrays (faster for problems for which the user has enough memory).
31 Numerical experiments
- Test set of 26 problems of order up to 10^6 from a range of applications, all available in the University of Florida Sparse Matrix Collection
- Tests used double precision (64-bit) reals on a single 3.6 GHz Intel Xeon processor of a Dell Precision 670 with 4 Gbytes of RAM
- g95 compiler with the -O option, and ATLAS BLAS and LAPACK
- Comparisons with the flagship HSL solver MA57 (Duff)
- All times are wall clock times in seconds
32 Effect of varying npage and lpage
[table: times for various (npage, lpage) settings and for in-core working, on problems af_shell3, crankseg_2, m_t1, shipsec; values not shown]
33 Times for the different phases of HSL_MA77
[table: phases Input, Ordering, MA77_analyse, MA77_factor(0), MA77_factor(1), MA77_solve(1), MA77_solve(10), MA77_solve(100) on problems af_shell3 (n = 504,855), cfd2 (n = 123,440), fullb (n = 199,187), thread (n = 29,736); values not shown]
34 Factorization time compared with MA57
[plot: time relative to MA77 out-of-core time vs. problem index, for MA57 and MA77 in-core]
35 Solve time compared with MA57
[plot: time relative to MA77 out-of-core time vs. problem index, for MA57 and MA77 in-core]
36 Complete solution time compared with MA57
[plot: time relative to MA77 out-of-core time vs. problem index, for MA57 and MA77 in-core]
37 Concluding remarks
- Writing the solver has been (and still is) a major project
- The positive definite code is performing well
- Out-of-core working adds an overhead, but it is not prohibitive
- The indefinite kernel is currently under development (the need for pivoting adds to the complexity)
- A version for complex arithmetic will be developed
- We also plan a version for unsymmetric problems that have (almost) symmetric structure
References:
- An out-of-core sparse Cholesky solver, J. K. Reid and J. A. Scott, RAL-TR
- HSL_OF01, a virtual memory system in Fortran, J. K. Reid and J. A. Scott, RAL-TR
More informationA numerically adaptive implementation of the simplex method
A numerically adaptive implementation of the simplex method József Smidla, Péter Tar, István Maros Department of Computer Science and Systems Technology University of Pannonia 17th of December 2014. 1
More informationNotes on Determinant
ENGG2012B Advanced Engineering Mathematics Notes on Determinant Lecturer: Kenneth Shum Lecture 9-18/02/2013 The determinant of a system of linear equations determines whether the solution is unique, without
More informationSimple File Input & Output
Simple File Input & Output Handout Eight Although we are looking at file I/O (Input/Output) rather late in this course, it is actually one of the most important features of any programming language. The
More informationScalable Distributed Schur Complement Solvers for Internal and External Flow Computations on Many-Core Architectures
Scalable Distributed Schur Complement Solvers for Internal and External Flow Computations on Many-Core Architectures Dr.-Ing. Achim Basermann, Dr. Hans-Peter Kersken, Melven Zöllner** German Aerospace
More informationSolving Linear Systems, Continued and The Inverse of a Matrix
, Continued and The of a Matrix Calculus III Summer 2013, Session II Monday, July 15, 2013 Agenda 1. The rank of a matrix 2. The inverse of a square matrix Gaussian Gaussian solves a linear system by reducing
More information1 Solving LPs: The Simplex Algorithm of George Dantzig
Solving LPs: The Simplex Algorithm of George Dantzig. Simplex Pivoting: Dictionary Format We illustrate a general solution procedure, called the simplex algorithm, by implementing it on a very simple example.
More informationSolving Systems of Linear Equations
LECTURE 5 Solving Systems of Linear Equations Recall that we introduced the notion of matrices as a way of standardizing the expression of systems of linear equations In today s lecture I shall show how
More information1. The memory address of the first element of an array is called A. floor address B. foundation addressc. first address D.
1. The memory address of the first element of an array is called A. floor address B. foundation addressc. first address D. base address 2. The memory address of fifth element of an array can be calculated
More informationThree Paths to Faster Simulations Using ANSYS Mechanical 16.0 and Intel Architecture
White Paper Intel Xeon processor E5 v3 family Intel Xeon Phi coprocessor family Digital Design and Engineering Three Paths to Faster Simulations Using ANSYS Mechanical 16.0 and Intel Architecture Executive
More informationHY345 Operating Systems
HY345 Operating Systems Recitation 2 - Memory Management Solutions Panagiotis Papadopoulos panpap@csd.uoc.gr Problem 7 Consider the following C program: int X[N]; int step = M; //M is some predefined constant
More informationKrishna Institute of Engineering & Technology, Ghaziabad Department of Computer Application MCA-213 : DATA STRUCTURES USING C
Tutorial#1 Q 1:- Explain the terms data, elementary item, entity, primary key, domain, attribute and information? Also give examples in support of your answer? Q 2:- What is a Data Type? Differentiate
More informationSPARSE matrices are commonly present in computer memory
Proceedings of the 2013 Federated Conference on Computer Science and Information Systems pp. 479 486 Storing Sparse Matrices to Files in the Adaptive-Blocking Hierarchical Storage Format Daniel Langr,
More informationSYSTEMS OF EQUATIONS AND MATRICES WITH THE TI-89. by Joseph Collison
SYSTEMS OF EQUATIONS AND MATRICES WITH THE TI-89 by Joseph Collison Copyright 2000 by Joseph Collison All rights reserved Reproduction or translation of any part of this work beyond that permitted by Sections
More informationCSE 6040 Computing for Data Analytics: Methods and Tools
CSE 6040 Computing for Data Analytics: Methods and Tools Lecture 12 Computer Architecture Overview and Why it Matters DA KUANG, POLO CHAU GEORGIA TECH FALL 2014 Fall 2014 CSE 6040 COMPUTING FOR DATA ANALYSIS
More informationA Lab Course on Computer Architecture
A Lab Course on Computer Architecture Pedro López José Duato Depto. de Informática de Sistemas y Computadores Facultad de Informática Universidad Politécnica de Valencia Camino de Vera s/n, 46071 - Valencia,
More informationAPPM4720/5720: Fast algorithms for big data. Gunnar Martinsson The University of Colorado at Boulder
APPM4720/5720: Fast algorithms for big data Gunnar Martinsson The University of Colorado at Boulder Course objectives: The purpose of this course is to teach efficient algorithms for processing very large
More informationSimple Fortran Multitasking Library for the Apple Macintosh Computer
Simple Fortran Multitasking Library for the Apple Macintosh Computer Viktor K. Decyk Department of Physics and Astronomy UCLA Los Angeles, California 90095-1547 decyk@physics.ucla.edu The Apple Macintosh
More informationAn Overview Of Software For Convex Optimization. Brian Borchers Department of Mathematics New Mexico Tech Socorro, NM 87801 borchers@nmt.
An Overview Of Software For Convex Optimization Brian Borchers Department of Mathematics New Mexico Tech Socorro, NM 87801 borchers@nmt.edu In fact, the great watershed in optimization isn t between linearity
More informationA Survey of Out-of-Core Algorithms in Numerical Linear Algebra
DIMACS Series in Discrete Mathematics and Theoretical Computer Science A Survey of Out-of-Core Algorithms in Numerical Linear Algebra Sivan Toledo Abstract. This paper surveys algorithms that efficiently
More informationSolving Systems of Linear Equations Using Matrices
Solving Systems of Linear Equations Using Matrices What is a Matrix? A matrix is a compact grid or array of numbers. It can be created from a system of equations and used to solve the system of equations.
More informationParallel and Distributed Computing Programming Assignment 1
Parallel and Distributed Computing Programming Assignment 1 Due Monday, February 7 For programming assignment 1, you should write two C programs. One should provide an estimate of the performance of ping-pong
More information2x + y = 3. Since the second equation is precisely the same as the first equation, it is enough to find x and y satisfying the system
1. Systems of linear equations We are interested in the solutions to systems of linear equations. A linear equation is of the form 3x 5y + 2z + w = 3. The key thing is that we don t multiply the variables
More informationPhysical Data Organization
Physical Data Organization Database design using logical model of the database - appropriate level for users to focus on - user independence from implementation details Performance - other major factor
More informationSpring 2011 Prof. Hyesoon Kim
Spring 2011 Prof. Hyesoon Kim Today, we will study typical patterns of parallel programming This is just one of the ways. Materials are based on a book by Timothy. Decompose Into tasks Original Problem
More informationSolving polynomial least squares problems via semidefinite programming relaxations
Solving polynomial least squares problems via semidefinite programming relaxations Sunyoung Kim and Masakazu Kojima August 2007, revised in November, 2007 Abstract. A polynomial optimization problem whose
More informationPES Institute of Technology-BSC QUESTION BANK
PES Institute of Technology-BSC Faculty: Mrs. R.Bharathi CS35: Data Structures Using C QUESTION BANK UNIT I -BASIC CONCEPTS 1. What is an ADT? Briefly explain the categories that classify the functions
More informationThe Characteristic Polynomial
Physics 116A Winter 2011 The Characteristic Polynomial 1 Coefficients of the characteristic polynomial Consider the eigenvalue problem for an n n matrix A, A v = λ v, v 0 (1) The solution to this problem
More informationOptimizing matrix multiplication Amitabha Banerjee abanerjee@ucdavis.edu
Optimizing matrix multiplication Amitabha Banerjee abanerjee@ucdavis.edu Present compilers are incapable of fully harnessing the processor architecture complexity. There is a wide gap between the available
More informationMATH10212 Linear Algebra. Systems of Linear Equations. Definition. An n-dimensional vector is a row or a column of n numbers (or letters): a 1.
MATH10212 Linear Algebra Textbook: D. Poole, Linear Algebra: A Modern Introduction. Thompson, 2006. ISBN 0-534-40596-7. Systems of Linear Equations Definition. An n-dimensional vector is a row or a column
More informationClassifying Large Data Sets Using SVMs with Hierarchical Clusters. Presented by :Limou Wang
Classifying Large Data Sets Using SVMs with Hierarchical Clusters Presented by :Limou Wang Overview SVM Overview Motivation Hierarchical micro-clustering algorithm Clustering-Based SVM (CB-SVM) Experimental
More informationThe Union-Find Problem Kruskal s algorithm for finding an MST presented us with a problem in data-structure design. As we looked at each edge,
The Union-Find Problem Kruskal s algorithm for finding an MST presented us with a problem in data-structure design. As we looked at each edge, cheapest first, we had to determine whether its two endpoints
More informationPerformance Evaluation and Optimization of A Custom Native Linux Threads Library
Center for Embedded Computer Systems University of California, Irvine Performance Evaluation and Optimization of A Custom Native Linux Threads Library Guantao Liu and Rainer Dömer Technical Report CECS-12-11
More informationSources: On the Web: Slides will be available on:
C programming Introduction The basics of algorithms Structure of a C code, compilation step Constant, variable type, variable scope Expression and operators: assignment, arithmetic operators, comparison,
More informationPROGRAMMABLE LOGIC CONTROL
PROGRAMMABLE LOGIC CONTROL James Vernon: control systems principles.co.uk ABSTRACT: This is one of a series of white papers on systems modelling, analysis and control, prepared by Control Systems Principles.co.uk
More informationTechniques of the simplex basis LU factorization update
Techniques of the simplex basis L factorization update Daniela Renata Cantane Electric Engineering and Computation School (FEEC), State niversity of Campinas (NICAMP), São Paulo, Brazil Aurelio Ribeiro
More informationHardware-Aware Analysis and. Presentation Date: Sep 15 th 2009 Chrissie C. Cui
Hardware-Aware Analysis and Optimization of Stable Fluids Presentation Date: Sep 15 th 2009 Chrissie C. Cui Outline Introduction Highlights Flop and Bandwidth Analysis Mehrstellen Schemes Advection Caching
More informationLinear Algebra Notes
Linear Algebra Notes Chapter 19 KERNEL AND IMAGE OF A MATRIX Take an n m matrix a 11 a 12 a 1m a 21 a 22 a 2m a n1 a n2 a nm and think of it as a function A : R m R n The kernel of A is defined as Note
More informationParallelization: Binary Tree Traversal
By Aaron Weeden and Patrick Royal Shodor Education Foundation, Inc. August 2012 Introduction: According to Moore s law, the number of transistors on a computer chip doubles roughly every two years. First
More informationAnalysis of Compression Algorithms for Program Data
Analysis of Compression Algorithms for Program Data Matthew Simpson, Clemson University with Dr. Rajeev Barua and Surupa Biswas, University of Maryland 12 August 3 Abstract Insufficient available memory
More informationSymbol Tables. Introduction
Symbol Tables Introduction A compiler needs to collect and use information about the names appearing in the source program. This information is entered into a data structure called a symbol table. The
More information3 P0 P0 P3 P3 8 P1 P0 P2 P3 P1 P2
A Comparison of 1-D and 2-D Data Mapping for Sparse LU Factorization with Partial Pivoting Cong Fu y Xiangmin Jiao y Tao Yang y Abstract This paper presents a comparative study of two data mapping schemes
More informationASSEMBLY PROGRAMMING ON A VIRTUAL COMPUTER
ASSEMBLY PROGRAMMING ON A VIRTUAL COMPUTER Pierre A. von Kaenel Mathematics and Computer Science Department Skidmore College Saratoga Springs, NY 12866 (518) 580-5292 pvonk@skidmore.edu ABSTRACT This paper
More informationAdvanced Computational Software
Advanced Computational Software Scientific Libraries: Part 2 Blue Waters Undergraduate Petascale Education Program May 29 June 10 2011 Outline Quick review Fancy Linear Algebra libraries - ScaLAPACK -PETSc
More information