Optimizing matrix multiplication
Amitabha Banerjee
Present compilers are incapable of fully harnessing the complexity of modern processor architectures, so there is a wide gap between the available and the achieved performance of software. Hence the need for performance tuning. Performance tuning of even the simple matrix multiplication has proved to be a tough and challenging project. In this work, we discuss some of the optimization techniques that gave us substantial improvements.

Test setup: The machine used for the tests had a Pentium IV processor with a 16 KB L1 cache and a 512 KB L2 cache. We started from the short, sweet matrix multiply code below, which we call basic matrix multiply, and compiled it with gcc.

    /* Basic matrix multiply: C = C + A*B for m x m matrices of doubles,
       stored in row-major order. */
    void basic_matrix_multiply(const double *A, const double *B, double *C, int m)
    {
        for (int i = 0; i < m; i++)
            for (int j = 0; j < m; j++)
                for (int k = 0; k < m; k++)
                    C[i*m + j] += A[i*m + k] * B[k*m + j];
    }

The optimization techniques were applied in the following steps:

1) L1 cache blocking optimizations: The idea is to partition the big matrices into uniform blocks and to carry out the matrix multiplication block by block. Details of the algorithm are in [1]. Choosing the optimal block size is important as well: the two blocks being multiplied should fit into the L1 cache, so that self-interference misses [2] are minimized. The optimal block size can therefore be calculated from

    2 * blocksize^2 * wordsize = L1 cache size.

In our case the wordsize is 8 bytes, because the matrices hold doubles. Substituting the values, we calculate the block size to be 32. To verify this, we plot the performance of blocked matrix multiply on a 512 * 512 matrix while varying the block size from 16 to 64 in Figure 1. Note that we choose only multiples of 2 here: the L1 cache has a line size of 4 words, and block sizes that do not divide evenly into cache lines tend to be inefficient. As expected, we observe that the performance peaks at a block size of 32. A sketch of the blocked loop structure is shown below.
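The following is a minimal sketch of an L1-blocked multiply, assuming row-major double matrices and, for brevity, a dimension m that is an exact multiple of the block size (fringe handling is discussed later); it illustrates the loop structure rather than reproducing the exact code we benchmarked, and the names are ours.

    #define BLOCK 32  /* block size chosen so two blocks fit in the 16 KB L1 cache */

    /* Blocked matrix multiply: C = C + A*B, m assumed to be a multiple of BLOCK. */
    void blocked_matrix_multiply(const double *A, const double *B, double *C, int m)
    {
        for (int i0 = 0; i0 < m; i0 += BLOCK)
            for (int j0 = 0; j0 < m; j0 += BLOCK)
                for (int k0 = 0; k0 < m; k0 += BLOCK)
                    /* multiply one pair of BLOCK x BLOCK sub-matrices */
                    for (int i = i0; i < i0 + BLOCK; i++)
                        for (int j = j0; j < j0 + BLOCK; j++) {
                            double sum = C[i*m + j];
                            for (int k = k0; k < k0 + BLOCK; k++)
                                sum += A[i*m + k] * B[k*m + j];
                            C[i*m + j] = sum;
                        }
    }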
2) Compiler flag optimizations: gcc provides various compilation options to improve performance; a detailed list of all the options can be found with the man command. In our case we found three options particularly useful in speeding up the code: -O4, -funroll-loops and -ffast-math. The -ffast-math option is not always recommended, because it relaxes the IEEE floating-point standard and may yield incorrect results in some cases. In the case of matrix multiply, however, it worked well.

[Figure 1: Performance with varying the block factor. Blocked multiply of two 512 * 512 matrices; performance in MFlop/s versus block size.]

3) Software optimizations: There is a set of software optimizations that we considered. Most of them are based on C performance coding guidelines, which are described in detail in [1, 3]. Here we elaborate on the ideas relevant to our context.

a) Software unrolling and register reuse: We unrolled the innermost loop two times. This let us load a common value used in an outer loop into a register. It is important to spot data that is used recurrently and keep it in registers explicitly, so that it does not have to be fetched repeatedly from the L1 or L2 caches (a sketch appears after this list).

b) Avoiding pointer arithmetic on arrays: If c is an array, it is much more efficient to address an element as c[10] instead of *(c + 10). The former can be executed in one LOAD instruction, because LOAD instructions in most processors allow address computation; the latter takes two instructions.

c) Avoiding inequality comparisons: The BRNZ (branch if non-zero) instruction is extremely efficient, so it helps to make use of it as much as possible. It is therefore helpful to convert inequality tests into equality tests; for example, an inequality-based for loop may be replaced by an equality-based do-while loop that counts down to zero.

d) Interleaving add and multiply operations: Add and multiply operations are typically performed in different functional units. Placing a group of multiply operations together reduces efficiency, because instructions may be stalled waiting for the multiplication unit while the addition unit is free. In this context, it is useful to remember that multiplications typically have much higher latencies than additions. It is therefore efficient to have add and multiply operations interleaved.
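To make item (a) concrete, here is a minimal sketch of the kind of inner-loop rewrite we mean, reusing the BLOCK constant from the earlier sketch; the unroll factor of two and the names are ours, and real code would also need a cleanup step when the trip count is odd.

    /* Block multiply kernel with the innermost loop unrolled by two (item (a)).
       The C(i,j) accumulator lives in a local so the compiler can keep it in a
       register instead of re-loading it from cache on every iteration.
       Assumes BLOCK (defined above) is even. */
    static void block_kernel_unrolled(const double *A, const double *B, double *C,
                                      int m, int i0, int j0, int k0)
    {
        for (int i = i0; i < i0 + BLOCK; i++)
            for (int j = j0; j < j0 + BLOCK; j++) {
                double sum = C[i*m + j];
                for (int k = k0; k < k0 + BLOCK; k += 2) {
                    /* adds and multiplies naturally interleave here (item (d)) */
                    sum += A[i*m + k]     * B[k*m + j];
                    sum += A[i*m + k + 1] * B[(k+1)*m + j];
                }
                C[i*m + j] = sum;
            }
    }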
4) Copy optimization: We observed substantial dips in performance for matrices whose sizes are powers of 2, e.g. 128 * 128, 256 * 256 and 512 * 512. This can be explained as follows. In matrix multiplication, a column of data multiplies a row of data. Assume the matrix is stored in row-major format. When the matrix is of size 2^N, the addresses of the elements in a column are also separated by powers of 2. Since the L1 cache is direct mapped, the elements of a column all map to the same few cache lines, which leads to an extraordinary number of cache misses. The remedy is a technique known as copy optimization. Before starting the multiplication, the matrices are copied into fresh memory locations: the first matrix is stored block-wise in row-major format, while the second matrix is stored block-wise in column-major format. This also helps achieve spatial locality. Copy optimization thus reduces the dips in performance at powers of 2 while also improving performance through better spatial locality. Because it involves the additional overhead of copying data, it is not recommended for small matrices, for which the benefit is small but the overhead relatively large. A sketch of the copy step for the second matrix is given after step 5.

5) L2 cache blocking optimizations: The final optimization adds a second level of blocking for the L2 cache. The L2 cache size is 512 KB, so the maximum block size it can hold follows from the same formula, 2 * blocksize^2 * wordsize = L2 cache size, which gives an optimal block size of 181. In practice we found the optimum at a block size of 160.
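As an illustration of step 4, here is a minimal sketch of the copy step for the second matrix, assuming row-major storage, a dimension that is a multiple of BLOCK, and a packed layout of our own choosing (blocks laid out contiguously, column-major inside each block); the function name and layout are ours, not taken from the original code.

    #include <stdlib.h>

    /* Copy optimization for the B matrix (step 4): repack B so that each
       BLOCK x BLOCK block is contiguous in memory and stored column-major,
       matching the access order of the block multiply. Assumes m % BLOCK == 0.
       The caller frees the returned buffer. */
    static double *pack_B_blocked(const double *B, int m)
    {
        double *Bp = malloc((size_t)m * m * sizeof(double));
        if (Bp == NULL)
            return NULL;
        double *dst = Bp;
        for (int k0 = 0; k0 < m; k0 += BLOCK)
            for (int j0 = 0; j0 < m; j0 += BLOCK)
                for (int j = j0; j < j0 + BLOCK; j++)     /* column-major inside block */
                    for (int k = k0; k < k0 + BLOCK; k++)
                        *dst++ = B[k*m + j];
        return Bp;
    }

The block multiply then walks Bp sequentially, so consecutive accesses fall on consecutive cache lines regardless of the original matrix dimension.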
Figure 2 shows the results of our optimizations. Note that the performance of the basic matrix multiply degrades heavily, with huge dips around powers of 2. The factor of improvement for each degree of optimization on a 479 * 479 matrix is indicated in Table 1. We note that substantial improvements are achieved at each step, and that the dips in performance at powers of 2 are much smaller after the copy optimization.

[Figure 2: Performance tuning for matrix multiply. Performance in MFlop/s versus matrix size for the basic multiply, L1 blocking with block size 32 (step 1), compiler optimizations (step 2), software optimizations (step 3), copy optimization (step 4), L2 blocking with block size 160 (step 5), and BLAS.]

    Optimization                          Performance improvement
    Basic matrix multiply                 1.0
    L1 cache blocking optimizations (1)   1.8
    Compiler flag optimizations (2)       3.1
    Software optimizations (3)            3.8
    Copy optimization (4)                 4.6
    L2 cache blocking optimizations (5)   4.7
    BLAS from ATLAS                       9.9

Table 1: Performance improvement on a 479 * 479 matrix

After all these optimizations, we compare our performance with that achieved by the BLAS library supplied by ATLAS, which can be downloaded at [4]. As can be seen in Table 1, BLAS beats our best performance by more than a factor of 2. BLAS has the advantage of taking the complete architecture of the machine into account, such as the number of floating-point and integer units, the number of buses available, and so on. (For reference, a sketch of invoking the corresponding BLAS routine appears below.)
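This is a minimal sketch of how the same update can be invoked through the C interface to BLAS (cblas_dgemm) as shipped with ATLAS; whether the comparison above used this exact interface is an assumption on our part, as is the alpha = beta = 1.0 convention matching C = C + A*B.

    #include <cblas.h>   /* C interface to BLAS, provided by ATLAS */

    /* C = 1.0*A*B + 1.0*C for m x m row-major matrices of doubles,
       i.e. the same update as basic_matrix_multiply above. */
    void blas_matrix_multiply(const double *A, const double *B, double *C, int m)
    {
        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    m, m, m,        /* M, N, K       */
                    1.0, A, m,      /* alpha, A, lda */
                    B, m,           /* B, ldb        */
                    1.0, C, m);     /* beta, C, ldc  */
    }

Linking is typically done against the ATLAS-provided libraries (for example -lcblas -latlas), though the exact flags depend on the installation.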
All of these optimizations are tailored to our machine. We now run the same optimizations on a different machine, this time with a Pentium III processor. The results are plotted in Figure 3. We observe that the improvements are not as large as those obtained on the Pentium IV machine. Some of our choices, such as the L1 and L2 block sizes, were particular to the Pentium IV processor and are not tuned to the Pentium III. Moreover, the gap between the achieved and the available performance for matrix multiply on a Pentium IV is huge, so our optimizations have much more room to pay off. This shows that with faster processors, performance tuning becomes all the more important for harnessing the full capacity of the processor.

[Figure 3: Performance on a Pentium III. Performance in MFlop/s versus matrix size for the basic multiply, blocking with size 32, compiler optimizations, software optimizations (loop unrolling, register reuse, code optimizations), and copy optimization.]

Some more optimizations that have been documented in the literature, and which may lead to marginal further improvements, are as follows:

1) Fringe handling: Since the matrix size is not always a multiple of the block size, some fringes are left over after all the full blocks have been carved out. A recommended optimization is to have fine-tuned code for 1*2, 1*3, 2*3, 2*4, 3*4, etc. fringes and to call it when required (a simpler generic version is sketched after this list).

2) Memory pre-fetch: Many cache misses are intrinsic misses [2], incurred while the L1 and L2 caches are being loaded with data from memory. Pre-fetching can help hide these latencies. Compilers usually do not support these techniques, so the operations have to be written at the assembly level, which makes them difficult to implement.
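To illustrate item 1, a minimal, generic way to handle the fringes (without the specialized 1*2, 2*3, ... kernels the literature recommends) is to finish the leftover rows, columns and depth with the plain triple loop; the function below is our own sketch, not code from the study above.

    /* Generic fringe handling: after the blocked multiply has covered the
       largest multiple of BLOCK, finish the remaining contributions with the
       plain triple loop. mb is the blocked portion, i.e. mb = (m / BLOCK) * BLOCK. */
    static void fringe_multiply(const double *A, const double *B, double *C,
                                int m, int mb)
    {
        for (int i = 0; i < m; i++)
            for (int j = 0; j < m; j++) {
                if (i < mb && j < mb) {
                    /* (i,j) covered by the blocked part for k < mb; add the k fringe */
                    for (int k = mb; k < m; k++)
                        C[i*m + j] += A[i*m + k] * B[k*m + j];
                } else {
                    /* (i,j) lies in a fringe row or column; compute it in full */
                    for (int k = 0; k < m; k++)
                        C[i*m + j] += A[i*m + k] * B[k*m + j];
                }
            }
    }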
Important references:

[1] D. Parello, O. Temam and J.-M. Verdun, "On Increasing Architecture Awareness in Program Optimizations to Bridge the Gap between Peak and Sustained Processor Performance: Matrix-Multiply Revisited," Supercomputing, 2002.
[2] Monica S. Lam, Edward E. Rothberg and Michael E. Wolf, "The Cache Performance and Optimizations of Blocked Algorithms," ASPLOS IV, 1991.
[3] PHiPAC technical report (available online).
[4] ATLAS BLAS library (available for download).