An Overview of High-Performance Computing and Challenges for the Future


The 2006 International Conference on Computational Science and its Applications (ICCSA 2006)
An Overview of High-Performance Computing and Challenges for the Future
Jack Dongarra, University of Tennessee and Oak Ridge National Laboratory, 5/10/2006

Overview
- Look at the current state of high-performance computing: past, present, and a look ahead
- Potential gains from exploiting lower-precision devices: GPUs, Cell, SSE2, AltiVec
- New performance evaluation tools: HPCS, HPC Challenge

H. Meuer, H. Simon, E. Strohmaier, & J. Dongarra
- Listing of the 500 most powerful computers in the world
- Yardstick: Rmax from LINPACK MPP (Ax = b, dense problem), TPP performance
- Updated twice a year: at SC xy in the States in November and at the meeting in Germany in June
- All data available from www.top500.org

Performance Development
(chart: TOP500 performance from 100 Mflop/s to 1 Pflop/s, 1993 onward)
- SUM of all 500 systems: from 1.167 TF/s to 2.3 PF/s
- N=1: from 59.7 GF/s (Fujitsu 'NWT', NAL) to 280.6 TF/s (IBM BlueGene/L), with Intel ASCI Red (Sandia), IBM ASCI White (LLNL), and the NEC Earth Simulator along the way
- N=500: from 0.4 GF/s to 1.646 TF/s; "My Laptop" shown for comparison

Architecture/Systems Continuum
(chart: share of the TOP500 in each category, 0-100%, Jun-93 to Jun-04)
- Tightly coupled, custom: custom processor with custom interconnect (Cray X1, NEC SX-8, IBM Regatta, IBM Blue Gene/L). Best processor performance for codes that are not cache friendly; good communication performance; simpler programming model; most expensive.
- Hybrid: commodity processor with custom interconnect (SGI Altix with Intel Itanium 2; Cray XT3, XD1 with AMD Opteron; NEC TX7; IBM eServer; Dawning). Good communication performance; good scalability.
- Loosely coupled, commodity: commodity processor with commodity interconnect; clusters of Pentium, Itanium, Opteron, or Alpha with GigE, Infiniband, Myrinet, or Quadrics. Best price/performance (for codes that work well with caches and are latency tolerant); more complex programming model.

Processor Types
(chart: number of TOP500 systems by processor family, 1993-2004: SIMD, Sparc, Vector, MIPS, Alpha, HP, AMD, IBM Power, Intel)
Intel + IBM Power PC + AMD = 91%

Interconnects / Systems
(chart: number of TOP500 systems by interconnect, 1993-2004: Others, Cray Interconnect, SP Switch, Crossbar, Quadrics, Infiniband, Myrinet (101), Gigabit Ethernet (249), N/A)
GigE + Myrinet = 70%

Parallelism in the Top500
(chart: number of systems by processor count, 1993-2004, in bins from 1 processor up to 64k-128k processors)

26th List: The TOP10
Columns: rank, computer (manufacturer), Rmax [TF/s], installation site and country, type, #proc
1. BlueGene/L eServer Blue Gene (IBM) - 280.6 - DOE Lawrence Livermore Nat Lab, USA - custom - 131072
2. BGW eServer Blue Gene (IBM) - 91.29 - IBM Thomas Watson Research, USA - custom - 40960
3. ASC Purple Power5 p575 (IBM) - 63.39 - DOE Lawrence Livermore Nat Lab, USA - custom - 10240
4. Columbia Altix, Itanium/Infiniband, 2004 (SGI) - 51.87 - NASA Ames, USA - hybrid - 10160
5. Thunderbird, Pentium/Infiniband (Dell) - 38.27 - DOE Sandia Nat Lab, USA - commod - 8000
6. Red Storm, Cray XT3 AMD (Cray) - 36.19 - DOE Sandia Nat Lab, USA - hybrid - 10880
7. Earth-Simulator SX-6, 2002 (NEC) - 35.86 - Earth Simulator Center, Japan - custom - 5120
8. MareNostrum, PPC 970/Myrinet (IBM) - 27.91 - Barcelona Supercomputer Center, Spain - commod - 4800
9. eServer Blue Gene (IBM) - 27.45 - ASTRON, University Groningen, Netherlands - custom - 12288
10. Jaguar, Cray XT3 AMD (Cray) - 20.53 - DOE Oak Ridge Nat Lab, USA - hybrid - 5200

Performance Projection
(chart: extrapolation of the TOP500 SUM, N=1, and N=500 trend lines from 1993 to 2015 on a scale from 100 Mflop/s to 1 Eflop/s; DARPA HPCS systems and "My Laptop" marked for reference)

A PetaFlop Computer by the End of the Decade
10 companies working on building a petaflop system by the end of the decade:
- Cray, IBM, Sun
- Chinese companies: Dawning, Galactic, Lenovo
- Japanese companies: Hitachi, NEC, Fujitsu (the Life Simulator, 10 Pflop/s)
- Bull

Increasing the number of gates into a tight knot and decreasing the cycle time of the processor
(chart: power density in W/cm^2, from 1 to 10,000, versus time from 1970 to 2010, for the Intel 4004, 8008, 8080, 8085, 8086, 286, 386, 486, and Pentium, with reference levels for a hot plate, a nuclear reactor, a rocket nozzle, and the Sun's surface; the Pentium is at 90 W)
Source: Intel Developer Forum, Spring 2004, Pat Gelsinger
There is a square relationship between the cycle time and power.

Lower Voltage, Increase Clock Rate & Transistor Density
(diagram: a single core with cache evolving into chips with 2, 4, and then many cores, C1-C4, sharing caches on one die)
- We have seen an increasing number of gates on a chip and increasing clock speed.
- Heat is becoming an unmanageable problem; Intel processors exceed 100 Watts.
- We will not see dramatic increases in clock speeds in the future; however, the number of gates on a chip will continue to increase.

Free Lunch for Traditional Software
(chart: operations per second for serial code over time)
- The free lunch: code just runs twice as fast every 18 months with no change (3 GHz, 6 GHz, 12 GHz, 24 GHz on 1 core).
- No free lunch for traditional software: without highly concurrent software it won't get any faster (3 GHz with 2, 4, and 8 cores); the additional operations per second are available only if the code can take advantage of concurrency.
From Craig Mundie, Microsoft

CPU Desktop Trends 2004-2010: Change is Coming
- Relative processing power will continue to double every 18 months
- 256 logical processors per chip in late 2010
(chart: hardware threads per chip and cores per processor chip, 2004-2010, scale 0-300)

Commodity Processor Trends: Bandwidth/Latency is the Critical Issue, not FLOPS
- Single-chip floating-point performance: annual increase 59%; typically 4 GFLOP/s in 2006, 32 GFLOP/s in 2010, 3300 GFLOP/s in 2020
- Front-side bus bandwidth: annual increase 23%; typically 1 GWord/s (0.25 word/flop) in 2006, 3.5 GWord/s (0.11 word/flop) in 2010, 27 GWord/s (0.008 word/flop) in 2020
- DRAM latency: annual improvement 5.5%; typically 70 ns (280 FP ops, 70 loads) in 2006, 50 ns (1600 FP ops, 170 loads) in 2010, 28 ns (94,000 FP ops, 780 loads) in 2020
Got Bandwidth?
Source: Getting Up to Speed: The Future of Supercomputing, National Research Council, 222 pages, 2004, National Academies Press, Washington DC, ISBN 0-309-09502-6.
A small worked example of where the derived figures come from is shown below.
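The balance and latency figures in the table are simple products and ratios of the quoted rates. A tiny illustrative calculation, using the 2006 column (the variable names are ours; the arithmetic is the only content):

```c
#include <stdio.h>

int main(void)
{
    double gflops     = 4.0;   /* single-chip GFLOP/s, 2006 column */
    double gwords     = 1.0;   /* front-side bus GWord/s, 2006     */
    double latency_ns = 70.0;  /* DRAM latency in ns, 2006         */

    /* Words delivered per floating-point operation. */
    printf("balance           = %.2f word/flop\n", gwords / gflops);

    /* FP ops that could have issued while waiting for one DRAM access
       (GFLOP/s x ns: the 1e9 factors cancel). */
    printf("FP ops per access = %.0f\n", gflops * latency_ns);

    /* Loads the bus could deliver during one DRAM latency period. */
    printf("loads per latency = %.0f\n", gwords * latency_ns);
    return 0;
}
```

This prints 0.25 word/flop, 280 FP ops, and 70 loads, matching the 2006 row; the 2010 and 2020 entries follow the same way (the slide rounds its figures).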

That Was the Good News
- The bad news: the effect of the hardware change on the existing software base.
- We must rethink the design of our software; this is another disruptive technology.
- Rethink and rewrite the applications, algorithms, and software.

LAPACK and ScaLAPACK
Numerical libraries for linear algebra:
- LAPACK (late 1980s): sequential machines and SMPs
- ScaLAPACK (early 1990s): message-passing systems

Right-Looking LU Factorization (LAPACK)
Each blocked step applies DGETF2, DLASWP, DLASWP, DTRSM, DGEMM in turn:
- DGETF2: unblocked LU of the current panel
- DLASWP: row swaps, applied to the blocks left and right of the panel
- DTRSM: triangular solve with many right-hand sides
- DGEMM: matrix-matrix multiply that updates the trailing submatrix

Steps in the LAPACK LU
- DGETF2 (LAPACK), DLASWP (LAPACK), DLASWP (LAPACK), DTRSM (BLAS), DGEMM (BLAS)
A sketch of one such blocked step is shown below.
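This is a minimal sketch of one blocked right-looking step written against CBLAS and LAPACKE, not LAPACK's actual DGETRF source: the function name lu_blocked_step is ours, the panel is factored with LAPACKE_dgetrf rather than a dedicated DGETF2 call, and the swaps on the columns to the left of the panel are omitted for brevity. Column-major storage is assumed, with element (i, j) at A[j*lda + i].

```c
#include <cblas.h>
#include <lapacke.h>

/* One blocked right-looking LU step at column k with block size nb.
   Assumes k + nb <= min(m, n) so the panel is full width. */
void lu_blocked_step(int m, int n, int k, int nb, double *A, int lda,
                     lapack_int *ipiv)
{
    int mk = m - k;                                /* panel height          */
    double *panel = &A[(size_t)k * lda + k];       /* A(k:m-1, k:k+nb-1)    */

    /* 1. Panel factorization (the DGETF2 step of the slide). */
    LAPACKE_dgetrf(LAPACK_COL_MAJOR, mk, nb, panel, lda, &ipiv[k]);

    /* Convert panel-local 1-based pivots to global row numbers. */
    for (int i = k; i < k + nb; ++i) ipiv[i] += k;

    /* 2. DLASWP: apply the row interchanges to the trailing columns.
       (A full implementation also swaps the columns left of the panel.) */
    LAPACKE_dlaswp(LAPACK_COL_MAJOR, n - k - nb,
                   &A[(size_t)(k + nb) * lda], lda, k + 1, k + nb, ipiv, 1);

    /* 3. DTRSM: solve L11 * U12 = A12 for the block row U12. */
    cblas_dtrsm(CblasColMajor, CblasLeft, CblasLower, CblasNoTrans, CblasUnit,
                nb, n - k - nb, 1.0, panel, lda,
                &A[(size_t)(k + nb) * lda + k], lda);

    /* 4. DGEMM: trailing update A22 = A22 - L21 * U12. */
    cblas_dgemm(CblasColMajor, CblasNoTrans, CblasNoTrans,
                mk - nb, n - k - nb, nb, -1.0,
                &A[(size_t)k * lda + k + nb], lda,          /* L21 */
                &A[(size_t)(k + nb) * lda + k], lda,        /* U12 */
                1.0, &A[(size_t)(k + nb) * lda + k + nb], lda);
}
```

Looping this step for k = 0, nb, 2*nb, ... gives the right-looking factorization of the slide; the timing profiles that follow break the run time down by these same five calls.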

LU Timing Profile (LAPACK with threaded BLAS)
(chart: time for each component, DGETF2, DLASWP(L), DLASWP(R), DTRSM, DGEMM; 1D decomposition, SGI Origin)

LU Timing Profile (LAPACK with threaded BLAS versus a threaded LU, no lookahead)
(chart: time for each component, same five routines, 1D decomposition, SGI Origin)
In this case the performance difference comes from parallelizing the row exchanges (DLASWP) and using threads inside the LU algorithm.

Right-Looking LU Factorization
(diagram of the factorization sweeping across the matrix)

Right-Looking LU with a Lookahead
(diagram: the next panel is factored while the previous trailing-matrix update is still in progress)

Pivot Rearrangement and Lookahead
(chart: 4-processor runs with lookahead depth 0, 1, 2, and 3)

Pivot Rearrangement and Lookahead
(chart: 16-processor SMP runs)

Motivated by the PlayStation 3
- The PlayStation 3's CPU is based on a chip codenamed "Cell".
- Each Cell contains 8 APUs. An APU is a self-contained vector processor which acts independently from the others, with 4 floating-point units capable of a total of 32 Gflop/s (8 Gflop/s each).
- 256 Gflop/s peak in 32-bit floating point; 64-bit floating point runs at 25 Gflop/s.
- IEEE format, but the 32-bit datapaths are "lite": they only round toward zero and set overflows to the largest representable number.
- According to IBM, the SPE's double-precision unit is fully IEEE 854 compliant.

GPU Performance
Columns: vendor and model, release year, 32-bit performance
- NVIDIA 6800 Ultra, 2004, 60 GFLOPS
- NVIDIA 7800 GTX, 200 GFLOPS
- ATI X1900 XTX, 2006, 400 GFLOPS
64-bit performance must be emulated in software.

Idea: Something Like This
- Exploit 32-bit floating point as much as possible, especially for the bulk of the computation.
- Correct or update the solution with selective use of 64-bit floating point to provide a refined result.
- Intuitively: compute a 32-bit result, calculate a correction to the 32-bit result using selected higher precision, and perform the update of the 32-bit result with the correction using high precision.

32- and 64-Bit Floating Point Arithmetic
Iterative refinement for dense systems can work this way:
- Solve Ax = b in lower precision and save the factorization (P*A = L*U); O(n^3)
- Compute the residual in higher precision, r = b - A*x; O(n^2) (requires the original data A, stored in high precision)
- Solve Az = r using the lower-precision factorization; O(n^2)
- Update the solution x = x + z using high precision; O(n)
- Iterate until converged.
Wilkinson, Moler, Stewart, and Higham provide error bounds for SP floating-point results when using DP floating point. Using this approach we can show that the solution is computed to 64-bit floating-point accuracy.
- Requires extra storage, 1.5 times the normal total
- O(n^3) work is done in lower precision; only O(n^2) work is done in high precision
- Problems arise if the matrix is ill-conditioned in single precision (condition number around 10^8)
A minimal code sketch of the loop follows.
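The sketch below follows the steps above, assuming a CBLAS/LAPACKE installation; the function name mixed_precision_solve and the fixed iteration count are ours (a real solver would test the residual norm, as LAPACK's dsgesv driver does when it implements this idea).

```c
#include <cblas.h>
#include <lapacke.h>
#include <stdlib.h>

/* Solve Ax = b: factor and solve in single precision (O(n^3)),
   compute residuals and updates in double precision (O(n^2)).
   A and b are column-major double-precision inputs of order n. */
int mixed_precision_solve(int n, const double *A, const double *b,
                          double *x, int max_iter)
{
    float      *As   = malloc(sizeof(float)  * (size_t)n * n); /* SP copy of A */
    float      *ws   = malloc(sizeof(float)  * n);             /* SP rhs/work  */
    double     *r    = malloc(sizeof(double) * n);             /* DP residual  */
    lapack_int *ipiv = malloc(sizeof(lapack_int) * n);

    for (size_t i = 0; i < (size_t)n * n; ++i) As[i] = (float)A[i];

    /* O(n^3) in single precision: LU factorization of the SP copy. */
    LAPACKE_sgetrf(LAPACK_COL_MAJOR, n, n, As, n, ipiv);

    /* Initial solve in single precision. */
    for (int i = 0; i < n; ++i) ws[i] = (float)b[i];
    LAPACKE_sgetrs(LAPACK_COL_MAJOR, 'N', n, 1, As, n, ipiv, ws, n);
    for (int i = 0; i < n; ++i) x[i] = (double)ws[i];

    for (int it = 0; it < max_iter; ++it) {
        /* r = b - A*x in double precision, with the original A. */
        for (int i = 0; i < n; ++i) r[i] = b[i];
        cblas_dgemv(CblasColMajor, CblasNoTrans, n, n, -1.0, A, n,
                    x, 1, 1.0, r, 1);

        /* Solve A z = r with the single-precision factors. */
        for (int i = 0; i < n; ++i) ws[i] = (float)r[i];
        LAPACKE_sgetrs(LAPACK_COL_MAJOR, 'N', n, 1, As, n, ipiv, ws, n);

        /* Update the solution in double precision: x = x + z. */
        for (int i = 0; i < n; ++i) x[i] += (double)ws[i];
    }

    free(As); free(ws); free(r); free(ipiv);
    return 0;
}
```

The extra storage is the single-precision copy of A, half the size of the original, which is where the 1.5x figure on the slide comes from.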

On the Way to Understanding How to Use the Cell, Something Else Happened
- We realized we have a similar situation on our commodity processors: SP is 2x as fast as DP on many systems.
- The Intel Pentium and AMD Opteron have SSE2: 2 flops/cycle in DP, 4 flops/cycle in SP.
- The IBM PowerPC has AltiVec: 8 flops/cycle in SP, 4 flops/cycle in DP (no DP on AltiVec itself).

Performance of single-precision and double-precision matrix multiply (SGEMM and DGEMM) with n = m = k = 1000:
Columns: processor and BLAS library, SGEMM (GFlop/s), DGEMM (GFlop/s), speedup SP/DP
- Pentium III Katmai (0.6 GHz), Goto BLAS: 0.98, 0.46, 2.13
- Pentium III CopperMine (0.9 GHz), Goto BLAS: 1.59, 0.79, 2.01
- Pentium Xeon Northwood (2.4 GHz), Goto BLAS: 7.68, 3.88, 1.98
- Pentium Xeon Prescott (3.2 GHz), Goto BLAS: 10.54, 5.15, 2.05
- Pentium IV Prescott (3.4 GHz), Goto BLAS: 11.09, 5.61, 1.98
- AMD Opteron 240 (1.4 GHz), Goto BLAS: 4.89, 2.48, 1.97
- PowerPC G5 (2.7 GHz), AltiVec: 18.28, 9.98, 1.83

Speedups (ratio of times):
Columns: architecture (BLAS), n, DGEMM/SGEMM, DP Solve/SP Solve, DP Solve/Iter Ref, # iter
- Intel Pentium IV-M Northwood (Goto): 4000, 2.02, 1.98, 1.54, 5
- Intel Pentium III Katmai (Goto): 3000, 2.12, 2.11, 1.79, 4
- Intel Pentium III Coppermine (Goto): 3500, 2.10, 2.24, 1.92, 4
- Intel Pentium IV Prescott (Goto): 4000, 2.00, 1.86, 1.57, 5
- AMD Opteron (Goto): 4000, 1.98, 1.93, 1.53, 5
- Sun UltraSPARC IIe (Sunperf): 3000, 1.45, 1.79, 1.58, 4
- Cray X1 (libsci): 5000, 2.29, 2.05, 1.24, 5
- IBM Power PC G5, 2.7 GHz (VecLib): 4000, 1.68, 1.57, 1.32, 7
- Compaq Alpha EV6 (CXML): 3000, 0.99, 1.08, 1.01, 4
- IBM SP Power3 (ESSL): 3000, 1.03, 1.13, 1.00, 3
- SGI Octane (ATLAS): 2000, 1.08, 1.13, 0.91, 4

Columns: architecture (BLAS-MPI), # procs, n, DP Solve/SP Solve, DP Solve/Iter Ref, # iter
- AMD Opteron (Goto, OpenMPI, MX): 32, 22627, 1.85, 1.79, 6
- AMD Opteron (Goto, OpenMPI, MX): 64, 32000, 1.90, 1.83, 6
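A minimal sketch of how the SP/DP ratios in the first table can be measured, assuming any CBLAS implementation is linked (the slide's numbers came from Goto BLAS and vendor libraries); a real measurement would repeat the calls and take the best or average time.

```c
#include <cblas.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static double wall(void)                 /* wall-clock seconds */
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + 1e-9 * ts.tv_nsec;
}

int main(void)
{
    const int n = 1000;                  /* n = m = k = 1000 as on the slide */
    double *Ad = calloc((size_t)n * n, sizeof(double));
    double *Bd = calloc((size_t)n * n, sizeof(double));
    double *Cd = calloc((size_t)n * n, sizeof(double));
    float  *As = calloc((size_t)n * n, sizeof(float));
    float  *Bs = calloc((size_t)n * n, sizeof(float));
    float  *Cs = calloc((size_t)n * n, sizeof(float));

    double t0 = wall();
    cblas_dgemm(CblasColMajor, CblasNoTrans, CblasNoTrans, n, n, n,
                1.0, Ad, n, Bd, n, 0.0, Cd, n);
    double tdp = wall() - t0;

    t0 = wall();
    cblas_sgemm(CblasColMajor, CblasNoTrans, CblasNoTrans, n, n, n,
                1.0f, As, n, Bs, n, 0.0f, Cs, n);
    double tsp = wall() - t0;

    double flops = 2.0 * n * n * n;      /* flop count of one matrix multiply */
    printf("DGEMM %.2f GFlop/s, SGEMM %.2f GFlop/s, SP/DP speedup %.2f\n",
           flops / tdp / 1e9, flops / tsp / 1e9, tdp / tsp);

    free(Ad); free(Bd); free(Cd); free(As); free(Bs); free(Cs);
    return 0;
}
```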

Refinement Technique Using Single/Double Precision
- Linear systems: LU (dense and sparse), Cholesky, QR factorization.
- Eigenvalue problems: symmetric eigenvalue problem, SVD. Same idea as with dense systems: reduce to tridiagonal/bidiagonal form in lower precision, retain the original data, and improve with an iterative technique, using the lower precision to solve systems and the higher precision to calculate residuals with the original data; O(n^2) per value/vector.
- Iterative linear systems: relaxed GMRES, inner/outer scheme.
Details are in a LAPACK Working Note.

Motivation for Additional Benchmarks
The Linpack benchmark:
- The good: one number; simple to define and easy to rank; allows the problem size to change with the machine and over time.
- The bad: emphasizes only peak CPU speed and number of CPUs; does not stress local bandwidth; does not stress the network; does not test gather/scatter; ignores Amdahl's law (only does weak scaling).
- The ugly: benchmarketeering hype.
From the Linpack benchmark and the Top500: no single number can reflect overall performance. Clearly we need something more than Linpack, hence the HPC Challenge benchmark, a test suite that stresses not only the processors but also the memory system and the interconnect. The real utility of the HPCC benchmarks is that architectures can be described with a wider range of metrics than just Flop/s from Linpack.

DARPA's High Productivity Computing Systems
(program roadmap diagram: Phase 1, concept study, $10M; Phase 2 (2003-), advanced design & prototypes, $50M, now at its half-way point with a technology assessment review; Phase 3 (2006-2010), full-scale development, $100M; the vendors are to deliver petascale systems, and the productivity team develops a test evaluation framework, a new evaluation framework, and a validated procurement evaluation methodology)

HPC Challenge Benchmark: Goals
- Stress the CPU, memory system, and interconnect.
- Complement the Top500 list.
- Provide benchmarks that bound the performance of many real applications as a function of memory access characteristics, e.g., spatial and temporal locality.
- Allow for optimizations, and record the effort needed for tuning; the base run requires only MPI and BLAS.
- Provide verification of results; archive results.

Tests on Single Processor and System
- Local: only a single processor is performing computations.
- Embarrassingly parallel: each processor in the entire system is performing computations, but they do not communicate with each other explicitly.
- Global: all processors in the system are performing computations and they explicitly communicate with each other.

HPC Challenge Benchmark
Consists of basically 7 benchmarks; think of it as a framework or harness for adding benchmarks of interest.
1. HPL (LINPACK): MPI global (Ax = b)
2. STREAM: local (single CPU) and embarrassingly parallel
3. PTRANS (A <- A + B^T): MPI global
4. RandomAccess (random integer read, update, and write): local (single CPU), embarrassingly parallel, and MPI global
5. Bandwidth and latency: MPI, measured between pairs of processes (proc i, proc k)
6. FFT: global, single CPU, and embarrassingly parallel
7. Matrix multiply: single CPU and embarrassingly parallel
Miniature versions of two of these kernels are sketched below.
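For flavor, here are miniature versions of the STREAM triad and the RandomAccess update in C. They are simplified illustrations, not the official HPCC code: the function names are ours and a plain linear congruential generator stands in for the benchmark's random-number stream. The contrast between the two is the point: one streams through memory with perfect spatial locality, the other scatters 64-bit updates with essentially none.

```c
#include <stddef.h>
#include <stdint.h>

/* STREAM triad: a[i] = b[i] + scalar * c[i]; stresses memory bandwidth. */
void stream_triad(size_t n, double *a, const double *b, const double *c,
                  double scalar)
{
    for (size_t i = 0; i < n; ++i)
        a[i] = b[i] + scalar * c[i];
}

/* RandomAccess-style GUPS loop: XOR-update pseudo-random table entries;
   stresses memory latency (and, globally, the network) rather than bandwidth. */
void random_access(unsigned log2_size, uint64_t *table, size_t n_updates)
{
    uint64_t ran  = 1;
    uint64_t mask = (1ULL << log2_size) - 1;
    for (size_t i = 0; i < n_updates; ++i) {
        ran = ran * 6364136223846793005ULL + 1442695040888963407ULL;
        table[ran & mask] ^= ran;
    }
}
```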

Computational Resources and HPC Challenge Benchmarks
(diagram: three computational resources, CPU computational speed, memory bandwidth, and node interconnect bandwidth)
(same diagram annotated with the benchmarks: CPU computational speed is measured by HPL and Matrix Multiply, memory bandwidth by STREAM, and node interconnect bandwidth by random and natural ring bandwidth and latency)

Memory Access Patterns
(diagram: applications placed on a temporal-locality versus spatial-locality plane; computational fluid dynamics applications sit at high spatial locality, the traveling salesperson problem at low locality in both dimensions, and radar cross section and digital signal processing toward high temporal locality)
(the same diagram with the HPCC benchmarks overlaid: STREAM (EP & SP) and PTRANS (G) at high spatial and low temporal locality, RandomAccess (G, EP, & SP) at low locality in both dimensions, and HPL Linpack (G), Matrix Multiply (EP & SP), and FFT (G, EP, & SP) toward high temporal locality)

HPC Challenge web site: http://icl.cs.utk.edu/hpcc/
(screenshots of the HPCC web pages and result listings)

Summary of Current Unmet Needs
- Performance / portability
- Fault tolerance
- Memory bandwidth/latency
- Adaptability: some degree of autonomy to self-optimize, test, or monitor; able to change mode of operation, static or dynamic
- Better programming models: global shared address space, visible locality
- Maybe coming soon (incremental, yet offering real benefits): Global Address Space (GAS) languages (UPC, Co-Array Fortran, Titanium, Chapel, X10, Fortress); minor extensions to existing languages; more convenient than MPI; performance transparency via explicit remote memory references
What's needed is a long-term, balanced investment in hardware, software, algorithms, and applications in the HPC ecosystem.

The Real Crisis With HPC Is With the Software
- Our ability to configure a hardware system capable of 1 PetaFlop (10^15 ops/s) is without question just a matter of time and money.
- A supercomputer application and its software are usually much longer-lived than the hardware: hardware lasts typically five years at most, applications 20-30 years.
- Fortran and C are the main programming models (still!).
- The REAL challenge is software: programming hasn't changed since the 70s; it takes a huge manpower investment; MPI, is that all there is?; it often requires hero programming.
- Investment in the entire software stack is required (OS, libraries, etc.).
- Software is a major cost component of modern technologies, yet the tradition in HPC system procurement is to assume that the software is free. SOFTWARE COSTS (over and over).
What's needed is a long-term, balanced investment in the HPC ecosystem: hardware, software, algorithms, and applications.

Collaborators / Support
- Top500 team: Erich Strohmaier (NERSC), Hans Meuer (Mannheim), Horst Simon (NERSC)
- Sca/LAPACK: Julien Langou, Jakub Kurzak, Piotr Luszczek, Stan Tomov, Julie Langou