Introduction to Parallel Programming (w/ JAVA)
1 Introduction to Parallel Programming (w/ JAVA) Dec. 21st, 2015 Christian Terboven IT Center, RWTH Aachen University 1
2 Moore's Law: "Cramming More Components onto Integrated Circuits", Gordon Moore, Electronics, 1965 (ftp://download.intel.com/museum/moores_law/articles-Press_Releases/Gordon_Moore_1965_Article.pdf). The number of transistors per cost-effective integrated circuit doubles every N months (12 <= N <= 24). 2
3 There is no free lunch anymore The number of transistors on a chip is still increasing, but no longer the clock speed! Instead, we see many cores per chip. Parallelization has become a necessity to exploit the performance potential of current microprocessors! 3
4 Multi-Core Processor Design. Rule of thumb: reducing voltage by 1% and frequency by 1% reduces the power consumption by 3% and the performance by only 0.66%. Single core: Voltage = 1, Freq = 1, Area = 1, Power = 1, Perf = 1. Two cores at reduced voltage and frequency: Voltage = -15%, Freq = -15%, Area = 2, Power = ~1, Perf = ~1.8. (Based on slides from Shekhar Borkar, Intel Corp.) 4
5 Multi-Core Multi-Socket Compute Node Design. A set of processor cores is organized inside a locality domain with a locally connected memory. The memory of all locality domains is accessible over a shared virtual address space. Other locality domains are accessed over an interconnect; the local domain can be accessed very efficiently without resorting to a network of any kind. (Figure: cores with on-chip caches, connected via an interconnect to the memories of the locality domains.) 5
6 Multi-Core Multi-Socket Compute Node: Cluster Design. A cluster is a system where memory is distributed among nodes: no node other than the local one has direct access to the local memory. (Figure: nodes 1..N, each with CPU socket, cache, local memory (MEM) and network interface (NET IF), connected by a network.) 6
7 What is High Performance Computing (HPC)? From Wikipedia: A supercomputer is a computer at the frontline of current processing capacity, particularly speed of calculation. Applications: simulation, optimization, virtual reality. Historically there were two principles of science: theory (models, differential equations, linear equation systems) and experiment (observation, prototypes, empirical studies/sciences). Computational Science extends them as a third principle. 7
8 Agenda (I hope you are motivated by now): Basic Concepts of Threading; Matrix Multiplication: from Serial to Multi-Core; Amdahl's Law and Efficiency; Matrix Multiplication Reviewed; Basic GPGPU Concepts; Matrix Multiplication: from Serial to Many-Core; Summary; Christmas Exercise. 8
9 Basic Concepts of Threading 9
10 Processes vs. Threads. Applications have to create a team of threads to run on multiple cores simultaneously. Threads share the global (shared) data of the program, typically the data on the heap. Every thread has its own stack, which may contain private data only visible to that thread. Operating systems and/or programming languages offer facilities to create and manage threads in the application; a minimal example follows below. (Figure: a process with a code segment, a data segment, and one stack per thread, including the main() thread.) 10
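A minimal sketch (not from the slides) of how a team of threads is created and joined in Java; the thread count and the printed message are illustrative only:

public class HelloThreads {
    public static void main(String[] args) throws InterruptedException {
        int numThreads = Runtime.getRuntime().availableProcessors();
        Thread[] team = new Thread[numThreads];
        for (int i = 0; i < numThreads; i++) {
            final int id = i;                       // private copy, visible only to this thread
            team[i] = new Thread(new Runnable() {
                public void run() {
                    // local variables live on this thread's own stack
                    System.out.println("Hello from thread " + id);
                }
            });
            team[i].start();                        // begin concurrent execution
        }
        for (Thread t : team) {
            t.join();                               // wait for every thread to finish
        }
    }
}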
11 Shared Memory Parallelization. Memory can be accessed by several threads running on different cores in a multi-socket multi-core system (e.g., one CPU executes a=4 while another reads a in c=3+a). Two approaches: decompose data into distinct chunks to be processed independently (data parallelism), or look for tasks that can be executed simultaneously (task parallelism). 11
12 Parallel Programming in Theory and Practice Parallelism has to be exploited by the programmer Theory Practice 12
13 Example of parallel work Example: 4 cars are produced in parallel Prof. Dr. G. Wellein, Dr. G. Hager, Uni Erlangen-Nürnberg 13
14 Limits of scalability Parts of the manufacturing process can not be parallelized Example: Delivery of components (all workers have to wait) 14
15 Limits of scalability (cont.) Individual steps may take more or less time Load imbalances lead to unused resources 15
16 How are you doing? 16
17 Matrix Multiplication: from Serial to Multi-Core 17
18 Illustration of Matrix Multiplication. Simple thing: C = A times B; the naive implementation requires O(n^3) operations (figure source: Wikipedia). Independent computations exploitable for parallelization: the rows of the result matrix C. 18
19 Illustration of Matrix Multiplication (cont.) Class Matrix.java:

final public class Matrix {
    private final int M;              // number of rows and columns
    private final double[][] data;    // M-by-M array

    // create M-by-M matrix of 0's
    public Matrix(int dim) {
        this.M = dim;
        data = new double[M][M];
    }
    [...]
} // end of class Matrix
19
20 Illustration of Matrix Multiplication (cont.) Class Matrix.java, matrix multiplication implementation:

// return C = A * B
public Matrix times(Matrix B) {
    Matrix A = this;
    if (A.M != B.M) throw new RuntimeException("Illegal matrix dimensions.");
    Matrix C = new Matrix(A.M);
    for (int i = 0; i < M; i++)
        for (int j = 0; j < M; j++)
            for (int k = 0; k < M; k++)
                C.data[i][j] += (A.data[i][k] * B.data[k][j]);
    return C;
}
20
21 Illustration of Matrix Multiplication (cont.) Class Matrix.java, matrix multiplication implementation:

// return C = A * B
public Matrix times(Matrix B) {
    Matrix A = this;
    if (A.M != B.M) throw new RuntimeException("Illegal matrix dimensions.");
    Matrix C = new Matrix(A.M);
    for (int i = 0; i < M; i++)      // independent for every i: parallelize this loop over the threads
        for (int j = 0; j < M; j++)
            for (int k = 0; k < M; k++)
                C.data[i][j] += (A.data[i][k] * B.data[k][j]);
    return C;
}
21
22 Thread-level Parallelization. 1. Determine the number of threads to be used, by querying the number of cores or by user input. 2. Compute iteration chunks for every individual thread: rows per chunk = M / number-of-threads. 3. Create a team of threads and start them: the Java class Thread encapsulates all thread management tasks; provide a suitable function (matrix multiplication on a given chunk) and start each thread via its start() method. 4. Wait for all threads to complete their chunk: call the join() method for each thread. (Figure: the initial thread executes the serial part; the worker threads execute the parallel region.) 22
23 Thread-level Parallelization (cont.) Class Matrix.java, threaded matrix multiplication implementation:

[...]
Thread threads[] = new Thread[num_threads];

// start num_threads threads with their individual tasks
for (int i = 0; i < num_threads; i++) {
    // compute chunk
    int rowsPerChunk = M / num_threads;
    int sRow = i * rowsPerChunk;
    int eRow = [...];
    // initialize task, create thread, start thread
    MultiplicationAsExecutor task = new MultiplicationAsExecutor(sRow, eRow, C.data, A.data, B.data, M);
    threads[i] = new Thread(task);
    threads[i].start();
}

// wait for all threads to finish
for (int i = 0; i < num_threads; i++) {
    [...]                            // exception handling elided
    threads[i].join();
}
[...]
23
24 Thread-level Parallelization (cont.) Class MultiplicationAsExecutor.java:

public class MultiplicationAsExecutor implements Runnable {
    [...]

    // initialization by storing local chunk
    public MultiplicationAsExecutor(int sRow, int eRow, double[][] dc, double[][] da, double[][] db, int dim) {
        this.startRow = sRow;
        this.endRow = eRow;
        this.c = dc;
        this.a = da;
        this.b = db;
        this.dim = dim;
    }

    // perform the actual computation
    public void run() {
        for (int i = startRow; i < endRow; i++)
            for (int j = 0; j < dim; j++)
                for (int k = 0; k < dim; k++)
                    c[i][j] += (a[i][k] * b[k][j]);
    }

    // execute immediately
    public void execute(Runnable r) {
        r.run();
    }
}
24
25 Performance Evaluation: runtime [sec.] bar chart of the 2D matrix multiplication, serial and threaded with 1, 2, 3, 4, and 8 threads. 25
26 Amdahl's Law and Efficiency 26
27 Parallelization Overhead. Overhead introduced by the parallelization: time to start/end/manage threads, time to send/exchange data, time spent in synchronization of threads/processes. With parallelization, the total CPU time increases, the wall time decreases, and the system time stays the same. Efficient parallelization is about minimizing the overhead introduced by the parallelization itself! 27
28 Speedup and Efficiency. Time using 1 CPU: T(1). Time using p CPUs: T(p). Speedup S: S(p) = T(1)/T(p), which measures how much faster the parallel computation is. Efficiency E: E(p) = S(p)/p. Ideal case: T(p) = T(1)/p, S(p) = p, E(p) = 1.0. 28
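A small measurement sketch (not from the slides) showing how speedup and efficiency can be derived from wall-clock timings; the Matrix objects A and B and the threaded variant timesThreaded() are assumptions for illustration:

int p = 8;                                          // number of threads used for the parallel run
long t0 = System.nanoTime();
Matrix cSerial = A.times(B);                        // serial run: T(1)
double t1 = (System.nanoTime() - t0) / 1e9;

t0 = System.nanoTime();
Matrix cParallel = A.timesThreaded(B, p);           // hypothetical threaded variant: T(p)
double tp = (System.nanoTime() - t0) / 1e9;

double speedup    = t1 / tp;                        // S(p) = T(1) / T(p)
double efficiency = speedup / p;                    // E(p) = S(p) / p
System.out.printf("S(%d) = %.2f, E(%d) = %.2f%n", p, speedup, p, efficiency);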
29 Amdahl's Law Illustrated. If 80% (measured in program runtime) of your work can be parallelized and just 20% still runs sequentially, then your speedup will be: 1 processor: time 100%, speedup 1; 2 processors: time 60%, speedup 1.67; 4 processors: time 40%, speedup 2.5; infinitely many processors: time 20%, speedup 5. In general, with serial fraction s and p processors, S(p) = 1 / (s + (1 - s)/p); here s = 0.2, so the speedup can never exceed 1/s = 5. 29
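The table above follows directly from Amdahl's law; a tiny sketch (not from the slides) that reproduces the numbers for a serial fraction of 20%:

double s = 0.2;                                     // serial fraction (20 %)
for (int p : new int[] { 1, 2, 4 }) {
    double time = s + (1.0 - s) / p;                // remaining runtime relative to 1 processor
    System.out.printf("p = %d: time = %.0f%%, speedup = %.2f%n", p, 100 * time, 1.0 / time);
}
// with infinitely many processors only the serial part remains
System.out.printf("p -> infinity: time = %.0f%%, speedup = %.1f%n", 100 * s, 1.0 / s);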
30 Matrix Multiplication Reviewed 30
31 Performance Considerations. The issue: 2D arrays in Java result in bad performance. Better: a 1D array with an index function (see the sketch below). Caches only work well for consecutive memory accesses! Memory hierarchy: the CPU core is fast; caches (on-chip and off-chip) are fast but expensive, thus small [MBs]; main memory is slow but cheap, thus large [GBs]. 31
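A sketch (not the course's exact code) of the 1D layout with an explicit index function: the whole M-by-M matrix becomes one contiguous array instead of an array of separately allocated rows, with element (i, j) stored at index i * M + j:

// multiply two M-by-M matrices stored as 1D arrays in row-major order (index i * M + j)
static void multiply1D(int M, double[] c, double[] a, double[] b) {
    for (int i = 0; i < M; i++)
        for (int j = 0; j < M; j++)
            for (int k = 0; k < M; k++)
                c[i * M + j] += a[i * M + k] * b[k * M + j];
}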
32 Performance Evaluation: runtime [sec.] bar chart comparing 2D serial, 1D serial, and 1D threaded with 1, 2, 3, 4, and 8 threads. 32
33 Basic GPGPU Concepts 33
34 Comparison CPU vs. GPU: similar number of transistors but different design (NVIDIA Corporation 2010). CPU: optimized for low-latency access to cached data sets; control logic for out-of-order and speculative execution. GPU: optimized for data-parallel, throughput computation; architecture tolerant of memory latency; more transistors dedicated to computation. 34
35 GPGPU architecture: NVIDIA's Fermi (NVIDIA Corporation 2010): 3 billion transistors; 448 cores / streaming processors (SP), each with e.g. a floating-point and an integer unit; 14 streaming multiprocessors (SM, MP) with 32 cores per MP; a memory hierarchy. Processing flow: copy data from host to device, execute the kernel, copy data from device to host. 35
36 Comparison CPU vs. GPU: weak memory model. Host and device memory are separate entities connected via the PCI bus; there is no coherence between host and device, so explicit data transfers are needed. Host-directed execution model: (1) copy input data from CPU memory to device memory, (2) execute the device program, (3) copy results from device memory to CPU memory. 36
37 Programming model (NVIDIA Corporation 2010). Definitions: the host (CPU) executes functions; the device (usually a GPU) executes kernels. The parallel portion of the application is executed on the device as a kernel. A kernel is executed as an array of threads: all threads execute the same code and are identified by IDs, which select input/output data and control decisions, e.g.: float x = input[threadID]; float y = func(x); output[threadID] = y; 37
38 Programming model (cont.). Threads are grouped into blocks, and blocks are grouped into a grid; a kernel is executed as a grid of blocks of threads. Blocks and grids can have up to 3 dimensions. (Figure: over time, the host launches kernel 1 and kernel 2; each kernel runs on the device as a grid of blocks, each block consisting of threads indexed in up to three dimensions.) 38
39 Putting it all together: execution model / logical hierarchy / memory model. A core executes one thread (vector lane) and has registers. A streaming multiprocessor (SM) executes a block of threads (gang); synchronization is possible within a block, and shared memory / L1 acts as a hardware/software cache, alongside registers and an instruction cache. The device (GPU) executes a grid (kernel) and provides the device global memory plus L2 cache. The host (CPU) with the host memory is connected via PCIe. Vector, worker, gang mapping is compiler dependent. 39
40 Matrix Multiplication: from Multi- to Many-Core 40
41 GPGPU Parallelization. 1. Create a CUDA kernel well-suited for the GPGPU device: simple-enough and data-parallel code. 2. Set up JCuda and compile your program accordingly: the kernel code has to be compiled to a .ptx file with NVIDIA's compiler. 3. Initialize the JCuda environment: load the driver library, initialize the device (a minimal sketch follows below). 4. Transfer data from host to device: all data necessary on the device. 5. Execute the CUDA kernel: launch the kernel on the device. 6. Transfer results from device to host: all data necessary after the kernel execution on the host. 41
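Steps 2 and 3 are not shown on the following slides; a minimal sketch of the JCuda initialization (step 3), assuming the default device and the kernel file name used below:

import static jcuda.driver.JCudaDriver.*;
import jcuda.driver.*;

// step 3: load the driver library, select a device and create a context
JCudaDriver.setExceptionsEnabled(true);             // turn CUDA error codes into Java exceptions
cuInit(0);
CUdevice device = new CUdevice();
cuDeviceGet(device, 0);                             // first CUDA device in the system
CUcontext context = new CUcontext();
cuCtxCreate(context, 0, device);
// step 2: the .ptx file is produced beforehand, e.g. with
//   nvcc -ptx JCudaMatmulKernel.cu -o JCudaMatmulKernel.ptx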
42 GPGPU Parallelization (cont.) CUDA kernel: C is often close enough to Java.

extern "C"
__global__ void matmult(int dim, double *c, double *a, double *b)
{
    int row = blockDim.y * blockIdx.y + threadIdx.y;
    int col = blockDim.x * blockIdx.x + threadIdx.x;
    if (row >= dim || col >= dim) return;

    double prod = 0;
    int kk;
    for (kk = 0; kk < dim; ++kk) {
        prod += a[row * dim + kk] * b[kk * dim + col];
    }
    c[row * dim + col] = prod;
}
42
43 GPGPU Parallelization (cont.) Class Matrix.java, CUDA matrix multiplication implementation:

[...]
// allocate memory
int size = A.M * A.M * Sizeof.DOUBLE;
CUdeviceptr a_dev = new CUdeviceptr();
CUdeviceptr b_dev = new CUdeviceptr();
CUdeviceptr c_dev = new CUdeviceptr();
cudaMalloc(a_dev, size);
cudaMalloc(b_dev, size);
cudaMalloc(c_dev, size);

// load code
CUmodule module = new CUmodule();
cuModuleLoad(module, "JCudaMatmulKernel.ptx");
CUfunction function = new CUfunction();
cuModuleGetFunction(function, module, "matmult");
43
44 GPGPU Parallelization (cont.) Class Matrix.java, CUDA matrix multiplication implementation:

// copy data
cuMemcpyHtoD(a_dev, Pointer.to(A.data), size);
cuMemcpyHtoD(b_dev, Pointer.to(B.data), size);

// launch kernel
Pointer parameters = Pointer.to(
    Pointer.to(new int[] { A.M }),
    Pointer.to(c_dev),
    Pointer.to(a_dev),
    Pointer.to(b_dev)
);
final int threadsPerDim = 32;
int grids = (int) Math.ceil(((double) A.M) / threadsPerDim);
cuLaunchKernel(function, grids, grids, 1,
               threadsPerDim, threadsPerDim, 1,
               0, null, parameters, null);
cuCtxSynchronize();
[...] // cleanup code omitted for brevity
44
45 Performance Evaluation: runtime [sec.] bar chart comparing 1D serial, 1D threaded (1 and 8 threads), 1D cuda, and 1D cuda (no copy); readable values: 1D cuda about 1.373 s, 1D cuda (no copy) about 0.185 s. 45
46 Performance: Critical Discussion. FLOPS: performance rate (number of floating-point operations per second). Matrix multiplication algorithm: n^2 * 2n = 2n^3 operations (n multiplications and n additions per result element), here n = 1536, i.e. roughly 7.2 * 10^9 double-precision floating-point operations performed. Host: Intel Xeon, 4 cores with Hyper-Threading, 2.4 GHz clock frequency, SSE4.2 with 4 floating-point operations per cycle peak: 4 * 2.4 GHz * 4 = 38.4 GFLOPS peak performance. Our multi-threaded code ran at 2025 MFLOPS, i.e. 5.3% efficiency. 46
47 Performance: Critical Discussion (cont.). GPU: NVIDIA Tesla C-series (Fermi), 448 CUDA cores, 1.15 GHz clock frequency: 448 * 1.15 GHz = 515 GFLOPS peak performance. Our CUDA kernel runs at 5278 MFLOPS (10.1% efficiency). Note on the GPU: the data transfer is the most costly part (kernel execution time incl. data transfer: [...] sec.; excl. data transfer: [...] sec.). The GPU would profit from a larger problem size, or repeated executions of the kernel. 47
48 Performance: Critical Discussion (cont.). Can we do better with the same algorithm? Yes! Matrix multiplication can profit from blocking, that is, the reuse of data in the caches, for both the CPU and the GPU (see the sketch below). And: matrix multiplication is a standard problem, and there are libraries for that: BLAS (dgemm). GPGPU performance with cuBLAS: kernel execution time incl. data transfer: [...] sec.; excl. data transfer: [...] sec. => The cuBLAS kernel itself runs at 310 GFLOPS. 48
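As an illustration of blocking on the CPU side (not the course's implementation), a sketch of a tiled 1D matrix multiplication; the block size BS is a tunable assumption, chosen so that a few BS-by-BS tiles fit into the cache:

// blocked (tiled) multiplication of M-by-M matrices in 1D row-major layout
static void multiplyBlocked(int M, double[] c, double[] a, double[] b) {
    final int BS = 64;                              // assumed block size
    for (int ii = 0; ii < M; ii += BS)
        for (int kk = 0; kk < M; kk += BS)
            for (int jj = 0; jj < M; jj += BS)
                // multiply the (ii,kk) tile of a with the (kk,jj) tile of b into the (ii,jj) tile of c
                for (int i = ii; i < Math.min(ii + BS, M); i++)
                    for (int k = kk; k < Math.min(kk + BS, M); k++) {
                        double aik = a[i * M + k];
                        for (int j = jj; j < Math.min(jj + BS, M); j++)
                            c[i * M + j] += aik * b[k * M + j];
                    }
}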
49 Performance Evaluation: runtime [sec.] bar chart comparing 1D cuda (about 1.373 s), 1D cuda (no copy, about 0.185 s), 1D cublas, and 1D cublas (no copy, about 2.34E-02 s). 49
50 Summary 50
51 Summary. What did we learn? Parallelization has become a necessity to exploit the performance potential of modern multi- and many-core architectures! Efficient programming is about optimizing memory access; efficient parallelization is about minimizing overhead. Shared memory parallelization: work is distributed over threads on separate cores, and threads share global data. Heterogeneous architectures (here: GPGPUs): separate memories require explicit data management; well-suited problems can benefit from special architectures with a large amount of parallelism. Not covered today: cache blocking to achieve even better performance. Not covered today: distributed memory parallelization. 51
52 Lectures SS 2016 Lecture: Performance & correctness analysis of parallel programs Software Lab: Parallel Programming Models for Applications in the Area of High-Performance Computation (HPC) Seminar: Current Topics in High-Performance Computing (HPC) WS 2016/17 Lecture: Introduction to High-Performance Computing Seminar: Current Topics in High-Performance Computing (HPC) 52
53 Christmas Exercise 53
54 Game of Life. The Game of Life is a zero-player game: evolution is determined by the initial state and the game rules. Rules: on a 2D orthogonal grid of cells, every cell has only two possible states, alive (black) or dead (white), and interacts with its neighbours; at each time step, any live cell with fewer than two live neighbours dies, any live cell with two or three live neighbours lives on, any live cell with more than three live neighbours dies, and any dead cell with exactly three live neighbours becomes a live cell (see the sketch below). 54
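A sketch (not a complete solution to the exercise) of how these rules translate into one update step; the grid representation and the fixed dead boundary are assumptions:

// one Game-of-Life time step on an N-by-N grid (true = alive); cells outside the grid count as dead
static boolean[][] step(boolean[][] grid) {
    int n = grid.length;
    boolean[][] next = new boolean[n][n];
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++) {
            int alive = 0;                          // count the live neighbours of cell (i, j)
            for (int di = -1; di <= 1; di++)
                for (int dj = -1; dj <= 1; dj++) {
                    if (di == 0 && dj == 0) continue;
                    int ni = i + di, nj = j + dj;
                    if (ni >= 0 && ni < n && nj >= 0 && nj < n && grid[ni][nj]) alive++;
                }
            // apply the rules: survival with 2 or 3 neighbours, birth with exactly 3
            next[i][j] = grid[i][j] ? (alive == 2 || alive == 3) : (alive == 3);
        }
    return next;
}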
55 There is something to win! A really nice book on HPC from really nice people: one book each for the group with the highest performance in a multithreaded solution, the group with the highest performance in a CUDA-parallel solution, and a random-drawing winner. Requirement: fill out and hand in the questionnaire. Measurements will be done by us on linuxc8 (RWTH Compute Cluster). 55
56 Survey on productivity when developing in the HPC domain. General survey (valid once, until [...]). It helps our research activities, so please take part! Fill it out at the end of the Christmas exercise and relate your answers to the Christmas exercise. Questions include: factors influencing the programming effort? required programming effort? number of lines of code written? achieved performance? To fill out the survey easily and accurately, please record the above data already while developing. ProductivityHPCStudents/ 56
57 Questions? 57
