HPC Practical Course Part 4.1. Open Computing Language (OpenCL)


1 HPC Practical Course Part 4.1 Open Computing Language (OpenCL) V. Akishina, I. Kisel, I. Kulakov, M. Zyzak Goethe University of Frankfurt am Main 11 June 2014

2 Computer Architectures
- Single Instruction Single Data
- Single Instruction Multiple Data
- Multiple Instruction Multiple Data
Taken from:

3 OpenCL Architecture
OpenCL is a framework for writing programs that execute across heterogeneous platforms consisting of central processing units (CPUs), graphics processing units (GPUs), and other processors.
Platform Layer API
- A hardware abstraction layer over diverse computational resources
- Query, select and initialise compute devices
- Create compute contexts and work-queues
Runtime API
- Execute compute kernels
- Manage scheduling, compute, and memory resources
Language Specification
- C-based cross-platform programming interface
- Subset of ISO C99 with language extensions - familiar to developers
- Defined numerical accuracy - IEEE 754 rounding with specified maximum error
- Online or offline compilation and build of compute kernel executables
- Rich set of built-in functions
Practicality, flexibility and retargetability

4 Programming Model
Host
- runs the main program
- runs the compilation
- distributes tasks between compute devices
- tasks are distributed via queues
Compute devices
- example: CPU or GPU
- consist of one or more Compute units
Compute units
- example: a set of cores of a CPU, a streaming multiprocessor of a GPU
- consist of one or more Processing elements
Processing elements
- example: one core of a CPU, one core of a GPU

5 Memory Model
Global memory space - the largest memory space available to the device.
Each compute unit on the device has a local memory, which is typically on the processor die and therefore has much higher bandwidth and lower latency than global memory. Local memory can be read and written by any work-item in a work-group, and thus allows for communication between the work-items of a work-group.
Additionally, attached to each processing element is a private memory, which is typically not used directly by programmers, but is used to hold data for each work-item that does not fit in the processing element's registers.
[Diagram: memory hierarchy - Host Memory on the Host; Global/Constant Memory on the Compute Device; Local Memory per Work-group; Private Memory per Work-Item]
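The address-space qualifiers of OpenCL C map directly onto this memory model. A minimal illustrative kernel (an assumption, not taken from the slides) using global, local and private memory:

__kernel void scale(__global const float* in,   // global memory: visible to all work-items
                    __global float* out,
                    __local  float* tile)       // local memory: shared within one work-group
{
    int gid = get_global_id(0);
    int lid = get_local_id(0);
    float x = in[gid];                          // private variable: held in registers/private memory
    tile[lid] = x;                              // stage the value in local memory
    barrier(CLK_LOCAL_MEM_FENCE);               // synchronise the work-group before re-reading it
    out[gid] = 2.0f * tile[lid];
}

The local buffer is sized from the host, e.g. clSetKernelArg(kernel, 2, local_size * sizeof(float), NULL).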

6 Execution Model
An OpenCL application runs on a host which submits work to the compute devices.
- Context: the environment within which work-items execute; includes devices, their memories and command queues
- Program: collection of kernels and other functions (analogous to a dynamic library)
- Kernel: the code for a work-item; basically a C function
- Work-item: the basic unit of work on an OpenCL device
- Each processing element works on one work-item
- Work-items are combined into work-groups
- Each work-group is assigned to a compute unit
Applications queue kernel execution
- Executed in-order or out-of-order
[Diagram: the global range is split into work-groups; work-group size x number of work-groups = global size]
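As a sketch of the indexing functions behind this model (a 1D range assumed, not from the slides), each work-item can query its position in the global range and inside its work-group:

__kernel void where_am_i(__global int* out)
{
    size_t gid   = get_global_id(0);        // index within the whole global range
    size_t lid   = get_local_id(0);         // index within the current work-group
    size_t group = get_group_id(0);         // which work-group this work-item belongs to
    size_t lsize = get_local_size(0);       // work-group size
    out[gid] = (int)(group * lsize + lid);  // reconstructs gid for a 1D range
}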

7 Host-program structure
[Diagram: an OpenCL Context holding Programs, Kernels, Memory Objects (Images, Buffers) and Command Queues (in-order and out-of-order) for the CPU and GPU devices; the dp_mul kernel source is compiled into a CPU binary and a GPU binary, and its arguments arg[0], arg[1], arg[2] are set to values before enqueueing]
Kernel source shown in the diagram:
kernel void dp_mul(global const float *a, global const float *b, global float *c) { int id = get_global_id(0); c[id] = a[id] * b[id]; }
Host-program steps:
- Get a platform
- Get a device
- Set a context
- Create a command-queue
- Create memory buffers
- Write the buffers
- Create a program
- Compile the program
- Create a kernel
- Set the kernel arguments
- Call the kernel
- Read the buffer
- Clean the memory

8 Host Program Structure
Platform
// Returns the error code
cl_int oclGetPlatformID (cl_platform_id *platforms) // Pointer to the platform object

Device
// Returns the error code
cl_int clGetDeviceIDs (cl_platform_id platform,
    cl_device_type device_type, // Bitfield identifying the type. For the GPU we use CL_DEVICE_TYPE_GPU
    cl_uint num_entries,        // Number of devices, typically 1
    cl_device_id *devices,      // Pointer to the device object
    cl_uint *num_devices)       // Puts here the number of devices matching the device_type

Context
// Returns the context
cl_context clCreateContext (const cl_context_properties *properties, // Bitwise with the properties (see specification)
    cl_uint num_devices,         // Number of devices
    const cl_device_id *devices, // Pointer to the devices object
    void (*pfn_notify)(const char *errinfo, const void *private_info, size_t cb, void *user_data), // (don't worry about this)
    void *user_data,             // (don't worry about this)
    cl_int *errcode_ret)         // error code result
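A minimal host-side usage sketch of these three steps (using the standard clGetPlatformIDs rather than the oclGetPlatformID helper; error handling abbreviated, a GPU device assumed to be present):

#include <CL/cl.h>

cl_int err;
cl_platform_id platform;
err = clGetPlatformIDs(1, &platform, NULL);              // take the first available platform

cl_device_id device;
err = clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU,       // ask for one GPU device
                     1, &device, NULL);

cl_context context = clCreateContext(NULL, 1, &device,   // no extra properties, no error callback
                                     NULL, NULL, &err);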

9 Host Program Structure
Command queue
cl_command_queue clCreateCommandQueue (cl_context context,
    cl_device_id device,
    cl_command_queue_properties properties, // Bitwise with the properties
    cl_int *errcode_ret)                    // error code result

Buffer: create
// Returns the cl_mem object referencing the memory allocated on the device
cl_mem clCreateBuffer (cl_context context, // The context where the memory will be allocated
    cl_mem_flags flags, // CL_MEM_READ_WRITE, CL_MEM_WRITE_ONLY, CL_MEM_READ_ONLY, CL_MEM_USE_HOST_PTR, CL_MEM_ALLOC_HOST_PTR, CL_MEM_COPY_HOST_PTR
    size_t size,        // The size in bytes
    void *host_ptr,
    cl_int *errcode_ret)

Buffer: write
(the signature is not reproduced in the transcription; see the usage sketch below)
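A hedged usage sketch continuing the snippet above; N and src_a_h (array length and host input array) are assumed host-side names, and the write uses clEnqueueWriteBuffer:

cl_command_queue queue = clCreateCommandQueue(context, device, 0, &err);

size_t bytes = N * sizeof(float);
cl_mem src_a_d = clCreateBuffer(context, CL_MEM_READ_ONLY, bytes, NULL, &err);

// Blocking write (CL_TRUE) of the host array into the device buffer
err = clEnqueueWriteBuffer(queue, src_a_d, CL_TRUE, 0, bytes,
                           src_a_h, 0, NULL, NULL);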

10 Host Program Structure
Program: create
// Returns the OpenCL program
cl_program clCreateProgramWithSource (cl_context context,
    cl_uint count,        // number of files
    const char **strings, // array of strings, each one is a file
    const size_t *lengths,// array specifying the file lengths
    cl_int *errcode_ret)  // error code to be returned

Program: build
cl_int clBuildProgram (cl_program program,
    cl_uint num_devices,
    const cl_device_id *device_list,
    const char *options,  // Compiler options, see the specification for more details
    void (*pfn_notify)(cl_program, void *user_data),
    void *user_data)

Error log
cl_int clGetProgramBuildInfo (cl_program program,
    cl_device_id device,
    cl_program_build_info param_name, // The parameter we want to know: CL_PROGRAM_BUILD_STATUS, CL_PROGRAM_BUILD_OPTIONS, CL_PROGRAM_BUILD_LOG
    size_t param_value_size,
    void *param_value,                // The answer
    size_t *param_value_size_ret)

Kernel: create
cl_kernel clCreateKernel (cl_program program, // The program where the kernel is
    const char *kernel_name, // The name of the kernel, i.e. the name of the kernel function as it is declared in the code
    cl_int *errcode_ret)
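Continuing the sketch (kernel_src is an assumed null-terminated string holding the OpenCL C source), a typical build sequence prints the build log on failure and then creates the dp_mul kernel:

cl_program program = clCreateProgramWithSource(context, 1, &kernel_src, NULL, &err);

if (clBuildProgram(program, 1, &device, NULL, NULL, NULL) != CL_SUCCESS) {
    size_t log_size;                  // query the size of the build log first
    clGetProgramBuildInfo(program, device, CL_PROGRAM_BUILD_LOG, 0, NULL, &log_size);
    char* log = (char*)malloc(log_size);
    clGetProgramBuildInfo(program, device, CL_PROGRAM_BUILD_LOG, log_size, log, NULL);
    printf("%s\n", log);              // compiler diagnostics from the device compiler
    free(log);
}

cl_kernel dp_mul = clCreateKernel(program, "dp_mul", &err);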

11 Host Program Structure
Kernel: arguments
cl_int clSetKernelArg (cl_kernel kernel, // Which kernel
    cl_uint arg_index,     // Which argument
    size_t arg_size,       // Size of the next argument (not of the value pointed by it)
    const void *arg_value) // Value

Kernel: call
cl_int clEnqueueNDRangeKernel (cl_command_queue command_queue,
    cl_kernel kernel,
    cl_uint work_dim,               // Choose if we are using 1D, 2D or 3D work-items and work-groups
    const size_t *global_work_offset,
    const size_t *global_work_size, // The total number of work-items (must have work_dim dimensions)
    const size_t *local_work_size,  // The number of work-items per work-group (must have work_dim dimensions)
    cl_uint num_events_in_wait_list,
    const cl_event *event_wait_list,
    cl_event *event)

Profile
(the profiling example is not reproduced in the transcription)
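Continuing the sketch, the dp_mul arguments are set and the kernel is launched over N work-items (src_a_d, src_b_d, res_d are the assumed device buffers; the work-group size of 64 is an assumption and must divide the global size):

clSetKernelArg(dp_mul, 0, sizeof(cl_mem), &src_a_d);
clSetKernelArg(dp_mul, 1, sizeof(cl_mem), &src_b_d);
clSetKernelArg(dp_mul, 2, sizeof(cl_mem), &res_d);

size_t global_size = N;      // total number of work-items
size_t local_size  = 64;     // work-items per work-group
err = clEnqueueNDRangeKernel(queue, dp_mul, 1, NULL,
                             &global_size, &local_size, 0, NULL, NULL);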

12 Host Program Structure
Buffer: read
cl_int clEnqueueReadBuffer (cl_command_queue command_queue,
    cl_mem buffer,         // from which buffer
    cl_bool blocking_read, // whether it is a blocking or non-blocking read
    size_t offset,         // offset from the beginning
    size_t cb,             // size to be read (in bytes)
    void *ptr,             // pointer to the host memory
    cl_uint num_events_in_wait_list,
    const cl_event *event_wait_list,
    cl_event *event)

Clean
// Cleaning up
delete[] src_a_h; delete[] src_b_h; delete[] res_h; delete[] check;
clReleaseKernel(vector_add_k);
clReleaseCommandQueue(queue);
clReleaseContext(context);
clReleaseMemObject(src_a_d);
clReleaseMemObject(src_b_d);
clReleaseMemObject(res_d);
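To complete the sketch, a blocking read copies the result back into an assumed host array res_h before the clean-up shown above:

err = clEnqueueReadBuffer(queue, res_d, CL_TRUE, 0, N * sizeof(float),
                          res_h, 0, NULL, NULL);
// res_h now holds the element-wise products computed by dp_mul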

13 Vectorization with OpenCL
Correspondence between Vc and OpenCL:
Vc:     int_v   float_v   store()    load()
OpenCL: int4    float4    vstore4()  vload4()

Example: increase the 12th, 13th, 14th and 15th elements of array A by one
int A[1000];
int4 a = vload4( 3, A );
a++;
vstore4( a, 3, A );
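The same built-ins can be used inside a kernel. An illustrative SIMDized vector-add kernel (an assumption, in the spirit of Exercise 0 Part 3 below), where each work-item processes four consecutive elements:

__kernel void vector_add_simd(__global const float* a,
                              __global const float* b,
                              __global float* c)
{
    int i = get_global_id(0);   // work-item i handles elements 4*i .. 4*i+3
    float4 va = vload4(i, a);
    float4 vb = vload4(i, b);
    vstore4(va + vb, i, c);     // component-wise addition of four floats at once
}

The global size must then be N/4 work-items instead of N.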

14 GPU Memory
Global memory
- very large, typically gigabytes
- readable and writable by all work-items
- state only well defined after the kernel has finished
- slow, but much faster with streaming access (coalescing)
- sometimes cached
Constant memory
- read-only part of the global memory (writable from the host)
- often cached
- prefer constant memory for constant values
Local memory
- very fast on-chip memory
- shared among work-items within the same work-group
- versatile (explicit global memory cache, etc.)
Private memory
- private to a single work-item
- usually physically a part of the global memory, slow
- will be used to store the work-item's registers if the register file is exhausted (must be avoided)
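As a small illustration of the constant-memory recommendation (an assumed kernel, not from the slides):

__kernel void apply_coeffs(__global const float* in,
                           __global float* out,
                           __constant float* coeffs)  // constant memory: read-only for kernels, often cached
{
    int gid = get_global_id(0);
    out[gid] = coeffs[gid % 4] * in[gid];             // small read-only table reused by all work-items
}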

15 Structure of AMD Radeon HD compute units
- Up to 925 MHz Engine Clock
- 3 GB GDDR5 Memory
- 3.79 TFLOPS Single Precision compute power

16 GCN Compute Unit
[Figure 3: GCN Compute Unit]
- 16 floats per SIMD lane
- 64 floats in total per Compute Unit

17 Exercise 0
An example code is given. It computes the vector sum C = A + B.
1. Part 1
   1. Run and understand the code
   2. Check the error codes returned by each function (they should be equal to CL_SUCCESS == 0)
   3. Play: try to change the size of the arrays (try 128, 64, 16, 1023), the type (try float), etc.
   4. Solution is main1.cpp
2. Part 2
   1. Display the build log
   2. Measure the execution time
      1. for comparison, implement a scalar version
      2. try more complicated computations (log, sqrt)
   3. Solution is main2.cpp
3. Part 3: SIMDize
   1. Solution is main3.cpp
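One possible helper for checking the error codes in Part 1 (an assumption, not part of the provided example code):

#include <stdio.h>
#include <stdlib.h>
#include <CL/cl.h>

static void check(cl_int err, const char* what)
{
    if (err != CL_SUCCESS) {   // CL_SUCCESS == 0
        fprintf(stderr, "%s failed with error code %d\n", what, err);
        exit(EXIT_FAILURE);
    }
}

// usage:  check(clGetPlatformIDs(1, &platform, NULL), "clGetPlatformIDs");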

18 Exercise 0 (continued)
4. Part 4: Create sub-devices
   1. Create sub-devices with
      cl_int clCreateSubDevices ( cl_device_id in_device,
          const cl_device_partition_property *properties,
          cl_uint num_devices,
          cl_device_id *out_devices,
          cl_uint *num_devices_ret )
   2. Try the CL_DEVICE_PARTITION_EQUALLY, CL_DEVICE_PARTITION_BY_COUNTS and CL_DEVICE_PARTITION_BY_AFFINITY_DOMAIN properties (more information you can find here: )
   3. Solution is main4.cpp
5. Part 5
   1. Create a separate function for the sum calculation and call it from the kernel function
   2. We suggest building the program in a C++-like style: clBuildProgram(program, 1, &out_devices[0], "-x clc++", NULL, NULL);
   3. Solution is main5.cpp
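A hedged sketch for Part 4: partitioning a device into equally sized sub-devices of 4 compute units each (the count of 4 is an assumption; device is taken from the earlier host sketch):

cl_device_partition_property props[] = { CL_DEVICE_PARTITION_EQUALLY, 4, 0 };

cl_uint num_sub = 0;
clCreateSubDevices(device, props, 0, NULL, &num_sub);      // first query how many sub-devices are created

cl_device_id* sub_devices = (cl_device_id*)malloc(num_sub * sizeof(cl_device_id));
clCreateSubDevices(device, props, num_sub, sub_devices, NULL);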

19 Exercise 0 (continued)
6. Part 6: Run on the GPU
   1. Try the SIMD and scalar versions
   2. Try different sizes of work-groups
   3. Solution is main6.cpp

20 Exercise 1. SIMD KF
Implement the SIMD KF package with OpenCL:
- Implement the host part in Fit.cxx (find TODO)
- Finish the Fit.cl file: implement the kernel function and describe the data structures. The functionality is already there.
Measure the scalability with OpenCL:
cd TimeHisto
. ~/pandaroot/trunk/build/config.sh
root -l make_timehisto_stat_complex_opencl.c
