# Introduction to Cloud Computing


## Transcription

1 Introduction to Cloud Computing Parallel Processing I, Spring semester, lecture of Feb 2nd Majd F. Sakr

2 Lecture Motivation What is concurrency, and why use it? Different flavors of parallel computing Get a basic idea of the benefits of concurrency and the challenges in achieving concurrent execution.

3 Lecture Outline What is Parallel Computing? Motivation for Parallel Computing Parallelization Levels (granularities) Parallel Computers Classification Parallel Computing Memory Architecture Parallel Programming Models

4 What is Parallel Computing? [Figure: a problem solved serially as a single stream of instructions over time, versus the same problem divided into parts whose instruction streams execute simultaneously]

5 Parallel Computing Resources Multiple processors on the same computer Multiple computers connected by a network Combination of both

6 History of Parallel Computing Interest started with shared memory multiprocessors working on shared data 1981: Cosmic Cube constructed (64 Intel microprocessors) Clusters appeared and companies began selling parallel computers Parallel systems built from off-the-shelf processors (e.g. Beowulf clusters) 1997: 1st supercomputer to reach 1 trillion operations/second (ASCI Red) 1998: 2nd ASCI supercomputer (ASCI Blue Pacific, by IBM, 3 trillion operations/second), followed by the 3rd (ASCI White, by IBM, 10 trillion operations/second) Today: machines exceeding 100 trillion operations/second and parallel computing based on multi-core processors

7 When can a computation be Parallelized? (1/3) When using multiple compute resources to solve the problem takes less time than using a single resource When, at any given moment, we can execute multiple chunks of the program Key issue: dependencies

8 When can a computation be Parallelized? (2/3) Problem's ability to be broken into discrete pieces to be solved simultaneously Check data dependencies To do: add a, b, c then mult d, a, e Initial values: a=1 b=3 c=-1 e=2 d=??? Sequential: add a, b, c gives a = 3 + (-1) = 2, then mult d, a, e gives d = 2 x 2 = 4 Parallel: both instructions execute at the same time, so mult reads the old a=1 and gives d = 1 x 2 = 2, a wrong result Because mult depends on the result of add, the two instructions cannot run in parallel.

9 When can a computation be Parallelized? (3/3) Problem's ability to be broken into discrete pieces to be solved simultaneously Check data dependencies To do: add a, b, c then mult d, e, f Initial values: a=??? b=3 c=-1 e=2 f=0 d=??? Sequential: add a, b, c gives a = 3 + (-1) = 2, then mult d, e, f gives d = 2 x 0 = 0 Parallel: both instructions execute at the same time and still give a=2 and d=0 Because mult does not depend on the result of add, the two instructions can safely run in parallel.
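To make the two cases concrete, here is a minimal C sketch (not from the slides) of the two instruction pairs; the initial values are the ones used above, and the comments mark where the data dependence does and does not occur:

```c
/* Sketch of the two dependency cases from slides 8-9 (values taken from the slides).
 * Case 1: mult reads a, which add writes -> a true (flow) dependence,
 *         so the two instructions cannot run in parallel.
 * Case 2: mult reads only e and f -> no dependence, safe to run in parallel. */
#include <stdio.h>

int main(void) {
    int b = 3, c = -1, e = 2, f = 0;

    /* Case 1: dependent -- must run sequentially */
    int a = b + c;       /* add a, b, c  -> a = 2                          */
    int d = a * e;       /* mult d, a, e -> d = 4 (needs the freshly written a) */
    printf("case 1: a=%d d=%d\n", a, d);

    /* Case 2: independent -- the two statements could execute concurrently */
    int a2 = b + c;      /* add a, b, c  -> a = 2                          */
    int d2 = e * f;      /* mult d, e, f -> d = 0 (uses neither a nor the add's result) */
    printf("case 2: a=%d d=%d\n", a2, d2);
    return 0;
}
```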

10 Lecture Outline What is Parallel Computing? Motivation for Parallel Computing Parallelization Levels Parallel Computers Classification Parallel Computing Memory Architecture Parallel Programming Models

11 Uses of Parallel Computing (1/2) Modeling difficult scientific and engineering problems Physics: applied, nuclear, particle, condensed matter, high pressure, fusion, photonics Geology Molecular sciences Electrical Engineering: circuit design Commercial applications Databases, data mining Networked video and multi-national corporations Financial and economic modeling

12 Uses of Parallel Computing (2/2) User Applications Image and Video Editing Entertainment Applications Games & Real-time Graphics High-Definition Video Playback 3D Animation

13 Why Parallel Computing? (1/2) Why not simply build a faster serial computer? The speed at which data can move through hardware limits how fast a serial computer can be, so data must be placed ever closer to the processing elements. Limits to miniaturization: components can only be made so small. Economic limitations: it is cheaper to use multiple commodity processors to achieve fast performance than to build a single very fast processor.

14 Why Parallel Computing? (2/2) Saving time Solving large, complex problems Taking advantage of concurrency

15 How much faster can CPUs get? Toward the early 2000s we hit a frequency wall for processors. Processor frequencies topped out at about 4 GHz due to the thermal limits of silicon. Source: Christian Thostrup

16 Solution: More chips in parallel! Enter multicore. Cram more processor cores into the chip rather than increase the clock frequency. Traditional CPUs are available with up to quad and six cores. The Cell Broadband Engine (PlayStation 3) has 9 cores. Intel's 48-core processor. Source: Intel

17 Multicore CPUs Have multiple fully functioning cores in one processor. Have individual L1 and shared L2 caches. The OS and applications see each core as an individual processor. Applications have to be specifically rewritten for performance. Source: hardwaresecrets.com

18 Graphics Processing Units (GPUs) Massively parallel computational engines for graphics. More ALUs, less cache. Crunch lots of numbers in parallel but can't move data very fast. A GPU is an accelerator: it cannot work without a CPU. Custom programming model and API. Source: NVIDIA

19 Lecture Outline What is Parallel Computing? Motivation for Parallel Computing Parallelization Levels Parallel Computers Classification Parallel Computing Memory Architecture Parallel Programming Models

20 Terms Program: an executable file with one or multiple tasks. Process: an instance of a program in execution. It has its own address space and interacts with other processes only through communication mechanisms managed by the OS. Task: an execution path through the address space that has many instructions. (Sometimes task and process are used interchangeably.) Thread: a stream of execution used to run a task. It's a coding construct that does not affect the architecture of an application. A process might contain one or multiple threads, all sharing the same address space and interacting directly. Instruction: a single operation of a processor. A thread has one or multiple instructions.
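As an illustration of these terms, the following is a minimal POSIX-threads sketch (not from the lecture): one program, one process, several threads that share the process's address space. The thread count, array size, and helper name are arbitrary choices for the example:

```c
/* One process, NUM_THREADS threads, one shared address space. Each thread
 * runs the same task on its own slice of the data and writes to its own
 * result slot, so no locking is needed here. Compile with: gcc -pthread */
#include <pthread.h>
#include <stdio.h>

#define NUM_THREADS 4
#define N 1000

static int data[N];                       /* shared: visible to every thread */
static long partial[NUM_THREADS];         /* one result slot per thread      */

static void *sum_slice(void *arg) {
    long id = (long)arg;                  /* thread-local data               */
    long lo = id * (N / NUM_THREADS), hi = lo + N / NUM_THREADS;
    long s = 0;
    for (long i = lo; i < hi; i++) s += data[i];
    partial[id] = s;
    return NULL;
}

int main(void) {
    for (int i = 0; i < N; i++) data[i] = 1;
    pthread_t t[NUM_THREADS];
    for (long i = 0; i < NUM_THREADS; i++)
        pthread_create(&t[i], NULL, sum_slice, (void *)i);
    long total = 0;
    for (int i = 0; i < NUM_THREADS; i++) {
        pthread_join(t[i], NULL);
        total += partial[i];
    }
    printf("total = %ld\n", total);       /* prints 1000 */
    return 0;
}
```

All four threads read the shared `data` array directly because it lives in the single address space of the process; only the loop bounds and the `id` differ per thread.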

22 Parallelization levels (1/8) Data level parallelism Task level parallelism Instruction level parallelism Bit level parallelism

23 Parallelization levels (2/8) Data level parallelism Associated with loops. The same operation is performed on different partitions of the same data structure. Each processor performs the task on its own part of the data. Ex: add x to all the elements of an array. But can all loops be parallelized? Loop-carried dependencies: if each iteration of the loop depends on results from the previous one, the loop cannot be parallelized. [Figure: an operation on array A[0..39] split into four concurrent tasks covering A[0..9], A[10..19], A[20..29], A[30..39]]
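A hedged OpenMP sketch of this loop-level data parallelism (OpenMP is one of the implementations named later in the deck); `N` and `x` here are arbitrary values for illustration:

```c
/* Data parallelism: add x to every element of an array. No iteration depends
 * on another, so the loop is safe to parallelize. Compile with: gcc -fopenmp */
#include <omp.h>
#include <stdio.h>

#define N 40

int main(void) {
    double a[N], x = 2.0;
    for (int i = 0; i < N; i++) a[i] = i;

    /* Each thread gets a contiguous chunk of iterations, mirroring the
       Task 1..4 split shown in the slide's figure. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        a[i] += x;

    printf("a[0]=%g a[%d]=%g\n", a[0], N - 1, a[N - 1]);
    return 0;
}
```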

24 Parallelization levels (3/8) Task level parallelism Completely different operations are performed on the same or different sets of data. Each processor is given a different task. As each processor works on its own task, it communicates with other processors to pass and get data. [Figure: a serial task running x = a + b, f = a * b * c, y = (x * d) + e, z = y^2, m = a + b, p = x + c one after another, versus the same computation split into two tasks that run concurrently, reducing the total time]
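One possible way to express this kind of task parallelism is sketched below with OpenMP sections. The split of statements into the two sections is my own choice (made so each section only uses values already computed; `x` is computed before the parallel region because both chains read it), and the variable values are illustrative:

```c
/* Task parallelism: the two sections perform different chains of operations
 * and may run on different cores. Compile with: gcc -fopenmp */
#include <omp.h>
#include <stdio.h>

int main(void) {
    double a = 1, b = 2, c = 3, d = 4, e = 5;
    double x, y, z, f, m, p;

    x = a + b;                       /* shared prerequisite for both tasks */

    #pragma omp parallel sections
    {
        #pragma omp section
        {                            /* task 1: one chain of operations */
            y = (x * d) + e;
            z = y * y;
        }
        #pragma omp section
        {                            /* task 2: a different chain */
            f = a * b * c;
            m = a + b;
            p = x + c;
        }
    }
    printf("f=%g y=%g z=%g m=%g p=%g\n", f, y, z, m, p);
    return 0;
}
```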

25 Parallelization levels (4/8) Instruction level parallelism (ILP) Reordering instructions and combining them into groups so the computer can execute them simultaneously. Example: x = a + b; y = c + d; z = x + y. Serial: if each instruction takes 1 time unit, sequential execution requires 3 time units. Instruction-level parallelized: the first two instructions are independent, so they can be performed simultaneously; the third depends on both, so it must be performed afterwards. In total this takes 2 time units.

26 Parallelization levels (5/8) Micro-architectural techniques that make use of ILP include Instruction pipelining: increasing instruction throughput by splitting each instruction into different stages. Each pipeline stage corresponds to a different action the processor performs on the instruction in that stage. A machine with an N-stage pipeline can have up to N different instructions at different stages of completion.

| Instruction # | Cycle 1 | Cycle 2 | Cycle 3 | Cycle 4 | Cycle 5 | Cycle 6 | Cycle 7 | Cycle 8 |
|---|---|---|---|---|---|---|---|---|
| 1 | IF | ID | EX | MEM | WB | | | |
| 2 | | IF | ID | EX | MEM | WB | | |
| 3 | | | IF | ID | EX | MEM | WB | |
| 4 | | | | IF | ID | EX | MEM | WB |

Each pipeline step takes less time than processing the whole instruction (all stages) at once, so a part of each instruction can be completed before the next clock signal, which reduces the delay of waiting for that signal. IF: Instruction Fetch, ID: Instruction Decode, EX: Execute, MEM: Memory Access, WB: Write Back to Register.

27 Parallelization levels (6/8) Instruction level parallelism (ILP) The superscalar degree of a core is described in levels: a level-x core is one with x functional units. The more functional units, the more parallelism the system can exploit and the less time is spent on execution. [Figure: a superscalar out-of-order pipeline with in-order fetch, decode and dispatch stages, a dispatch buffer issuing to functional units (ALU, MEM1/MEM2, FP1/FP2/FP3, BR) that execute out of order, and a re-order buffer that writes results back in order]

28 Parallelization levels (7/8) Micro-architectural techniques that make use of ILP include Superscalar processors: implement ILP within a single processor so that it can execute more than one instruction at a time (per clock cycle). It does so by sending multiple instructions to redundant functional units on the processor simultaneously. These functional units are not separate processors! They are execution resources (ALU, multiplier, ...) within a single processor, which enables it to accept multiple instructions per clock cycle. [Figure: a two-way superscalar pipeline in which two instructions are fetched and issued per cycle, so two instructions can complete per clock cycle]

29 Parallelization levels (8/8) Bit level parallelism Based on increasing the word size of the processor. Reduces the number of instructions required to perform an operation on variables whose sizes are bigger than the processor word size. Example: adding two 16-bit integers with an 8-bit processor requires 2 instructions; adding them with a 16-bit processor requires one instruction.
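A small C sketch of the word-size example: the helper below emulates how an 8-bit processor would add two 16-bit values in two steps (low bytes, then high bytes plus the carry), whereas a 16-bit or wider processor does the same work in a single add. The function name and test values are made up for illustration:

```c
/* Bit-level parallelism example: a wider word size removes the extra add step. */
#include <stdint.h>
#include <stdio.h>

/* Emulate the 8-bit processor: two add instructions with an explicit carry. */
static uint16_t add16_on_8bit(uint16_t a, uint16_t b) {
    uint8_t lo = (uint8_t)a + (uint8_t)b;                       /* step 1: low bytes        */
    uint8_t carry = lo < (uint8_t)a;                            /* did step 1 overflow?     */
    uint8_t hi = (uint8_t)(a >> 8) + (uint8_t)(b >> 8) + carry; /* step 2: high bytes + carry */
    return ((uint16_t)hi << 8) | lo;
}

int main(void) {
    uint16_t a = 0x12F0, b = 0x0345;
    printf("two 8-bit adds:  0x%04X\n", add16_on_8bit(a, b));   /* 0x1635 */
    printf("one 16-bit add:  0x%04X\n", (uint16_t)(a + b));     /* 0x1635 */
    return 0;
}
```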

30 Lecture Outline What is Parallel Computing? Motivation for Parallel Computing Parallelization Levels Parallel Computers Classification Parallel Computing Memory Architecture Parallel Programming Models

31 Parallel Computers Classification (1/5) Flynn's Classical Taxonomy Differentiates between the architectures of multi-processor computers based on two independent aspects: the instructions streaming to a processor and the data the processor uses.

| | Single Instruction | Multiple Instruction |
|---|---|---|
| Single Data | SISD | MISD |
| Multiple Data | SIMD | MIMD |

32 Parallel Computers Classification (2/5) SISD (Single Instruction, Single Data) A serial (non-parallel) computer Deterministic execution Examples: older-generation mainframes, minicomputers and workstations, and most modern-day PCs

33 Parallel Computers Classification (3/5) SIMD (Single Instruction, Multiple Data) A type of parallel computer Synchronous (lockstep) and deterministic execution Best suited for specialized problems characterized by a high degree of regularity, such as graphics/image processing Examples: most modern computers, particularly those with graphics processing units (GPUs)

34 Parallel Computers Classification (4/5) MISD (Multiple Instruction, Single Data) A single data stream is fed into multiple processing units Each processing unit operates on the data independently using independent streams of instructions Few examples exist: one is the experimental Carnegie-Mellon C.mmp computer (1971)

35 Parallel Computers Classification (5/5) MIMD (Multiple Instruction, Multiple Data) The most common type of parallel computer Execution can be synchronous or asynchronous, deterministic or non-deterministic Many MIMD architectures include SIMD execution sub-components Examples: most current supercomputers, networked parallel computer clusters and grids, multi-processor SMP computers, and multi-core PCs

36 Lecture Outline What is Parallel Computing? Motivation for Parallel Computing Parallelization Levels Parallel Computers Classification Parallel Computing Memory Architecture Parallel Programming Models

37 Parallel Computing Memory Architecture (1/9) Shared Memory Architecture Distributed Memory Architecture Hybrid Distributed-Shared Memory Architecture

38 Parallel Computing Memory Architecture (2/9) Shared Memory Architecture Processors perform their operations independently but share the same resources, since they have access to all of memory as a global address space. When a processor changes the value of a location in memory, the effect is observed by all other processors. Based on memory access times, shared memory machines can be divided into two main classes: UMA and NUMA.

39 Parallel Computing Memory Architecture (3/9) Shared Memory Architecture Uniform Memory Access (UMA): equal access rights and access times to memory. Represented by Symmetric Multiprocessor (SMP) machines. Sometimes called Cache Coherent UMA (CC-UMA). [Figure: several CPUs connected to a single main memory]

40 Parallel Computing Memory Architecture (4/9) Shared Memory Architecture Non-Uniform Memory Access (NUMA): usually built by physically linking two or more SMPs so they can access each other's memory directly. Not all processors have equal access time to all memories; memory access across the physical link is slower. Called Cache Coherent NUMA (CC-NUMA) if cache coherency is maintained. [Figure: two or more SMP nodes, each with its own main memory, joined by a bus interconnect]

41 Parallel Computing Memory Architecture (5/9) Shared Memory Architecture Advantages Program development can be simplified: the global address space makes referencing data stored in memory similar to traditional single-processor programs. Memory is proximate to the CPUs, which makes data sharing between tasks fast and uniform. Disadvantages Lack of scalability between memory and CPUs: adding more CPUs increases traffic on the path between shared memory and the CPUs. Maintaining data integrity is complex: synchronization is required to ensure correct access to memory.

42 Parallel Computing Memory Architecture (6/9) Distributed Memory Architecture Each processor has its own address space (local memory). Changes done by each processor are not visible to others. Processors are connected to each other over a communication network. The program must define a way to transfer data between processors when it is required. [Figure: several processors, each with its own main memory, connected by a network]

43 Parallel Computing Memory Architecture (7/9) Distributed Memory Architecture Advantages Memory size scales with the number of processors. Each processor has fast access to its own memory with no interference or overhead. Cost effectiveness: uses multiple low-cost processors and networking. Disadvantages Data communication between processors is the programmer's responsibility. It's difficult to map existing data structures, based on global memory, to such an organization. Challenge: how to distribute a task over multiple processors (each with its own memory), and then reassemble the results from each processor into one solution?

44 Parallel Computing Memory Architecture (8/9) Hybrid Distributed-Shared Memory Architecture Used in most of today's fastest and largest parallel computers. The shared memory components are SMP nodes. The distributed memory component is a network of SMP nodes. Its advantages and disadvantages are those of the two underlying architectures combined. [Figure: SMP nodes, each with CPUs and local main memory, connected by a network]

45 Parallel Computing Memory Architecture (9/9) Hybrid Distributed-Shared Memory Architecture Processors on an SMP node address that node's memory as global. Each SMP node knows only about its own memory, not the memory on another SMP node. Therefore, network communications are required to move data from one SMP node to another. [Figure: SMP nodes with local main memory connected by a network]

46 Lecture Outline What is Parallel Computing? Motivation for Parallel Computing Parallelization Levels Parallel Computers Classification Parallel Computing Memory Architecture Parallel Programming Models

47 Parallel Programming Models (1/12) What is a Programming Model? A programming model presents an abstraction of the computer system. It enables the expression of ideas in a specific way; a programming language is then used to put these ideas into practice. Parallel Programming Model: software technologies used as an abstraction above hardware and memory architectures to express parallel algorithms or to match algorithms with parallel systems.

48 Parallel Programming Models (2/12) Shared Memory Model Threads Model Data Parallel Model Message Passing Model Others: SPMD, MPMD

49 Parallel Programming Models (3/12) Shared Memory Model Processors read from and write to variables stored in a shared address space asynchronously. Mechanisms such as locks and semaphores are used to control access to the shared memory. [Figure: several processors, each with its own program counter and registers, executing different instruction streams (e.g. x = a + b, y = c + d, z = x + y; l = b - a, m = 2 + d; x = a * 2, f = 5, z = x - f) against variables in a single shared memory]
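A minimal sketch of the lock mechanism mentioned above, using a POSIX mutex as one concrete realization of a lock; the thread and iteration counts are arbitrary:

```c
/* Shared memory model: several threads read and modify the same variable
 * asynchronously, so a mutex serializes the updates. Compile with: gcc -pthread */
#include <pthread.h>
#include <stdio.h>

static long balance = 0;                            /* lives in the shared address space */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *deposit(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);                  /* acquire before the update        */
        balance++;                                  /* read-modify-write on shared data */
        pthread_mutex_unlock(&lock);                /* release so other threads proceed */
    }
    return NULL;
}

int main(void) {
    pthread_t t[4];
    for (int i = 0; i < 4; i++) pthread_create(&t[i], NULL, deposit, NULL);
    for (int i = 0; i < 4; i++) pthread_join(t[i], NULL);
    printf("balance = %ld (expected 400000)\n", balance);
    return 0;
}
```

Without the lock, the concurrent increments could interleave and lose updates, which is exactly the data-integrity problem the slide's locks and semaphores exist to prevent.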

50 Parallel Programming Models (4/12) Shared Memory Model Advantage: program development can be simplified since no process owns the data stored in memory. Because all processors can access the same variables, referencing data stored in memory is similar to traditional single-processor programs. Disadvantage: it's difficult to understand and manage data locality. Keeping data local to the processor that works on it reduces memory accesses, but when multiple processors try to access the same data, bus traffic increases.

51 Parallel Programming Models (5/12) Threads Model A single process is divided into multiple concurrent execution paths; each path is a thread. Process execution time is reduced because the threads are distributed and executed on different processors concurrently. [Figure: a serial task running Functions 1-4 one after another, versus a threaded task in which threads T1-T4 run the four functions concurrently, reducing the total time]

52 Parallel Programming Models (6/12) Threads Model: How does it work? Each thread has local data that is specific to that thread and not replicated on other threads. At the same time, all threads share and communicate through the global memory. The results of each thread's execution can then be combined to form the result of the main process. Threads are associated with shared memory architectures. Implementations: POSIX Threads and OpenMP.
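A brief sketch of the thread-local versus shared data described above, using OpenMP (one of the two implementations named on the slide); the per-thread work is an arbitrary placeholder:

```c
/* Each thread keeps private data ("id", "local") and the results are combined
 * through the global memory ("shared_total"). Compile with: gcc -fopenmp */
#include <omp.h>
#include <stdio.h>

int main(void) {
    long shared_total = 0;

    #pragma omp parallel
    {
        int id = omp_get_thread_num();   /* thread-local data                   */
        long local = (id + 1) * 10;      /* work private to this thread         */

        #pragma omp critical             /* combine results through shared memory */
        shared_total += local;
    }
    printf("shared_total = %ld\n", shared_total);
    return 0;
}
```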

54 Parallel Programming Models (8/12) Data Parallel Model A data set (e.g. an array) is divided into chunks, and operations are performed on each chunk concurrently. A set of tasks is carried out on the same data structure, but each task works on a different part of it and performs the same operations on its part (e.g. multiply each element of the array by 2). [Figure: an operation on array A[0..39] split into four concurrent tasks covering A[0..9], A[10..19], A[20..29], A[30..39]]

55 Parallel Programming Models (9/12) Data Parallel Model Implementation Data parallel compilers need the programmer to provide information that specifies how data is to be partitioned into tasks and how it is distributed over the processors. On shared memory architectures, all tasks can access the data structure through global memory, so the data is not actually divided. On distributed memory architectures, the data structure is divided into chunks and each chunk resides in the local memory of its task.

56 Parallel Programming Models (10/12) Message Passing Model Tasks use their own local memory and can be on the same machine or spread across multiple machines. Data is exchanged between tasks by sending and receiving messages. Data transfer requires cooperative operations between processes (e.g. a send followed by a matching receive). [Figure: tasks on Machines 1-3, each holding local data, with send() operations on tasks 1-3 matched by recv() operations on tasks 4-6]
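A minimal MPI sketch of such a cooperative send/receive between two tasks (the slide does not name MPI, but it is a common implementation of this model); the value sent and the message tag are arbitrary:

```c
/* Message passing: each process owns its data and must explicitly send it;
 * every MPI_Send is matched by an MPI_Recv. Run with e.g. "mpirun -np 2 ./a.out". */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int value;
    if (rank == 0) {
        value = 42;                                   /* data in rank 0's local memory */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", value);
    }
    MPI_Finalize();
    return 0;
}
```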

57 Parallel Programming Models (11/12) Single Program Multiple Data (SPMD): could be built by combining any of the parallel programming models mentioned above. All tasks may use different data. All tasks work simultaneously to execute one single program. At any point during execution, tasks can be executing the same or different instructions of the same program, using different data. Tasks do not always execute the whole program; sometimes only parts of it. [Figure: one program P executed by tasks 1..n, each drawing different data (F1-F5) from a data source]
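A brief SPMD sketch, again with MPI as an assumed implementation: every rank runs this same program but uses its rank to choose its own slice of the work; `N` and the reduction to rank 0 are illustrative choices:

```c
/* SPMD: the same program on every process, different data per rank. */
#include <mpi.h>
#include <stdio.h>

#define N 1000

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int chunk = N / size;                          /* same code, different data range */
    int lo = rank * chunk;
    int hi = (rank == size - 1) ? N : lo + chunk;  /* last rank takes the remainder   */

    long local = 0;
    for (int i = lo; i < hi; i++) local += i;      /* each rank sums its own slice    */

    long total = 0;
    MPI_Reduce(&local, &total, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0) printf("sum 0..%d = %ld\n", N - 1, total);

    MPI_Finalize();
    return 0;
}
```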

58 Parallel Programming Models (12/12) Multiple Program Multiple Data (MPMD): could be built by combining any of the parallel programming models mentioned above. Has many programs. The programs run in parallel, but each task could be performing the same or a different part of the program than the other tasks. All tasks may use different data. [Figure: programs P1, P2, P3 executed by tasks 1..n, each drawing different data (F1-F5) from a data source]

59 Lecture Outline What is Parallel Computing? Motivation for Parallel Computing Parallelization Levels Parallel Computers Classification Parallel Computing Memory Architecture Parallel Programming Models

60 Next Lecture Parallel Computing Design Considerations Limits and Costs of Parallel Computing Parallel Computing Performance Analysis Examples of Problems Solved By Parallel Computing

