Introduction to Multicore Programming


Introduction to Multicore Programming
Marc Moreno Maza
University of Western Ontario, London, Ontario (Canada)
CS 4435 - CS 9624

Plan
1 Multi-core Architecture: Multi-core processor; CPU Cache; CPU Coherence
2 Concurrency Platforms: PThreads; TBB; OpenMP; Cilk++; Race Conditions and Cilkscreen; MMM in Cilk++

Part 1: Multi-core Architecture (Multi-core processor; CPU Cache; CPU Coherence)

Multi-core Architecture: Multi-core processor
[Slides 4-6: illustrations of multi-core processor chips (figures only).]

Multi-core Architecture: Multi-core processor
[Figure: a chip multiprocessor (CMP): several processors P, each with its own cache ($), connected through a network to shared memory and I/O.]

Multi-core Architecture: Multi-core processor

A multi-core processor is an integrated circuit onto which two or more individual processors (called cores in this context) have been attached.
In a many-core processor, the number of cores is large enough that traditional multi-processor techniques are no longer efficient.
Cores on a multi-core device may be coupled tightly or loosely: they may or may not share a cache, and they may communicate through shared memory or message passing.
Cores on a multi-core device implement the same architectural features as single-core systems, such as instruction-level parallelism (ILP), vector processing, SIMD and multi-threading.
Many applications do not yet achieve large speedup factors: parallelizing algorithms and software is a major ongoing research area.
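As a quick practical aside (not part of the original slides), a C++11 program can query how many hardware threads the cores provide; a minimal sketch:

#include <iostream>
#include <thread>

int main() {
    // Number of hardware threads the implementation can run concurrently;
    // the standard allows 0 to be returned if the value is not computable.
    unsigned n = std::thread::hardware_concurrency();
    std::cout << "hardware threads: " << n << std::endl;
    return 0;
}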

Multi-core Architecture: CPU Cache (1/7)

A CPU cache is an auxiliary memory that is smaller and faster than the main memory and stores copies of the main memory locations that are expected to be used frequently.
Most modern desktop and server CPUs have at least three independent caches: the data cache, the instruction cache and the translation look-aside buffer.

Multi-core Architecture: CPU Cache (2/7)

Each location in each memory (main or cache) has:
- a datum (cache line), which ranges between 8 and 512 bytes in size, while a datum requested by a CPU instruction ranges between 1 and 16 bytes;
- a unique index (called an address in the case of the main memory).
In the cache, each location also has a tag, storing the address of the corresponding cached datum.

Multi-core Architecture: CPU Cache (3/7)

When the CPU needs to read or write a location, it checks the cache:
- if it finds it there, we have a cache hit;
- if not, we have a cache miss and (in most cases) the processor needs to create a new entry in the cache.
Making room for a new entry requires a replacement policy: Least Recently Used (LRU) discards the least recently used items first; this requires age bits.
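
To make the LRU replacement policy concrete, here is a minimal software sketch (illustration only, not from the slides): a fixed-capacity table of block addresses kept in most-recently-used order, evicting the least recently used address on a miss.

#include <cstddef>
#include <list>
#include <unordered_map>

struct LRU {
    std::size_t capacity;
    std::list<long> order;                                    // addresses, most recently used first
    std::unordered_map<long, std::list<long>::iterator> pos;  // address -> position in the list

    explicit LRU(std::size_t c) : capacity(c) {}

    // Returns true on a cache hit; on a miss, inserts the address and,
    // if the cache is full, evicts the least recently used one.
    bool access(long addr) {
        auto it = pos.find(addr);
        if (it != pos.end()) {                 // hit: move the entry to the front
            order.splice(order.begin(), order, it->second);
            return true;
        }
        if (order.size() == capacity) {        // miss with full cache: evict the LRU entry
            pos.erase(order.back());
            order.pop_back();
        }
        order.push_front(addr);
        pos[addr] = order.begin();
        return false;
    }
};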

Multi-core Architecture: CPU Cache (4/7)

Hiding read latency (the time to read a datum from the main memory) requires keeping the CPU busy with something else:
- out-of-order execution: attempt to execute independent instructions arising after the instruction that is waiting due to the cache miss;
- hyper-threading (HT): allow an alternate thread to use the CPU.

Multi-core Architecture: CPU Cache (5/7)

Modifying data in the cache requires a write policy for updating the main memory:
- write-through cache: writes are immediately mirrored to main memory;
- write-back cache: the main memory is updated only when the data is evicted from the cache.
The cached copy may become out-of-date or stale if other processors modify the original entry in the main memory.

Multi-core Architecture: CPU Cache (6/7)

The placement policy (associativity) decides where in the cache a copy of a particular entry of main memory will go:
- fully associative: any entry in the cache can hold it;
- direct mapped: only one possible entry in the cache can hold it;
- N-way set associative: N possible entries can hold it.
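
As a worked example of these placement policies (added for illustration; the cache geometry below is hypothetical), the set that may hold a given address in an N-way set-associative cache is computed from the block number:

#include <cstdint>
#include <cstdio>

// Hypothetical geometry: 32 KB cache, 64-byte lines, 8-way set associative.
constexpr std::uint64_t LINE_SIZE  = 64;
constexpr std::uint64_t CACHE_SIZE = 32 * 1024;
constexpr std::uint64_t WAYS       = 8;
constexpr std::uint64_t NUM_SETS   = CACHE_SIZE / (LINE_SIZE * WAYS);  // 64 sets

int main() {
    std::uint64_t addr  = 0x12345678;
    std::uint64_t block = addr / LINE_SIZE;   // which memory block (cache line) the address falls in
    std::uint64_t set   = block % NUM_SETS;   // which of the 64 sets may hold it (8 candidate entries)
    std::uint64_t tag   = block / NUM_SETS;   // stored in the tag to identify the block within the set
    std::printf("block=%llu set=%llu tag=%llu\n",
                (unsigned long long)block, (unsigned long long)set, (unsigned long long)tag);
    return 0;
}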

Multi-core Architecture: CPU Cache (7/7)

See Cache Performance for SPEC CPU2000 by J.F. Cantin and M.D. Hill. The SPEC CPU2000 suite is a collection of 26 compute-intensive, non-trivial programs used to evaluate the performance of a computer's CPU, memory system and compilers (http://www.spec.org/osg/cpu2000).

Multi-core Architecture: Cache Coherence (1/6)
Figure: Processor P1 reads x=3 first from the backing store (higher-level memory).

Multi-core Architecture: Cache Coherence (2/6)
Figure: Next, Processor P2 loads x=3 from the same memory.

Multi-core Architecture: Cache Coherence (3/6)
Figure: Processor P4 loads x=3 from the same memory.

Multi-core Architecture: Cache Coherence (4/6)
Figure: Processor P2 issues a write x=5.

Multi-core Architecture: Cache Coherence (5/6)
Figure: Processor P2 writes x=5 in its local cache.

Multi-core Architecture: Cache Coherence (6/6)
Figure: Processor P1 issues a read of x, which is now invalid in its cache.

Multi-core Architecture: MSI Protocol

In this cache coherence protocol, each block contained in a cache can be in one of three states:
- M (modified): the cache line has been modified and the corresponding data is inconsistent with the backing store; the cache is responsible for writing the block back to the backing store when the block is evicted.
- S (shared): the block is unmodified and shared, that is, a valid copy exists in at least one cache; the cache can evict the data without writing it to the backing store.
- I (invalid): the block is invalid and must be fetched from memory or from another cache if it is to be stored in this cache.
These coherency states are maintained through communication between the caches and the backing store. The caches have different responsibilities when blocks are read or written, or when they learn of other caches issuing reads or writes for a block.
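
For illustration only (not from the slides), the decision a cache makes on a local write can be sketched as a tiny transition function over the three MSI states, assuming invalidations are broadcast to the other caches:

// Illustrative sketch of the MSI states and one transition rule.
enum class State { Modified, Shared, Invalid };

State on_local_write(State s) {
    switch (s) {
    case State::Invalid:   // fetch the block and invalidate other copies
    case State::Shared:    // upgrade: invalidate other copies
        return State::Modified;
    case State::Modified:  // already exclusive and dirty: nothing to do
        return State::Modified;
    }
    return s;
}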

Multi-core Architecture: True Sharing and False Sharing

True sharing:
- True sharing cache misses occur whenever two processors access the same data word.
- True sharing requires the processors involved to explicitly synchronize with each other to ensure program correctness.
- A computation is said to have temporal locality if it re-uses much of the data it has been accessing; programs with high temporal locality tend to have less true sharing.
False sharing:
- False sharing results when different processors use different data that happen to be co-located on the same cache line.
- A computation is said to have spatial locality if it uses multiple words in a cache line before the line is displaced from the cache.
- Enhancing spatial locality often minimizes false sharing (see the sketch below).
See "Data and Computation Transformations for Multiprocessors" by J.M. Anderson, S.P. Amarasinghe and M.S. Lam, http://suif.stanford.edu/papers/anderson95/paper.html
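
A minimal sketch of how false sharing is usually avoided in practice (not from the slides; it assumes 64-byte cache lines and C++17): each per-thread counter is padded to its own cache line, so concurrent updates no longer ping-pong a shared line between the cores' caches.

#include <thread>
#include <vector>

// Without alignas(64), adjacent counters would sit on the same cache line
// and the parallel increments below would falsely share that line.
struct alignas(64) PaddedCounter {
    long value = 0;
};

int main() {
    const int nthreads = 4;
    std::vector<PaddedCounter> counters(nthreads);
    std::vector<std::thread> workers;
    for (int t = 0; t < nthreads; ++t)
        workers.emplace_back([&counters, t] {
            for (long i = 0; i < 10000000; ++i)
                counters[t].value++;   // each thread touches only its own cache line
        });
    for (auto& w : workers) w.join();
    return 0;
}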

Multi-core Architecture: Multi-core processor (cont'd)

Advantages:
- Cache coherency circuitry can operate at a higher rate than is possible off-chip.
- Reduced power consumption for a dual-core versus two coupled single-core processors (better-quality communication signals, and a cache can be shared).
Challenges:
- Adjustments to existing software (including the OS) are required to maximize performance.
- Production yields go down (an Intel quad-core is in fact a double dual-core).
- Two processing cores sharing the same bus and memory bandwidth may limit performance.
- High levels of false or true sharing and synchronization can easily overwhelm the advantage of parallelism.

Part 2: Concurrency Platforms (PThreads; TBB; OpenMP; Cilk++; Race Conditions and Cilkscreen; MMM in Cilk++)

Concurrency Platforms

Programming directly on processor cores is painful and error-prone. A concurrency platform abstracts processor cores, handles synchronization and communication protocols, and (optionally) performs load balancing.
Examples of concurrency platforms: Pthreads, Threading Building Blocks (TBB), OpenMP and Cilk++.
We use an implementation of the Fibonacci sequence F(n+2) = F(n+1) + F(n) to compare these four concurrency platforms.

Concurrency Platforms: Fibonacci Execution

[Figure: recursion tree of fib(4), showing the recursive calls down to fib(1) and fib(0).]

Key idea for parallelization: the calculations of fib(n-1) and fib(n-2) can be executed simultaneously without mutual interference.

int fib(int n) {
  if (n < 2) return n;
  else {
    int x = fib(n-1);
    int y = fib(n-2);
    return x + y;
  }
}

Concurrency Platforms: PThreads

Pthreads is a POSIX standard for threads communicating through shared memory.
Pthreads defines a set of C types, functions and constants. It is implemented with a pthread.h header and a thread library.
Programmers can use Pthreads to create, manipulate and manage threads. In particular, programmers can synchronize between threads using mutexes, condition variables and semaphores.
This is a do-it-yourself concurrency platform: programmers have to map threads onto the computer's resources themselves (static scheduling).

Concurrency Platforms: Key Pthread Functions

int pthread_create(
  pthread_t *thread,           // returned identifier for the new thread
  const pthread_attr_t *attr,  // object to set thread attributes (NULL for default)
  void *(*func)(void *),       // routine executed after creation
  void *arg                    // a single argument passed to func
);                             // returns error status

int pthread_join(
  pthread_t thread,            // identifier of the thread to wait for
  void **status                // terminating thread's status (NULL to ignore)
);                             // returns error status

WinAPI threads provide similar functionality.

Concurrency Platforms: PThreads

Overhead: the cost of creating a thread is more than 10,000 cycles. This enforces coarse-grained concurrency. (Thread pools can help.)
Scalability: the Fibonacci code gets about a 1.5x speedup on 2 cores when computing fib(40). Indeed, the thread-creation overhead is so large that only one thread is spawned (see the code below). Consequently, the code must be rewritten to use more than 2 cores.
Simplicity: programmers must engage in error-prone protocols in order to schedule and load-balance by hand.

Concurrency Platforms: PThreads

#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>

int fib(int n) {
  if (n < 2) return n;
  else {
    int x = fib(n-1);
    int y = fib(n-2);
    return x + y;
  }
}

typedef struct {
  int input;
  int output;
} thread_args;

void *thread_func(void *ptr) {
  int i = ((thread_args *) ptr)->input;
  ((thread_args *) ptr)->output = fib(i);
  return NULL;
}

Concurrency Platforms: PThreads

int main(int argc, char *argv[]) {
  pthread_t thread;
  thread_args args;
  int status;
  int result;

  if (argc < 2) return 1;
  int n = atoi(argv[1]);
  if (n < 30) result = fib(n);
  else {
    args.input = n-1;
    status = pthread_create(&thread, NULL, thread_func, (void*) &args);
    // main can continue executing
    result = fib(n-2);
    // Wait for the thread to terminate.
    pthread_join(thread, NULL);
    result += args.output;
  }
  printf("Fibonacci of %d is %d.\n", n, result);
  return 0;
}

Concurrency Platforms: TBB (1/2)

TBB is a C++ library that runs on top of native threads.
Programmers specify tasks rather than threads:
- tasks are objects; each task object has an input parameter and an output parameter;
- one needs to define (at least) two methods: one for creating a task and one for executing it;
- tasks are launched by a spawn or a spawn-and-wait-for-all statement.
Tasks are automatically load-balanced across the threads using the work-stealing principle.
TBB is developed by Intel, with a focus on performance.

Concurrency Platforms: TBB (2/2)

TBB provides many C++ templates to express common patterns simply, such as:
- parallel_for for loop parallelism;
- parallel_reduce for data aggregation;
- pipeline and filter for software pipelining.
TBB provides concurrent container classes, which allow multiple threads to safely access and update items in the container concurrently.
TBB also provides a variety of mutual-exclusion library functions, including locks. A short sketch follows.
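
Here is a short, hedged sketch of the loop-level templates mentioned above (the function parallel_sum is an invented example, not from the slides): tbb::parallel_reduce splits the index range into chunks, runs them on worker threads and combines the partial sums.

#include <cstddef>
#include <vector>
#include <tbb/blocked_range.h>
#include <tbb/parallel_reduce.h>

double parallel_sum(const std::vector<double>& v) {
    return tbb::parallel_reduce(
        tbb::blocked_range<std::size_t>(0, v.size()),   // range to split
        0.0,                                            // identity of the reduction
        [&](const tbb::blocked_range<std::size_t>& r, double partial) {
            for (std::size_t i = r.begin(); i != r.end(); ++i)
                partial += v[i];                        // accumulate one chunk
            return partial;
        },
        [](double a, double b) { return a + b; });      // combine partial sums
}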

Concurrency Platforms: TBB

class FibTask: public task {
public:
  const long n;
  long* const sum;
  FibTask( long n_, long* sum_ ) : n(n_), sum(sum_) {}
  task* execute() {
    if( n < 2 ) {
      *sum = n;
    } else {
      long x, y;
      FibTask& a = *new( allocate_child() ) FibTask(n-1, &x);
      FibTask& b = *new( allocate_child() ) FibTask(n-2, &y);
      set_ref_count(3);          // two children plus one for the wait
      spawn( b );
      spawn_and_wait_for_all( a );
      *sum = x + y;
    }
    return NULL;
  }
};

Concurrency Platforms: OpenMP

Several compilers are available, both open-source and commercial (e.g. Visual Studio). OpenMP runs on top of native threads.
It consists of linguistic extensions to C/C++ or Fortran in the form of compiler pragmas (compiler directives):
- #pragma omp task shared(x) means that the next statement is an independent task; moreover, sharing of memory is managed explicitly;
- other pragmas express directives for scheduling, loop parallelism and data aggregation.
OpenMP supports loop parallelism and, more recently (since Version 3.0), task parallelism with dynamic scheduling.
OpenMP provides a variety of synchronization constructs (barriers, mutual-exclusion locks, etc.).

Concurrency Platforms: OpenMP

int fib(int n) {
  if (n < 2) return n;
  int x, y;
  #pragma omp task shared(x)
  x = fib(n - 1);
  #pragma omp task shared(y)
  y = fib(n - 2);
  #pragma omp taskwait
  return x + y;
}
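
To complement the task example above, here is a minimal sketch (not from the slides) of OpenMP loop parallelism with a reduction clause; each thread accumulates a private copy of sum, and the copies are combined at the end of the loop. Compile with an OpenMP-enabled compiler, e.g. with -fopenmp.

#include <stdio.h>

int main(void) {
    const int n = 1000000;
    double sum = 0.0;
    // The reduction clause gives each thread a private sum and combines them.
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < n; ++i)
        sum += 1.0 / (i + 1);
    printf("sum = %f\n", sum);
    return 0;
}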

Concurrency Platforms: From Cilk to Cilk++

Cilk has been developed since 1994 at the MIT Laboratory for Computer Science by Prof. Charles E. Leiserson and his group, in particular by Matteo Frigo.
Besides being used for research and teaching, Cilk was the system used to code three world-class chess programs: Tech, Socrates and Cilkchess.
Over the years, the implementations of Cilk have run on computers ranging from networks of Linux laptops to a 1,824-node Intel Paragon.
From 2007 to 2009 Cilk led to Cilk++, developed by Cilk Arts, an MIT spin-off, which was acquired by Intel in July 2009. Today it can be freely downloaded; the place to start is http://www.cilk.com/
Cilk is still developed at MIT: http://supertech.csail.mit.edu/cilk/

Concurrency Platforms: Cilk++

Cilk++ (resp. Cilk) is a small set of linguistic extensions to C++ (resp. C) supporting fork-join parallelism.
Both Cilk and Cilk++ feature a provably efficient work-stealing scheduler.
Cilk++ provides a hyperobject library for parallelizing code with global variables and performing reductions for data aggregation; a sketch follows below.
Cilk++ includes the Cilkscreen race detector and the Cilkview performance analyzer.
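
As a hedged illustration of the hyperobject (reducer) idea, here is a sketch in the later Intel Cilk Plus syntax (the original Cilk++ API differs slightly); sum_of_squares is an invented example, not from the slides. The reducer replaces a global accumulator so the parallel loop has no determinacy race yet still produces a single sum.

#include <cilk/cilk.h>
#include <cilk/reducer_opadd.h>

long sum_of_squares(int n) {
    cilk::reducer_opadd<long> sum(0);   // hyperobject: one view per strand
    cilk_for (int i = 0; i < n; ++i)
        sum += (long)i * i;             // each strand updates its own view
    return sum.get_value();             // views are combined into one result
}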

Concurrency Platforms: Nested Parallelism in Cilk++

int fib(int n) {
  if (n < 2) return n;
  int x, y;
  x = cilk_spawn fib(n-1);
  y = fib(n-2);
  cilk_sync;
  return x + y;
}

The named child function call cilk_spawn fib(n-1) may execute in parallel with its parent.
The Cilk++ keywords cilk_spawn and cilk_sync grant permission for parallel execution; they do not command parallel execution.

Concurrency Platforms: Loop Parallelism in Cilk++

[Figure: an n x n matrix A = (a_ij) and its transpose A^T.]

// indices run from 0, not 1
cilk_for (int i = 1; i < n; ++i) {
  for (int j = 0; j < i; ++j) {
    double temp = A[i][j];
    A[i][j] = A[j][i];
    A[j][i] = temp;
  }
}

The iterations of a cilk_for loop may execute in parallel.

Concurrency Platforms: Serial Semantics (1/2)

Cilk (resp. Cilk++) is a multithreaded language for parallel programming that generalizes the semantics of C (resp. C++) by introducing linguistic constructs for parallel control.
Cilk (resp. Cilk++) is a faithful extension of C (resp. C++): the C (resp. C++) elision of a Cilk (resp. Cilk++) program is a correct implementation of the semantics of the program. Moreover, on one processor, a parallel Cilk (resp. Cilk++) program scales down to run nearly as fast as its C (resp. C++) elision.
To obtain the serialization of a Cilk++ program:
#define cilk_for for
#define cilk_spawn
#define cilk_sync

Concurrency Platforms: Serial Semantics (2/2)

Cilk++ source:
int fib(int n) {
  if (n < 2) return n;
  else {
    int x, y;
    x = cilk_spawn fib(n-1);
    y = fib(n-2);
    cilk_sync;
    return x + y;
  }
}

Serialization:
int fib(int n) {
  if (n < 2) return n;
  else {
    int x, y;
    x = fib(n-1);
    y = fib(n-2);
    return x + y;
  }
}

Concurrency Platforms: Scheduling (1/3)

int fib(int n) {
  if (n < 2) return n;
  else {
    int x, y;
    x = cilk_spawn fib(n-1);
    y = fib(n-2);
    cilk_sync;
    return x + y;
  }
}

[Figure: several processors P, each with its own cache ($), connected through a network to shared memory and I/O.]

Concurrency Platforms: Scheduling (2/3)

The Cilk/Cilk++ randomized work-stealing scheduler load-balances the computation at run-time. Each processor maintains a ready deque:
- a ready deque is a double-ended queue where each entry is a procedure instance that is ready to execute;
- adding a procedure instance to the bottom of the deque represents a procedure call being spawned;
- a procedure instance being deleted from the bottom of the deque represents the processor beginning/resuming execution of that procedure;
- deletion from the top of the deque corresponds to that procedure instance being stolen.
A mathematical proof guarantees near-perfect linear speed-up on applications with sufficient parallelism, as long as the architecture has sufficient memory bandwidth. A sketch of the deque discipline follows.
A spawn/return in Cilk is over 100 times faster than a Pthread create/exit and less than 3 times slower than an ordinary C function call on a modern Intel processor.
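
Purely as an illustration of the deque discipline described above (real Cilk deques are lock-free and considerably more subtle), here is a sketch of a ready deque with the three operations a worker and a thief perform:

#include <deque>
#include <functional>
#include <mutex>

class ReadyDeque {
    std::deque<std::function<void()>> tasks;
    std::mutex m;
public:
    void push_bottom(std::function<void()> t) {      // a spawn adds work at the bottom
        std::lock_guard<std::mutex> g(m);
        tasks.push_back(std::move(t));
    }
    bool pop_bottom(std::function<void()>& t) {      // the owner resumes the newest work
        std::lock_guard<std::mutex> g(m);
        if (tasks.empty()) return false;
        t = std::move(tasks.back()); tasks.pop_back();
        return true;
    }
    bool steal_top(std::function<void()>& t) {       // a thief steals the oldest work
        std::lock_guard<std::mutex> g(m);
        if (tasks.empty()) return false;
        t = std::move(tasks.front()); tasks.pop_front();
        return true;
    }
};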

Concurrency Platforms: Scheduling (3/3)
[Slides 46-53: step-by-step figures illustrating the work-stealing scheduler (no text content).]

Concurrency Platforms: The Cilk++ Platform

[Figure: the Cilk++ tool chain. A Cilk++ source file (the fib example above) and its serialization feed the Cilk++ compiler and a conventional compiler, respectively; the linker produces a binary that runs with the runtime system and the hyperobject library. Conventional regression tests exercise the serialization (reliable single-threaded code), while the Cilkscreen race detector and parallel regression tests check the parallel binary (reliable multi-threaded code); the Cilkview scalability analyzer assesses performance.]

Concurrency Platforms: Race Bugs (1/3)

Example:
int x = 0;
cilk_for (int i = 0; i < 2; ++i) {
  x++;
}
assert(x == 2);

[Figure: dependency graph A -> {B, C} -> D, where A is "int x = 0", B and C are the two parallel increments "x++", and D is "assert(x == 2)".]

- Iterations of a cilk_for should be independent.
- Between a cilk_spawn and the corresponding cilk_sync, the code of the spawned child should be independent of the code of the parent, including code executed by additional spawned or called children.
- Note: the arguments to a spawned function are evaluated in the parent before the spawn occurs.

Concurrency Platforms: Race Bugs (2/3)

At the machine level, each increment x++ decomposes into a load into a register, an increment of that register and a store back to memory. One possible interleaving of the two parallel increments (B and C) is:
1. x = 0;           (A)
2. r1 = x;          (B loads x, which is 0)
3. r1++;            (B)
4. r2 = x;          (C loads x, which is still 0)
5. r2++;            (C)
6. x = r2;          (C stores 1)
7. x = r1;          (B stores 1)
8. assert(x == 2);  (D fails: x is 1, not 2)

Concurrency Platforms: Race Bugs (3/3)

Watch out for races in packed data structures such as:
struct { char a; char b; }
Updating x.a and x.b in parallel can cause races.
If an ostensibly deterministic Cilk++ program run on a given input could possibly behave any differently than its serialization, the Cilkscreen race detector guarantees to report and localize the offending race.
Cilkscreen employs a regression-test methodology (the programmer provides test inputs) and dynamic instrumentation of the binary code.
It identifies the file names, lines and variables involved in the race.
It runs about 20 times slower than real-time.

Concurrency Platforms: MMM in Cilk++

template<typename T>
void multiply_iter_par(int ii, int jj, int kk, T* A, T* B, T* C)
{
  cilk_for (int i = 0; i < ii; ++i)
    for (int k = 0; k < kk; ++k)
      cilk_for (int j = 0; j < jj; ++j)
        C[i * jj + j] += A[i * kk + k] * B[k * jj + j];
}

This version does not scale up well, due to poor locality and uncontrolled granularity.

Concurrency Platforms: MMM in Cilk++

template<typename T>
void multiply_rec_seq_helper(int i0, int i1, int j0, int j1, int k0, int k1,
                             T* A, ptrdiff_t lda, T* B, ptrdiff_t ldb,
                             T* C, ptrdiff_t ldc)
{
  int di = i1 - i0;
  int dj = j1 - j0;
  int dk = k1 - k0;
  if (di >= dj && di >= dk && di >= RECURSION_THRESHOLD) {
    int mi = i0 + di / 2;
    multiply_rec_seq_helper(i0, mi, j0, j1, k0, k1, A, lda, B, ldb, C, ldc);
    multiply_rec_seq_helper(mi, i1, j0, j1, k0, k1, A, lda, B, ldb, C, ldc);
  } else if (dj >= dk && dj >= RECURSION_THRESHOLD) {
    int mj = j0 + dj / 2;
    multiply_rec_seq_helper(i0, i1, j0, mj, k0, k1, A, lda, B, ldb, C, ldc);
    multiply_rec_seq_helper(i0, i1, mj, j1, k0, k1, A, lda, B, ldb, C, ldc);
  } else if (dk >= RECURSION_THRESHOLD) {
    int mk = k0 + dk / 2;
    multiply_rec_seq_helper(i0, i1, j0, j1, k0, mk, A, lda, B, ldb, C, ldc);
    multiply_rec_seq_helper(i0, i1, j0, j1, mk, k1, A, lda, B, ldb, C, ldc);
  } else {
    for (int i = i0; i < i1; ++i)
      for (int k = k0; k < k1; ++k)
        for (int j = j0; j < j1; ++j)
          C[i * ldc + j] += A[i * lda + k] * B[k * ldb + j];
  }
}

Concurrency Platforms: MMM in Cilk++

template<typename T>
inline void multiply_rec_seq(int ii, int jj, int kk, T* A, T* B, T* C)
{
  multiply_rec_seq_helper(0, ii, 0, jj, 0, kk, A, kk, B, jj, C, jj);
}

Multiplying a 4000x8000 matrix by an 8000x4000 matrix on 32 cores = 8 sockets x 4 cores (Quad-Core AMD Opteron 8354) per socket. The 32 cores share an L3 32-way set-associative cache of 2 MBytes.

#cores   Elision (s)   Parallel (s)   Speedup
 8        420.906       51.365         8.19
16        432.419       25.845        16.73
24        413.681       17.361        23.83
32        389.300       13.051        29.83