Reflections on X10: Towards Performance and Productivity at Scale



Reflections on X10: Towards Performance and Productivity at Scale
David Grove, IBM T.J. Watson Research Center
This material is based upon work supported in part by the Defense Advanced Research Projects Agency under its Agreement No. HR0011-07-9-0002.

Acknowledgments
The X10 research project started in 2004. Major technical contributions have come from many people:
- the IBM X10 team (current and former), more than 50 people
- external research collaborators
- the X10 user community
- the IBM PERCS project team
Support from DARPA HPCS, the US Department of Energy, the US Air Force Research Lab, and IBM.

Talk Objectives
- Fundamental challenges: what is X10 all about? Insights into the APGAS programming model; applicability beyond the X10 language
- Realization of the APGAS model in X10: language design and implementation alternatives and their implications
- X10 in use: X10 at scale; X10 + Java commercial applications; X10 community activities

Talk Outline
- X10 programming model overview: code snippets; larger examples
- X10 in use: scaling X10; agent simulation (Megaffic/XAXIS); Main Memory Map Reduce (M3R)
- Implementing X10: compiling X10; the X10 runtime
- Final thoughts

X10 Genesis: DARPA HPCS Program (2004)
High Productivity Computing Systems. Central challenge: productive programming of large-scale supercomputers.
- Clustered systems: 1000s of SMP nodes connected by a high-performance interconnect
- Large aggregate memory, disk storage, etc.
[Figure: massively parallel processor systems and SMP clusters — SMP nodes (PEs plus memory) connected by an interconnect, e.g. IBM Blue Gene.]

Programming Model Challenges
- Scale out: the program must run across many nodes (distributed memory; network capabilities)
- Scale up: the program must exploit multi-core and accelerators (concurrency; heterogeneity)
- Both productivity and performance: bring modern commercial tooling, languages, and practices to HPC programmers; support high-level abstractions, code reuse, and rapid prototyping, while still enabling full utilization of HPC hardware capabilities at scale

X10: Performance and Productivity at Scale
X10 language:
- Java-like language (statically typed, object-oriented, garbage-collected)
- Ability to specify scale-out computations (exploit modern networks, clusters)
- Ability to specify fine-grained concurrency (exploit multi-core)
- Single programming model for computation offload and heterogeneity (exploit GPUs)
X10 tool chain:
- Open-source compiler, runtime, class libraries
- Dual path: Managed X10 (X10 to Java) and Native X10 (X10 to C++)
- Command-line tooling (compile/execute): Linux, Mac OS X, Windows, AIX; x86, Power; sockets, MPI, PAMI, DCMF
- Eclipse-based IDE (edit/browse/compile/execute)

Partitioned Global Address Space (PGAS) Languages
Managing locality is a key programming task in a distributed-memory system. PGAS combines a single global address space with locality awareness.
- PGAS languages: Titanium, UPC, CAF, X10, Chapel
- Single address space across all shared-memory nodes: any task or object can refer to any object (local or remote)
- Partitioned to reflect locality: each partition (an X10 place) must fit within a shared-memory node; each partition contains a collection of tasks and objects
In X10, tasks and objects are mapped to places explicitly; objects are immovable; tasks must spawn a remote task or shift place to access remote objects (see the sketch below).
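A minimal sketch of that last rule, using constructs introduced on the following slides (the Cell class and its field v are hypothetical, introduced only for illustration): a remote object is reached by shifting to its place through a global reference, never by dereferencing it directly.

    class RemoteRead {
        static class Cell {
            var v:Long;
            def this(v:Long) { this.v = v; }
        }

        public static def main(args:Rail[String]) {
            // The Cell lives at the place that created it (here, Place 0).
            val ref = GlobalRef[Cell](new Cell(42));
            // Any activity, at any place, may hold ref, but must shift to
            // ref.home before dereferencing it with ref().
            val value = at (ref.home) ref().v;
            Console.OUT.println("read " + value);
        }
    }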

X10 Combines PGAS with Asynchrony (APGAS)
[Figure: a distributed heap of places (Place 0 ... Place N), each with a local heap and a set of activities, linked by global references.]
- Fine-grain concurrency: async S; finish S
- Atomicity: atomic S; when (c) S
- Place-shifting operations: at (p) S; at (p) e
- Distributed heap: GlobalRef[T]; PlaceLocalHandle[T]

Concurrency: async and finish
async S
- Creates a new child activity that executes statement S and returns immediately
- S may reference variables in enclosing blocks
- Activities cannot be named or cancelled
finish S
- Executes S, then waits until all (transitively) spawned asyncs have terminated
- finish is useful for expressing synchronous operations on (local or) remote data
Rooted exception model: trap all exceptions and throw an (aggregate) exception if any spawned async terminates exceptionally.

    // Compute the Fibonacci sequence in parallel.
    def fib(n:Int):Int {
        if (n < 2) return 1;
        val f1:Int;
        val f2:Int;
        finish {
            async f1 = fib(n-1);
            f2 = fib(n-2);
        }
        return f1 + f2;
    }
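As a small illustration of the rooted exception model described above (a sketch, assuming the aggregate exception is delivered as x10.lang.MultipleExceptions and using a hypothetical failing child): exceptions escaping spawned asyncs are collected and rethrown at the enclosing finish.

    class RootedExceptions {
        public static def main(args:Rail[String]) {
            try {
                finish {
                    async { throw new Exception("child 1 failed"); }  // hypothetical failure
                    async { /* this child completes normally */ }
                }
            } catch (e:MultipleExceptions) {
                // The finish surfaces all failures from its asyncs as one aggregate.
                Console.OUT.println(e.exceptions.size + " async(s) terminated exceptionally");
            }
        }
    }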

Atomicity: atomic and when
atomic S
- Executes statement S atomically
- Atomic blocks are conceptually executed in a serialized order with respect to all other atomic blocks in a place: isolation and weak atomicity
- An atomic block body (S) must be nonblocking, sequential, and local
when (E) S
- The activity suspends until a state in which the guard E is true; in that state, S is executed atomically and in isolation
- The guard E is a boolean expression and must be nonblocking, sequential, local, and pure

    // Push data onto a concurrent list-stack.
    val node = new Node(data);
    atomic {
        node.next = head;
        head = node;
    }

    class OneBuffer {
        var datum:Object = null;
        var filled:Boolean = false;
        def receive():Object {
            when (filled) {
                val v = datum;
                datum = null;
                filled = false;
                return v;
            }
        }
    }

Atomicity: atomic and when (and locks)
X10 currently implements atomic and when trivially with a per-place lock:
- All atomic/when statements are serialized within a place
- The scheduler re-evaluates pending when conditions on exit of an atomic section
- Poor scalability on multi-core nodes; when is especially inefficient
Pragmatics: the class library provides lower-level alternatives (see the sketch below)
- x10.util.concurrent.Lock: a pthread mutex
- x10.util.concurrent.AtomicInteger et al.: wrap machine atomic update operations
This is an aspect of X10 where our implementation has not yet matched our ambitions. It is an area for future research and a natural fit for transactional memory (STM/HTM/hybrid).
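A minimal sketch of the trade-off, in the X10 dialect used on these slides (method names such as incrementAndGet follow the java.util.concurrent-style API the library mirrors; treat the exact signatures as assumptions): the same counter written with atomic and with a machine atomic.

    import x10.util.concurrent.AtomicInteger;

    class Counters {
        var slow:Int = 0;
        val fast = new AtomicInteger(0);

        def bumpSlow() {
            // Serialized against every other atomic/when in this place.
            atomic { slow = slow + 1; }
        }

        def bumpFast() {
            // Hardware atomic update; no per-place lock involved.
            fast.incrementAndGet();
        }
    }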

Distribution: Places and at
at (p) S
- Executes statement S at place p
- The current activity is blocked until S completes
- Deep copy of the local object graph to the target place; the variables referenced from S define the roots of the graph to be copied
- GlobalRef[T] is used to create remote pointers across at boundaries

    class C {
        var x:Int;
        def this(n:Int) { x = n; }

        // Increment a remote counter.
        static def inc(c:GlobalRef[C]) {
            at (c.home) c().x++;
        }

        // Create a GlobalRef to a new C.
        static def make(init:Int) {
            val c = new C(init);
            return GlobalRef[C](c);
        }
    }

X10's Distributed Object Model
Objects live in a single place and can only be accessed in the place where they live.
Cross-place references:
- GlobalRef[T]: a reference to an object at one place that can be transmitted to other places
- PlaceLocalHandle[T]: a handle for a distributed data structure with state (objects) at many places; an optimized representation for a collection of GlobalRef[T] (one per place)
Implementing at:
- The compiler analyzes the body of the at and identifies the roots to copy (exposed variables)
- The complete object graph reachable from the roots is serialized and sent to the destination place
- A new (unrelated) copy of the object graph is created at the destination place
Controlling object graph serialization (see the sketch below):
- Instance fields of a class may be declared transient (they won't be copied)
- GlobalRef[T]/PlaceLocalHandle[T] serialize their id, not the referenced object
- Classes may implement the CustomSerialization interface for arbitrary user-defined behavior
Major evolutions in the object model: X10 1.5, 1.7, 2.0, and 2.1 (fourth time is the charm?)
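A small sketch of those serialization controls (the classes here are hypothetical, invented for illustration; only transient and GlobalRef are shown, not CustomSerialization):

    class BigThing { }

    class Stats {
        // Deep-copied whenever an `at` captures a Stats instance.
        val samples = new Rail[Double](8);

        // Marked transient: left out of the copied object graph.
        transient var scratch:Rail[Double] = null;

        // Only the global id crosses the wire; the BigThing itself stays put.
        val big = GlobalRef[BigThing](new BigThing());
    }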

Hello Whole World

    class HelloWholeWorld {
        public static def main(args:Rail[String]) {
            finish
                for (p in Place.places())
                    at (p)
                        async
                            Console.OUT.println(p + " says " + args(0));
            Console.OUT.println("Goodbye");
        }
    }

    % x10c++ HelloWholeWorld.x10
    % X10_NPLACES=4 ./a.out hello
    Place 0 says hello
    Place 2 says hello
    Place 3 says hello
    Place 1 says hello
    Goodbye

APGAS Idioms
Remote evaluation:

    v = at (p) evalThere(arg1, arg2);

Active message:

    at (p) async runThere(arg1, arg2);

SPMD:

    finish for (p in Place.places()) {
        at (p) async runEverywhere();
    }

Atomic remote update:

    at (ref.home) async atomic ref() += v;

Recursive parallel decomposition:

    def fib(n:Int):Int {
        if (n < 2) return 1;
        val f1:Int;
        val f2:Int;
        finish {
            async f1 = fib(n-1);
            f2 = fib(n-2);
        }
        return f1 + f2;
    }

Data exchange:

    // Swap row i (local) with row j (remote).
    val h = here;
    val row_i = rows()(i);
    finish at (p) async {
        val row_j = rows()(j);
        rows()(j) = row_i;
        at (h) async rows()(i) = row_j;
    }

A handful of key constructs cover a broad spectrum of patterns.

Outline
- X10 programming model overview: code snippets; larger examples (Row Swap from LU, Unbalanced Tree Search)
- X10 in use: scaling X10; agent simulation (Megaffic/XAXIS); Main Memory Map Reduce (M3R)
- Implementing X10: compiling X10; the X10 runtime
- Final thoughts

Row Swap from LU Benchmark
Programming problem: efficiently swap rows of a distributed matrix with another place.
- Exploit network capabilities
- Overlap communication with computation
[Figure: timeline of the row swap. The initiating place does local setup, launches at (dst) async { ... }, and blocks on finish; the destination place performs an asyncCopy get, its local swap, and an asyncCopy put; the initiating place then completes its own local swap.]

Row Swap from X10 LU Benchmark

    // Swap the row with index srcRow located here with the row with
    // index dstRow located at place dst.
    def rowSwap(matrix:PlaceLocalHandle[Matrix[Double]], srcRow:Int, dstRow:Int, dst:Place) {
        val srcBuffer = buffers();
        val srcBufferRef = new RemoteRail(srcBuffer);
        val size = matrix().getRow(srcRow, srcBuffer);
        finish {
            at (dst) async {
                val dstBuffer = buffers();
                finish {
                    Array.asyncCopy[Double](srcBufferRef, 0, dstBuffer, 0, size);
                }
                matrix().swapRow(dstRow, dstBuffer);
                Array.asyncCopy[Double](dstBuffer, 0, srcBufferRef, 0, size);
            }
        }
        matrix().setRow(srcRow, srcBuffer);
    }

Global Load Balancing: Unbalanced Tree Search Benchmark
- Count the nodes in a randomly generated tree
- Separable cryptographic random number generator
- Highly unbalanced, unpredictable trees
- Tree traversal can be easily relocated (no data dependencies, no locality)
Problems to be solved:
- Dynamically balance work across a large number of places efficiently
- Detect termination of the computation quickly
Conventional approach: a worker steals from a randomly chosen victim when local work is exhausted. Key problem: when does a worker know there is no more work?

Global Load Balance: The Lifeline Solution
Intuition: X10 already has a framework for distributed termination detection — finish. Use it! But this requires activities to terminate at some point.
Idea: let activities terminate after w rounds of failed steals, but ensure that a worker with work can distribute it (using at (p) async S) to nodes known to be idle.
Lifeline graphs: before a worker dies, it creates z lifelines to other nodes. These form a distribution overlay graph. What kind of graph? We need a low out-degree, low-diameter graph. Our paper/implementation uses a hypercube.

Scalable Global Load Balancing: Unbalanced Tree Search
Lifeline-based global work stealing [PPoPP 2011]:
- n random victims, then p lifelines (hypercube)
- Fixed graph with low degree and low diameter
- Synchronous (steal) then asynchronous (deal)
Root finish accounts for startup asyncs and lifeline asyncs, not random steal attempts.
Compact work queue (for shallow trees): represent intervals of siblings; a thief steals half of each work item (see the sketch below).
Sparse communication graph:
- Bounded list of potential random victims
- finish trades contention for latency
A genuine APGAS algorithm.
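A minimal sketch of the compact work-queue idea (the Interval class is hypothetical, introduced only to illustrate "intervals of siblings" and "a thief steals half of each work item"; it is not the benchmark's actual data structure):

    class Interval {
        var lower:Long;   // first sibling still to be processed
        var upper:Long;   // one past the last sibling

        def this(lower:Long, upper:Long) {
            this.lower = lower;
            this.upper = upper;
        }

        def size():Long { return upper - lower; }

        // A thief takes the upper half of the interval; the victim keeps the rest.
        def stealHalf():Interval {
            val mid = lower + size() / 2;
            val stolen = new Interval(mid, upper);
            upper = mid;
            return stolen;
        }
    }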

UTS Main Loop

    def process() {
        alive = true;
        while (!empty()) {
            while (!empty()) {
                processAtMostN();
                Runtime.probe();
                deal();
            }
            steal();
        }
        alive = false;
    }

    def steal() {
        val h = here.id;
        for (i:Int in 0..w) {
            if (!empty()) break;
            finish at (Place(victims(rnd.nextInt(m)))) async request(h, false);
        }
        for (lifeline:Int in lifelines) {
            if (!empty()) break;
            if (!lifelinesActivated(lifeline)) {
                lifelinesActivated(lifeline) = true;
                finish at (Place(lifeline)) async request(h, true);
            }
        }
    }

UTS: Handling Thieves

    def request(thief:Int, lifeline:Boolean) {
        val nodes = take();   // grab nodes from the local queue
        if (nodes == null) {
            if (lifeline) lifelineThieves.push(thief);
            return;
        }
        at (Place(thief)) async {
            if (lifeline) lifelineActivated(thief) = false;
            enqueue(nodes);   // add nodes to the local queue
        }
    }

    def deal() {
        while (!lifelineThieves.empty()) {
            val nodes = take();   // grab nodes from the local queue
            if (nodes == null) return;
            val thief = lifelineThieves.pop();
            at (Place(thief)) async {
                lifelineActivated(thief) = false;
                enqueue(nodes);   // add nodes to the local queue
                if (!alive) process();
            }
        }
    }

Outline
- X10 programming model overview: code snippets; larger examples (Row Swap from LU, Unbalanced Tree Search)
- X10 in use: scaling X10; agent simulation (Megaffic/XAXIS); Main Memory Map Reduce (M3R)
- Implementing X10: compiling X10; the X10 runtime
- Final thoughts

X10 At Scale
X10 has places and asyncs in each place. We want to:
- Handle millions of asyncs (towards billions)
- Handle tens of thousands of places (towards millions)
We need to:
- Scale up: shared-memory parallelism (today: 32 cores per place) — schedule many asyncs with a few hardware threads
- Scale out: distributed-memory parallelism (today: 50K places) — provide mechanisms for efficient distribution (data and control) and support distributed load balancing

DARPA PERCS Prototype (Power 775)
- Compute node: 32 Power7 cores at 3.84 GHz, 128 GB DRAM, peak performance 982 Gflops, Torrent interconnect
- Drawer: 8 nodes
- Rack: 8 to 12 drawers
- Full prototype: up to 1,740 compute nodes, up to 55,680 cores, up to 1.7 petaflops (1 petaflops with 1,024 compute nodes)

Eight Benchmarks
HPC Challenge benchmarks:
- Linpack: TOP500 (flops)
- Stream Triad: local memory bandwidth
- Random Access: distributed memory bandwidth
- Fast Fourier Transform: mix
Machine learning kernels:
- KMeans: graph clustering
- SSCA1: pattern matching
- SSCA2: irregular graph traversal
- UTS: unbalanced tree traversal
Implemented in X10 as pure scale-out tests: one core = one place = one main async. Native libraries for sequential math kernels: ESSL, FFTW, SHA1.

Performance at Scale (Weak Scaling)

    Benchmark | Cores | Absolute performance at scale | Parallel efficiency (weak scaling) | Performance relative to best available implementation
    Stream | 55,680 | 397 TB/s | 98% | 85% (lack of prefetching)
    FFT | 32,768 | 27 Tflops | 93% | 40% (no tuning of sequential code)
    Linpack | 32,768 | 589 Tflops | 80% | 80% (mix of limitations)
    RandomAccess | 32,768 | 843 Gups | 100% | 76% (network stack overhead)
    KMeans | 47,040 | depends on parameters | 97.8% | 66% (vectorization issue)
    SSCA1 | 47,040 | depends on parameters | 98.5% | 100%
    SSCA2 | 47,040 | 245 B edges/s | > 75% | no comparison data
    UTS (geometric) | 55,680 | 596 B nodes/s | 98% | reference code does not scale; 4x to 16x faster than UPC code

HPCC Class 2 Competition: Best Performance Award

IBM Mega Traffic Simulator (Megaffic)
IBM Research has created a large-scale multi-agent traffic simulator, built on XAXIS (IBM extensible Agent eXecution InfraStructure), a scalable simulation engine supporting parallel execution environments.
- Inputs: map data, probe car data, and traffic demand data (traffic demand estimation); analysis by mathematical modeling yields link cost prediction (road transition cost)
- Driver behavior modeling: preferences of drivers are automatically generated from probe car data; a preference is a set of weights over policies such as minimum travel time, minimum distance, and minimum number of turns, and the weights are learned automatically from the data
- Outputs and visualization: e.g. CO2 emission, travel time, speed, number of vehicles; graph editor and simulation base map
This project is funded by the PREDICT project of the Ministry of Internal Affairs and Communications, Japan.
[Figure: Megaffic data flow from map, probe, and demand data through mathematical modeling and driver behavior modeling into the XAXIS-based traffic simulator and visualization.]


City Planning Example: What-If Simulation of Sendai City
After the Great East Japan Earthquake, there were cave-ins on the roads in Sendai city and heavy traffic jams. The picture below shows simulation results for different (imaginary) scenarios of actions to reduce traffic jams by road closure. This what-if simulation capability can help traffic administrators pick the best decision out of several choices, even in new situations. Several KPIs indicate how good each scenario is.
[Figure: four scenarios — Scenario 1: no action (traffic jam in the center area of the city, near the caving); Scenario 2: road closure A; Scenario 3: road closure B; Scenario 4: road closures C (the traffic jam disappears in the center, but appears in the upper area).]

Why X10?
- X10/Java interoperability lowered the risk of adopting X10
- Scale out the existing Java-based simulation framework with APGAS primitives
- Gradual porting of core code to X10 (enabled Native X10 and BlueGene/Q)
[Figure: a road network partitioned across X10 places — roads and cross points (CPs) distributed over Place 0, Place 1, and Place 2.]

XAXIS Software Stack
The following diagram illustrates the software stack of XAXIS and its applications.
[Figure, top to bottom: social simulations written in Java (e.g. traffic, CO2 emission, auction, marketing) and social simulations written in X10; the XAXIS API with its Java bridge and X10 bridge; XAXIS, the X10-based simulation runtime; X10 (Java and C++ backends).]

M3R: A Hadoop Re-implementation in Main Memory, Using X10
Hadoop:
- Popular Java API for Map/Reduce programming
- Out of core, resilient, scalable (1000 nodes)
- Based on HDFS (a resilient distributed filesystem)
M3R/Hadoop:
- Reimplementation of the Hadoop API using Managed X10 (X10 compiled to Java)
- X10 provides a scalable multi-JVM runtime with efficient communication
- Existing Hadoop 1.0 applications just work
- Reuses HDFS (and some other parts of Hadoop)
- In-memory: the problem size must fit in aggregate cluster RAM
- Not resilient: the cluster scales until the MTBF barrier
- But considerably faster (closer to HPC speeds): trades resilience for performance (both latency and throughput)

M3R Architecture
[Figure: Java Hadoop jobs run either directly on the JVM-only Hadoop M/R engine over HDFS, or through a Hadoop/M3R adaptor onto the JVM/native M3R engine (written in X10 and Java) over HDFS; X10 M3R jobs and Java M3R jobs target the M3R engine directly.]
The core M3R engine provides X10 and Java Map/Reduce APIs against which application programs can be written. On top of the engine, a Hadoop API compatibility layer allows unmodified Hadoop Map/Reduce jobs to be executed on the engine. The compatibility layer is written using a mix of X10 and Java code and heavily uses the Java interoperability capabilities of Managed X10.

Intuition: Where Does M3R Gain Performance Relative to Hadoop?
- Reducing disk I/O
- Reducing network communication
- Reducing serialization/deserialization: less translation of object graphs to byte buffers and back
- Reducing other costs: job submission speed; startup time (JVM reuse is synergistic with JIT compilation)
- Partition stability (iterative jobs): enables the application programmer to pin data in memory within the Hadoop APIs; the reducer associated with a given partition number will always be run in the same JVM
For details, see [Shinnar et al., VLDB 2012].

Additional X10 Community Applications
X10 applications/frameworks:
- ANUChem: http://cs.anu.edu.au/~josh.milthorpe/anuchem.html [Milthorpe, IPDPS 2013]
- ScaleGraph: https://sites.google.com/site/scalegraph/ [Dayarathna et al., X10 2012]
- Invasive Computing [Bungartz et al., X10 2013]
X10 as a coordination language for scale-out:
- SatX10: http://x10-lang.org/satx10 [SAT 2012 Tools]
- Power system contingency analysis [Khaitan & McCalley, X10 2013]
X10 as a target language:
- MatLab: http://www.sable.mcgill.ca/mclab/mix10.html [Kumar & Hendren, X10 2013]
- StreamX10 [Wei et al., X10 2012]
X10 publications: http://www.x10-lang.org/x10-community/publications-using-x10.html

Outline
- X10 programming model overview: code snippets; larger examples (Row Swap from LU, Unbalanced Tree Search)
- X10 in use: scaling X10; agent simulation (Megaffic/XAXIS); Main Memory Map Reduce (M3R)
- Implementing X10: compiling X10; the X10 runtime
- Final thoughts

X10 Target Environments
- High-end large HPC clusters: BlueGene/P and BlueGene/Q; Power 775 (aka the PERCS machine, P7IH); x86 + InfiniBand, Power + InfiniBand. Goal: deliver scalable performance competitive with C+MPI.
- Medium-scale commodity systems: ~100 nodes (~1000 cores and ~1 terabyte of main memory). Goal: deliver main-memory performance with a simple programming model (accessible to Java programmers).
- Developer laptops: Linux, Mac OS X, Windows; Eclipse-based IDE. Goal: support developer productivity.

X10 Implementation Summary
X10 implementations:
- C++ based ("Native X10"): multi-process (one place per process; multi-node); Linux, AIX, MacOS, Cygwin, BlueGene; x86, x86_64, PowerPC, GPUs (CUDA, language subset)
- JVM based ("Managed X10"): multi-process (one place per JVM process; multi-node); currently limited on Windows to a single process (single place); runs on any Java 6 or Java 7 JVM
X10DT (X10 IDE) available for Windows, Linux, Mac OS X:
- Based on Eclipse 3.7
- Supports many core development tasks, including remote build/execute facilities

X10 Compilation & Execution
[Figure: the compilation pipeline. X10 source → compiler front end (parsing, type check) → X10 AST → AST optimizations and lowering. Managed X10 path: Java back end (with Java interop support) → Java code generation → Java source → Java compiler → Java bytecode → Java VMs over the XRJ runtime, alongside existing Java applications (JNI where needed). Native X10 path: C++ back end → C++ (and CUDA) code generation → C++ source → platform compilers → native executable over the XRC runtime in the native environment (CPU, GPU, etc.), alongside existing native (C/C++/etc.) applications. Both paths run over X10RT.]

Java vs. C++ as Implementation Substrate
Java:
- Just-in-time compilation (blessing and curse)
- Sophisticated optimizations and runtime services for OO language features
- Straying too far from Java semantics can be quite painful
- Implementing a language runtime in vanilla Java is limiting (object model trickery; implementing Cilk-style work stealing)
C++:
- Ahead-of-time compilation (blessing and curse)
- Minimal optimization of OO language features
- Implementing a language runtime layer: ability to write low-level/unsafe code (flexibility)
- Much fewer built-in services to leverage (blessing and curse)
The dual path increases effort and constrains language design, but it also widens applicability and creates interesting opportunities.

X10 Runtime
- X10RT (X10 runtime transport): active messages, collectives, RDMAs; implemented in C; emulation layer
- Native runtime: processes, threads, atomic operations; object model (layout, RTT, serialization); two versions, C++ and Java
- XRX (X10 runtime in X10): implements APGAS (async, finish, at); X10 code compiled to C++ or Java
- Core X10 libraries: x10.array, io, util, util.concurrent
[Figure: the stack — X10 application; X10 core class libraries; XRX; native runtime; X10RT over PAMI, TCP/IP, MPI, DCMF, and CUDA.]

XRX: Async Implementation
There are many more logical tasks (asyncs) than execution units (threads).
- Each async is encoded as an X10 Activity object: the async body is encoded as an X10 closure, together with a reference to the governing finish state and clocks; the Activity object (or a reference to it) is not exposed to the programmer
- Per-place scheduler: a per-place pool of worker threads; a per-worker deque of pending activities; cooperative scheduling, with an activity assigned to one thread from start to finish; the number of threads in the pool is dynamically adjusted to compensate for blocked activities
- Work stealing: a worker processes its own pending activities first, then steals an activity from a random coworker
A tiny example of this many-asyncs-over-few-threads regime is sketched below.
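A minimal sketch (trivial, hypothetical work) that creates far more asyncs than there are worker threads; the per-place pool, whose size can be set with the X10_NTHREADS environment variable, multiplexes them:

    class ManyAsyncs {
        public static def main(args:Rail[String]) {
            // One million logical asyncs, scheduled over a handful of workers.
            finish for (i in 1..1000000) async {
                val v = i * i;   // stand-in for real work
            }
            Console.OUT.println("all asyncs have terminated");
        }
    }

For example, after compiling with x10c++, running with X10_NTHREADS=8 ./a.out pins the worker pool at eight threads per place.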

XRX: At Implementation
at (p) async
- Source side: synthesize an active message — async id + serialized heap + control state (finish, clocks); the compiler identifies the captured variables (roots); the runtime serializes the heap reachable from the roots
- Destination side: decode the active message; polling (when idle and on runtime entry); a new Activity object is pushed onto a worker's deque
at (p)
- Implemented as an async at plus a return message (see the sketch below)
- The parent activity blocks waiting for the return message
- Normal or abnormal termination (exceptions and stack traces are propagated)
ateach (broadcast)
- Elementary software routing
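A sketch of that "async at plus return message" idea, written in user-level X10 purely as an illustration (the RemoteEval and Box classes are hypothetical; the real XRX implementation works below this level):

    class RemoteEval {
        static class Box { var value:Long = 0; }

        // Behaves roughly like: val r = at (p) f();
        static def eval(p:Place, f:()=>Long):Long {
            val box = GlobalRef[Box](new Box());
            finish at (p) async {
                val v = f();                                // evaluate at p
                at (box.home) async { box().value = v; }    // the "return message"
            }
            // The finish guarantees the return message has been processed.
            return box().value;
        }

        public static def main(args:Rail[String]) {
            val r = eval(here, () => 42 as Long);
            Console.OUT.println("result = " + r);
        }
    }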

XRX: Finish Implementation
Distributed termination detection is hard: arbitrary message reordering.
Base algorithm:
- One row of n counters per place, with n places
- Increment on spawn, decrement on termination, message on decrement
- finish is triggered when the sum of each column is zero
Basic dynamic optimization:
- Assume local until the first at is executed
- Local aggregation and message batching (up to local quiescence)
A non-distributed sketch of the counting scheme follows.
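A minimal, non-distributed sketch of that counting scheme (illustration only, not the XRX code; in reality each row lives at its own place and decrements arrive as messages): spawned(src, dst) counts asyncs that place src has launched at place dst, and the finish may complete once every column's spawn count equals the terminations observed at that place — equivalently, once every column of spawn-minus-termination counts sums to zero.

    class FinishCounters {
        val n:Long;                // number of places
        val spawned:Rail[Long];    // spawned(src*n + dst): asyncs spawned from src at dst
        val done:Rail[Long];       // done(dst): asyncs that have terminated at dst

        def this(n:Long) {
            this.n = n;
            spawned = new Rail[Long](n * n);
            done = new Rail[Long](n);
        }

        def spawn(src:Long, dst:Long) { spawned(src * n + dst) = spawned(src * n + dst) + 1; }
        def terminate(dst:Long)       { done(dst) = done(dst) + 1; }

        // The governing finish may complete only when this is true.
        def quiescent():Boolean {
            for (dst in 0..(n-1)) {
                var col:Long = 0;
                for (src in 0..(n-1)) col += spawned(src * n + dst);
                if (col != done(dst)) return false;
            }
            return true;
        }
    }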

XRX: Scalable Finish Implementation
Distributed termination detection is hard: arbitrary message reordering.
Base algorithm (as above):
- One row of n counters per place, with n places
- Increment on spawn, decrement on termination, message on decrement
- finish is triggered when the sum of each column is zero
Basic dynamic optimization: assume local until the first at is executed; local aggregation and message batching (up to local quiescence).
Additional optimizations needed for scaling (at 50K places):
- Pattern-based specialization: local finish, SPMD finish, ping-pong, single async
- Software routing
- Uncounted asyncs
- Runtime optimizations + static analysis + pragmas

Scalable Communication: High-Performance Interconnects
RDMAs:
- Efficient remote memory operations
- Fundamentally asynchronous: async semantics are a good fit for APGAS

    Array.asyncCopy[Double](src, srcIndex, dst, dstIndex, size);

Collectives:
- Multi-point coordination and communication
- All kinds of restrictions today: a poor fit for APGAS today
- Bright future (MPI-3 and much more): a good fit for APGAS

    Team.WORLD.barrier(here.id);
    columnTeam.addReduce(columnRole, localMax, Team.MAX);

Scalable Memory Management
Garbage collector:
- Problem 1: distributed heap. Solution: segregate local and remote references; GC for local references; distributed GC remains an experiment (not an issue in practice)
- Problem 2: risk of overhead and jitter. Solution: maximize memory reuse (the issue is contained)
Congruent memory allocator:
- Problem: not all pages are created equal — large pages are required to minimize TLB misses; registered pages are required for RDMAs; congruent addresses are required for RDMAs at scale
- Solution: a dedicated memory allocator — a configurable congruent registered memory region, backed by large pages if available, used only for performance-critical arrays

Final Thoughts
The X10 approach:
- Augment a full-fledged modern language with core APGAS constructs
- Problem selection: do a few key things well, defer many others
- Enable the programmer to evolve code from prototype to scalable solution
- A mostly pragmatic/conservative language design (except when it's not)
X10 2.4 (today) is not the end of the story:
- A base language in which to build higher-level frameworks (arrays, ScaleGraph, M3R)
- A target language for compilers (MatLab, DSLs)
- APGAS runtime: the X10 runtime as Java and C++ libraries
- The APGAS programming model in other languages
http://x10-lang.org

References
- Main X10 website: http://x10-lang.org
- A Brief Introduction to X10 (for the HPC Programmer): http://x10.sourceforge.net/documentation/intro/latest/html/
- X10 2012 HPC Challenge submission: http://hpcchallenge.org; http://x10.sourceforge.net/documentation/hpcc/x10-hpcc2012-paper.pdf
- Unbalanced Tree Search in X10 (PPoPP 2011): http://dl.acm.org/citation.cfm?id=1941582
- M3R (VLDB 2012): http://vldb.org/pvldb/vol5/p1736_avrahamshinnar_vldb2012.pdf