Accelerating Big Data Processing with Hadoop, Spark and Memcached

Accelerating Big Data Processing with Hadoop, Spark and Memcached
Talk at HPC Advisory Council Switzerland Conference (Mar '15)
by Dhabaleswar K. (DK) Panda, The Ohio State University
E-mail: panda@cse.ohio-state.edu
http://www.cse.ohio-state.edu/~panda

Introduction to Big Data Applications and Analytics
- Big Data has become one of the most important elements of business analytics
- Provides groundbreaking opportunities for enterprise information management and decision making
- The amount of data is exploding; companies are capturing and digitizing more information than ever
- The rate of information growth appears to be exceeding Moore's Law
- Commonly accepted 3V's of Big Data: Volume, Velocity, Variety (Michael Stonebraker: Big Data Means at Least Three Different Things, http://www.nist.gov/itl/ssd/is/upload/nist-stonebraker.pdf)
- 5V's of Big Data: 3V + Value, Veracity

Data Management and Processing on Modern Clusters
- Substantial impact on designing and utilizing modern data management and processing systems in multiple tiers
  - Front-end data accessing and serving (online): Memcached + DB (e.g., MySQL), HBase
  - Back-end data analytics (offline): HDFS, MapReduce, Spark
[Diagram: Internet → front-end tier for data accessing and serving (Web servers, Memcached + DB (MySQL), NoSQL DB (HBase)) → back-end tier for data analytics (apps/jobs over MapReduce, Spark, and HDFS)]

Overview of Apache Hadoop Architecture
- Open-source implementation of Google MapReduce, GFS, and BigTable for Big Data analytics
- Hadoop Common utilities (RPC, etc.), HDFS, MapReduce, YARN
- http://hadoop.apache.org
[Diagram: Hadoop 1.x stack — MapReduce (cluster resource management & data processing) over HDFS and Hadoop Common/Core (RPC, ...); Hadoop 2.x stack — MapReduce and other data-processing models over YARN (cluster resource management & job scheduling), HDFS, and Hadoop Common/Core (RPC, ...)]
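
To make the MapReduce programming model in the diagram concrete, here is a minimal word-count sketch against the standard Apache Hadoop 2.x Java API; the class name and the input/output paths passed on the command line are illustrative.

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {
      public static class TokenMapper extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();
        @Override
        protected void map(Object key, Text value, Context ctx)
            throws IOException, InterruptedException {
          for (String tok : value.toString().split("\\s+")) {  // emit (word, 1) per token
            if (!tok.isEmpty()) { word.set(tok); ctx.write(word, ONE); }
          }
        }
      }

      public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> vals, Context ctx)
            throws IOException, InterruptedException {
          int sum = 0;
          for (IntWritable v : vals) sum += v.get();           // aggregate counts shuffled to this reducer
          ctx.write(key, new IntWritable(sum));
        }
      }

      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "wordcount");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenMapper.class);
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }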

Spark Architecture Overview
- An in-memory data-processing framework: iterative machine learning jobs, interactive data analytics
- Scala-based implementation; runs standalone or on YARN or Mesos
- Scalable and communication intensive: wide dependencies between Resilient Distributed Datasets (RDDs); MapReduce-like shuffle operations to repartition RDDs; sockets-based communication
- http://spark.apache.org
[Diagram: Driver (SparkContext) coordinating with a Master (and Zookeeper) and Workers that read from HDFS]
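
A minimal sketch of the RDD model and its shuffle, using Spark's Java API as it stood in the 1.x releases this talk targets (input/output paths are illustrative):

    import java.util.Arrays;
    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaPairRDD;
    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.api.java.JavaSparkContext;
    import scala.Tuple2;

    public class RddShuffleSketch {
      public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("rdd-shuffle-sketch");
        JavaSparkContext sc = new JavaSparkContext(conf);

        JavaRDD<String> lines = sc.textFile("hdfs:///data/input");  // illustrative path
        // Spark 1.x flatMap returns an Iterable; Spark 2.x changed this to an Iterator.
        JavaRDD<String> words = lines.flatMap(l -> Arrays.asList(l.split("\\s+")));
        JavaPairRDD<String, Integer> counts = words
            .mapToPair(w -> new Tuple2<>(w, 1))   // narrow dependency: no data movement
            .reduceByKey((a, b) -> a + b);        // wide dependency: MapReduce-like shuffle repartitions the RDD
        counts.saveAsTextFile("hdfs:///data/output");
        sc.stop();
      }
    }

The reduceByKey at the end is exactly the kind of shuffle whose sockets-based data path the RDMA design discussed later replaces.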

Memcached Architecture
[Diagram: Internet → Web frontend servers (Memcached clients) → high-performance networks → Memcached servers and database servers; each server node has CPUs, main memory, SSD, and HDD]
- Three-layer architecture of Web 2.0: Web servers, Memcached servers, database servers
- Distributed caching layer: allows aggregating spare memory from multiple nodes; general purpose
- Typically used to cache database queries and results of API calls
- Scalable model, but typical usage is very network intensive
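
The cache-aside usage pattern behind "cache database queries" looks like the following with the spymemcached Java client; the server address, key, TTL, and database stub are illustrative.

    import java.net.InetSocketAddress;
    import net.spy.memcached.MemcachedClient;

    public class CacheAsideSketch {
      public static void main(String[] args) throws Exception {
        // One Memcached server for brevity; real deployments pass a list of servers.
        MemcachedClient mc = new MemcachedClient(new InetSocketAddress("memc-server-1", 11211));
        String key = "user:42:profile";
        Object value = mc.get(key);                 // 1. try the cache first
        if (value == null) {
          value = queryDatabase(key);               // 2. miss: fall back to the database
          mc.set(key, 300, value);                  // 3. repopulate with a 300-second TTL
        }
        System.out.println(value);
        mc.shutdown();
      }

      private static String queryDatabase(String key) {
        return "...row for " + key + "...";         // stand-in for a MySQL lookup
      }
    }

Every get/set here crosses the network, which is why this layer is so sensitive to interconnect performance.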

Presentation Outline
- Challenges for Accelerating Big Data Processing
- The High-Performance Big Data (HiBD) Project
- RDMA-based designs for Apache Hadoop and Spark
  - Case studies with HDFS, MapReduce, and Spark
  - Sample performance numbers for the RDMA-Hadoop 2.x 0.9.6 release
- RDMA-based designs for Memcached and HBase
  - RDMA-based Memcached with SSD-assisted hybrid memory
  - RDMA-based HBase
- Challenges in Designing Benchmarks for Big Data Processing
  - OSU HiBD Benchmarks
- Conclusion and Q&A

Drivers for Modern HPC Clusters
- High-end computing (HEC) is growing dramatically: high-performance computing, Big Data computing
- Technology advancement:
  - Multi-core/many-core technologies and accelerators
  - Remote Direct Memory Access (RDMA)-enabled networking (InfiniBand and RoCE)
  - Solid State Drives (SSDs) and Non-Volatile Random-Access Memory (NVRAM)
  - Accelerators (NVIDIA GPGPUs and Intel Xeon Phi)
[Photos: Tianhe-2, Titan, Stampede, Tianhe-1A]

Interconnects and Protocols in the OpenFabrics Stack
[Diagram: application/middleware interface (Sockets or Verbs) over eight protocol/interconnect options —
- 1/10/40 GigE: kernel-space TCP/IP, Ethernet driver, Ethernet adapter and switch
- IPoIB: kernel-space TCP/IP, InfiniBand adapter and switch
- 10/40 GigE-TOE: TCP/IP with hardware offload, Ethernet adapter and switch
- RSockets: user-space RDMA, InfiniBand adapter and switch
- SDP: user-space RDMA, InfiniBand adapter and switch
- iWARP: user-space RDMA, iWARP adapter and Ethernet switch
- RoCE: user-space RDMA, RoCE adapter and Ethernet switch
- IB Native: user-space RDMA over Verbs, InfiniBand adapter and switch]

Wide Adoption of RDMA Technology
- Message Passing Interface (MPI) for HPC
- Parallel file systems: Lustre, GPFS
- Delivering excellent performance: < 1.0 microsecond latency, 100 Gbps bandwidth, 5-10% CPU utilization
- Delivering excellent scalability

Challenges in Designing Communication and I/O Libraries for Big Data Systems
[Diagram: layered stack —
- Applications and Benchmarks
- Big Data Middleware (HDFS, MapReduce, HBase, Spark and Memcached); upper-level changes?
- Programming Models (Sockets); other RDMA protocols?
- Communication and I/O Library: point-to-point communication, threaded models and synchronization, virtualization, I/O and file systems, QoS, fault-tolerance
- Networking Technologies (InfiniBand, 1/10/40GigE and intelligent NICs), Commodity Computing System Architectures (multi- and many-core architectures and accelerators), Storage Technologies (HDD and SSD)]

Can Big Data Processing Systems be Designed with High-Performance Networks and Protocols?
- Current design: Application → Sockets → 1/10 GigE network
- Our approach: Application → OSU design → Verbs interface → 10 GigE or InfiniBand
- Sockets not designed for high performance:
  - Stream semantics often mismatch for upper layers
  - Zero-copy not available for non-blocking sockets

[Outline revisited; next section: The High-Performance Big Data (HiBD) Project]

Overview of the HiBD Project and Releases
- RDMA for Apache Hadoop 2.x (RDMA-Hadoop-2.x)
- RDMA for Apache Hadoop 1.x (RDMA-Hadoop)
- RDMA for Memcached (RDMA-Memcached)
- OSU HiBD-Benchmarks (OHB)
- http://hibd.cse.ohio-state.edu
- User base: 95 organizations from 18 countries; more than 2,900 downloads
- RDMA for Apache HBase and Spark will be available in the near future

RDMA for Apache Hadoop 2.x Distribution
- High-performance design of Hadoop over RDMA-enabled interconnects
  - Native InfiniBand and RoCE support at the verbs level for the HDFS, MapReduce, and RPC components
  - Enhanced HDFS with in-memory and heterogeneous storage
  - High-performance design of MapReduce over Lustre
  - Easily configurable for different running modes (HHH, HHH-M, HHH-L, and MapReduce over Lustre) and different protocols (native InfiniBand, RoCE, and IPoIB)
- Current release: 0.9.6
  - Based on Apache Hadoop 2.6.0; compliant with Apache Hadoop 2.6.0 APIs and applications
  - Tested with Mellanox InfiniBand adapters (DDR, QDR and FDR), RoCE support with Mellanox adapters, various multi-core platforms, and different file systems with disks and SSDs, and Lustre
- http://hibd.cse.ohio-state.edu

RDMA for Memcached Distribution
- High-performance design of Memcached over RDMA-enabled interconnects
  - Native InfiniBand and RoCE support at the verbs level for the Memcached and libmemcached components
  - Easily configurable for native InfiniBand, RoCE, and the traditional sockets-based support (Ethernet and InfiniBand with IPoIB)
  - High-performance design of SSD-assisted hybrid memory
- Current release: 0.9.3
  - Based on Memcached 1.4.22 and libmemcached 1.0.18; compliant with libmemcached APIs and applications
  - Tested with Mellanox InfiniBand adapters (DDR, QDR and FDR), RoCE support with Mellanox adapters, various multi-core platforms, and SSD
- http://hibd.cse.ohio-state.edu

OSU HiBD Micro-Benchmark (OHB) Suite: Memcached
- Released in OHB 0.7.1 (ohb_memlat)
- Evaluates the performance of stand-alone Memcached
- Three different micro-benchmarks:
  - SET: micro-benchmark for Memcached set operations
  - GET: micro-benchmark for Memcached get operations
  - MIX: micro-benchmark for a mix of Memcached set/get operations (read:write ratio is 90:10)
- Calculates average latency of Memcached operations
- Can measure throughput in transactions per second
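
In the spirit of ohb_memlat (whose internals are not reproduced here), a sketch of how average set/get latency can be measured with spymemcached; the iteration count and 4 KB payload are illustrative.

    import java.net.InetSocketAddress;
    import net.spy.memcached.MemcachedClient;

    public class MemLatencySketch {
      public static void main(String[] args) throws Exception {
        MemcachedClient mc = new MemcachedClient(new InetSocketAddress("memc-server-1", 11211));
        byte[] payload = new byte[4096];
        int iters = 10000;
        long setNanos = 0, getNanos = 0;
        for (int i = 0; i < iters; i++) {
          String key = "bench:" + i;
          long t0 = System.nanoTime();
          mc.set(key, 0, payload).get();            // block until the asynchronous SET completes
          setNanos += System.nanoTime() - t0;
          t0 = System.nanoTime();
          mc.get(key);                              // synchronous GET
          getNanos += System.nanoTime() - t0;
        }
        System.out.printf("avg SET: %.2f us, avg GET: %.2f us%n",
            setNanos / 1e3 / iters, getNanos / 1e3 / iters);
        mc.shutdown();
      }
    }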

[Outline revisited; next section: RDMA-based designs for Apache Hadoop and Spark — case studies with HDFS, MapReduce, and Spark]

Acceleration Case Studies and In-Depth Performance Evaluation
- RDMA-based designs and performance evaluation:
  - HDFS
  - MapReduce
  - Spark

Design Overview of HDFS with RDMA
[Diagram: Applications → HDFS write path, reachable through the Java socket interface over 1/10 GigE or IPoIB networks, or through the Java Native Interface (JNI) into the OSU verbs-based design over RDMA-capable networks (IB, 10GE/iWARP, RoCE, ...)]
- Design features: RDMA-based HDFS write; RDMA-based HDFS replication; parallel replication support; on-demand connection setup; InfiniBand/RoCE support
- Enables high-performance RDMA communication while supporting the traditional socket interface
- JNI layer bridges the Java-based HDFS with the communication library written in native code
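
The JNI bridging pattern mentioned above can be sketched as follows; the library and method names are hypothetical, not the actual internals of the OSU package.

    public class RdmaCommBridge {
      static {
        System.loadLibrary("rdmacomm");  // hypothetical native library implementing the verbs calls in C
      }

      // Declared in Java, implemented in native code against the verbs API.
      private native long connect(String dataNode, int port);
      private native int writeBlock(long connHandle, byte[] data, int len);
      private native void close(long connHandle);

      public void sendPacket(String dataNode, int port, byte[] packet) {
        long conn = connect(dataNode, port);   // on-demand connection setup
        writeBlock(conn, packet, packet.length);
        close(conn);
      }
    }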

Communication Times in HDFS
[Chart: communication time (s) vs. file size (2GB-10GB) on a cluster with HDD DataNodes, comparing 10GigE, IPoIB (QDR), and OSU-IB (QDR); reduced by 30%]
- 30% improvement in communication time over IPoIB (QDR)
- 56% improvement in communication time over 10GigE
- Similar improvements are obtained for SSD DataNodes
- N. S. Islam, M. W. Rahman, J. Jose, R. Rajachandrasekar, H. Wang, H. Subramoni, C. Murthy and D. K. Panda, High Performance RDMA-Based Design of HDFS over InfiniBand, Supercomputing (SC), Nov 2012
- N. Islam, X. Lu, W. Rahman, and D. K. Panda, SOR-HDFS: A SEDA-based Approach to Maximize Overlapping in RDMA-Enhanced HDFS, HPDC '14, June 2014

Enhanced HDFS with In-Memory and Heterogeneous Storage
[Diagram: Applications → Triple-H, with data placement policies, eviction/promotion, and hybrid replication over heterogeneous storage: RAM disk, SSD, HDD, and Lustre]
- Design features:
  - Three modes: Default (HHH), In-Memory (HHH-M), Lustre-Integrated (HHH-L)
  - Policies to efficiently utilize the heterogeneous storage devices: RAM, SSD, HDD, Lustre
  - Eviction/promotion based on data usage pattern
  - Hybrid replication
  - Lustre-Integrated mode: Lustre-based fault-tolerance
- N. Islam, X. Lu, M. W. Rahman, D. Shankar, and D. K. Panda, Triple-H: A Hybrid Approach to Accelerate HDFS on HPC Clusters with Heterogeneous Storage Architecture, CCGrid '15, May 2015

Enhanced HDFS with In-Memory and Heterogeneous Storage: Three Modes
- HHH (default): heterogeneous storage devices with hybrid replication schemes
  - I/O operations over RAM disk, SSD, and HDD
  - Hybrid replication (in-memory and persistent storage)
  - Better fault-tolerance as well as performance
- HHH-M: high-performance in-memory I/O operations
  - Memory replication (in-memory only, with lazy persistence)
  - As much performance benefit as possible
- HHH-L: Lustre integrated
  - Takes advantage of the Lustre available on HPC clusters
  - Lustre-based fault-tolerance (no HDFS replication)
  - Reduced local storage space usage

Performance Improvement on TACC Stampede (HHH)
[Charts: TestDFSIO total throughput (MBps) for write and read, IPoIB (FDR) vs. OSU-IB (FDR), increased by 7x (write) and 2x (read); RandomWriter execution time (s) for cluster size:data size of 8:30, 16:60, and 32:120, reduced by 3x]
- For 160GB TestDFSIO on 32 nodes: write throughput improves 7x over IPoIB (FDR); read throughput improves 2x over IPoIB (FDR)
- For 120GB RandomWriter on 32 nodes: 3x improvement over IPoIB (FDR)
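
For reference, TestDFSIO and RandomWriter ship with stock Apache Hadoop; typical 2.x invocations look like the following (exact flag names vary across Hadoop versions, e.g. older releases take -fileSize with a size in MB instead of -size):

    $ hadoop jar hadoop-mapreduce-client-jobclient-2.6.0-tests.jar TestDFSIO \
        -write -nrFiles 128 -size 1GB
    $ hadoop jar hadoop-mapreduce-examples-2.6.0.jar randomwriter /rand-out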

Performance Improvement on SDSC Gordon (HHH-L)
[Chart: Sort execution time (s) vs. data size (20, 40, 60 GB), comparing HDFS-IPoIB (QDR), Lustre-IPoIB (QDR), and OSU-IB (QDR); reduced by 54%. Table: storage used for 60GB Sort — HDFS-IPoIB (QDR): 360 GB; Lustre-IPoIB (QDR): 120 GB; OSU-IB (QDR): 240 GB]
- For 60GB Sort on 8 nodes: 24% improvement over default HDFS; 54% improvement over Lustre; 33% storage space saving compared to default HDFS

Evaluation with PUMA and CloudBurst (HHH-L/HHH)
[Charts: PUMA SequenceCount and Grep execution time (s) on OSU RI, comparing HDFS-IPoIB (QDR), Lustre-IPoIB (QDR), and OSU-IB (QDR), reduced by 17% and 29.5%; CloudBurst on TACC Stampede, HDFS-IPoIB (FDR): 60.24 s vs. OSU-IB (FDR): 48.3 s]
- SequenceCount with HHH-L: 17% benefit over Lustre, 8% over HDFS
- Grep with HHH: 29.5% benefit over Lustre, 13.2% over HDFS
- CloudBurst with HHH: 19% improvement over HDFS

[Section divider repeated: Acceleration Case Studies and In-Depth Performance Evaluation; next: MapReduce]

Design Overview of MapReduce with RDMA
[Diagram: Applications → MapReduce (Job Tracker; Task Tracker with Map and Reduce), reachable through the Java socket interface over 1/10 GigE or IPoIB networks, or through the Java Native Interface (JNI) into the OSU verbs-based design over RDMA-capable networks (IB, 10GE/iWARP, RoCE, ...)]
- Design features:
  - RDMA-based shuffle
  - Prefetching and caching of map output
  - Efficient shuffle algorithms
  - In-memory merge
  - On-demand shuffle adjustment
  - Advanced overlapping: map, shuffle, and merge; shuffle, merge, and reduce
  - On-demand connection setup
  - InfiniBand/RoCE support
- Enables high-performance RDMA communication while supporting the traditional socket interface
- JNI layer bridges the Java-based MapReduce with the communication library written in native code

Advanced Overlapping among Different Phases
[Diagram: default architecture vs. enhanced overlapping vs. advanced overlapping of the map, shuffle, merge, and reduce phases]
- A hybrid approach to achieve the maximum possible overlapping in MapReduce across all phases compared to other approaches:
  - Efficient shuffle algorithms
  - Dynamic and efficient switching
  - On-demand shuffle adjustment
- M. W. Rahman, X. Lu, N. S. Islam, and D. K. Panda, HOMR: A Hybrid Approach to Exploit Maximum Overlapping in MapReduce over High Performance Interconnects, ICS, June 2014

Performance Evaluation of Sort and TeraSort
[Charts: job execution time (s) comparing IPoIB, UDA-IB, and OSU-IB — Sort on the OSU cluster (QDR) at 60/120/240 GB on 16/32/64 nodes, reduced by 40%; TeraSort on TACC Stampede (FDR) at 80/160/320 GB on 16/32/64 nodes, reduced by 38%]
- Sort on the OSU cluster: for 240GB on 64 nodes (512 cores), 40% improvement over IPoIB (QDR) with HDD used for HDFS
- TeraSort on TACC Stampede: for 320GB on 64 nodes (1K cores), 38% improvement over IPoIB (FDR) with HDD used for HDFS
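
The TeraSort runs above use the standard Hadoop examples jar; since TeraGen writes 100-byte rows, a 320GB input corresponds to 3.2 billion rows:

    $ hadoop jar hadoop-mapreduce-examples-2.6.0.jar teragen 3200000000 /tera-in
    $ hadoop jar hadoop-mapreduce-examples-2.6.0.jar terasort /tera-in /tera-out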

Evaluations using the PUMA Workload
[Chart: normalized execution time for AdjList (30GB), SelfJoin (80GB), SeqCount (30GB), WordCount (30GB), and InvertIndex (30GB), comparing 10GigE, IPoIB (QDR), and OSU-IB (QDR)]
- 50% improvement in SelfJoin over IPoIB (QDR) for 80 GB data size
- 49% improvement in SeqCount over IPoIB (QDR) for 30 GB data size

Optimize Hadoop YARN MapReduce over Parallel File Systems
[Diagram: compute nodes running the App Master, Map, and Reduce tasks with Lustre clients, connected to Lustre metadata servers and object storage servers]
- HPC cluster deployment:
  - Hybrid topological solution of a Beowulf architecture with separate I/O nodes
  - Lean compute nodes with a light OS, more memory space, and small local storage
  - Sub-cluster of dedicated I/O nodes with parallel file systems, such as Lustre
- MapReduce over Lustre, two configurations:
  - Local disk is used as the intermediate data directory
  - Lustre is used as the intermediate data directory
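
In stock Hadoop 2.x YARN, the intermediate (shuffle) data directory is governed by the NodeManager's local directories, so pointing them at a Lustre mount is a minimal way to realize the "Lustre as intermediate data directory" configuration; the mount path below is illustrative, and the OSU design adds its own shuffle strategies on top.

    <!-- yarn-site.xml: place intermediate map output on a Lustre mount (path illustrative) -->
    <property>
      <name>yarn.nodemanager.local-dirs</name>
      <value>/lustre/scratch/${user.name}/yarn-local</value>
    </property>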

Design Overview of Shuffle Strategies for MapReduce over Lustre
[Diagram: map tasks write to the intermediate data directory on Lustre; reduce tasks fetch map output either via Lustre read or via RDMA, then perform in-memory merge/sort and reduce]
- Design features:
  - Two shuffle approaches: Lustre read-based shuffle and RDMA-based shuffle
  - Hybrid shuffle algorithm to take benefit from both shuffle approaches; dynamically adapts to the better shuffle approach for each shuffle request, based on profiling values for each Lustre read operation
  - In-memory merge and overlapping of different phases are kept similar to the RDMA-enhanced MapReduce design
- M. W. Rahman, X. Lu, N. S. Islam, R. Rajachandrasekar, and D. K. Panda, High Performance Design of YARN MapReduce on Modern HPC Clusters with Lustre and RDMA, IPDPS, May 2015

Performance Improvement of MapReduce over Lustre on TACC-Stampede
- Local disk is used as the intermediate data directory
[Charts: job execution time (s), IPoIB (FDR) vs. OSU-IB (FDR) — Sort at 300-500 GB on 64 nodes, reduced by 44%; Sort at 20 GB-640 GB on clusters of 4-128 nodes, reduced by 48%]
- For 500GB Sort on 64 nodes: 44% improvement over IPoIB (FDR)
- For 640GB Sort on 128 nodes: 48% improvement over IPoIB (FDR)
- M. W. Rahman, X. Lu, N. S. Islam, R. Rajachandrasekar, and D. K. Panda, MapReduce over Lustre: Can RDMA-based Approach Benefit?, Euro-Par, August 2014

Case Study: Performance Improvement of MapReduce over Lustre on SDSC-Gordon
- Lustre is used as the intermediate data directory
[Charts: job execution time (s) comparing IPoIB (QDR), OSU-Lustre-Read (QDR), OSU-RDMA-IB (QDR), and OSU-Hybrid-IB (QDR) — Sort at 40-80 GB, reduced by 34%; TeraSort at 40-120 GB, reduced by 25%]
- For 80GB Sort on 8 nodes: 34% improvement over IPoIB (QDR)
- For 120GB TeraSort on 16 nodes: 25% improvement over IPoIB (QDR)

[Section divider repeated: Acceleration Case Studies and In-Depth Performance Evaluation; next: Spark]

Design Overview of Spark with RDMA
[Diagram: Spark applications (Scala/Java/Python) on Spark (Scala/Java); the BlockManager offers a Java NIO shuffle server (default), a Netty shuffle server (optional), and an RDMA shuffle server (plug-in); the BlockFetcherIterator offers Java NIO, Netty, and RDMA shuffle fetchers; the RDMA-based shuffle engine (Java/JNI) runs over native InfiniBand (QDR/FDR), while the default paths use Java sockets over 1/10 Gig Ethernet/IPoIB]
- Design features:
  - RDMA-based shuffle
  - SEDA-based plugins
  - Dynamic connection management and sharing
  - Non-blocking and out-of-order data transfer
  - Off-JVM-heap buffer management
  - InfiniBand/RoCE support
- Enables high-performance RDMA communication while supporting the traditional socket interface
- JNI layer bridges the Scala-based Spark with the communication library written in native code
- X. Lu, M. W. Rahman, N. Islam, D. Shankar, and D. K. Panda, Accelerating Spark with RDMA for Big Data Processing: Early Experiences, Int'l Symposium on High Performance Interconnects (HotI '14), August 2014
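
Spark 1.x exposes the shuffle implementation as a pluggable setting, which is what makes a plug-in design like this selectable without modifying applications; the manager class name below is purely illustrative, not the actual class shipped by the OSU package.

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaSparkContext;

    public class ShufflePluginConfig {
      public static void main(String[] args) {
        SparkConf conf = new SparkConf()
            .setAppName("rdma-shuffle-example")
            // spark.shuffle.manager accepts a fully qualified ShuffleManager class name.
            .set("spark.shuffle.manager", "org.example.shuffle.RdmaShuffleManager");
        JavaSparkContext sc = new JavaSparkContext(conf);
        // ... run jobs whose shuffles now go through the plug-in ...
        sc.stop();
      }
    }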

Preliminary Results of the Spark-RDMA Design: GroupBy
[Charts: GroupBy time (s) comparing 10GigE, IPoIB, and RDMA — 4-10 GB on a 4-node HDD cluster with 32 cores, reduced by 18%; 8-20 GB on an 8-node HDD cluster with 64 cores, reduced by 20%]
- Cluster with 4 HDD nodes, single disk per node, 32 concurrent tasks: 18% improvement over IPoIB (QDR) for 10GB data size
- Cluster with 8 HDD nodes, single disk per node, 64 concurrent tasks: 20% improvement over IPoIB (QDR) for 20GB data size

[Outline revisited; next section: Sample performance numbers for the Hadoop 2.x 0.9.6 release]

Performance Benefits: TestDFSIO on TACC Stampede (HHH)
[Charts: total throughput (MBps) and execution time (s) vs. data size (80-120 GB), IPoIB (FDR) vs. OSU-IB (FDR); throughput increased by 7.8x, execution time reduced by 2.5x]
- Cluster with 32 nodes with HDD, TestDFSIO with a total of 128 maps
- Throughput: 6-7.8x improvement over IPoIB for 80-120 GB file size
- Latency: 2.5-3x improvement over IPoIB for 80-120 GB file size

Performance Benefits: RandomWriter and TeraGen on TACC-Stampede (HHH)
[Charts: execution time (s) vs. data size (80-120 GB), IPoIB (FDR) vs. OSU-IB (FDR); RandomWriter reduced by 3x, TeraGen reduced by 4x]
- Cluster with 32 nodes with a total of 128 maps
- RandomWriter: 3-4x improvement over IPoIB for 80-120 GB file size
- TeraGen: 4-5x improvement over IPoIB for 80-120 GB file size

Performance Benefits: Sort and TeraSort on TACC-Stampede (HHH)
[Charts: execution time (s) vs. data size (80-120 GB), IPoIB (FDR) vs. OSU-IB (FDR); Sort reduced by 52%, TeraSort reduced by 44%]
- Sort (32 nodes, 128 maps and 57 reduces, single HDD per node): 40-52% improvement over IPoIB for 80-120 GB data
- TeraSort (32 nodes, 128 maps and 64 reduces, single HDD per node): 42-44% improvement over IPoIB for 80-120 GB data

Performance Benefits: TestDFSIO on TACC Stampede and SDSC Gordon (HHH-M)
[Charts: total throughput (MBps) vs. data size (60-100 GB) — TACC Stampede, IPoIB (FDR) vs. OSU-IB (FDR), increased by 28x; SDSC Gordon, IPoIB (QDR) vs. OSU-IB (QDR), increased by 6x]
- TestDFSIO write on TACC Stampede (32 nodes with HDD, 128 maps): 28x improvement over IPoIB for 60-100 GB file size
- TestDFSIO write on SDSC Gordon (16 nodes with SSD, 64 maps): 6x improvement over IPoIB for 60-100 GB file size

Performance Benefits: TestDFSIO and Sort on SDSC-Gordon (HHH-L)
[Charts on a 16-node cluster: TestDFSIO total throughput (MBps) for write and read comparing HDFS, Lustre, and OSU-IB, write increased by 9x and read by 29%; Sort execution time (s) at 40-80 GB, reduced by up to 50%]
- TestDFSIO for 80GB data size: write, 9x improvement over HDFS; read, 29% improvement over Lustre
- Sort: up to 28% improvement over HDFS; up to 50% improvement over Lustre

[Outline revisited; next section: RDMA-based designs for Memcached and HBase — RDMA-based Memcached with SSD-assisted hybrid memory]

Memcached-RDMA Design
[Diagram: sockets clients and RDMA clients connect to a master thread, which hands them to sockets worker threads or verbs worker threads; all workers operate on shared data (memory slabs, items)]
- Server and client perform a negotiation protocol
- Master thread assigns clients to the appropriate worker thread; once a client is assigned a verbs worker thread, it can communicate with it directly and is bound to that thread
- All other Memcached data structures are shared among RDMA and sockets worker threads
- Native IB-verbs-level design, evaluated with server Memcached (http://memcached.org) and client libmemcached (http://libmemcached.org)
- Different networks and protocols: 10GigE, IPoIB, native IB (RC, UD)
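
The master/worker hand-off described above follows a generic dispatch pattern; a sketch of that pattern in plain Java (this is not OSU's code, and the negotiation and verbs path are only indicated in comments):

    import java.io.IOException;
    import java.net.ServerSocket;
    import java.net.Socket;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class MasterDispatchSketch {
      public static void main(String[] args) throws IOException {
        ExecutorService workers = Executors.newFixedThreadPool(4);  // sockets worker threads
        try (ServerSocket server = new ServerSocket(11211)) {
          while (true) {
            Socket client = server.accept();           // master thread accepts the connection
            // In the real design, a negotiation step decides here whether the client
            // stays on a sockets worker or is bound to a verbs worker thread.
            workers.submit(() -> handle(client));      // client stays bound to one worker
          }
        }
      }

      private static void handle(Socket client) {
        try (Socket c = client) {
          c.getInputStream().read(new byte[1]);        // placeholder for serving get/set requests
        } catch (IOException ignored) {
          // connection closed
        }
      }
    }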

Memcached Performance (FDR Interconnect)
[Charts: Memcached GET latency (us, log scale) vs. message size (1 B-4 KB), OSU-IB (FDR) vs. IPoIB (FDR), latency reduced by nearly 20x; throughput (thousands of transactions per second) vs. number of clients (16-4080), 2x improvement]
- Memcached GET latency: 4 bytes — OSU-IB: 2.84 us, IPoIB: 75.53 us; 2K bytes — OSU-IB: 4.49 us, IPoIB: 123.42 us
- Memcached throughput (4 bytes): 4080 clients — OSU-IB: 556 Kops/sec, IPoIB: 233 Kops/sec; nearly 2x improvement in throughput
- Experiments on TACC Stampede (Intel Sandy Bridge cluster, IB: FDR)

Micro-benchmark Evaluation for OLDP Workloads
[Charts: latency (s) and throughput (Kq/s) vs. number of clients (64-400), Memcached-IPoIB (32Gbps) vs. Memcached-RDMA (32Gbps); latency reduced by 66%]
- Illustration with a Read-Cache-Read access pattern using a modified mysqlslap load-testing tool
- Memcached-RDMA can improve query latency by up to 66% over IPoIB (32Gbps) and throughput by up to 69% over IPoIB (32Gbps)
- D. Shankar, X. Lu, J. Jose, M. W. Rahman, N. Islam, and D. K. Panda, Can RDMA Benefit On-Line Data Processing Workloads with Memcached and MySQL, ISPASS '15

Performance Benefits on SDSC-Gordon: OHB Latency and Throughput Micro-Benchmarks
[Charts: average latency (us) vs. message size (64-1024 bytes) and throughput (million transactions/sec) vs. number of clients, comparing IPoIB (32Gbps), RDMA-Mem (32Gbps), and RDMA-Hybrid (32Gbps); 2x throughput improvement]
- ohb_memlat and ohb_memthr latency and throughput micro-benchmarks
- Memcached-RDMA can improve query latency by up to 70% over IPoIB (32Gbps) and throughput by up to 2x over IPoIB (32Gbps)
- No overhead in using hybrid mode when all data can fit in memory

Performance Benefits on OSU-RI-SSD: OHB Micro-benchmark for Hybrid Memcached
[Charts: average latency with penalty (us) vs. message size (512 B-128 KB), RDMA-Mem (32Gbps) vs. RDMA-Hybrid (32Gbps), up to 53% improvement; success rate vs. spill factor (1-1.45), RDMA-Mem-Uniform vs. RDMA-Hybrid]
- ohb_memhybrid with a uniform access pattern, single client and single server with 64MB
- Success rate of in-memory vs. hybrid SSD-memory for different spill factors: 100% success rate for the hybrid design, while that of pure in-memory degrades
- Average latency with penalty for in-memory vs. hybrid SSD-assisted mode at spill factor 1.5: up to 53% improvement over in-memory, with a server miss penalty as low as 1.5 ms

[Outline revisited; next section: RDMA-based HBase]

HBase-RDMA Design Overview
[Diagram: Applications → HBase, reachable through the Java socket interface over 1/10 GigE or IPoIB networks, or through the Java Native Interface (JNI) into the OSU-IB design over IB verbs on RDMA-capable networks (IB, 10GE/iWARP, RoCE, ...)]
- JNI layer bridges the Java-based HBase with the communication library written in native code
- Enables high-performance RDMA communication while supporting the traditional socket interface
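
These designs sit below the client API, so application code is unchanged; for reference, a minimal Put/Get against the HBase 0.98-era Java client (table, family, and qualifier names are illustrative):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.util.Bytes;

    public class HBaseClientSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();  // reads hbase-site.xml
        HTable table = new HTable(conf, "usertable");      // illustrative table name
        Put put = new Put(Bytes.toBytes("row1"));
        put.add(Bytes.toBytes("cf"), Bytes.toBytes("col"), Bytes.toBytes("value"));  // write one cell
        table.put(put);
        Result result = table.get(new Get(Bytes.toBytes("row1")));  // read it back
        System.out.println(Bytes.toString(
            result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("col"))));
        table.close();
      }
    }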

HBase YCSB Read-Write Workload
[Charts: read and write latency (us) vs. number of clients (8-128), comparing IPoIB (QDR), 10GigE, and OSU-IB (QDR)]
- HBase Get latency (Yahoo! Cloud Serving Benchmark): 64 clients: 2.0 ms; 128 clients: 3.5 ms; 42% improvement over IPoIB for 128 clients
- HBase Put latency: 64 clients: 1.9 ms; 128 clients: 3.5 ms; 40% improvement over IPoIB for 128 clients
- J. Huang, X. Ouyang, J. Jose, M. W. Rahman, H. Wang, M. Luo, H. Subramoni, Chet Murthy, and D. K. Panda, High-Performance Design of HBase with RDMA over InfiniBand, IPDPS '12

[Outline revisited; next section: Challenges in Designing Benchmarks for Big Data Processing — OSU HiBD Benchmarks]

Designing Communication and I/O Libraries for Big Data Systems: Solved a Few Initial Challenges
[Diagram: the same layered stack as before — Applications and Benchmarks; Big Data Middleware (HDFS, MapReduce, HBase, Spark and Memcached); Programming Models (Sockets); Communication and I/O Library (point-to-point communication, threaded models and synchronization, virtualization, I/O and file systems, QoS, fault-tolerance); networking, compute, and storage technologies]

Are the Current Benchmarks Sufficient for Big Data Management and Processing?
- The current benchmarks provide some performance behavior
- However, they do not provide any information to the designer/developer on:
  - What is happening at the lower layer?
  - Where are the benefits coming from?
  - Which design is leading to benefits or bottlenecks?
  - Which component in the design needs to be changed, and what will be its impact?
  - Can performance gain/loss at the lower layer be correlated to the performance gain/loss observed at the upper layer?

OSU MPI Micro-Benchmarks (OMB) Suite
- A comprehensive suite of benchmarks to:
  - Compare performance of different MPI libraries on various networks and systems
  - Validate low-level functionalities
  - Provide insights into the underlying MPI-level designs
- Started with basic send-recv (MPI-1) micro-benchmarks for latency, bandwidth, and bi-directional bandwidth
- Extended later to: MPI-2 one-sided; collectives; GPU-aware data movement; OpenSHMEM (point-to-point and collectives); UPC
- Has become an industry standard: extensively used for design/development of MPI libraries, performance comparison of MPI libraries, and even in procurement of large-scale systems
- Available from http://mvapich.cse.ohio-state.edu/benchmarks
- Also available in an integrated manner with the MVAPICH2 stack
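
The OMB point-to-point benchmarks are launched like any MPI program; for example, a two-process latency and bandwidth run (launcher name and flags depend on the MPI library):

    $ mpirun -np 2 -hostfile hosts ./osu_latency
    $ mpirun -np 2 -hostfile hosts ./osu_bw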

Challenges in Benchmarking of RDMA-based Designs
[Diagram: the layered stack again, annotated — current benchmarks exercise the applications and Big Data middleware levels, while no benchmarks exist for the communication and I/O library level; the open question is how to correlate results between the two levels]

Iterative Process Requires Deeper Investigation and Design for Benchmarking Next-Generation Big Data Systems and Applications
[Diagram: applications-level benchmarks target the Big Data middleware, micro-benchmarks target the communication and I/O library, and an iterative process connects the two levels over the underlying networking, compute, and storage technologies]

OSU HiBD Micro-Benchmark (OHB) Suite: HDFS
- Evaluates the performance of standalone HDFS
- Five different benchmarks:
  - Sequential Write Latency (SWL)
  - Sequential or Random Read Latency (SRL or RRL)
  - Sequential Write Throughput (SWT)
  - Sequential Read Throughput (SRT)
  - Sequential Read-Write Throughput (SRWT)
- Benchmark parameters: file name, file size, HDFS parameters, number of readers and writers, random/sequential read, and seek interval (for RRL)
- N. S. Islam, X. Lu, M. W. Rahman, J. Jose, and D. K. Panda, A Micro-benchmark Suite for Evaluating HDFS Operations on Modern Clusters, Int'l Workshop on Big Data Benchmarking (WBDB '12), December 2012
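
In the spirit of the SWL benchmark (the OHB implementation itself is not reproduced here), sequential HDFS write latency can be measured with the standard FileSystem API; the path, buffer size, and write count below are illustrative.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsWriteLatencySketch {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration()); // picks up core-site.xml/hdfs-site.xml
        Path file = new Path("/bench/swl.dat");              // illustrative path
        byte[] buf = new byte[4 * 1024 * 1024];              // 4 MB per write
        int numWrites = 256;                                 // 1 GB total
        long t0 = System.nanoTime();
        FSDataOutputStream out = fs.create(file, true);      // overwrite if present
        for (int i = 0; i < numWrites; i++) {
          out.write(buf);                                    // sequential write path under test
        }
        out.close();                                         // include flush/close in the timing
        long elapsedMs = (System.nanoTime() - t0) / 1_000_000;
        System.out.println("sequential 1 GB write took " + elapsedMs + " ms");
        fs.close();
      }
    }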

OSU HiBD Micro-Benchmark (OHB) Suite: RPC
- Two different micro-benchmarks to evaluate the performance of standalone Hadoop RPC:
  - Latency: single server, single client (lat_client/lat_server)
  - Throughput: single server, multiple clients (thr_client/thr_server)
- A simple script framework for job launching and resource monitoring
- Calculates statistics like Min, Max, and Average; reports network configuration, tunable parameters, data type, and CPU utilization
- Configurable parameters: network address, port, data type, min/max message size, number of iterations, number of clients (throughput benchmark), handlers, verbose mode
- X. Lu, M. W. Rahman, N. Islam, and D. K. Panda, A Micro-Benchmark Suite for Evaluating Hadoop RPC on High-Performance Networks, Int'l Workshop on Big Data Benchmarking (WBDB '13), July 2013

OSU HiBD Micro-Benchmark (OHB) Suite: MapReduce
- Evaluates the performance of stand-alone MapReduce; does not require or involve HDFS or any other distributed file system
- Considers various factors that influence the data shuffling phase: underlying network configuration, number of map and reduce tasks, intermediate shuffle data pattern, shuffle data size, etc.
- Three different micro-benchmarks based on intermediate shuffle data patterns:
  - MR-AVG: intermediate data is evenly distributed among reduce tasks
  - MR-RAND: intermediate data is pseudo-randomly distributed among reduce tasks
  - MR-SKEW: intermediate data is unevenly distributed among reduce tasks
- D. Shankar, X. Lu, M. W. Rahman, N. Islam, and D. K. Panda, A Micro-Benchmark Suite for Evaluating Hadoop MapReduce on High-Performance Networks, BPOE-5 (2014)

Future Plans of the OSU High Performance Big Data Project
- Upcoming releases of RDMA-enhanced packages will support: Spark, HBase, and plugin-based designs
- Upcoming releases of the OSU HiBD Micro-Benchmarks (OHB) will support: HDFS, MapReduce, and RPC
- Exploration of other components (threading models, QoS, virtualization, accelerators, etc.)
- Advanced designs with upper-level changes and optimizations

Concluding Remarks
- Presented an overview of Big Data processing middleware
- Discussed challenges in accelerating Big Data middleware
- Presented initial designs to take advantage of InfiniBand/RDMA for Hadoop, Spark, Memcached, and HBase
- Presented challenges in designing benchmarks
- Results are promising; many other open issues need to be solved
- Will enable the Big Data processing community to take advantage of modern HPC technologies to carry out their analytics in a fast and scalable manner

Personnel Acknowledgments
- Current students: A. Awan (Ph.D.), A. Bhat (M.S.), S. Chakraborthy (Ph.D.), C.-H. Chu (Ph.D.), N. Islam (Ph.D.), M. Li (Ph.D.), M. Rahman (Ph.D.), D. Shankar (Ph.D.), A. Venkatesh (Ph.D.), J. Zhang (Ph.D.)
- Past students: P. Balaji (Ph.D.), S. Bhagvat (M.S.), D. Buntinas (Ph.D.), L. Chai (Ph.D.), B. Chandrasekharan (M.S.), N. Dandapanthula (M.S.), V. Dhanraj (M.S.), T. Gangadharappa (M.S.), K. Gopalakrishnan (M.S.), W. Huang (Ph.D.), W. Jiang (M.S.), J. Jose (Ph.D.), K. Kandalla (Ph.D.), S. Kini (M.S.), M. Koop (Ph.D.), S. Krishnamoorthy (M.S.), R. Kumar (M.S.), P. Lai (M.S.), J. Liu (Ph.D.), M. Luo (Ph.D.), A. Mamidala (Ph.D.), G. Marsh (M.S.), V. Meshram (M.S.), S. Naravula (Ph.D.), R. Noronha (Ph.D.), X. Ouyang (Ph.D.), S. Pai (M.S.), S. Potluri (Ph.D.), R. Rajachandrasekar (Ph.D.), G. Santhanaraman (Ph.D.), A. Singh (Ph.D.), J. Sridhar (M.S.), H. Subramoni (Ph.D.), S. Sur (Ph.D.), K. Vaidyanathan (Ph.D.), A. Vishnu (Ph.D.), J. Wu (Ph.D.), W. Yu (Ph.D.)
- Past post-docs: X. Besseron, H.-W. Jin, M. Luo, E. Mancini, S. Marcarelli, J. Vienne, H. Wang
- Current senior research associates: K. Hamidouche, X. Lu, H. Subramoni
- Current post-doc: J. Lin; current programmer: J. Perkins; current research specialist: M. Arnold
- Past research scientist: S. Sur; past programmer: D. Bureddy

Thank You!
panda@cse.ohio-state.edu
- Network-Based Computing Laboratory: http://nowlab.cse.ohio-state.edu/
- The MVAPICH2/MVAPICH2-X Project: http://mvapich.cse.ohio-state.edu/
- The High-Performance Big Data Project: http://hibd.cse.ohio-state.edu/

Call For Participation
International Workshop on High-Performance Big Data Computing (HPBDC 2015)
- In conjunction with the International Conference on Distributed Computing Systems (ICDCS 2015)
- Hilton Downtown, Columbus, Ohio, USA, Monday, June 29th, 2015
- http://web.cse.ohio-state.edu/~luxi/hpbdc2015

Multiple Positions Available in My Group
- Looking for bright and enthusiastic personnel to join as:
  - Post-doctoral researchers
  - PhD students
  - Hadoop/Big Data programmers/software engineers
  - MPI programmers/software engineers
- If interested, please contact me at this conference and/or send an e-mail to panda@cse.ohio-state.edu