Disco: Beyond MapReduce



Prashanth Mundkur, Nokia
March 22, 2013

Outline
- BigData/MapReduce
- Disco
- Disco Pipeline Model
- Disco Roadmap

BigData/MapReduce
Data is too big to fit in the RAM/disk of any single machine. Analyze chunks of the data in parallel (maps), then collect the intermediate results into a final result (reduce), using a cluster of machines.
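The map/collect/reduce flow above can be sketched in plain Python. This is a single-machine illustration, not the Disco API; the chunking and function names are made up:

```python
from collections import defaultdict

def map_phase(chunk):
    # Each map analyzes one chunk in parallel, emitting (key, value) pairs.
    for word in chunk.split():
        yield word, 1

def reduce_phase(pairs):
    # The reduce collects intermediate results into a final result.
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

chunks = ["big data big", "data cluster"]  # chunks spread across machines
intermediate = [kv for c in chunks for kv in map_phase(c)]
result = reduce_phase(intermediate)
print(result)  # {'big': 2, 'data': 2, 'cluster': 1}
```

In a real cluster each `map_phase` call runs on the node holding its chunk, and the intermediate pairs are shuffled over the network before the reduce.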

MapReduce TaskGraph

Disco Origins

commit 1aa76c1eda8081317f66afbaf872c0f92dfc46f7
Author: Ville Tuulos <tuulos@parvus.pp.htv.fi>
Date: Mon Jan 14 02:04:34 2008 -0800

    Initial commit

Disco Architecture

Worker Protocol (Worker <-> Disco)
- Worker sends Get-TaskInfo, Disco replies TaskInfo
- Worker sends Get-Input, Disco replies Input
- Worker sends Input-Error, Disco replies Ok
- Worker sends Output, Disco replies Ok
- Worker sends Done, Disco replies Ok
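The exchange above can be mocked up as a simple request/reply loop. This is a pure-Python simulation: the real Disco worker protocol runs over the worker's stdin/stdout with its own message framing, and the payload fields below are invented:

```python
def fake_master(msg):
    # Canned replies standing in for the Disco master.
    replies = {
        "Get-TaskInfo": {"type": "TaskInfo", "taskid": 0, "stage": "map"},
        "Get-Input":    {"type": "Input", "urls": ["http://node1/chunk0"]},
        "Input-Error":  {"type": "Ok"},
        "Output":       {"type": "Ok"},
        "Done":         {"type": "Ok"},
    }
    return replies[msg["type"]]

def run_worker(send):
    # The worker drives the conversation: it requests its task description,
    # pulls its inputs, reports outputs, and finally signals completion.
    log = []
    for request in ("Get-TaskInfo", "Get-Input", "Output", "Done"):
        reply = send({"type": request})
        log.append((request, reply["type"]))
    return log

trace = run_worker(fake_master)
print(trace)
# [('Get-TaskInfo', 'TaskInfo'), ('Get-Input', 'Input'), ('Output', 'Ok'), ('Done', 'Ok')]
```

Because the worker initiates every message, workers in any language can be plugged in as long as they speak this protocol.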

Disco DFS

Data in ddfs
[diagram] Tags such as data:log:assets, data:log:website, data:log:peakday, and user:mike reference replicated blobs (e.g. daily-log-2009-12-03_gz, daily-log-2009-12-04_gz, daily-log-2009-12-05_gz); each blob is stored in multiple replicas and may be referenced by multiple tags.

Metadata (tags) in ddfs: attributes, tokens, links

Code size (David A. Wheeler's SLOCCount, wc -l)

             Hadoop 1.0      Disco (dev)
Map-reduce   53333 (Java)    8053 (Erlang), 3276 (Python), 1724 (OCaml)
DFS          34301 (Java)    4600 (Erlang)

Disco's external dependencies: HTTP library (mochiweb, 12.5 kLOC), logging library (lager, 4.3 kLOC), Erlang/Python standard libraries.

Disco scheduler bug: no backtracking

Shuffle in Disco: bulk user data through Erlang

Rethink: Limitations of MapReduce
- Job computation is performed in three fixed stages
- Processing model is tied to content (key-value pairs)
- No inter-task optimization of network resources (crucial for shuffle/reduce)

Node-locality of Tasks in MapReduce

Optimizing network-use based on Node-locality

Output grouping: grouping by label per node (group_node_label)

Output grouping: grouping per node (group_node)

Pipelined Stages of Tasks

pipeline ::= stage+
stage    ::= {grouping, task}

Other grouping options

MapReduce as a Pipeline

map-reduce = {split, map}, {group_label, reduce}

Disco Pipeline Model
Fixes existing issues:
- backtracking scheduler
- no bulk data passes through Erlang
Adds a flexible compute model:
- allows multiple user-defined stages, as opposed to just map-(shuffle)-reduce
- exposes shuffle to user code
- exposes node-locality to tasks, exploitable via user-selectable grouping options

Disco Pipeline Model
A conservative extension:
- a linear pipeline is simpler than a DAG: no need for a graph DSL as in Dryad
- a more flexible platform for higher-level tools like Pig/FlumeJava/etc.

Pipeline Limitations
- no forks/joins in dataflow
- no iteration or recursion

New Task API for Pipelines
- no map or reduce tasks
- user pulls data from input via iterators (process)
- simpler handling of processing state (init, done)
- control over iteration across input labels (input hook)
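A sketch of what a task with these hooks might look like: the class and method names below are modeled on the bullets above (init, process, done), not copied from the actual Disco 0.5 API:

```python
class WordCountTask:
    def init(self):
        # Processing state lives on the task object, set up once per task.
        self.counts = {}

    def process(self, inputs):
        # The task pulls records from its input via an iterator,
        # rather than being handed key-value pairs one at a time.
        for record in inputs:
            for word in record.split():
                self.counts[word] = self.counts.get(word, 0) + 1

    def done(self):
        # Emit the accumulated state once all inputs are consumed.
        return sorted(self.counts.items())

def run_task(task, inputs):
    # A hypothetical harness: the framework would drive this loop.
    task.init()
    task.process(iter(inputs))
    return task.done()

out = run_task(WordCountTask(), ["big data", "big"])
print(out)  # [('big', 2), ('data', 1)]
```

The same class shape works for any stage of a pipeline, which is why the API no longer needs distinct "map" and "reduce" task types.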

Disco Roadmap
Disco 0.5, coming soon:
- backtracking job coordinator
- pipelines and the alternative task API, plus support for the existing map-reduce API
Further out:
- evolve the pipeline model / API
- network-topology-aware task scheduler

Disco and Hadoop
- DDFS/HDFS: the storage models are different
- Disco pipelines/YARN: the compute models are different

Questions? http://discoproject.org

ddfs
Design choices:
- optimized for log-file storage (bulk immutable data files)
- data is not modified but stored as submitted (e.g. no chunking by default)
- replication of data and metadata (unlike Hadoop/HDFS)
- only metadata is mutable
- DAG structure as opposed to a tree; the DAG design imposes garbage collection
Implementation choices:
- all metadata in readable JSON
- all data access over HTTP or local file
- metadata/data can be recovered using scripts, without needing a running ddfs
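Because all DDFS metadata is readable JSON served over HTTP, a tag can be inspected or repaired with ordinary tools. A hedged sketch of parsing one tag object follows; the field names and URL layout here are assumptions for illustration, not the documented DDFS schema:

```python
import json

# A hypothetical tag object: attributes plus links to replicated blobs
# (or to other tags, giving the DAG structure from the slide).
tag_json = """
{
  "id": "data:log:website",
  "attribs": {"owner": "mike"},
  "urls": [
    ["disco://node1/ddfs/daily-log-2009-12-04_gz",
     "disco://node2/ddfs/daily-log-2009-12-04_gz"]
  ]
}
"""

tag = json.loads(tag_json)
# Each entry in "urls" is one blob together with its replicas,
# so counting entries per blob gives the replication factor.
replicas = [len(blob) for blob in tag["urls"]]
print(tag["id"], replicas)  # data:log:website [2]
```

This is what makes the recovery-by-script property plausible: a plain HTTP GET and a JSON parser are enough to walk the metadata, with no running ddfs required.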

Hadoop vs Disco Benchmarks
8-node physical cluster of Cisco UCS M2 machines from 2011; each node with 4 Xeon CPUs, 128 GB RAM, 512 GB disk.

Job Latency: wordcount on a 1-byte file

          Completion time (ms)
Hadoop    12324
PDisco      359
ODisco       35

DFS Latencies: read/write a 1-byte file (avg, in ms)

          HDFS    ddfs
Reads      670      70
Writes     720     136

DFS Read Throughput
[chart: duration (seconds, log scale) vs file size from 64K to 64G, comparing DDFS and HDFS]

DFS Write Throughput
[chart: duration (seconds, log scale) vs file size from 64K to 64G, comparing DDFS and HDFS]

Job Performance: wordcount of the English Wikipedia (33 GB)
[chart: map, shuffle, and reduce phase durations for ODisco and Hadoop, in seconds since job start]