Programming and Debugging Large-Scale Data Processing Workflows




Programming and Debugging Large-Scale Data Processing Workflows. Christopher Olston, Google Research (work done at Yahoo! Research, with many colleagues)

What I'm Working on at Google

We've got fantastic cloud building blocks: BigTable, MapReduce, Pregel, and on and on (and so do you: EC2, Hadoop, Redis, ZooKeeper, ...). To build your app:
1. Think hard, and choose a few building blocks (BBs)
2. Stick your app logic into a blender, and pour it into the various BB abstractions (keys/values, MR functions, callbacks, ...)
3. Tune it: "stupid map-reduce tricks"; set bazillions of flags
4. Hope that:
   - your BB choices were right, and stay right for a while
   - nobody ever has to understand your app by reading the code
   - you never attempt big changes to your app logic or algorithms

What if we could separate the app logic from the assemblage of building blocks?

Research at Google

- High-risk/high-reward research happening across the company
- Successful research projects often become successful products (e.g. speech recognition)
- No pressure to publish incremental papers
- Interdisciplinary (in my case: DB + PL + AI)
- We're hiring :)

Now, back to your regularly scheduled program

Context

Elaborate processing of large data sets, e.g.:
- web search pre-processing
- cross-dataset linkage
- web information extraction

Pipeline: ingestion → storage & processing → serving

Context

The storage & processing layer is a stack:
- workflow manager, e.g. Nova
- dataflow programming framework, e.g. Pig
- distributed sorting & hashing, e.g. Map-Reduce
- scalable file system, e.g. GFS

This talk: an overview of the stack, with detail on Inspector Gadget and RubySky.

Debugging aids:
- Before: example data generator
- During: instrumentation framework
- After: provenance metadata manager

Pig: A High-Level Dataflow Language & Runtime for Hadoop

Web browsing sessions with happy endings:

Visits = load '/data/visits' as (user, url, time);
Visits = foreach Visits generate user, Canonicalize(url), time;

Pages = load '/data/pages' as (url, pagerank);

VP = join Visits by url, Pages by url;
UserVisits = group VP by user;
Sessions = foreach UserVisits generate flatten(FindSessions(*));
HappyEndings = filter Sessions by BestIsLast(*);

store HappyEndings into '/data/happy_endings';

vs. map-reduce: less code!

"The [Hofmann PLSA E/M] algorithm was implemented in pig in 30-35 lines of pig-latin statements. Took a lot less compared to what it took in implementing the algorithm in Map-Reduce Java. Exactly that's the reason I wanted to try it out in Pig. It took 3-4 days for me to write it, starting from learning pig."
-- Prasenjit Mukherjee, Mahout project

[Charts: vs. raw Hadoop, Pig needs 1/20 the lines of code and 1/16 the development time, and performs on par with raw Hadoop.]

vs. SQL: step-by-step style; lower-level control

"I much prefer writing in Pig [Latin] versus SQL. The step-by-step method of creating a program in Pig [Latin] is much cleaner and simpler to use than the single block method of SQL. It is easier to keep track of what your variables are, and where you are in the process of analyzing your data."
-- Jasmine Novak, Engineer, Yahoo!

"PIG seems to give the necessary parallel programming constructs (FOREACH, FLATTEN, COGROUP, etc.) and also gives sufficient control back to the programmer (which a purely declarative approach like [SQL on Map-Reduce] doesn't)."
-- Ricky Ho, Adobe Software

Conceptually: A Graph of Data Transformations

Find users who tend to visit good pages.

- Load Visits(user, url, time); transform to (user, Canonicalize(url), time)
- Load Pages(url, pagerank)
- Join url = url
- Group by user
- Transform to (user, Average(pagerank) as avgpr)
- Filter avgpr > 0.5
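The transformation graph above can be sketched in plain Python. This is a toy sketch, not Pig's implementation; the function names and the canonicalization heuristic are mine:

```python
from collections import defaultdict

def canonicalize(url):
    """Toy URL canonicalizer: drop the scheme and path, ensure a www. prefix."""
    host = url.split("://")[-1].split("/")[0]
    return host if host.startswith("www.") else "www." + host

def good_page_visitors(visits, pages, threshold=0.5):
    """Users whose visited pages have average pagerank above the threshold."""
    ranks_by_user = defaultdict(list)
    for user, url, _time in visits:                      # Transform: Canonicalize(url)
        url = canonicalize(url)
        if url in pages:                                 # Join: Visits.url = Pages.url
            ranks_by_user[user].append(pages[url])       # Group by user
    avg = {u: sum(rs) / len(rs) for u, rs in ranks_by_user.items()}  # Average(pagerank)
    return {u: pr for u, pr in avg.items() if pr > threshold}        # Filter avgpr > 0.5

visits = [("Amy", "cnn.com", "8am"),
          ("Amy", "http://www.snails.com", "9am"),
          ("Fred", "www.snails.com/index.html", "11am")]
pages = {"www.cnn.com": 0.9, "www.snails.com": 0.4}
print(good_page_visitors(visits, pages))   # Amy's avgpr is ~0.65; Fred's 0.4 is filtered out
```

Running it on the slide's example tuples reproduces the worked example that follows: only Amy survives the filter.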

Illustrated!

Load Visits(user, url, time):
(Amy, cnn.com, 8am)
(Amy, http://www.snails.com, 9am)
(Fred, www.snails.com/index.html, 11am)

Transform to (user, Canonicalize(url), time):
(Amy, www.cnn.com, 8am)
(Amy, www.snails.com, 9am)
(Fred, www.snails.com, 11am)

Load Pages(url, pagerank):
(www.cnn.com, 0.9)
(www.snails.com, 0.4)

Join url = url:
(Amy, www.cnn.com, 8am, 0.9)
(Amy, www.snails.com, 9am, 0.4)
(Fred, www.snails.com, 11am, 0.4)

Group by user:
(Amy, { (Amy, www.cnn.com, 8am, 0.9), (Amy, www.snails.com, 9am, 0.4) })
(Fred, { (Fred, www.snails.com, 11am, 0.4) })

Transform to (user, Average(pagerank) as avgpr):
(Amy, 0.65)
(Fred, 0.4)

Filter avgpr > 0.5:
(Amy, 0.65)

"ILLUSTRATE lets me check the output of my lengthy batch jobs and their custom functions without having to do a lengthy run of a long pipeline. [This feature] enables me to be productive."
-- Russell Jurney, LinkedIn

(Naïve Algorithm)

With naïvely sampled inputs, Visits yields (after canonicalization):
(Amy, www.cnn.com, 8am)
(Amy, www.snails.com, 9am)
(Fred, www.snails.com, 11am)

but Pages is sampled independently as:
(www.youtube.com, 0.9)
(www.frogs.com, 0.4)

so the join on url matches nothing, and every downstream operator (group, transform, filter) is illustrated with an empty result.

Original UI Prototype (2008)

Interactive Graphics Programming UI by Bret Victor, Apple Designer (2012)

"Creators need an immediate connection to what they create. If you make a change you need to see the effect of that immediately."
-- Bret Victor, Apple

Pig Today

- Open-source (the Apache Pig Project)
- Dev./support/training by Cloudera, Hortonworks
- Offered on Amazon Elastic Map-Reduce
- Used by LinkedIn, Netflix, Salesforce, Twitter, Yahoo...
- At Yahoo, as of early 2011: 1000s of jobs/day; 75%+ of Hadoop jobs
- Has an interactive example-generator command, but no side-by-side UI :(

Next: NOVA

Recall the stack: workflow manager (e.g. Nova); dataflow programming framework (e.g. Pig); distributed sorting & hashing (e.g. Map-Reduce); scalable file system (e.g. GFS).

Debugging aids: Before: example data generator; During: instrumentation framework; After: provenance metadata manager.

Why a Workflow Manager?

- Continuous data processing (simulated on top of the static Pig/Hadoop processing layer)
- Independent scheduling of workflow modules

Example Workflow (news article de-duplication)

RSS feed → template detection → template tagging → shingling → de-duping → unique articles

Edges carry ALL or NEW data: template detection consumes ALL news articles and maintains ALL news site templates; template tagging applies templates to NEW articles; shingling turns NEW tagged articles into NEW shingle hashes; de-duping compares NEW hashes against ALL shingle hashes seen and emits NEW unique articles.

Data Passes Through Many Sub-Systems

A datum X enters at ingestion and flows through Nova, Pig, Map-Reduce, and GFS, then through a low-latency processor to serving, emerging as datum Y. A metadata query such as "What is the provenance of X?" therefore spans all of these sub-systems.

Ibis Project

Users pose metadata queries to Ibis, a metadata manager that gathers metadata from the data processing sub-systems and returns integrated metadata answers.

Benefits:
- Provides a uniform view to users
- Factors out metadata management code
- Decouples metadata lifetime from data/sub-system lifetime

Challenges:
- Overhead of shipping metadata
- Disparate data/processing granularities

Example Data and Process Granularities

Data granularities: table, web page, row, cell, column group, column (with alternative decompositions among them)
Process granularities: workflow, Pig script, MR program, Pig job, MR job, MR job phase, MR task, task attempt, Pig logical operation, Pig physical operation

What's Hard About Multi-Granularity Provenance?

- Inference: given relationships expressed at one granularity, answer queries about other granularities (the semantics are tricky here!)
- Efficiency: implement inference without resorting to materializing everything in terms of the finest granularity (e.g. cells)

Inferring Transitive Links

Map phase: an IMDB web page and a Yahoo! Movies web page are each extracted into a table (map outputs 1 and 2):

IMDB extracted table:            Yahoo extracted table:
title      year  lead actor      title      year  lead actor
Avatar     2009  Worthington     Avatar     2009  Saldana
Inception  2010  DiCaprio        Inception  2010  DiCaprio

Reduce phase: the map outputs are merged into a combined table:

title      year  lead actor
Avatar     2009  A1: Worthington / A2: Saldana
Inception  2010  DiCaprio

Q: Which web page said "Saldana"?
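One way to picture the question above: each processing step records links at its own granularity, and a query walks them transitively back to a source. A toy Python sketch follows; the link table is illustrative only and is not Ibis's actual representation:

```python
# Provenance links recorded by each step: finer-grained item -> the item it
# was derived from. (Hypothetical keys, chosen to mirror the slide's example.)
links = {
    # combined-table cell annotations -> the map output each came from
    ("combined", "Avatar", "lead actor", "A1"): ("map_output_1", "Avatar"),
    ("combined", "Avatar", "lead actor", "A2"): ("map_output_2", "Avatar"),
    # map outputs -> the source web pages
    ("map_output_1", "Avatar"): ("web_page", "IMDB"),
    ("map_output_2", "Avatar"): ("web_page", "Yahoo! Movies"),
}

def provenance(item):
    """Follow provenance links transitively until reaching a source item."""
    while item in links:
        item = links[item]
    return item

# Q: which web page said "Saldana" (annotation A2 on Avatar's lead actor)?
print(provenance(("combined", "Avatar", "lead actor", "A2")))
```

The hard parts the slide names (cross-granularity semantics, avoiding full materialization at the finest granularity) are exactly what this toy table sidesteps by storing every link explicitly.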

Next: INSPECTOR GADGET

Recall the stack: workflow manager (e.g. Nova); dataflow programming framework (e.g. Pig); distributed sorting & hashing (e.g. Map-Reduce); scalable file system (e.g. GFS).

Debugging aids: Before: example data generator; During: instrumentation framework; After: provenance metadata manager.

Motivated by User Interviews

Interviewed 10 Yahoo dataflow programmers (mostly Pig users; some users of other dataflow environments) and asked them how they debug, and how they wish they could.

Summary of User Interviews

# of requests  feature
7              crash culprit determination
5              row-level integrity alerts
4              table-level integrity alerts
4              data samples
3              data summaries
3              memory use monitoring
3              backward tracing (provenance)
2              forward tracing
2              golden data/logic testing
2              step-through debugging
2              latency alerts
1              latency profiling
1              overhead profiling
1              trial runs

Our Approach

- Goal: a programming framework for adding these behaviors, and others, to Pig
- Precept: avoid modifying Pig or tampering with data flowing through Pig
- Approach: perform Pig script rewriting; insert special UDFs that look like no-ops to Pig
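A minimal Python sketch of that precept: the inserted hook behaves as an identity function over records, and all observation happens on a side channel. Coordinator, make_agent, and the check function are hypothetical names for illustration, not IG's real API:

```python
class Coordinator:
    """Collects side-channel messages from agents embedded in a dataflow."""
    def __init__(self):
        self.alerts = []
    def receive(self, source, message):
        self.alerts.append((source, message))

def make_agent(name, coordinator, check):
    """Wrap an instrumentation point as a pass-through observer.

    The returned function is a no-op from the dataflow's perspective:
    it returns every record unchanged, and only reports on the side."""
    def agent(record):
        msg = check(record)
        if msg:
            coordinator.receive(name, msg)
        return record              # the record itself is never modified
    return agent

coord = Coordinator()
null_url_check = lambda r: "null url" if r.get("url") is None else None
observe = make_agent("after-load", coord, null_url_check)

records = [{"url": "www.cnn.com"}, {"url": None}]
out = [observe(r) for r in records]
assert out == records              # looks like a no-op to the dataflow
print(coord.alerts)                # [('after-load', 'null url')]
```

The design point mirrors the slide: because the hook is observably a no-op on the data, it can be spliced into an existing script without modifying the engine or the records flowing through it.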

Pig w/ Inspector Gadget

The operators of the Pig dataflow (load, filter, load, join, group, count, store) are instrumented with agents that exchange messages with a central IG coordinator.

Example: Integrity Alerts

An agent embedded in the dataflow raises an alert; the IG coordinator propagates the alert to the user.

Example: Crash Culprit Determination

- Phases 1 to n-1: agents send record counts; the coordinator maintains count lower bounds
- Phase n: agents send the records themselves; the coordinator maintains the last-seen records
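The phase-n half of the scheme can be sketched in Python (illustrative only, not IG's implementation): the coordinator keeps the last record each agent reported, so after a crash those records point at the likely culprit.

```python
class CulpritCoordinator:
    """Remembers the last record reported from each instrumentation point."""
    def __init__(self):
        self.last_seen = {}
    def receive_message(self, source, record):
        self.last_seen[source] = record   # maintain last-seen records

def run_step(records, process, point, coord):
    """Run a dataflow step with an agent reporting each record first."""
    for r in records:
        coord.receive_message(point, r)   # agent reports before processing
        process(r)                        # may crash on a bad record

coord = CulpritCoordinator()
def process(r):
    if r == "bad":
        raise ValueError("UDF crashed")

try:
    run_step(["a", "b", "bad", "c"], process, "before-udf", coord)
except ValueError:
    pass
print(coord.last_seen)   # {'before-udf': 'bad'} -- the crash culprit
```

The earlier phases exist because shipping every record is expensive: counts alone narrow the crash down to a region of the data, and only the final phase pays full per-record cost.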

Example: Forward Tracing

The coordinator sends tracing instructions to the agents; as records flow through the dataflow (load, filter, join, group, count, store), agents pass traced records along, and the coordinator reports the traced records to the user.

Flow

The end user hands a dataflow program + application parameters to an application built on the IG driver library. The driver launches instrumented dataflow run(s) on the dataflow engine, whose runtime hosts the agents and the IG coordinator; raw result(s) come back to the driver, which returns the final result to the end user.

Agent & Coordinator APIs

Agent class:
    init(args)
    tags = observeRecord(record, tags)
    receiveMessage(source, message)
    finish()

Agent messaging:
    sendToCoordinator(message)
    sendToAgent(agentId, message)
    sendDownstream(message)
    sendUpstream(message)

Coordinator class:
    init(args)
    receiveMessage(source, message)
    output = finish()

Coordinator messaging:
    sendToAgent(agentId, message)
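Rendered in Python for concreteness, a small "data samples" application over these APIs might look like the following sketch. The real framework is Java running inside Pig; the class names and snake_case method names below are my approximations of the API above:

```python
class SampleAgent:
    """Keeps the first k records seen at an instrumentation point."""
    def __init__(self, agent_id, coordinator, k=2):      # init(args)
        self.agent_id, self.coordinator, self.k = agent_id, coordinator, k
        self.seen = 0
    def observe_record(self, record, tags):              # tags = observeRecord(record, tags)
        self.seen += 1
        if self.seen <= self.k:
            self.send_to_coordinator(("sample", record))
        return tags                                      # pass tags through unchanged
    def send_to_coordinator(self, message):              # sendToCoordinator(message)
        self.coordinator.receive_message(self.agent_id, message)
    def finish(self):                                    # finish()
        self.send_to_coordinator(("count", self.seen))

class SampleCoordinator:
    def __init__(self):                                  # init(args)
        self.samples, self.counts = [], {}
    def receive_message(self, source, message):          # receiveMessage(source, message)
        kind, payload = message
        if kind == "sample":
            self.samples.append((source, payload))
        else:
            self.counts[source] = payload
    def finish(self):                                    # output = finish()
        return {"samples": self.samples, "counts": self.counts}

coord = SampleCoordinator()
agent = SampleAgent("after-join", coord, k=2)
for rec in ["r1", "r2", "r3", "r4"]:
    agent.observe_record(rec, tags=[])
agent.finish()
print(coord.finish())
```

Note how the agent never alters records or tags beyond passing them through, matching the framework's no-op precept; all application behavior lives in the messages.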

Applications Developed For IG

# of requests  feature                        lines of code (Java)
7              crash culprit determination    141
5              row-level integrity alerts     89
4              table-level integrity alerts   99
4              data samples                   97
3              data summaries                 130
3              memory use monitoring          N/A
3              backward tracing (provenance)  237
2              forward tracing                114
2              golden data/logic testing      200
2              step-through debugging         N/A
2              latency alerts                 168
1              latency profiling              136
1              overhead profiling             124
1              trial runs                     93

RubySky: Scripting Framework with Built-in Debugging Support

Scenario: ad-hoc data analysis, e.g. "we just did a crawl; is it too biased toward Yahoo content?"

In this scenario, you're sometimes using new data and always running new code:
→ Bugs are the norm
→ Debugging should be a first-class citizen!

RubySky Example

:Crawl_data <= load(context, "/crawl_data", schema {string :url; ...})

# quick-and-dirty determination of "domain" from "url"
# (e.g. http://www.yahoo.com/index.html => yahoo.com)
:With_domains <= Crawl_data.foreach() do |r|
  url = r[:url]
  prefix = "http://www."
  if (url[0, prefix.len] == prefix)
    domain = url[prefix.len, url.index('/', prefix.len) - prefix.len]
  else
    throw "Don't know how to parse this url: " + url
  end
  Tuple.new([url, domain])
end

:Yahoo_only <= With_domains.filter do |r|
  r[:domain] =~ /^yahoo/
end

:Grouped <= Yahoo_only.group

:Grouped.each { |r| puts "NUM. YAHOO URLs: " + r[:yahoo_only].size }

RubySky Example: w/ debug stmts

...
:With_domains <= Crawl_data.foreach() do |r|
  ...
  else
    domain = PUNT url
  end
  Tuple.new([url, domain])
end

# examine a few extracted domains, as sanity check
:Domain_sample <= With_domains.foreach() do |r|
  $count += 1
  ($count < SAMPLE_SIZE) ? [Tuple.new([r[:domain]])] : []
end
SHOW Domain_sample

# alert if any extracted domain is empty or null
:Domain_missing <= With_domains.foreach() do |r|
  (r[:domain] == "" or r[:domain] == nil) ?
    [Tuple.new(["ALERT! Missing domain from: " + r[:url]])] : []
end
SHOW Domain_missing

...

RubySky Execution

The client launches the dataflow: load → extract domain → filter → group → count. A spy agent attached to the extract-domain step sends domain samples and empty-domain alerts back to the client; "punt" requests also flow to the client, which responds with code and/or data; the final answer (# of Yahoo URLs) is returned to the client.

Summary

The stack, with the systems presented:
- workflow manager, e.g. Nova → Nova
- dataflow programming framework, e.g. Pig → Pig
- distributed sorting & hashing, e.g. Map-Reduce
- scalable file system, e.g. GFS

Debugging aids:
- Before: example data generator → Dataflow Illustrator
- During: instrumentation framework → Inspector Gadget, RubySky
- After: provenance metadata manager → Ibis

Related Work

- Pig: DryadLINQ, Hive, Jaql, Scope, relational query languages
- Nova: BigTable, CBP, Oozie, Percolator, scientific workflow, incremental view maintenance
- Dataflow illustrator: [Mannila/Raiha, PODS 86], reverse query processing, constraint databases, hardware verification & model checking
- Inspector Gadget, RubySky: XTrace, taint tracking, aspect-oriented programming
- Ibis: Kepler COMAD, ZOOM user views, provenance management for databases & scientific workflows

Collaborators Shubham Chopra Tyson Condie Anish Das Sarma Alan Gates Pradeep Kamath Ravi Kumar Shravan Narayanamurthy Olga Natkovich Benjamin Reed Santhosh Srinivasan Utkarsh Srivastava Andrew Tomkins