Relational Data Processing on MapReduce

V. Christophides, Department of Computer Science, University of Crete

Peta-scale Data Analysis (2014 IBM Corporation)
- 12+ TB of tweet data every day
- 25+ TB of log data every day
- 30 billion RFID tags today (1.3 billion in 2005)
- 4.6 billion camera phones worldwide
- A new user added to the Web every second for 3 years; 2+ billion people on the Web by end of 2011
- YouTube: 4 billion views/day, making it the 2nd most used search engine after Google
- 76 million smart meters in 2009, 200 million by 2014
- 100s of millions of GPS-enabled devices sold annually

A Bird's Eye View of Big Data
- The three domains of information (Source: Barry Devlin, "The Big Data Zoo: Taming the Beasts")
- Positioning Big Data

Big Data Analysis
- A lot of these datasets have some structure: query logs, point-of-sale records, user data (e.g., demographics)
- How do we perform data analysis at scale? Relational databases and SQL, or MapReduce (Hadoop)

Relational Databases vs. MapReduce
Relational databases:
- Multipurpose: analysis and transactions; batch and interactive
- Data integrity via ACID transactions
- Lots of tools in the software ecosystem (for ingesting, reporting, etc.)
- Support SQL (and SQL integration, e.g., JDBC)
- Automatic SQL query optimization
MapReduce (Hadoop):
- Designed for large clusters; fault tolerant
- Data is accessed in native format
- Supports many query languages
- Programmers retain control over performance
- Open source

Parallel Relational Databases vs. MapReduce
Parallel relational databases:
- Schema on write
- Failures are relatively infrequent
- Possessive of data
- Mostly proprietary
- Shared-nothing architecture for parallel processing
MapReduce:
- Schema on read
- Failures are relatively common
- In situ data processing
- Open source

Hadoop NextGen (YARN) architecture (figure)

Computation and Data Size Matters!

Database Workloads
OLTP (online transaction processing):
- Typical applications: e-commerce, banking, airline reservations
- User facing: real-time, low latency, highly concurrent
- Tasks: relatively small set of standard transactional queries
- Data access pattern: random reads, updates, writes (involving relatively small amounts of data)
OLAP (online analytical processing):
- Typical applications: business intelligence, data mining
- Back-end processing: batch workloads, less concurrency
- Tasks: complex analytical queries, often ad hoc
- Data access pattern: table scans, large amounts of data involved per query

One Database or Two?
Downsides of co-existing OLTP and OLAP workloads:
- Poor memory management
- Conflicting data access patterns
- Variable latency
Solution: separate databases
- User-facing OLTP database for high-volume transactions
- Data warehouse for OLAP workloads
How do we connect the two?

OLTP/OLAP Integration
OLTP -> ETL (Extract, Transform, Load) -> OLAP
- OLTP database for user-facing transactions: retain records of all activity
- Periodic ETL (e.g., nightly)
- Extract-Transform-Load (ETL): extract records from the source; transform (clean data, check integrity, aggregate, etc.); load into the OLAP database
- OLAP database for data warehousing: business intelligence (reporting, ad hoc queries, data mining, etc.)
- Feedback to improve OLTP services

A Closer Look at ETL (figure)

MapReduce: A Major Step Backwards?
- MapReduce is a step backward in database access: schemas are good; separation of the schema from the application is good; high-level access languages are good
- MapReduce is a poor implementation: brute force and only brute force (no indexes, for example)
- MapReduce is missing features: bulk loader, indexing, updates, transactions...
- MapReduce is incompatible with DBMS tools

OLTP/OLAP Architecture: Hadoop?
OLTP -> ETL (Extract, Transform, Load) -> OLAP
Where does Hadoop fit in this picture?

OLTP/OLAP/Hadoop Architecture
OLTP -> ETL (Extract, Transform, Load) -> Hadoop -> OLAP
Why does this make sense?

ETL Bottleneck
- Reporting is often a nightly task; ETL is often slow: why? What happens if processing 24 hours of data takes longer than 24 hours?
- Often, with noisy datasets, ETL is the analysis!
- ETL necessarily involves brute-force data scans: L, then E and T?
- Most likely, you already have some data warehousing solution
- Hadoop is a good fit for ETL: ingest is limited only by the speed of HDFS; it scales out with more nodes; it is massively parallel; any processing tool can be used; it is much cheaper than parallel databases; and ETL is a batch process anyway!

MapReduce Algorithms for Processing Relational Data

MapReduce Dataflow (figure)
Mappers emit key/value pairs; optional combiners pre-aggregate each mapper's output locally; partitioners assign keys to reducers; the shuffle-and-sort phase aggregates values by key; reducers produce the final output.
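The dataflow in the figure can be sketched as a small in-process simulation. This is a minimal sketch, not Hadoop: `run_mapreduce` and the word-count functions are illustrative names, and the shuffle is a plain dictionary rather than a distributed sort.

```python
from collections import defaultdict

def run_mapreduce(inputs, mapper, reducer, combiner=None):
    """Simulate the MapReduce dataflow: map, optional combine,
    shuffle-and-sort (aggregate values by key), then reduce."""
    # Map phase: each input record produces (key, value) pairs.
    mapped = [list(mapper(record)) for record in inputs]
    # Combine phase: pre-aggregate each mapper's output locally.
    if combiner:
        mapped = [list(combiner(pairs)) for pairs in mapped]
    # Shuffle and sort: aggregate values by key across all mappers.
    groups = defaultdict(list)
    for pairs in mapped:
        for k, v in pairs:
            groups[k].append(v)
    # Reduce phase: one call per key, in sorted key order.
    return {k: reducer(k, vs) for k, vs in sorted(groups.items())}

# Word count, the canonical example.
def mapper(line):
    for word in line.split():
        yield word, 1

def combiner(pairs):
    local = defaultdict(int)
    for k, v in pairs:
        local[k] += v
    return local.items()

def reducer(key, values):
    return sum(values)

counts = run_mapreduce(["a b c", "c a c", "b c"], mapper, reducer, combiner)
# counts == {'a': 2, 'b': 2, 'c': 4}
```

The combiner runs on each mapper's local output before the shuffle, so fewer pairs cross the network; the reducer sees the same grouped values either way.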

Secondary Sorting
- MapReduce sorts input to reducers by key; values are arbitrarily ordered
- What if we want to sort by value as well? E.g., k -> (v1, R), (v3, R), (v4, R), (v8, R)
- Solution 1: buffer the values in memory, then sort. Why is this a bad idea?
- Solution 2: "value-to-key conversion": extend the key with part of the value and let the execution framework do the sorting; preserve state across multiple key-value pairs in the reducer. Anything else we need to do?

Value-to-Key Conversion
Before: k -> (v1, R), (v4, R), (v8, R), (v3, R); values arrive in arbitrary order
After: (k, v1) -> (v1, R); (k, v3) -> (v3, R); (k, v4) -> (v4, R); (k, v8) -> (v8, R); values arrive in sorted order
Process by preserving state across multiple keys!
The default comparator, grouping comparator, and partitioner have to be tuned to use the appropriate part of the key
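Value-to-key conversion can be sketched as follows. This is an assumption-laden simulation: in real Hadoop the sort is done by the framework with a custom partitioner, sort comparator, and grouping comparator, whereas here an explicit `sorted` call plays that role.

```python
def shuffle_with_secondary_sort(pairs):
    """pairs: iterable of ((natural_key, secondary), record).
    Sort by the full composite key, but group by the natural key
    only, so each group's records arrive already in order."""
    ordered = sorted(pairs, key=lambda kv: kv[0])  # sort by (key, secondary)
    groups = {}
    for (key, _secondary), record in ordered:
        groups.setdefault(key, []).append(record)  # group by natural key only
    return groups

# Values v1..v8 arrive at the mapper in arbitrary order.
pairs = [(("k", 4), "v4"), (("k", 1), "v1"), (("k", 8), "v8"), (("k", 3), "v3")]
groups = shuffle_with_secondary_sort(pairs)
# groups == {"k": ["v1", "v3", "v4", "v8"]}
```

Buffering avoided: the reducer never has to hold all values for a key and sort them itself, which is exactly the point of Solution 2.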

Working Scenario
Two tables:
- User demographics (gender, age, income, etc.)
- User page visits (URL, time spent, etc.)
Analyses we might want to perform:
- Statistics on demographic characteristics
- Statistics on page visits
- Statistics on page visits by URL
- Statistics on page visits by demographic characteristic

Relational Algebra
www.mathcs.emory.edu/~cheung/courses/377/syllabus/4-relalg/intro.html

Projection
π_S(R): keep only the attributes in S (figure)

Projection in MapReduce
- Easy! Map over tuples, emit new tuples with the appropriate attributes: for each tuple t in R, construct a tuple t' by eliminating those components whose attributes are not in S, and emit a key/value pair (t', t')
- No reducers needed (reducers are the identity function), unless for regrouping or resorting tuples; the reduce operation can also perform duplicate elimination
- Alternatively: perform the projection in the reducer, after some other processing
- Basically limited by HDFS streaming speeds: the speed of encoding/decoding tuples becomes important; relational databases take advantage of compression
- Semi-structured data? No problem!
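A minimal sketch of this map-only job, with illustrative names (`project_mapper`, `dedup_reducer`) and a toy schema; keying each projected tuple by itself is what lets an optional identity reducer deduplicate.

```python
def project_mapper(tuple_, schema, keep):
    """Emit the projected tuple t', keyed by itself (for optional dedup)."""
    t = tuple(v for a, v in zip(schema, tuple_) if a in keep)
    yield t, t

def dedup_reducer(key, values):
    return key  # duplicate elimination: one output tuple per distinct key

schema = ("user", "url", "time")
rows = [("u1", "/a", 10), ("u2", "/a", 5), ("u1", "/a", 10)]
# Collecting into a set plays the role of the deduplicating reducer.
projected = {k for row in rows
               for k, _ in project_mapper(row, schema, {"user", "url"})}
# projected == {("u1", "/a"), ("u2", "/a")}
```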

Selection
σ_C(R): keep only the tuples satisfying condition C (figure)

Selection in MapReduce
- Easy! Map over tuples, emit only the tuples that meet the criteria: for each tuple t in R, check whether t satisfies C and, if so, emit a key/value pair (t, t)
- No reducers needed (reducers are the identity function), unless for regrouping or resorting tuples
- Alternatively: perform the selection in the reducer, after some other processing
- Basically limited by HDFS streaming speeds: the speed of encoding/decoding tuples becomes important; relational databases take advantage of compression
- Semi-structured data? No problem!
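Selection is even simpler: a sketch with a hypothetical `visits` relation and a `time >= 10` predicate standing in for C. The mapper forwards a tuple only if the predicate holds; no reducer runs.

```python
def select_mapper(tuple_, predicate):
    """Emit (t, t) only for tuples satisfying the condition C."""
    if predicate(tuple_):
        yield tuple_, tuple_

visits = [("u1", "/a", 30), ("u2", "/b", 3), ("u3", "/a", 12)]
# sigma_{time >= 10}(visits)
selected = [t for row in visits
              for t, _ in select_mapper(row, lambda r: r[2] >= 10)]
# selected == [("u1", "/a", 30), ("u3", "/a", 12)]
```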

Set Operations in MapReduce
Union (R ∪ S):
- Map: for each tuple t in R or in S, emit (t, t)
- Reduce: receive either (t, [t, t]) or (t, [t]); always emit (t, t). This performs duplicate elimination
Intersection (R ∩ S):
- Map: for each tuple t in R or in S, emit (t, t)
- Reduce: receive either (t, [t, t]) or (t, [t]); emit (t, t) in the former case and nothing in the latter
Difference (R − S):
- Map: for each tuple t, emit (t, "R") if t is in R and (t, "S") if t is in S
- Reduce: receive (t, ["R"]), (t, ["S"]), or (t, ["R", "S"]); emit (t, t) only when (t, ["R"]) was received

Group-by Aggregation
Example: what is the average time spent per URL?
In SQL: SELECT url, AVG(time) FROM visits GROUP BY url
In MapReduce, let R(A, B, C) be a relation to which we apply γ_{A,θ(B)}(R):
- The map operation prepares the grouping (e.g., emit time, keyed by url)
- The grouping is done by the framework
- The reducer computes the aggregation (e.g., average)
- Eventually, optimize with combiners
Simplifying assumptions: one grouping attribute and one aggregation function
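The group-by aggregation above can be sketched directly; the `visits` data and function names are illustrative, and the framework's shuffle is simulated with a dictionary. Note that a combiner for AVG would have to pre-aggregate (sum, count) pairs, since averages of averages are not correct in general.

```python
from collections import defaultdict

def visit_mapper(record):
    """Map: prepare the grouping, emitting time keyed by url."""
    user, url, time = record
    yield url, time

def avg_reducer(url, times):
    """Reduce: compute the aggregation (here, the average)."""
    return url, sum(times) / len(times)

visits = [("u1", "/a", 10), ("u2", "/a", 20), ("u3", "/b", 6)]
groups = defaultdict(list)          # the framework's grouping by key
for rec in visits:
    for k, v in visit_mapper(rec):
        groups[k].append(v)
averages = dict(avg_reducer(k, vs) for k, vs in groups.items())
# averages == {"/a": 15.0, "/b": 6.0}
```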

Relational Joins
R ⋈ S: match the tuples of R and S on the join key (figure)

Types of Relationships
- Many-to-many
- One-to-many
- One-to-one

Join Algorithms in MapReduce
- "Join" usually just means equi-join, but we also want to support other join predicates
- Hadoop has some built-in join support, but our goal is to understand important algorithm design principles
Algorithms:
- Reduce-side join
- Map-side join
- In-memory join (striped variant, memcached variant)

Reduce-side Join
Basic idea: group by join key; the execution framework brings together tuples sharing the same key (similar to a sort-merge join in database terminology)
- The map function receives a record of R or S and emits its join attribute value as the key and the record as the value
- The reduce function receives each join attribute value with its records from R and S, and performs the actual join between them
Two variants: 1-to-1 joins; 1-to-many and many-to-many joins
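A sketch of the reduce-side (repartition) join, with hypothetical relations `users` and `visits`: mappers tag each record with its source relation and emit it under the join key; the reducer forms the cross product of the R-side and S-side groups for each key.

```python
from collections import defaultdict

def join_mapper(record, relation, key_index):
    """Emit the join attribute as key, tagged record as value."""
    yield record[key_index], (relation, record)

def join_reducer(key, tagged):
    """Cross the R-tuples with the S-tuples sharing this join key."""
    r_side = [rec for tag, rec in tagged if tag == "R"]
    s_side = [rec for tag, rec in tagged if tag == "S"]
    for r in r_side:
        for s in s_side:
            yield r + s

users = [(1, "alice"), (2, "bob")]           # R(uid, name)
visits = [(1, "/a"), (1, "/b"), (3, "/c")]   # S(uid, url)
groups = defaultdict(list)                   # simulated shuffle
for rec in users:
    for k, v in join_mapper(rec, "R", 0):
        groups[k].append(v)
for rec in visits:
    for k, v in join_mapper(rec, "S", 0):
        groups[k].append(v)
joined = [t for k, vs in groups.items() for t in join_reducer(k, vs)]
# joined == [(1, "alice", 1, "/a"), (1, "alice", 1, "/b")]
```

Keys with records from only one side (uid 2 and uid 3 here) produce no output, which is exactly inner-join semantics.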

Re-Partition Join (figure)
- HDFS stores the data blocks of datasets A and B (replicas not shown); the two datasets have different join keys
- Each mapper processes one block (split) and produces the join key and record pairs
- Shuffling and sorting happen over the network
- Reducers perform the actual join

Reduce-side Join: 1-to-1 (figure)
Map emits each record keyed by its join attribute; reduce brings together R1 with S2, and R4 with S3. Note: there is no guarantee whether R is going to come first or S!

Reduce-Side Join: 1-to-1
At most one tuple from R and one tuple from S share the same join key, so the reducer will receive keys in the following format:
- k23 -> [(R64, R64), (S84, S84)]
- k37 -> [(R68, R68)]
- k59 -> [(S97, S97), (R81, R81)]
- k61 -> [(S99, S99)]
If there are two values associated with a key, one must be from R and the other from S. What should the reducer do for k37?

Reduce-side Join: 1-to-Many (figure)
R is the "one" side, S is the "many" side: the reducer receives R1 followed by S2, S3, S9. What's the problem?

Reduce-side Join: Value-to-Key Conversion
Without it, the reducer must buffer all values in memory, pick out the tuple from R, and then cross it with every tuple from S to perform the join.
With value-to-key conversion, in the reducer (figure):
- R1 arrives: new key encountered, hold it in memory
- S2, S3, S9 stream by: cross each with the held record from the other set
- R4 arrives: new key encountered, hold it in memory
- S3, S7 stream by: cross each with the held record from the other set
www.inkling.com/read/hadoop-definitive-guide-tom-white-3rd/chapter-8/example-8-9

Reduce-side Join: Many-to-Many
In the reducer: hold all tuples from R (R1, R5, R8) in memory, then cross them with the records from S (S2, S3, S9). What's the problem? This only works if R is the smaller dataset.
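The streaming behavior above can be sketched as follows, under the assumption that R-records are tagged 0 and S-records 1 inside the composite key (names illustrative), so the sort delivers each key's single R tuple before all its S tuples and only that one tuple is ever buffered.

```python
def streaming_join(tagged_pairs):
    """tagged_pairs: ((join_key, tag), record), tag 0 for R, 1 for S.
    The explicit sort stands in for the framework's composite-key sort."""
    out, current_r = [], None
    for (key, tag), rec in sorted(tagged_pairs, key=lambda kv: kv[0]):
        if tag == 0:
            current_r = (key, rec)           # hold only the R tuple in memory
        elif current_r and current_r[0] == key:
            out.append((current_r[1], rec))  # cross with streaming S tuples
    return out

pairs = [(("k1", 1), "s2"), (("k1", 0), "r1"), (("k2", 0), "r4"),
         (("k1", 1), "s3"), (("k2", 1), "s7")]
result = streaming_join(pairs)
# result == [("r1", "s2"), ("r1", "s3"), ("r4", "s7")]
```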

Map-side Join: Basic Idea
What are the limitations of reduce-side joins? Both datasets are transferred over the network.
Assume instead that the two datasets are sorted by the join key: a sequential scan through both datasets suffices to join them. This is called a sort-merge join in database terminology.

Map-side Join: Parallel Scans
- If the datasets are sorted by join key, the join can be accomplished by a scan over both datasets
- How can we accomplish this in parallel? Partition and sort both datasets in the same manner
- In MapReduce: map over one dataset, read from the corresponding partition of the other; no reducers necessary (unless to repartition or resort)
- Consistently partitioned datasets: realistic to expect? It depends on the workflow
- For ad hoc data analysis, reduce-side joins are more general, although less efficient
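The sequential-scan merge can be sketched for one pair of co-partitioned, pre-sorted splits; for simplicity this sketch assumes R has unique join keys (S may repeat them), which is not required by the general algorithm.

```python
def merge_join(r_sorted, s_sorted):
    """Both inputs sorted by join key; one forward scan over each.
    Yields matching (r_rec, s_rec) pairs; no shuffle, no reducer."""
    out, j = [], 0
    for r_key, r_rec in r_sorted:
        while j < len(s_sorted) and s_sorted[j][0] < r_key:
            j += 1                            # advance S past smaller keys
        k = j
        while k < len(s_sorted) and s_sorted[k][0] == r_key:
            out.append((r_rec, s_sorted[k][1]))
            k += 1
    return out

R = [(1, "r1"), (2, "r2"), (4, "r4")]
S = [(1, "s1"), (2, "s2a"), (2, "s2b"), (3, "s3")]
result = merge_join(R, S)
# result == [("r1", "s1"), ("r2", "s2a"), ("r2", "s2b")]
```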

Broadcast/Replication Join (figure)
HDFS stores the data blocks of datasets A and B (replicas not shown); the datasets have different join keys, and only mappers are involved.

In-Memory Join
Basic idea: load one dataset into memory, stream over the other dataset
- Works if R << S and R fits into memory; called a hash join in database terminology
MapReduce implementation:
- Distribute R to all nodes
- Map over S; each mapper loads R into memory, hashed by the join key
- For every tuple in S, look up its join key in R
- No reducers, unless for regrouping or resorting tuples
Downside: need to copy R to all mappers (not so bad, since R is small)
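A sketch of the in-memory hash join from the perspective of one mapper, with hypothetical data: build a hash table on the small relation R (the broadcast copy), then probe it while streaming one split of the large relation S.

```python
def hash_join(small_r, big_s_split):
    """Hash join: build on the small side, probe with the big side."""
    # Build: hash the small relation by join key (what each mapper loads).
    table = {}
    for key, rec in small_r:
        table.setdefault(key, []).append(rec)
    # Probe: stream the big side and look up each join key.
    return [(r, s) for key, s in big_s_split for r in table.get(key, [])]

R = [(1, "alice"), (2, "bob")]                    # small, fits in memory
S = [(1, "/a"), (3, "/c"), (2, "/b"), (1, "/d")]  # one mapper's split of S
result = hash_join(R, S)
# result == [("alice", "/a"), ("bob", "/b"), ("alice", "/d")]
```

Because S is never shuffled, the only network cost is replicating R, which is cheap when R is small.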

In-Memory Join: Variants
Distributed cache:
- An efficient way to copy files to all nodes processing a certain task; part of the job configuration
- Use it to send the small R to all mappers
- Hadoop still needs to move the data to the worker nodes, so use this with care; but it avoids copying the file for every task on the same node
Striped variant, for when R is too big to fit into memory:
- Divide R into R1, R2, R3, ... such that each Rn fits into memory
- Perform an in-memory join for each n: Rn ⋈ S
- Take the union of all join results

Memcached
In order to achieve good performance when accessing distributed key-value stores, it is often necessary to batch queries before making synchronous requests (to amortize latency over many requests), or to rely on asynchronous requests.
- Caching servers: 15 million requests per second, 95% handled by memcache (15 TB of RAM)
- Database layer: 800 eight-core Linux servers running MySQL (40 TB of user data)

Memcached Join
- Load R into memcached; replace the in-memory hash lookup with a memcached lookup
- Capacity and scalability? Memcached capacity >> RAM of an individual node, and memcached scales out with the cluster
- Latency? Memcached is fast (basically, the speed of the network); batch requests to amortize latency costs

Which Join to Use?
- In-memory join > map-side join > reduce-side join. Why?
- Limitations of each? In-memory join: memory. Map-side join: sort order and partitioning. Reduce-side join: general purpose, but sensitive to data skew
- What about non-equi joins? For an inequality predicate (S.A < R.A), the map function just forwards R-tuples, but replicates each S-tuple under all larger R.A values as keys

Problems with the Standard Equi-Join Algorithm
- The degree of parallelism is limited by the number of distinct join values
- Data skew: if one join value dominates, the reducer processing that key becomes a bottleneck
- It does not generalize to other joins

On the Standard Repartition Equi-Join Algorithm
Consider only the pairs with the same join attribute values: the naïve join algorithm vs. the standard repartition join algorithm (Kyuseok Shim, VLDB 2012 tutorial)

Standard Repartition Equi-Join Algorithm (figures)

Reducer-Centric Cost Model
The difference between join implementations starts with the map output.

Optimization Goal: Minimal Job Completion Time
- Job completion time depends on the slowest map and reduce functions
- Balancing the workloads of map functions is easy, so we ignore map functions
- Balance the workloads of the reduce functions as evenly as possible, assuming all reducers are similarly capable
- Processing time at a reducer is approximately monotonic in its input and output size; hence we need to minimize the max-reducer-input or the max-reducer-output
Join problem classification:
- Input-size dominated: minimize max-reducer-input
- Output-size dominated: minimize max-reducer-output
- Input-output balanced: minimize a combination of both
(Kyuseok Shim, VLDB 2012 tutorial)

Join Model
Join matrix M: M(i, j) = true if and only if (si, tj) is in the join result. Cover each true-valued cell by exactly one reducer. (Kyuseok Shim, VLDB 2012 tutorial)

Reduce Allocations for Standard Repartition Equi-Joins (figure)
- Simple/standard allocation: max reduce input size = 5, max reduce output size = 6
- Random allocation: max reduce input size = 8, max reduce output size = 4
- Balanced allocation: max reduce input size = 5, max reduce output size = 4

Comparisons of Reduce Allocation Methods
- Simple allocation: minimizes the maximum input size of the reduce functions, but the output size may be skewed
- Random allocation: minimizes the maximum output size of the reduce functions, but the input size may be increased due to duplication
- Balanced allocation: minimizes both the maximum input and output sizes
(Kyuseok Shim, VLDB 2012 tutorial)

How to Balance Reduce Allocation
- Assume r is the desired number of reduce functions
- Partition the join matrix M into r regions
- A map function sends each record of R and S to its mapped regions
- A reduce function outputs all (r, s) pairs in its value list that satisfy the join predicates
- The M-Bucket-I algorithm [Okcan & Riedewald, SIGMOD 2011] proposes such a partitioning

Processing Relational Data: Summary
MapReduce algorithms for processing relational data:
- Group by, sorting, and partitioning are handled automatically by the shuffle/sort phase of MapReduce
- Selection, projection, and other computations (e.g., aggregation) are performed either in the mapper or the reducer
- Complex operations require multiple MapReduce jobs (example: top ten URLs in terms of average time spent); this creates opportunities for automatic optimization
- Multiple strategies exist for relational joins

Join Implementations on MapReduce
Feng Li, Beng Chin Ooi, M. Tamer Özsu, and Sai Wu. Distributed data management using MapReduce. ACM Computing Surveys 46(3), January 2014

Evolving Roles for Relational Databases and MapReduce (figure)

The Traditional Way: Bringing Data to Compute
From "Evolution from Apache Hadoop to the Enterprise Data Hub", A. Awadallah, Co-Founder & CTO of Cloudera, SMDB 2014

The New Way: Bringing Compute to Data
From "Evolution from Apache Hadoop to the Enterprise Data Hub", A. Awadallah, Co-Founder & CTO of Cloudera, SMDB 2014

OLTP/OLAP/Hadoop Architecture
OLTP -> ETL (Extract, Transform, Load) -> Hadoop -> OLAP
Why does this make sense? Often, with noisy datasets, ETL is the analysis! Note that ETL necessarily involves brute-force data scans: L, then E and T?

Need for High-Level Languages
- Hadoop is great for large-data processing, but writing Java programs for everything is verbose and slow
- Analysts don't want to (or can't) write Java
- Solution: develop higher-level data processing languages
- Hive: HQL is like SQL; Pig: Pig Latin is a bit like Perl

Hive and Pig
Hive: a data warehousing application on Hadoop
- Query language is HQL, a variant of SQL
- Tables are stored on HDFS as flat files
- Developed by Facebook, now open source
Pig: a large-scale data processing system
- Scripts are written in Pig Latin, a dataflow language
- Developed by Yahoo!, now open source
- Roughly 1/3 of all Yahoo! internal jobs
Common idea: provide a higher-level language to facilitate large-data processing; the higher-level language compiles down to Hadoop jobs

Hive: Example
Hive looks similar to an SQL database. A relational join on two tables: a table of word counts from the Shakespeare collection, and a table of word counts from the Bible.

SELECT s.word, s.freq, k.freq
FROM shakespeare s JOIN bible k ON (s.word = k.word)
WHERE s.freq >= 1 AND k.freq >= 1
ORDER BY s.freq DESC LIMIT 10;

the 25848 62394
I 23031 8854
and 19671 38985
to 18038 13526
of 16700 34654
a 14170 8057
you 12702 2720
my 11297 4135
in 10797 12445
is 8882 6884

Source: material drawn from the Cloudera training VM

Hive: Behind the Scenes
The same query is parsed into an abstract syntax tree:

(TOK_QUERY
  (TOK_FROM (TOK_JOIN (TOK_TABREF shakespeare s) (TOK_TABREF bible k)
    (= (. (TOK_TABLE_OR_COL s) word) (. (TOK_TABLE_OR_COL k) word))))
  (TOK_INSERT (TOK_DESTINATION (TOK_DIR TOK_TMP_FILE))
    (TOK_SELECT
      (TOK_SELEXPR (. (TOK_TABLE_OR_COL s) word))
      (TOK_SELEXPR (. (TOK_TABLE_OR_COL s) freq))
      (TOK_SELEXPR (. (TOK_TABLE_OR_COL k) freq)))
    (TOK_WHERE (AND
      (>= (. (TOK_TABLE_OR_COL s) freq) 1)
      (>= (. (TOK_TABLE_OR_COL k) freq) 1)))
    (TOK_ORDERBY (TOK_TABSORTCOLNAMEDESC (. (TOK_TABLE_OR_COL s) freq)))
    (TOK_LIMIT 10)))

...which is then compiled into one or more MapReduce jobs.

Hive: Behind the Scenes (continued)
The resulting query plan (abridged):

STAGE DEPENDENCIES:
  Stage-1 is a root stage
  Stage-2 depends on stages: Stage-1
  Stage-0 is a root stage

STAGE PLANS:
  Stage: Stage-1 (Map Reduce)
    Map Operator Tree:
      s: TableScan (alias: s) -> Filter (predicate: freq >= 1) -> Reduce Output (key: word, sort order: +, partitioned by word, tag: 0, values: freq, word)
      k: TableScan (alias: k) -> Filter (predicate: freq >= 1) -> Reduce Output (key: word, sort order: +, partitioned by word, tag: 1, values: freq)
    Reduce Operator Tree:
      Join (inner join 0 to 1) -> Filter ((_col0 >= 1) and (_col2 >= 1)) -> Select (_col1, _col0, _col2) -> File Output (temporary file)
  Stage: Stage-2 (Map Reduce)
    Map Operator Tree:
      hdfs://localhost:8022/tmp/hive-training/364214370/10002: Reduce Output (key: _col1, sort order: -, tag: -1, values: _col0, _col1, _col2)
    Reduce Operator Tree:
      Extract -> Limit -> File Output
  Stage: Stage-0
    Fetch Operator (limit: 10)

Pig: Example
Task: find the top 10 most visited pages in each category

Pig Query Plan (figure)

Pig Script
visits = load '/data/visits' as (user, url, time);
gvisits = group visits by url;
visitcounts = foreach gvisits generate url, count(visits);
urlinfo = load '/data/urlinfo' as (url, category, prank);
visitcounts = join visitcounts by url, urlinfo by url;
gcategories = group visitcounts by category;
topurls = foreach gcategories generate top(visitcounts,10);
store topurls into '/data/topurls';

Pig Query Plan (figure)

References
- CS9223 Massive Data Analysis, J. Freire & J. Simeon, New York University, 2013
- INFM 718G / CMSC 828G Data-Intensive Computing with MapReduce, J. Lin, University of Maryland, 2013
- CS 6240: Parallel Data Processing in MapReduce, Mirek Riedewald, Northeastern University, 2014
- Extreme Computing, Stratis D. Viglas, University of Edinburgh, 2014
- MapReduce Algorithms for Big Data Analysis, Kyuseok Shim, VLDB 2012 tutorial