
Jeffrey D. Ullman slides: MapReduce for data-intensive computing

Single-node architecture [Diagram: one machine with CPU, memory, and disk; machine learning/statistics and classical data mining both run within this single-node model.]

Commodity Clusters. Web data sets can be very large: tens to hundreds of terabytes. We cannot mine them on a single server (why?). A standard architecture is emerging: a cluster of commodity Linux nodes with a gigabit Ethernet interconnect. How do we organize computations on this architecture while masking issues such as hardware failure?

Cluster Architecture [Diagram: each rack holds 16-64 commodity nodes (CPU, memory, disk) connected to a rack switch; rack switches connect to a backbone switch. Bandwidth is 1 Gbps between any pair of nodes in a rack, with a 2-10 Gbps backbone between racks.]

Stable storage. First-order problem: if nodes can fail, how can we store data persistently? Answer: a distributed file system, which provides a global file namespace (Google's GFS, Hadoop's HDFS, Kosmix's KFS). Typical usage pattern: huge files (100s of GB to TB); data is rarely updated in place; reads and appends are common.

Distributed File System. Chunk servers: a file is split into contiguous chunks, typically 16-64 MB each; each chunk is replicated (usually 2x or 3x), with replicas kept in different racks where possible. Master node (a.k.a. the Name Node in HDFS): stores metadata and might itself be replicated. Client library for file access: talks to the master to find chunk servers, then connects directly to chunk servers to access the data.
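
To make the chunking and replica-placement idea concrete, here is a minimal single-process Python sketch. This is not GFS or HDFS code; the chunk size, the round-robin placement rule, and the rack names are all illustrative.

    import itertools

    CHUNK_SIZE = 64 * 1024 * 1024  # 64 MB, at the top of the 16-64 MB range

    def split_into_chunks(file_size, chunk_size=CHUNK_SIZE):
        # Return (offset, length) pairs covering the file contiguously.
        return [(off, min(chunk_size, file_size - off))
                for off in range(0, file_size, chunk_size)]

    def place_replicas(chunk_id, racks, replication=3):
        # Spread replicas across distinct racks (round-robin by chunk id);
        # a real master would also weigh load and free disk space.
        start = chunk_id % len(racks)
        return list(itertools.islice(itertools.cycle(racks),
                                     start, start + replication))

    racks = ["rack1", "rack2", "rack3"]
    for i, (off, length) in enumerate(split_into_chunks(200 * 1024 * 1024)):
        print(i, off, length, place_replicas(i, racks))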

Warm-up: Word Count. We have a large file of words, one word per line. Count the number of times each distinct word appears in the file. Sample application: analyze web server logs to find popular URLs.

Word Count (2). Case 1: the entire file fits in memory. Case 2: the file is too large for memory, but all <word, count> pairs fit in memory. Case 3: the file is on disk, with too many distinct words to fit in memory: sort datafile | uniq -c

Word Count (3). To make it slightly harder, suppose we have a large corpus of documents, and we count the number of times each distinct word occurs in the corpus: words(docs/*) | sort | uniq -c, where words takes a file and outputs the words in it, one per line. The above captures the essence of MapReduce, and the great thing is that it is naturally parallelizable.
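
As a point of reference, here is a minimal single-machine Python sketch of the words(docs/*) | sort | uniq -c pipeline, assuming the counts fit in memory; the word regex and the docs/ path are illustrative.

    import glob
    import re
    from collections import Counter

    def words(paths):
        # The "words" step: read each file and yield its words, one at a time.
        for path in paths:
            with open(path) as f:
                for line in f:
                    yield from re.findall(r"[A-Za-z']+", line.lower())

    # "sort | uniq -c" collapses to a hash-based count when it fits in memory.
    counts = Counter(words(glob.glob("docs/*")))
    for word, n in counts.most_common(10):
        print(n, word)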

MapReduce: The Map Step [Diagram: the map function is applied to each input key-value pair and emits zero or more intermediate key-value pairs.]

MapReduce: The Reduce Step [Diagram: intermediate key-value pairs are grouped by key into key-value-list groups; the reduce function is applied to each group to produce the output key-value pairs.]

MapReduce. Input: a set of key/value pairs. The user supplies two functions:

    map(k, v) → list(k1, v1)
    reduce(k1, list(v1)) → v2

(k1, v1) is an intermediate key/value pair. The output is the set of (k1, v2) pairs.
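
The model itself is easy to sketch. Below is a toy single-process Python runner for the specification above; it is illustrative only, with none of the distribution, fault tolerance, or scheduling of a real implementation.

    from collections import defaultdict

    def map_reduce(inputs, map_fn, reduce_fn):
        # Map phase: apply map_fn to every (k, v) input pair.
        intermediate = defaultdict(list)
        for k, v in inputs:
            for k1, v1 in map_fn(k, v):        # map(k, v) -> list(k1, v1)
                intermediate[k1].append(v1)    # group by intermediate key
        # Reduce phase: apply reduce_fn to each (k1, list(v1)).
        return [(k1, reduce_fn(k1, vs)) for k1, vs in intermediate.items()]

    # Word count expressed in this model:
    def wc_map(doc_name, text):
        return [(w, 1) for w in text.split()]

    def wc_reduce(word, counts):
        return sum(counts)

    docs = [("d1", "the cat sat"), ("d2", "the cat ran")]
    print(map_reduce(docs, wc_map, wc_reduce))
    # [('the', 2), ('cat', 2), ('sat', 1), ('ran', 1)]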

Word Count using MapReduce:

    map(key, value):
        // key: document name; value: text of document
        for each word w in value:
            emit(w, 1)

    reduce(key, values):
        // key: a word; values: an iterator over counts
        result = 0
        for each count v in values:
            result += v
        emit(key, result)

Distributed Execution Overview [Diagram: the user program forks a master and worker processes. The master assigns map tasks and reduce tasks to workers. Map workers read input splits (Split 0, Split 1, Split 2) and write intermediate data to local disk; reduce workers remotely read and sort that data, then write the final output files (Output File 0, Output File 1).]

Data flow. Input and final output are stored on a distributed file system; the scheduler tries to schedule map tasks close to the physical storage location of their input data. Intermediate results are stored on the local file systems of the map and reduce workers. The output is often the input to another MapReduce task.

Coordination. The master's data structures track task status: (idle, in-progress, completed). Idle tasks get scheduled as workers become available. When a map task completes, it sends the master the locations and sizes of its R intermediate files, one for each reducer, and the master pushes this information to the reducers. The master pings workers periodically to detect failures.

Failures. Map worker failure: map tasks completed or in progress at the worker are reset to idle, since their intermediate output lived on the failed worker's local disk; reduce workers are notified when a task is rescheduled on another worker. Reduce worker failure: only in-progress tasks are reset to idle, because completed output is already on the distributed file system. Master failure: the MapReduce task is aborted and the client is notified.
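
A hypothetical sketch of the master's bookkeeping from the last two slides; the class and method names are invented for illustration, not taken from any real implementation.

    IDLE, IN_PROGRESS, COMPLETED = "idle", "in-progress", "completed"

    class Master:
        def __init__(self, n_map, n_reduce):
            self.map_status = {m: IDLE for m in range(n_map)}
            self.reduce_status = {r: IDLE for r in range(n_reduce)}
            self.assigned = {}  # worker id -> set of ("map"|"reduce", task id)

        def on_worker_failure(self, worker):
            # Map tasks: both in-progress and completed work is lost, because
            # intermediate files live on the dead worker's local disk.
            # Reduce tasks: only in-progress work is lost; completed output
            # is already on the distributed file system.
            for kind, task in self.assigned.pop(worker, set()):
                if kind == "map":
                    self.map_status[task] = IDLE
                elif self.reduce_status[task] == IN_PROGRESS:
                    self.reduce_status[task] = IDLE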

How many map and reduce tasks? With M map tasks and R reduce tasks, the rule of thumb is to make M and R much larger than the number of nodes in the cluster. One DFS chunk per map task is common; this improves dynamic load balancing and speeds recovery from worker failure. Usually R is smaller than M, because the output is spread across R files.
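
A worked example of this rule of thumb; all numbers are illustrative, and the multiplier for R is just one common heuristic.

    input_bytes = 1 * 1024**4       # 1 TB of input (illustrative)
    chunk_bytes = 64 * 1024**2      # one 64 MB DFS chunk per map task
    nodes = 200

    M = input_bytes // chunk_bytes  # 16384 map tasks: M >> nodes
    R = 5 * nodes                   # 1000 reduce tasks: R < M, R > nodes
    print(M, R)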

Combiners. Often a map task will produce many pairs of the form (k, v1), (k, v2), ... for the same key k, e.g., popular words in Word Count. We can save network time by pre-aggregating at the mapper: combine(k1, list(v1)) → v2. The combiner is usually the same as the reduce function, but this works only if the reduce function is commutative and associative.
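
For word count, mapper-side combining might look like the following sketch, where the local Counter plays the role of combine(k1, list(v1)) → v2.

    from collections import Counter

    def map_with_combiner(doc_name, text):
        # Instead of emitting (w, 1) once per occurrence, pre-aggregate
        # locally so each distinct word crosses the network once per map task.
        local = Counter(text.split())
        return list(local.items())

    print(map_with_combiner("d1", "to be or not to be"))
    # [('to', 2), ('be', 2), ('or', 1), ('not', 1)]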

Partition Function. Inputs to map tasks are created by contiguous splits of the input file. For reduce, we need to ensure that records with the same intermediate key end up at the same worker. The system uses a default partition function, e.g., hash(key) mod R.
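
A minimal sketch of that default partitioner. Note that Python's built-in hash for strings is randomized across processes, so a real system would use a stable hash; within a single run the property we need still holds.

    def partition(key, R):
        # Same key -> same reducer, and keys spread roughly evenly over R.
        return hash(key) % R

    R = 4
    for key in ["apple", "banana", "apple"]:
        print(key, "-> reducer", partition(key, R))
    # "apple" lands on the same reducer both times.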

Exercise 1: Host size. Suppose we have a large web corpus, and look at its metadata file: lines of the form (URL, size, date, ...). For each host, find the total number of bytes, i.e., the sum of the page sizes for all URLs from that host.

Exercise 1: Host size

    map(key, value):
        // key: URL; value: {size, date, ...}
        emit(hostname(key), value.size)

    reduce(key, values):
        // key: a hostname; values: an iterator over sizes
        result = 0
        for each size s in values:
            result += s
        emit(key, result)
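
Collapsed onto one machine, the same computation is a few lines of Python; the sample metadata below is made up.

    from collections import defaultdict
    from urllib.parse import urlparse

    metadata = [
        ("http://a.com/x", 1000),
        ("http://a.com/y", 2500),
        ("http://b.org/z", 700),
    ]

    totals = defaultdict(int)
    for url, size in metadata:                  # map: emit (hostname, size)
        totals[urlparse(url).hostname] += size  # group + reduce: sum per host

    print(dict(totals))  # {'a.com': 3500, 'b.org': 700}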

Exercise 2: Distributed Grep Find all occurrences of the given pattern in a very large set of files The map function emits a line if it matches a given pattern. The reduce function is an identity function that just copies the supplied intermediate data to the output.

Exercise 2: Distributed Grep

    map(key, value):
        // key: position of a line within a file; value: the line itself
        if value matches pattern:
            emit(key, value)

    reduce(key, values):
        // identity: copy the supplied intermediate data to the output
        for each v in values:
            emit(key, v)
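
A runnable single-machine sketch of the map side (the reduce side is the identity, so it is omitted); the sample lines and pattern are illustrative.

    import re

    def grep_map(line_no, line, pattern):
        # Emit the line if it matches; otherwise emit nothing.
        if re.search(pattern, line):
            yield (line_no, line)

    lines = ["error: disk full", "ok", "error: timeout"]
    matches = [kv for i, ln in enumerate(lines)
               for kv in grep_map(i, ln, r"error")]
    print(matches)  # [(0, 'error: disk full'), (2, 'error: timeout')]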

Exercise 3: Graph reversal. Given a directed graph as an adjacency list:

    src1: dest11, dest12, ...
    src2: dest21, dest22, ...

construct the graph in which all the links are reversed.

Exercise 3: Graph reversal. The map function outputs a <target, source> pair for each link to a target URL found in a page named "source". The reduce function concatenates the list of all source URLs associated with a given target URL and emits the pair <target, list(source)>.
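
Collapsed onto one machine, the map step (emit <target, source>) and reduce step (concatenate) amount to the following sketch, with a made-up adjacency list.

    from collections import defaultdict

    graph = {"src1": ["a", "b"], "src2": ["a"]}

    reversed_graph = defaultdict(list)
    for source, targets in graph.items():
        for target in targets:                    # map: emit (target, source)
            reversed_graph[target].append(source) # reduce: concatenate sources

    print(dict(reversed_graph))  # {'a': ['src1', 'src2'], 'b': ['src1']}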

Exercise 4: Inverted index. Suppose we have a large web corpus in which each document is identified by an ID. For each word appearing in the corpus, return the list of doc IDs in which the word occurs.

Exercise 4: Inverted index. The map function parses each document and emits a sequence of <word, doc ID> pairs. The reduce function accepts all pairs for a given word, sorts the corresponding document IDs, and emits a <word, list(doc ID)> pair. The set of all output pairs forms a simple inverted index.
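
A single-machine sketch of the same idea, with a made-up two-document corpus; doc IDs are collected in a set and sorted only at output time.

    from collections import defaultdict

    docs = {"d1": "the cat sat", "d2": "the dog"}

    index = defaultdict(set)
    for doc_id, text in docs.items():
        for word in text.split():    # map: emit (word, doc_id)
            index[word].add(doc_id)  # reduce: collect doc IDs per word

    for word in sorted(index):
        print(word, sorted(index[word]))
    # cat ['d1'] / dog ['d2'] / sat ['d1'] / the ['d1', 'd2']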

Implementations.
Google: not available outside Google.
Hadoop: an open-source implementation in Java; uses HDFS for stable storage. Download:
Aster Data: a cluster-optimized SQL database that also implements MapReduce.

Cloud Computing. The ability to rent computing by the hour, with additional services, e.g., persistent storage. We will be using Amazon's Elastic Compute Cloud (EC2). Aster Data and Hadoop can both be run on EC2.

Reading.
Jeffrey Dean and Sanjay Ghemawat, MapReduce: Simplified Data Processing on Large Clusters. http://labs.google.com/papers/mapreduce.html
Sanjay Ghemawat, Howard Gobioff, and Shun-Tak Leung, The Google File System. http://labs.google.com/papers/gfs.html

From the Apache Hadoop webpage. What Is Hadoop? The Apache Hadoop project develops open-source software for reliable, scalable, distributed computing. Hadoop includes these subprojects:
Hadoop Common: The common utilities that support the other Hadoop subprojects.
Avro: A data serialization system that provides dynamic integration with scripting languages.
Chukwa: A data collection system for managing large distributed systems.
HBase: A scalable, distributed database that supports structured data storage for large tables.
http://hadoop.apache.org/

From the Apache Hadoop webpage. Hadoop includes these subprojects:
HDFS: A distributed file system that provides high-throughput access to application data.
Hive: A data warehouse infrastructure that provides data summarization and ad hoc querying.
MapReduce: A software framework for distributed processing of large data sets on compute clusters.
Pig: A high-level data-flow language and execution framework for parallel computation.
ZooKeeper: A high-performance coordination service for distributed applications.