CS246: Mining Massive Datasets Jure Leskovec, Stanford University http://cs246.stanford.edu

2 [Diagram: the classical setup for machine learning, statistics, and classical data mining on a single machine: CPU, memory, and disk]

3 20+ billion web pages x 20KB = 400+ TB. One computer reads 30-35 MB/sec from disk, so it would take ~4 months just to read the web, and ~1,000 hard drives just to store it. Even more is needed to do something useful with the data.
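
As a quick sanity check of those numbers, here is a back-of-the-envelope calculation in Python; the 35 MB/sec read rate and the ~400 GB-per-drive capacity are assumptions matching the slide's era, not measurements.

    pages = 20e9              # 20+ billion web pages
    page_size = 20 * 1024     # ~20 KB per page, in bytes
    total_bytes = pages * page_size
    read_rate = 35e6          # ~30-35 MB/sec sequential read from one disk

    print(total_bytes / 1e12, "TB total")                                   # ~410 TB
    print(total_bytes / read_rate / (60 * 60 * 24 * 30), "months to read")  # ~4.5 months
    print(total_bytes / 400e9, "drives at ~400 GB each")                    # ~1,000 drives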

4 Web data sets are massive: tens to hundreds of terabytes, which cannot be mined on a single server. A standard architecture is emerging: a cluster of commodity Linux nodes with a gigabit Ethernet interconnect. How do we organize computations on this architecture and mask issues such as hardware failure?

5 Traditional big-iron box (circa 2003): 8 2GHz Xeons, 64GB RAM, 8TB disk, 758,000 USD. Prototypical Google rack (circa 2003): 176 2GHz Xeons, 176GB RAM, ~7TB disk, 278,000 USD. In Aug 2006 Google had ~450,000 machines.

6 Cluster architecture: each rack contains 16-64 nodes (CPU, memory, disk), with 1 Gbps between any pair of nodes in a rack and a 2-10 Gbps backbone switch between racks. [Diagram: racks of nodes connected through per-rack switches to a backbone switch]

7 Yahoo M45 cluster: Datacenter in a Box (DiB). 1000 nodes, 4000 cores, 3TB RAM, 1.5PB disk. High-bandwidth connection to the Internet. Located on the Yahoo! campus. Among the world's top 50 supercomputers.

8 Large-scale computing for data mining problems on commodity hardware: PCs connected in a network, processing huge datasets on many computers. Challenges: How do you distribute computation? Distributed/parallel programming is hard. Machines fail. MapReduce addresses all of the above: it is Google's computational/data manipulation model and an elegant way to work with big data.

9 Implications of such a computing environment: Single-machine performance does not matter; just add more machines. Machines break: one server may stay up 3 years (1,000 days), so if you have 1,000 servers, expect to lose one per day. How can we make it easy to write distributed programs?

10 Idea: Bring computation close to the data, and store files multiple times for reliability. Need: a programming model (Map-Reduce) and infrastructure (a file system: Google's GFS, Hadoop's HDFS).

11 Problem: If nodes fail, how to store data persistently? Answer: Distributed File System: Provides global file namespace Google GFS; Hadoop HDFS; Kosmix KFS Typical usage pattern Huge files (100s of GB to TB) Data is rarely updated in place Reads and appends are common

12 Chunk servers: the file is split into contiguous chunks, typically 16-64MB each. Each chunk is replicated (usually 2x or 3x), with replicas kept in different racks if possible. Master node (a.k.a. Name Node in Hadoop's HDFS): stores metadata, and might itself be replicated. Client library for file access: talks to the master to find chunk servers, then connects directly to chunk servers to access data.

13 Reliable distributed file system for petabyte scale: data is kept in chunks spread across thousands of machines, and each chunk is replicated on different machines, giving seamless recovery from disk or machine failure. [Diagram: chunks C0-C5 and D0-D1 replicated across chunk servers 1 through N] Bring computation directly to the data!

14 We have a large file of words: one word per line Count the number of times each distinct word appears in the file Sample application: Analyze web server logs to find popular URLs

15 Case 1: Entire file fits in memory. Case 2: File too large for memory, but all <word, count> pairs fit in memory. Case 3: File on disk, too many distinct words to fit in memory: sort datafile | uniq -c

16 Suppose we have a large corpus of documents and want to count occurrences of words: words(docs/*) | sort | uniq -c where words takes a file and outputs the words in it, one per line. This captures the essence of MapReduce, and the great thing is that it is naturally parallelizable.

17 Read a lot of data Map: Extract something you care about Shuffle and Sort Reduce: Aggregate, summarize, filter or transform Write the result Outline stays the same, map and reduce change to fit the problem

18 The program specifies two primary methods: Map(k, v) -> <k', v'>* and Reduce(k', <v'>*) -> <k', v''>*. All values v' with the same key k' are reduced together and processed in v' order.

19 MAP (provided by the programmer): reads input and produces a set of key-value pairs. Group by key: collect all pairs with the same key. Reduce (provided by the programmer): collect all values belonging to the key and output. The big document is read sequentially (only sequential reads). Example input: "The crew of the space shuttle Endeavor recently returned to Earth as ambassadors, harbingers of a new era of space exploration. Scientists at NASA are saying that the recent assembly of the Dextre bot is the first step in a long-term space-based man/machine partnership. 'The work we're doing now -- the robotics we're doing -- is what we're going to need to do to build any work station or habitat structure on the moon or Mars,' said Allard Beutel." Map output: (the, 1) (crew, 1) (of, 1) (the, 1) (space, 1) (shuttle, 1) (Endeavor, 1) (recently, 1) ... After grouping by key: (crew, 1) (crew, 1) (space, 1) (the, 1) (the, 1) (the, 1) (shuttle, 1) (recently, 1) ... Reduce output: (crew, 2) (space, 1) (the, 3) (shuttle, 1) (recently, 1)

20 Word count pseudocode:

map(key, value):
    // key: document name; value: text of the document
    for each word w in value:
        emit(w, 1)

reduce(key, values):
    // key: a word; values: an iterator over counts
    result = 0
    for each count v in values:
        result += v
    emit(key, result)
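
Below is a minimal single-process sketch of the same word count in Python; the function names (map_fn, reduce_fn, run) and the in-memory shuffle are illustrative stand-ins rather than Hadoop's actual API, but the three phases mirror the pseudocode above.

    from itertools import groupby
    from operator import itemgetter

    def map_fn(doc_name, text):
        # Emit (word, 1) for every word in the document
        for word in text.split():
            yield (word, 1)

    def reduce_fn(word, counts):
        # Sum all the counts collected for one word
        yield (word, sum(counts))

    def run(documents):
        # Map phase: apply map_fn to every (name, text) pair
        pairs = [kv for name, text in documents for kv in map_fn(name, text)]
        # Shuffle/sort phase: bring all values for the same key together
        pairs.sort(key=itemgetter(0))
        # Reduce phase: one reduce_fn call per distinct key
        result = []
        for word, group in groupby(pairs, key=itemgetter(0)):
            result.extend(reduce_fn(word, (count for _, count in group)))
        return result

    docs = [("d1", "the crew of the space shuttle"), ("d2", "the space station")]
    print(run(docs))  # [('crew', 1), ('of', 1), ('shuttle', 1), ('space', 2), ('station', 1), ('the', 3)]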

21 Map-Reduce environment takes care of: Partitioning the input data Scheduling the program s execution across a set of machines Handling machine failures Managing required inter-machine communication Allows programmers without a PhD in parallel and distributed systems to use large distributed clusters

22 Big document MAP: reads input and produces a set of key value pairs Group by key: Collect all pairs with same key Reduce: Collect all values belonging to the key and output

23 The programmer specifies: Map, Reduce, and the input files. Workflow: Read inputs as a set of key-value pairs. Map transforms input kv-pairs into a new set of k'v'-pairs. Sort & shuffle the k'v'-pairs to output nodes; all k'v'-pairs with a given k' are sent to the same reduce. Reduce processes all k'v'-pairs grouped by key into new k''v''-pairs. Write the resulting pairs to files. All phases are distributed, with many tasks doing the work. [Diagram: Input 0/1/2 -> Map 0/1/2 -> Shuffle -> Reduce 0/1 -> Out 0/1]


25 Input and final output are stored on a distributed file system: Scheduler tries to schedule map tasks close to physical storage location of input data Intermediate results are stored on local FS of map and reduce workers Output is often input to another map reduce task

26 Master data structures: Task status: (idle, in-progress, completed) Idle tasks get scheduled as workers become available When a map task completes, it sends the master the location and sizes of its R intermediate files, one for each reducer Master pushes this info to reducers Master pings workers periodically to detect failures

27 Map worker failure Map tasks completed or in-progress at worker are reset to idle Reduce workers are notified when task is rescheduled on another worker Reduce worker failure Only in-progress tasks are reset to idle Master failure MapReduce task is aborted and client is notified

28 M map tasks, R reduce tasks Rule of thumb: Make M and R much larger than the number of nodes in cluster One DFS chunk per map is common Improves dynamic load balancing and speeds recovery from worker failure Usually R is smaller than M because output is spread across R files

29 Fine granularity tasks: map tasks >> machines Minimizes time for fault recovery Can pipeline shuffling with map execution Better dynamic load balancing


41 Example: we want to simulate disease spreading in a network. Input: each line contains a node id and virus parameters. Map: reads a line of input, simulates the virus, and outputs triplets (node id, virus id, hit time). Reduce: collects the triplets per node id to see which nodes are most vulnerable.
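
A hedged sketch of how that job might look, continuing the illustrative Python style used above; simulate_virus is a hypothetical stub (not a real epidemic model) and the tab-separated input format is an assumption, so only the map/reduce shape follows the slide.

    import random

    def simulate_virus(node_id, params):
        # Hypothetical stub: pretend each listed virus hits this node at a random time
        return [(virus_id, random.random()) for virus_id in params.split(",")]

    def map_fn(_, line):
        # Assumed input line format: "node_id<TAB>comma-separated virus parameters"
        node_id, params = line.split("\t", 1)
        for virus_id, hit_time in simulate_virus(node_id, params):
            yield (node_id, (virus_id, hit_time))

    def reduce_fn(node_id, hits):
        hits = list(hits)
        # A node hit by many viruses, or hit very early, counts as more vulnerable
        yield (node_id, (len(hits), min(t for _, t in hits)))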

42 Statistical machine translation: Need to count number of times every 5-word sequence occurs in a large corpus of documents Easy with MapReduce: Map: Extract (5-word sequence, count) from document Reduce: Combine counts
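
Sketched in the same illustrative Python style, only the map function changes; the reduce side is identical to the word-count reducer (sum the counts per key).

    def map_fn(_, text):
        # Emit every 5-word sequence in the document with a count of 1
        words = text.split()
        for i in range(len(words) - 4):
            yield (" ".join(words[i:i + 5]), 1)

    def reduce_fn(ngram, counts):
        # Combine the counts for each distinct 5-word sequence
        yield (ngram, sum(counts))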

43 Suppose we have a large web corpus Look at the metadata file Lines of the form (URL, size, date, ) For each host, find the total number of bytes i.e., the sum of the page sizes for all URLs from that host Other examples: Link analysis and graph processing Machine Learning algorithms
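
One possible sketch, again in the illustrative Python style; the tab-separated (URL, size, date) line format is an assumption about the metadata file, not something the slide specifies.

    from urllib.parse import urlparse

    def map_fn(_, line):
        # Key each record by the host part of the URL; the value is the page size in bytes
        url, size = line.split("\t")[:2]
        yield (urlparse(url).netloc, int(size))

    def reduce_fn(host, sizes):
        # Total number of bytes across all URLs from this host
        yield (host, sum(sizes))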

44 Implementations: Google (not available outside Google). Hadoop: an open-source implementation in Java that uses HDFS for stable storage; download: http://lucene.apache.org/hadoop/ Aster Data: a cluster-optimized SQL database that also implements MapReduce.

45 Ability to rent computing by the hour Additional services e.g., persistent storage Amazon s Elastic Compute Cloud (EC2) Aster Data and Hadoop can both be run on EC2 For CS345 (offered next quarter) Amazon will provide free access for the class

46 Problem: Slow workers significantly lengthen the job completion time (other jobs on the machine, bad disks, weird things). Solution: near the end of a phase, spawn backup copies of the remaining tasks; whichever copy finishes first wins. Effect: dramatically shortens job completion time.

47 Backup tasks reduce job time; the system deals with failures.

48 Often a map task will produce many pairs of the form (k,v1), (k,v2), ... for the same key k (e.g., popular words in Word Count). We can save network time by pre-aggregating at the mapper: combine(k1, list(v1)) -> v2, usually the same as the reduce function. This works only if the reduce function is commutative and associative.
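
A small sketch of what the combiner does for word count, in the same illustrative Python style; it is essentially the word-count reducer applied to one map task's local output before anything crosses the network.

    from collections import Counter

    def combine(map_output):
        # map_output: iterable of (word, 1) pairs produced by a single map task
        local_counts = Counter()
        for word, count in map_output:
            local_counts[word] += count
        # Far fewer pairs leave the mapper; the reducer later merges these partial
        # sums, which is valid because addition is commutative and associative.
        return list(local_counts.items())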

49 Inputs to map tasks are created by contiguous splits of the input file. For reduce, we need to ensure that records with the same intermediate key end up at the same worker. The system uses a default partition function: hash(key) mod R. Sometimes it is useful to override it: e.g., hash(hostname(URL)) mod R ensures URLs from the same host end up in the same output file.
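
A sketch of the two partition functions in the illustrative Python style (R is the number of reduce tasks; real Hadoop partitioners are Java classes, so this only shows the idea).

    from urllib.parse import urlparse

    def default_partition(key, R):
        # Default: spread keys roughly evenly across the R reducers
        return hash(key) % R

    def host_partition(url, R):
        # Custom: all URLs from the same host go to the same reducer / output file
        return hash(urlparse(url).netloc) % R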

50 Jeffrey Dean and Sanjay Ghemawat, MapReduce: Simplified Data Processing on Large Clusters http://labs.google.com/papers/mapreduce.html Sanjay Ghemawat, Howard Gobioff, and Shun-Tak Leung, The Google File System http://labs.google.com/papers/gfs.html

51 Hadoop Wiki Introduction http://wiki.apache.org/lucene-hadoop/ Getting Started http://wiki.apache.org/lucene-hadoop/gettingstartedwithhadoop Map/Reduce Overview http://wiki.apache.org/lucene-hadoop/hadoopmapreduce http://wiki.apache.org/lucene-hadoop/hadoopmapredclasses Eclipse Environment http://wiki.apache.org/lucene-hadoop/eclipseenvironment Javadoc http://lucene.apache.org/hadoop/docs/api/

52 Releases from Apache download mirrors: http://www.apache.org/dyn/closer.cgi/lucene/hadoop/ Nightly builds of source: http://people.apache.org/dist/lucene/hadoop/nightly/ Source code from subversion: http://lucene.apache.org/hadoop/version_control.html

53 Related work: the programming model is inspired by functional-language primitives; partitioning/shuffling is similar to many large-scale sorting systems (NOW-Sort ['97]); re-execution for fault tolerance appears in BAD-FS ['04] and TACC ['97]; the locality optimization has parallels with Active Disks ['01] and Diamond ['04]; backup tasks are similar to Eager Scheduling in the Charlotte system ['96]; and dynamic load balancing solves a similar problem as River's distributed queues ['99].