Lecture 3 Hadoop Technical Introduction CSE 490H




Announcements
- My office hours: M 2:30-3:30 in CSE 212
- Cluster is operational; instructions in assignment 1 heavily rewritten
- Eclipse plugin is deprecated
- Students who already created accounts: let me know if you have trouble

Breaking news!
- Hadoop tested on a 4,000-node cluster
- 32K cores (8 / node)
- 16 PB raw storage (4 x 1 TB disks / node), about 5 PB usable
- http://developer.yahoo.com/blogs/hadoop/2008/09/scaling_hadoop_to_4000_nodes_a.html

You Say, "Tomato"
Google calls it:    Hadoop equivalent:
MapReduce           Hadoop
GFS                 HDFS
Bigtable            HBase
Chubby              ZooKeeper

Some MapReduce Terminology
- Job: a full program, an execution of a Mapper and Reducer across a data set
- Task: an execution of a Mapper or a Reducer on a slice of data, a.k.a. Task-In-Progress (TIP)
- Task Attempt: a particular instance of an attempt to execute a task on a machine

Terminology Example
- Running "Word Count" across 20 files is one job
- 20 files to be mapped imply 20 map tasks, plus some number of reduce tasks
- At least 20 map task attempts will be performed; more if a machine crashes, etc.

Task Attempts
- A particular task will be attempted at least once, possibly more times if it crashes
- If the same input causes crashes over and over, that input will eventually be abandoned
- Multiple attempts at one task may occur in parallel, with speculative execution turned on
- Task ID from TaskInProgress is not a unique identifier; don't use it that way

MapReduce: High Level

Node-to-Node Communication
- Hadoop uses its own RPC protocol
- All communication begins in slave nodes: prevents circular-wait deadlock; slaves periodically poll for status message
- Classes must provide explicit serialization

Nodes, Trackers, Tasks
- Master node runs JobTracker instance, which accepts Job requests from clients
- TaskTracker instances run on slave nodes
- TaskTracker forks a separate Java process for task instances

Job Distribution
- MapReduce programs are contained in a Java jar file plus an XML file containing serialized program configuration options
- Running a MapReduce job places these files into HDFS and notifies TaskTrackers where to retrieve the relevant program code
- Where's the data distribution?

Data Distribution
- Implicit in design of MapReduce!
- All mappers are equivalent, so map whatever data is local to a particular node in HDFS
- If lots of data does happen to pile up on the same node, nearby nodes will map instead
- Data transfer is handled implicitly by HDFS

Configuring With JobConf
- MR programs have many configurable options
- JobConf objects hold (key, value) pairs mapping a String name to a value, e.g., mapred.map.tasks -> 20
- JobConf is serialized and distributed before running the job
- Objects implementing JobConfigurable can retrieve elements from a JobConf
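A minimal sketch of setting and reading a JobConf entry with the old org.apache.hadoop.mapred API this lecture uses (the driver class name MyJob is a placeholder):

    JobConf conf = new JobConf(MyJob.class);        // locates the job's jar via this class
    conf.set("mapred.map.tasks", "20");             // plain String -> String entry
    int maps = conf.getInt("mapred.map.tasks", 2);  // read it back, defaulting to 2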

What Happens In MapReduce? Depth First

Job Launch Process: Client
- Client program creates a JobConf
- Identify classes implementing Mapper and Reducer interfaces: JobConf.setMapperClass(), setReducerClass()
- Specify inputs, outputs: FileInputFormat.addInputPath(), FileOutputFormat.setOutputPath()
- Optionally, other options too: JobConf.setNumReduceTasks(), JobConf.setOutputFormat()
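Putting those calls together, a hedged sketch of a complete WordCount driver (WordCountMapper and WordCountReducer are sketched later in this lecture; the input/output paths are placeholders):

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.*;

    public class WordCount {
      public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(WordCount.class);   // jar located via this class
        conf.setJobName("wordcount");
        conf.setMapperClass(WordCountMapper.class);
        conf.setReducerClass(WordCountReducer.class);
        conf.setOutputKeyClass(Text.class);            // reduce output key type
        conf.setOutputValueClass(IntWritable.class);   // reduce output value type
        FileInputFormat.addInputPath(conf, new Path("/user/me/input"));     // placeholder
        FileOutputFormat.setOutputPath(conf, new Path("/user/me/output"));  // placeholder
        conf.setNumReduceTasks(4);                     // optional
        JobClient.runJob(conf);                        // submit and wait
      }
    }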

Job Launch Process: JobClient
- Pass JobConf to JobClient.runJob() or submitJob(): runJob() blocks, submitJob() does not
- JobClient determines proper division of input into InputSplits
- Sends job data to master JobTracker server
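A sketch of the two submission paths (exception handling elided):

    RunningJob done = JobClient.runJob(conf);   // blocks until the job finishes

    JobClient jc = new JobClient(conf);
    RunningJob handle = jc.submitJob(conf);     // returns immediately
    while (!handle.isComplete())                // caller polls for completion
      Thread.sleep(5000);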

Job Launch Process: JobTracker
- Inserts jar and JobConf (serialized to XML) in shared location
- Posts a JobInProgress to its run queue

Job Launch Process: TaskTracker
- TaskTrackers running on slave nodes periodically query JobTracker for work
- Retrieve job-specific jar and config
- Launch task in a separate instance of Java: main() is provided by Hadoop

Job Launch Process: Task
- TaskTracker.Child.main():
- Sets up the child TaskInProgress attempt
- Reads XML configuration
- Connects back to necessary MapReduce components via RPC
- Uses TaskRunner to launch user process

Job Launch Process: TaskRunner
- TaskRunner, MapTaskRunner, MapRunner work in a daisy-chain to launch your Mapper
- Task knows ahead of time which InputSplits it should be mapping
- Calls Mapper once for each record retrieved from the InputSplit
- Running the Reducer is much the same

Creating the Mapper
- You provide the instance of Mapper; it should extend MapReduceBase
- One instance of your Mapper is initialized by the MapTaskRunner for a TaskInProgress
- Exists in a separate process from all other instances of Mapper: no data sharing!

Mapper
- void map(K1 key, V1 value, OutputCollector<K2, V2> output, Reporter reporter)
- K types implement WritableComparable
- V types implement Writable
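The classic WordCount Mapper against this interface, as a sketch:

    import java.io.IOException;
    import java.util.StringTokenizer;
    import org.apache.hadoop.io.*;
    import org.apache.hadoop.mapred.*;

    public class WordCountMapper extends MapReduceBase
        implements Mapper<LongWritable, Text, Text, IntWritable> {
      private final static IntWritable ONE = new IntWritable(1);
      private final Text word = new Text();
      public void map(LongWritable key, Text value,
                      OutputCollector<Text, IntWritable> output, Reporter reporter)
          throws IOException {
        StringTokenizer itr = new StringTokenizer(value.toString());
        while (itr.hasMoreTokens()) {
          word.set(itr.nextToken());
          output.collect(word, ONE);   // emit (word, 1) for each token
        }
      }
    }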

What is Writable?
- Hadoop defines its own "box" classes for strings (Text), integers (IntWritable), etc.
- All values are instances of Writable
- All keys are instances of WritableComparable
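Boxing and unboxing, for example:

    Text word = new Text("hadoop");           // box a String
    IntWritable count = new IntWritable(42);  // box an int
    String s = word.toString();               // back to a String
    int n = count.get();                      // back to an int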

Getting Data To The Mapper

Reading Data
- Data sets are specified by InputFormats
- Defines input data (e.g., a directory)
- Identifies partitions of the data that form an InputSplit
- Factory for RecordReader objects to extract (k, v) records from the input source

FileInputFormat and Friends
- TextInputFormat: treats each \n-terminated line of a file as a value
- KeyValueTextInputFormat: maps \n-terminated text lines of "k SEP v"
- SequenceFileInputFormat: binary file of (k, v) pairs with some additional metadata
- SequenceFileAsTextInputFormat: same, but maps (k.toString(), v.toString())
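Selected on the JobConf, for example:

    conf.setInputFormat(KeyValueTextInputFormat.class);  // lines parsed as "k SEP v" (tab by default)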

Filtering File Inputs
- FileInputFormat will read all files out of a specified directory and send them to the mapper
- Delegates filtering this file list to a method subclasses may override
- e.g., create your own XyzFileInputFormat to read *.xyz from directory list
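A hedged sketch of such a subclass; the exact overridable hook (listStatus() here) has moved between old-API Hadoop versions, so treat this as illustrative:

    import java.io.IOException;
    import java.util.*;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.mapred.*;

    public class XyzFileInputFormat extends TextInputFormat {
      @Override
      protected FileStatus[] listStatus(JobConf job) throws IOException {
        List<FileStatus> kept = new ArrayList<FileStatus>();
        for (FileStatus f : super.listStatus(job))     // the unfiltered file list
          if (f.getPath().getName().endsWith(".xyz"))  // keep only *.xyz files
            kept.add(f);
        return kept.toArray(new FileStatus[kept.size()]);
      }
    }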

Record Readers
- Each InputFormat provides its own RecordReader implementation
- Provides (unused?) capability multiplexing
- LineRecordReader: reads a line from a text file
- KeyValueRecordReader: used by KeyValueTextInputFormat

Input Split Size
- FileInputFormat will divide large files into chunks; exact size controlled by mapred.min.split.size
- RecordReaders receive file, offset, and length of chunk
- Custom InputFormat implementations may override split size, e.g., "NeverChunkFile"
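For instance, a sketch of both knobs (the class name mirrors the slide's hypothetical "NeverChunkFile"):

    conf.set("mapred.min.split.size", "134217728");  // ask for splits of at least 128 MB

    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapred.TextInputFormat;

    public class NeverChunkFileInputFormat extends TextInputFormat {
      @Override
      protected boolean isSplitable(FileSystem fs, Path file) {
        return false;  // never split: each file becomes exactly one InputSplit
      }
    }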

Sending Data To Reducers
- Map function receives OutputCollector object; OutputCollector.collect() takes (k, v) elements
- Any (WritableComparable, Writable) can be used
- By default, mapper output type assumed to be same as reducer output type
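When the map output types differ from the reduce output types, declare them explicitly on the JobConf:

    conf.setMapOutputKeyClass(Text.class);           // map-side key type
    conf.setMapOutputValueClass(IntWritable.class);  // map-side value type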

WritableComparator
- Compares WritableComparable data; will call WritableComparable.compare()
- Can provide fast path for serialized data
- JobConf.setOutputValueGroupingComparator()
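A hedged sketch of a custom grouping comparator; the case-insensitive behavior is purely illustrative:

    import org.apache.hadoop.io.*;

    public class CaseInsensitiveTextComparator extends WritableComparator {
      public CaseInsensitiveTextComparator() {
        super(Text.class, true);  // true: instantiate keys so compare() sees deserialized objects
      }
      @Override
      public int compare(WritableComparable a, WritableComparable b) {
        return a.toString().compareToIgnoreCase(b.toString());
      }
    }
    // registered via: conf.setOutputValueGroupingComparator(CaseInsensitiveTextComparator.class);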

Sending Data To The Client
- Reporter object sent to Mapper allows simple asynchronous feedback
- incrCounter(Enum key, long amount)
- setStatus(String msg)
- Allows self-identification of input: InputSplit getInputSplit()
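For example, from inside map() (the counter enum is an illustrative placeholder, declared at class level):

    enum RecordCounters { MALFORMED }

    reporter.incrCounter(RecordCounters.MALFORMED, 1);  // bump a job-wide counter
    reporter.setStatus("still working");                // free-text progress message
    InputSplit split = reporter.getInputSplit();        // which split produced this record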

Partition And Shuffle!

Partitioner
- int getPartition(key, val, numPartitions): outputs the partition number for a given key
- One partition == values sent to one Reduce task
- HashPartitioner used by default: uses key.hashCode() to return partition num
- JobConf sets Partitioner implementation
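A hedged sketch of a custom Partitioner; routing by the key's first character is an illustrative choice, not anything the framework requires:

    import org.apache.hadoop.io.*;
    import org.apache.hadoop.mapred.*;

    public class FirstCharPartitioner implements Partitioner<Text, IntWritable> {
      public void configure(JobConf job) { }  // required by JobConfigurable; nothing to do
      public int getPartition(Text key, IntWritable value, int numPartitions) {
        String s = key.toString();
        int h = (s.length() == 0) ? 0 : s.charAt(0);
        return (h & Integer.MAX_VALUE) % numPartitions;  // non-negative partition number
      }
    }
    // registered via: conf.setPartitionerClass(FirstCharPartitioner.class);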

Reduction
- reduce(K2 key, Iterator<V2> values, OutputCollector<K3, V3> output, Reporter reporter)
- Keys & values sent to one partition all go to the same reduce task
- Calls are sorted by key: "earlier" keys are reduced and output before "later" keys
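The matching WordCount Reducer, as a sketch (same imports as the Mapper sketch above, plus java.util.Iterator):

    public class WordCountReducer extends MapReduceBase
        implements Reducer<Text, IntWritable, Text, IntWritable> {
      public void reduce(Text key, Iterator<IntWritable> values,
                         OutputCollector<Text, IntWritable> output, Reporter reporter)
          throws IOException {
        int sum = 0;
        while (values.hasNext())
          sum += values.next().get();  // add up the 1s emitted by the mapper
        output.collect(key, new IntWritable(sum));
      }
    }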

Finally: Writing The Output

OutputFormat
- Analogous to InputFormat
- TextOutputFormat: writes "key val\n" strings to output file
- SequenceFileOutputFormat: uses a binary format to pack (k, v) pairs
- NullOutputFormat: discards output
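Chosen on the JobConf, for example:

    conf.setOutputFormat(SequenceFileOutputFormat.class);  // binary (k, v) output
    // or, to discard output entirely:
    // conf.setOutputFormat(NullOutputFormat.class);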

Questions?