
Data-Intensive Information Processing Applications, Session #2
Hadoop: Nuts and Bolts
Jimmy Lin, University of Maryland, Tuesday, February 2, 2010
This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States license. See http://creativecommons.org/licenses/by-nc-sa/3.0/us/ for details.

Hadoop Programming
Remember strong Java programming as a prerequisite? But this course is not about programming!
Focus on thinking at scale and algorithm design
We'll expect you to pick up Hadoop (quickly) along the way
How do I learn Hadoop? This session: brief overview. White's book. RTFM, RTFC(!)

Source: Wikipedia (Mahout)

Basic Hadoop API*
Mapper:
void map(K1 key, V1 value, OutputCollector<K2, V2> output, Reporter reporter)
void configure(JobConf job)
void close() throws IOException
Reducer/Combiner:
void reduce(K2 key, Iterator<V2> values, OutputCollector<K3, V3> output, Reporter reporter)
void configure(JobConf job)
void close() throws IOException
Partitioner:
int getPartition(K2 key, V2 value, int numPartitions)
*Note: forthcoming API changes

Data Types in Hadoop
Writable: defines a de/serialization protocol. Every data type in Hadoop is a Writable.
WritableComparable: defines a sort order. All keys must be of this type (but not values).
IntWritable, LongWritable, Text: concrete classes for different data types.
SequenceFiles: binary-encoded sequences of key/value pairs.

"Hello World": Word Count
Map(String docid, String text):
for each word w in text:
Emit(w, 1);
Reduce(String term, Iterator<Int> values):
int sum = 0;
for each v in values:
sum += v;
Emit(term, sum);
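The pseudocode above can be sketched as a small local simulation of the map → shuffle → reduce flow. This is plain Python with no Hadoop dependency; the function names are illustrative, not part of any Hadoop API:

```python
from collections import defaultdict

def map_phase(docid, text):
    # Emit (word, 1) for each word, as in the Map pseudocode
    for word in text.split():
        yield (word, 1)

def shuffle(pairs):
    # Group values by key, as the framework does between map and reduce
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(term, values):
    # Sum the counts for one term, as in the Reduce pseudocode
    return (term, sum(values))

def word_count(docs):
    pairs = [p for docid, text in docs.items() for p in map_phase(docid, text)]
    return dict(reduce_phase(t, vs) for t, vs in shuffle(pairs).items())

counts = word_count({"d1": "hello world", "d2": "hello hadoop"})
# counts == {"hello": 2, "world": 1, "hadoop": 1}
```

The real framework does the grouping (shuffle) for you; only the map and reduce functions are user code.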

Three Gotchas
Avoid object creation, at all costs
The execution framework reuses value objects in the reducer
Passing parameters into mappers and reducers: use DistributedCache for larger (static) data
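The second gotcha (value reuse in the reducer) can be illustrated outside Hadoop: if the framework hands your reducer the same mutable object on every iteration, as Hadoop does with value objects, then storing references instead of copies silently corrupts the data. A minimal sketch with illustrative names:

```python
def values_with_reuse(raw_values):
    # Simulates Hadoop's reducer value iterator: the SAME list object
    # is refilled and handed out on every iteration.
    shared = []
    for v in raw_values:
        shared.clear()
        shared.append(v)
        yield shared

raw = [1, 2, 3]

# Buggy: stores references to the reused object; after iteration,
# every element reflects only the last value
kept = [v for v in values_with_reuse(raw)]
# kept == [[3], [3], [3]]

# Correct: copy before keeping
kept_ok = [list(v) for v in values_with_reuse(raw)]
# kept_ok == [[1], [2], [3]]
```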

Complex Data Types in Hadoop
How do you implement complex data types?
The easiest way: encode them as Text, e.g., (a, b) = a:b. Use regular expressions to parse and extract the data. Works, but pretty hack-ish.
The hard way: define a custom implementation of WritableComparable. Must implement: readFields, write, compareTo. Computationally efficient, but slow for rapid prototyping.
Alternatives: Cloud9 offers two other choices: Tuple and JSON. (Actually, not that useful in practice.)
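The "easiest way" above, encoding a pair as delimited Text, might look like this (plain Python sketch; the colon delimiter follows the slide's (a, b) = a:b example, and the helper names are made up):

```python
import re

def encode_pair(a, b):
    # Encode (a, b) as "a:b", as in the slide's example
    return f"{a}:{b}"

def decode_pair(text):
    # Parse back out with a regular expression; hack-ish, as noted:
    # this breaks if the first field itself contains the delimiter
    m = re.fullmatch(r"([^:]*):(.*)", text)
    if m is None:
        raise ValueError(f"not a pair: {text!r}")
    return m.group(1), m.group(2)

assert decode_pair(encode_pair("apple", "42")) == ("apple", "42")
```

The fragility around embedded delimiters is exactly why the custom WritableComparable route exists, despite being more work.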

Basic Cluster Components
One of each: Namenode (NN), Jobtracker (JT)
Set of each per slave machine: Tasktracker (TT), Datanode (DN)

Putting everything together
[Diagram: a namenode (running the namenode daemon) and a job submission node (running the jobtracker); three slave nodes, each running a tasktracker and a datanode daemon on top of the Linux file system]

Anatomy of a Job
A MapReduce program in Hadoop = a Hadoop job
Jobs are divided into map and reduce tasks
An instance of a running task is called a task attempt
Multiple jobs can be composed into a workflow
Job submission process:
The client (i.e., driver program) creates a job, configures it, and submits it to the jobtracker
JobClient computes input splits (on the client end)
Job data (jar, configuration XML) are sent to the JobTracker
The JobTracker puts job data in a shared location and enqueues tasks
TaskTrackers poll for tasks
Off to the races
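The split computation step ("JobClient computes input splits") amounts to slicing the input into fixed-size byte ranges. A simplified sketch; real Hadoop also aligns splits with HDFS block boundaries and lets the RecordReader handle records that straddle a split, both ignored here:

```python
def compute_splits(file_size, split_size):
    # Divide [0, file_size) into (offset, length) ranges,
    # each at most split_size bytes
    splits = []
    offset = 0
    while offset < file_size:
        length = min(split_size, file_size - offset)
        splits.append((offset, length))
        offset += length
    return splits

# A 250-byte file with 100-byte splits:
# compute_splits(250, 100) == [(0, 100), (100, 100), (200, 50)]
```

Each split becomes one map task, which is why the number of map tasks is driven by input size rather than set directly.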

[Diagram: Input Files → InputFormat → InputSplits → RecordReaders → Mappers → Intermediates]
Source: redrawn from a slide by Cloudera, cc-licensed

[Diagram: Mappers → Intermediates → Partitioners → Intermediates → Reducers (combiners omitted here)]
Source: redrawn from a slide by Cloudera, cc-licensed

[Diagram: Reducers → OutputFormat → RecordWriters → Output Files]
Source: redrawn from a slide by Cloudera, cc-licensed

Input and Output
InputFormat: TextInputFormat, KeyValueTextInputFormat, SequenceFileInputFormat
OutputFormat: TextOutputFormat, SequenceFileOutputFormat
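As an example of what an InputFormat decides, KeyValueTextInputFormat treats everything before the first tab on a line as the key and the rest as the value. A sketch of that parsing rule (plain Python; the function name is illustrative):

```python
def parse_key_value_line(line, separator="\t"):
    # Split on the FIRST separator only; a line with no separator
    # yields the whole line as the key and an empty value
    key, _sep, value = line.rstrip("\n").partition(separator)
    return key, value

assert parse_key_value_line("apple\t3\n") == ("apple", "3")
assert parse_key_value_line("no-tab-here") == ("no-tab-here", "")
```

TextInputFormat, by contrast, uses the byte offset of the line as the key and the whole line as the value.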

Shuffle and Sort in Hadoop
Probably the most complex aspect of MapReduce!
Map side:
Map outputs are buffered in memory in a circular buffer
When the buffer reaches a threshold, its contents are spilled to disk
Spills are merged into a single, partitioned file (sorted within each partition); the combiner runs here
Reduce side:
First, map outputs are copied over to the reducer machine
The sort is a multi-pass merge of map outputs (happens in memory and on disk); the combiner runs here
The final merge pass goes directly into the reducer
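The map-side behavior above can be sketched as: fill a bounded buffer, sort and spill it as a run each time it fills, then merge the sorted runs. A plain Python sketch (the buffer limit and names are illustrative; runs are lists here rather than on-disk spill files, and the multi-pass merge is collapsed into one heap-based merge):

```python
import heapq

def sort_and_spill(records, buffer_limit):
    # Fill a bounded in-memory buffer; each time it fills,
    # sort it and "spill" it as one sorted run
    runs, buffer = [], []
    for rec in records:
        buffer.append(rec)
        if len(buffer) >= buffer_limit:
            runs.append(sorted(buffer))
            buffer = []
    if buffer:
        runs.append(sorted(buffer))
    return runs

def merge_runs(runs):
    # Merge the sorted runs into one fully sorted stream
    return list(heapq.merge(*runs))

records = [("b", 1), ("a", 1), ("c", 1), ("a", 2), ("b", 2)]
runs = sort_and_spill(records, buffer_limit=2)
merged = merge_runs(runs)
# merged is fully sorted by key, so each reducer sees each key's
# values contiguously
```

The same merge-of-sorted-runs idea appears again on the reduce side, over the map outputs copied from each mapper.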

Hadoop Workflow
1. Load data into HDFS
2. Develop code locally
3. Submit MapReduce job
3a. Go back to Step 2
4. Retrieve data from HDFS
[Diagram: you on one side, the Hadoop cluster on the other]

On Amazon: With EC2
0. Allocate Hadoop cluster
1. Load data into HDFS
2. Develop code locally
3. Submit MapReduce job
3a. Go back to Step 2
4. Retrieve data from HDFS
5. Clean up!
Uh oh. Where did the data go?
[Diagram: you on one side, your Hadoop cluster on EC2 on the other]

On Amazon: EC2 and S3
[Diagram: your Hadoop cluster runs on EC2 (the cloud); S3 is the persistent store. Copy from S3 to HDFS before the job; copy from HDFS back to S3 afterward.]

Debugging Hadoop
First, take a deep breath
Start small, start locally
Strategies:
Learn to use the webapp
Where does println go? Don't use println, use logging
Throw RuntimeExceptions
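A minimal illustration of the println-vs-logging point, with Python's stdlib logging standing in for Hadoop's Java-side log4j (in a real job, log output lands in the per-task logs, which you browse through the webapp; the function and logger names here are made up):

```python
import logging

# Configure once per process; in Hadoop this is log4j configuration
logging.basicConfig(level=logging.INFO)
log = logging.getLogger("my.mapper")

def process_record(record):
    # Logged lines carry a level and a logger name, so they can be
    # filtered and redirected -- unlike bare print() output
    log.info("processing record: %r", record)
    if record is None:
        # Fail loudly on bad input (cf. "throw RuntimeExceptions")
        raise ValueError("unexpected empty record")
    return record.upper()

assert process_record("abc") == "ABC"
```

Failing fast on bad records, rather than swallowing them, is what makes the error visible in the webapp's task logs at all.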

Recap
Hadoop data types
Anatomy of a Hadoop job
Hadoop jobs, end to end
Software development workflow

Source: Wikipedia (Japanese rock garden) Questions?