Hadoop implementation of the MapReduce computational model. Ján Vaňo




What is MapReduce?
A computational model published by Google in a 2004 paper
Based on distributed computation
Complements Google's distributed file system (GFS)
Works with key:value pairs
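To make the key:value idea concrete, here is a minimal single-machine sketch in plain Java (not Hadoop code; the example data is illustrative): the map step emits (word, 1) pairs, grouping by key stands in for the shuffle, and the reduce step sums each group.

```java
import java.util.*;
import java.util.stream.*;

public class MiniMapReduce {
  public static void main(String[] args) {
    List<String> lines = List.of("to be or not to be");

    // map: each line -> a stream of (word, 1) pairs
    Stream<Map.Entry<String, Integer>> mapped = lines.stream()
        .flatMap(line -> Arrays.stream(line.split("\\s+")))
        .map(word -> Map.entry(word, 1));

    // shuffle + reduce: group pairs by key, sum the values per key
    Map<String, Integer> reduced = mapped.collect(
        Collectors.groupingBy(Map.Entry::getKey,
            Collectors.summingInt(Map.Entry::getValue)));

    System.out.println(reduced); // e.g. {be=2, not=1, or=1, to=2} (order may vary)
  }
}
```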

Why MapReduce?
An answer to the Big Data problem
Runs on commodity hardware
Very scalable solution

MapReduce model


MapReduce alternatives
Hadoop - top-level Apache project
Spark - UC Berkeley project
Disco - open-source Nokia project
MapReduce-MPI - US Department of Energy project
MARIANE - academic project at Binghamton University
Phoenix - Stanford University project
BashReduce - MapReduce for standard Unix commands

MapReduce vs. RDBMS

Data structure
Structured data - data organized into entities that have a defined format; the realm of RDBMS
Semi-structured data - there may be a schema, but it is often ignored and serves only as a guide to the structure of the data
Unstructured data - doesn't have any particular internal structure
MapReduce works well with semi-structured and unstructured data.

What is Hadoop?
A software platform that lets one easily write and run applications that process vast amounts of data
The most popular implementation of MapReduce so far

Why Hadoop?
It has been an Apache top-level project for a long time (6 years)
Hadoop Ecosystem
Hadoop-exclusive technologies

Why Hadoop?
Scalable: it can reliably store and process petabytes
Economical: it distributes the data and processing across clusters of commonly available computers (in the thousands)
Efficient: by distributing the data, it can process it in parallel on the nodes where the data is located
Reliable: it automatically maintains multiple copies of data and automatically redeploys computing tasks after failures

Brief history
2002 - Nutch project started (open-source web search engine) by Doug Cutting
2003 - GFS (Google File System) paper published
2004 - Implementation of a GFS-like file system started
2004 - Google published the MapReduce paper
2005 - Working implementations of MapReduce and of GFS (NDFS)
2006 - System became applicable beyond the realm of search
2006 - Nutch code moved to the new Hadoop project; Doug Cutting joined Yahoo!
2008 - Yahoo!'s production search index generated by a 10,000-core Hadoop cluster
2008 - Hadoop moved under the Apache Foundation
April 2008 - Hadoop broke the world record for the fastest sort of 1 TB of data (209 seconds, previously 297)
November 2008 - Google's implementation sorted 1 TB in 68 seconds
May 2009 - A Yahoo! team sorted 1 TB in 62 seconds

1 TB sort by Hadoop

Who uses Hadoop?

Assumptions
Hardware will fail
Processing will be run in batches; thus there is an emphasis on high throughput as opposed to low latency
Applications that run on HDFS have large data sets; a typical file in HDFS is gigabytes to terabytes in size
It should provide high aggregate data bandwidth and scale to hundreds of nodes in a single cluster; it should support tens of millions of files in a single instance
Applications need a write-once-read-many access model
Moving computation is cheaper than moving data
Portability is important
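The write-once-read-many model is visible directly in the HDFS client API (org.apache.hadoop.fs): a file is created, written, and closed, after which any number of clients can read it. A minimal sketch; the path and replication factor here are illustrative, not prescribed by the deck:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsDemo {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();    // picks up core-site.xml / hdfs-site.xml
    FileSystem fs = FileSystem.get(conf);
    Path p = new Path("/user/demo/sample.txt");  // hypothetical path

    // Write once: create the file with 3 replicas, write, close.
    try (FSDataOutputStream out = fs.create(p, (short) 3)) {
      out.writeUTF("hello HDFS");
    }

    // Read many: the closed file can now be opened by any client.
    try (FSDataInputStream in = fs.open(p)) {
      System.out.println(in.readUTF());
    }
  }
}
```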

Hadoop modules
Hadoop Common - contains libraries and utilities needed by the other Hadoop modules
Hadoop Distributed File System (HDFS) - a distributed file system that stores data on commodity machines, providing very high aggregate bandwidth across the cluster
Hadoop MapReduce - a programming model for large-scale data processing
Hadoop YARN - a resource-management platform responsible for managing compute resources in clusters and using them to schedule users' applications; provides the base for MapReduce v2

Hadoop architecture

HDFS architecture

HDFS architecture (reading)

HDFS architecture (writing)

Data replication in HDFS

How to use Hadoop MapReduce?
Implement two basic functions: Map and Reduce
Implement a driver class
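For concreteness, here is essentially the canonical WordCount job from the Apache Hadoop tutorial: a Mapper, a Reducer (reused as a combiner), and a driver in main() that configures and submits the job.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Map: for each input line, emit a (word, 1) pair per token.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reduce: sum the counts collected for each word.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  // Driver: configures the job and submits it to the cluster.
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class); // local pre-aggregation
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

A job like this is typically packaged into a jar and launched with something like `hadoop jar wordcount.jar WordCount <input dir> <output dir>`.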

MapReduce structure


Job submission (MapReduce v1)

Job submission (MapReduce v1)
Client applications submit jobs to the JobTracker
The JobTracker talks to the NameNode to determine the location of the data
The JobTracker locates TaskTracker nodes with available slots at or near the data
The JobTracker submits the work to the chosen TaskTracker nodes
The TaskTracker nodes are monitored; if they do not submit heartbeat signals often enough, they are deemed to have failed and the work is scheduled on a different TaskTracker

Job submission (MapReduce v1)
A TaskTracker notifies the JobTracker when a task fails. The JobTracker decides what to do then: it may resubmit the job elsewhere, it may mark that specific record as something to avoid, and it may even blacklist the TaskTracker as unreliable
When the work is completed, the JobTracker updates its status
Client applications can poll the JobTracker for information
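In the current org.apache.hadoop.mapreduce API, this polling looks roughly like the sketch below: the job is submitted asynchronously and the client asks for progress until it completes. The helper method and polling interval are illustrative assumptions, not part of the deck.

```java
import org.apache.hadoop.mapreduce.Job;

public class JobMonitor {
  // Minimal polling sketch; assumes `job` was configured as in the
  // WordCount driver above.
  static void runAndPoll(Job job) throws Exception {
    job.submit();                    // non-blocking submission
    while (!job.isComplete()) {      // ask the cluster for current status
      System.out.printf("map %3.0f%%  reduce %3.0f%%%n",
          job.mapProgress() * 100, job.reduceProgress() * 100);
      Thread.sleep(5000);            // arbitrary polling interval
    }
    System.out.println(job.isSuccessful() ? "job succeeded" : "job failed");
  }
}
```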

Job flow (MapReduce v1)

Job flow (MapReduce v2)

Hadoop Ecosystem
Apache Avro (serialization system for persistent data)
Apache Pig (high-level dataflow query language)
Apache Hive (data warehouse infrastructure)

Hadoop Ecosystem
Apache HBase (database for real-time access)
Apache Sqoop (tool for moving data from SQL databases to Hadoop and back)
Apache ZooKeeper (distributed coordination service providing high availability; a library for building distributed systems)

Hadoop-exclusive technologies
YARN - Yet Another Resource Negotiator
HDFS federation - the ability to partition the namespace across several namenodes to support a high number of files
HDFS high availability - techniques for removing the namenode as the single point of failure

Examples of production use
Yahoo!: more than 100,000 CPUs in ~20,000 computers running Hadoop; biggest cluster: 2,000 nodes (2*4-CPU boxes with 4 TB of disk each); used to support research for ad systems and web search
Facebook: stores copies of internal log and dimension data sources and uses them as a source for reporting/analytics and machine learning; 320-machine cluster with 2,560 cores and about 1.3 PB of raw storage

Size of releases

Hadoop
+ Framework for applications on large clusters
+ Built for commodity hardware
+ Provides reliability and data motion
+ Implements a computational paradigm named Map/Reduce
+ Has its very own distributed file system (HDFS) with very high aggregate bandwidth across the cluster
+ Failures are handled automatically

Hadoop
- Time-consuming development
- Documentation is sufficient, but not the most helpful
- HDFS is complicated and has plenty of issues of its own
- Debugging a failure is a "nightmare"
- Large clusters require a dedicated team to keep them running properly
- Writing a Hadoop job becomes a software engineering task rather than a data analysis task