!"#$%&' ( )%#*'+,'-#.//"0( !"#$"%&'()*$+()',!-+.'/', 4(5,67,!-+!"89,:*$;'0+$.<.,&0$'09,&)"/=+,!()<>'0, 3, Processing LARGE data sets



!"#$%&' ( Processing LARGE data sets )%#*'+,'-#.//"0( Framework for o! reliable o! scalable o! distributed computation of large data sets 4(5,67,!-+!"89,:*$;'0+$.<.,&0$'09,&)"/=+,!()<>'0, 3,

[Diagram: Reliability: for a given input, the output must still be correct after a component failure. Scalability: service cost.]

Large data sets: > 5 PB. How many hard disks? At 1 TB per disk => 5000 disks! Need more computers!
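The arithmetic behind this slide, as a quick sanity check (decimal units assumed; variable names are illustrative):

```python
# How many 1 TB disks does a 5 PB data set need?
PB = 1000 ** 5            # petabyte in bytes (decimal units)
TB = 1000 ** 4            # terabyte in bytes

data_set = 5 * PB
disk_capacity = 1 * TB

disks_needed = data_set // disk_capacity
print(disks_needed)       # 5000 disks, before any replication
```

With HDFS's default replication factor of 3, the raw capacity requirement triples again.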

Main Parts
- Hadoop Common
- Hadoop Distributed File System (HDFS)
- Hadoop MapReduce

Related Projects
- Avro: data serialization
- Cassandra: scalable multi-master database without SPoF
- Chukwa: data collection system
- HBase: scalable, distributed database
- Hive: data warehouse infrastructure
- Mahout: machine learning & data mining
- Pig: high-level data-flow language & execution framework for parallel computation
- ZooKeeper: coordination service for distributed applications

Users of Hadoop
- Supported by major companies

Many More!
- Let's stop the list at the letter H ;-)

Milestones
- 27 December, 2011: Release 1.0.0 available

Typical Hadoop cluster:
- Consists of commodity hardware
- Heterogeneous
- Single machines are NOT highly available

Failures
- Hardware failures are common: give a cluster enough computers and there will definitely be machines that are non-functional.
- Hadoop: don't even try to use only stable machines; provide fault-tolerant behaviour at the application layer.

-#.//"'DC;(! Fault tolerant! Requires only low-cost hardware! Suitable for large data sets! Is programmed in Java => Runs on many different software platforms 4(5,67,!-+!"89,:*$;'0+$.<.,&0$'09,&)"/=+,!()<>'0, E,

- Optimized for high throughput
- High data access latency
- Not POSIX conformant

Files
- Typical file size: > 1 GB
- Traditional hierarchical organization of files in directories
- A file is split into blocks of equal size (except the last block)
- Typical block size: > 64 MB
- The blocks are replicated across the DFS
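The block-splitting rule above can be illustrated with a small helper (the function name and the file size are made up for illustration; only the last block may be smaller than the block size):

```python
def split_into_blocks(file_size: int, block_size: int) -> list[int]:
    """Return the sizes of the blocks a file of file_size bytes occupies."""
    full, rest = divmod(file_size, block_size)
    blocks = [block_size] * full
    if rest:
        blocks.append(rest)   # the last block may be smaller
    return blocks

MB = 1024 * 1024
# A 200 MB file with 64 MB blocks: three full blocks plus one 8 MB block.
print([b // MB for b in split_into_blocks(200 * MB, 64 * MB)])  # [64, 64, 64, 8]
```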

HDFS Architecture
- Master/slave architecture (rack aware)
- NameNode (master):
  o Manages the file system namespace
  o Handles requests to access files
  o Distributes blocks to DataNodes
  o Handles replication of files
- DataNodes (slaves):
  o Store blocks locally
  o Serve read/write requests
  o Send periodic heartbeats to the NameNode
  o Create, delete, and replicate blocks upon instruction from the NameNode
- Access model:
  o WORM (write once, read many)
  o Streaming data to clients

[Diagram: the client performs metadata ops against the NameNode, which holds the metadata (name, replicas, ...), and read/write block ops directly against the DataNodes in Rack 1 and Rack 2.]

Replication
- Replication factor:
  o Configurable for each file
  o Changeable at any time
- The NameNode handles replication:
  o If not specified: what replication factor?
  o Where to store the replicas?
  o React to failed replicas

[Diagram: NameNode metadata (filename, numReplicas, block-ids, ...), e.g. /users/example/data/part-0, r:2, {1, 3} and /users/example/data/part-1, r:3, {2, 4, 5}; the blocks are spread across the DataNodes.]

Standard Strategies
- Optimizing replication
- Default replication factor = 3
- One replica within the same rack as the original
- One replica on a machine in a different rack
- Last replica on a different machine but in the same rack as the second replica
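The default placement strategy can be sketched as a small function (node and rack names are hypothetical, and the sketch assumes the remote rack has at least two nodes):

```python
def place_replicas(writer, nodes_by_rack):
    """Pick 3 replica locations. nodes_by_rack: dict rack -> list of nodes."""
    local_rack = next(r for r, ns in nodes_by_rack.items() if writer in ns)
    first = writer                                   # same rack as the original
    remote_rack = next(r for r in nodes_by_rack if r != local_rack)
    second = nodes_by_rack[remote_rack][0]           # a different rack
    third = next(n for n in nodes_by_rack[remote_rack] if n != second)
    return [first, second, third]                    # third: same rack as second

racks = {"rack1": ["node1", "node2"], "rack2": ["node3", "node4"]}
print(place_replicas("node1", racks))  # ['node1', 'node3', 'node4']
```

Losing a whole rack thus costs at most two of the three replicas.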

Locality Awareness
- Client wants to read data:
  o HDFS tries to serve requests from the nearest DataNode => reduces bandwidth consumption and access latency
  o Optimal: client on the same machine as a DataNode

SPoF
- The NameNode is a SPoF:
  o Secondary NameNode as backup
  o Requires human interaction => still a SPoF

[Diagram: the Secondary NameNode keeps a backup of the NameNode's metadata (name, replicas, ...) for the clients.]
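The locality preference can be sketched as a selection function (node and rack names are illustrative, not an HDFS API): prefer a replica on the client's own machine, then one in the client's rack, then any replica.

```python
def nearest_replica(client_node, client_rack, replicas, rack_of):
    """replicas: list of node names holding the block; rack_of: node -> rack."""
    for node in replicas:
        if node == client_node:
            return node            # same machine: no network transfer at all
    for node in replicas:
        if rack_of[node] == client_rack:
            return node            # same rack: cheap, local switch only
    return replicas[0]             # otherwise: any replica, cross-rack traffic

rack_of = {"n1": "r1", "n2": "r1", "n3": "r2"}
print(nearest_replica("n2", "r1", ["n3", "n2"], rack_of))  # n2
```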

Reliability of File System Metadata
- Records changes in a transaction log: the EditLog
- Stores the complete file system namespace in the file FsImage
- Keeps a copy of the FsImage in memory
  o < 8 GB suffice
- Checkpoint:
  o Apply the transactions in the EditLog to the FsImage
- Possibility to maintain multiple copies of EditLog & FsImage
- Snapshots:
  o Feature of future releases
  o Copy of the namespace at a particular point in time
  o Possibility to roll back
- Secondary NameNode:
  o Maintains a copy of the primary NameNode
  o Can replace the primary on failure (manual interaction necessary)
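The EditLog/FsImage interplay can be sketched as a tiny in-memory toy (the class and operation names are illustrative, not the NameNode's actual on-disk format or API): mutations append to a log, and a checkpoint folds the log into the image so the log can be truncated.

```python
class Namespace:
    def __init__(self):
        self.fsimage = {}      # checkpointed state: path -> metadata
        self.editlog = []      # transactions since the last checkpoint

    def apply(self, op, path, value=None):
        self.editlog.append((op, path, value))   # record change in the log

    def checkpoint(self):
        for op, path, value in self.editlog:     # replay the EditLog
            if op == "create":
                self.fsimage[path] = value
            elif op == "delete":
                self.fsimage.pop(path, None)
        self.editlog = []                        # log truncated after checkpoint

ns = Namespace()
ns.apply("create", "/users/example/data/part-0", {"replication": 2})
ns.checkpoint()
print(ns.fsimage)  # {'/users/example/data/part-0': {'replication': 2}}
```

After a crash, replaying the surviving EditLog on top of the last FsImage reconstructs the namespace; this is why both files may be kept in multiple copies.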

>#"H1&.7$&( Programming paradigm:! Large distributed computation transformed to sequence of smaller distributed computations on data sets of key/value pairs o! Simplified data processing on large clusters Jeffrey Dean, Sanjay Ghemawat in Communications of the ACM (2008) 4(5,67,!-+!"89,:*$;'0+$.<.,&0$'09,&)"/=+,!()<>'0, 3B,

[Diagram: the master assigns splits 1-6 of the input files to map tasks (map phase); the intermediate key/value pairs are sorted and assigned to reduce tasks (reduce phase), which write output files 1 and 2.]

Two Phases
1. Map: (k1, v1) -> list(k2, v2)
   Split the input data into small chunks; each chunk is processed by a map task => map key/value pairs to a set of intermediate key/value pairs.
2. Reduce: (k2, list(v2)) -> list(v2)
   Reduce the set of intermediate values which share a key to a smaller set.
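The two phases can be sketched as plain Python functions; the toy runner below (`run_mapreduce` is a made-up name, not a Hadoop API) stands in for the framework: it runs every map call, sorts the intermediate pairs by key (the shuffle), and runs reduce once per key group.

```python
from itertools import groupby
from operator import itemgetter

def run_mapreduce(records, map_fn, reduce_fn):
    intermediate = []
    for k1, v1 in records:                          # map phase
        intermediate.extend(map_fn(k1, v1))
    intermediate.sort(key=itemgetter(0))            # shuffle & sort by k2
    return [(k2, reduce_fn(k2, [v for _, v in grp]))  # reduce phase
            for k2, grp in groupby(intermediate, key=itemgetter(0))]

# Example: a tiny inverted index.
# map: (doc, text) -> list of (word, doc); reduce: (word, docs) -> unique docs.
docs = [("d1", "hadoop stores data"), ("d2", "hadoop processes data")]
index = run_mapreduce(
    docs,
    lambda doc, text: [(word, doc) for word in text.split()],
    lambda word, doc_ids: sorted(set(doc_ids)),
)
print(index)
# [('data', ['d1', 'd2']), ('hadoop', ['d1', 'd2']),
#  ('processes', ['d2']), ('stores', ['d1'])]
```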

!"#$%&'()*+$(,Simplified data processing on large clusters ( -#%./0*123(4'56(/0*123(7#&8'9:( (;;(4'5:(<+=8$'20(2#$'( (;;(7#&8':(<+=8$'20(=+20'20>( (!"#$%&'($)"#*$)$+,$-&./%0$ $ $12+34,3%#2%*+&3%5)6$789:;$ (?'<8='(./0*123(4'56(@0'*#0+*(7#&8'>9:( (;;(4'5:(#(A+*<( (;;(7#&8'>:(#(&1>0(+)(=+820>( (+,3$#%</.3$=$>;$ $!"#$%&'($-$+,$-&./%<0$ $ $#%</.3$?=$@&#<%4,35-:;$ $12+35A<B3#+,C5#%</.3::;$ Example Wordcount from hadoop.apache.org! Content of file1: Hello World Bye World! Content of file2: Hello Hadoop Goodbye Hadoop First line Second line < Hello, 1> < World, 1> < Bye, 1> < World, 1> < Hello, 1> < Hadoop, 1> < Goodbye, 1> < Hadoop, 1> < Bye, 1> < Hello, 1> < World, 2> < Goodbye, 1> < Hadoop, 2> < Hello, 1> < Bye, 1> < Goodbye, 1> < Hadoop, 2> < Hello, 2> < World, 2> Data Intermediate key/value pairs Sorted & combined Intermediate key/value pairs Result of Reduce 4(5,67,!-+!"89,:*$;'0+$.<.,&0$'09,&)"/=+,!()<>'0, 3D,

SPoF
- Map/Reduce needs a coordinating master to assign map tasks and reduce tasks => SPoF

[Diagram: the master assigns splits 1-6 to map tasks and routes the intermediate results to reduce tasks, which produce output files 1 and 2.]

Hadoop Cluster Setup
[Diagram: an HDFS layer with one NameNode and several DataNodes, and a MapReduce layer with one JobTracker and one TaskTracker per node.]

Requirements
- Java runtime 1.6
- SSH:
  o public key authentication
  o passphraseless login
- Problems with IPv6 => disable it
- Install Hadoop

Configuration Files
- conf/hadoop-env.sh

  # The java implementation to use. Required.
  export JAVA_HOME=/examplepath/ /java-6-sun

- conf/core-site.xml

  <configuration>
    <property>
      <name>fs.default.name</name>
      <value>hdfs://dfs_master:54310</value>
    </property>
  </configuration>

- conf/mapred-site.xml

  <configuration>
    <property>
      <name>mapred.job.tracker</name>
      <value>mapred_master:54311</value>
    </property>
  </configuration>

- conf/hdfs-site.xml

  <configuration>
    <property>
      <name>dfs.replication</name>
      <value>2</value>
    </property>
  </configuration>

- conf/masters
  o Lists the machines on which secondary NameNodes will be started:

  dfs_master

- conf/slaves
  o Lists all machines on which DataNodes and TaskTrackers are started:

  dfs_master
  mapred_master
  slave1
  slave2
  slave3

- Format the HDFS

NameNode
Execute the following line on the machine that is supposed to run the NameNode (dfs_master):

  $ bin/hadoop namenode -format

- Start the NameNode
Execute the following line on the machine that is supposed to run the NameNode (dfs_master):

  $ bin/start-dfs.sh

This will start the NameNode and secondary NameNodes as well as the DataNodes.

JobTracker
- Start the daemons for MapReduce
Execute the following line on the machine that is supposed to run the JobTracker (mapred_master):

  $ bin/start-mapred.sh

This will start the JobTracker and the TaskTrackers.

OP#<"2&'M/3(! Copy example input files from local fs to HDFS $ bin/hadoop dfs copyfromlocal /input /input! Run MapReduce job wordcount $ bin/hadoop jar hadoop*examples*.jar wordcount /input /output :7*"7*( 4(5,67,!-+!"89,:*$;'0+$.<.,&0$'09,&)"/=+,!()<>'0, 1A,

- Input files:
  o The Outline of Science, Vol. 1 (of 4) by J. Arthur Thomson
  o The Notebooks of Leonardo Da Vinci
  o Ulysses by James Joyce
  o The Art of War by 6th cent. B.C. Sunzi
  o The Adventures of Sherlock Holmes by Sir Arthur Conan Doyle
  o The Devil's Dictionary by Ambrose Bierce
  o Encyclopaedia Britannica, 11th Edition, Volume 4, Part 3
- Four copies of each file to increase the amount of data
- Output: pairs of words and their occurrence counts

HBase
- Database based on Google's BigTable
- Supports random read/write
- Data sets are seldom changed
- Data sets are often appended
- A kind of NoSQL
- A kind of DataStore rather than a DataBase:
  o No advanced query language
  o No typed columns

Distribution
- Huge amounts of data => good utilisation of the cluster
- Automatic division of the tables into regions
- Automatic RegionServer failover

Data Model
- Data is stored in tables:
  o Rows: sorted by row key (primary key)
  o Columns: belong to a column family
- Row keys are byte arrays
- Table cells:
  o contain byte arrays
  o are versioned

Table: webtable

  Row key      | Time stamp | ColumnFamily contents     | ColumnFamily anchor
  com.cnn.www  | t9         |                           | anchor:cnnsi.com = "CNN"
  com.cnn.www  | t8         |                           | anchor:my.look.ca = "CNN.com"
  com.cnn.www  | t6         | contents:html = "<html>"  |
  com.cnn.www  | t5         | contents:html = "<html>"  |
  com.cnn.www  | t3         | contents:html = "<html>"  |

(http://hbase.apache.org/book/datamodel.html)

- Request values for all columns of row com.cnn.www =>
  contents:html = "<html>" (at t6)
  anchor:cnnsi.com = "CNN" (at t9)
  anchor:my.look.ca = "CNN.com" (at t8)

Architecture
- Catalog tables:
  o -ROOT-
  o .META.
- ZooKeeper coordinates and monitors HBase:
  o Stores the location of -ROOT-
- -ROOT- contains the location of the .META. table.
- .META. contains the locations of the user regions.
- HMaster: monitors all RegionServers
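The versioned-cell model above can be sketched as a toy class (this is illustrative, not HBase's client API): a cell is addressed by (row key, columnfamily:qualifier) and holds multiple timestamped versions, and a plain read returns the newest version.

```python
class Table:
    def __init__(self):
        self.cells = {}    # (row, column) -> list of (timestamp, value)

    def put(self, row, column, timestamp, value):
        self.cells.setdefault((row, column), []).append((timestamp, value))

    def get(self, row, column):
        versions = self.cells.get((row, column), [])
        return max(versions)[1] if versions else None  # newest timestamp wins

t = Table()
t.put("com.cnn.www", "anchor:cnnsi.com", 9, "CNN")
t.put("com.cnn.www", "anchor:my.look.ca", 8, "CNN.com")
t.put("com.cnn.www", "contents:html", 6, "<html>")
t.put("com.cnn.www", "contents:html", 5, "<html>")
print(t.get("com.cnn.www", "anchor:cnnsi.com"))  # CNN
```

Older versions stay addressable, which matches the slide's point that data sets are seldom changed but often appended.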

[Diagram: the client finds a RegionServer via -ROOT- -> .META. and then reads and writes directly to the RegionServers; the HMaster assigns regions; ZooKeeper coordinates the HMaster and RegionServers; the RegionServers store their data in HDFS.]

ZooKeeper
- Centralized coordination service
- Distributed
- Highly reliable
- Offers:
  o Naming
  o Configuration management
  o Synchronization
  o Group services
- Offers a hierarchical namespace of data registers, called znodes
- Similar to the namespaces of standard file systems
- Stores coordination data
- Typical sizes measured in kB
- Each machine holds its data in memory
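The znode hierarchy can be sketched as a toy path-keyed tree (the ZNodeTree class is illustrative, not ZooKeeper's client API): znodes are addressed like filesystem paths and each carries a small data payload.

```python
class ZNodeTree:
    def __init__(self):
        self.nodes = {"/": b""}          # path -> data payload

    def create(self, path, data=b""):
        parent = path.rsplit("/", 1)[0] or "/"
        if parent not in self.nodes:     # like ZooKeeper, parents must exist
            raise KeyError(f"parent {parent} does not exist")
        self.nodes[path] = data

    def get_children(self, path):
        prefix = path.rstrip("/") + "/"
        return sorted(p[len(prefix):] for p in self.nodes
                      if p.startswith(prefix) and "/" not in p[len(prefix):])

zk = ZNodeTree()
zk.create("/app")
zk.create("/app/config", b"replication=2")
zk.create("/app/workers")
print(zk.get_children("/app"))   # ['config', 'workers']
```

The payloads stay small (kilobytes of coordination data, not bulk storage), which is why each server can keep the whole tree in memory.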

[Diagram: the ZooKeeper service consists of several servers, one of which is the leader; clients connect to the servers, send requests, get responses, get watch events, and send heartbeats.]