A BigData Tour: HDFS, Ceph and MapReduce




A BigData Tour: HDFS, Ceph and MapReduce
These slides are possible thanks to these sources: Jonathan Dursi - SciNet Toronto Hadoop Tutorial; Amir Payberah - Course in Data Intensive Computing, SICS; Yahoo! Developer Network MapReduce Tutorial

Data Management and Processing
Data intensive computing is concerned with the production, manipulation and analysis of data in the range of hundreds of megabytes (MB) to petabytes (PB) and beyond. It draws on a range of supporting parallel and distributed computing technologies to deal with the challenges of data representation, reliable shared storage, efficient algorithms and scalable infrastructure to perform analysis.

Challenges Ahead
Challenges with data intensive computing:
- Scalable algorithms that can search and process massive datasets
- New metadata management technologies that can scale to handle complex, heterogeneous and distributed data sources
- Support for accessing in-memory multi-terabyte data structures
- High performance, highly reliable petascale distributed file systems
- Techniques for data reduction and rapid processing
- Software mobility to move computation to where the data is located
- Hybrid interconnects with support for multi-gigabyte data streams
- Flexible and high performance software integration techniques

Hadoop is a family of related projects, best known for MapReduce and the Hadoop Distributed File System (HDFS).

Data Intensive Computing
- Data volumes are increasing massively
- Cluster and storage capacity are increasing massively
- Disk speeds are not keeping pace
- Seek speeds are even worse than read/write speeds
[Figure: growth of disk speed (MB/s) vs CPU performance (MIPS); annotations: Mahout (data mining), 1000x]

Scale-Out
- Disk streaming speed is ~50 MB/s: streaming 3 TB takes ~17.5 hrs, 1 PB takes ~8 months (arithmetic below)
- Scale-out (weak scaling): the filesystem distributes data on ingest
- Seeking is too slow: ~10 ms per seek, enough time to stream half a megabyte
- Batch processing: go through the entire data set in one (or a small number of) passes
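A quick check of those figures, assuming binary units (1 TB taken as 2^20 MB, 1 PB as 2^30 MB) and a single disk streaming at 50 MB/s:

\[
\frac{3 \times 2^{20}\ \text{MB}}{50\ \text{MB/s}} \approx 6.3 \times 10^{4}\ \text{s} \approx 17.5\ \text{hrs},
\qquad
\frac{2^{30}\ \text{MB}}{50\ \text{MB/s}} \approx 2.1 \times 10^{7}\ \text{s} \approx 8\ \text{months}.
\]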

Combining Results
- Each node preprocesses its local data
- Shuffles its data to a small number of other nodes
- Final processing and output are done there

Fault Tolerance
- Data is also replicated upon ingest
- The runtime watches for dead tasks and restarts them on live nodes
- Lost replicas are re-replicated

Why Hadoop
Drivers:
- 500M+ unique users per month, billions of interesting events per day; data analysis is key
- Need massive scalability: PBs of storage, millions of files, 1000s of nodes
- Need to do this cost effectively: use commodity hardware, share resources among multiple projects, provide scale when needed
- Need reliable infrastructure: must be able to deal with failures of hardware, software and networking; failure is expected rather than exceptional, and should be transparent to applications, since it is very expensive to build reliability into each application
The Hadoop infrastructure provides these capabilities.

Introduction to Hadoop
Apache Hadoop:
- Based on the 2004 Google MapReduce paper
- Originally composed of HDFS (a distributed filesystem), a core runtime and an implementation of MapReduce
- Open source Apache Foundation project; Yahoo! is an Apache Platinum Sponsor
History:
- Started in 2005 by Doug Cutting
- Yahoo! became the primary contributor in 2006 and scaled it from 20-node clusters to 4000-node clusters today
Portable:
- Written in Java; runs on commodity hardware under Linux, Mac OS X, Windows, and Solaris

HPC vs Hadoop
- The HPC attitude: the problem of disk-limited, loosely-coupled data analysis was solved by throwing more disks at it and using weak scaling
- Flip-side: a single novice developer can write real, scalable, 1000+ node data-processing tasks in Hadoop-family tools in an afternoon; with MPI... less so


Data Distribution: Disk
- Hadoop and similar architectures handle the hardest part of parallelism for you: data distribution
- On disk: HDFS distributes and replicates data as it comes in, keeps track of where it is, and keeps computations local to the data

Data Distribution: Network
- On the network: MapReduce (for example) works in terms of key-value pairs
- The preprocessing (map) phase ingests data and emits (k,v) pairs
- The shuffle phase assigns reducers and gets all pairs with the same key onto that reducer
- The programmer does not have to design communication patterns
- Example shuffle (see the sketch below): (key1,17) (key5,23) (key1,99) (key2,12) (key1,83) (key2,9) are grouped by key into (key1,[17,99,83]) (key5,[23]) (key2,[12,9])
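The slides contain no code, but a minimal sketch of what the map and reduce sides look like in the Hadoop Java API may help. The word-count use case, the class names and the job wiring below are illustrative assumptions, not taken from the slides.

```java
// Minimal Hadoop MapReduce sketch: map() emits (word, 1) pairs,
// the shuffle groups them by key, and reduce() sums each group.
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Map phase: ingest one line of text, emit a (word, 1) pair per word.
  public static class TokenMapper
      extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable offset, Text line, Context context)
        throws IOException, InterruptedException {
      for (String token : line.toString().split("\\s+")) {
        if (!token.isEmpty()) {
          word.set(token);
          context.write(word, ONE);   // emit (k, v)
        }
      }
    }
  }

  // Reduce phase: the shuffle has already grouped all values for a key,
  // e.g. (key1, [17, 99, 83]); here we simply sum them.
  public static class SumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable v : values) {
        sum += v.get();
      }
      context.write(key, new IntWritable(sum));
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenMapper.class);
    job.setCombinerClass(SumReducer.class);
    job.setReducerClass(SumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

The shuffle between map() and reduce() is exactly the grouping step shown above: all values emitted for a given key arrive together at a single reducer, and the programmer never specifies the communication pattern.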

Built a Reusable Substrate
- The filesystem (HDFS) and the MapReduce layer were very well architected
- This enables many higher-level tools: data analysis, machine learning, NoSQL DBs, ...
- An extremely productive environment
- And Hadoop 2.x (YARN) is now much, much more than just MapReduce
Image from http://hortonworks.com/industry/manufacturing/

...and Hadoop vs HPC
- Not either-or anyway
- Use HPC to generate big / many simulations, and Hadoop to analyze the results
- Use Hadoop to preprocess huge input data sets (ETL), and HPC to do the tightly coupled computation afterwards
- Besides... everything is converging

Everything is converging 1/2

Everything is converging 2/2

Big Data Analytics Stack
(Amir Payberah, https://www.sics.se/~amir/dic.htm)

Big Data Storage (Filesystem)
- Traditional filesystems are not well-designed for large-scale data processing systems
- Efficiency has a higher priority than other features, e.g., directory service
- The massive size of the data tends to require storing it across multiple machines in a distributed way
- HDFS, Amazon S3, ...

Big Data Databases
- Relational Database Management Systems (RDBMS) were not designed to be distributed
- NoSQL databases relax one or more of the ACID properties: BASE
- Different data models: key/value, column-family, graph, document
- Dynamo, Scalaris, BigTable, HBase, Cassandra, MongoDB, Voldemort, Riak, Neo4j, ...
(Amir Payberah, https://www.sics.se/~amir/dic.htm)

Big Data Resource Management
- Different frameworks require different computing resources
- Large organizations need the ability to share data and resources between multiple frameworks
- Resource management: share the resources in a cluster between multiple frameworks while providing resource isolation
- Mesos, YARN, Quincy, ...
(Amir Payberah, https://www.sics.se/~amir/dic.htm)

YARN 1/3
To address Hadoop v1 deficiencies with scalability, memory usage and synchronization, the Yet Another Resource Negotiator (YARN) Apache sub-project was started. Previously a single JobTracker service handled both resource management and job scheduling/monitoring; its roles were then split into separate daemons for
- Resource management
- Job scheduling/monitoring
Hortonworks: http://hortonworks.com/blog/apache-hadoop-yarn-background-and-an-overview/

YARN 2/3
YARN splits the JobTracker's responsibilities into:
- Resource management: the global ResourceManager daemon
- Per-application ApplicationMasters
The ResourceManager and the per-node NodeManager slaves allow generic node management. The ResourceManager has a pluggable scheduler.
Hortonworks: http://hortonworks.com/blog/apache-hadoop-yarn-background-and-an-overview/

YARN 3/3
- The Scheduler performs its scheduling function based on the resource requirements of the applications; it does so based on the abstract notion of a Resource Container, which incorporates resource elements such as memory, CPU, disk and network
- The NodeManager is the per-machine slave, responsible for launching the applications' containers, monitoring their resource usage (CPU, memory, disk, network) and reporting the same to the ResourceManager
- The per-application ApplicationMaster has the responsibility of negotiating appropriate resource containers from the Scheduler, tracking their status and monitoring progress; from the system perspective, the ApplicationMaster itself runs as a normal container
Hortonworks: http://hortonworks.com/blog/apache-hadoop-yarn-background-and-an-overview/
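As a hedged illustration of the container abstraction, the sketch below shows how an ApplicationMaster might ask the ResourceManager's Scheduler for a container through YARN's AMRMClient API. The resource sizes, registration arguments and surrounding application logic are assumptions for illustration, not part of the source material.

```java
// Illustrative sketch only: an ApplicationMaster requesting a Resource Container
// (memory + virtual cores) from the ResourceManager via AMRMClient.
import org.apache.hadoop.yarn.api.records.Container;
import org.apache.hadoop.yarn.api.records.Priority;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.client.api.AMRMClient;
import org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class ContainerRequestSketch {
  public static void main(String[] args) throws Exception {
    AMRMClient<ContainerRequest> rm = AMRMClient.createAMRMClient();
    rm.init(new YarnConfiguration());
    rm.start();

    // Register this ApplicationMaster with the ResourceManager
    // (hostname/port/tracking URL left empty in this sketch).
    rm.registerApplicationMaster("", 0, "");

    // A "Resource Container" request: 1024 MB of memory and 2 virtual cores.
    Resource capability = Resource.newInstance(1024, 2);
    rm.addContainerRequest(
        new ContainerRequest(capability, null, null, Priority.newInstance(0)));

    // Heartbeat to the ResourceManager; allocated containers come back in the response.
    for (Container c : rm.allocate(0.0f).getAllocatedContainers()) {
      System.out.println("Got container " + c.getId() + " on node " + c.getNodeId());
    }
  }
}
```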

Big Data Execution Engines
- Scalable and fault-tolerant parallel data processing on clusters of unreliable machines
- Data-parallel programming model for clusters of commodity machines
- MapReduce, Spark, Stratosphere, Dryad, Hyracks, ...

Big Data Query/Scripting Languages
- Low-level programming of execution engines, e.g., MapReduce, is not easy for end users
- Need a high-level language to improve the query capabilities of execution engines
- It translates user-defined functions to the low-level API of the execution engine
- Pig, Hive, Shark, Meteor, DryadLINQ, SCOPE, ...
(Amir Payberah, https://www.sics.se/~amir/dic.htm)

Big Data Stream Processing
- Providing users with fresh and low-latency results
- Database Management Systems (DBMS) vs. Stream Processing Systems (SPS)
- Storm, S4, SEEP, D-Streams, Naiad, ...

Big Data Graph Processing
- Many problems are expressed using graphs: sparse computational dependencies, and multiple iterations to converge
- Data-parallel frameworks, such as MapReduce, are not ideal for these problems: slow
- Graph processing frameworks are optimized for graph-based problems
- Pregel, Giraph, GraphX, GraphLab, PowerGraph, GraphChi, ...
(Amir Payberah, https://www.sics.se/~amir/dic.htm)

Big Data Machine Learning
- Implementing and consuming machine learning techniques at scale are difficult tasks for developers and end users
- There exist platforms that address this by providing scalable machine learning and data mining libraries
- Mahout, MLbase, SystemML, Ricardo, Presto, ...
(Amir Payberah, https://www.sics.se/~amir/dic.htm)

Hadoop Big Data Analytics Stack
(Amir Payberah, https://www.sics.se/~amir/dic.htm)

Spark Big Data Analytics Stack
(Amir Payberah, https://www.sics.se/~amir/dic.htm)

Hadoop Ecosystem
(Hortonworks, http://hortonworks.com/industry/manufacturing/)

Hadoop Ecosystem
- From 2008 onwards usage exploded
- Creation of many tools on top of the Hadoop infrastructure

What is a Filesystem?
- Controls how data is stored on and retrieved from disk
(Amir Payberah, https://www.sics.se/~amir/dic.htm)

Distributed Filesystems
- When data outgrows the storage capacity of a single machine: partition it across a number of separate machines
- Distributed filesystems manage the storage across a network of machines
(Amir Payberah, https://www.sics.se/~amir/dic.htm)

Hadoop Distributed File System (HDFS)
- A distributed file system designed to run on commodity hardware
- HDFS was originally built as infrastructure for the Apache Nutch web search engine project, with the aim of achieving fault tolerance, the ability to run on low-cost hardware, and the handling of large datasets
- It is now an Apache Hadoop subproject
- Shares similarities with existing distributed file systems and supports a traditional hierarchical file organization
- Reliable data replication; accessible via a Web interface and shell commands
- Benefits: fault tolerance, high throughput, streaming data access, robustness and handling of large data sets
- HDFS is not a general-purpose filesystem

Assumptions and Goals
- Hardware failures: detection of faults and quick, automatic recovery
- Streaming data access: designed for batch processing rather than interactive use by users
- Large data sets: applications that run on HDFS have large data sets, typically gigabytes to terabytes in size; optimized for batch reads rather than random reads
- Simple coherency model: applications need a write-once, read-many-times access model for files
- Computation migration: computation is moved closer to where the data is located
- Portability: easily portable between heterogeneous hardware and software platforms

What HDFS is Not Good For
- Low-latency reads: HDFS provides high throughput rather than low latency for small chunks of data; HBase addresses this issue
- Large amounts of small files: better for millions of large files than for billions of small files
- Multiple writers: single writer per file; writes only at the end of the file, no support for arbitrary offsets
(Amir Payberah, https://www.sics.se/~amir/dic.htm)

HDFS Architecture
- The Hadoop Distributed File System (HDFS) offers a way to store large files across multiple machines, rather than requiring a single machine to have disk capacity equal to or greater than the summed total size of the files
- HDFS is designed to be fault-tolerant, using replication and distribution of data
- When a file is loaded into HDFS, it is replicated and broken up into "blocks" of data
- These blocks are stored across the cluster nodes designated for storage, a.k.a. DataNodes
http://www.revelytix.com/?q=content/hadoop-ecosystem

Files and Blocks 1/3
- Files are split into blocks
- A block is the single unit of storage: a contiguous piece of information on a disk
- Transparent to the user; managed by the Namenode, stored by the Datanodes
- Blocks are traditionally either 64 MB or 128 MB: the default is 64 MB
(Amir Payberah, https://www.sics.se/~amir/dic.htm)

Files and Blocks 2/3
- Why is a block in HDFS so large? To minimize the cost of seeks.
- Time to read a block = seek time + transfer time
- Keeping the ratio seek time / transfer time small means we are reading data from the disk almost as fast as the physical limit imposed by the disk
- Example (worked out below): if the seek time is 10 ms and the transfer rate is 100 MB/s, then to make the seek time 1% of the transfer time we need a block size of around 100 MB
(Amir Payberah, https://www.sics.se/~amir/dic.htm)
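Spelling out that example:

\[
\frac{t_{\text{seek}}}{t_{\text{transfer}}} = 1\%
\;\Rightarrow\;
t_{\text{transfer}} = \frac{10\ \text{ms}}{0.01} = 1\ \text{s}
\;\Rightarrow\;
\text{block size} \approx 100\ \text{MB/s} \times 1\ \text{s} = 100\ \text{MB}.
\]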

Files and Blocks 3/3
- The same block is replicated on multiple machines: the default is 3
- Replica placement is rack-aware: 1st replica on the local rack; 2nd replica on the local rack but a different machine; 3rd replica on a different rack
- The Namenode determines replica placement
(Amir Payberah, https://www.sics.se/~amir/dic.htm)

HDFS Daemons
An HDFS cluster is managed by three types of processes:
- Namenode: manages the filesystem, e.g., namespace, metadata, and file blocks; metadata is stored in memory
- Datanode: stores and retrieves data blocks, reports to the Namenode, runs on many machines
- Secondary Namenode: only for checkpointing; not a backup for the Namenode
(Amir Payberah, https://www.sics.se/~amir/dic.htm)

Hadoop Server Roles
http://www.revelytix.com/?q=content/hadoop-ecosystem

NameNode 1/3
- The HDFS namespace is a hierarchy of files and directories, represented in the NameNode using inodes
- Inodes record attributes such as permissions, modification and access times, and namespace and disk space quotas
- The file content is split into large blocks (typically 128 megabytes, but user-selectable file-by-file), and each block of the file is independently replicated at multiple DataNodes (typically three, but user-selectable file-by-file)
- The NameNode maintains the namespace tree and the mapping of blocks to DataNodes
- A Hadoop cluster can have thousands of DataNodes and tens of thousands of HDFS clients per cluster, as each DataNode may execute multiple application tasks concurrently
http://www.revelytix.com/?q=content/hadoop-ecosystem

NameNode 2/3
- The inodes and the list of blocks that define the metadata of the name system are called the image (FsImage)
- The NameNode keeps the entire namespace image in RAM
- Each client-initiated transaction is recorded in the journal, and the journal file is flushed and synced before the acknowledgment is sent to the client
- The NameNode is a multithreaded system and processes requests simultaneously from multiple clients
http://www.revelytix.com/?q=content/hadoop-ecosystem

NameNode 3/3
HDFS requires:
- A NameNode process running on one node in the cluster
- The DataNode service running on each "slave" node that will be processing data
When data is loaded into HDFS:
- Data is replicated and split into blocks that are distributed across the DataNodes
- The NameNode is responsible for the storage and management of metadata, so that when MapReduce or another execution framework calls for the data, the NameNode informs it where the needed data resides
http://www.revelytix.com/?q=content/hadoop-ecosystem

Where to Replicate?
- There is a tradeoff in choosing replication locations
- Close: faster updates, less network bandwidth
- Further: better failure tolerance
- Default strategy: first copy at a different location on the same node, second on a different rack (switch), third on the same rack as the second but a different node
- The strategy is configurable
- The Hadoop file system needs to be configured to know the rack location of nodes
[Figure: two racks (rack1, rack2), each behind its own switch (switch 1, switch 2)]

DataNode 1/3
- Each block replica on a DataNode is represented by two files in the local native filesystem: the first file contains the data itself and the second records the block's metadata, including checksums for the data and the generation stamp
- At startup each DataNode connects to the NameNode and performs a handshake; the handshake verifies that the DataNode belongs to the NameNode's namespace and runs the same version of the software
- A DataNode identifies the block replicas in its possession to the NameNode by sending a block report; a block report contains the block ID, the generation stamp and the length of each block replica the server hosts
- The first block report is sent immediately after DataNode registration; subsequent block reports are sent every hour and provide the NameNode with an up-to-date view of where block replicas are located on the cluster
http://www.aosabook.org/en/hdfs.html

DataNode 2/3
- During normal operation DataNodes send heartbeats to the NameNode to confirm that the DataNode is operating and the block replicas it hosts are available
- If the NameNode does not receive a heartbeat from a DataNode in ten minutes, it considers the DataNode to be out of service and the block replicas hosted by that DataNode to be unavailable; the NameNode then schedules creation of new replicas of those blocks on other DataNodes
- Heartbeats from a DataNode also carry information about total storage capacity, fraction of storage in use, and the number of data transfers currently in progress; these statistics are used for the NameNode's block allocation and load balancing decisions
http://www.aosabook.org/en/hdfs.html

DataNode 3/3
- The NameNode does not directly send requests to DataNodes; it uses replies to heartbeats to send instructions to the DataNodes
- The instructions include commands to replicate blocks to other nodes, remove local block replicas, re-register and send an immediate block report, and shut down the node
- These commands are important for maintaining overall system integrity, and therefore it is critical to keep heartbeats frequent even on big clusters; the NameNode can process thousands of heartbeats per second without affecting other NameNode operations
http://www.aosabook.org/en/hdfs.html

HDFS Client 1/3
- User applications access the filesystem using the HDFS client, a library that exports the HDFS filesystem interface
- The user is oblivious to backend implementation details, e.g. the number of replicas and which servers host the appropriate blocks
http://www.aosabook.org/en/hdfs.html

HDFS Client 2/3
- When an application reads a file, the HDFS client first asks the NameNode for the list of DataNodes that host replicas of the blocks of the file
- The list is sorted by the network topology distance from the client
- The client contacts a DataNode directly and requests the transfer of the desired block
http://www.aosabook.org/en/hdfs.html
A minimal code sketch of this read path follows.
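A minimal sketch of reading an HDFS file through the client library (org.apache.hadoop.fs.FileSystem). The path is an illustrative assumption; the NameNode lookup and DataNode transfers described above happen behind this API.

```java
// Read an HDFS file and stream it to stdout.
import java.io.InputStream;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class ReadFromHdfs {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();       // picks up core-site.xml / hdfs-site.xml
    FileSystem fs = FileSystem.get(conf);           // the HDFS client
    try (InputStream in = fs.open(new Path("/user/ljdursi/bigdata.dat"))) {
      IOUtils.copyBytes(in, System.out, 4096, false); // block locations and replicas are handled internally
    }
  }
}
```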

Reading a File
A client reading, say, lines 1...1000 from bigdata.dat goes through three steps:
1. Open the file
2. Get the block locations from the Namenode
3. Read the blocks from a replica on the Datanodes
[Figure: client, Namenode holding the namespace /user/ljdursi/diffuse with file bigdata.dat, and datanode1, datanode2, datanode3]

HDFS Client 3/3
- When a client writes, it first asks the NameNode to choose DataNodes to host replicas of the first block of the file
- The client organizes a pipeline from node to node and sends the data
- When the first block is filled, the client requests new DataNodes to be chosen to host replicas of the next block; a new pipeline is organized, and the client sends the further bytes of the file
- The choice of DataNodes for each block is likely to be different
http://www.aosabook.org/en/hdfs.html

Writing a File
Writing a file is a multi-stage process; e.g., a client writing newdata.dat (a code sketch follows):
1. Create the file
2. Get nodes for the blocks from the Namenode
3. Start writing to the Datanodes
4. The Datanodes coordinate replication among themselves
5. Get acknowledgments back (while writing)
6. Complete the file
[Figure: client, Namenode (namespace /user/ljdursi/diffuse containing bigdata.dat) and datanode1, datanode2, datanode3]
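A matching minimal sketch of the write path through the same client library; the file name and contents are illustrative assumptions, and the create/get-nodes/replicate/ack steps above are hidden behind create() and close().

```java
// Create a new HDFS file and write some bytes to it.
import java.nio.charset.StandardCharsets;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class WriteToHdfs {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    try (FSDataOutputStream out = fs.create(new Path("/user/ljdursi/newdata.dat"))) {
      out.write("some new data\n".getBytes(StandardCharsets.UTF_8)); // streamed down the replication pipeline
    } // close() completes the file; afterwards data can only be appended
  }
}
```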

HDFS Federation
- Hadoop 2+
- Each Namenode hosts part of the blocks
- A Block Pool is a set of blocks that belong to a single namespace
- Support for 1000+ machine clusters
(Amir Payberah, https://www.sics.se/~amir/dic.htm)

File I/O and Leases in HDFS
- An application adds data to HDFS by creating a new file and writing data to it; once the file is closed, new data can only be appended
- HDFS implements a single-writer, multiple-reader model
- Leases are granted by the NameNode to HDFS clients; writer clients need to periodically renew the lease via a heartbeat to the NameNode, and on file close the lease is revoked
- There are soft and hard limits for leases (the hard limit being an hour)
- A write lease does not prevent multiple readers from reading the file

Data Pipelining for Writing Blocks 1/2
- An HDFS file consists of blocks
- When there is a need for a new block, the NameNode allocates a block with a unique block ID and determines a list of DataNodes to host replicas of the block
- The DataNodes form a pipeline, the order of which minimizes the total network distance from the client to the last DataNode
http://www.aosabook.org/en/hdfs.html

Data Pipelining for Writing Blocks 2/2
- Bytes are pushed to the pipeline as a sequence of packets; the bytes that an application writes first buffer at the client side
- After a packet buffer is filled (typically 64 KB), the data are pushed to the pipeline
- The next packet can be pushed to the pipeline before receiving the acknowledgment for the previous packets
- The number of outstanding packets is limited by the client's outstanding packets window size
http://www.aosabook.org/en/hdfs.html

HDFS Interfaces
- There are many interfaces for interacting with HDFS; the simplest is the command line
- Two properties are set in the HDFS configuration (see the sketch below):
  - Default Hadoop filesystem, fs.default.name: hdfs://localhost/ determines the host (localhost) and port (8020) of the HDFS NameNode
  - Replication factor, dfs.replication: the default is 3; disable replication by setting it to 1 (single datanode)
- Other HDFS interfaces: HTTP, a read-only interface for retrieving directory listings and data over HTTP; FTP, which permits the use of the FTP protocol to interact with HDFS

Replication in HDFS
- Replica placement: critical to improve data reliability, availability and network bandwidth utilization
  - Rack-aware policy, as rack failure is far less likely than node failure
  - With the default replication factor (3), one replica is put on one node in the local rack, another on a node in a different (remote) rack, and the last on a different node in the same remote rack
  - One third of the replicas are on one node, two thirds of the replicas are on one rack, and the other third is evenly distributed across the remaining racks
  - The benefit is reduced inter-rack write traffic
- Replica selection: a read request is satisfied from a replica that is near to the application
  - Minimizes global bandwidth consumption and read latency
  - If HDFS spans multiple data centers, a replica in the local data center is preferred over any remote replica
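As a hedged illustration, the two properties named above could also be set programmatically through the Hadoop Configuration API; in a real deployment they would normally live in the cluster's XML configuration files, and the values shown are simply the ones from the slide.

```java
// Illustrative sketch: setting fs.default.name and dfs.replication in code.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class HdfsConfigSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.set("fs.default.name", "hdfs://localhost/"); // NameNode host (default port 8020)
    conf.set("dfs.replication", "1");                 // single-datanode setup: replication disabled
    FileSystem fs = FileSystem.get(conf);
    System.out.println("Connected to: " + fs.getUri());
  }
}
```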

Communication Protocol
- All HDFS communication protocols are layered on top of the TCP/IP protocol
- A client establishes a connection to a configurable TCP port on the NameNode machine and uses the ClientProtocol
- DataNodes talk to the NameNode using the DataNode protocol
- A Remote Procedure Call (RPC) abstraction wraps both the ClientProtocol and the DataNode protocol
- The NameNode never initiates an RPC; instead it only responds to RPC requests issued by DataNodes or clients

Robustness
- The primary objective of HDFS is to store data reliably even during failures; three common types of failures are NameNode failures, DataNode failures and network partitions
- Data disk failure: heartbeat messages track the health of DataNodes; the NameNode performs the necessary re-replication on DataNode unavailability, replica corruption or disk fault
- Cluster rebalancing: automatically move data between DataNodes if the free space on a DataNode falls below a threshold, or during sudden high demand
- Data integrity: checksum checking on HDFS files during file creation and retrieval
- Metadata disk failure: manual intervention; no automatic recovery, restart or failover

Software: Ceph
Ceph: An Alternative to HDFS in One Slide
[Figure: Ceph architecture. Clients (APP, HOST/VM) access the cluster through RadosGW (S3/Swift), RBD and CephFS, all built on RADOS via librados; MDS (MDS.1 ... MDS.n) and MON (MON.1 ... MON.n) daemons; pools (Pool 1 ... Pool n) made up of placement groups (PG 1 ... PG n) mapped by the CRUSH map onto cluster nodes running OSDs.]
https://www.terena.org/activities/tf-storage/ws16/slides/140210-low_cost_storage_cephopenstack_swift.pdf