Apache Hadoop FileSystem and its Usage in Facebook



Apache Hadoop FileSystem and its Usage in Facebook
Dhruba Borthakur
Project Lead, Apache Hadoop Distributed File System
dhruba@apache.org
Presented at the Indian Institute of Technology, November 2010
http://www.facebook.com/hadoopfs

Outline
- Introduction
- Architecture of the Hadoop Distributed File System (HDFS)
- Usage of Hadoop in Facebook: data warehouse, MySQL backups, online application storage

Who Am I?
- Apache Hadoop FileSystem (HDFS) Project Lead
- Core contributor since Hadoop's infancy
- Facebook (Hadoop, Hive, Scribe)
- Yahoo! (Hadoop in Yahoo Search)
- Veritas (SANPoint Direct, Veritas File System)
- IBM Transarc (Andrew File System)
- University of Wisconsin Computer Science alumnus (Condor Project)

A Confluence of Trends
- Fault-tolerant file system, open data format
- Queryable database, flexible schema
- Archival store, never delete data
HADOOP: a massively scalable queryable store and archive

Hadoop, Why?
- Need to process multi-petabyte datasets
- Data may not have a strict schema
- Expensive to build reliability into each application
- Nodes fail every day: failure is expected rather than exceptional, and the number of nodes in a cluster is not constant
- Need common infrastructure: efficient, reliable, open source (Apache License)

Is Hadoop a Database?
- Hadoop triggered an upheaval in database research: "A giant step backward in the programming paradigm" (DeWitt et al.); DBMS performance outshines Hadoop (Stonebraker, DeWitt, SIGMOD 2009)
- Parallel databases: a few scale to 200 nodes and about 5 PB; the primary design goal is performance; they require homogeneous hardware
- Anomalous behavior is not well tolerated: a slow network can cause serious performance degradation, and most queries fail when one node fails
- Scalability and fault tolerance: Hadoop to the rescue!

Hadoop History
- Dec 2004: Google publishes the MapReduce paper (the GFS paper had appeared in 2003)
- July 2005: Nutch uses MapReduce
- Feb 2006: starts as a Lucene subproject
- Apr 2007: Yahoo! runs it on a 1000-node cluster
- Jan 2008: becomes an Apache top-level project
- May 2009: Hadoop sorts a petabyte in 17 hours
- Aug 2010: world's largest Hadoop cluster at Facebook: 2900 nodes, 30+ petabytes

Who uses Hadoop?
Amazon/A9, Facebook, Google, IBM, Joost, Last.fm, New York Times, PowerSet, Veoh, Yahoo!

What is Hadoop used for?
- Search: Yahoo, Amazon, Zvents
- Log processing: Facebook, Yahoo, ContextWeb, Joost, Last.fm
- Recommendation systems: Facebook
- Data warehouse: Facebook, AOL
- Video and image analysis: New York Times, Eyealike

Commodity Hardware
- Typically a two-level architecture
- Nodes are commodity PCs, 20-40 nodes per rack
- Rack-internal links are 1 gigabit; the uplink from each rack is 4 gigabit

Goals of HDFS
- Very large distributed file system: 10K nodes, 1 billion files, 100 PB
- Assumes commodity hardware: files are replicated to handle hardware failure; failures are detected and recovered from
- Optimized for batch processing: data locations are exposed so that computations can move to where the data resides, providing very high aggregate bandwidth
- Runs in user space on heterogeneous operating systems

HDFS Architecture
[Diagram: clients read and write data via the NameNode and DataNodes; a Secondary NameNode runs alongside the NameNode, which tracks cluster membership.]
- NameNode: maps a file to a file-id and a list of DataNodes
- DataNode: maps a block-id to a physical location on disk
- SecondaryNameNode: periodic merge of the transaction log

Distributed File System
- Single namespace for the entire cluster
- Data coherency: write-once-read-many access model; clients can only append to existing files
- Files are broken up into blocks: typically 128-256 MB block size, each block replicated on multiple DataNodes
- Intelligent client: the client can find the location of blocks and accesses data directly from the DataNode
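The write-once-read-many model is visible directly in the standard Hadoop Java client API. Below is a minimal sketch, assuming the NameNode address is picked up from the cluster's core-site.xml; the path and data are illustrative, and append must be enabled on the cluster.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsClientSketch {
      public static void main(String[] args) throws Exception {
        // Picks up the NameNode address from the cluster configuration.
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        Path file = new Path("/demo/log.txt");   // illustrative path

        // Write-once: create the file and write some data.
        FSDataOutputStream out = fs.create(file);
        out.writeBytes("first record\n");
        out.close();

        // The only supported mutation is appending to an existing file.
        FSDataOutputStream appendOut = fs.append(file);
        appendOut.writeBytes("appended record\n");
        appendOut.close();

        // Read-many: any number of clients can open and read the file.
        FSDataInputStream in = fs.open(file);
        byte[] buf = new byte[1024];
        int n = in.read(buf);
        System.out.println(new String(buf, 0, n));
        in.close();
      }
    }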

NameNode Metadata
- Metadata in memory: the entire metadata is in main memory; no demand paging of metadata
- Types of metadata: list of files; list of blocks for each file; list of DataNodes for each block; file attributes, e.g., creation time and replication factor
- A transaction log records file creations, file deletions, etc.
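To make this concrete, here is a toy sketch of the kinds of in-memory structures the slide describes. The names (ToyNamespace, FileMeta) are invented for illustration; the real NameNode's structures are far more compact and sophisticated.

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Toy illustration of the NameNode's in-memory metadata.
    class ToyNamespace {
      static class FileMeta {
        long creationTime;
        short replicationFactor;
        List<Long> blockIds = new ArrayList<Long>();  // ordered blocks of the file
      }

      // List of files with their attributes and blocks, all in main memory.
      Map<String, FileMeta> files = new HashMap<String, FileMeta>();

      // For each block, the DataNodes currently holding a replica.
      // Rebuilt from DataNode block reports, so it is never persisted.
      Map<Long, List<String>> blockLocations = new HashMap<Long, List<String>>();

      // Namespace mutations are also appended to the on-disk transaction log;
      // block locations are not, because block reports regenerate them.
      void create(String path, short replication, long now) {
        FileMeta meta = new FileMeta();
        meta.creationTime = now;
        meta.replicationFactor = replication;
        files.put(path, meta);
        appendToTransactionLog("CREATE " + path);
      }

      void appendToTransactionLog(String record) { /* fsync to the edit log */ }
    }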

DataNode
- A block server: stores data in the local file system (e.g., ext3) and stores metadata of each block (e.g., CRC32)
- Serves data and metadata to clients; periodically validates checksums
- Block report: periodically sends a report of all existing blocks to the NameNode
- Facilitates pipelining of data: forwards data to other specified DataNodes
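As a rough illustration of the periodic checksum validation, the toy below recomputes a CRC32 over a stored block file and compares it against the stored value. Real HDFS actually checksums small chunks of each block and keeps them in a companion metadata file, so this is a simplification.

    import java.io.FileInputStream;
    import java.io.InputStream;
    import java.util.zip.CRC32;

    // Toy version of a DataNode's periodic checksum validation.
    public class BlockChecksum {
      static long crcOf(String blockFile) throws Exception {
        CRC32 crc = new CRC32();
        InputStream in = new FileInputStream(blockFile);
        byte[] buf = new byte[64 * 1024];
        int n;
        while ((n = in.read(buf)) > 0) {
          crc.update(buf, 0, n);
        }
        in.close();
        return crc.getValue();
      }

      static boolean validate(String blockFile, long storedCrc) throws Exception {
        // A mismatch means this replica is corrupt; the DataNode reports it
        // to the NameNode, which re-replicates the block from a good copy.
        return crcOf(blockFile) == storedCrc;
      }
    }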

Block Placement
- Current strategy: one replica on the local node, the second replica on a remote rack, the third replica on the same remote rack, and any additional replicas placed randomly
- Clients read from the nearest replica
- Pluggable policy for placing block replicas, e.g., to co-locate datasets that are often used together
http://hadoopblog.blogspot.com/2009/09/hdfs-block-replica-placement-in-your.html
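A toy rendition of the default strategy, with invented Node and rack types; it assumes the cluster has enough nodes to satisfy each constraint. The real pluggable hook in HDFS is a BlockPlacementPolicy implementation.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Random;
    import java.util.function.Predicate;

    // Toy replica placement following the default strategy above.
    public class ToyBlockPlacement {
      static class Node {
        final String name, rack;
        Node(String name, String rack) { this.name = name; this.rack = rack; }
      }

      static Node pick(List<Node> cluster, List<Node> taken,
                       Predicate<Node> ok, Random rnd) {
        List<Node> candidates = new ArrayList<>();
        for (Node n : cluster) {
          if (ok.test(n) && !taken.contains(n)) candidates.add(n);
        }
        return candidates.get(rnd.nextInt(candidates.size()));
      }

      static List<Node> choose(Node writer, List<Node> cluster,
                               int replicas, Random rnd) {
        List<Node> chosen = new ArrayList<>();
        chosen.add(writer);                          // first replica: local node
        Node remote = pick(cluster, chosen,
            n -> !n.rack.equals(writer.rack), rnd);  // second: a remote rack
        chosen.add(remote);
        chosen.add(pick(cluster, chosen,
            n -> n.rack.equals(remote.rack), rnd));  // third: same remote rack
        while (chosen.size() < replicas)             // extra replicas: random
          chosen.add(pick(cluster, chosen, n -> true, rnd));
        return chosen;
      }
    }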

Data Pipelining
- The client writes a block to the first DataNode
- The first DataNode forwards the data to the next DataNode in the pipeline, and so on
- When all replicas are written, the client moves on to write the next block of the file
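A toy sketch of that forwarding logic, with an invented DataNode interface; real HDFS streams packets over sockets and flows acknowledgements back up the pipeline.

    import java.util.List;

    // Toy write pipelining: each DataNode stores a packet locally and
    // forwards it to the next node in the pipeline chosen by the NameNode.
    public class ToyPipeline {
      interface DataNode {
        void store(byte[] packet);
      }

      static void writeBlock(byte[][] packets, List<DataNode> pipeline) {
        for (byte[] packet : packets) {
          forward(packet, pipeline, 0);
        }
      }

      static void forward(byte[] packet, List<DataNode> pipeline, int i) {
        if (i >= pipeline.size()) return;
        pipeline.get(i).store(packet);       // write the local replica
        forward(packet, pipeline, i + 1);    // then forward downstream
      }
    }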

NameNode Failure
- The NameNode is a single point of failure
- The transaction log is stored in multiple directories: one on the local file system and one on a remote file system (NFS/CIFS)
- This is still a problem for 24x7 operations: the AvatarNode comes to the rescue

NameNode High Availability: Challenges
[Diagram: DataNodes report "yes, I have blockid 123" to the primary NameNode; clients retrieve block locations from it.]
- DataNodes send block location information to only one NameNode
- Clients retrieve block locations from that NameNode
- The NameNode needs block locations in memory to serve clients
- The in-memory metadata for 100 million files could be 60 GB (roughly 600 bytes per file), huge!

NameNode High Availability: AvatarNode
[Diagram: the active AvatarNode writes the transaction log to an NFS filer and the standby AvatarNode reads it back; DataNodes send block location messages to both.]
- Active-standby pair, coordinated via ZooKeeper; failover takes a few seconds
- Each AvatarNode is a wrapper over a NameNode
- The active AvatarNode writes the transaction log to an NFS filer; the standby AvatarNode reads transactions from the filer and so keeps the latest metadata in memory
- DataNodes send block location messages to both, so clients can retrieve block locations from either the primary or the standby
http://hadoopblog.blogspot.com/2010/02/hadoop-namenode-high-availability.html

Rebalancer
- Goal: the percentage of disk used should be similar across DataNodes
- Usually run when new DataNodes are added (started with the hadoop balancer command); the cluster stays online while the Rebalancer is active
- Throttled to avoid network congestion
- Disadvantages: does not rebalance based on access patterns or load, and no support for automatic handling of data hotspots

Disk is not cheap! - RAID
- A data block is stored in triplicate: a file /dir/file.txt with three data blocks A, B and C occupies nine physical blocks on disk
- HDFS RAID to the rescue: based on DiskReduce, from Garth Gibson's research group at CMU

HDFS RAID
- Starts the same: every data block is triplicated
- Background encoding: combine the third replicas of the blocks of a single file into a parity block, then remove the third replicas; for a file with blocks A, B and C, two replicas of each block remain plus one parity block A+B+C
- A RaidNode automatically fixes failed replicas
http://hadoopblog.blogspot.com/2009/08/hdfs-and-erasure-codes-hdfs-raid.html
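The parity block "A+B+C" can be read as an XOR across the blocks. The toy below, assuming equal-sized blocks, shows why this works: any single lost block is rebuilt by XOR-ing the remaining ones. HDFS RAID later added Reed-Solomon codes for stronger protection.

    // Toy XOR parity: keep replicas of A, B, C plus their XOR, and any
    // single lost block can be rebuilt by XOR-ing the surviving three.
    public class XorParity {
      static byte[] xor(byte[]... blocks) {
        byte[] parity = new byte[blocks[0].length];
        for (byte[] block : blocks) {
          for (int i = 0; i < parity.length; i++) {
            parity[i] ^= block[i];
          }
        }
        return parity;
      }

      public static void main(String[] args) {
        byte[] a = {1, 2, 3}, b = {4, 5, 6}, c = {7, 8, 9};
        byte[] parity = xor(a, b, c);        // encode: parity block A+B+C
        byte[] rebuiltB = xor(a, c, parity); // recover B after losing it
        System.out.println(rebuiltB[1] == b[1]); // prints true
      }
    }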

Hadoop @ Facebook

Who generates this data?
Lots of data is generated on Facebook:
- 500+ million active users
- 30 billion pieces of content shared every month (news stories, photos, blogs, etc.)

Data Usage
Statistics per day:
- 20 TB of compressed new data added
- 3 PB of compressed data scanned
- 20K jobs on the production cluster
- 480K compute hours
The barrier to entry is significantly reduced:
- New engineers go through a Hadoop/Hive training session
- 300+ people run jobs on Hadoop
- Analysts (non-engineers) use Hadoop through Hive

Where is this data stored?
Hadoop/Hive warehouse:
- 24K cores, 30 petabytes, 12 or 24 TB per node
- Two-level network topology: 1 Gbit/sec from node to rack switch, 4 Gbit/sec from rack switch to the top-level switch

Data Flow into Hadoop Cloud
[Diagram: web servers feed logs through a Scribe mid-tier into network storage and servers; from there, together with Oracle RAC and MySQL databases, data flows into the Hadoop Hive warehouse.]

Hadoop Scribe
[Diagram: web servers feed the Scribe mid-tier, which sends data through Scribe writers into a realtime Hadoop cluster; from there, together with Oracle RAC and MySQL, data flows into the Hadoop Hive warehouse.]
http://hadoopblog.blogspot.com/2009/06/hdfs-scribe-integration.html

Archival: Move old data to cheap storage
[Diagram: the Hadoop warehouse moves old data via NFS through a Hadoop archive node onto cheap NAS in a Hadoop archival cluster, which Hive queries can still reach.]
http://issues.apache.org/jira/browse/HDFS-220

Hive
- SQL query language for Hadoop, with an efficient SQL-to-MapReduce compiler
- Started at Facebook in March 2008
- Accounts for 95%+ of Hadoop jobs at Facebook
- Used by ~300 engineers and business analysts at Facebook every month
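For flavor, a Hive query submitted over JDBC might look like the sketch below. The driver class and connection URL shown are those of roughly this era and vary by Hive version; the table and columns are made up.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    // Illustrative only: a Hive query over JDBC. Hive compiles the SQL
    // into a chain of map-reduce jobs that run on the Hadoop cluster.
    public class HiveQuerySketch {
      public static void main(String[] args) throws Exception {
        Class.forName("org.apache.hadoop.hive.jdbc.HiveDriver");
        Connection conn =
            DriverManager.getConnection("jdbc:hive://localhost:10000/default");
        Statement stmt = conn.createStatement();
        ResultSet rs = stmt.executeQuery(
            "SELECT dt, COUNT(1) FROM page_views GROUP BY dt");
        while (rs.next()) {
          System.out.println(rs.getString(1) + "\t" + rs.getLong(2));
        }
        conn.close();
      }
    }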

Other uses for HDFS
- Backup of all MySQL databases: MySQL dump files are stored in HDFS
- Storage for an online application: Apache HBase, a key-value store layered on HDFS, 500 TB in size
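A minimal sketch of key-value access through the HBase client API of this era (the HTable class was later superseded by Connection/Table); the table, column family and qualifier names are made up.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.util.Bytes;

    // Illustrative key-value store and fetch against HBase on HDFS.
    public class HBaseSketch {
      public static void main(String[] args) throws Exception {
        HTable table = new HTable(HBaseConfiguration.create(), "messages");

        Put put = new Put(Bytes.toBytes("user123"));
        put.add(Bytes.toBytes("msg"), Bytes.toBytes("body"),
                Bytes.toBytes("hello"));
        table.put(put);   // the value ends up in HFiles stored on HDFS

        Result result = table.get(new Get(Bytes.toBytes("user123")));
        System.out.println(Bytes.toString(
            result.getValue(Bytes.toBytes("msg"), Bytes.toBytes("body"))));
        table.close();
      }
    }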

Useful Links
- HDFS design: http://hadoop.apache.org/core/docs/current/hdfs_design.html
- Hadoop API: http://hadoop.apache.org/core/docs/current/api/
- My Hadoop blog: http://hadoopblog.blogspot.com/
- http://www.facebook.com/hadoopfs