Hadoop & its Usage at Facebook

Hadoop & its Usage at Facebook
Dhruba Borthakur
Project Lead, Hadoop Distributed File System
dhruba@apache.org
Presented at the Storage Developer Conference, Santa Clara, September 15, 2009

Outline
- Introduction
- Architecture of the Hadoop Distributed File System
- Hadoop usage at Facebook

Who Am I?
- Hadoop FileSystem (HDFS) Project Lead
- Core contributor since Hadoop's infancy
- Facebook (Hadoop, Hive, Scribe)
- Yahoo! (Hadoop in Yahoo Search)
- Veritas (San Point Direct, Veritas File System)
- IBM Transarc (Andrew File System)
- UW Computer Science alumni (Condor Project)

A Confluence of Trends
- File system: fault tolerance, open data format
- Queryable database: flexible schema
- Archival store: never delete data
HADOOP: a massively scalable queryable store and archive

Hadoop, Why?
- Need to process multi-petabyte datasets
- Data may not have a strict schema
- Expensive to build reliability into each application
- Nodes fail every day: failure is expected rather than exceptional, and the number of nodes in a cluster is not constant
- Need common infrastructure: efficient, reliable, open source (Apache License)

Hadoop History
- Dec 2004: Google publishes the MapReduce paper
- Jul 2005: Nutch uses MapReduce
- Feb 2006: starts as a Lucene subproject
- Apr 2007: Yahoo! runs Hadoop on a 1000-node cluster
- Jan 2008: becomes an Apache top-level project
- Jul 2008: a 4000-node test cluster
- May 2009: Hadoop sorts a petabyte in 17 hours

Who uses Hadoop?
Amazon/A9, Facebook, Google, IBM, Joost, Last.fm, New York Times, PowerSet, Veoh, Yahoo!

What is Hadoop used for?
- Search: Yahoo, Amazon, Zvents
- Log processing: Facebook, Yahoo, ContextWeb, Joost, Last.fm
- Recommendation systems: Facebook
- Data warehouse: Facebook, AOL
- Video and image analysis: New York Times, Eyealike

Public Hadoop Clouds
- Hadoop Map-Reduce on Amazon EC2: http://wiki.apache.org/hadoop/amazonec2
- IBM Blue Cloud: partnering with Google to offer web-scale infrastructure
- Global Cloud Computing Testbed: joint effort by Yahoo, HP and Intel; http://www.opencloudconsortium.org/testbed.html

Commodity Hardware
- Typically a 2-level architecture
- Nodes are commodity PCs
- 30-40 nodes per rack
- Uplink from the rack is 3-4 gigabit; rack-internal is 1 gigabit

Goals of HDFS
- Very large distributed file system: 10K nodes, 100 million files, 10 PB
- Assumes commodity hardware: files are replicated to handle hardware failure; the system detects failures and recovers from them
- Optimized for batch processing: data locations are exposed so that computations can move to where the data resides; provides very high aggregate bandwidth
- Runs in user space on heterogeneous OSes

HDFS Architecture
[diagram: clients consult the NameNode for cluster membership and metadata, then read/write data directly to DataNodes; a Secondary NameNode runs alongside the NameNode]
- NameNode: maps a file to a file-id and a list of DataNodes
- DataNode: maps a block-id to a physical location on disk
- SecondaryNameNode: periodic merge of the transaction log

Distributed File System
- Single namespace for the entire cluster
- Data coherency: write-once-read-many access model; clients can only append to existing files
- Files are broken up into blocks: typically 128 MB block size; each block is replicated on multiple DataNodes
- Intelligent client: the client can find the location of blocks and accesses data directly from the DataNode (see the sketch below)
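To make the access model concrete, here is a minimal sketch of a client writing and then reading a file through the standard org.apache.hadoop.fs API; the path and data are illustrative, and the configuration is assumed to point at a running cluster.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class WriteOnceReadMany {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();  // reads the cluster's site configuration
    FileSystem fs = FileSystem.get(conf);      // handle to the distributed file system

    Path p = new Path("/user/demo/log.txt");   // illustrative path

    // Write once: with overwrite=false, create() fails if the file already exists.
    FSDataOutputStream out = fs.create(p, false);
    out.writeBytes("first record\n");
    out.close();                               // the file is now immutable (append aside)

    // Read many: any client can open the file; the library locates the
    // blocks via the NameNode and streams them directly from DataNodes.
    FSDataInputStream in = fs.open(p);
    byte[] buf = new byte[1024];
    int n = in.read(buf);
    System.out.write(buf, 0, n);
    in.close();
  }
}
```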

NameNode Metadata
- Metadata in memory: the entire metadata is in main memory; no demand paging of metadata
- Types of metadata: list of files; list of blocks for each file; list of DataNodes for each block; file attributes, e.g. creation time and replication factor (see the toy model below)
- A transaction log records file creations, file deletions, etc.
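A toy model of these structures, purely for illustration; the class and field names here are hypothetical, not the actual NameNode internals.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical names: a minimal in-memory picture of the metadata listed above.
class BlockInfo {
  long blockId;
  List<String> dataNodes;      // which DataNodes hold a replica of this block
}

class FileMeta {
  long creationTime;           // file attribute
  short replicationFactor;     // file attribute
  List<BlockInfo> blocks;      // ordered list of blocks making up the file
}

class Namespace {
  // The whole namespace lives in RAM: path -> metadata, no demand paging.
  Map<String, FileMeta> files = new HashMap<String, FileMeta>();
  // Mutations (creates, deletes, ...) are additionally appended to an
  // on-disk transaction log so the namespace can be rebuilt after a restart.
}
```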

DataNode
- A block server: stores data in the local file system (e.g. ext3); stores metadata of a block (e.g. CRC); serves data and metadata to clients
- Block report: periodically sends a report of all existing blocks to the NameNode
- Facilitates pipelining of data: forwards data to other specified DataNodes

Data Correctness
- Uses checksums (CRC32) to validate data
- File creation: the client computes a checksum per 512 bytes; the DataNode stores the checksums
- File access: the client retrieves the data and checksums from the DataNode; if validation fails, the client tries other replicas (see the sketch below)
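A minimal sketch of the per-chunk checksum idea, using java.util.zip.CRC32 over 512-byte chunks as described above; the chunk size constant and method names are illustrative.

```java
import java.util.zip.CRC32;

public class ChunkChecksums {
  static final int BYTES_PER_CHECKSUM = 512;   // one CRC per 512-byte chunk

  // Compute a CRC32 for each 512-byte chunk of a block.
  static long[] checksums(byte[] block) {
    int chunks = (block.length + BYTES_PER_CHECKSUM - 1) / BYTES_PER_CHECKSUM;
    long[] sums = new long[chunks];
    CRC32 crc = new CRC32();
    for (int i = 0; i < chunks; i++) {
      int off = i * BYTES_PER_CHECKSUM;
      int len = Math.min(BYTES_PER_CHECKSUM, block.length - off);
      crc.reset();
      crc.update(block, off, len);
      sums[i] = crc.getValue();
    }
    return sums;
  }

  // On read, the client recomputes and compares; a mismatch means
  // "try another replica" rather than failing the read outright.
  static boolean valid(byte[] block, long[] expected) {
    long[] actual = checksums(block);
    if (actual.length != expected.length) return false;
    for (int i = 0; i < actual.length; i++)
      if (actual[i] != expected[i]) return false;
    return true;
  }
}
```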

Block Placement
- Current strategy: one replica on the local node; second replica on a remote rack; third replica on that same remote rack; additional replicas are placed randomly (sketched below)
- Clients read from the nearest replica
- Pluggable policy (work in progress): helps in experimentation and innovation
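An illustrative sketch of that default rule; the rack-naming convention and helper are hypothetical, not the actual HDFS placement classes, and it assumes a replication factor of at least 3 with at least two racks available.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class ReplicaPlacement {
  // Hypothetical convention: node names look like "rack3/node12".
  static String rackOf(String node) { return node.split("/")[0]; }

  static List<String> chooseTargets(String writer, int replication, List<String> cluster) {
    List<String> targets = new ArrayList<String>();
    targets.add(writer);                                  // replica 1: local node
    List<String> shuffled = new ArrayList<String>(cluster);
    Collections.shuffle(shuffled);
    // Replica 2: some node on a different rack.
    for (String n : shuffled)
      if (!rackOf(n).equals(rackOf(writer))) { targets.add(n); break; }
    // Replica 3: a different node on that same remote rack.
    String remoteRack = rackOf(targets.get(1));
    for (String n : shuffled)
      if (rackOf(n).equals(remoteRack) && !targets.contains(n)) { targets.add(n); break; }
    // Replicas 4..r: random nodes not already chosen.
    for (String n : shuffled) {
      if (targets.size() >= replication) break;
      if (!targets.contains(n)) targets.add(n);
    }
    return targets;
  }
}
```

The shape of the rule matters: a single rack failure can take out at most two of the first three replicas, while keeping two of them on one remote rack limits the cross-rack traffic of the write pipeline.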

Data Pipelining
- The client writes a block to the first DataNode
- The first DataNode forwards the data to the next DataNode in the pipeline, and so on (see the sketch below)
- When all replicas are written, the client moves on to write the next block in the file
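Illustrative only: the forwarding step each DataNode performs, with plain Java streams standing in for the real transfer protocol and a made-up packet size.

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class PipelineForwarder {
  // Each DataNode persists the incoming packet locally and relays it to the
  // next DataNode in the pipeline (pass null for the last node in the chain).
  static void relay(InputStream fromUpstream, OutputStream toLocalDisk,
                    OutputStream toNextDataNode) throws IOException {
    byte[] packet = new byte[64 * 1024];        // packet size is illustrative
    int n;
    while ((n = fromUpstream.read(packet)) != -1) {
      toLocalDisk.write(packet, 0, n);          // store the local replica
      if (toNextDataNode != null)
        toNextDataNode.write(packet, 0, n);     // forward downstream
    }
    toLocalDisk.flush();
    if (toNextDataNode != null) toNextDataNode.flush();
  }
}
```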

NameNode Failure
- The NameNode is a single point of failure
- The transaction log is stored in multiple directories: a directory on the local file system and a directory on a remote file system (NFS/CIFS)
- A real HA solution still needs to be developed; work in progress: BackupNode

Rebalancer
- Goal: % disk full on DataNodes should be similar
- Usually run when new DataNodes are added
- The cluster stays online while the Rebalancer is active
- The Rebalancer is throttled to avoid network congestion
- Command-line tool
- Disadvantages: does not rebalance based on access patterns or load; no support for automatic handling of data hotspots

Hadoop Map/Reduce
- The Map-Reduce programming model: distributed processing of large data sets; pluggable user code runs in a generic framework
- A common design pattern in data processing:
  cat * | grep | sort | unique -c | cat > file
  input | map | shuffle | reduce | output
- Natural for: log processing, web search indexing, ad-hoc queries (see the word-count example below)
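The canonical word-count job makes the model concrete: map emits (word, 1) pairs, the shuffle groups them by word, and reduce sums the counts. A sketch against the org.apache.hadoop.mapreduce API, with illustrative input/output paths:

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {
  // map: (byte offset, line of text) -> (word, 1) for every word in the line
  public static class TokenMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();
    protected void map(LongWritable key, Text line, Context ctx)
        throws IOException, InterruptedException {
      StringTokenizer tok = new StringTokenizer(line.toString());
      while (tok.hasMoreTokens()) {
        word.set(tok.nextToken());
        ctx.write(word, ONE);
      }
    }
  }

  // reduce: (word, [1, 1, ...]) -> (word, total count)
  public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    protected void reduce(Text word, Iterable<IntWritable> counts, Context ctx)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable c : counts) sum += c.get();
      ctx.write(word, new IntWritable(sum));
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = new Job(new Configuration(), "wordcount");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenMapper.class);
    job.setReducerClass(SumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path("/user/demo/input"));     // illustrative
    FileOutputFormat.setOutputPath(job, new Path("/user/demo/output"));  // illustrative
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```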

Map/Reduce and Storage
- Clean API between Map/Reduce and HDFS
- Hadoop Map/Reduce and storage stacks: typical installations store data in HDFS, but Hadoop Map/Reduce can also run on data in MySQL, and has been demonstrated to run on IBM GPFS

Job Scheduling
- Current state of affairs: the Hadoop scheduler places computation close to data; FIFO and Fair Share schedulers are available
- Work in progress: resource awareness (CPU, memory, network); support for MPI workloads; isolation of one job from another

Hadoop Cloud at Facebook

Who generates this data?
Lots of data is generated on Facebook:
- 250+ million active users
- 30 million users update their statuses at least once each day
- More than 1 billion photos uploaded each month
- More than 10 million videos uploaded each month
- More than 1 billion pieces of content (web links, news stories, blog posts, notes, photos, etc.) shared each week

Where is this data stored?
- Hadoop/Hive warehouse: 5000 cores, 2.6 petabytes (August 2009)
- Hadoop archival store: 200 TB

Rate of Data Growth

Data Flow into Hadoop Cloud
[diagram: Web Servers → Scribe MidTier → Network Storage and Servers → Hadoop Hive Warehouse, which also exchanges data with Oracle RAC and MySQL]

Hadoop Scribe: Avoid Costly Filers
[diagram: Web Servers → Scribe MidTier → Scribe Writers → Realtime Hadoop Cluster → Hadoop Hive Warehouse, which also exchanges data with Oracle RAC and MySQL]
http://hadoopblog.blogspot.com/2009/06/hdfs-scribe-integration.html

HDFS Raid
- Starts the same: triplicate every data block
- Background encoding: combine the third replicas of the blocks of a single file into a parity block, then remove the third replicas
- Apache JIRA HDFS-503
- Based on DiskReduce, from Garth Gibson's research group at CMU
[diagram: for a file with three blocks A, B and C, the third replicas are replaced by a single parity block A+B+C; a toy version is sketched below]
http://hadoopblog.blogspot.com/2009/08/hdfs-and-erasure-codes-hdfs-raid.html
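A toy sketch of the parity idea: XOR the blocks into one parity block so that any single lost block can be rebuilt from the survivors. (The real HDFS RAID encoder from HDFS-503 is more involved; this is purely illustrative and assumes equal-length blocks.)

```java
public class XorParity {
  // parity[i] = block1[i] ^ block2[i] ^ ... ; blocks assumed equal length.
  static byte[] encode(byte[][] blocks) {
    byte[] parity = new byte[blocks[0].length];
    for (byte[] block : blocks)
      for (int i = 0; i < block.length; i++)
        parity[i] ^= block[i];
    return parity;
  }

  // Rebuild one missing block by XOR-ing the parity with the surviving blocks.
  static byte[] reconstruct(byte[][] surviving, byte[] parity) {
    byte[] missing = parity.clone();
    for (byte[] block : surviving)
      for (int i = 0; i < block.length; i++)
        missing[i] ^= block[i];
    return missing;
  }
}
```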

Data Usage: Statistics per Day
- 4 TB of compressed new data added per day
- 55 TB of compressed data scanned per day
- 3200+ Hive jobs on the production cluster per day
- 80M compute minutes per day
Barrier to entry is significantly reduced:
- New engineers go through a Hive training session
- 140+ people run jobs on Hadoop/Hive
- Analysts (non-engineers) use Hadoop through Hive

Hive Query Language
- SQL-like query language on Hadoop
- Analytics SQL queries translate well to map-reduce
- Files alone are an insufficient data management abstraction: we need tables, schemas, partitions and indices

Archival: Move Old Data to Cheap Storage
[diagram: data moves from the Hadoop warehouse via distributed copy to a Hadoop archival cluster, and via NFS through a Hadoop archive node to cheap NAS; Hive queries can still reach the archived data]
http://issues.apache.org/jira/browse/HDFS-220

Dynamic-size Hadoop Clouds
- Why multiple compute clouds in Facebook?
  - We accept arbitrary jobs from users
  - Users are unaware of the resources needed by a job
  - Flexible job-isolation techniques are absent
  - Multiple clouds are needed to provide adequate SLAs for jobs
- Why multiple private storage clouds?
  - Lack of security in HDFS

Summary
- Hadoop is a disruptive technology
- Hadoop is the platform of choice for the storage cloud
- Opportunity for vendors to plug their storage into Hadoop Map/Reduce

Useful Links
- HDFS Design: http://hadoop.apache.org/core/docs/current/hdfs_design.html
- Hadoop API: http://hadoop.apache.org/core/docs/current/api/
- My Hadoop Blog: http://hadoopblog.blogspot.com/