HDFS Under the Hood
Sanjay Radia
Sradia@yahoo-inc.com
Grid Computing, Hadoop
Yahoo! Inc.
Outline
- Overview of Hadoop, an open source project
- Design of HDFS
- Ongoing work
Hadoop
- Hadoop provides a framework for storing and processing petabytes of data using commodity hardware and storage
  - Storage: HDFS, HBase
  - Processing: MapReduce
[Stack diagram: Pig on top of MapReduce and HBase, which sit on top of HDFS.]
Hadoop, an Open Source Project
- Implemented in Java
- Apache Top Level Project: http://hadoop.apache.org/core/
  - Core (15 committers): HDFS, MapReduce
  - HBase (3 committers)
- Community of contributors is growing
  - Though mostly Yahoo for HDFS and MapReduce
  - Powerset is leading the effort for HBase
  - Facebook is in the process of opening Hive
- Opportunities for contributing to a major open source project
Hadoop Users
- Clusters from 1 to 2k nodes: Yahoo, Last.fm, Joost, Facebook, A9, ...
- In production use at Yahoo in multiple 2k-node clusters
- Initial tests show that 0.18 will scale to 4k nodes - being validated
- Broad academic interest
  - IBM/Google cloud computing initiative: a 40-node cluster + Xen-based VMs
  - CMU/Yahoo supercomputer cluster (M45): 500-node cluster; looking into making this more widely available to other universities
- Hadoop Summit hosted in March 2008; attracted over 400 attendees
Hadoop Characteristics
- Commodity HW + horizontal scaling
  - Add inexpensive servers with JBODs
  - Storage servers and their disks are not assumed to be highly reliable and available
  - Use replication across servers to deal with unreliable storage/servers
- Metadata-data separation - simple design
  - Storage scales horizontally
  - Metadata scales vertically (today)
- Slightly restricted file semantics
  - Focus is mostly sequential access
  - Single writers
  - No file locking features
- Support for moving computation close to the data
  - i.e. servers have two purposes: data storage and computation
- Simplicity of design is why a small team could build such a large system in the first place
File Systems Background (1): Scaling
[Diagram contrasting approaches: vertical scaling; distributed FS that vertically scales namespace ops while horizontally scaling IO and storage; and horizontally scaling both namespace ops and IO.]
File Systems Background (2): Federating
- Andrew: federated mount of file systems on /afs (plus follow-on work on disconnected operations)
- Newcastle Connection: /.. mounts (plus remote Unix semantics)
- Many others
File Systems Background (3)
- Separation of metadata from data - 1978, 1980
  - "Separating Data from Function in a Distributed File System" (1978) by J. E. Israel, J. G. Mitchell, H. E. Sturgis
  - "A Universal File Server" (1980) by A. D. Birrell, R. M. Needham
- + Horizontal scaling of storage nodes and IO bandwidth
  - Several startups building scalable NFS, Lustre, GFS, pNFS
- + Commodity HW with JBODs, replication, non-POSIX semantics: GFS
- + Computation close to the data: GFS/MapReduce
Hadoop: Multiple FS Implementations
- FileSystem is the interface for accessing the file system
- It has multiple implementations:
  - HDFS: hdfs://
  - Local file system: file://
  - Amazon S3: s3:// (see Tom White's writeup on using Hadoop on EC2/S3)
  - Kosmos: kfs://
  - Archive: har:// - 0.18
- You can set your default file system so that your file names are simply /foo/bar
- MapReduce uses the FileSystem interface - hence it can run on multiple file systems (see the sketch below)
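Because clients program against the FileSystem interface, the same code works for any of these schemes. A minimal sketch in Java, assuming a reachable HDFS namenode; the host name, port, and paths are placeholders, not from the slides:

    // A minimal sketch of the FileSystem interface across schemes.
    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class FsSchemes {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // Explicit scheme: pick the HDFS implementation by URI
        // (namenode.example.com:8020 is an assumed address).
        FileSystem hdfs = FileSystem.get(
            URI.create("hdfs://namenode.example.com:8020/"), conf);

        // The same client code works against the local implementation.
        FileSystem local = FileSystem.get(URI.create("file:///"), conf);

        // With the default file system set in the configuration
        // (fs.default.name in this era), plain paths like /foo/bar
        // resolve against that default.
        FileSystem dflt = FileSystem.get(conf);
        System.out.println(dflt.exists(new Path("/foo/bar")));
      }
    }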
HDFS: Directories, Files & Blocks
- Data is organized into files and directories
- Files are divided into uniform-sized blocks and distributed across cluster nodes
- Blocks are replicated to handle hardware failure
- HDFS keeps checksums of data for corruption detection and recovery
- HDFS (& FileSystem) exposes block placement so that computation can be migrated to the data (see the sketch below)
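A sketch of how block placement is exposed through the FileSystem API of this era; the path is a placeholder:

    // List the blocks of a file and the datanodes holding each replica.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.BlockLocation;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ShowBlockLocations {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        FileStatus status = fs.getFileStatus(new Path("/data/logs/part-00000"));

        // One BlockLocation per block: offset, length, and the hosts
        // holding a replica. Schedulers use this to place work near the data.
        BlockLocation[] blocks =
            fs.getFileBlockLocations(status, 0, status.getLen());
        for (BlockLocation b : blocks) {
          System.out.println("offset=" + b.getOffset()
              + " len=" + b.getLength()
              + " hosts=" + java.util.Arrays.toString(b.getHosts()));
        }
      }
    }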
HDFS Architecture (1)
- Files broken into blocks of 128MB (per-file configurable; see the sketch after this list)
- Single Namenode
  - Manages the file namespace
  - File name to list of blocks + location mapping
  - File metadata (i.e. "inode")
  - Authorization and authentication
  - Collects block reports from Datanodes on block locations
  - Replicates missing blocks
  - Implementation detail: keeps ALL namespace in memory, plus checkpoints & journal
    - 60M objects on a 16GB machine (e.g. 20M files with 2 blocks each)
- Datanodes (thousands) handle block storage
  - Clients access the blocks directly from Datanodes
  - Datanodes periodically send block reports to the Namenode
  - Implementation detail: Datanodes store the blocks using the underlying OS's files
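As a small illustration of the "per-file configurable" block size, a sketch using the FileSystem.create overload that takes replication and block size explicitly; the path and sizes are illustrative, not defaults from the slides:

    // Create a file with a non-default block size and replication factor.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class CreateWithBlockSize {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());

        // create(path, overwrite, bufferSize, replication, blockSize)
        FSDataOutputStream out = fs.create(
            new Path("/data/bigfile"),
            true,                    // overwrite if it exists
            64 * 1024,               // IO buffer size
            (short) 3,               // replication factor
            256L * 1024 * 1024);     // 256MB blocks for this file
        out.write("hello hdfs".getBytes("UTF-8"));
        out.close();
      }
    }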
HDFS Architecture (2)
[Diagram: a client issues namespace operations (create, getfileinfo, addblock, getlocations) to the Namenode, which maintains the metadata and a metadata log; reads and writes go directly to Datanodes, which pipeline writes across block replicas, replicate blocks on command, and report received blocks (blockreceived) back to the Namenode.]
HDFS Architecture (3): Computation Close to the Data
[Diagram: input data stored as DFS blocks replicated across the Hadoop cluster; MAP tasks run on the nodes holding DFS Blocks 1-3 and feed a Reduce task that produces the results.]
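To connect this to the MapReduce side, a minimal old-API (0.18-era) driver sketch; the paths are placeholders, and the locality itself comes from the framework scheduling each map task on or near a node holding that split's block, not from anything in this code:

    // Minimal MapReduce driver using the pre-0.19 (org.apache.hadoop.mapred) API.
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.FileInputFormat;
    import org.apache.hadoop.mapred.FileOutputFormat;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.lib.IdentityMapper;
    import org.apache.hadoop.mapred.lib.IdentityReducer;

    public class LocalityDemo {
      public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(LocalityDemo.class);
        conf.setJobName("locality-demo");

        // Pass-through mapper/reducer; the point is that the input
        // splits come from HDFS blocks and maps are scheduled near them.
        conf.setMapperClass(IdentityMapper.class);
        conf.setReducerClass(IdentityReducer.class);
        conf.setOutputKeyClass(LongWritable.class);
        conf.setOutputValueClass(Text.class);

        FileInputFormat.setInputPaths(conf, new Path("/data/input"));
        FileOutputFormat.setOutputPath(conf, new Path("/data/output"));

        JobClient.runJob(conf);
      }
    }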
Reads, Writes, Block Placement and Replication
- Reads: from the nearest replica
- Writes
  - Writes are pipelined to block replicas
  - Append is mostly in 0.18, will be completed in 0.19
    - Hardest part is dealing with failures of DNs holding the replicas during an append
    - Generation number to deal with failures of DNs during a write
    - Concurrent appends are likely to happen in the future
  - No plans to add random writes so far
- Replication and block placement
  - A file's replication factor can be changed dynamically (default 3; see the sketch below)
  - Block placement is rack-aware
  - Block under-replication & over-replication is detected by the Namenode and triggers a copy or delete operation
  - The Balancer application rebalances blocks to balance DN utilization
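A sketch of changing a file's replication factor dynamically from a client; the path and factor are illustrative:

    // Ask the Namenode to keep more replicas of an existing file.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ChangeReplication {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        Path p = new Path("/data/important-results");

        // Request 5 replicas instead of the default 3. The Namenode
        // notices the under-replication and schedules the extra copies.
        boolean ok = fs.setReplication(p, (short) 5);
        System.out.println("replication change accepted: " + ok);
      }
    }

The same change can be made from the shell with hadoop fs -setrep.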
Details (1): The Protocols
- Client to Namenode: RPC
- Client to Datanode: streaming writes/reads
  - On reads, data is shipped directly from the OS
  - Considering RPC for pread(offset, bytes); see the sketch after this list for the client-side positional read
- Datanode to Namenode: RPC
  - RPC (heartbeat, blockreport, ...)
  - The RPC reply is the command from the Namenode to the Datanode (copy block, ...)
- Not cross-language yet, but that was the goal (hence not RMI)
- 0.18 RPC is quite good: solves timeouts and manages queues/buffers and spikes well
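A sketch of the pread-style positional read already exposed to clients by FSDataInputStream; the path, offset, and length are placeholders:

    // Positional read: fetch a range without seeking the stream.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class PositionalRead {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        FSDataInputStream in = fs.open(new Path("/data/logs/part-00000"));

        // Read 4KB starting at byte offset 1MB.
        byte[] buf = new byte[4096];
        int n = in.read(1024L * 1024, buf, 0, buf.length);
        System.out.println("read " + n + " bytes");
        in.close();
      }
    }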
Current & Near-Term Work (1): HDFS
- Improved RPC - 0.18 - made a significant improvement in scaling
- Clean up the interfaces - 0.18, 0.19
- Improved reliability & availability - 0.17, 0.18, 0.19
  - Current uptime: 98.5% (includes planned downtime)
  - Data loss: a few data blocks lost due to bugs and corruption; never had any fsimage corruption
- Append - 0.18, 0.19
- Authorization (POSIX-like permissions) - 0.16
- Authentication - in progress, 0.20?
- Performance - 0.17, 0.18, 0.19, 0.16
  - Reads within a rack: 4 threads: 110MB/s, 1 thread: 40MB/s (buffer copies fixed in 0.17)
  - Writes within a rack: 4 threads: 45MB/s, 1 thread: 21MB/s
  - Goal: read/write at the speed of the network/disk with low CPU utilization
- NN scaling
- NN replication and HA
- Protocol/interface versioning
- Language-agnostic RPC
Current & Near-Term Work (2): MapReduce
- New API using context objects - 0.19?
- New resource manager/scheduler framework
  - Main goal: release resources not being used (e.g. during the reduce phase)
  - Pluggable schedulers
    - Yahoo: queues with guaranteed capacity + priorities + user quotas
    - Facebook: queue per user?
Scaling the Name Service: Options
[Chart, not to scale: options plotted by the number of names supported (20M, 60M, 400M, 1000M, 2000M+) against the number of clients supported (1x, 4x, 20x, 50x, 100x). Options include: all NS in memory; NS in memory plus RO replicas; NS in malloc heap + RO replicas + finer-grain locks; separate block maps from NN; archives; partial NS (cache) in memory; partial NS in memory with mountable volumes; mountable volumes + automount volume catalog; dynamic partitioning (good isolation properties).]
Scaling the Name Service: Mountable Namespace Volumes with Automounter
Scaling Using Volumes
- Multiple namespace volumes that share the datanodes
  - Volume = Namespace + Block pool
  - A volume is a self-contained unit and can be GC'd independently of other volumes
  - A grid has multiple volumes and hence multiple block pools
- One storage pool
  - Each DN can contribute to each block pool
  - A block pool is like a LUN
  - The block pool manager deals with replication & BRs
- Each namespace is registered in a catalog
  - Automount at the top level using ZooKeeper
  - BTW, we will support mounting elsewhere
- Block pools and volumes are useful for other things
  - Snapshots, temp space, per-job tmp namespace, quotas, federation, other (non-HDFS) namespaces built on a block pool
[Diagram: Grid 1 and Grid 2 each contain volumes (V = NS + BP), i.e. namespaces with their block pools, all sharing the same set of DNs.]
DN, BP and NS Interactions
[Diagram: a client calls GetBlist(pathname) on a namespace (NS1, NS2) and GetLocations(B) on the corresponding block pool manager; each namespace calls getlistofvalidblocks() on its block pool manager; DNs send heartbeats & block reports per block pool (BP1, BP2) and receive free/replicate commands per block pool.]
One Example of Managing Namespaces
[Diagram: an example namespace with domains such as cmu.org, yahoo-inc.com, ysearch.corp, and grid1.corp, and a grid root / containing directories data (d1, d2, d3), projects (p1, p2), users, and tmp.]
Namespace Volumes
- Plus
  - Scales both namespace and performance
  - Gives performance isolation between volumes
  - Well-understood concepts: POSIX mount, DNS mount and automounter
  - Block pools are logical storage units (LUNs)
  - A volume (NS + block pool) is a self-contained unit with no dependencies on other volumes
  - Block pools and volumes are useful for other things: snapshots, temp space, per-job tmp namespace, quotas, federation, other (non-HDFS) namespaces built on a block pool
  - Compatible with separation of block maps (true for most other solutions)
  - Compatible with RO replicas (true for most other solutions)
- Minus
  - Manual separation of NSs (a few can be done automatically)
  - Not completely transparent (namespace structure changes unless you manually mount in the right place)
    - Common issue for all NS partitioning solutions
  - Managing multiple NNs
More Info
- Main web sites
  - http://hadoop.apache.org/core/
  - http://wiki.apache.org/hadoop/
  - http://wiki.apache.org/hadoop/GettingStartedWithHadoop
  - http://wiki.apache.org/hadoop/HadoopMapReduce
- Hadoop Futures - categorized list of areas of new development
  - http://wiki.apache.org/hadoop/HdfsFutures
- Some ideas for interesting projects on Hadoop
  - http://research.yahoo.com/node/1980
- My contact: Sradia@yahoo-inc.com