Apache Hadoop: Processing LARGE data sets

What is Hadoop?
- Framework for reliable, scalable, distributed computation of large data sets
Reliability
- Without failures: Input -> Correct Output
- With a component failure: Input -> Still Correct Output

Scalability
- Service vs. cost
Large data sets
- Large data sets: > 5 PB
- How many hard discs? 1 TB/disc => 5000 discs!
- Need more computers!
Main Parts
- Hadoop Common
- Hadoop Distributed File System (HDFS)
- Hadoop MapReduce

Related Projects
- Avro: data serialization
- Cassandra: scalable multi-master database without SPoF
- Chukwa: data collection system
- HBase: scalable, distributed database
- Hive: data warehouse infrastructure
- Mahout: machine learning & data mining
- Pig: high-level data-flow language & execution framework for parallel computation
- ZooKeeper: coordination service for distributed applications
Users of Hadoop
- Supported by major companies
Many More!
- Let's stop the list at the letter H ;-)

Milestones
- 27 December, 2011: Release 1.0.0 available
Typical Hadoop cluster
- Consists of commodity hardware
- Heterogeneous
- Single machines are NOT highly available
Failures
- Hardware failures are common: give a cluster enough computers and there will definitely be machines that are non-functional
- Hadoop:
  - Doesn't even try to use only stable machines
  - Fault-tolerant behaviour at the application layer
Hadoop DFS
- Fault tolerant
- Requires only low-cost hardware
- Suitable for large data sets
- Programmed in Java => runs on many different software platforms
- Optimized for high throughput
- High data access latency
- Not POSIX-compliant

Files
- Typical file size: > 1 GB
- Traditional hierarchical organization of files with directories
- A file is separated into blocks of equal size (except the last block)
- Typical block size: > 64 MB
- Replication of the blocks across the DFS
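The block layout described above can be sketched in a few lines. This is only an illustration of the splitting rule (equal-size blocks, smaller last block), not HDFS code; the 64 MB constant is the typical block size named on the slide:

```python
# Illustrative sketch, not the HDFS API: split a file's bytes into
# fixed-size blocks, where only the last block may be smaller.
BLOCK_SIZE = 64 * 1024 * 1024  # 64 MB, a typical HDFS block size

def split_into_blocks(data: bytes, block_size: int = BLOCK_SIZE):
    """Return the list of blocks for a file's contents."""
    return [data[i:i + block_size] for i in range(0, len(data), block_size)]

# A 150 MB file yields two full 64 MB blocks plus a smaller 22 MB tail block.
blocks = split_into_blocks(b"x" * 150 * 1024 * 1024)
print([len(b) // (1024 * 1024) for b in blocks])  # [64, 64, 22]
```

Each of these blocks, rather than the whole file, is then the unit of replication.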
HDFS Architecture
- Master/slave architecture (rack-aware)
- NameNode (master):
  - Manages the file system namespace
  - Handles requests to access files
  - Distributes blocks to DataNodes
  - Handles replication of files
- DataNodes (slaves):
  - Store blocks locally
  - Serve read/write requests
  - Send periodic heartbeats to the NameNode
  - Create, delete, and replicate blocks upon order from the NameNode
- Access model:
  - WORM (write once, read many)
  - Streaming data to clients
[Figure: HDFS architecture: the client issues metadata ops to the NameNode, which holds the metadata (name, replicas, ...), and reads/writes blocks directly on the DataNodes, which are spread over rack 1 and rack 2]

Replication
- Replication factor:
  - Configurable for each file
  - Changeable at any time
- NameNode handles replication:
  - If not specified: which replication factor?
  - Where to store the replicas?
  - React to failed replicas
[Figure: NameNode metadata (filename, numReplicas, block-ids, ...): /users/example/data/part-0, r:2, {1, 3}; /users/example/data/part-1, r:3, {2, 4, 5}; the numbered blocks are spread across the DataNodes]

Standard Strategies
- Optimizing replication
- Default replication factor = 3
- One replica within the same rack as the original
- One replica on a machine in a different rack
- Last replica on a different machine, but on the same rack as the second replica
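The default three-replica placement strategy above can be simulated in a few lines. The node and rack names are invented for the example, and this toy function is not Hadoop's actual block placement implementation:

```python
# Toy sketch of the default rack-aware 3-replica placement policy
# (not Hadoop's real BlockPlacementPolicy). Node/rack names are made up.
def place_replicas(writer: str, racks: dict[str, str]) -> list[str]:
    """Pick 3 nodes: the writer's node (same rack as the original),
    a node on a remote rack, and a second node on that remote rack."""
    first = writer
    remote = [n for n in racks if racks[n] != racks[first]]
    second = remote[0]                      # machine in a different rack
    third = next(n for n in remote          # same rack as the second replica,
                 if racks[n] == racks[second] and n != second)  # other machine
    return [first, second, third]

racks = {"node1": "rack1", "node2": "rack1",
         "node3": "rack2", "node4": "rack2"}
replicas = place_replicas("node1", racks)
print(replicas)  # ['node1', 'node3', 'node4']
```

This layout survives the loss of a whole rack while keeping two of the three replicas on one rack, which limits inter-rack write traffic.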
Locality awareness
- Client wants to read data:
  - HDFS tries to serve requests from the nearest DataNode => reduces bandwidth consumption and access latency
  - Optimal: client on the same machine as the DataNode

SPoF
- The NameNode is a SPoF
  - Secondary NameNode as backup
  - Requires human interaction => still a SPoF
[Figure: the secondary NameNode keeps a backup copy of the NameNode's metadata (name, replicas, ...)]
Reliability of File System Metadata
- Records changes in a transaction log: EditLog
- Stores the complete file system namespace in the file FsImage
- Keeps a copy of the FsImage in memory
  - < 8 GB suffice
- Checkpoint:
  - Apply the transactions in the EditLog to the FsImage
- Possibility to maintain multiple copies of EditLog & FsImage
- Snapshots:
  - Feature of future releases
  - Copy of the namespace at a particular point in time
  - Possibility to roll back
- Secondary NameNode:
  - Maintains a copy of the primary NameNode
  - Can replace the primary on failure (manual interaction necessary)
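The EditLog/FsImage mechanism is a classic write-ahead-log-plus-checkpoint design. The following is a minimal sketch of that idea, purely illustrative and not HDFS code: mutations are appended to a log, a checkpoint folds the log into the image, and recovery replays whatever is still in the log on top of the image.

```python
# Toy model of the EditLog/FsImage idea; not the HDFS implementation.
class NamespaceStore:
    def __init__(self):
        self.fsimage = {}   # checkpointed namespace: path -> metadata
        self.editlog = []   # transactions since the last checkpoint

    def create(self, path, meta):
        # Every change is first recorded in the transaction log.
        self.editlog.append(("create", path, meta))

    def checkpoint(self):
        # Apply the transactions in the EditLog to the FsImage.
        for op, path, meta in self.editlog:
            if op == "create":
                self.fsimage[path] = meta
        self.editlog = []

    def recover(self):
        # After a restart: FsImage plus replayed EditLog = full namespace.
        state = dict(self.fsimage)
        for op, path, meta in self.editlog:
            if op == "create":
                state[path] = meta
        return state

store = NamespaceStore()
store.create("/users/example/data/part-0", {"replication": 2})
store.checkpoint()
store.create("/users/example/data/part-1", {"replication": 3})
print(sorted(store.recover()))  # both files are recoverable
```

Only part-0 is in the FsImage here; part-1 survives a restart solely because its transaction is still in the EditLog, which is why both files must be kept reliably.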
MapReduce
- Programming paradigm: a large distributed computation is transformed into a sequence of smaller distributed computations on data sets of key/value pairs
  - "Simplified data processing on large clusters", Jeffrey Dean and Sanjay Ghemawat, Communications of the ACM (2008)
[Figure: the master splits the input files into splits 1-6, assigns them to map tasks, sorts the intermediate key/value pairs, and assigns them to reduce tasks, which write output files 1 and 2]

Two Phases
1. Map: (k1, v1) -> list(k2, v2)
   Split the input data into small chunks; each chunk is processed by a map task => map key/value pairs to a set of intermediate key/value pairs.
2. Reduce: (k2, list(v2)) -> list(v2)
   => reduce the set of intermediate values which share a key to a smaller set.
Example from "Simplified data processing on large clusters":

  map(String key, String value):
    // key: document name
    // value: document contents
    for each word w in value:
      EmitIntermediate(w, "1");

  reduce(String key, Iterator values):
    // key: a word
    // values: a list of counts
    int result = 0;
    for each v in values:
      result += ParseInt(v);
    Emit(AsString(result));

Example WordCount from hadoop.apache.org
- Content of file1: Hello World Bye World
- Content of file2: Hello Hadoop Goodbye Hadoop

Data (first line / second line):
  Hello World Bye World
  Hello Hadoop Goodbye Hadoop
Intermediate key/value pairs:
  <Hello, 1> <World, 1> <Bye, 1> <World, 1>
  <Hello, 1> <Hadoop, 1> <Goodbye, 1> <Hadoop, 1>
Sorted & combined intermediate key/value pairs:
  <Bye, 1> <Hello, 1> <World, 2>
  <Goodbye, 1> <Hadoop, 2> <Hello, 1>
Result of Reduce:
  <Bye, 1> <Goodbye, 1> <Hadoop, 2> <Hello, 2> <World, 2>
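The WordCount walk-through above can be reproduced with a small runnable simulation. It mimics the map -> shuffle/sort -> reduce semantics of the pseudocode, not Hadoop's Java API:

```python
from collections import defaultdict

# Simulation of WordCount: map phase, the framework's shuffle/sort,
# then the reduce phase. Illustrative only, not the Hadoop API.
def map_phase(key, value):
    # key: document name, value: document contents
    return [(w, 1) for w in value.split()]

def shuffle(pairs):
    # Group the intermediate values by key, as the framework would.
    groups = defaultdict(list)
    for k, v in pairs:
        groups[k].append(v)
    return groups

def reduce_phase(key, values):
    # Reduce the values sharing a key to a single count.
    return key, sum(values)

files = {"file1": "Hello World Bye World",
         "file2": "Hello Hadoop Goodbye Hadoop"}
intermediate = [p for name, text in files.items()
                for p in map_phase(name, text)]
result = dict(reduce_phase(k, vs) for k, vs in shuffle(intermediate).items())
print(sorted(result.items()))
# [('Bye', 1), ('Goodbye', 1), ('Hadoop', 2), ('Hello', 2), ('World', 2)]
```

The printed result matches the "Result of Reduce" column of the slide.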
SPoF
- Map/Reduce needs a coordinating master to assign the map tasks and reduce tasks => SPoF
[Figure: the same data flow as before, with the master marked as the single point of failure]

Hadoop Cluster Setup
[Figure: the machines run both layers: the HDFS layer consists of one NameNode and several DataNodes; the MapReduce layer consists of one JobTracker and a TaskTracker per machine]
Requirements
- Java runtime 1.6
- SSH
  - Public key authentication
  - Passphraseless login
- Problems with IPv6 => disable it
- Install Hadoop

Configuration files
- conf/hadoop-env.sh

  # The java implementation to use. Required.
  export JAVA_HOME=/examplepath/java-6-sun
- conf/core-site.xml

  <configuration>
    <property>
      <name>fs.default.name</name>
      <value>hdfs://dfs_master:54310</value>
    </property>
  </configuration>

- conf/mapred-site.xml

  <configuration>
    <property>
      <name>mapred.job.tracker</name>
      <value>mapred_master:54311</value>
    </property>
  </configuration>
- conf/hdfs-site.xml

  <configuration>
    <property>
      <name>dfs.replication</name>
      <value>2</value>
    </property>
  </configuration>

- conf/masters.txt
  - Lists the machines on which secondary NameNodes will be started:

    dfs_master
- conf/slaves.txt
  - Lists all machines on which DataNodes and TaskTrackers are started:

    dfs_master
    mapred_master
    slave1
    slave2
    slave3

NameNode
- Format the HDFS
  Execute the following line on the machine that is supposed to run the NameNode (dfs_master):

  $ bin/hadoop namenode -format
- Start the NameNode
  Execute the following line on the machine that is supposed to run the NameNode (dfs_master):

  $ bin/start-dfs.sh

  This will start the NameNode and secondary NameNodes as well as the DataNodes.

JobTracker
- Start the daemons for MapReduce
  Execute the following line on the machine that is supposed to run the JobTracker (mapred_master):

  $ bin/start-mapred.sh

  This will start the JobTracker and the TaskTrackers.
Example Job
- Copy the example input files from the local file system to HDFS:

  $ bin/hadoop dfs -copyFromLocal /input /input

- Run the MapReduce job wordcount:

  $ bin/hadoop jar hadoop*examples*.jar wordcount /input /output

Output
- Input files:
  - The Outline of Science, Vol. 1 (of 4) by J. Arthur Thomson
  - The Notebooks of Leonardo Da Vinci
  - Ulysses by James Joyce
  - The Art of War by 6th cent. B.C. Sunzi
  - The Adventures of Sherlock Holmes by Sir Arthur Conan Doyle
  - The Devil's Dictionary by Ambrose Bierce
  - Encyclopaedia Britannica, 11th Edition, Volume 4, Part 3
- Four copies of each file to increase the amount of data
- Output: pairs of words and their occurrence counts
HBase
- Database based on Google BigTable
- Supports random read/write
- Data sets are seldom changed
- Data sets are often appended
- A kind of NoSQL
- More a DataStore than a DataBase:
  - No advanced query languages
  - No typed columns
Distribution
- Huge amounts of data => good utilisation of the cluster
- Automatic division of the tables into regions
- Automatic RegionServer failover

Data Model
- Data is stored in tables
  - Rows: sorted by row key (primary key)
  - Columns: belong to a column family
- Row keys are byte arrays
- Table cells:
  - contain byte arrays
  - are versioned
Table: webtable (http://hbase.apache.org/book/datamodel.html)

  Row key     | Time stamp | ColumnFamily contents   | ColumnFamily anchor
  com.cnn.www | t9         |                         | anchor:cnnsi.com = "CNN"
  com.cnn.www | t8         |                         | anchor:my.look.ca = "CNN.com"
  com.cnn.www | t6         | contents:html = <html>  |
  com.cnn.www | t5         | contents:html = <html>  |
  com.cnn.www | t3         | contents:html = <html>  |

- Request the values for all columns of row com.cnn.www =>
  contents:html = <html> (at t6)
  anchor:cnnsi.com = "CNN" (at t9)
  anchor:my.look.ca = "CNN.com" (at t8)

Architecture
- Catalog tables:
  - -ROOT-
  - .META.
- ZooKeeper coordinates and monitors HBase
  - Stores the location of -ROOT-
- -ROOT- contains the location of the .META. table
- .META. contains the locations of the user regions
- HMaster: monitors all RegionServers
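The versioned cell model of the webtable example can be sketched as a toy (row, column, timestamp) -> value store. This illustrates the data model only and is not the HBase client API:

```python
# Toy model of HBase's versioned cells; not the HBase client API.
class Table:
    def __init__(self):
        self.cells = {}  # (row_key, column) -> {timestamp: value}

    def put(self, row, column, ts, value):
        # Cells are byte-array values addressed by row, column, and version.
        self.cells.setdefault((row, column), {})[ts] = value

    def get_row(self, row):
        # Return the latest version of every column of the row.
        out = {}
        for (r, col), versions in self.cells.items():
            if r == row:
                ts = max(versions)
                out[col] = (versions[ts], ts)
        return out

webtable = Table()
webtable.put("com.cnn.www", "anchor:cnnsi.com", 9, "CNN")
webtable.put("com.cnn.www", "anchor:my.look.ca", 8, "CNN.com")
for ts in (3, 5, 6):
    webtable.put("com.cnn.www", "contents:html", ts, "<html>")

print(webtable.get_row("com.cnn.www"))
# contents:html at t6, anchor:cnnsi.com at t9, anchor:my.look.ca at t8
```

As on the slide, a get of row com.cnn.www returns the newest version per column: contents:html from t6, and the two anchors from t9 and t8.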
[Figure: HBase architecture: the client finds the RegionServers via -ROOT- -> .META., then reads and writes directly to the RegionServers; ZooKeeper holds the location of -ROOT-; the HMaster assigns regions to the RegionServers, which persist their data in HDFS]
ZooKeeper
- Centralized coordination service
- Distributed
- Highly reliable
- Offers:
  - Naming
  - Configuration management
  - Synchronization
  - Group services
- Offers a hierarchical namespace of data registers, called znodes
- Similarities to the namespaces of standard file systems
- Stores coordination data
- Typical sizes measured in kB
- Each machine holds its data in memory
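The hierarchical znode namespace can be pictured as a tiny in-memory tree of small data registers addressed by slash-separated paths. This is a toy model of the concept, not the ZooKeeper client API:

```python
# Toy model of ZooKeeper's znode namespace; not the real client API.
class ZNodeTree:
    def __init__(self):
        self.znodes = {"/": b""}  # path -> small payload (typically a few kB)

    def create(self, path, data=b""):
        # Like a file system, a znode can only be created under an
        # existing parent znode.
        parent = path.rsplit("/", 1)[0] or "/"
        if parent not in self.znodes:
            raise KeyError(f"parent znode {parent!r} does not exist")
        self.znodes[path] = data

    def get_children(self, path):
        # List the names of the direct children of a znode.
        return sorted(p.rsplit("/", 1)[1] for p in self.znodes
                      if p != "/" and (p.rsplit("/", 1)[0] or "/") == path)

tree = ZNodeTree()
tree.create("/app")
tree.create("/app/config", b"replication=2")
tree.create("/app/workers")
print(tree.get_children("/app"))  # ['config', 'workers']
```

Coordination data such as configuration or group membership lives in these small registers, which every server of the ensemble keeps fully in memory.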
[Figure: a ZooKeeper service is an ensemble of servers, one of which is the leader; each client connects to one of the servers]
- Clients send requests, get responses, get watch events, and send heartbeats