2. Implementation

2.1 Hadoop

a. Hadoop Installation & Configuration

First of all, we need to install Sun Java 6 (version 6 is preferred over version 7 for running Hadoop). Type the following commands in Ubuntu's terminal:

sudo add-apt-repository ppa:ferramroberto/java
sudo apt-get update
sudo apt-get install sun-java6-jdk

After that, you need to set the Java home path with the following command:

export JAVA_HOME=/usr/lib/jvm/java-6-sun

To ensure that the new Java home is set correctly, print (echo) the JAVA_HOME environment variable with the following command:

echo $JAVA_HOME

The expected output is the Java home path.

a.1 Errors you may face during installation of Java

Error: package sun-java6-jdk has no installation candidate.
Solution: type the following in your terminal:

sudo add-apt-repository "deb http://us.archive.ubuntu.com/ubuntu/ hardy multiverse"
sudo add-apt-repository "deb http://us.archive.ubuntu.com/ubuntu/ hardy-updates multiverse"
sudo apt-get update
sudo apt-get install sun-java6-jdk

JAVA_HOME error: if the Java home is not read or identified, you can solve the problem by adding the line below to ~/.bashrc (adjust the path if your Java home differs; it is usually under /usr/lib/jvm):

export JAVA_HOME=/usr/lib/jvm/java-6-sun
Close and reopen the terminal, then check the variable again:

echo $JAVA_HOME

To check that Java is set up and correctly installed, type the following in your terminal:

java -version

It should display the Java version you have just installed. Now that we have installed Sun Java 6, we move on to install Hadoop.

b. Hadoop Cluster

b.1 Single-Node Hadoop Cluster

1- Install Java as described above.

2- Add a dedicated Hadoop system user:

$ sudo addgroup hadoop
$ sudo adduser --ingroup hadoop hduser

3- Configure SSH. Hadoop requires SSH access to manage its nodes, i.e. remote machines plus your local machine if you want to use Hadoop on it (which is what we want to do in this short tutorial). For our single-node setup of Hadoop, we therefore need to configure SSH access to localhost for the hduser user we created in the previous step.

Output:

user@ubuntu:~$ su - hduser
hduser@ubuntu:~$ ssh-keygen -t rsa -P ""
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hduser/.ssh/id_rsa):
Created directory '/home/hduser/.ssh'.
Your identification has been saved in /home/hduser/.ssh/id_rsa.
Your public key has been saved in /home/hduser/.ssh/id_rsa.pub.
The key fingerprint is:
9b:82:ea:58:b4:e0:35:d7:ff:19:66:a6:ef:ae:0e:d2 hduser@ubuntu
The key's randomart image is:
[...snipp...]
Then, enable SSH access to your local machine with this newly created key:

cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys

Check by writing:

ssh localhost

Output:

The authenticity of host 'localhost (::1)' can't be established.
RSA key fingerprint is d7:87:25:47:ae:02:00:eb:1d:75:4f:bb:44:f9:36:26.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'localhost' (RSA) to the list of known hosts.
Linux ubuntu 2.6.32-22-generic #33-Ubuntu SMP Wed Apr 28 13:27:30 UTC 2010 i686 GNU/Linux
Ubuntu 10.04 LTS
[...snipp...]

4- Disable IPv6. To disable IPv6 on Ubuntu, open /etc/sysctl.conf in the editor of your choice (in root mode, e.g. sudo gedit /etc/sysctl.conf) and add the following lines to the end of the file:

#disable ipv6
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1

Hint: you have to reboot your machine in order for the changes to take effect.
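Hint: after rebooting, you can verify whether IPv6 is really disabled with the following check (an extra step beyond the original instructions; a value of 1 means IPv6 is disabled):

cat /proc/sys/net/ipv6/conf/all/disable_ipv6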
Downloading Hadoop

Download Hadoop from http://www.apache.org/dyn/closer.cgi/hadoop/core

Commands:

$ cd /usr/local
$ sudo tar xzf hadoop-1.0.3.tar.gz
$ sudo mv hadoop-1.0.3 hadoop
$ sudo chown -R hduser:hadoop hadoop

Update $HOME/.bashrc: add the following lines to the end of the $HOME/.bashrc file of user hduser. If you use a shell other than bash, you should of course update the appropriate configuration files instead of .bashrc.

# Set Hadoop-related environment variables
export HADOOP_HOME=/usr/local/hadoop

# Set JAVA_HOME (we will also configure JAVA_HOME directly for Hadoop later on)
export JAVA_HOME=/usr/lib/jvm/java-6-sun

# Some convenient aliases and functions for running Hadoop-related commands
unalias fs &> /dev/null
alias fs="hadoop fs"
unalias hls &> /dev/null
alias hls="fs -ls"

# If you have LZO compression enabled in your Hadoop cluster and
# compress job outputs with LZOP (not covered in this tutorial):
# conveniently inspect an LZOP compressed file from the command
# line; run via:
#
# $ lzohead /hdfs/path/to/lzop/compressed/file.lzo
#
# Requires installed 'lzop' command.
lzohead () {
    hadoop fs -cat $1 | lzop -dc | head -1000 | less
}

# Add Hadoop bin/ directory to PATH
export PATH=$PATH:$HADOOP_HOME/bin
Hadoop Distributed File System (HDFS) Configuration

hadoop-env.sh: The only required environment variable we have to configure for Hadoop in this tutorial is JAVA_HOME. Open conf/hadoop-env.sh in the editor of your choice (if you used the installation path in this tutorial, the full path is /usr/local/hadoop/conf/hadoop-env.sh) and set the JAVA_HOME environment variable to the Sun JDK/JRE 6 directory:

# The java implementation to use. Required.
export JAVA_HOME=/usr/lib/jvm/java-6-sun

conf/*-site.xml: In this section, we will configure the directory where Hadoop will store its data files, the network ports it listens to, etc. Our setup will use Hadoop's Distributed File System, HDFS, even though our little cluster only contains our single local machine. You can leave the settings below as is, with the exception of the hadoop.tmp.dir parameter, which you must change to a directory of your choice. We will use the directory /app/hadoop/tmp in this tutorial. Hadoop's default configurations use hadoop.tmp.dir as the base temporary directory both for the local file system and HDFS, so don't be surprised if you see Hadoop creating the specified directory automatically on HDFS at some later point.

$ sudo mkdir -p /app/hadoop/tmp
$ sudo chown hduser:hadoop /app/hadoop/tmp
# ...and if you want to tighten up security, chmod from 755 to 750...
$ sudo chmod 750 /app/hadoop/tmp

Add the following snippets between the <configuration> ... </configuration> tags in the respective configuration XML file.
In file conf/core-site.xml:

<property>
  <name>hadoop.tmp.dir</name>
  <value>/app/hadoop/tmp</value>
  <description>A base for other temporary directories.</description>
</property>

<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:54310</value>
  <description>The name of the default file system. A URI whose
  scheme and authority determine the FileSystem implementation. The
  uri's scheme determines the config property (fs.SCHEME.impl) naming
  the FileSystem implementation class. The uri's authority is used to
  determine the host, port, etc. for a filesystem.</description>
</property>

In file conf/mapred-site.xml:

<property>
  <name>mapred.job.tracker</name>
  <value>localhost:54311</value>
  <description>The host and port that the MapReduce job tracker runs
  at. If "local", then jobs are run in-process as a single map
  and reduce task.
  </description>
</property>

In file conf/hdfs-site.xml:

<property>
  <name>dfs.replication</name>
  <value>1</value>
  <description>Default block replication.
  The actual number of replications can be specified when the file is created.
  The default is used if replication is not specified in create time.
  </description>
</property>
Formatting the HDFS filesystem via the NameNode

The first step to starting up your Hadoop installation is formatting the Hadoop filesystem, which is implemented on top of the local filesystem of your cluster (which includes only your local machine if you followed this tutorial). You need to do this the first time you set up a Hadoop cluster.

/usr/local/hadoop/bin/hadoop namenode -format

Output:

hduser@ubuntu:/usr/local/hadoop$ bin/hadoop namenode -format
10/05/08 16:59:56 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = ubuntu/127.0.1.1
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 0.20.2
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
************************************************************/
10/05/08 16:59:56 INFO namenode.FSNamesystem: fsOwner=hduser,hadoop
10/05/08 16:59:56 INFO namenode.FSNamesystem: supergroup=supergroup
10/05/08 16:59:56 INFO namenode.FSNamesystem: isPermissionEnabled=true
10/05/08 16:59:56 INFO common.Storage: Image file of size 96 saved in 0 seconds.
10/05/08 16:59:57 INFO common.Storage: Storage directory .../hadoop-hduser/dfs/name has been successfully formatted.
10/05/08 16:59:57 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at ubuntu/127.0.1.1
************************************************************/
hduser@ubuntu:/usr/local/hadoop$
Starting your single-node cluster

/usr/local/hadoop/bin/start-all.sh

This will start up a NameNode, DataNode, JobTracker and a TaskTracker on your machine.

Output:

hduser@ubuntu:/usr/local/hadoop$ bin/start-all.sh
starting namenode, logging to /usr/local/hadoop/bin/../logs/hadoop-hduser-namenode-ubuntu.out
localhost: starting datanode, logging to /usr/local/hadoop/bin/../logs/hadoop-hduser-datanode-ubuntu.out
localhost: starting secondarynamenode, logging to /usr/local/hadoop/bin/../logs/hadoop-hduser-secondarynamenode-ubuntu.out
starting jobtracker, logging to /usr/local/hadoop/bin/../logs/hadoop-hduser-jobtracker-ubuntu.out
localhost: starting tasktracker, logging to /usr/local/hadoop/bin/../logs/hadoop-hduser-tasktracker-ubuntu.out
hduser@ubuntu:/usr/local/hadoop$

Check by writing:

jps

Output:

hduser@ubuntu:/usr/local/hadoop$ jps
2287 TaskTracker
2149 JobTracker
1938 DataNode
2085 SecondaryNameNode
2349 Jps
1788 NameNode
Stopping your single-node cluster

/usr/local/hadoop/bin/stop-all.sh

Testing Hadoop

To start running Hadoop MapReduce jobs, you can download the following input data to apply MapReduce jobs on it:

http://www.gutenberg.org/etext/20417
http://www.gutenberg.org/etext/5000
http://www.gutenberg.org/etext/4300

Download each ebook as a text file in Plain Text UTF-8 encoding and store the files in a local temporary directory of choice, for example /tmp/gutenberg:

ls -l /tmp/gutenberg/

Output:

total 3604
-rw-r--r-- 1 hduser hadoop  674566 Feb  3 10:17 pg20417.txt
-rw-r--r-- 1 hduser hadoop 1573112 Feb  3 10:18 pg4300.txt
-rw-r--r-- 1 hduser hadoop 1423801 Feb  3 10:18 pg5000.txt

Restart the Hadoop cluster:

/usr/local/hadoop/bin/start-all.sh

Copy the local example data to HDFS:

bin/hadoop dfs -copyFromLocal /tmp/gutenberg /user/hduser/gutenberg
Check:

bin/hadoop dfs -ls /user/hduser
bin/hadoop dfs -ls /user/hduser/gutenberg

Run the MapReduce job:

hduser@ubuntu:/usr/local/hadoop$ bin/hadoop jar hadoop*examples*.jar wordcount /user/hduser/gutenberg /user/hduser/gutenberg-output

This command will read all the files in the HDFS directory /user/hduser/gutenberg, process them, and store the result in the HDFS directory /user/hduser/gutenberg-output.

Check that the result is successfully stored in the HDFS directory /user/hduser/gutenberg-output:

bin/hadoop dfs -ls /user/hduser/gutenberg-output

Check the output:

bin/hadoop dfs -cat /user/hduser/gutenberg-output/part-r-00000

Hadoop Web Interfaces

http://localhost:50070/ - web UI of the NameNode daemon
http://localhost:50030/ - web UI of the JobTracker daemon
http://localhost:50060/ - web UI of the TaskTracker daemon
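Hint: if you want to examine the whole result on the local filesystem, one convenient option (an extra step beyond the checks above) is the dfs -getmerge command, which concatenates all part files of an HDFS directory into a single local file:

bin/hadoop dfs -getmerge /user/hduser/gutenberg-output /tmp/gutenberg-output
head /tmp/gutenberg-output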
b.2 Multi-Node Hadoop Cluster

Assume our configuration has one master and one slave only. Configure each machine as a single-node cluster first.

Networking

This should hardly come as a surprise, but for the sake of completeness it must be pointed out that both machines must be able to reach each other over the network. The easiest way is to put both machines in the same network with regard to hardware and software configuration, for example connect both machines via a single hub or switch and configure the network interfaces to use a common network such as 192.168.0.x/24. To make it simple, we will assign the IP address 192.168.0.1 to the master machine and 192.168.0.2 to the slave machine. Update /etc/hosts on both machines with the following lines:

192.168.0.1 master
192.168.0.2 slave

SSH access

The hduser user on the master (aka hduser@master) must be able to connect a) to its own user account on the master, i.e. ssh master in this context and not necessarily ssh localhost, and b) to the hduser user account on the slave (aka hduser@slave) via a password-less SSH login. If you followed the single-node cluster setup, you just have to add hduser@master's public SSH key (which should be in $HOME/.ssh/id_rsa.pub) to the authorized_keys file of hduser@slave (in this user's $HOME/.ssh/authorized_keys). You can do this manually or use the following command:

ssh-copy-id -i $HOME/.ssh/id_rsa.pub hduser@slave
This command will prompt you for the login password for user hduser on slave, then copy the public SSH key for you, creating the correct directory and fixing the permissions as necessary. The final step is to test the SSH setup by connecting with user hduser from the master to the user account hduser on the slave. This step is also needed to save the slave's host key fingerprint to hduser@master's known_hosts file.

Test the connection from master to master:

ssh master

Output:

The authenticity of host 'master (192.168.0.1)' can't be established.
RSA key fingerprint is 3b:21:b3:c0:21:5c:7c:54:2f:1e:2d:96:79:eb:7f:95.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'master' (RSA) to the list of known hosts.
Linux master 2.6.20-16-386 #2 Thu Jun 7 20:16:13 UTC 2007 i686
...
hduser@master:~$

Then test from master to slave:

ssh slave

Output:

The authenticity of host 'slave (192.168.0.2)' can't be established.
RSA key fingerprint is 74:d7:61:86:db:86:8f:31:90:9c:68:b0:13:88:52:72.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'slave' (RSA) to the list of known hosts.
Configuration

conf/masters (master only)

Despite its name, the conf/masters file defines on which machines Hadoop will start secondary NameNodes in our multi-node cluster. In our case, this is just the master machine. The primary NameNode and the JobTracker will always be the machines on which you run the bin/start-dfs.sh and bin/start-mapred.sh scripts, respectively (the primary NameNode and the JobTracker will be started on the same machine if you run bin/start-all.sh).

On master, update conf/masters so that it looks like this:

master

conf/slaves (master only)

The conf/slaves file lists the hosts, one per line, where the Hadoop slave daemons (DataNodes and TaskTrackers) will be run. We want both the master box and the slave box to act as Hadoop slaves, because we want both of them to store and process data. On master, update conf/slaves so that it looks like this:

master
slave

If you have additional slave nodes, just add them to the conf/slaves file, one hostname per line:

master
slave
anotherslave01
anotherslave02
anotherslave03
conf/*-site.xml (all machines)

You must change the configuration files conf/core-site.xml, conf/mapred-site.xml and conf/hdfs-site.xml on ALL machines as follows.

conf/core-site.xml (all machines)

<property>
  <name>fs.default.name</name>
  <value>hdfs://master:54310</value>
  <description>The name of the default file system. A URI whose
  scheme and authority determine the FileSystem implementation. The
  uri's scheme determines the config property (fs.SCHEME.impl) naming
  the FileSystem implementation class. The uri's authority is used to
  determine the host, port, etc. for a filesystem.</description>
</property>

conf/mapred-site.xml (all machines)

<property>
  <name>mapred.job.tracker</name>
  <value>master:54311</value>
  <description>The host and port that the MapReduce job tracker runs
  at. If "local", then jobs are run in-process as a single map
  and reduce task.
  </description>
</property>

Third, we change the dfs.replication parameter (in conf/hdfs-site.xml), which specifies the default block replication. It defines how many machines a single file should be replicated to before it becomes available. If you set this to a value higher than the number of available slave nodes (more precisely, the number of DataNodes), you will start seeing a lot of "(Zero targets found, forbidden1.size=1)" type errors in the log files. The default value of dfs.replication is 3. However, we have only two nodes available, so we set dfs.replication to 2.
conf/hdfs-site.xml (all machines)

<property>
  <name>dfs.replication</name>
  <value>2</value>
  <description>Default block replication.
  The actual number of replications can be specified when the file is created.
  The default is used if replication is not specified in create time.
  </description>
</property>

Formatting the HDFS filesystem via the NameNode

hduser@master:/usr/local/hadoop$ bin/hadoop namenode -format

Starting the multi-node cluster

Starting the cluster is performed in two steps:

1. We begin by starting the HDFS daemons: the NameNode daemon is started on master, and DataNode daemons are started on all slaves (here: master and slave).

2. Then we start the MapReduce daemons: the JobTracker is started on master, and TaskTracker daemons are started on all slaves (here: master and slave), as shown after the HDFS checks below.

hduser@master:/usr/local/hadoop$ bin/start-dfs.sh

Test on master:

hduser@master:/usr/local/hadoop$ jps

Output:

14799 NameNode
15314 Jps
14880 DataNode
14977 SecondaryNameNode

Test on slave:

hduser@slave:/usr/local/hadoop$ jps

Output:

15183 DataNode
15616 Jps
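The commands above carry out only step 1 (HDFS). To complete step 2 and start the MapReduce daemons, the standard Hadoop 1.x procedure is to run the following on the master:

hduser@master:/usr/local/hadoop$ bin/start-mapred.sh

Afterwards, jps on master should additionally list JobTracker and TaskTracker, and jps on slave should additionally list TaskTracker.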
c. Hadoop MapReduce

c.1 Introduction

MapReduce is a programming model and an associated implementation for processing and generating large data sets. Users specify a map function that processes a key/value pair to generate a set of intermediate key/value pairs, and a reduce function that merges all intermediate values associated with the same intermediate key. Many real-world tasks are expressible in this model.

Programs written in this functional style are automatically parallelized and executed on a large cluster of commodity machines. The run-time system takes care of the details of partitioning the input data, scheduling the program's execution across a set of machines, handling machine failures, and managing the required inter-machine communication. This allows programmers without any experience with parallel and distributed systems to easily utilize the resources of a large distributed system.

Google's implementation of MapReduce runs on a large cluster of commodity machines and is highly scalable: a typical MapReduce computation processes many terabytes of data on thousands of machines. Programmers find the system easy to use: hundreds of MapReduce programs have been implemented, and upwards of one thousand MapReduce jobs are executed on Google's clusters every day.

c.2 Programming Model

The computation takes a set of input key/value pairs, and produces a set of output key/value pairs. The user of the MapReduce library expresses the computation as two functions: Map and Reduce.

Map, written by the user, takes an input pair and produces a set of intermediate key/value pairs. The MapReduce library groups together all intermediate values associated with the same intermediate key I and passes them to the Reduce function.

The Reduce function, also written by the user, accepts an intermediate key I and a set of values for that key. It merges together these values to form a possibly smaller set of values. Typically just zero or one output value is produced per Reduce invocation. The intermediate values are supplied to the user's reduce function via an iterator. This allows us to handle lists of values that are too large to fit in memory.
Example: Consider the problem of counting the number of occurrences of each word in a large collection of documents. The user would write code similar to the following pseudocode:

map(String key, String value):
  // key: document name
  // value: document contents
  for each word w in value:
    EmitIntermediate(w, "1");

reduce(String key, Iterator values):
  // key: a word
  // values: a list of counts
  int result = 0;
  for each v in values:
    result += ParseInt(v);
  Emit(AsString(result));

The map function emits each word plus an associated count of occurrences (just '1' in this simple example). The reduce function sums together all counts emitted for a particular word.

In addition, the user writes code to fill in a MapReduce specification object with the names of the input and output files, and optional tuning parameters. The user then invokes the MapReduce function, passing it the specification object. The user's code is linked together with the MapReduce library (implemented in C++).
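For comparison with the pseudocode above, the same word count can be written as a complete Hadoop job in Java. The following is a minimal sketch using the org.apache.hadoop.mapreduce API; it closely follows the WordCount example bundled with Hadoop (the hadoop*examples*.jar run earlier in this chapter):

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Mapper: emits (word, 1) for every word in the input split.
  public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  // Reducer: sums all counts emitted for the same word.
  public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    private IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = new Job(conf, "word count");     // Job.getInstance(conf) in newer Hadoop
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class); // local pre-aggregation of counts
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

The combiner reuses the reducer class to pre-aggregate counts on the map side, which reduces the volume of intermediate data shuffled across the network.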
c.3 Types

Even though the previous pseudocode is written in terms of string inputs and outputs, conceptually the map and reduce functions supplied by the user have associated types:

map (k1, v1) -> list(k2, v2)
reduce (k2, list(v2)) -> list(v2)

That is, the input keys and values are drawn from a different domain than the output keys and values. Furthermore, the intermediate keys and values are from the same domain as the output keys and values. The C++ implementation passes strings to and from the user-defined functions and leaves it to the user code to convert between strings and appropriate types.

More Examples: Here are a few simple examples of interesting programs that can be easily expressed as MapReduce computations.

Distributed Grep: The map function emits a line if it matches a supplied pattern. The reduce function is an identity function that just copies the supplied intermediate data to the output.

Count of URL Access Frequency: The map function processes logs of web page requests and outputs (URL, 1). The reduce function adds together all values for the same URL and emits a (URL, total count) pair.

Reverse Web-Link Graph: The map function outputs (target, source) pairs for each link to a target URL found in a page named source. The reduce function concatenates the list of all source URLs associated with a given target URL and emits the pair (target, list(source)).

Inverted Index: The map function parses each document, and emits a sequence of (word, document ID) pairs. The reduce function accepts all pairs for a given word, sorts the corresponding document IDs and emits a (word, list(document ID)) pair. The set of all output pairs forms a simple inverted index. It is easy to augment this computation to keep track of word positions.
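As a further illustration, here is a minimal sketch of the inverted index example as Hadoop map and reduce classes. The class and field names are our own; we assume each input value is a line of document text, with the name of the input file serving as the document ID, and a driver like the one in the word count sketch above would still be needed:

import java.io.IOException;
import java.util.Set;
import java.util.StringTokenizer;
import java.util.TreeSet;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;

public class InvertedIndex {

  // Mapper: emits (word, documentId) pairs; the file name of the
  // current input split serves as the document ID.
  public static class IndexMapper extends Mapper<Object, Text, Text, Text> {
    private Text word = new Text();
    private Text docId = new Text();

    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      docId.set(((FileSplit) context.getInputSplit()).getPath().getName());
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, docId);
      }
    }
  }

  // Reducer: collects the distinct, sorted document IDs for each word
  // and emits a (word, list of document IDs) pair.
  public static class IndexReducer extends Reducer<Text, Text, Text, Text> {
    public void reduce(Text key, Iterable<Text> values, Context context)
        throws IOException, InterruptedException {
      Set<String> docs = new TreeSet<String>();
      for (Text docId : values) {
        docs.add(docId.toString());
      }
      context.write(key, new Text(docs.toString()));
    }
  }
}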
c.4 Implementation

Many different implementations of the MapReduce interface are possible. The right choice depends on the environment. For example, one implementation may be suitable for a small shared-memory machine, another for a large NUMA multi-processor, and yet another for an even larger collection of networked machines. The environment assumed in the original MapReduce paper is that of large clusters of commodity PCs connected together with switched Ethernet:

(1) Machines are typically dual-processor x86 processors running Linux, with 2-4 GB of memory per machine.

(2) Commodity networking hardware is used, typically either 100 megabits/second or 1 gigabit/second at the machine level, but averaging considerably less in overall bisection bandwidth.

(3) A cluster consists of hundreds or thousands of machines, and therefore machine failures are common.

(4) Storage is provided by inexpensive IDE disks attached directly to individual machines. A distributed file system developed in-house is used to manage the data stored on these disks. The file system uses replication to provide availability and reliability on top of unreliable hardware.

(5) Users submit jobs to a scheduling system. Each job consists of a set of tasks, and is mapped by the scheduler to a set of available machines within a cluster.
c.5 Execution Overview

The Map invocations are distributed across multiple machines by automatically partitioning the input data into a set of M splits. The input splits can be processed in parallel by different machines. Reduce invocations are distributed by partitioning the intermediate key space into R pieces using a partitioning function (e.g., hash(key) mod R). The number of partitions (R) and the partitioning function are specified by the user.

Figure 1 shows the overall flow of a MapReduce operation in this implementation. When the user program calls the MapReduce function, the following sequence of actions occurs (the numbered labels in Figure 1 correspond to the numbers in the list below):

1. The MapReduce library in the user program first splits the input files into M pieces of typically 16 megabytes to 64 megabytes (MB) per piece (controllable by the user via an optional parameter). It then starts up many copies of the program on a cluster of machines.

2. One of the copies of the program is special: the master. The rest are workers that are assigned work by the master. There are M map tasks and R reduce tasks to assign. The master picks idle workers and assigns each one a map task or a reduce task.

3. A worker who is assigned a map task reads the contents of the corresponding input split. It parses key/value pairs out of the input data and passes each pair to the user-defined Map function. The intermediate key/value pairs produced by the Map function are buffered in memory.

4. Periodically, the buffered pairs are written to local disk, partitioned into R regions by the partitioning function. The locations of these buffered pairs on the local disk are passed back to the master, who is responsible for forwarding these locations to the reduce workers.

5. When a reduce worker is notified by the master about these locations, it uses remote procedure calls to read the buffered data from the local disks of the map workers. When a reduce worker has read all intermediate data, it sorts it by the intermediate keys so that all occurrences of the same key are grouped together. The sorting is needed because typically many different keys map to the same reduce task. If the amount of intermediate data is too large to fit in memory, an external sort is used.

6. The reduce worker iterates over the sorted intermediate data and, for each unique intermediate key encountered, passes the key and the corresponding set of intermediate values to the user's Reduce function. The output of the Reduce function is appended to a final output file for this reduce partition.
7. When all map tasks and reduce tasks have been completed, the master wakes up the user program. At this point, the MapReduce call in the user program returns back to the user code.

After successful completion, the output of the MapReduce execution is available in the R output files (one per reduce task, with file names as specified by the user). Typically, users do not need to combine these R output files into one file; they often pass these files as input to another MapReduce call, or use them from another distributed application that is able to deal with input that is partitioned into multiple files.

c.6 Hadoop MapReduce

Hadoop MapReduce is a software framework for easily writing applications which process vast amounts of data (multi-terabyte data-sets) in parallel on large clusters (thousands of nodes) of commodity hardware in a reliable, fault-tolerant manner.

A MapReduce job usually splits the input data-set into independent chunks which are processed by the map tasks in a completely parallel manner. The framework sorts the outputs of the maps, which are then input to the reduce tasks. Typically both the input and the output of the job are stored in a file-system. The framework takes care of scheduling tasks, monitoring them and re-executing the failed tasks.

Typically the compute nodes and the storage nodes are the same, that is, the MapReduce framework and the Hadoop Distributed File System are running on the same set of nodes. This configuration allows the framework to effectively schedule tasks on the nodes where data is already present, resulting in very high aggregate bandwidth across the cluster.

The MapReduce framework consists of a single master JobTracker and one slave TaskTracker per cluster-node. The master is responsible for scheduling the jobs' component tasks on the slaves, monitoring them and re-executing the failed tasks. The slaves execute the tasks as directed by the master.

Minimally, applications specify the input/output locations and supply map and reduce functions via implementations of appropriate interfaces and/or abstract classes. These, and other job parameters, comprise the job configuration. The Hadoop job client then submits the job (jar/executable etc.) and configuration to the JobTracker, which then assumes the responsibility of distributing the software/configuration to the slaves, scheduling tasks and monitoring them, and providing status and diagnostic information to the job client. Although the Hadoop framework is implemented in Java, MapReduce applications need not be written in Java.
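In Hadoop, the partitioning function described in step 4 above is pluggable. As a minimal sketch, the following class reproduces the hash(key) mod R scheme; this mirrors the behaviour of Hadoop's default HashPartitioner, while the class name ModPartitioner is our own:

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

// A partitioner implementing the hash(key) mod R scheme: every
// intermediate (key, value) pair is routed to one of the R reduce
// tasks based on the hash of its key.
public class ModPartitioner extends Partitioner<Text, IntWritable> {
  @Override
  public int getPartition(Text key, IntWritable value, int numReduceTasks) {
    // Mask off the sign bit so the modulo result is never negative.
    return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
  }
}

A job would select it with job.setPartitionerClass(ModPartitioner.class) in the driver shown earlier.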
c.7 Inputs and Outputs

The MapReduce framework operates exclusively on <key, value> pairs; that is, the framework views the input to the job as a set of <key, value> pairs and produces a set of <key, value> pairs as the output of the job, conceivably of different types.

The key and value classes have to be serializable by the framework and hence need to implement the Writable interface. Additionally, the key classes have to implement the WritableComparable interface to facilitate sorting by the framework.

Input and output types of a MapReduce job:

(input) <k1, v1> -> map -> <k2, v2> -> combine -> <k2, v2> -> reduce -> <k3, v3> (output)

c.8 Examples of Hadoop MapReduce Jobs

This is an overview of the MapReduce application that was created in this project. After researching the Hadoop technology and carrying out three of its functions (word count, inverted index and temperature searching) on an Ubuntu system, the next step was to design and implement a MapReduce application involving a number of nodes (machines).

The application we created handles a vast amount of customer receipts that are generated from many points of sale across Egypt; it is much like gathering all the receipts from the various supermarkets around the country. The application is expected to provide its users (government agencies, business owners or marketers) with statistics, indicators and information that support the decision-making process. We can also use it to run queries over the retained data to see the effect of certain market changes.

In order for the application to be effectively tested, we'll first build a generator program whose job is to automatically create a massive amount of receipts (XML files) that are randomly filled with data according to the standard receipt format. This format includes, but is not limited to:

- The name of the point of sale at which the receipt is produced.
- An accurate timestamp indicating day, date and time.
- A list of the names of the purchased items.
- The quantity and price of each item as well as the producing company.
- The total price of the items.

A sample of what such a generated receipt might look like is sketched below.
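A hypothetical receipt produced by the generator might look like the following; the element and attribute names here are illustrative assumptions, not the project's actual schema:

<receipt>
  <pointOfSale name="ExampleMarket" governorate="Cairo" district="Nasr City"/>
  <timestamp>2013-04-16T18:42:07</timestamp>
  <items>
    <item name="Milk 1L" company="ExampleDairyCo" quantity="2" unitPrice="8.50"/>
    <item name="Rice 5kg" company="ExampleFoodsCo" quantity="1" unitPrice="42.00"/>
  </items>
  <totalPrice>59.00</totalPrice>
</receipt>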
The application will use MapReduce on a number of nodes, across which the generated files are distributed, to deduce, for example, the following information (a sketch of how the first query might be expressed as a MapReduce job is given after the list):

- The item that has the highest sales in a certain region (governorate/district): in order to know the dominant products in certain regions.

- The timestamp (time of day, day of the week or date of the month) that represents the peak of purchases: in order to notice when it is the most appropriate time for people to purchase, and perhaps use it to provide offers at such times.

- The region that has the highest sales of a certain item: in order to know the various needs of different regions and to be able to tell which items might be more important to each region.

- The effect of increasing the price of a certain item on its sales figures, represented in either the quantity of items sold or the volume of revenue from sales: in order to be able to tell how people are affected by the increase in price and to what extent they can afford it.

- The sales figures of a certain company during a specified period of time (represented in total quantity of items sold or total revenue): in order to be able to figure out the dominant companies and those most trusted by the consumer.
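As a closing illustration, here is a minimal sketch of how the first query (the item with the highest sales in a certain region) might be phrased in the same Java MapReduce style. It assumes the receipts have already been parsed into lines of the form region<TAB>item<TAB>quantity by an earlier job; this intermediate format, like the class names, is our own assumption rather than the project's actual pipeline:

import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class TopItemPerRegion {

  // Mapper: for each pre-parsed line "region<TAB>item<TAB>quantity",
  // emit (region, "item<TAB>quantity").
  public static class SalesMapper extends Mapper<LongWritable, Text, Text, Text> {
    private Text region = new Text();
    private Text itemAndQty = new Text();

    public void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException {
      String[] fields = value.toString().split("\t");
      if (fields.length == 3) {
        region.set(fields[0]);
        itemAndQty.set(fields[1] + "\t" + fields[2]);
        context.write(region, itemAndQty);
      }
    }
  }

  // Reducer: totals the quantity sold per item within a region and
  // emits the single best-selling item for that region.
  public static class TopItemReducer extends Reducer<Text, Text, Text, Text> {
    public void reduce(Text key, Iterable<Text> values, Context context)
        throws IOException, InterruptedException {
      Map<String, Long> totals = new HashMap<String, Long>();
      for (Text value : values) {
        String[] fields = value.toString().split("\t");
        long qty = Long.parseLong(fields[1]);
        Long seen = totals.get(fields[0]);
        totals.put(fields[0], seen == null ? qty : seen + qty);
      }
      String bestItem = null;
      long bestQty = -1;
      for (Map.Entry<String, Long> e : totals.entrySet()) {
        if (e.getValue() > bestQty) {
          bestQty = e.getValue();
          bestItem = e.getKey();
        }
      }
      if (bestItem != null) {
        context.write(key, new Text(bestItem + "\t" + bestQty));
      }
    }
  }
}

The other queries listed above follow the same pattern, differing mainly in what the mapper chooses as the key (timestamp, item, or company) and what the reducer aggregates.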