Extreme computing lab exercises Session one

Michail Basios (m.basios@sms.ed.ac.uk)
Stratis Viglas (sviglas@inf.ed.ac.uk)

1 Getting started

First you need to access the machine where you will be doing all the work. Do this by typing:

ssh namenode

If this machine is busy, try another one (for example bw1425n13, bw1425n14, or any other number up to bw1425n24). If you want to access Hadoop from a non-DICE machine, you should first log in to a DICE machine over ssh using the following command:

ssh sxxxxxxx@student.ssh.inf.ed.ac.uk

where sxxxxxxx is your matriculation number. Then you can type:

ssh namenode

and access Hadoop.

Hadoop is installed on DICE under /opt/hadoop/hadoop-0.20.2 (this will also be referred to as the Hadoop home directory). The hadoop command can be found in the /opt/hadoop/hadoop-0.20.2/bin directory. To make sure you can run Hadoop commands without having to be in that directory, use your favorite editor (for example gedit) to open the ~/.benv file by typing:

gedit ~/.benv

and add the following line:

PATH=$PATH:/opt/hadoop/hadoop-0.20.2/bin/

Don't forget to save the changes! After that, run the following command to make the new changes take effect right away:

source ~/.benv

After you do this, run:

hadoop dfs -ls /

If you get a listing of directories on HDFS, you have successfully configured everything. If not, make sure you do ALL of the described steps EXACTLY as they appear in this document. Bear in mind that you should not continue if you have not managed to complete this section.

2 HDFS

Hadoop consists of two major parts: a distributed filesystem (HDFS, for Hadoop Distributed File System) and the mechanism for running jobs. Files on HDFS are distributed across the network and replicated on different nodes for reliability and speed. When a job requires a particular file, it is fetched from the machine/rack nearest to the machines that are actually executing the job. You will now proceed to run some simple commands on HDFS. REMEMBER: all the commands mentioned here refer to HDFS shell commands, NOT to UNIX commands which may have the same name. You should keep all your data within a directory named /user/sxxxxxxx (where sxxxxxxx is your matriculation number), which should already have been created for you. It is very important to understand the difference between files stored locally on a DICE machine and files stored on HDFS (see Figure 1). This difference will be obvious after we have completed this tutorial. Figure 1 shows some existing folders (s1250553, s1250553/data/) and other folders (s14xxxxxx/) that we will need to create for this task.

[Figure 1: File transfer between HDFS and a local DICE machine. The figure shows files being moved with hadoop dfs -copyToLocal and -copyFromLocal between HDFS directories under /user/ (s1250553/data containing example1.txt; s14xxxxxx/data/input, data/output and uploads) and the local Desktop folder /afs/inf.ed.ac.uk/user/s14/s14xxxxxx/ containing sampleInput.txt.]

Tasks

1. Initially, make sure that your home directory exists:

hadoop dfs -ls /user/sxxxxxxx

where sxxxxxxx should be your matriculation number. To create a directory in Hadoop the following command is used:

hadoop dfs -mkdir dirname

For this task, initially create the following directories in a similar way (these directories will NOT have been created for you, so you need to create them yourself):

/user/sxxxxxxx/data
/user/sxxxxxxx/data/input
/user/sxxxxxxx/data/output
/user/sxxxxxxx/uploads

Confirm that you've done the right thing by typing:

hadoop dfs -ls /user/sxxxxxxx

For example, if your matriculation number is s0123456, you should see something like:

Found 2 items
drwxr-xr-x - s0123456 s0123456 0 2011-10-19 09:55 /user/s0123456/data
drwxr-xr-x - s0123456 s0123456 0 2011-10-19 09:54 /user/s0123456/uploads

2. Copy the file /user/s1250553/data/example1.txt to /user/sxxxxxxx/data/output by typing:

hadoop dfs -cp /user/s1250553/data/example1.txt /user/sxxxxxxx/data/output

3. Obviously, example1.txt doesn't belong there. Move it from /user/sxxxxxxx/data/output to /user/sxxxxxxx/data/input where it belongs, and delete the /user/sxxxxxxx/data/output directory:

hadoop dfs -mv /user/sxxxxxxx/data/output/example1.txt /user/sxxxxxxx/data/input
hadoop dfs -rmr /user/sxxxxxxx/data/output/

4. Examine the content of example1.txt by using cat and then tail:

hadoop dfs -cat /user/sxxxxxxx/data/input/example1.txt
hadoop dfs -tail /user/sxxxxxxx/data/input/example1.txt

5. Create an empty file named example2.txt in /user/sxxxxxxx/data/input:

hadoop dfs -touchz /user/sxxxxxxx/data/input/example2.txt

6. Remove the file example2.txt:

hadoop dfs -rm /user/sxxxxxxx/data/input/example2.txt

7. Create locally, on the Desktop of your DICE machine, a folder named diceFolder:

mkdir ~/Desktop/diceFolder
cd ~/Desktop/diceFolder

8. Inside diceFolder create a file named sampleInput.txt and add some text content:

touch sampleInput.txt
echo "This is a message" > sampleInput.txt

9. Next, in order to copy (upload) the sampleInput.txt file to HDFS, execute the following command:

hadoop dfs -copyFromLocal ~/Desktop/diceFolder/sampleInput.txt /user/s14xxxxxx/uploads

Observe that the first path (~/Desktop/diceFolder/sampleInput.txt) refers to the local DICE machine and the second path (/user/s14xxxxxx/uploads) to HDFS.

2.1 List of HDFS commands

What follows is a list of useful HDFS shell commands.

cat: copy files to stdout, similar to the UNIX cat command.

hadoop dfs -cat /user/hadoop/file4

copyFromLocal: copy a single src, or multiple srcs, from the local file system to the destination filesystem. The source has to be a local file reference.

hadoop dfs -copyFromLocal localfile /user/hadoop/file1

copyToLocal: copy files to the local file system. The destination must be a local file reference.

hadoop dfs -copyToLocal /user/hadoop/file localfile

cp: copy files from source to destination. This command also allows multiple sources, in which case the destination must be a directory. Similar to the UNIX cp command.

hadoop dfs -cp /user/hadoop/file1 /user/hadoop/file2

getmerge: take a source directory and a destination file as input and concatenate the files in src into the destination local file. Optionally, addnl can be set to enable adding a newline character at the end of each file.

hadoop dfs -getmerge /user/hadoop/mydir/ ~/result_file

ls: for a file, returns stat on the file with the format:

filename <number of replicas> size modification_date modification_time permissions userid groupid

For a directory it returns the list of its direct children, as in UNIX, with the format:

dirname <dir> modification_date modification_time permissions userid groupid

hadoop dfs -ls /user/hadoop/file1

lsr: recursive version of ls. Similar to the UNIX ls -R command.

hadoop dfs -lsr /user/hadoop/

mkdir: create a directory. Behaves like the UNIX mkdir -p command, creating parent directories along the path (for bragging rights: what is the difference?).

hadoop dfs -mkdir /user/hadoop/dir1 /user/hadoop/dir2

mv: move files from source to destination, similar to the UNIX mv command. This command also allows multiple sources, in which case the destination needs to be a directory. Moving files across filesystems is not permitted.

hadoop dfs -mv /user/hadoop/file1 /user/hadoop/file2

rm: delete files, similar to the UNIX rm command. Only deletes files and empty directories.

hadoop dfs -rm /user/hadoop/file1

rmr: recursive version of rm. Same as rm -r on UNIX.

hadoop dfs -rmr /user/hadoop/dir1/

tail: display the last kilobyte of the file to stdout. Similar to the UNIX tail command. Options: -f outputs appended data as the file grows (follow).

hadoop dfs -tail /user/hadoop/file1

touchz: create a file of zero length. Similar to the UNIX touch command.

hadoop dfs -touchz /user/hadoop/file1

More detailed information about Hadoop commands can be found at http://hadoop.apache.org/docs/r0.18.3/hdfs_shell.html.
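These commands can also be driven from a script rather than typed one by one. The following is a minimal Python sketch (not part of the lab material) that wraps hadoop dfs with subprocess; it assumes the hadoop binary is on your PATH as set up in section 1, and the scratch directory and file names are placeholders only.

#!/usr/bin/python
# Minimal sketch: driving the HDFS shell commands listed above from Python.
# Assumes the hadoop binary is on PATH (see section 1); paths are placeholders.
import subprocess

def hdfs(*args):
    # run "hadoop dfs <args>" and return its exit status
    return subprocess.call(["hadoop", "dfs"] + list(args))

# create a scratch directory, upload a local file, then list the directory
hdfs("-mkdir", "/user/sxxxxxxx/scratch")
hdfs("-copyFromLocal", "sampleInput.txt", "/user/sxxxxxxx/scratch")
hdfs("-ls", "/user/sxxxxxxx/scratch")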

3 Running jobs

3.1 Demo: Word counting

Hadoop ships with a number of demo applications, and here we will look at the canonical task of word counting: we will count the number of times each word appears in a document. For this purpose we will use the /user/s1250553/data/example1.txt file, so first copy that file to your input directory. Second, make sure you delete your output directory before running the job, or the job will fail (IMPORTANT). We run the wordcount example by typing:

hadoop jar /opt/hadoop/hadoop-0.20.2/hadoop-0.20.2-examples.jar wordcount <in> <out>

where <in> and <out> are the input and output directories, respectively. After running the example, examine (using ls) the contents of the output directory. From the output directory, copy the file part-r-00000 to a local directory (somewhere in your home directory) and examine its contents. Was the job successful?

3.2 Running streaming jobs

In order to run your own programs on Hadoop you have to use the Hadoop streaming utility. It allows you to create and run map/reduce jobs with any executable or script as the mapper and/or the reducer. The way it works is very simple: the input is converted into lines which are fed to the stdin of the mapper process. The mapper processes this data and writes to stdout. Lines from the stdout of the mapper process are converted into key/value pairs by splitting them on the first tab character (of course, this is only the default behavior and can be changed). The key/value pairs are fed to the stdin of the reducer process, which collects and processes them. Finally, the reducer writes to stdout, which is the final output of the program. Everything will become much clearer through the examples later. It is important to note that with Hadoop streaming, mappers and reducers can be any programs that read from stdin and write to stdout, so the choice of programming language is left to the programmer. Here, we will use Python.

3.2.1 Writing a word counting program in Python

In this subsection we will see how to write a Python program that counts the number of words in a file. Initially we will test the code locally on small files before using it in a MapReduce job. As we will see later, this is important because it helps avoid running jobs on Hadoop that would give wrong results.

Step 1: Mapper

Using streaming, a mapper reads from STDIN and writes to STDOUT. Keys and values are delimited (by default) using tabs; records are split using newlines. For this task, create a file named mapper.py and add the following code:

#!/usr/bin/python
import sys

# input from standard input
for line in sys.stdin:
    # remove whitespace
    line = line.strip()
    # split the line into tokens
    tokens = line.split()
    for token in tokens:
        # write the results to standard output
        print '%s\t%s' % (token, 1)
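To make the key/value convention concrete: each line the mapper prints is later split on the first tab only, with everything before the tab becoming the key and everything after it the value. A tiny standalone illustration (not part of the job itself):

#!/usr/bin/python
# Illustration only: how Hadoop streaming derives (key, value) from one
# line of mapper output -- it splits on the FIRST tab character only.
line = "hello\t1"
key, value = line.split("\t", 1)
print repr(key), repr(value)      # -> 'hello' '1'

line = "compound key\tvalue\twith more\ttabs"
key, value = line.split("\t", 1)
print repr(key), repr(value)      # -> 'compound key' 'value\twith more\ttabs'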

I would suggest writing this code out yourself rather than copying it, especially if you are not familiar with Python. For those who are not familiar with Python, tutorials can be found by clicking the following links: Introduction to Python - Part 1, Introduction to Python - Part 2. If you consider writing these files too time-consuming, you can find them here: /user/s1250553/source/mapper.py and /user/s1250553/source/reducer.py.

Step 2: Reducer

Create a file named reducer.py and add the following code:

#!/usr/bin/python
from operator import itemgetter
import sys

prevWord = ""
valueTotal = 0
word = ""

# input from STDIN
for line in sys.stdin:
    line = line.strip()
    word, value = line.split('\t', 1)
    value = int(value)
    # Remember that Hadoop sorts map output by key;
    # the reducer receives these keys sorted
    if prevWord == word:
        valueTotal += value
    else:
        if prevWord:
            # write result to STDOUT
            print '%s\t%s' % (prevWord, valueTotal)
        valueTotal = value
        prevWord = word

if prevWord == word:
    print '%s\t%s' % (prevWord, valueTotal)

Step 3: Testing the code (cat data | map | sort | reduce)

As mentioned previously, before using the code in a MapReduce job you should test it locally. Make sure that mapper.py and reducer.py have execute permissions (chmod u+x mapper.py). Try the following command and explain the results (a Python-only walk-through of the same pipeline is sketched a little further below):

echo "this is a test and this should count the number of words" | ./mapper.py | sort -k1,1 | ./reducer.py

Task

Count the number of words that have more than two occurrences in a file.

How to run the job

After running the code successfully locally, the next step is to run it on Hadoop. Suppose you have your mapper, mapper.py, and your reducer, reducer.py, and the input is in /user/hadoop/input/. How do you run your streaming job? It is similar to running the example from section 3.1, with minor differences:

hadoop jar contrib/streaming/hadoop-0.20.2-streaming.jar \
    -input /user/sxxxxxxx/data/input \
    -output /user/sxxxxxxx/data/output \
    -mapper mapper.py \
    -reducer reducer.py

Note the differences: we always have to specify contrib/streaming/hadoop-0.20.2-streaming.jar as the jar to run (this file can be found in the /opt/hadoop/hadoop-0.20.2/ directory, so you either have to be in that directory when running the job, or specify the full path to the file), and the particular mapper and reducer we use are specified through the -mapper and -reducer options.
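As promised above, here is a Python-only walk-through of the Step 3 pipeline. It is an illustration only, not something Hadoop needs; the word-counting logic mirrors mapper.py and reducer.py, so you can inspect what each stage produces.

#!/usr/bin/python
# Illustration only: a pure-Python re-enactment of the Step 3 pipeline
#   echo "..." | ./mapper.py | sort -k1,1 | ./reducer.py

data = "this is a test and this should count the number of words"

# "map": emit one (token, 1) pair per word, like mapper.py
mapped = [(token, 1) for token in data.split()]

# "sort": Hadoop sorts the map output by key before the reducer sees it
mapped.sort(key=lambda pair: pair[0])

# "reduce": sum counts of consecutive identical keys, like reducer.py
prevWord, total = None, 0
for word, value in mapped:
    if word == prevWord:
        total += value
    else:
        if prevWord is not None:
            print '%s\t%s' % (prevWord, total)
        prevWord, total = word, value
if prevWord is not None:
    print '%s\t%s' % (prevWord, total)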

In case the mapper and/or reducer are not already present on the remote machines (which will often be the case), we also have to package the actual files in the job submission. Assuming that neither mapper.py nor reducer.py were present on the machines in the cluster, the previous job would be run as:

hadoop jar contrib/streaming/hadoop-0.20.2-streaming.jar \
    -input /user/hadoop/input \
    -output /user/hadoop/output \
    -mapper mapper.py \
    -file mapper.py \
    -reducer reducer.py \
    -file reducer.py

Here, the -file option specifies that the file is to be copied to the cluster. This can also be very useful for packaging any auxiliary files your program might use (dictionaries, configuration files, etc.). Each job can have multiple -file options. (A sketch of a mapper that reads such an auxiliary file appears at the end of this section.)

We will run a simple example to demonstrate how streaming jobs are run. Copy the file /user/s1250553/source/random-mapper.py from HDFS to a local directory (a directory on the local machine, NOT on HDFS). This mapper simply generates a random number for each word in the input file, hence the input file in your input directory can be anything. Run the job by typing:

hadoop jar contrib/streaming/hadoop-0.20.2-streaming.jar \
    -input /user/sxxxxxxx/data/input \
    -output /user/sxxxxxxx/data/output \
    -mapper random-mapper.py \
    -file random-mapper.py

IMPORTANT NOTE: for Hadoop to know how to properly run your Python scripts, you MUST include the following line as the FIRST line in all your mappers/reducers:

#!/usr/bin/python

What happens when, instead of using mapper.py, you use /bin/cat as the mapper?

Run the word counting example on Hadoop (the one you previously tried locally). As input, use a file that you create locally and copy onto HDFS, as shown previously with the hadoop dfs -copyFromLocal command. Observe the output folder and files. What can you conclude from the number of files generated? How can you obtain a single file instead of multiple files? When should this be done locally and when on Hadoop?

Setting job configuration

Various job options can be specified on the command line; we will cover the most commonly used ones in this section. The general syntax for specifying additional configuration variables is:

-jobconf <name>=<value>

To avoid having your job named something like streamjob5025479419610622742.jar, you can specify an alternative name through the mapred.job.name variable. For example:

-jobconf mapred.job.name="my job"

Run the random-mapper.py example again, this time naming your job "Random job <matriculation_number>", where <matriculation_number> is your matriculation number. After you run the job (and preferably before it finishes), open the browser and go to http://hcrc1425n01.inf.ed.ac.uk:50030/. In the list of running jobs, look for the job with the name you gave it and click on it. You can see various statistics about your job; try to find the number of reducers used. How many reducers did you use? If your job finished before you had a chance to open the browser, it will be in the list of finished jobs rather than the list of running jobs, but you can still see all the same information by clicking on it.
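Returning to the -file option mentioned above: files shipped with -file are made available in the task's working directory, so a mapper can open them by name. The sketch below is a hypothetical example only: it assumes a stopword list called stopwords.txt (not part of the lab data) submitted alongside the mapper with an extra -file stopwords.txt option.

#!/usr/bin/python
# Sketch only: a word-count mapper that ignores words listed in an
# auxiliary file. Assumes the job was submitted with an extra
# "-file stopwords.txt" option (stopwords.txt is a hypothetical file,
# not part of the lab data), so it sits in the task's working directory.
import sys

stopwords = set(line.strip() for line in open('stopwords.txt'))

for line in sys.stdin:
    for token in line.strip().split():
        if token not in stopwords:
            print '%s\t%s' % (token, 1)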

3.3 Secondary sorting

As was mentioned earlier, the key/value pairs are obtained by splitting the mapper output on the first tab character in the line. This can be changed using the stream.map.output.field.separator and stream.num.map.output.key.fields variables. For example, if I want the key to be everything up to the second "-" character in the line, I would add the following:

-jobconf stream.map.output.field.separator=- \
-jobconf stream.num.map.output.key.fields=2

Hadoop also comes with a partitioner class, org.apache.hadoop.mapred.lib.KeyFieldBasedPartitioner, which is useful for cases where you want to perform a secondary sort on the keys. Imagine you have the following list of IPs:

192.168.2.1
190.191.34.38
161.53.72.111
192.168.1.1
161.53.72.23

You want to partition the data so that addresses with the same first 16 bits are processed by the same reducer. However, you also want each reducer to see the data sorted according to the first 24 bits of the address. Using the mentioned partitioner class you can tell Hadoop how to group the data to be processed by the reducers. You do this using the following options:

-jobconf map.output.key.field.separator=. \
-jobconf num.key.fields.for.partition=2

The first option tells Hadoop what character to use as a separator (just like in the previous example), and the second one tells it how many fields from the key to use for partitioning. Knowing this, here is how we would solve the IP address example (assuming that the addresses are in /user/hadoop/input):

hadoop jar contrib/streaming/hadoop-0.20.2-streaming.jar \
    -input /user/hadoop/input \
    -output /user/hadoop/output \
    -mapper cat \
    -reducer cat \
    -partitioner org.apache.hadoop.mapred.lib.KeyFieldBasedPartitioner \
    -jobconf stream.map.output.field.separator=. \
    -jobconf stream.num.map.output.key.fields=3 \
    -jobconf map.output.key.field.separator=. \
    -jobconf num.key.fields.for.partition=2

The line with -jobconf num.key.fields.for.partition=2 tells Hadoop to partition the IPs based on the first 16 bits (the first two numbers), and -jobconf stream.num.map.output.key.fields=3 tells it to sort the IPs according to everything before the third separator (the dot in this case), which corresponds to the first 24 bits of the address.

Task

Copy the file /user/s1250553/data/secondary.txt to your input directory. Lines in this file have the following format:

LastName.FirstName.Address.PhoneNo

Using what you learned in this section, your task is to:

1. Partition the data so that all people with the same last name go to the same reducer.

2. Partition the data so that all people with the same last name go to the same reducer, and also make sure that the lines are sorted according to first name.

3. Partition the data so that all people with the same first and last name go to the same reducer, and that they are sorted according to address.
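To see what the two keys above actually do, the following standalone Python sketch mimics the behaviour for the IP example (not for the secondary.txt tasks, which are left to you): it groups the addresses by the first two dot-separated fields (the partition key) and, within each group, sorts by the first three fields (the sort key). This is an illustration of the idea only, not how Hadoop implements it.

#!/usr/bin/python
# Illustration only: mimicking, in plain Python, what
# KeyFieldBasedPartitioner does for the IP example above.
# Partition key = first 2 dot-separated fields (first 16 bits);
# sort key      = first 3 dot-separated fields (first 24 bits).

ips = [
    "192.168.2.1",
    "190.191.34.38",
    "161.53.72.111",
    "192.168.1.1",
    "161.53.72.23",
]

partitions = {}
for ip in ips:
    fields = ip.split(".")
    partition_key = ".".join(fields[:2])   # decides which reducer gets this line
    partitions.setdefault(partition_key, []).append(ip)

for partition_key in sorted(partitions):
    # within a "reducer", lines arrive sorted by the first 3 fields
    group = sorted(partitions[partition_key],
                   key=lambda ip: ip.split(".")[:3])
    print "reducer for %s.*.*:" % partition_key
    for ip in group:
        print "  " + ip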