
Hadoop Installation Guide

Hadoop Installation Guide (for Ubuntu - Trusty) v1.0, 25 Nov 2014 Naveen Subramani

Hadoop Installation Guide (for Ubuntu - Trusty) v1.0, 25 Nov 2014

Hadoop and the Hadoop logo are registered trademarks of the Apache Software Foundation. Read Hadoop's trademark policy here. Ubuntu, the Ubuntu logo and Canonical are registered trademarks of Canonical. Read Canonical's trademark policy here. All other trademarks mentioned in the book belong to their respective owners.

This book aims to make it easy for a beginner to build a Hadoop cluster. It will be updated periodically based on the suggestions, ideas and corrections received from readers. Mail feedback to: books@pinlabs.in

Released under the Creative Commons Attribution-ShareAlike 4.0 International license. A brief description of the license. A more detailed license text.

Preface

About this guide

We have been working on Hadoop for quite some time. To share our knowledge of Hadoop, we wrote this guide to help people install Hadoop easily. This guide is based on a Hadoop installation on Ubuntu 14.04 LTS.

Target Audience

Our aim has been to provide a guide for beginners who are new to Hadoop implementation. Some familiarity with Big Data is assumed for the readers of this book.

Acknowledgement

Some of the content and definitions have been borrowed from web resources such as manuals, documentation and white papers from hadoop.apache.org. We would like to thank all the authors of these resources.

License

Attribution-ShareAlike 4.0 International. For the full version of the license text, please refer to http://creativecommons.org/licenses/by-sa/4.0/legalcode; a shorter description is available at http://creativecommons.org/licenses/by-sa/4.0/.

Feedback

We would really appreciate your feedback. We will enhance the book on an ongoing basis based on your feedback. Please mail your feedback to books@pinlabs.in.

Contents

1 Hadoop Ecosystem
  1.1 What Is Apache Hadoop?
  1.2 Understanding Hadoop Ecosystem
    1.2.1 What Is HDFS?
    1.2.2 Apache Hadoop NextGen MapReduce (YARN)
  1.3 Assumptions on Environment variable

2 Apache YARN Pseudo-Distributed Mode
  2.1 Supported Modes
  2.2 Pseudo-Distributed Mode
    2.2.1 Requirements for Pseudo-Distributed Mode
    2.2.2 Installation Notes
      2.2.2.1 Get some tools
      2.2.2.2 Install JDK 1.7
        2.2.2.2.1 Installing open-jdk 1.7
        2.2.2.2.2 Installing oracle-jdk 1.7
      2.2.2.3 Setup passphraseless ssh
      2.2.2.4 Setting Hadoop Package
      2.2.2.5 Configuring core-site.xml
      2.2.2.6 Configuring hdfs-site.xml
      2.2.2.7 Configuring mapred-site.xml
      2.2.2.8 Configuring yarn-site.xml
    2.2.3 Execution
      2.2.3.1 Start the hadoop cluster
      2.2.3.2 Verify the hadoop cluster
    2.2.4 Running Example Program
    2.2.5 Debugging
    2.2.6 Web UI

3 Apache YARN Fully-Distributed Mode
  3.1 Fully-Distributed Mode
    3.1.1 Requirements for Fully-Distributed Mode
    3.1.2 Installation Notes
      3.1.2.1 Setting Hadoop Package (for all machines)
      3.1.2.2 Configuring core-site.xml
      3.1.2.3 Configuring hdfs-site.xml
      3.1.2.4 Configuring mapred-site.xml
      3.1.2.5 Configuring yarn-site.xml
    3.1.3 Add slave Node details (for all machines)
    3.1.4 Edit /etc/hosts entry (for all machines)
    3.1.5 Setup passphraseless ssh
    3.1.6 Execution (on Master Node)
      3.1.6.1 Start the hadoop cluster
      3.1.6.2 Verify the hadoop cluster
    3.1.7 Running Example Program
    3.1.8 Debugging
    3.1.9 Web UI

4 Apache HBase Installation
  4.1 Supported Modes
    4.1.1 Requirements
    4.1.2 Standalone Mode

5 Apache HBase Pseudo-Distributed Mode
  5.1 Requirements
  5.2 Pseudo-Distributed Mode

6 Apache HBase Fully-Distributed Mode
  6.1 Requirements
  6.2 Fully-Distributed Mode

7 Apache Hive Installation
  7.1 Hive Installation
    7.1.1 Requirements
    7.1.2 Installation Guide

8 Apache Pig Installation
  8.1 Pig Installation
    8.1.1 Requirements
    8.1.2 Installation Guide

9 Apache Spark Installation
  9.1 Apache Spark Installation
    9.1.1 Requirements
    9.1.2 Installation Guide

List of Tables

1.1 Hadoop Modules
3.1 Two Node Cluster Setup
3.2 Daemons List for two node cluster
6.1 Distributed Mode Sample Architecture
8.1 Pig Operators

Chapter 1

Hadoop Ecosystem

1.1 What Is Apache Hadoop?

The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage. Rather than relying on hardware to deliver high availability, the library itself is designed to detect and handle failures at the application layer, delivering a highly available service on top of a cluster of computers, each of which may be prone to failures.

1.2 Understanding Hadoop Ecosystem

Apache Hadoop consists of two key components: a reliable, distributed file system called the Hadoop Distributed File System (HDFS) and a high-performance parallel data processing engine called Apache YARN. The most important aspect of Hadoop is its ability to move computation to the data rather than moving data to the computation; thus HDFS and MapReduce are tightly integrated.

1.2.1 What Is HDFS?

HDFS is a distributed file system that provides high-throughput access to data. HDFS creates multiple replicas (default: 3) of each data block across the Hadoop cluster to enable reliable and rapid access to the data. The main daemons of HDFS are listed below.

NameNode is the master of the system. It oversees and coordinates data storage (directories and files).

DataNodes are the slaves, deployed on all slave machines, that provide the actual storage for HDFS. They serve read and write requests from clients.

Secondary NameNode is responsible for performing periodic checkpoints. In the event of a NameNode failure, you can restart the NameNode using the latest checkpoint. It is not, however, a backup node for the NameNode.

1.2.2 Apache Hadoop NextGen MapReduce (YARN)

Yet Another Resource Negotiator (YARN) is the 2nd generation MapReduce (MR). It splits the two major responsibilities of the MapReduce JobTracker, resource management and job scheduling/monitoring, into separate daemons: a global ResourceManager and a per-application ApplicationMaster (AM). The main daemons of YARN are listed below.

The ResourceManager has two main components: the Scheduler and the ApplicationsManager. The Scheduler is responsible for allocating resources to the various running applications subject to the familiar constraints of capacities, queues etc. The ApplicationsManager is responsible for accepting job submissions, negotiating the first container for executing the application-specific ApplicationMaster, and providing the service for restarting the ApplicationMaster container on failure.

The NodeManager is the per-machine framework agent responsible for containers, monitoring their resource usage (cpu, memory, disk, network) and reporting the same to the ResourceManager/Scheduler.

JobHistoryServer is a daemon that serves historical information about completed applications. Typically the JobHistory server can be co-deployed with the JobTracker, but we recommend running the JobHistory server as a separate daemon.

Module            Description                                             Version          Installation Guide
Apache HDFS       A distributed file system that provides                 2.5.1            Pseudo-Distributed,
                  high-throughput access to application data.                              Fully-Distributed
Apache YARN       A framework for job scheduling and cluster              2.5.1            Pseudo-Distributed,
                  resource management.                                                     Fully-Distributed
Hadoop MapReduce  A YARN-based system for parallel processing             2.5.1            Pseudo-Distributed,
                  of large data sets.                                                      Fully-Distributed
Apache HBase      A scalable, distributed database that supports          0.98.8-hadoop2   Standalone,
                  structured data storage for large tables.                                Pseudo-Distributed,
                                                                                           Fully-Distributed
Apache Hive       A data warehouse infrastructure that provides           0.14.0           Standalone
                  data summarization and ad hoc querying.
Apache Pig        A high-level data-flow language and execution           0.14.0           Standalone
                  framework for parallel computation.
Apache Spark      A fast and general compute engine for Hadoop data.      1.1.0            Standalone
                  Spark provides a simple and expressive programming
                  model that supports a wide range of applications,
                  including ETL, machine learning, stream processing,
                  and graph computation.

Table 1.1: Hadoop Modules

1.3 Assumptions on Environment variable

For adding the .bashrc entries, follow these steps:

$ cd
$ vim .bashrc

Add these lines at the end of the .bashrc file:

export JAVA_HOME=/opt/jdk1.7.0_51
export PATH=$PATH:$HOME/bin:$JAVA_HOME/bin
export HADOOP_HOME="$HOME/hadoop-2.5.1"
export HIVE_HOME="$HOME/hive-0.14.0"
export HBASE_HOME="$HOME/hbase-0.98.8-hadoop2"
export PIG_HOME="$HOME/pig-0.14.0"
export PATH=$PATH:$HOME/bin:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HIVE_HOME/bin

Press Esc, then type :wq to save and quit the vim editor.

$ source .bashrc
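A quick sketch to confirm the new variables are active in the current shell (the expected values assume the ubuntu user's home directory, /home/ubuntu, used throughout this guide; adjust if you unpack the packages elsewhere):

$ echo $JAVA_HOME
/opt/jdk1.7.0_51
$ echo $HADOOP_HOME
/home/ubuntu/hadoop-2.5.1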

Chapter 2

Apache YARN Pseudo-Distributed Mode

2.1 Supported Modes

An Apache Hadoop cluster can be installed in one of three supported modes:

Local (Standalone) Mode - Hadoop is configured to run in a non-distributed mode, as a single Java process. This is useful for debugging.

Pseudo-Distributed Mode - each Hadoop daemon runs in a separate Java process on a single host.

Fully-Distributed Mode - a master/slave cluster setup where the daemons run on separate machines.

2.2 Pseudo-Distributed Mode

2.2.1 Requirements for Pseudo-Distributed Mode

Ubuntu Server 14.04
JDK 1.7
Apache Hadoop-2.5.1 package
ssh-server

2.2.2 Installation Notes

2.2.2.1 Get some tools

Before beginning the installation, update the Ubuntu package lists and get some tools for editing:

$ sudo apt-get update
$ sudo apt-get install vim
$ cd

2.2.2.2 Install JDK 1.7

Running Apache Hadoop requires Java JDK 1.7. Install open-jdk 1.7 or oracle-jdk 1.7 on your Ubuntu machine; for installing open-jdk 1.7 read this, and for oracle-jdk 1.7 read this.

2.2.2.2.1 Installing open-jdk 1.7

For installing open-jdk 1.7, follow these steps:

$ sudo apt-get install openjdk-7-jdk
$ vim .bashrc

Add these lines at the end of the .bashrc file:

export JAVA_HOME=/usr
export PATH=$PATH:$HOME/bin:$JAVA_HOME/bin

Press Esc, then type :wq to save and quit the vim editor.

$ source .bashrc
$ java -version

2.2.2.2.2 Installing oracle-jdk 1.7

For installing oracle-jdk 1.7, follow these steps:

$ wget https://dl.dropboxusercontent.com/u/24798834/hadoop/jdk-7u51-linux-x64.tar.gz
$ tar xzf jdk-7u51-linux-x64.tar.gz
$ sudo mv jdk1.7.0_51 /opt/
$ vim .bashrc

Add these lines at the end of the .bashrc file:

export JAVA_HOME=/opt/jdk1.7.0_51
export PATH=$PATH:$HOME/bin:$JAVA_HOME/bin

Press Esc, then type :wq to save and quit the vim editor.

$ source .bashrc
$ java -version

Console output:

java version "1.7.0_51"
Java(TM) SE Runtime Environment (build 1.7.0_51-b13)
Java HotSpot(TM) 64-Bit Server VM (build 24.51-b03, mixed mode)

2.2.2.3 Setup passphraseless ssh

Set up passwordless ssh access for the Hadoop daemons:

$ sudo apt-get install ssh
$ ssh-keygen -t rsa -P ""
$ ssh-copy-id -i ~/.ssh/id_rsa.pub localhost
$ ssh localhost
$ exit
$ cd

2.2.2.4 Setting Hadoop Package

Download and install the Hadoop 2.5.1 package in the home directory of the ubuntu user.

$ wget http://apache.cs.utah.edu/hadoop/common/stable/hadoop-2.5.1.tar.gz
$ tar xzf hadoop-2.5.1.tar.gz
$ cd hadoop-2.5.1/etc/hadoop

Ensure that JAVA_HOME is set in hadoop-env.sh and points to the Java installation you intend to use. You can set other environment variables in hadoop-env.sh to suit your requirements. Some of the default settings refer to the variable HADOOP_HOME. The value of HADOOP_HOME is automatically inferred from the location of the startup scripts: HADOOP_HOME is the parent directory of the bin directory that holds the Hadoop scripts.

Configure JAVA_HOME in the hadoop-env.sh file by uncommenting the line export JAVA_HOME= and replacing it with the content below for open-jdk:

export JAVA_HOME=/usr

or with the content below for oracle-jdk:

export JAVA_HOME=/opt/jdk1.7.0_51

2.2.2.5 Configuring core-site.xml

etc/hadoop/core-site.xml: Edit core-site.xml (available at $HADOOP_HOME/etc/hadoop/core-site.xml) with the contents below.

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>

2.2.2.6 Configuring hdfs-site.xml

etc/hadoop/hdfs-site.xml: Edit hdfs-site.xml (available at $HADOOP_HOME/etc/hadoop/hdfs-site.xml) with the contents below.

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/home/ubuntu/yarn/yarn_data/hdfs/namenode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/home/ubuntu/yarn/yarn_data/hdfs/datanode</value>
  </property>
</configuration>

Where ubuntu is the current user. dfs.replication represents the data replication factor (3 by default); since this is a single node, it is set to 1. dfs.namenode.name.dir defines the local directory for storing NameNode data. dfs.datanode.data.dir defines the local directory for storing DataNode data (i.e. actual user data).

2.2.2.7 Configuring mapred-site.xml

etc/hadoop/mapred-site.xml: Edit mapred-site.xml (available at $HADOOP_HOME/etc/hadoop/mapred-site.xml) with the contents below.

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>

Where mapreduce.framework.name specifies the MapReduce version to be used, MR1 or MR2 (YARN).

2.2.2.8 Configuring yarn-site.xml

etc/hadoop/yarn-site.xml: Edit yarn-site.xml (available at $HADOOP_HOME/etc/hadoop/yarn-site.xml) with the contents below.

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
</configuration>

2.2.3 Execution

Now that all the configuration is done, the next step is to format the NameNode and start the Hadoop cluster. Format the NameNode using the command below from the $HADOOP_HOME directory.

$ bin/hadoop namenode -format
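Hadoop will normally create the dfs.namenode.name.dir and dfs.datanode.data.dir directories itself, but you can also pre-create them to check ownership and permissions; a sketch assuming the /home/ubuntu/yarn paths used in hdfs-site.xml above:

$ mkdir -p /home/ubuntu/yarn/yarn_data/hdfs/namenode
$ mkdir -p /home/ubuntu/yarn/yarn_data/hdfs/datanode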

2.2.3.1 Start the hadoop cluster

Start the Hadoop cluster with the command below, available in the $HADOOP_HOME directory.

$ sbin/start-all.sh

2.2.3.2 Verify the hadoop cluster

After starting the Hadoop cluster you can check for the 5 Hadoop daemons using the jps tool, which displays the running Java processes with their pids. It should list all 5 daemons: 1. NameNode, 2. DataNode, 3. SecondaryNameNode, 4. NodeManager, 5. ResourceManager.

$ jps

Console output:

11495 SecondaryNameNode
11653 ResourceManager
11260 NameNode
25361 NodeManager
25217 DataNode
26101 Jps

2.2.4 Running Example Program

Now the Hadoop cluster is up and running. Let's run the famous wordcount program on the cluster. We have to create a test input file for the word count program and upload it to HDFS under the path /input. Create an input folder and test file using the commands below. Note: execute these commands inside the $HADOOP_HOME folder.

$ mkdir input && echo "This is word count example using hadoop 2.2.0" >> input/file

Upload the created folder to HDFS. On successful execution you will be able to see its contents on the HDFS web UI (http://localhost:50070) under the path /input/file.

$ bin/hadoop dfs -copyFromLocal input /input

Now run the word count program:

$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.1.jar wordcount /input /output

On a successful run, the output of the wordcount job is stored under the directory /output in HDFS, and the result is available in the part-r-00000 file. The _SUCCESS file indicates a successful run of the job. A sketch for reading this result back from the shell follows the Debugging subsection below.

2.2.5 Debugging

If your Hadoop cluster fails to list all the daemons, you can inspect the log files available in the $HADOOP_HOME/logs directory.

$ ls -al logs
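Returning to the example program above: the result can be read straight out of HDFS with the same dfs client. The output below assumes the one-line input file created in section 2.2.4 (each word occurs once, keys sorted lexicographically):

$ bin/hadoop dfs -cat /output/part-r-00000
2.2.0   1
This    1
count   1
example 1
hadoop  1
is      1
using   1
word    1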

2.2.6 Web UI

Access the Hadoop components using the URIs below.

Web UI for Hadoop NameNode: http://localhost:50070/
Web UI for Hadoop HDFS browser: http://localhost:50070/explorer.html
Web UI for YARN ResourceManager: http://localhost:8088/
Web UI for YARN NodeManager: http://localhost:8042/
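As a quick headless sanity check of these endpoints, you can fetch the HTTP status code from the command line (curl is an assumption here; install it with apt-get if it is missing):

$ curl -s -o /dev/null -w "%{http_code}\n" http://localhost:50070/
200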

Chapter 3

Apache YARN Fully-Distributed Mode

3.1 Fully-Distributed Mode

YARN fully-distributed mode is a master/slave cluster setup where the daemons run on separate machines. In this book we implement the Hadoop cluster on three machines: one machine acts as the master node and the other two machines act as slave nodes. The table below describes the machine configurations.

Name         Machine            Roles
MasterNode   ubuntu 14.04 LTS   NameNode, Secondary NameNode, ResourceManager
SlaveNode1   ubuntu 14.04 LTS   DataNode, NodeManager
SlaveNode2   ubuntu 14.04 LTS   DataNode, NodeManager

Table 3.1: Two Node Cluster Setup

3.1.1 Requirements for Fully-Distributed Mode

Ubuntu Server 14.04
JDK 1.7
Apache Hadoop-2.5.1 package
ssh-server

3.1.2 Installation Notes

1. Set up some tools on all 3 machines - for setting up the required tools refer to this document.
2. Next install the JDK on all 3 machines - for setting up the JDK refer to this document.

3.1.2.1 Setting Hadoop Package (for all machines)

Download and install the Hadoop 2.5.1 package in the home directory of the ubuntu user on all machines (i.e. Master Node, Slave Node1, Slave Node2).

$ wget http://apache.cs.utah.edu/hadoop/common/stable/hadoop-2.5.1.tar.gz
$ tar xzf hadoop-2.5.1.tar.gz
$ cd hadoop-2.5.1/etc/hadoop

Ensure that JAVA_HOME is set in hadoop-env.sh and points to the Java installation you intend to use. You can set other environment variables in hadoop-env.sh to suit your requirements. Some of the default settings refer to the variable HADOOP_HOME. The value of HADOOP_HOME is automatically inferred from the location of the startup scripts: HADOOP_HOME is the parent directory of the bin directory that holds the Hadoop scripts.

Configure JAVA_HOME in the hadoop-env.sh file by uncommenting the line export JAVA_HOME= and replacing it with the content below for open-jdk:

export JAVA_HOME=/usr

or with the content below for oracle-jdk:

export JAVA_HOME=/opt/jdk1.7.0_51

3.1.2.2 Configuring core-site.xml

etc/hadoop/core-site.xml: Edit core-site.xml (available at $HADOOP_HOME/etc/hadoop/core-site.xml) with the contents below.

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://<master hostname>:9000</value>
  </property>
</configuration>

3.1.2.3 Configuring hdfs-site.xml

etc/hadoop/hdfs-site.xml: Edit hdfs-site.xml (available at $HADOOP_HOME/etc/hadoop/hdfs-site.xml) with the contents below.

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/home/ubuntu/yarn/yarn_data/hdfs/namenode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/home/ubuntu/yarn/yarn_data/hdfs/datanode</value>
  </property>
</configuration>

Where ubuntu is the current user. dfs.replication represents the data replication factor (3 by default). dfs.namenode.name.dir defines the local directory for storing NameNode data. dfs.datanode.data.dir defines the local directory for storing DataNode data (i.e. actual user data).

3.1.2.4 Configuring mapred-site.xml

etc/hadoop/mapred-site.xml: Edit mapred-site.xml (available at $HADOOP_HOME/etc/hadoop/mapred-site.xml) with the contents below.

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>

Where mapreduce.framework.name specifies the MapReduce version to be used, MR1 or MR2 (YARN).

3.1.2.5 Configuring yarn-site.xml

etc/hadoop/yarn-site.xml: Edit yarn-site.xml (available at $HADOOP_HOME/etc/hadoop/yarn-site.xml) with the contents below.

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value><master hostname>:8025</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value><master hostname>:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value><master hostname>:8040</value>
  </property>
</configuration>

3.1.3 Add slave Node details (for all machines)

After configuring the Hadoop config files, we have to add the list of slave machines to the slaves file located in the $HADOOP_HOME/etc/hadoop directory. Edit the file, remove the localhost entry and append lines with the following contents. Replace the placeholders with the appropriate IP addresses from your data center.

<slavenode1 private ip>
<slavenode2 private ip>

3.1.4 Edit /etc/hosts entry (for all machines)

Now we have to add the addresses of all machines to the /etc/hosts file with the following entries. Replace the placeholders with the appropriate IP addresses from your data center. Change this entry on all machines.

<masternode private ip>   <master hostname>
<slavenode1 private ip>   <slave1 hostname>
<slavenode2 private ip>   <slave2 hostname>

Important Note: please comment out the 127.0.1.1 entry in the /etc/hosts file.

3.1.5 Setup passphraseless ssh

Set up passwordless ssh from the master to the slave machines:

$ sudo apt-get install ssh
$ ssh-keygen -t rsa -P ""
$ ssh-copy-id -i ~/.ssh/id_rsa.pub localhost
$ ssh-copy-id -i ~/.ssh/id_rsa.pub <slave1 ip>
$ ssh-copy-id -i ~/.ssh/id_rsa.pub <slave2 ip>
$ ssh localhost
$ exit
$ ssh <slave1 ip>
$ exit
$ ssh <slave2 ip>
$ exit

Make sure that you are able to log in to all the slaves without a password.

3.1.6 Execution (on Master Node)

Now that all the configuration is done, the next step is to format the NameNode and start the Hadoop cluster. Format the NameNode using the command below from the $HADOOP_HOME directory. Execution is done on the master node.

$ bin/hadoop namenode -format

3.1.6.1 Start the hadoop cluster

Start the Hadoop cluster with the command below, available in the $HADOOP_HOME directory.

$ sbin/start-all.sh

3.1.6.2 Verify the hadoop cluster

After starting the Hadoop cluster you can check for the Hadoop daemons using the jps tool, which displays the running Java processes with their pids. It should list the appropriate daemons on each machine; check the table below to see which daemons run on which machine.

$ jps

Name         Machine            Roles
MasterNode   ubuntu 14.04 LTS   NameNode, Secondary NameNode, ResourceManager
SlaveNode1   ubuntu 14.04 LTS   DataNode, NodeManager
SlaveNode2   ubuntu 14.04 LTS   DataNode, NodeManager

Table 3.2: Daemons List for two node cluster

3.1.7 Running Example Program

Now the Hadoop cluster is up and running. Let's run the famous wordcount program on the cluster. We have to create a test input file for the word count program and upload it to HDFS under the path /input. Execute all these commands on the master machine. Create an input folder and test file using the commands below. Note: execute these commands inside the $HADOOP_HOME folder.

$ mkdir input && echo "This is word count example using hadoop 2.2.0" >> input/file

Upload the created folder to HDFS. On successful execution you will be able to see its contents on the HDFS web UI (http://localhost:50070) under the path /input/file.

$ bin/hadoop dfs -copyFromLocal input /input

Now run the word count program:

$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.1.jar wordcount /input /output

On a successful run, the output of the wordcount job is stored under the directory /output in HDFS, and the result is available in the part-r-00000 file. The _SUCCESS file indicates a successful run of the job.

3.1.8 Debugging

If your Hadoop cluster fails to list all the daemons, you can inspect the log files available in the $HADOOP_HOME/logs directory.

$ ls -al logs

3.1.9 Web UI

Access the Hadoop components using the URIs below.

Web UI for Hadoop NameNode: http://<masternode ip>:50070/
Web UI for Hadoop HDFS browser: http://<masternode ip>:50070/explorer.html
Web UI for YARN ResourceManager: http://<masternode ip>:8088/
Web UI for YARN NodeManager (on each slave): http://<slavenode ip>:8042/
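As a final command-line check that both slaves have registered with the NameNode, you can run the stock dfsadmin report from $HADOOP_HOME on the master (a sketch; the exact report layout varies slightly between Hadoop releases):

$ bin/hdfs dfsadmin -report

The report should list two live DataNodes, one per slave, along with their configured and remaining capacity.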

Chapter 4

Apache HBase Installation

4.1 Supported Modes

An Apache HBase cluster can be installed in one of three supported modes:

Local (Standalone) Mode - HBase is configured to run against the local filesystem. This is not an appropriate configuration for a production instance of HBase, but it is useful for experimenting with HBase: you can insert rows into a table, perform put and scan operations against the table, enable or disable the table, and start and stop HBase using the hbase shell CLI.

Pseudo-Distributed Mode - each HBase daemon (HMaster, HRegionServer, and ZooKeeper) runs in a separate Java process, but on a single host.

Fully-Distributed Mode - a master/slave cluster setup where the daemons run on separate machines. In a distributed configuration, the cluster contains multiple nodes, each of which runs one or more HBase daemons. These include primary and backup Master instances, multiple ZooKeeper nodes, and multiple RegionServer nodes. Fully-distributed mode matches real-world scenarios.

4.1.1 Requirements

HBase requires that a JDK and Hadoop be installed. See the JDK installation section for Oracle JDK or Open JDK installation, and the Hadoop installation section for Hadoop installation. Ensure the HADOOP_HOME entry is set in .bashrc.

4.1.2 Standalone Mode

Standalone mode uses the local file system for storing HBase data. Standalone mode is not suitable for production; it is meant for development and testing purposes.

Loopback IP - HBase 0.94.x and earlier

Prior to HBase 0.94.x, HBase expected the loopback IP address to be 127.0.0.1. Ubuntu and some other distributions default to 127.0.1.1, and this will cause problems for you. An example /etc/hosts file looks like this:

127.0.0.1 localhost
127.0.0.1 mydell

Get Started with HBase

Choose a download site from the list of Apache Download Mirrors. Click on the suggested top link. This will take you to a mirror of HBase Releases. Click on the folder named stable and then download the binary file that ends in .tar.gz to your local filesystem. Be sure to choose the version that corresponds with the version of Hadoop you are likely to use later. In most cases, you should choose the file for Hadoop 2, which will be called something like hbase-0.98.3-hadoop2-bin.tar.gz. Do not download the file ending in src.tar.gz for now.

Extract HBase Package

$ tar xzf hbase-0.98.8-hadoop2-bin.tar.gz
$ cd hbase-0.98.8-hadoop2

Set JAVA_HOME in conf/hbase-env.sh

$ vim conf/hbase-env.sh

Uncomment the JAVA_HOME entry in conf/hbase-env.sh and point it to your JDK location, e.g. /opt/jdk1.7.0_51:

export JAVA_HOME=/opt/jdk1.7.0_51

Edit conf/hbase-site.xml

Edit conf/hbase-site.xml and add entries for the ZooKeeper data directory and the data directory for HBase. Replace the contents with the contents below.

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>file:///home/ubuntu/yarn/hbase_data/data</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/home/ubuntu/yarn/hbase_data/zookeeper</value>
  </property>
</configuration>

Start HBase

Start HBase by running the shell script bin/start-hbase.sh. After it has started, the jps command should list the HMaster daemon responsible for HBase.

$ bin/start-hbase.sh

Get started with HBase Shell

After installing HBase it is time to get started with the HBase shell. Fire up the HBase shell using the bin/hbase shell command.

$ bin/hbase shell

Create a table

Use the create command to create a table. You must specify the table name and a column family as arguments to the create command. The command below creates a table called employeedb with the column family finance.

hbase> create 'employeedb', 'finance'
0 row(s) in 1.2200 seconds

List tables

Use the list command to list tables in HBase.

hbase> list 'employeedb'
TABLE
employeedb
1 row(s) in 0.0350 seconds
=> ["employeedb"]

Insert data into the table

Use the put command to insert data into a table in HBase.

hbase> put 'employeedb', 'row1', 'finance:name', 'Naveen'
0 row(s) in 0.1770 seconds
hbase> put 'employeedb', 'row2', 'finance:salary', 20000
0 row(s) in 0.0160 seconds
hbase> put 'employeedb', 'row3', 'finance:empid', 10124
0 row(s) in 0.0260 seconds

Scan the table for all data at once

Use the scan command to list all contents of a table in HBase.

hbase> scan 'employeedb'
ROW    COLUMN+CELL
row1   column=finance:name, timestamp=1403759475114, value=Naveen
row2   column=finance:salary, timestamp=1403759492807, value=20000
row3   column=finance:empid, timestamp=1403759503155, value=10124
3 row(s) in 0.0440 seconds

Get a particular row of data

Use the get command to fetch a single row from a table in HBase.

hbase> get 'employeedb', 'row1'
COLUMN          CELL
finance:name    timestamp=1403759475114, value=Naveen
1 row(s) in 0.0230 seconds

Delete a table

To delete a table in HBase you first have to disable it; only then can you delete it. Use the disable command to disable the table (and the enable command to re-enable it), then the drop command to drop the table.

hbase> disable 'employeedb'
0 row(s) in 1.6270 seconds
hbase> drop 'employeedb'
0 row(s) in 0.2900 seconds

Exit from HBase Shell

Use the quit command to exit from the HBase shell.

hbase> quit

Stopping HBase

To stop HBase, use the stop-hbase.sh shell script in the bin folder.

$ ./bin/stop-hbase.sh
stopping hbase...

Chapter 5

Apache HBase Pseudo-Distributed Mode

5.1 Requirements

HBase requires that a JDK and Hadoop be installed. See the JDK installation section for Oracle JDK or Open JDK installation, and the Hadoop installation section for Hadoop installation. Ensure the HADOOP_HOME entry is set in .bashrc.

5.2 Pseudo-Distributed Mode

Pseudo-distributed mode uses the HDFS file system for storing HBase data. Pseudo-distributed mode is not suitable for production; it is meant for development and testing purposes on a single machine.

Loopback IP - HBase 0.94.x and earlier

Prior to HBase 0.94.x, HBase expected the loopback IP address to be 127.0.0.1. Ubuntu and some other distributions default to 127.0.1.1, and this will cause problems for you. An example /etc/hosts file looks like this:

127.0.0.1 localhost
127.0.0.1 mydell

Get Started with HBase

Choose a download site from the list of Apache Download Mirrors. Click on the suggested top link. This will take you to a mirror of HBase Releases. Click on the folder named stable and then download the binary file that ends in .tar.gz to your local filesystem. Be sure to choose the version that corresponds with the version of Hadoop you are likely to use later. In most cases, you should choose the file for Hadoop 2, which will be called something like hbase-0.98.3-hadoop2-bin.tar.gz. Do not download the file ending in src.tar.gz for now.

Extract HBase Package

$ tar xzf hbase-0.98.8-hadoop2-bin.tar.gz
$ cd hbase-0.98.8-hadoop2

Set JAVA_HOME in conf/hbase-env.sh

$ vim conf/hbase-env.sh

Uncomment the JAVA_HOME entry in conf/hbase-env.sh and point it to your JDK location, e.g. /opt/jdk1.7.0_51:

export JAVA_HOME=/opt/jdk1.7.0_51

Edit conf/hbase-site.xml

Edit conf/hbase-site.xml and add entries for the ZooKeeper data directory and the data directory for HBase. Replace the contents with the contents below. In this case we use HDFS for HBase storage and we are going to run the region server and ZooKeeper as separate daemons.

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://localhost:9000/hbase</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/home/ubuntu/yarn/hbase_data/zookeeper</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>localhost</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
</configuration>

Start HBase

Start HBase by running the shell script bin/start-hbase.sh. After it has started, the jps command should list the Java processes responsible for HBase (HMaster, HRegionServer, HQuorumPeer).

$ bin/start-hbase.sh
5605 HMaster
5826 Jps
5003 JobTracker
5545 HQuorumPeer
4756 DataNode
5728 HRegionServer
4546 NameNode
5157 TaskTracker
4907 SecondaryNameNode
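Since hbase.rootdir points at HDFS in this mode, you can verify that HBase created its storage directory after startup; a quick check, assuming $HADOOP_HOME is the Hadoop install directory from the earlier chapters:

$ $HADOOP_HOME/bin/hadoop fs -ls /hbase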

Get started with HBase Shell

After installing HBase it is time to get started with the HBase shell. Fire up the HBase shell using the bin/hbase shell command.

$ bin/hbase shell

Create a table

Use the create command to create a table. You must specify the table name and a column family as arguments to the create command. The command below creates a table called employeedb with the column family finance.

hbase> create 'employeedb', 'finance'
0 row(s) in 1.2200 seconds

List tables

Use the list command to list tables in HBase.

hbase> list 'employeedb'
TABLE
employeedb
1 row(s) in 0.0350 seconds
=> ["employeedb"]

Insert data into the table

Use the put command to insert data into a table in HBase.

hbase> put 'employeedb', 'row1', 'finance:name', 'Naveen'
0 row(s) in 0.1770 seconds
hbase> put 'employeedb', 'row2', 'finance:salary', 20000
0 row(s) in 0.0160 seconds
hbase> put 'employeedb', 'row3', 'finance:empid', 10124
0 row(s) in 0.0260 seconds

Scan the table for all data at once

Use the scan command to list all contents of a table in HBase.

hbase> scan 'employeedb'
ROW    COLUMN+CELL
row1   column=finance:name, timestamp=1403759475114, value=Naveen
row2   column=finance:salary, timestamp=1403759492807, value=20000
row3   column=finance:empid, timestamp=1403759503155, value=10124
3 row(s) in 0.0440 seconds

Get a particular row of data

Use the get command to fetch a single row from a table in HBase.

hbase> get 'employeedb', 'row1'
COLUMN          CELL
finance:name    timestamp=1403759475114, value=Naveen
1 row(s) in 0.0230 seconds

Delete a table

To delete a table in HBase you first have to disable it; only then can you delete it. Use the disable command to disable the table (and the enable command to re-enable it), then the drop command to drop the table.

hbase> disable 'employeedb'
0 row(s) in 1.6270 seconds
hbase> drop 'employeedb'
0 row(s) in 0.2900 seconds

Exit from HBase Shell

Use the quit command to exit from the HBase shell.

hbase> quit

Stopping HBase

To stop HBase, use the stop-hbase.sh shell script in the bin folder.

$ ./bin/stop-hbase.sh
stopping hbase...

Chapter 6

Apache HBase Fully-Distributed Mode

6.1 Requirements

HBase requires that a JDK and Hadoop be installed. See the JDK installation section for Oracle JDK or Open JDK installation, and the Hadoop installation section for Hadoop installation. Ensure the HADOOP_HOME entry is set in .bashrc.

6.2 Fully-Distributed Mode

In fully-distributed mode, the cluster contains multiple nodes, each of which runs one or more HBase daemons. These include primary and backup Master instances, multiple ZooKeeper nodes, and multiple RegionServer nodes. It is well suited for real-world scenarios.

Distributed Mode Sample Architecture

Node Name          Roles
node1.sample.com   Master, ZooKeeper
node2.sample.com   Backup Master, ZooKeeper, RegionServer
node3.sample.com   ZooKeeper, RegionServer

Table 6.1: Distributed Mode Sample Architecture

This guide assumes that all nodes are configured on the same network and have full access to each other, i.e. that no firewall rules have been defined on any node.

Setting up password-less ssh

Node1 must be able to log in to Node2 and Node3 without a password, so we are going to set up password-less SSH login from node1 to each of the others.

On Node1 generate a key pair

Assume that all HBase services are run by the user named ubuntu. Generate the SSH key pair using the following commands:

$ sudo apt-get install ssh
$ ssh-keygen -t rsa -P ""

Note: The generated ssh key pair will be found at /home/ubuntu/.ssh/id_rsa.pub.

Copy the public key to the other nodes

$ ssh-copy-id -i ~/.ssh/id_rsa.pub localhost
$ ssh-copy-id -i ~/.ssh/id_rsa.pub <Node2 ip>
$ ssh-copy-id -i ~/.ssh/id_rsa.pub <Node3 ip>
$ ssh localhost
$ exit
$ ssh <Node2 ip>
$ exit
$ ssh <Node3 ip>
$ exit

Make sure that you are able to log in to all the other nodes without a password.

Configuring Node2 as backup node

Since node2 will run a backup Master, repeat the procedure above, substituting node2 everywhere you see node1. Be sure not to overwrite your existing .ssh/authorized_keys files: concatenate the new key onto the existing file using the >> operator rather than the > operator.

Prepare Node1

Choose a download site from the list of Apache Download Mirrors. Click on the suggested top link. This will take you to a mirror of HBase Releases. Click on the folder named stable and then download the binary file that ends in .tar.gz to your local filesystem. Be sure to choose the version that corresponds with the version of Hadoop you are likely to use later. In most cases, you should choose the file for Hadoop 2, which will be called something like hbase-0.98.3-hadoop2-bin.tar.gz. Do not download the file ending in src.tar.gz for now.

Extract HBase Package

$ tar xzf hbase-0.98.8-hadoop2-bin.tar.gz
$ cd hbase-0.98.8-hadoop2

Set JAVA_HOME in conf/hbase-env.sh

$ vim conf/hbase-env.sh

Uncomment the JAVA_HOME entry in conf/hbase-env.sh and point it to your JDK location, e.g. /opt/jdk1.7.0_51:

export JAVA_HOME=/opt/jdk1.7.0_51

Edit conf/regionservers

Edit conf/regionservers and remove the line containing localhost. Add lines with the hostnames or IP addresses for node2 and node3. Even if you wanted to run a RegionServer on node1, you should refer to it by the hostname the other servers would use to communicate with it, in this case node1.sample.com. This enables you to distribute the configuration to each node of your cluster without any hostname conflicts. Save the file.

Edit conf/hbase-site.xml

Edit conf/hbase-site.xml and add entries for the ZooKeeper data directory and the data directory for HBase. Replace the contents with the contents below. In this case we use HDFS for HBase storage and we are going to run the region servers and ZooKeeper as separate daemons.

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://localhost:9000/hbase</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/home/ubuntu/yarn/hbase_data/zookeeper</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>node1.sample.com,node2.sample.com,node3.sample.com</value>
  </property>
</configuration>

Configure HBase to use node2 as a backup master

Edit or create the file conf/backup-masters and add a new line to it with the hostname for node2. In this demonstration, the hostname is node2.sample.com.

Note: Everywhere in your configuration that you have referred to node1 as localhost, change the reference to point to the hostname that the other nodes will use to refer to node1. In these examples, the hostname is node1.sample.com.

Prepare node2 and node3

Note: node2 will run a backup master server and a ZooKeeper instance.
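Putting the pieces together, the two plain-text config files you will copy to node2 and node3 would look like this (a sketch using the hostnames from Table 6.1):

conf/regionservers:

node2.sample.com
node3.sample.com

conf/backup-masters:

node2.sample.com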

Download and unpack HBase

Download and unpack HBase on node2 and node3, just as you did for the standalone and pseudo-distributed setups.

Copy the configuration files from node1 to node2 and node3

Each node of your cluster needs to have the same configuration information. Copy the contents of the conf/ directory to the conf/ directory on node2 and node3.

Start HBase Cluster

Important: Be sure HBase is not running on any node.

Start HBase by running the shell script bin/start-hbase.sh. ZooKeeper starts first, followed by the master, then the RegionServers, and finally the backup masters. After startup, the jps command should list the Java processes responsible for HBase (HMaster, HRegionServer, HQuorumPeer) on the various nodes.

$ bin/start-hbase.sh

Node1: jps output

5605 HMaster
5826 Jps
5545 HQuorumPeer

Node2: jps output

5605 HMaster
5826 Jps
5545 HQuorumPeer
5930 HRegionServer

Node3: jps output

5826 Jps
5545 HQuorumPeer
5930 HRegionServer

Browse to the Web UI

In HBase versions newer than 0.98.x, the HTTP ports used by the HBase Web UI changed from 60010 for the Master and 60030 for each RegionServer to 16010 for the Master and 16030 for the RegionServer. Once your installation is done properly, you can access the UI for the Master at http://node1.sample.com:60010/ and for the secondary master at http://node2.sample.com:60010/, using a web browser. For debugging, kindly refer to the logs directory.

Get started with HBase Shell

After installing HBase it is time to get started with the HBase shell. Fire up the HBase shell using the bin/hbase shell command.

$ bin/hbase shell

Create a table

Use the create command to create a table. You must specify the table name and a column family as arguments to the create command. The command below creates a table called employeedb with the column family finance.

hbase> create 'employeedb', 'finance'
0 row(s) in 1.2200 seconds

List tables

Use the list command to list tables in HBase.

hbase> list 'employeedb'
TABLE
employeedb
1 row(s) in 0.0350 seconds
=> ["employeedb"]

Insert data into the table

Use the put command to insert data into a table in HBase.

hbase> put 'employeedb', 'row1', 'finance:name', 'Naveen'
0 row(s) in 0.1770 seconds
hbase> put 'employeedb', 'row2', 'finance:salary', 20000
0 row(s) in 0.0160 seconds
hbase> put 'employeedb', 'row3', 'finance:empid', 10124
0 row(s) in 0.0260 seconds

Scan the table for all data at once

Use the scan command to list all contents of a table in HBase.

hbase> scan 'employeedb'
ROW    COLUMN+CELL
row1   column=finance:name, timestamp=1403759475114, value=Naveen
row2   column=finance:salary, timestamp=1403759492807, value=20000
row3   column=finance:empid, timestamp=1403759503155, value=10124
3 row(s) in 0.0440 seconds

Get a particular row of data

Use the get command to fetch a single row from a table in HBase.

hbase> get 'employeedb', 'row1'
COLUMN          CELL
finance:name    timestamp=1403759475114, value=Naveen
1 row(s) in 0.0230 seconds

Delete a table

To delete a table in HBase you first have to disable it; only then can you delete it. Use the disable command to disable the table (and the enable command to re-enable it), then the drop command to drop the table.

hbase> disable 'employeedb'
0 row(s) in 1.6270 seconds
hbase> drop 'employeedb'
0 row(s) in 0.2900 seconds

Exit from HBase Shell

Use the quit command to exit from the HBase shell.

hbase> quit

Stopping HBase

To stop HBase, use the stop-hbase.sh shell script in the bin folder.

$ ./bin/stop-hbase.sh
stopping hbase...

Chapter 7

Apache Hive Installation

7.1 Hive Installation

The Apache Hive data warehouse software facilitates querying and managing large datasets residing in distributed storage.

7.1.1 Requirements

Hive requires that a JDK and Hadoop be installed. See the JDK installation section for Oracle JDK or Open JDK installation, and the Hadoop installation section for Hadoop installation.

7.1.2 Installation Guide

Download the Hive package from the hive.apache.org site and extract it using the following commands. In this installation we use the default Derby database as the metastore.

Extract Hive Package

The binary tarball unpacks to apache-hive-0.14.0-bin, so we rename it to match the HIVE_HOME path used below:

$ tar xzf apache-hive-0.14.0-bin.tar.gz
$ mv apache-hive-0.14.0-bin hive-0.14.0
$ cd hive-0.14.0

Add entries for HADOOP_HOME and HIVE_HOME in .profile or .bashrc

For adding the .bashrc entries, follow these steps:

$ cd
$ vim .bashrc

Add these lines at the end of the .bashrc file:

export JAVA_HOME=/opt/jdk1.7.0_51
export HADOOP_HOME=/home/ubuntu/hadoop-2.5.1
export HIVE_HOME=/home/ubuntu/hive-0.14.0
export PATH=$PATH:$HOME/bin:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HIVE_HOME/bin

Press Esc, then type :wq to save and quit the vim editor.

Note: It is assumed that Hadoop is installed in the home directory of the ubuntu user (i.e. /home/ubuntu/hadoop-2.5.1) and Hive is installed in the home directory of the ubuntu user (i.e. /home/ubuntu/hive-0.14.0).

$ source .bashrc
$ java -version

Console output:

java version "1.7.0_51"
Java(TM) SE Runtime Environment (build 1.7.0_51-b13)
Java HotSpot(TM) 64-Bit Server VM (build 24.51-b03, mixed mode)

Start Hadoop Cluster

$ $HADOOP_HOME/sbin/start-all.sh

Run jps to verify the Hadoop daemons

$ jps

Get started with Hive Shell

After installing Hive it is time to get started with the Hive shell. Fire up the Hive shell using the bin/hive command.

$ $HIVE_HOME/bin/hive

Some sample commands

Try out some basic commands listed below; for detailed documentation kindly refer to the Apache documentation at hive.apache.org.

:> CREATE DATABASE my_hive_db;
:> DESCRIBE DATABASE my_hive_db;
:> USE my_hive_db;
:> DROP DATABASE my_hive_db;
:> exit;

Exit from Hive Shell

Use the quit command to exit from the Hive shell.

:> quit;
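As a slightly fuller sketch of working with Hive, the statements below create a table over comma-separated data and query it. The employee.csv layout (eid, emp_name, country, salary) is borrowed from the Pig chapter that follows, so treat the local path and schema as assumptions for illustration:

:> CREATE TABLE employee (eid INT, emp_name STRING, country STRING, salary INT)
   ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';
:> LOAD DATA LOCAL INPATH '/home/ubuntu/Downloads/employee.csv' INTO TABLE employee;
:> SELECT emp_name, salary FROM employee WHERE country = 'China';

Hive compiles the SELECT into a MapReduce job and prints the matching rows to the console.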

Chapter 8

Apache Pig Installation

8.1 Pig Installation

Apache Pig is a platform for analyzing large data sets. Pig can be run in a distributed fashion on a cluster. Pig provides a high-level language called Pig Latin for expressing data analysis programs, and is similar to Hive in that it provides data warehouse facilities. Using Pig Latin we can analyze, filter and extract data sets; Pig internally converts Pig Latin commands into MapReduce jobs and executes them against HDFS to retrieve data sets.

8.1.1 Requirements

Pig requires that a JDK and Hadoop be installed. See the JDK installation section for Oracle JDK or Open JDK installation, and the Hadoop installation section for Hadoop installation. Ensure the HADOOP_HOME entry is set in .bashrc.

8.1.2 Installation Guide

Download the Pig package from the pig.apache.org site and extract it using the following commands.

Extract Pig Package

$ tar xzf pig-0.14.0.tar.gz
$ cd pig-0.14.0

Add an entry for PIG_HOME in .profile or .bashrc

For adding the .bashrc entry, kindly follow the .bashrc entry section.

Start Hadoop Cluster

$ $HADOOP_HOME/sbin/start-all.sh

Run jps to verify the Hadoop daemons

$ jps

Execution Modes

Local Mode: To run Pig in local mode, you need access to a single machine; all files are installed and run using your local host and file system. Specify local mode using the -x flag (pig -x local).

Mapreduce Mode: To run Pig in mapreduce mode, you need access to a Hadoop cluster and an HDFS installation. Mapreduce mode is the default mode; you can, but don't need to, specify it using the -x flag (pig or pig -x mapreduce).

Get started with Pig Shell

After installing Pig it is time to get started with the Pig shell. Fire up the Pig shell using the bin/pig command.

$ $PIG_HOME/bin/pig

Some example Pig commands

Try out some basic commands listed below; for detailed documentation kindly refer to the Apache documentation at pig.apache.org. Invoke the Grunt shell by typing the "pig" command (in local or hadoop mode). Then enter Pig Latin statements interactively at the grunt prompt (be sure to include the semicolon after each statement). The DUMP operator displays the results on your terminal screen; the STORE operator stores the results in HDFS.

Note: For the commands below, kindly download the employee dataset and the manager dataset from the links given, then upload them to HDFS as /dataset/employee.csv and /dataset/manager.csv.

Upload datasets to HDFS:

$ $HADOOP_HOME/bin/hadoop dfs -copyFromLocal ~/Downloads/employee.csv /dataset/employee.csv
$ $HADOOP_HOME/bin/hadoop dfs -copyFromLocal ~/Downloads/manager.csv /dataset/manager.csv

Select employee id and name from the employee dataset:

grunt> A = load '/dataset/employee.csv' using PigStorage(',');
grunt> E = foreach A generate $0, $1;
grunt> dump E;

select * from employee where country = 'China':

grunt> A = load '/dataset/employee.csv' using PigStorage(',') as (eid:int, emp_name:chararray, country:chararray, salary:int);
grunt> F = filter A by country == 'China';
grunt> dump F;
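Extending the script above, a short sketch that aggregates the same relation, counting employees per country and storing the result back into HDFS (the output path /dataset/employee_counts is an arbitrary choice for illustration):

grunt> A = load '/dataset/employee.csv' using PigStorage(',') as (eid:int, emp_name:chararray, country:chararray, salary:int);
grunt> G = group A by country;
grunt> C = foreach G generate group as country, COUNT(A) as emp_count;
grunt> dump C;
grunt> store C into '/dataset/employee_counts' using PigStorage(',');

Here GROUP collects all employee tuples per country into a bag, and COUNT(A) counts the tuples in each bag; STORE writes one part file per reducer under the given directory.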