Comparative Study of Hive and Map Reduce to Analyze Big Data


IJCST Vol. 6, Issue 3, July-Sept 2015, ISSN: 0976-8491 (Online), ISSN: 2229-4333 (Print)

1 Nisha Bhardwaj, 2 Dr. Balkishan, 3 Dr. Anubhav Kumar
1,2 Dept. of Computer Science & Applications, Maharshi Dayanand University, Rohtak, Haryana, India
3 Head, CSE Dept., Lingaya's GVKS IMT, Faridabad, Haryana, India

Abstract
Big data is the combination of large datasets, and managing such large datasets is very difficult, so new techniques are required to handle them. The challenge is to collect or extract the data from multiple sources, process or transform it according to the analytical need, and then load it for analysis; this process is known as Extract, Transform & Load (ETL). In this research paper, Hadoop is first implemented in pseudo-distributed mode and Hive is then implemented on top of Hadoop to analyze a large dataset. The data comes from the Book-Crossing dataset, of which only the BX-Books.csv file is used. Over this dataset a query is executed by running Hive on the command line to calculate the number of books published each year. The Hive code is then compared with the corresponding MapReduce code, and the paper finally shows how Hive is better than MapReduce.

Keywords
Hadoop, Hive, Map Reduce, Hadoop Distributed File System

I. Introduction
Apache Hadoop is openly available software which allows the processing of massive or extremely large datasets in a completely distributed environment. In 2003 and 2004, Google employees published two papers describing a method for handling large amounts of data distributed over different places. Inspired by these papers, Doug Cutting created a distributed computing framework which is known as Hadoop [1]. Basically, Hadoop consists of two components: the Hadoop Distributed File System (HDFS) and MapReduce. HDFS is designed to store very large files as sequences of blocks on data nodes spread across the machines of a large cluster. The basic components of HDFS are the data node and the name node. The application's data are stored on data nodes: files uploaded to HDFS are broken down into many blocks, and these blocks are stored on different data nodes. A data node performs operations such as creating, deleting and replacing blocks whenever it gets an instruction from the name node. The name node stores all the metadata and, along with storing the metadata, acts as a master node which coordinates the activities of the data nodes. MapReduce likewise has two basic components, the job tracker and the task tracker. The job tracker monitors and schedules the execution of jobs among the different task trackers and maintains the track of progress of job execution; each task tracker handles the tasks assigned to it and is responsible for serving requests from the job tracker [2].
Hadoop is widely used by the world's largest online companies, among them IBM, Amazon, Facebook, LinkedIn, Google and Yahoo, and the number of Hadoop users is increasing day by day. Media reports state that Yahoo uses this facility effectively, maintaining 38 thousand machines distributed among 20 different clusters [1]. Hadoop provides distributed processing at a very low cost, which makes it a complement to traditional systems.
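To make the HDFS storage model described above concrete, the following shell sketch copies a file into HDFS and then asks the name node how the file was split into blocks and where the replicas live (the /data path is only illustrative and is not part of the original paper):

hadoop fs -mkdir -p /data
hadoop fs -put BX-Books.csv /data/
hdfs fsck /data/BX-Books.csv -files -blocks -locations

The fsck report lists each block of the file together with the data nodes holding its replicas, which is exactly the metadata the name node maintains.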
Hadoop offers several benefits when extracting value from large datasets:
1. Precision: only proven algorithms and machine learning techniques are applied, to give the best results.
2. Scalability: it scales with the data as the number of users grows day by day and problems become more complex.
3. Speed: it helps in analyzing large and complex data very fast.
4. Interactivity: it supports multiple users, which in turn increases productivity.

A popularly known data warehouse for Hadoop is Hive. It provides different types of queries to analyze huge datasets; it runs on top of Apache Hadoop and uses HDFS for data storage. The Apache Hadoop framework normally requires an approach different from traditional programming, namely writing MapReduce-based programs. Hive is one of the techniques that removes this requirement: using it, developers do not need to write MapReduce at all, because Hive provides application developers with a SQL-like query language known as HiveQL. Due to its SQL-like interface, Hive has become a popular technology of choice for using Hadoop [2].

II. Flowchart

A. Description of Flowchart
First of all, install Java on the Ubuntu operating system, then install and implement Hadoop in pseudo-distributed mode. After the complete, successful installation of Hadoop, check whether all daemons, i.e. namenode, secondary namenode, datanode, resource manager and node manager, are working correctly; their web interfaces can also be checked. If everything works fine, move on to downloading the Book-Crossing dataset, which is available online at http://www2.informatik.uni-freiburg.de/~cziegler/bx/.

Fig. 1: Flowchart of Hive

This database is composed of three tables, described below.

B. BX-Users
It contains information about users, including user IDs (User-ID), which are mapped to integers. If any demographic data is available, it is also provided (Location, Age); otherwise the field contains NULL values.

C. BX-Books
It contains the complete information of books, which are identified by ISBN number. It also contains other information, which includes booktitle, bookauthor, yearofpublication, publisher, imageurls, imageurlm and imageurll. Images are available in three flavours, i.e. small, medium and large.

D. BX-Book-Ratings
It contains the information on book ratings. The ratings can be expressed explicitly or implicitly.

Of the above three tables, this paper uses only the BX-Books dataset. After downloading the dataset, Hive is installed and implemented and a query is performed on it from the command line by running Hive. After that, the paper presents the comparative study of Hive and MapReduce and finally shows how Hive is better than MapReduce for performing data analysis.

III. Database Description
The database taken for this research is the Book-Crossing dataset. It was gathered by Cai-Nicolas Ziegler with the permission of Ron Hornbaker, CTO of Humankind Systems. The database consists of 278,858 users and about 1,149,780 implicit and explicit ratings of about 271,379 books, and it is available online for research. It contains three tables:
BX-Users
BX-Books
BX-Book-Ratings
For this research only the BX-Books table is used. In it, books have their ISBN numbers, booktitles, bookauthors, yearofpublication and publishers; it also includes three different flavours of image URLs, namely imageurl-s, imageurl-m and imageurl-l.

IV. Implementation Details
The implementation of Hive has the following prerequisites:
The latest stable build of Hadoop.
Java installed.
Knowledge of Java and Linux.
The Book-Crossing dataset to perform the analysis on (available online at http://www2.informatik.uni-freiburg.de/~cziegler/bx/).

For the implementation we first require Hadoop installed in standalone mode and then in pseudo-distributed mode. By default, after downloading, Hadoop is configured in standalone mode and can be run as a single Java process. The prerequisites for the Hadoop installation are:
Either Linux or Windows.
Java.
ssh (secure shell), to run start/stop/status and other such scripts across the cluster.
In this implementation we run hadoop-2.7.0 on ubuntu-14.04.2. The complete procedure is explained below.

A. Hadoop Installation and Implementation
1. First install Java:
nish@nish-virtualbox:~$ sudo apt-get install default-jdk
then check it using the command:
nish@nish-virtualbox:~$ java -version

2. Add a dedicated Hadoop user as shown below:
nish@nish-virtualbox:~$ sudo addgroup hadoop
nish@nish-virtualbox:~$ sudo adduser --ingroup hadoop hduser
Creating home directory /home/hduser ..
Copying files from /etc/skel ..
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
Changing the user information for hduser.

3. Now install ssh:
nish@nish-virtualbox:~$ sudo apt-get install ssh

4. At this step, log in as hduser to create and set up the ssh certificates. This is necessary for performing different operations on the Hadoop cluster. For the authentication of Hadoop users we have to generate public/private key pairs and share them with the different users. The key is created as shown below:
hduser@nish-virtualbox:~$ ssh-keygen -t rsa -P ""
Here we make the created key an authorized key; by doing this, Hadoop can use the secure shell without a password:
hduser@nish-virtualbox:~$ cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys
Check that hduser can log in to the localhost:
hduser@nish-virtualbox:~$ ssh localhost

5. Download the Hadoop tar file and extract it. Then, after adding hduser to the sudoers, move Hadoop to the local directory /usr/local/hadoop so that it is available to all users. The following configuration files have to be set up to complete the Hadoop setup.

(i) .bashrc
Before editing the .bashrc file, we have to find out the path of Java so that we can set the JAVA_HOME environment variable:
hduser@nish-virtualbox:~$ update-alternatives --config java
Now open .bashrc, add the required Hadoop environment variables (a typical set is sketched below), and source it:
hduser@nish-virtualbox:~$ sudo vi .bashrc
hduser@nish-virtualbox:~$ source .bashrc
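The exact export lines are not reproduced in the extracted text; for a Hadoop 2.7.0 tree unpacked under /usr/local/hadoop with OpenJDK 7, a typical set of .bashrc additions would look roughly as follows (a sketch under those assumptions, not the paper's verbatim configuration):

export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
export HADOOP_HOME=/usr/local/hadoop
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin

After sourcing the file, running hadoop version from any directory should succeed, confirming that the PATH entries took effect.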

(ii) Make changes to the hadoop-env.sh file
In this file we have to define the current JAVA_HOME path, according to our Java installation path. Open it using the command:
hduser@nish-virtualbox:~$ sudo vi /usr/local/hadoop/etc/hadoop/hadoop-env.sh
then add these lines:
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
export HADOOP_OPTS="$HADOOP_OPTS -Djava.net.preferIPv4Stack=true -Djava.library.path=$HADOOP_PREFIX/lib/native"

(iii) Make changes to the core-site.xml file
This file has the information regarding the port number, the file system memory, the memory limits for data storage and the size of the read/write buffers. Open the core-site file and add the following properties to it:
<name>hadoop.tmp.dir</name> <value>app/hadoop/tmp</value>
<name>fs.default.name</name> <value>hdfs://localhost:54310</value>

(iv) Make changes to the mapred-site.xml file
This file specifies the MapReduce framework which is currently in use. It is not present in the Hadoop configuration directory, so we first need to copy it from mapred-site.xml.template to mapred-site.xml:
hduser@nish-virtualbox:~$ cp /usr/local/hadoop/etc/hadoop/mapred-site.xml.template /usr/local/hadoop/etc/hadoop/mapred-site.xml
Open the mapred file and then add the following property to it:
<name>mapreduce.jobtracker.address</name> <value>localhost:54311</value>

(v) Make changes to the hdfs-site.xml file
This file contains the information regarding the data replication value and the paths of the name node and the data node, so that the file system locations can be found easily. Open the file and add the properties to it. Note: the properties of this file are user-defined, so they can be changed to match our Hadoop infrastructure.
<name>dfs.replication</name> <value>3</value>
<name>dfs.namenode.name.dir</name> <value>file:/usr/local/hadoop_store/hdfs/namenode</value>
<name>dfs.datanode.data.dir</name> <value>file:/usr/local/hadoop_store/hdfs/datanode</value>

(vi) Make changes to the yarn-site.xml file
Using this file we can configure YARN. Open this file and add the following property to it:
<name>yarn.nodemanager.aux-services</name> <value>mapreduce_shuffle</value>

(6) Now verify the Hadoop installation by formatting the name node and starting the daemons:
hduser@nish-virtualbox:~$ hdfs namenode -format
hduser@nish-virtualbox:~$ start-dfs.sh
hduser@nish-virtualbox:~$ start-yarn.sh
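The extracted text keeps only the <name>/<value> pairs; in the actual configuration files each pair sits inside a <property> element within the top-level <configuration> element. As an illustration, a complete core-site.xml mirroring the values listed above would look roughly like this (a sketch, not the paper's verbatim file):

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <!-- Base directory for Hadoop's temporary files -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>app/hadoop/tmp</value>
  </property>
  <!-- Default file system: the HDFS name node on port 54310 -->
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:54310</value>
  </property>
</configuration>

The mapred-site.xml, hdfs-site.xml and yarn-site.xml files follow the same structure, with their respective properties substituted in.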

(7) Hadoop also provides web interfaces, so we can also access it from a browser. The port number for accessing Hadoop is 50070; using http://localhost:50070 we can reach its services in the browser.

B. Hive Installation and Implementation
1. First, download a recent version of Hive from the Apache mirrors; here we download hive-1.0.1. After downloading, extract it and move it to /usr/local/hive so that it is available to all users, using the following commands:
nish@nish-virtualbox:~$ sudo tar xzf apache-hive-1.0.1-bin.tar.gz
nish@nish-virtualbox:~$ sudo mv apache-hive-1.0.1-bin /usr/local/hive

2. Then log in as hduser to make changes to the .bashrc file and source it:
hduser@nish-virtualbox:~$ sudo vi .bashrc
hduser@nish-virtualbox:~$ source .bashrc
Add the following lines to the bash file:
export HIVE_HOME=/usr/local/hive
export HIVE_CONF_DIR=$HIVE_HOME/conf
export HIVE_CLASSPATH=$HIVE_CONF_DIR
export PATH=$HIVE_HOME/bin:$PATH

3. Now start both dfs and yarn and check that all nodes have started, using the jps command:
hduser@nish-virtualbox:~$ start-dfs.sh
hduser@nish-virtualbox:~$ start-yarn.sh
hduser@nish-virtualbox:~$ jps

4. Here we must create two directories, /tmp and /user/hive/warehouse, in HDFS and give them the appropriate permissions:
hduser@nish-virtualbox:~$ hadoop fs -mkdir -p /tmp
hduser@nish-virtualbox:~$ hadoop fs -mkdir -p /user/hive/warehouse
hduser@nish-virtualbox:~$ hadoop fs -chmod g+w /tmp
hduser@nish-virtualbox:~$ hadoop fs -chmod g+w /user/hive/warehouse

5. Here we create a directory /usr/local/hive/data and download the BX-CSV-Dump.zip file on which we perform the analysis. Then we copy this .zip file to the newly created directory, where we unzip it:
hduser@nish-virtualbox:~$ sudo mkdir /usr/local/hive/data
hduser@nish-virtualbox:~$ sudo cp BX-CSV-Dump.zip /usr/local/hive/data
hduser@nish-virtualbox:~$ sudo unzip BX-CSV-Dump.zip
It contains three tables, but we use only BX-Books.csv for our research work.

6. Log in to the root account and cleanse the data using the sed command as follows:
root@nish-virtualbox:~$ sed 's/&amp;/\&/g' BX-Books.csv | sed -e '1d' | sed 's/;/$$$/g' | sed 's/"$$$"/";"/g' | sed 's/"//g' > BX-BooksCorrected1.txt
The above command does the following:
It removes &amp; and puts & in its place.
It removes the header line.
It replaces ; with $$$.
It replaces "$$$" with ";".
It removes all " characters.

7. Now exit the root account and log in as hduser, where we create an input directory in Hadoop and copy the cleansed file there:
hduser@nish-virtualbox:~$ hadoop fs -mkdir -p input
hduser@nish-virtualbox:~$ hadoop fs -put /usr/local/hive/data/BX-BooksCorrected1.txt input

8. Now type the hive command and run Hive using the command line. Here we implement the query with which we analyze the data to find the number of books published each year. Create a table bxdataset, load data into the table from BX-BooksCorrected1.txt and then perform the query over it, as shown below:
hive> create table if not exists bxdataset(isbn string, booktitle string, bookauthor string, yearofpublication string, publisher string) row format delimited fields terminated by '\;' stored as textfile;
hive> load data inpath '/user/hduser/input/BX-BooksCorrected1.txt' overwrite into table bxdataset;
hive> select yearofpublication, count(booktitle) from bxdataset group by yearofpublication;

C. Output
The result of the query, the count of books published in each year, was shown in the original paper as a screenshot of the Hive console.
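For repeatability, the same analysis can also be run non-interactively from the shell instead of the hive> prompt. A small sketch (the script name freq_by_year.hql is illustrative, not from the paper):

# run the statement directly from the shell
hive -e "select yearofpublication, count(booktitle) from bxdataset group by yearofpublication;"
# or place the same statement in a script file and run it
hive -f freq_by_year.hql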

V. Comparison of Hive With MapReduce Code
If the same job is done using MapReduce, which is based on Java, then the code is as follows.

Write the map method as:

package com.bookx;

import java.io.IOException;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapred.*;

public class BooksXMapper extends MapReduceBase implements Mapper<LongWritable, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);

    @Override
    public void map(LongWritable key, Text value, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
        // Split the record on the ";" field separator; field 3 is yearofpublication
        String temp = value.toString();
        String[] singleBookData = temp.split("\";\"");
        output.collect(new Text(singleBookData[3]), one);
    }
}

Write the reduce method as:

package com.bookx;

import java.io.IOException;
import java.util.Iterator;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapred.*;

public class BooksXReducer extends MapReduceBase implements Reducer<Text, IntWritable, Text, IntWritable> {

    @Override
    public void reduce(Text key, Iterator<IntWritable> values, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
        // Sum the per-record counts to get the frequency for each publication year
        int frequencyForYear = 0;
        while (values.hasNext()) {
            IntWritable value = values.next();
            frequencyForYear += value.get();
        }
        output.collect(key, new IntWritable(frequencyForYear));
    }
}
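Together with the driver shown next, these two classes are compiled against the Hadoop client libraries, packaged into a jar and submitted to the cluster. A typical build-and-run sequence (the jar name bookx.jar and the input/output paths are illustrative, not from the paper) would be:

# compile the three classes against the Hadoop client libraries
mkdir -p classes
javac -classpath "$(hadoop classpath)" -d classes com/bookx/*.java
jar cf bookx.jar -C classes .
# submit the job: args[0] is the HDFS input path, args[1] the output path
hadoop jar bookx.jar com.bookx.BookxDriver input output
hadoop fs -cat output/part-00000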

Main function to run the MapReduce job:

package com.bookx;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapred.*;

public class BookxDriver {
    public static void main(String[] args) {
        JobClient client = new JobClient();
        JobConf newconf = new JobConf(com.bookx.BookxDriver.class);
        newconf.setJobName("BookCrossing 1.0");
        newconf.setOutputKeyClass(Text.class);
        newconf.setOutputValueClass(IntWritable.class);
        newconf.setMapperClass(com.bookx.BooksXMapper.class);
        newconf.setReducerClass(com.bookx.BooksXReducer.class);
        newconf.setInputFormat(TextInputFormat.class);
        newconf.setOutputFormat(TextOutputFormat.class);
        FileInputFormat.setInputPaths(newconf, new Path(args[0]));
        FileOutputFormat.setOutputPath(newconf, new Path(args[1]));
        client.setConf(newconf);
        try {
            JobClient.runJob(newconf);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

From the code given above it is clear that Hive takes less effort, because it reduces the programming work and, in turn, the learning and writing of MapReduce code. If this code is compared with the Hive code, Hive reduces the coding effort considerably: it requires only three lines of code, whereas MapReduce requires approximately forty-six lines.

VI. Conclusion
In this research paper, Hive is implemented to analyze a big dataset. It reduces the lines of code in comparison to the MapReduce code. The motive of this paper is to find which of MapReduce and Hive is better. From the analysis it is concluded that Hive is better, for the following reasons: first, Hive reduces the code to 3 lines, whereas MapReduce requires roughly 46 lines; second, comparing their response times, Hive takes only 7.989 seconds. So it is clear that Hive is better than MapReduce.

References
[1] Andrew Becherer, "Hadoop Security Design", iSEC Partners, Inc.
[2] Hrishikesh Karambelkar, "Scaling Big Data with Hadoop and Solr", Birmingham B3 2PB, UK, 2013.
[3] "Big Data: Getting Started with Hadoop, Sqoop & Hive", [Online] Available: http://cloudacademy.com/blog/big-data-getting-started-with-hadoop-sqoop-hive/
[4] David Floyer, "Enterprise Big-data", [Online] Available: http://wikibon.org/wiki/v/Enterprise_Big-data
[5] Benjamin Bengfort, Jenny Kim, "Creating a Hadoop Pseudo-Distributed Environment", [Online] Available: https://districtdatalabs.silvrback.com/creating-a-hadoop-pseudo-distributed-environment
[6] Koen Vlaswinkel, "How to install Java on Ubuntu with Apt-Get", [Online] Available: https://www.digitalocean.com/community/tutorials/how-to-install-java-on-ubuntu-with-apt-get
[7] Mathias Kettner, "SSH login without password", [Online] Available: http://www.linuxproblem.org/art_9.html
[8] Dustin Kirkland, "Ubuntu manuals", [Online] Available: http://manpages.ubuntu.com/manpages/raring/man1/nvi.1.html
[9] Varad Meru, "Single-Node hadoop tutorial", [Online] Available: http://www.orzota.com/single-node-hadoop-tutorial/
[10] Michael G. Noll, "Running Hadoop on Ubuntu Linux (Single-Node Cluster)", [Online] Available: http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
[11] "Hadoop 2.6 Installing on Ubuntu 14.04 (Single-Node Cluster)", 2015. [Online] Available: http://www.bogotobogo.com/Hadoop/BigData_hadoop_Install_on_ubuntu_single_node_cluster.php
[12] Apache Hadoop, "Single Node Setup", [Online] Available: https://hadoop.apache.org/docs/r1.2.1/single_node_setup.html
[13] "Apache Hadoop (CDH 5) Hive Introduction", 2015. [Online] Available: http://www.bogotobogo.com/Hadoop/BigData_hadoop_CDH5_Hive_Introduction.php
[14] Oshin Prem, "HIVE Installation", [Online] Available: http://doctuts.readthedocs.org/en/latest/hive.html
[15] Edureka, "Apache Hive Installation on Ubuntu", [Online] Available: http://www.edureka.co/blog/apache-hive-installation-on-ubuntu/
[16] "Hive Installation On Ubuntu", [Online] Available: https://archanaschangale.wordpress.com/2013/08/24/hive-installation-on-ubuntu/
[17] Safari, "Programming Hive", [Online] Available: https://www.safaribooksonline.com/library/view/programming-hive/9781449326944/ch04.html
[18] "Hive Introduction", [Online] Available: http://www.tutorialspoint.com/hive/hive_introduction.htm
[19] Jason Dere, "LanguageManual DDL", [Online] Available: https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL
[20] Hadoop material, [Online] Available: http://www.hadoopmaterial.com/2013/10/find-frequency-of-books-published-in.html

Nisha Bhardwaj received the B.Tech degree in Information Technology with honours from L.F.MVN.IET, affiliated to MDU Rohtak, and is now pursuing the M.Tech degree in Computer Science from Maharshi Dayanand University, Rohtak. Her interests lie in research, in particular in big data and software.

Dr. Balkishan received the MCA and Ph.D degrees. He is an Assistant Professor in the Department of Computer Science & Applications, Maharshi Dayanand University, Rohtak, and is actively engaged in research work.

Dr. Anubhav Kumar received the Ph.D degree in Computer Science Engineering from the School of Computer and System Sciences, Jaipur National University, Jaipur. He has over 9 years of teaching experience and has authored or co-authored almost 45 research papers in national and international journals and conferences. His current areas of research include ERP, KM, Web Usage Mining and 3D Animation. He is a Senior Member of numerous academic and professional bodies, such as IEEE, IEI, ISCA, WASET, IAENG Hong Kong, IACSIT Singapore, UACEE UK and the Association for Computing Machinery (ACM), New York. He is also a reviewer and editorial board member of many international journals, such as IJRECE, IJCAT, IJCDS, IJMR, IJMIT and IJCT. Besides this, he is guiding a number of M.Tech and Ph.D scholars in the area of Computer Science Engineering.