A Survey of Applications of Big Data Hadoop's MapReduce Technique and its Limitations




Ch. Srilakshmi
Asst. Professor, Department of Information Technology
R.M.D Engineering College, Kavaraipettai, Tamil Nadu, Chennai

Abstract: We are living in an age when an explosive amount of data is generated every day. Data from sensors, mobile devices, social networking websites, scientific experiments and enterprises all contribute to this huge explosion, which has created a vast volume of data in the last few years. Big Data, i.e. such large chunks of data, has become one of the hottest research trends today. Research suggests that tapping the potential of this data can benefit businesses, scientific disciplines and the public sector, contributing to economic gains as well as development in every sphere. The need is to develop efficient systems that can exploit this potential to the maximum. In Big Data processing, the MapReduce technique has been one of the main approaches used to meet the continuously increasing demands on computing resources imposed by massive data sets. The MapReduce paradigm uses massively parallel and distributed execution over a large number of computing nodes.

Keywords: Hadoop MapReduce, parallel and distributed computing

I. INTRODUCTION

Nowadays, dealing with datasets on the order of terabytes or even petabytes is a reality, so processing such big datasets efficiently is a necessity. Hadoop MapReduce is a big data processing framework that has rapidly become the de facto standard, in both industry and academia, for processing large-scale data. The main reasons for this popularity are the ease of use, scalability and parallelism of Hadoop MapReduce.

A. HADOOP

Hadoop is an open-source software framework for storing and processing big data in a distributed fashion on large clusters of commodity hardware. Essentially, it accomplishes two tasks: massive data storage and fast processing of that data. Hadoop has three core components:

HDFS - the Java-based distributed file system that can store all kinds of data without prior organization.
MapReduce - a software programming model for processing large sets of data in parallel.
YARN - a resource management framework for scheduling and handling resource requests from distributed applications.

Other components are:

Pig - a platform for manipulating data stored in HDFS. It consists of a compiler for MapReduce programs and a high-level language called Pig Latin, and it provides a way to perform data extraction, transformation and loading, as well as basic analysis, without having to write MapReduce programs.
Hive - a data warehousing and SQL-like query language that presents data in the form of tables. Hive programming is similar to database programming. (It was initially developed by Facebook.)
HBase - a nonrelational, distributed database that runs on top of Hadoop. HBase tables can serve as input and output for MapReduce jobs.
Zookeeper - an application that coordinates distributed processes.
Ambari - a web interface for managing, configuring and testing Hadoop services and components.
Flume - software that collects, aggregates and moves large amounts of streaming data into HDFS.
Sqoop - a connection and transfer mechanism that moves data between Hadoop and relational databases.
Oozie - a Hadoop job scheduler.

Fig. 1 - Hadoop framework

B. HADOOP's MAPREDUCE

Hadoop MapReduce is the most popular open-source implementation of the MapReduce framework proposed by Google. The essence of Hadoop MapReduce is that users only have to define the map and reduce functions; the framework takes care of everything else, such as parallelization and scalability. A Hadoop MapReduce job thus mainly consists of two user-defined functions: map and reduce. The input of a Hadoop MapReduce job is a set of key-value pairs (k1, v1), and the map function is called for each of these pairs. The map function produces zero or more intermediate key-value pairs (k2, v2). The framework then groups these intermediate key-value pairs by the intermediate key k2 and calls the reduce function for each group. Finally, the reduce function produces zero or more aggregated results.

Map job: The map job takes a set of data and converts it into another set of data in which individual elements are broken down into tuples (key/value pairs). The map function takes one pair of data with a type in one data domain and returns a list of pairs in a different domain, producing zero or more intermediate key-value pairs. The map function is applied in parallel to every pair in the input dataset, yielding a list of pairs for each input pair. The MapReduce framework then collects all pairs with the same key from all lists and groups them together, creating one group for each key.

Map(k1, v1) → list(k2, v2)

Reduce job: The reduce job takes the output of the map job as input and combines those data tuples into a smaller set of tuples. The reduce function is applied in parallel to each group, which in turn produces a collection of values in the same domain. Each reduce call typically produces either one value v3 or an empty return, though a call is allowed to return more than one value. The returns of all calls are collected as the desired result list.

Reduce(k2, list(v2)) → list(v3)

As the name MapReduce implies, the reduce job is always performed after the map job. Fig. 2 shows the MapReduce framework.

Fig. 2 - Overview of the MapReduce framework

C. CHOOSING BETWEEN M AND R

Let M be the number of map tasks and R the number of reduce tasks. Small values of M and R are inefficient, since the degree of parallelism, and hence the speed of processing, is low. Large values of M and R split the work into many smaller tasks, which has the advantage of enabling easier load balancing and faster recovery from failed machines (nodes). The drawback is the extra overhead this causes: the master node must make O(M + R) scheduling decisions and hold O(M * R) states in memory, and the setup cost also increases. Hence it is better to choose M so that each input split is about 64 MB, and to choose R as a small multiple of the number of workers; alternatively, R can be chosen smaller than the number of workers so that the reduce phase finishes in one wave. For example, with 400 GB of input and 64 MB splits, M would be about 6,400, and with 100 worker machines R might be set to 200, or to somewhat fewer than 100 for a single reduce wave.

D. AN EXAMPLE OF MAPREDUCE

Assume you have five files, and each file contains two columns - in Hadoop terms, a key and a value - that represent a city and the temperature recorded in that city on various measurement days. This example is kept very simple so it is easy to follow; you can imagine that a real application would contain millions or even billions of rows.

Delhi, 31
Mumbai, 32
Chennai, 33
Calcutta, 32
Delhi, 24
Calcutta, 34
Chennai, 38
Delhi, 27
Calcutta, 33
Chennai, 37

The goal is to find, out of all the data collected, the maximum temperature for each city across all of the data files (note that each file might contain the same city multiple times). Using the MapReduce framework, we can break this down into five map tasks, where each mapper works on one of the five files, goes through the data and returns the maximum temperature for each city in that file. The results produced by the mapper task for the above data would look like this:

(Delhi, 31) (Mumbai, 32) (Chennai, 38) (Calcutta, 34)

Assuming the other four mappers produce similar per-file results, all of these intermediate pairs are then fed to the reduce tasks, which combine them and output a single maximum temperature for each city across all five files.
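To ground this example, below is a minimal sketch of how such a job might be written against Hadoop's Java MapReduce API. It is our own illustration, not code from the paper: the class names (MaxTemperature and so on) and the comma-separated "City, temperature" input format are assumptions.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class MaxTemperature {

    // Map(k1, v1) -> list(k2, v2): the input is (file offset, line of text),
    // and the output is one (city, temperature) pair per "City, temp" line.
    public static class MaxTemperatureMapper
            extends Mapper<Object, Text, Text, IntWritable> {
        @Override
        protected void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            String[] parts = value.toString().split(",");
            if (parts.length == 2) {
                context.write(new Text(parts[0].trim()),
                        new IntWritable(Integer.parseInt(parts[1].trim())));
            }
        }
    }

    // Reduce(k2, list(v2)) -> list(v3): for each city, emit the maximum
    // of all temperatures grouped under that key.
    public static class MaxTemperatureReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int max = Integer.MIN_VALUE;
            for (IntWritable v : values) {
                max = Math.max(max, v.get());
            }
            context.write(key, new IntWritable(max));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "max temperature");
        job.setJarByClass(MaxTemperature.class);
        job.setMapperClass(MaxTemperatureMapper.class);
        // The reducer can double as a combiner, since max is associative
        // and commutative; per-file maxima are computed map-side.
        job.setCombinerClass(MaxTemperatureReducer.class);
        job.setReducerClass(MaxTemperatureReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

Reusing the reducer as a combiner mirrors the "per-file maximum" step described above: each mapper's output is reduced locally before being sent across the network.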

II. OVERVIEW

The following sections throw light on how Hadoop's MapReduce parallel and distributed framework can be used for the following applications:
1. Distributed GREP
2. Geospatial Query Processing
3. Ultrafast and Scalable Cone-Beam CT Reconstruction
4. MapReduce for Digital Elevation Models
5. Other Applications

1. DISTRIBUTED GREP

Distributed grep is used to search for a given pattern in a large number of files. For example, a web administrator can use distributed grep to search web server logs in order to find the top requested pages that match a given pattern. The map function takes (input file, line) as input and generates one of the following outputs:

An empty list [], if no match is found.
A key-value pair [(line, 1)], if a match is found.

The reduce function takes (line, [1, 1, 1, ...]) as input and generates (line, n) as output, where n is the number of 1's in the list. Consider the following serial grep pipeline over the files in a directory, which the MapReduce job parallelizes (a sketch of this map/reduce pair follows Fig. 3):

grep -Eh 'A|C' /* | sort | uniq -c | sort -nr

Fig. 3 - Distributed GREP execution overview
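A minimal sketch of this map/reduce pair in Hadoop's Java API might look as follows. The class names and the "grep.pattern" configuration key are our own illustrative assumptions.

import java.io.IOException;
import java.util.regex.Pattern;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// Map: emit (line, 1) when the line matches the pattern; emit nothing otherwise.
public class GrepMapper extends Mapper<Object, Text, Text, IntWritable> {
    private Pattern pattern;

    @Override
    protected void setup(Context context) {
        // The regular expression is passed in through the job configuration.
        pattern = Pattern.compile(context.getConfiguration().get("grep.pattern"));
    }

    @Override
    protected void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
        if (pattern.matcher(value.toString()).find()) {
            context.write(value, new IntWritable(1));
        }
    }
}

// Reduce: receives (line, [1, 1, ...]) and emits (line, n), where n is the
// number of occurrences of that matching line across all input files.
class GrepReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int n = 0;
        for (IntWritable one : values) {
            n += one.get();
        }
        context.write(key, new IntWritable(n));
    }
}

Sorting the reducer's output by count then yields the "top requested pages" ranking that the serial pipeline above produces with uniq -c and sort.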

2. GEOSPATIAL QUERY PROCESSING

With the technological advancements in location-based services, there is a huge surge in the amount of geospatial data. Geospatial queries (nearest-neighbour queries and reverse nearest-neighbour queries) consume a lot of computational resources, and it is observed that their processing is heavily parallelizable. Google Maps uses MapReduce to solve problems such as:

Given an intersection, finding all roads connecting to it.
Rendering the tiles in the map.
Finding the nearest feature to a given address or current location.

3. ULTRAFAST AND SCALABLE CONE-BEAM CT RECONSTRUCTION

Four-dimensional CT (4DCT) and cone-beam CT (CBCT) are widely used in radiation therapy for accurate tumor target definition and localization. However, high-resolution and dynamic image reconstruction is computationally demanding because of the large amount of data processed, so efficient use of these imaging techniques in the clinic requires high-performance computing. The MapReduce technique can be used to develop a novel ultrafast, scalable and reliable image reconstruction technique for 4D CBCT/CT on a parallel computing framework. The Feldkamp-Davis-Kress (FDK) algorithm is accelerated by porting it to Hadoop: map functions were used to filter and back-project subsets of projections, and the reduce function to aggregate those partial volumes into the whole volume. MapReduce automatically parallelized the reconstruction process on a large cluster of compute nodes.

Fig. 5 - Ultrafast and scalable cone-beam CT reconstruction using the FDK algorithm

Voxel-based FDK (FDK-VB) uses key-value pairs to transfer individual voxels. In this scheme, the key encodes the voxel position (x, y, z) as an index calculated as (z*N^2 + y*N + x), and the value is the back-projected image intensity at position (x, y, z). Each map task emits key-value pairs voxel by voxel, and the reduce function accumulates all the intermediate values that share the same key, producing the final reconstructed volume.
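As an illustration of this keying scheme, the sketch below (our own, assuming a cubic N x N x N volume) shows how a voxel position can be flattened into a single key and recovered from it:

// Illustrative sketch of the FDK-VB keying scheme described above:
// a voxel position (x, y, z) in an N x N x N volume is flattened into
// a single long key, and recovered from it on the reduce side.
public final class VoxelKey {
    private VoxelKey() {}

    // Key = z * N^2 + y * N + x
    public static long encode(int x, int y, int z, int n) {
        return (long) z * n * n + (long) y * n + x;
    }

    // Inverse mapping: recover (x, y, z) from the flattened key.
    public static int[] decode(long key, int n) {
        int x = (int) (key % n);
        int y = (int) ((key / n) % n);
        int z = (int) (key / ((long) n * n));
        return new int[] { x, y, z };
    }
}

In the actual job, each map task would emit (encode(x, y, z, N), partial intensity) pairs, and the reducer would accumulate the intensities that share a key into the final voxel value.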

4. MAPREDUCE FOR DIGITAL ELEVATION MODELS

Digital Elevation Models (DEMs) are digital 3D representations of the landscape, in which each (X, Y) position is represented by a single elevation value. DEMs are used for a range of scientific and engineering applications, including hydrologic modeling, terrain analysis and infrastructure design. One of the fundamental processing tasks in the OpenTopography system is the generation of DEMs from very dense (multiple measurements per square meter) LIDAR (Light Detection and Ranging) topography data. In the map phase, input points are assigned to their corresponding grid cells, and in the reduce phase the elevation for each grid cell is computed from the local bins. The reduced outputs are merged and sorted, and the DEM is generated in the Arc ASCII grid format.

5. OTHER APPLICATIONS

Table 1 lists a few other prominent applications of MapReduce, with the map and reduce functions each application uses.

Table 1 - A few other prominent applications of MapReduce

Count of URL Access Frequency (find the total number of times each URL is accessed)
Map function: processes logs of web page requests and outputs <URL, 1>.
Reduce function: adds together all values for the same URL and emits a <URL, total count> pair.

Reverse Web-Link Graph (for each URL, find all pages (URLs) pointing to it, i.e. its incoming links)
Map function: outputs <target, source> pairs for each link to a target URL found in a page named "source".
Reduce function: concatenates the list of all source URLs associated with a given target URL and emits the pair <target, list(source)>.

Term-Vector per Host (a term vector summarizes the most important words that occur in a document or a set of documents as a list of <word, frequency> pairs)
Map function: emits a <hostname, term vector> pair for each input document, where the hostname is extracted from the URL of the document.
Reduce function: is passed all per-document term vectors for a given host; it adds these term vectors together, throwing away infrequent terms, and then emits a final <hostname, term vector> pair.

Inverted Index (an inverted index is an index data structure storing a mapping from content, such as words or numbers, to its locations in a database file, or in a document or a set of documents; see the sketch after this table)
Map function: parses each document and emits a sequence of <word, document ID> pairs.
Reduce function: accepts all pairs for a given word, sorts the corresponding document IDs and emits a <word, list(document ID)> pair. The set of all output pairs forms a simple inverted index. It is easy to augment this computation to keep track of word positions.
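The inverted-index entry above translates naturally into Hadoop's Java API. The following is a minimal sketch under our own assumptions: the input format (e.g. KeyValueTextInputFormat) is assumed to deliver each record as a (document ID, document text) pair, and the class names are illustrative.

import java.io.IOException;
import java.util.TreeSet;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// Map: parse a document and emit <word, document ID> for every word.
// The document ID is assumed to arrive as the input key.
public class InvertedIndexMapper extends Mapper<Text, Text, Text, Text> {
    @Override
    protected void map(Text docId, Text body, Context context)
            throws IOException, InterruptedException {
        for (String word : body.toString().toLowerCase().split("\\W+")) {
            if (!word.isEmpty()) {
                context.write(new Text(word), docId);
            }
        }
    }
}

// Reduce: collect all document IDs for a word, sort and deduplicate them,
// and emit <word, list(document ID)> as one line of the inverted index.
class InvertedIndexReducer extends Reducer<Text, Text, Text, Text> {
    @Override
    protected void reduce(Text word, Iterable<Text> docIds, Context context)
            throws IOException, InterruptedException {
        TreeSet<String> sorted = new TreeSet<>();
        for (Text id : docIds) {
            sorted.add(id.toString());
        }
        context.write(word, new Text(String.join(", ", sorted)));
    }
}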

III. LIMITATIONS OF MAPREDUCE

There are a few scenarios in which the MapReduce programming model cannot be employed. If the computation of a value depends on previously computed values, MapReduce cannot be used. A good example is the Fibonacci series, where each value is the sum of the previous two values, i.e., f(k+2) = f(k+1) + f(k); because every step depends on the results of earlier steps, the work cannot be split into independent map tasks. Also, if the data set is small enough to be computed on a single machine, it is better to do it as a single reduce operation rather than going through the entire MapReduce process. However, there are special implementations of MapReduce, such as MRlite, that are optimized to process moderate-size data with dependencies among the various computational steps.

IV. CONCLUSION

Learning from the application studies, we have explored the design space for supporting data-intensive and compute-intensive applications on large, data-center-scale computer systems. Traditional data processing and storage approaches face many challenges in meeting the continuously increasing computing demands of Big Data. This work focused on MapReduce, one of the key enabling approaches for meeting Big Data demands by means of highly parallel processing on a large number of commodity nodes.