DESIGN AND ESTIMATION OF BIG DATA ANALYSIS USING MAPREDUCE AND HADOOP-A

1 DARSHANA WAJEKAR, 2 SUSHILA RATRE
1,2 Department of Computer Engineering, Pillai HOC College of Engineering & Technology, Rasayani, Raigad, University of Mumbai
E-mail: 1 dwajekar@mes.ac.in, 2 sratre@mes.ac.in

Abstract — Hadoop is a popular open-source implementation of the MapReduce programming model; MapReduce and HDFS are its two major components. To avoid network congestion, we describe a new technique that preprocesses intermediate data between the map and reduce stages, thereby increasing the throughput of Hadoop clusters. The existing implementation suffers from a serialization barrier that delays the reduce phase, repetitive merges, and excessive disk accesses, so handling large datasets requires improving performance by modifying the existing Hadoop system. We describe Hadoop-A, an acceleration framework that optimizes Hadoop through a plug-in mechanism for fast data movement, overcoming these limitations. A merge algorithm is introduced that merges data without replication or disk access, and a full pipeline is designed to overlap the shuffle, merge, and reduce phases. Experimental results show that Hadoop-A significantly speeds up data movement in MapReduce.

Keywords — Hadoop, MapReduce, Hadoop Acceleration.

I. INTRODUCTION

Big data is a collection of datasets that cannot be processed using traditional computing techniques. Hadoop is a Java-based programming framework that supports the handling of large data sets in a distributed computing environment; it is part of the Apache project sponsored by the Apache Software Foundation. Hadoop was originally developed on the basis of Google's MapReduce, in which an application is broken down into numerous small parts [1]. Apache Hadoop can detect and handle failures at the application layer, and can provide a highly available service on top of a cluster of computers, each of which may be prone to failure. Representative data-intensive Web applications include search engines, online auctions, webmail, and online retail sales; other uses range from customizing content for users, supply chain management, and fraud analysis to bioinformatics and miscellaneous analytics. Hadoop is an open-source implementation of Google's MapReduce, supported by leading IT companies such as Yahoo!.

Hadoop implements the MapReduce framework with two categories of components: a JobTracker and various TaskTrackers. TaskTrackers are managed by the JobTracker and launched on each computational node to perform the tasks they receive from the JobTracker. Data processing is performed in parallel through two main tasks: map and reduce. The JobTracker is in charge of scheduling the map tasks (MapTasks) and reduce tasks (ReduceTasks) onto TaskTrackers; it also monitors job progress, collects runtime execution statistics, and handles possible faults and errors through task re-execution. Hadoop merges the intermediate data segments of map tasks when the number of data segments exceeds a threshold. The current merging algorithm repeatedly merges data segments, which causes multiple rounds of disk access for the same data and degrades the performance of Hadoop. The MapReduce programming model mostly requires programmers to express the computation using only two primitives of functional programming languages, Map and Reduce.
The map function typically processes a portion of the input data independently and emits multiple intermediate key/value pairs, while the reduce function folds all key/value pairs that share the same key into a single output key/value pair. Additionally, users can supply an optional combine function that locally aggregates the intermediate key/value pairs to save network bandwidth and reduce memory consumption. Hadoop MapReduce has three data processing phases: shuffle, merge, and reduce.
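As a concrete illustration of these primitives, the following is a minimal word-count sketch against the standard Hadoop Java MapReduce API; it is a generic example for exposition, not code from the Hadoop-A implementation. The reducer class also serves as the optional combine function described above, aggregating map output locally before the shuffle.

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// Map: processes one split of the input independently and emits an
// intermediate (word, 1) key/value pair for every token it sees.
class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable offset, Text line, Context ctx)
            throws IOException, InterruptedException {
        for (String token : line.toString().split("\\s+")) {
            if (token.isEmpty()) continue;
            word.set(token);
            ctx.write(word, ONE);
        }
    }
}

// Reduce: folds all values sharing one key into a single output pair.
// Registered as the combiner too, it saves network bandwidth by
// pre-aggregating (word, 1) pairs on the map side.
class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text word, Iterable<IntWritable> counts, Context ctx)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable c : counts) sum += c.get();
        ctx.write(word, new IntWritable(sum));
    }
}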

II. LITERATURE SURVEY

Liu et al. [2] proposed a design of MPI over InfiniBand that exploits RDMA for large as well as small and control messages, improving scalability by combining RDMA operations such as send and receive. The RDMA-based design reduces latency by 24%, increases bandwidth by over 104%, and reduces overhead by up to 22% [2]. In the past decade, Dean and Ghemawat [1] implemented special-purpose computations that process large amounts of raw data, such as crawled documents and web request logs, to compute various kinds of derived data. These computations are conceptually straightforward, but they raise problems such as how to parallelize the computation, distribute the data, and handle failures when performing even simple computations over an excess of data. The resulting system is well recognized for its scalability and fine-grained fault tolerance, although its performance has been noted to be suboptimal in the database context.

According to a recent study [3], Hadoop, an open-source implementation of MapReduce, was measured against two state-of-the-art parallel database systems on a variety of analytical tasks. MapReduce can reach better performance by allocating more compute nodes from the cloud to speed up computation, yet this pay-for-more-nodes approach is not cost-effective in a pay-as-you-go environment. The authors conduct a performance study of MapReduce (Hadoop) on a cluster of Amazon Elastic Compute Cloud (EC2) nodes with various levels of parallelism. They show that by carefully tuning a set of factors, the overall performance of Hadoop can be improved substantially for the same benchmark used in [3], making it more comparable to that of parallel database systems. The results illustrate that it is conceivable to build a cloud data processing system that is both elastically scalable and efficient.

Big data [4] cannot be handled efficiently by conventional relational database management systems, as it usually requires parallel execution on a large number of servers. In practice, the completion time of a MapReduce job can be delayed by slower tasks. Hsiao and Kao [4] propose a usage-aware MapReduce scheduler that copes with system heterogeneity by including task execution periods in scheduling. Drawing on ideas from both the Fair scheduler and the LATE scheduler, the usage-aware scheduler is able to reduce the overall completion time of MapReduce applications.

Current MapReduce systems require the dataset to be loaded into the cluster before queries can run, which causes a high delay before query processing starts. Li, Mazur, et al. [5] therefore presented a one-pass analytics technique to reduce this delay. MapReduce-based methods are slower than parallel database systems on a variety of tasks because of the performance gap between the two; to simplify fault tolerance, the MapReduce programming model materializes the output of each map and reduce task before it is consumed. Condie et al. [6] proposed an improved MapReduce architecture in which intermediate data is pipelined between operators, presenting three techniques:
1. Online aggregation: data produced by a mapper is immediately handled by a reducer, so the task can be completed in less time.
2. Hadoop Online Prototype (HOP): supports continuous queries, where a MapReduce job runs constantly, receiving new data as it arrives and evaluating it.
3. Pipelining: increases the opportunities for parallelism, improves utilization, and reduces response time [6].

III. EXISTING SYSTEM

As data sizes increase day by day, the management of data is becoming a critical issue. For example, some stock exchanges generate terabytes of data a day, and the popular social networking site Facebook receives daily uploads of images, videos, and other content measured in terabytes. This data may be required for future reference or for business intelligence purposes in many organizations. Existing database management systems are not capable of managing such large amounts of data, which leads to the need for an effective and fault-tolerant system. Google invented a new file system, known as the Google File System, for its own big data management needs.
On the basis of the Google File System, Hadoop's HDFS and MapReduce were designed. Hadoop is maintained by the Apache Foundation and is supported by many organizations, such as Yahoo! and Facebook. The existing Hadoop system has several performance and security problems. Hadoop has two main components, map and reduce. The data that a client provides to the Hadoop system is separated into various splits, and each split is assigned to a map task. The map task creates key/value pairs from the input data: the mapper's job is to map input key/value pairs to intermediate key/value pairs, transforming each input record into intermediate records. The number of maps depends on the number of input blocks. The intermediate data is provided to the reduce task, which merges the key/value pairs into single values. During the map task, data is sorted according to the user query and shuffled. Performance can be improved [7] along many dimensions, such as merging and data I/O, and many studies have been carried out to improve on the existing system with a number of different enhancements. There are several issues in the existing Hadoop framework, including:
1. a serialization between the Hadoop shuffle/merge and reduce phases, and
2. repetitive merges and disk accesses.

3.1 A Serialization in Hadoop Data Processing

Hadoop [7] strives to pipeline the processing of huge datasets, and it is indeed able to do so, particularly for the map and shuffle/merge phases. After a brief initialization period, a pool of simultaneous MapTasks starts the map function on the first set of data splits. As soon as the map output files (MOFs) are created from these splits, a pool of ReduceTasks starts to fetch segments from these MOFs. At each ReduceTask, once the number of segments is larger than a threshold, or their total data size exceeds a memory threshold, the smallest segments are merged. To guarantee the correctness of the MapReduce programming model, it is necessary to make sure that the reduce phase does not start until the map phase is done for all data splits. However, the pipeline contains an implicit serialization: at each ReduceTask, not until all of its segments are available and merged will the reduce phase begin to process data segments via the reduce function. This essentially enforces a serialization between the shuffle/merge phase and the reduce phase, as the toy model below illustrates.
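This barrier can be modeled in a few lines of plain Java (a toy sketch, not actual Hadoop code): the reduce work is gated on a latch that opens only after every expected segment has been fetched and merged.

import java.util.concurrent.CountDownLatch;

// Toy model of the implicit serialization inside a ReduceTask:
// fetcher threads deliver and merge segments concurrently, but the
// reduce function cannot run until the latch reports that all
// expected segments have arrived.
public class ReduceBarrierDemo {
    public static void main(String[] args) throws InterruptedException {
        final int numSegments = 4;
        CountDownLatch allMerged = new CountDownLatch(numSegments);

        for (int i = 0; i < numSegments; i++) {
            final int id = i;
            new Thread(() -> {
                System.out.println("shuffle/merge of segment " + id);
                allMerged.countDown();    // segment fetched and merged
            }).start();
        }

        allMerged.await();                // the serialization point
        System.out.println("reduce phase may now begin");
    }
}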

3.2 Repetitive Merges and Disk Accesses

ReduceTasks [7] merge data segments when the number of segments or their total size goes over a threshold, and a newly merged segment has to be spilled to local disks due to memory pressure. However, the current merge algorithm in Hadoop often leads to repetitive merges and therefore extra disk accesses. Consider a common sequence of merge operations in Hadoop: under memory pressure, a merge incurs disk access, and the resulting segment is inserted back into the heap according to its relative size. When more segments arrive, the threshold is crossed again, and another set of segments must be merged. This once again causes additional disk access, not to mention the need to read back segments that have already been stored on local disks. Depending on its relative size, a previously merged segment is likely to be grouped into another set and merged again; since the smallest segments are usually selected for merging, the chances are rather high that a segment will be merged repetitively. Furthermore, any segment merged from a subset of segments eventually needs to be merged again for the final result. Altogether, this means repetitive merges and disk accesses, and therefore degraded performance for Hadoop. Other merge heuristics can lead to a similar problem of essentially the same nature: the key constraint is that if any merge happens before a global order of segments is established, the result must be re-merged into the final output before the reduce function runs. Therefore, an alternative merge algorithm is critical for Hadoop to mitigate the impact of repetitive merges and their associated disk accesses.
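The effect is easy to reproduce with a size-ordered heap. The sketch below is a simplification: the threshold and segment sizes are invented for illustration, and real Hadoop merges up to io.sort.factor segments at a time rather than pairs, but the repeated spilling of already-merged bytes is the same phenomenon.

import java.util.PriorityQueue;

// Simplified model of the merge policy: whenever more than THRESHOLD
// segments are buffered, merge the smallest ones, spill the result to
// "disk", and reinsert it. Merged segments keep being selected again,
// so the same bytes are written (and re-read) repeatedly.
public class RepetitiveMergeDemo {
    static final int THRESHOLD = 3;  // illustrative, not Hadoop's value

    public static void main(String[] args) {
        PriorityQueue<Long> segments = new PriorityQueue<>(); // smallest first
        long[] arrivals = {2, 3, 5, 4, 6, 8, 7, 9};  // segment sizes in MB (made up)
        long input = 0, diskTraffic = 0;

        for (long size : arrivals) {
            input += size;
            segments.add(size);
            while (segments.size() > THRESHOLD) {
                // merge the two smallest segments and spill the result
                long merged = segments.poll() + segments.poll();
                diskTraffic += merged;   // extra disk write caused by the merge
                segments.add(merged);    // reinserted; may be merged yet again
            }
        }
        // with these numbers: 58 MB of merge spills for 44 MB of map output
        System.out.println(diskTraffic + " MB spilled for " + input + " MB of input");
    }
}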
IV. SYSTEM MODEL

System Architecture of Hadoop-A: Fig. 1 shows the architecture of Hadoop-A [7]. Two new user-configurable plug-in modules, MOFSupplier (Map Output File supplier) and NetMerger, are introduced to leverage RDMA-capable interconnects and to enable alternative data merge algorithms. Both MOFSupplier and NetMerger are threaded C implementations. C was chosen over Java to avoid the overhead of the Java Virtual Machine (JVM) in data processing and to allow flexible adoption of new communication mechanisms such as RDMA, which does not yet exist in Java. A primary requirement of Hadoop-A is to retain the same programming and control interfaces for users.

Fig. 1. Software Architecture of Hadoop-A

To that end, the MOFSupplier and NetMerger plug-ins are designed as native C programs that can be launched by TaskTrackers. A user can decide to enable or disable the acceleration, which is controlled by a parameter in the configuration file, and Hadoop programs can run without any change when the Hadoop-A plug-ins are activated. Experimental results show that Hadoop-A decreases the execution time by 50% and improves the throughput. Query submission and execution on a Hadoop platform proceed as follows:
Step 1: The user submits a query through the client machine to the Hadoop master node.
Step 2: The master node assigns the job and sends the data chunks to the slave nodes.
Step 3: The master node collects the result of every slave node and aggregates these partial results into their final form.
Step 4: The master node sends the aggregated result to the client machine, and the client machine displays the results to the user.
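The client side of Step 1 can be written as a short driver using the standard Hadoop job-submission API, reusing the word-count classes sketched in the introduction. Note that hadoop.acceleration.enabled below is a hypothetical stand-in: the paper does not name the actual configuration parameter that toggles the Hadoop-A plug-ins.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// Step 1 from the client's perspective: configure a job, point it at
// input/output paths in HDFS, and submit it to the master node.
public class WordCountDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Hypothetical switch for the Hadoop-A plug-ins described above;
        // the real parameter name is not given in the paper.
        conf.setBoolean("hadoop.acceleration.enabled", true);

        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCountDriver.class);
        job.setMapperClass(WordCountMapper.class);
        job.setCombinerClass(WordCountReducer.class); // optional local aggregation
        job.setReducerClass(WordCountReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}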

Fig. 2. Proposed System Architecture

V. PROPOSED SYSTEM TECHNIQUE

Data Uploading: The big data is selected and stored in the Hadoop environment so that MapReduce can be performed on it. The data should be loaded into the VM server location.

Segmentation: Packet segmentation improves network performance by splitting packets into Ethernet-frame-sized pieces held in separate buffers. Segmentation splits one packet into multiple packets so that reliable transmission of each one can be performed separately, and it may be required when a data packet is larger than the maximum transmission unit supported by the network. A packet processing system is specially designed for dealing with network traffic. Most networks, such as the Internet, are distributed and layered systems composed of hosts, workstations, switches, routers, and so on. The processing speed of edge equipment falls behind that of the core network; finally, the access network connects the terminals of a customer endpoint, and its bandwidth and line rate requirements are usually the lowest of the three. The packet processing system can be deployed at any layer of the network, either in high-end core routers or in LAN switches. The flexibility of the system comes from the programmable components within it, and a series of stacked network protocols guarantees its capability to achieve the performance specification.

Task Assignment: The data center should be selected according to the computation and storage capacity of the servers residing in it. Identifying the right data center is important for minimizing the operational expenditure of the servers in each data center. Data chunks can be placed in the same data center when extra servers are provided in each data center; further increasing the number of servers will not affect the distribution of tasks. A task should be assigned to the data center where the number of activated servers is optimal, since task assignment deeply influences the operational expenditure of the data center. Tasks are assigned to the nearest data center for effective processing of data (see the sketch at the end of this section). Each data chunk has a storage requirement and will be required by big data tasks.

Data Loading: Data placement decides on which servers data is placed and how much load capacity is assigned to each file copy, so as to minimize the communication rate while ensuring a good user experience. MinCopysets is a data replication placement scheme that decouples data distribution and replication to improve data durability in distributed data centers. Recently, Jin et al. proposed a joint optimization scheme that simultaneously optimizes virtual machine (VM) placement and network flow routing to maximize energy savings.

Processing of Task: A high-computation server should not process a small population of data chunks, because this increases the operational expenditure of the server and wastes storage and transmission cost. The population of data is processed according to the computational capacity of the servers in the data centers.

Evaluation Process: We present the performance results of the MapReduce algorithm, evaluating server cost, communication cost, and overall cost under different total server numbers.
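As a rough sketch of the assignment rule described under Task Assignment above, it can be read as: among the data centers that still have enough spare server capacity for a task, pick the nearest one. The DataCenter fields and the numbers are illustrative assumptions; the paper does not specify a concrete cost model.

import java.util.Comparator;
import java.util.List;

// Illustrative task-assignment rule: route a task to the nearest data
// center that can still activate enough servers for it. All names and
// numbers here are assumptions made for the sketch.
public class TaskAssignmentDemo {
    record DataCenter(String name, double distanceKm, int freeServers) {}

    static DataCenter assign(List<DataCenter> centers, int serversNeeded) {
        return centers.stream()
                .filter(dc -> dc.freeServers() >= serversNeeded)
                .min(Comparator.comparingDouble(DataCenter::distanceKm))
                .orElseThrow(() -> new IllegalStateException("no capacity"));
    }

    public static void main(String[] args) {
        List<DataCenter> centers = List.of(
                new DataCenter("dc-east", 120.0, 4),
                new DataCenter("dc-west", 80.0, 0),    // nearest, but full
                new DataCenter("dc-north", 200.0, 10));
        // dc-west is closest but has no free servers, so dc-east wins
        System.out.println("assigned to " + assign(centers, 2).name());
    }
}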

CONCLUSIONS

In this paper, several studies were examined, many of which demonstrate effective results in performance; still, different performance factors, such as merging, need improvement. We examined the design and architecture of the Hadoop MapReduce framework, focusing in particular on data processing inside ReduceTasks, and revealed critical problems faced by the existing implementation. We then designed and implemented Hadoop-A as an extensible acceleration framework that can host plug-in modules to address these issues. By introducing an algorithm that merges data without touching disks, and by planning a full pipeline of the shuffle, merge, and reduce phases for ReduceTasks, we effectively accomplished an accelerated Hadoop framework, Hadoop-A. Experimental results demonstrate that Hadoop-A doubles the data processing throughput of Hadoop and that Hadoop-A is capable of effectively utilizing multiple threads to read data from multiple disks. Because of the new merge algorithm, it can significantly reduce disk accesses during the Hadoop shuffle and merge phases, thereby speeding up data movement.

ACKNOWLEDGMENTS

This paper would not have come into existence without the able guidance, support, and good wishes of all those who stood by me during its development. I wish to give my special thanks to my guide, Prof. Sushila Ratre, for her timely advice and guidance.

REFERENCES

[1] J. Dean and S. Ghemawat, "MapReduce: Simplified data processing on large clusters," Sixth Symp. on Operating System Design and Implementation (OSDI), pages 137-150, December 2004.
[2] J. Liu, J. Wu, and D.K. Panda, "High Performance RDMA-Based MPI Implementation over InfiniBand," Int'l J. Parallel Programming, vol. 32, pp. 167-198, 2004.
[3] D. Jiang, B.C. Ooi, L. Shi, and S. Wu, "The performance of MapReduce: An in-depth study," Proceedings of the 36th International Conference on Very Large Data Bases (VLDB), volume 3, pages 472-483, 2010.
[4] J.H. Hsiao and S.J. Kao, "A Usage-Aware Scheduler for Improving MapReduce Performance in Heterogeneous Environments," International Conference on Information Science, Electronics and Electrical Engineering (ISEEE), vol. 3, 2014, pp. 1648-1652.
[5] B. Li, E. Mazur, Y. Diao, A. McGregor, and P. Shenoy, "A Platform for Scalable One-Pass Analytics Using MapReduce," Proc. ACM SIGMOD Int'l Conf. Management of Data (SIGMOD '11), pp. 985-996, 2011.
[6] T. Condie, N. Conway, P. Alvaro, J.M. Hellerstein, K. Elmeleegy, and R. Sears, "MapReduce Online," Proc. 7th USENIX Symp. on Networked Systems Design and Implementation (NSDI), pp. 312-328, Apr. 2010.
[7] W. Yu, Y. Wang, and X. Que, "Design and Evaluation of Network-Levitated Merge for Hadoop Acceleration," IEEE Transactions on Parallel and Distributed Systems, vol. 25, no. 3, March 2014.
[8] A. Pavlo, E. Paulson, A. Rasin, D.J. Abadi, D.J. DeWitt, S. Madden, and M. Stonebraker, "A comparison of approaches to large-scale data analysis," SIGMOD, pages 165-178, ACM, 2009.