# Comparison of k-means and k-medoids Clustering Algorithms for Big Data Using MapReduce Techniques


Subhashree K¹, Prakash P S²

¹Student, Kongu Engineering College, Perundurai, Erode
²Assistant Professor, Kongu Engineering College, Perundurai, Erode

## Abstract

Huge amounts of structured and unstructured data are being collected from various sources. Such data, called Big Data, are difficult to handle on a single machine and require the work to be distributed across many computers. Hadoop is a distributed framework that uses the MapReduce programming model to process data in a distributed manner. Clustering analysis is one of the important research areas in data mining and is among the most commonly used data processing algorithms. Clustering is a division of data into groups, such that data in the same group are similar and data in different groups are dissimilar; it aims at maximizing intra-class similarity and inter-class dissimilarity. k-means is a popular clustering algorithm because of its simplicity. As data volumes grow, researchers have turned to MapReduce, a parallel processing framework, to obtain high performance. MapReduce, however, is unsuitable for iterative algorithms owing to the repeated cost of restarting jobs, reading big data and shuffling it. To overcome this problem, a novel processing model in MapReduce, an optimized k-means clustering method, is introduced: it uses probability sampling followed by sample clustering and merging, with two merge algorithms called weight-based merge clustering and distribution-based merge clustering, to eliminate the iteration dependence and obtain high performance. The algorithms are also compared with the k-medoids clustering algorithm.

**Keywords:** k-means, MapReduce, k-medoids

## 1. Introduction

Massive improvements in computer technology have led to the development of many new devices.
Computing devices have many uses and are necessary for businesses, scientists, governments and engineers. The common thread among all computing devices is the potential to generate data. The popularity of the Internet, along with a sharp increase in the network bandwidth available to users, has resulted in the generation of huge amounts of data. The amount of data generated can often be too large for a single computer to process in a reasonable amount of time, and the data may also be too big to store on a single machine. To reduce the time it takes to process the data, and to store it at all, it is necessary to write programs that can execute on two or more computers and distribute the workload among them. To address these issues, Google developed the Google File System (GFS), a distributed file system architecture model for large-scale data processing, and created the MapReduce programming model. The MapReduce programming model is a programming abstraction that hides the underlying complexity of distributed data processing. The difficulties of parallelizing computation, distributing data and handling faults therefore no longer fall on the programmer, because the MapReduce framework handles all these details internally.
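The abstraction can be illustrated with a single-machine sketch in Python (plain functions standing in for the framework; the word-count example and all function names are illustrative assumptions, not the Hadoop API):

```python
from itertools import groupby
from operator import itemgetter

def map_fn(record):
    # Map: emit (key, value) pairs; here, a count of 1 per word in one line.
    for word in record.split():
        yield (word, 1)

def reduce_fn(key, values):
    # Reduce: merge all intermediate values that share the same key.
    return (key, sum(values))

def map_reduce(records, map_fn, reduce_fn):
    # The sort/group step stands in for the framework's shuffle phase.
    intermediate = [kv for r in records for kv in map_fn(r)]
    intermediate.sort(key=itemgetter(0))
    return [reduce_fn(k, [v for _, v in group])
            for k, group in groupby(intermediate, key=itemgetter(0))]

print(map_reduce(["big data", "big clusters"], map_fn, reduce_fn))
# → [('big', 2), ('clusters', 1), ('data', 1)]
```

In a real deployment the grouping between the two phases is the framework's shuffle, performed across machines; the programmer supplies only the map and reduce functions.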

A novel processing model in MapReduce is proposed in this paper, in which sampling is used to eliminate the iteration dependence of k-means and to obtain high performance. In particular, the contributions of the paper are summarized as follows. First, a novel method to optimize k-means clustering algorithms using MapReduce is presented, which eliminates the dependence on iteration and reduces the computation cost of the algorithms. The implementation defines the mapper and reducer jobs and requires no modifications to the MapReduce framework. Second, two sample merging strategies for the processing model are proposed, and extensive experiments are conducted to study the effect of various parameters using a real dataset. The results show that the proposed methods are efficient and scalable.

## 2. Related Work

Davidson, I. et al [2] proposed an algorithm for speeding up k-means clustering with bootstrap averaging. k-means is time consuming as it converges to a local optimum of its loss function (the distortion), and the solution converged to is particularly sensitive to the initial starting positions. As a result, its typical use in practice involves applying the algorithm from many randomly chosen initial starting positions. To overcome this disadvantage, bootstrap averaging is used. Bootstrap averaging builds multiple models by creating small bootstrap samples of the training set and building a single model from each; similar cluster centers are then averaged to produce a single model that contains k clusters. The speedup arises because the computational complexity of k-means is linear in the number of data points, while the number of iterations until convergence is proportional to the size of the data set. This approach therefore yields a speedup for two reasons: first, less data are clustered, and second, the k-means algorithm converges more quickly for smaller data sets than for larger data sets drawn from the same source. It is important to note that the algorithm need not be restarted many times for each bootstrap sample.

Fahim, A. M. et al [3] proposed an idea that makes k-means more efficient, especially for datasets containing a large number of clusters. In each iteration, the k-means algorithm computes the distances between every data point and all centers, which is computationally very expensive for huge datasets. Instead, for each data point, the distance to its nearest cluster can be kept. At the next iteration, the distance to the previous nearest cluster is computed; if the new distance is less than or equal to the previous distance, the point stays in its cluster and there is no need to compute its distances to the other cluster centers. This saves the time required to compute distances to k-1 cluster centers. Two functions are written in the proposed method: the first is the basic function of the k-means algorithm, which finds the nearest center for each data point; the second is distance_new(), an efficient implementation of the k-means method. The algorithm produces the same clustering results as the k-means algorithm with significantly superior performance.

David Arthur et al [1] dealt with the speed and accuracy of the k-means algorithm. The proposed algorithm rectifies the problem by augmenting k-means with a simple, randomized seeding technique, obtaining an algorithm that is O(log k)-competitive with the optimal clustering.

Dean, J. et al [4] discussed data processing on large clusters using MapReduce. MapReduce is a programming model and an associated implementation for processing and generating large data sets. Users specify a map function that processes a key/value pair to generate a set of intermediate key/value pairs, and a reduce function that merges all intermediate values associated with the same intermediate key. Programs written in this functional style are automatically parallelized and executed on a large cluster of commodity machines. The run-time system takes care of the details of partitioning the input data, scheduling the program's execution across a set of machines, handling machine failures, and managing the required inter-machine communication. This allows programmers without any experience with parallel and distributed systems to easily utilize the resources of a large distributed system. MapReduce runs on a large cluster of commodity machines and is highly scalable: a typical MapReduce computation processes many terabytes of data on thousands of machines. Programmers find the system easy to use: hundreds of MapReduce programs have been implemented, and upwards of one thousand MapReduce jobs are executed on Google's clusters every day.

Kenn Slagter et al [5] proposed an efficient partitioning technique to overcome the problem of skew. The proposed algorithm improves load balancing and reduces memory requirements. Nodes that run slowly degrade the performance of a MapReduce job. To overcome this problem, a technique called micro-partitioning is used that divides the work into tasks smaller and more numerous than the number of reducers and assigns them to reducers in a just-in-time fashion. Running many small tasks lessens the impact of stragglers, since the work scheduled on slow nodes is small and can be taken over by other idle workers.

Juntao Wang et al [6] developed density-based detection methods based on the characteristics of noise data, in which discovery and processing steps for the noise data are added to the original algorithm. Preprocessing the data improves the clustering result significantly, and the impact of noise data on the k-means algorithm is decreased. First, the data to be clustered are pre-treated to remove outliers using an outlier detection method based on LOF, so that outliers cannot participate in the calculation of the initial cluster centers, excluding the interference of outliers in the search for the next cluster center point. Second, the fast global k-means clustering algorithm, an improvement by Aristidis Likas of the global k-means clustering algorithm, is applied to the newly generated data set.

Min Chen et al [7] reviewed the background and state-of-the-art of big data. The general background of big data and related technologies, such as cloud computing, the Internet of Things, data centers and Hadoop, were introduced. Then the four phases of the big data value chain, i.e., data generation, data acquisition, data storage, and data analysis, are focussed on. For each phase, the general background is studied, the technical challenges are discussed, and the latest advances are reviewed. Finally, several representative applications of big data, including enterprise management, the Internet of Things, online social networks, medical applications, collective intelligence, and the smart grid, are examined. These discussions aim to provide readers a comprehensive overview and big picture of this exciting area.

Cluster analysis has undergone vigorous development in recent years. There are several types of clustering: hierarchical clustering, density-based clustering, grid-based clustering, model-based clustering and partitional clustering. Each clustering type has its own style and optimization methods. Given the expansion of data sizes and the limitation of a single machine, a natural solution is to consider parallelism in a distributed computational environment. MapReduce is a programming framework for processing large-scale datasets by exploiting the parallelism among a cluster of computing nodes, and it gains popularity for its simplicity, flexibility, fault tolerance and scalability. However, not all big data processing problems can be made efficient by parallelism alone, because partitional clustering algorithms require many iterations, and the resulting job creation time and big data shuffling are hard to swallow, especially when the data size is huge. Parallelism alone is not enough; only by eliminating the partitional clustering algorithm's dependence on iteration can high performance be achieved.

## 3. Proposed Method

The proposed work is divided into three modules:

- Traditional k-means clustering in MapReduce
- Optimized clustering in MapReduce
  - Weight-based Merge Clustering (WMC)
  - Distribution-based Merge Clustering (DMC)
- k-medoids clustering

### 3.1 Traditional k-means Clustering in MapReduce

MapReduce is a programming model for processing and generating large data sets with a parallel, distributed algorithm on a cluster. It has two steps, namely map and reduce. In the map step, the problem is divided into sub-problems, i.e., this step processes smaller problems, and the results are passed to the reduce task. In the reduce step, the answers to the sub-problems are collected and combined to form the final answer to the original problem.

### 3.2 Optimized Clustering in MapReduce

Let D be the original dataset and k the number of clusters required; i and j are integers, with i ranging from 1 to 2k and j from 1 to k; c_i is the ith center, and C_i is the collection of points belonging to center c_i. A MapReduce job will repeatedly read the disk where the large-scale dataset is stored, and the large-scale data will be shuffled over the whole cluster each time; the I/O and network costs are therefore very expensive. To avoid the iteration, sampling is done to obtain some subsets of the big data. By processing these subsets, center sets are obtained which can be used to cluster the original dataset. This method involves probability sampling followed by sample clustering and merging.

1. Probability Sampling. The first MapReduce job is sample selection.
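The per-iteration job of Section 3.1 can be sketched on a single machine as follows (function names and the in-memory grouping are illustrative assumptions, not Hadoop code); each such iteration corresponds to one full MapReduce job, which is exactly the repeated cost the sampling-based method avoids:

```python
import math

def mapper(point, centers):
    # Map step: assign the point to its nearest current center.
    nearest = min(range(len(centers)),
                  key=lambda i: math.dist(point, centers[i]))
    return (nearest, point)

def reducer(center_id, assigned_points):
    # Reduce step: recompute the center as the mean of its assigned points.
    n = len(assigned_points)
    return tuple(sum(coord) / n for coord in zip(*assigned_points))

def kmeans_iteration(points, centers):
    # One MapReduce job: map all points, shuffle by center id, reduce.
    groups = {}
    for p in points:
        cid, _ = mapper(p, centers)
        groups.setdefault(cid, []).append(p)
    return [reducer(cid, pts) for cid, pts in sorted(groups.items())]

centers = kmeans_iteration([(0, 0), (1, 1), (10, 10)], [(0, 0), (10, 10)])
print(centers)  # → [(0.5, 0.5), (10.0, 10.0)]
```

Repeating this until convergence means rescheduling the job, rereading the dataset and reshuffling it on every pass, which motivates replacing the loop with sampling.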
Sampling is performed on the original large dataset using k and a probability p_x, in which a parameter ε ∈ (0,1) controls the size of the sample and N is the number of points; this generates samples small enough to be processed on a single machine.

2. Sample Clustering and Merging. Small sample files are obtained after sampling. In the second job, mappers cluster each sample file using k and obtain the centers for those samples. These centers are shuffled to a single reducer and merged into k final centers, which are used to obtain the final clustering result.

### 3.3 Weight-based Merge Clustering (WMC)

Consider a situation in which two clusters A and B are to be merged, whose centers are the points c_a and c_b respectively, where A has m points and B has n points. With a simple merging strategy, the new center of the merged cluster is the midpoint of c_a and c_b, but this is inaccurate: if m > n, the real new center of A and B lies closer to c_a and the range of A. Based on this observation, the WMC strategy is introduced. In the second MapReduce job, after sample clustering in each mapper, the number of points assigned to each center is collected and used to weight the centers.

### 3.4 Distribution-based Merge Clustering (DMC)

With rational sampling, each sample file can represent the whole dataset very well. Meanwhile, with careful seeding, the clustering result of each map is globally optimal. So a merge idea is proposed: separate the 2k² centers into k groups, where each group has 2k centers and contains one and only one center from each sample's clustering result. First an intermediate clustering result is selected at random, and its k centers are recognized as the first members of the k different groups; then, for each group, members are chosen from the rest of the intermediate clustering results according to the distances between centers.

### 3.5 k-medoids Clustering

The k-means method uses the centroid to represent a cluster, which makes it sensitive to outliers: a data object with an extremely large value may distort the distribution of the data. The k-medoids method overcomes this problem by representing each cluster with a medoid rather than a centroid, where a medoid is the most centrally located data object in a cluster. Here, k data objects are selected randomly as medoids to represent k clusters, and every remaining data object is placed in the cluster whose medoid is nearest (or most similar) to it. After all data objects have been processed, new medoids are determined that represent their clusters better, and the entire process is repeated: all data objects are bound again to clusters based on the new medoids. In other words, the medoids move step by step in each iteration, and the process continues until no medoid moves, at which point k clusters representing the set of n data objects have been found.

## 4. Result Analysis

The dataset considered is from real-world settings and is publicly available at https://archive.ics.uci.edu. The Individual Household Electric Power Consumption Data Set (House) consists of 2,075,259 records in 9 dimensions. The performance of k-means clustering, optimized k-means clustering with Weight-based Merge Clustering and with Distribution-based Merge Clustering, and k-medoids clustering is analysed and compared. The performance is evaluated by clustering validation parameters such as intra-cluster distance, inter-cluster distance and time complexity rate. Based on the comparison, the experimental results show that the proposed approach works better than the existing system.

Fig.1. Inter Cluster Distance (inter-cluster distance rate of the compared methods)

Fig.2. Intra Cluster Distance (intra-cluster distance rate of the compared methods)

## 5. Conclusion

Iteration of the k-means algorithm was the main factor affecting clustering performance, and hence a novel, efficient parallel clustering model was proposed. Experimental results on a large real-world dataset demonstrate that the proposed optimized algorithm is efficient and performs better than the existing algorithms. Clustering validation shows that the quality of the proposed clustering methods is improved compared to k-means, and the experimental results identify the proposed system as more effective than the existing one.

## References

[1] Arthur, David, and Sergei Vassilvitskii (2007), "K-Means++: The Advantages of Careful Seeding", Proc. 18th Ann. ACM-SIAM Symp. on Discrete Algorithms (SODA), Vol. 80, No. 3.

[2] Davidson, I., and A. Satyanarayana (2003), "Speeding up k-means clustering by bootstrap averaging", IEEE Data Mining Workshop on Clustering Large Data Sets, Vol. 25, No. 2.

[3] Fahim, A. M., A. M. Salem, F. A. Torkey, and M. A. Ramadan (2006), "An efficient enhanced k-means clustering algorithm", Journal of Zhejiang University Science, Vol. 7, No. 10.

[4] Dean, Jeffrey, and Sanjay Ghemawat (2008), "MapReduce: simplified data processing on large clusters", Communications of the ACM, Vol. 51, No. 1.

[5] Slagter, Kenn, Ching-Hsien Hsu, Yeh-Ching Chung, and Daqiang Zhang (2013), "An improved partitioning mechanism for optimizing massive data analysis using MapReduce", The Journal of Supercomputing, Vol. 66, No. 1.

[6] Wang, Juntao, and Xiaolong Su (2011), "An improved K-Means clustering algorithm", Proc. IEEE 3rd International Conf. on Communication Software and Networks (ICCSN), Vol. 30, No. 7.

[7] Chen, Min, Shiwen Mao, and Yunhao Liu (2014), "Big Data: A Survey", Mobile Networks and Applications, Vol. 19, No. 2.


Hadoop Overview Data analytics has become a key element of the business decision process over the last decade. Classic reporting on a dataset stored in a database was sufficient until recently, but yesterday

### MapReduce Jeffrey Dean and Sanjay Ghemawat. Background context

MapReduce Jeffrey Dean and Sanjay Ghemawat Background context BIG DATA!! o Large-scale services generate huge volumes of data: logs, crawls, user databases, web site content, etc. o Very useful to be able

### EFFICIENT DATA ANALYSIS SCHEME FOR INCREASING PERFORMANCE IN BIG DATA

EFFICIENT DATA ANALYSIS SCHEME FOR INCREASING PERFORMANCE IN BIG DATA Mr. V. Vivekanandan Computer Science and Engineering, SriGuru Institute of Technology, Coimbatore, Tamilnadu, India. Abstract Big data

### Large data computing using Clustering algorithms based on Hadoop

Large data computing using Clustering algorithms based on Hadoop Samrudhi Tabhane, Prof. R.A.Fadnavis Dept. Of Information Technology,YCCE, email id: sstabhane@gmail.com Abstract The Hadoop Distributed

### Robust Outlier Detection Technique in Data Mining: A Univariate Approach

Robust Outlier Detection Technique in Data Mining: A Univariate Approach Singh Vijendra and Pathak Shivani Faculty of Engineering and Technology Mody Institute of Technology and Science Lakshmangarh, Sikar,

### 16.1 MAPREDUCE. For personal use only, not for distribution. 333

For personal use only, not for distribution. 333 16.1 MAPREDUCE Initially designed by the Google labs and used internally by Google, the MAPREDUCE distributed programming model is now promoted by several

### MapReduce (in the cloud)

MapReduce (in the cloud) How to painlessly process terabytes of data by Irina Gordei MapReduce Presentation Outline What is MapReduce? Example How it works MapReduce in the cloud Conclusion Demo Motivation:

### Outline. High Performance Computing (HPC) Big Data meets HPC. Case Studies: Some facts about Big Data Technologies HPC and Big Data converging

Outline High Performance Computing (HPC) Towards exascale computing: a brief history Challenges in the exascale era Big Data meets HPC Some facts about Big Data Technologies HPC and Big Data converging

### Comparison of Different Implementation of Inverted Indexes in Hadoop

Comparison of Different Implementation of Inverted Indexes in Hadoop Hediyeh Baban, S. Kami Makki, and Stefan Andrei Department of Computer Science Lamar University Beaumont, Texas (hbaban, kami.makki,

### A Novel Cloud Based Elastic Framework for Big Data Preprocessing

School of Systems Engineering A Novel Cloud Based Elastic Framework for Big Data Preprocessing Omer Dawelbeit and Rachel McCrindle October 21, 2014 University of Reading 2008 www.reading.ac.uk Overview

### Big Data Technology Map-Reduce Motivation: Indexing in Search Engines

Big Data Technology Map-Reduce Motivation: Indexing in Search Engines Edward Bortnikov & Ronny Lempel Yahoo Labs, Haifa Indexing in Search Engines Information Retrieval s two main stages: Indexing process

CONTEXTUAL ADVERTISEMENT MINING BASED ON BIG DATA ANALYTICS A.Divya *1, A.M.Saravanan *2, I. Anette Regina *3 MPhil, Research Scholar, Muthurangam Govt. Arts College, Vellore, Tamilnadu, India Assistant

### Use of Data Mining Techniques to Improve the Effectiveness of Sales and Marketing

Available Online at www.ijcsmc.com International Journal of Computer Science and Mobile Computing A Monthly Journal of Computer Science and Information Technology IJCSMC, Vol. 4, Issue. 4, April 2015,

### Distributed forests for MapReduce-based machine learning

Distributed forests for MapReduce-based machine learning Ryoji Wakayama, Ryuei Murata, Akisato Kimura, Takayoshi Yamashita, Yuji Yamauchi, Hironobu Fujiyoshi Chubu University, Japan. NTT Communication

### Mobile Storage and Search Engine of Information Oriented to Food Cloud

Advance Journal of Food Science and Technology 5(10): 1331-1336, 2013 ISSN: 2042-4868; e-issn: 2042-4876 Maxwell Scientific Organization, 2013 Submitted: May 29, 2013 Accepted: July 04, 2013 Published:

### R.K.Uskenbayeva 1, А.А. Kuandykov 2, Zh.B.Kalpeyeva 3, D.K.Kozhamzharova 4, N.K.Mukhazhanov 5

Distributed data processing in heterogeneous cloud environments R.K.Uskenbayeva 1, А.А. Kuandykov 2, Zh.B.Kalpeyeva 3, D.K.Kozhamzharova 4, N.K.Mukhazhanov 5 1 uskenbaevar@gmail.com, 2 abu.kuandykov@gmail.com,

### Self Learning Based Optimal Resource Provisioning For Map Reduce Tasks with the Evaluation of Cost Functions

Self Learning Based Optimal Resource Provisioning For Map Reduce Tasks with the Evaluation of Cost Functions Nithya.M, Damodharan.P M.E Dept. of CSE, Akshaya College of Engineering and Technology, Coimbatore,

### Searching frequent itemsets by clustering data

Towards a parallel approach using MapReduce Maria Malek Hubert Kadima LARIS-EISTI Ave du Parc, 95011 Cergy-Pontoise, FRANCE maria.malek@eisti.fr, hubert.kadima@eisti.fr 1 Introduction and Related Work

### Introduction to Hadoop and MapReduce

Introduction to Hadoop and MapReduce THE CONTRACTOR IS ACTING UNDER A FRAMEWORK CONTRACT CONCLUDED WITH THE COMMISSION Large-scale Computation Traditional solutions for computing large quantities of data

### A REAL TIME MEMORY SLOT UTILIZATION DESIGN FOR MAPREDUCE MEMORY CLUSTERS

A REAL TIME MEMORY SLOT UTILIZATION DESIGN FOR MAPREDUCE MEMORY CLUSTERS Suma R 1, Vinay T R 2, Byre Gowda B K 3 1 Post graduate Student, CSE, SVCE, Bangalore 2 Assistant Professor, CSE, SVCE, Bangalore

### Mining Large Datasets: Case of Mining Graph Data in the Cloud

Mining Large Datasets: Case of Mining Graph Data in the Cloud Sabeur Aridhi PhD in Computer Science with Laurent d Orazio, Mondher Maddouri and Engelbert Mephu Nguifo 16/05/2014 Sabeur Aridhi Mining Large

### UPS battery remote monitoring system in cloud computing

, pp.11-15 http://dx.doi.org/10.14257/astl.2014.53.03 UPS battery remote monitoring system in cloud computing Shiwei Li, Haiying Wang, Qi Fan School of Automation, Harbin University of Science and Technology

### International Journal of Innovative Research in Information Security (IJIRIS) ISSN: 2349-7017(O) Volume 1 Issue 3 (September 2014)

SURVEY ON BIG DATA PROCESSING USING HADOOP, MAP REDUCE N.Alamelu Menaka * Department of Computer Applications Dr.Jabasheela Department of Computer Applications Abstract-We are in the age of big data which

### Hadoop Design and k-means Clustering

Hadoop Design and k-means Clustering Kenneth Heafield Google Inc January 15, 2008 Example code from Hadoop 0.13.1 used under the Apache License Version 2.0 and modified for presentation. Except as otherwise

### SCHEDULING IN CLOUD COMPUTING

SCHEDULING IN CLOUD COMPUTING Lipsa Tripathy, Rasmi Ranjan Patra CSA,CPGS,OUAT,Bhubaneswar,Odisha Abstract Cloud computing is an emerging technology. It process huge amount of data so scheduling mechanism

### Map-Reduce for Machine Learning on Multicore

Map-Reduce for Machine Learning on Multicore Chu, et al. Problem The world is going multicore New computers - dual core to 12+-core Shift to more concurrent programming paradigms and languages Erlang,

### Developing MapReduce Programs

Cloud Computing Developing MapReduce Programs Dell Zhang Birkbeck, University of London 2015/16 MapReduce Algorithm Design MapReduce: Recap Programmers must specify two functions: map (k, v) * Takes

### International Journal of Emerging Technology & Research

International Journal of Emerging Technology & Research High Performance Clustering on Large Scale Dataset in a Multi Node Environment Based on Map-Reduce and Hadoop Anusha Vasudevan 1, Swetha.M 2 1, 2

### INTERNATIONAL JOURNAL OF PURE AND APPLIED RESEARCH IN ENGINEERING AND TECHNOLOGY

INTERNATIONAL JOURNAL OF PURE AND APPLIED RESEARCH IN ENGINEERING AND TECHNOLOGY A PATH FOR HORIZING YOUR INNOVATIVE WORK A REVIEW ON HIGH PERFORMANCE DATA STORAGE ARCHITECTURE OF BIGDATA USING HDFS MS.

### marlabs driving digital agility WHITEPAPER Big Data and Hadoop

marlabs driving digital agility WHITEPAPER Big Data and Hadoop Abstract This paper explains the significance of Hadoop, an emerging yet rapidly growing technology. The prime goal of this paper is to unveil

### Image Search by MapReduce

Image Search by MapReduce COEN 241 Cloud Computing Term Project Final Report Team #5 Submitted by: Lu Yu Zhe Xu Chengcheng Huang Submitted to: Prof. Ming Hwa Wang 09/01/2015 Preface Currently, there s

### Fault Tolerance in Hadoop for Work Migration

1 Fault Tolerance in Hadoop for Work Migration Shivaraman Janakiraman Indiana University Bloomington ABSTRACT Hadoop is a framework that runs applications on large clusters which are built on numerous

IMPROVED FAIR SCHEDULING ALGORITHM FOR TASKTRACKER IN HADOOP MAP-REDUCE Mr. Santhosh S 1, Mr. Hemanth Kumar G 2 1 PG Scholor, 2 Asst. Professor, Dept. Of Computer Science & Engg, NMAMIT, (India) ABSTRACT

### A computational model for MapReduce job flow

A computational model for MapReduce job flow Tommaso Di Noia, Marina Mongiello, Eugenio Di Sciascio Dipartimento di Ingegneria Elettrica e Dell informazione Politecnico di Bari Via E. Orabona, 4 70125

### BigData. An Overview of Several Approaches. David Mera 16/12/2013. Masaryk University Brno, Czech Republic

BigData An Overview of Several Approaches David Mera Masaryk University Brno, Czech Republic 16/12/2013 Table of Contents 1 Introduction 2 Terminology 3 Approaches focused on batch data processing MapReduce-Hadoop

### International Journal of Computer Science Trends and Technology (IJCST) Volume 2 Issue 3, May-Jun 2014

RESEARCH ARTICLE OPEN ACCESS A Survey of Data Mining: Concepts with Applications and its Future Scope Dr. Zubair Khan 1, Ashish Kumar 2, Sunny Kumar 3 M.Tech Research Scholar 2. Department of Computer

### Journal of science STUDY ON REPLICA MANAGEMENT AND HIGH AVAILABILITY IN HADOOP DISTRIBUTED FILE SYSTEM (HDFS)

Journal of science e ISSN 2277-3290 Print ISSN 2277-3282 Information Technology www.journalofscience.net STUDY ON REPLICA MANAGEMENT AND HIGH AVAILABILITY IN HADOOP DISTRIBUTED FILE SYSTEM (HDFS) S. Chandra

### Open source software framework designed for storage and processing of large scale data on clusters of commodity hardware

Open source software framework designed for storage and processing of large scale data on clusters of commodity hardware Created by Doug Cutting and Mike Carafella in 2005. Cutting named the program after

### Exploring the Efficiency of Big Data Processing with Hadoop MapReduce

Exploring the Efficiency of Big Data Processing with Hadoop MapReduce Brian Ye, Anders Ye School of Computer Science and Communication (CSC), Royal Institute of Technology KTH, Stockholm, Sweden Abstract.

### Efficient Analysis of Big Data Using Map Reduce Framework

Efficient Analysis of Big Data Using Map Reduce Framework Dr. Siddaraju 1, Sowmya C L 2, Rashmi K 3, Rahul M 4 1 Professor & Head of Department of Computer Science & Engineering, 2,3,4 Assistant Professor,

### Efficient Data Replication Scheme based on Hadoop Distributed File System

, pp. 177-186 http://dx.doi.org/10.14257/ijseia.2015.9.12.16 Efficient Data Replication Scheme based on Hadoop Distributed File System Jungha Lee 1, Jaehwa Chung 2 and Daewon Lee 3* 1 Division of Supercomputing,

### Recognization of Satellite Images of Large Scale Data Based On Map- Reduce Framework

Recognization of Satellite Images of Large Scale Data Based On Map- Reduce Framework Vidya Dhondiba Jadhav, Harshada Jayant Nazirkar, Sneha Manik Idekar Dept. of Information Technology, JSPM s BSIOTR (W),

### A Review on Load Balancing Algorithms in Cloud

A Review on Load Balancing Algorithms in Cloud Hareesh M J Dept. of CSE, RSET, Kochi hareeshmjoseph@ gmail.com John P Martin Dept. of CSE, RSET, Kochi johnpm12@gmail.com Yedhu Sastri Dept. of IT, RSET,