AENSI Journals
Advances in Natural and Applied Sciences
ISSN: 1995-0772 EISSN: 1998-1090
Journal home page: www.aensiweb.com/anas

Clustering Algorithm Based On Hadoop for Big Data

1 Jayalatchumy D. and 2 Thambidurai P.
1 Assistant Professor, Department of CSE, PKIET, Karaikal, Pondicherry University, India.
2 Professor and Principal, Department of CSE, PKIET, Karaikal, Pondicherry University, India.

A R T I C L E  I N F O
Article history:
Received 12 October 2014
Received in revised form 26 December 2014
Accepted 1 January 2015
Available online 25 February 2015

Keywords: Big Data, Hadoop, Power Iteration, Data Clustering, MapReduce, Scalability.

A B S T R A C T
Big Data is a term coined for the very large datasets arising from the rapid change in information technology. Because such data are difficult to manage with current technologies or tools, mining Big Data creates new challenges, data clustering among them. Scalability is the groundwork of the expected new technologies. This paper aims at improving the performance of clustering algorithms using Big Data computing techniques based on Hadoop and the PIC algorithm. Clustering based on client-based parallel PIC is an effective way to explore the structure of a dataset through power iteration; it increases the speedup and improves the efficiency of the system while maintaining its accuracy.

2015 AENSI Publisher All rights reserved.

To Cite This Article: Jayalatchumy D. and Thambidurai P., Clustering Algorithm Based On Hadoop for Big Data. Adv. in Nat. Appl. Sci., 9(6): 92-96, 2015

INTRODUCTION

The era of petabytes is almost over, leading to a future of zettabytes, with data growing by around 40% every year (Wei Fan, Albert Bifet). Resource sharing across large-scale heterogeneous systems is the central problem data mining technology must take on. Cloud computing provides various solutions to these problems (Xu Zhengqiao, Zhao Dewei, 2012). One of the challenges in Cloud computing is how to process and store large amounts of data,
i.e., Big Data, in an efficient and reliable way. Big Data applications are a way to process such data by means of platforms, tools and techniques for parallel and distributed processing (Luis Eduardo Bautista Villalpando, 2014). The need for designing and implementing very large scale parallel machine learning algorithms has led to powerful platforms, e.g., Hadoop MapReduce (http://en.wikipedia.org/wiki/big_data). MapReduce, developed by Google for processing and generating large datasets, is one of the programming models used to develop Big Data applications. This paper aims to improve the performance of clustering algorithms using MapReduce.

1. Big Data Mining:
Big Data calls for a set of techniques requiring new forms of integration to uncover hidden values in large datasets that are diverse and complex. The data produced nowadays is estimated in the order of zettabytes, and it is growing rapidly every year. These massive datasets hide a large amount of valuable information, and digging it out has become an important area of research in which data mining plays its role. Data mining can be classified into two categories: descriptive data mining and predictive data mining. Cluster analysis is a part of descriptive data mining, where the dataset should be described in a concise manner and should reveal interesting properties of the data (Nair, S., Shah, J. Mehta, 2011). Scalability is at the heart of the new technologies expected to meet the challenges coming along with Big Data. Cloud computing is a suitable technology to provide the needed scalability with elasticity and parallelism (Ran Jin, 2013). The shortcomings of current data mining techniques applied to Big Data lie in scalability and parallelism (Robson, L.F., Cordeiro, 2011). To overcome this challenge, several attempts have been made to exploit massively parallel processing architectures. The first such attempt was made by Google.
Numerous notable attempts have been initiated to exploit massively parallel processing architectures (Dunren Che, 2013). Big Data must deal with heterogeneity, scalability, velocity, privacy, and accuracy.

Corresponding Author: D. Jayalatchumy, Assistant Professor, Department of CSE, PKIET, Karaikal, Pondicherry University, India
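MapReduce, mentioned above as Google's model for processing and generating large datasets, can be illustrated with a minimal single-process word-count sketch. This is a hypothetical toy driver for illustration only: the names map_fn, reduce_fn and run_job are ours, not part of any Hadoop API, and real jobs run on a cluster through the Hadoop Java API or Hadoop Streaming rather than in one process.

```python
from collections import defaultdict

def map_fn(key, line):
    # Map phase: emit a <word, 1> pair for every word in one input record.
    for word in line.split():
        yield word.lower(), 1

def reduce_fn(word, counts):
    # Reduce phase: merge all intermediate values that share one key.
    yield word, sum(counts)

def run_job(lines):
    # Shuffle phase: group the mappers' intermediate pairs by key, so each
    # reduce call sees every value emitted for its key.
    groups = defaultdict(list)
    for offset, line in enumerate(lines):
        for key, value in map_fn(offset, line):
            groups[key].append(value)
    result = {}
    for key, values in groups.items():
        for out_key, out_value in reduce_fn(key, values):
            result[out_key] = out_value
    return result

print(run_job(["big data needs big tools", "hadoop stores big data"]))
# {'big': 3, 'data': 2, 'needs': 1, 'tools': 1, 'hadoop': 1, 'stores': 1}
```

The shuffle step, which groups every intermediate pair by key before any reduce call runs, is what allows the reduce calls to proceed independently and therefore in parallel on a real cluster.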
Jayalatchumy D. and Thambidurai P., 2015

2. Hadoop/MapReduce Overview:
Hadoop is an open source framework designed for storing and processing large datasets on a cluster of commodity hardware. Its two main components are HDFS (Hadoop Distributed File System) and MapReduce: data is stored in HDFS, and its processing takes place in MapReduce. HDFS splits the data into chunks with a default size of 64 MB, or even 128 MB for a very large dataset. Hadoop basically runs five services: the Namenode, Secondary Namenode and Jobtracker, considered the masters, and the Datanode and Tasktracker, considered the slaves. Hence a Hadoop cluster is regarded as a master/slave framework. Each master communicates only with its corresponding slaves (the Namenode with the Datanodes, the Jobtracker with the Tasktrackers), and the masters communicate among themselves. MapReduce has two important phases, namely Map and Reduce. The map function reads the input and outputs <key, value> pairs. The shuffle phase moves the output of the Map phase to the Reduce phase, which merges the intermediate values associated with the same intermediate key when the threshold is reached.

Fig. 1: Architectural Framework of MapReduce.

The signatures of the MapReduce model are
Map (k(i), v(i)) -> list (k(i+1), v(i+1))
The Map function is started in parallel for every pair in the input dataset and produces a list of pairs for each call. The Reduce function is then applied in parallel to each group:
Reduce (k(i+1), list (v(i+1))) -> list (v(i+2))
The Reduce method produces one value, or it can also be null (Ping Zhou, L.E.I., 2011). The phases that perform Map and Reduce are connected; the architectural framework of MapReduce is shown in Figure 1.

3. Clustering Algorithms based on Apache Hadoop:
A cluster is a set of similar objects grouped together. Cluster analysis is the study of algorithms and methods for classifying objects.
It is an unsupervised learning technique for discovering groups or identifying the structure of a dataset. Traditional clustering methods suffer from the curse of dimensionality (Michael Steinbach, 2004). Sequential algorithms have been replaced by parallel techniques. One common method used for clustering is to reduce the dataset to a smaller set of characteristics that represent the entire data; alternatively, the data can be distributed to many nodes as chunks using MapReduce (Chen, W.Y., 2011). The main idea is to identify the clusters within the dataset using parallelization techniques.

3.1 Background:
Spectral clustering (Frank Lin and William W. Cohen, 2010) is popular among clustering techniques because of its advantages over traditional algorithms such as k-means and hierarchical clustering. But spectral clustering requires computing eigenvectors, making it time consuming (Weizhong Yan, 2013). To overcome this limitation, Lin et al. (2010) proposed the Power Iteration Clustering (PIC) algorithm, which finds only one pseudo-eigenvector, a linear combination of the eigenvectors, in linear time. Weizhong Yan et al. (2013) applied the MPI framework to parallelize PIC. Let (x1, x2, ..., xn) be the n data points and p the number of processors. The parallel PIC algorithm starts with the master determining the starting and ending indices of the corresponding data chunk for each processor and broadcasting them to all the slaves. The slave processors then read their chunks (n/p), calculate the sub-similarity matrix, normalize the result by its row sum, and finally send it to the master (Weizhong Yan, 2013). The master calculates the overall row-sum vector on concatenation. The problem with this algorithm is that transferring the data from the server to the clients takes much of the execution time. It also consumes more time finding the vectors, since the initial vectors have to be calculated and sent to the master each time (Niu, X.Z., K. She, 2012). To reduce this processing time, a client-based algorithm is designed in such a way that the client takes responsibility for much of the work, hence
reducing the work of the server. Client-based power iteration clustering works as follows. The master receives the data from the dataset, and the data are split based on the number of slaves (n). On receiving the data from the master, each slave starts its computation: it reads its data file, calculates the row sum, and sends this row sum back to the master. Finally, the master finds the initial vector and communicates it to all slaves. The initial vector and the number of clusters are calculated, and the process ends when the stopping criterion is met. For an algorithm implemented in the MapReduce framework, parallelism comes into the picture when the map functions are forked for every job. These maps run on any node of the distributed environment configured under Hadoop. The job distribution is done by the Hadoop system, and the datasets must be placed in HDFS. The datasets are scanned to find each entry, and its frequency is noted; this is given as output to the reduce function in the class defined in Hadoop. In the reducer function each output is collected and the results are produced.

3.2 Algorithm Design for pc-pic in MapReduce:
In this section we present a detailed design of the pc-pic algorithm using MapReduce, based on similarity measures and matrix-vector multiplication, since MapReduce is an easy parallel computation framework that takes care of node failure through its replication process. The map method receives the matrix input and generates intermediate <key, value> pairs; it finds the similarity matrix and normalizes it (Kyuseok Shim, 2012). The reducer gets the intermediate values and calculates the initial vector, stopping at the convergence point to output the final vectors. By applying k-means to the vectors, the clusters are obtained. The algorithm for MapReduce is shown in Figure 2.
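Before turning to the MapReduce formulation, the core computation described above — normalizing the similarity matrix as W = D^-1 A, taking the row sums divided by their total as the initial vector v0 = R/r, and iterating v <- Wv / ||Wv||1 until the change stabilizes — can be sketched sequentially. This is a toy sketch, not the authors' implementation: the parameters k, max_iter and eps are assumed values, and a largest-gap cut on the one-dimensional embedding stands in for the k-means step used in the paper.

```python
def pic_cluster(A, k=2, max_iter=1000, eps=1e-6):
    """Toy sketch of Power Iteration Clustering (Lin and Cohen, 2010).

    A is a dense similarity matrix (list of lists) with positive row sums.
    Returns one cluster label per data point.
    """
    n = len(A)
    # Affinity step: W = D^-1 A, where D holds the row sums of A.
    row_sums = [sum(row) for row in A]
    W = [[A[i][j] / row_sums[i] for j in range(n)] for i in range(n)]
    # Initial vector v0 = R / r: row sums divided by their total.
    total = sum(row_sums)
    v = [s / total for s in row_sums]
    # Power iteration: v <- W v / ||W v||_1, stopping early when the
    # per-entry change ("acceleration") stabilises, not at full convergence.
    prev_delta = None
    for _ in range(max_iter):
        wv = [sum(W[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(abs(x) for x in wv)
        new_v = [x / norm for x in wv]
        delta = [abs(new_v[i] - v[i]) for i in range(n)]
        v = new_v
        if prev_delta is not None and \
                max(abs(delta[i] - prev_delta[i]) for i in range(n)) < eps:
            break
        prev_delta = delta
    # Cluster the 1-D embedding v. The paper applies k-means here; this
    # sketch simply cuts the sorted embedding at the k-1 largest gaps.
    order = sorted(range(n), key=lambda i: v[i])
    gaps = sorted(range(n - 1),
                  key=lambda i: v[order[i + 1]] - v[order[i]],
                  reverse=True)[:k - 1]
    labels = [0] * n
    label = 0
    for pos, idx in enumerate(order):
        labels[idx] = label
        if pos in gaps:
            label += 1
    return labels

# Two obvious groups: points 0-2 are mutually similar, as are points 3-4.
A = [[1.0, 0.9, 0.9, 0.1, 0.1],
     [0.9, 1.0, 0.9, 0.1, 0.1],
     [0.9, 0.9, 1.0, 0.1, 0.1],
     [0.1, 0.1, 0.1, 1.0, 0.9],
     [0.1, 0.1, 0.1, 0.9, 1.0]]
print(pic_cluster(A, k=2))  # points {0, 1, 2} and {3, 4} get different labels
```

The early-stopping criterion matters: the iteration converges to a constant vector, so the cluster structure is visible only in the transient pseudo-eigenvector, which is exactly what stopping when the acceleration vanishes captures.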
Map function - The input dataset is stored on HDFS as a sequence file of <key, value> pairs, where each pair is a record in the dataset. The dataset is split and globally broadcast; consequently the sub-matrix is computed in parallel by each map task. The mapper constructs the sub-similarity matrix and obtains the initial vector v0.

Reduce function - The input to the reducer function comes from the intermediate file. The similarity matrix obtained from the map is normalized in the reduce function, and the final vector is also calculated there.

Using MapReduce for clustering, we split the dataset among the available mappers. For each mapper we perform an iteration based on the number of nodes, and the reducer iterates through the output of the mappers. Hence the computational complexity is reduced to a great extent; it is computed from the number of iterations and the dimensionality of the similarity matrix relative to the number of mappers involved. Thus the computational complexity of the proposed algorithm using MapReduce is expressed as O(k(n+1)/m), where k is the number of iterations and m the number of mappers.

Fig. 2: Algorithm for pc-pic in MapReduce.
INPUT: Dataset {x1, x2, x3, ..., xn} of sample size S.
OUTPUT: Clusters.
MAPPER
Step 1. m mappers read the data and split the dataset into <key, value> lists.
Step 2. The corresponding <key, value> list is the input to Map.
Step 3. Calculate the similarity matrix from the similarity function: Similarity(A, B) = A . B / (|A| |B|).
Step 4. Find the affinity of the given matrix (W = D^-1 A).
Step 5. Normalize the similarity matrix W and emit it as the output of the mapper.
REDUCER
Step 6. The intermediate values obtained from the mappers are given as input to the reducers, which use them to obtain the initial vector v0.
Step 7. Calculate the norm, where R[n] is the row sum, using v0 = R/r.
Step 8. The initial vector v0 obtained is used to find the final vector vt.
Step 9. The process is iterated until convergence; from the reducer output, clusters are obtained and sent back.
Step 10.
The vectors are merged to form clusters.
Step 11. Return clusters.

4. Results and Findings:
This section focuses on demonstrating the scalability of implementing client-based PIC in the MapReduce framework. To demonstrate the data scalability of the algorithm over the framework, it was implemented on a number of synthetic datasets of many records; a data generator was created to produce the datasets. The experiments were performed on a local cluster. The local cluster is an Intel
processor machine and the number of nodes is 8. The stopping criterion used to determine the number of clusters created is a change approximately equal to 0. In this work, speedup has been used as the performance measure.

4.1 Speed up:
The first criterion considered when analyzing the performance of parallel systems is how many times faster a parallel program runs than a sequential one; the most important reason for parallelization is to make a program run faster. The formula for speedup is
Speedup = Ts / Tp
where Ts is the execution time of the sequential program that solves the problem and Tp is the execution time of the parallel program that solves the same problem. For example, a job that runs in 120 s sequentially and 30 s in parallel achieves a speedup of 120/30 = 4.

Fig. 3: Speed up of pc-pic in MapReduce.
Fig. 4: Efficiency of pc-pic in MapReduce.

The performance results of pc-pic on MapReduce in terms of speedup and efficiency are shown in Figure 3 and Figure 4. The graphs plot the number of processors along the x axis and time along the y axis. The scalability of the algorithm with different dataset sizes has been analyzed; from the results it is inferred that the performance of pc-pic with MapReduce improves as the system scales.

5. Conclusion:
This paper addresses massive data clustering on Hadoop using the PIC algorithm. It puts forward a new client-based parallel PIC (pc-pic) algorithm implemented on the MapReduce framework which performs better than the existing algorithms. In this study we applied MapReduce to the pc-pic algorithm to cluster large numbers of data points from synthetic, generated datasets of varying size. Since the implementation uses MapReduce, it also addresses node failure and fault tolerance. In future work we will continue to research the algorithm's clustering efficiency on large datasets in a cloud environment.

REFERENCES

Felician Alecu, 2007.
Performance Analysis of Parallel Algorithms, Journal of Applied Quantitative Methods, 2(1): 129-134.

Frank Lin and William W. Cohen, 2010. Power Iteration Clustering, International Conference on Machine Learning (ICML), Haifa, Israel.

http://hadoop.apache.org/docs/r1.2.1/mapred_tutorial.html

http://en.wikipedia.org/wiki/big_data

http://www.cs.uic.edu/~ajayk/c566/hadoopmap-reduce.pdf
Kyuseok Shim, 2012. MapReduce Algorithms for Big Data Analysis, Proceedings of the VLDB Endowment, 5(12).

Luis Eduardo Bautista Villalpando, Alain April and Alain Abran, 2014. Performance analysis model for Big Data applications in cloud computing, Journal of Cloud Computing: Advances, Systems and Applications, 3: 19. http://www.journalofcloudcomputing.com/content/3/1/19

Michael Steinbach, Levent Ertöz, and Vipin Kumar, 2004. The Challenges of Clustering High Dimensional Data, New Directions in Statistical Physics, Part IV, pp. 273-309. doi 10.1007/978-3-662-08968

Niu, X.Z., K. She, 2012. Study of fast parallel clustering partition algorithm for large dataset, Comput. Sci., 39: 134-151.

Ping Zhou, L.E.I. Jingsheng, Y.E. Wenjun, 2011. Large-Scale Data Sets Clustering Based on MapReduce and Hadoop, Journal of Computational Information Systems, 7(16): 5956-5963.

Ran Jin, Chunhai Kou, Ruijuan Liu and Yefeng Li, 2013. Efficient Parallel Spectral Clustering Algorithm Design for Large Datasets under Cloud Computing, Journal of Cloud Computing: Advances, Systems and Applications, 2: 18.

Robson, L.F., Cordeiro, 2011. Clustering Very Large Multi-dimensional Datasets with MapReduce, KDD '11, San Diego, California, pp. 21-24.

Nair, S., Shah, J. Mehta, 2011. Clustering with Apache Hadoop, International Conference and Workshop on Emerging Trends in Technology (ICWET), TCET, Mumbai, India.

Dunren Che, Mejdl Safran, and Zhiyong Peng, 2013. From Big Data to Big Data Mining: Challenges, Issues, and Opportunities, DASFAA Workshops, LNCS, 7827: 1-15.

Wei Fan, Albert Bifet, Mining Big Data: Current Status, and Forecast to the Future, SIGKDD Explorations, 14(2).

Xu Zhengqiao, Zhao Dewei, 2012. Research on Clustering Algorithm for Massive Data Based on Hadoop Platform, IEEE, doi 10.1109/CSSS.2012.19.

Chen, W.Y., Y. Song, H. Bai, C. Lin, E.Y.
Chang, 2011. Parallel spectral clustering in distributed systems, IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(3): 568-586.

Weizhong Yan, et al., 2013. p-PIC: Parallel power iteration clustering for big data, Journal of Parallel and Distributed Computing, 73(3), special issue on Models and Algorithms for High-Performance Distributed Data Mining.