Survey on Data Mining Algorithm and Its Application in Healthcare Sector Using Hadoop Platform




K. Sharmila 1, Dr. S. A. Vethamanickam 2
1 Asst. Prof. & Research Scholar, Dept. of Computer Science, Vels University, Chennai, India.
2 Research Advisor, Chennai, India.

Abstract- In this survey paper we examine the benefits of Hadoop, combined with data mining, in the healthcare sector, where data flows in massive volumes. In a developing country like India, with its huge population, healthcare faces several problems: the expenses borne by economically underprivileged people, access to hospitals, and research on medical Big Data. Apache Hadoop has seen worldwide adoption and has put parallel processing of Big Data into the hands of the average programmer, so it has become imperative to migrate existing data mining algorithms onto the Hadoop platform for increased parallel processing efficiency. In this paper we survey progress made in data mining techniques, their adoption on the Hadoop platform for Big Data, the algorithms used on that platform, and the open challenges in applying such algorithms to Indian medical data sets.

Keywords: Hadoop, Data mining, Healthcare, Big data.

I. INTRODUCTION

In this era of Big Data, organizations and the health industry face the problems of the three V's, namely Volume, Velocity and Variety: migrating data over the network for transformation or analysis has become unrealistic. Moving terabytes of data from one system to another is often infeasible for the network administrator, because the process is slow and limited by SAN (Storage Area Network) bandwidth. The Apache Hadoop framework instead provides a computing model that facilitates the distributed processing of huge data sets across clusters of systems.
The framework is designed to scale from a single server to thousands of machines for both computation and storage. This powerful feature of the Hadoop framework has attracted a variety of companies and organizations to use it for both research and production. Healthcare is one of the most important areas in which developing and developed countries seek to ease the burden on the priceless human resource. Commonwealth governments have identified various health issues, such as diabetes, as significant and growing global public health problems. Estimates show that 40 million Indians suffer from diabetes, and the crisis appears to be growing at an alarming rate: by 2020 the number is expected to double, even though half of the diabetics in India remain undiagnosed, in part because of the massive volume of data involved. The healthcare industry is now flooded with massive amounts of data that need validation and accurate analysis. Big Data analytics and Hadoop can play a major role in processing and analyzing healthcare data in its various forms, delivering suitable applications and in turn reducing the cost of services to the common man; nevertheless, there remain open challenges to conquer, which are explicitly stated in this survey paper.

II. DATA MINING

Data mining is the core step that results in the discovery of hidden but useful knowledge from massive databases. It is the non-trivial extraction of previously unknown and useful information from data; it can also be defined as the science of extracting useful information from large databases. The two primary goals of data mining are prediction and description. Prediction uses some variables or fields in the data set to predict unknown or future values of other variables of interest. Description focuses on finding patterns describing the data that can be interpreted by humans.

2.1 Categorization of Data Mining Techniques

Data mining algorithms fall into four classes:
a. Association rule learning: This category of algorithms searches for relations between variables. It is used in applications such as identifying items that are frequently visited or bought together.

b. Clustering: This category of algorithms discovers groups and structures in the data such that objects within the same group (cluster) are more similar to each other than to those in other groups.

c. Classification: This category of algorithms associates an unknown structure with a well-known structure.

d. Regression: This category of algorithms attempts to find a function that models the data with the least error.

III. CURRENT WORKS IN DATA MINING ALGORITHMS ON HADOOP

S.No  Author                                    Algorithm used
1     Fayyad et al. (1996)                      Data mining techniques
2     Zhao et al. (2009)                        k-means clustering
3     Gong-Qing Wu (2009)                       C4.5 decision tree
4     Jimmy Lin and Chris Dyer (2010)           EM algorithms
5     Zhenhua (2010)                            K-means algorithm
6     Kang and Christos Faloutsos (2012)        Graph mining algorithms
7     Anjan K Koundinya et al. (2012)           Apriori algorithm
8     Zhanquan Sun (2012)                       Support Vector Machine
9     Jiangtao Yin (2012)                       EM
10    Nandakumar and Nandita Yambem (2014)      Existing algorithms
11    Ranshul Chaudhary et al. (2014)           k-means, hierarchical clustering, COBWEB and DBSCAN

1. Fayyad et al. (1996) describe various data mining techniques that allow extraction of unknown relationships among data items from large data collections, which are useful for decision making.

2. A fast parallel k-means clustering algorithm based on MapReduce was proposed by Zhao et al. (2009); MapReduce has been widely embraced by both academia and industry. Their results show that the proposed algorithm can process large datasets on commodity hardware effectively. One problem noticed when testing parallel k-means is that the speedup is not linear; the main reason is that communication overhead increases with the size of the dataset.

3. The C4.5 decision tree classification algorithm was implemented on Apache Hadoop by Gong-Qing Wu (2009). In this work, many duplicates were found while constructing the final classifier through bagging-ensemble-based reduction. These duplicates could have been avoided if a proper data partitioning method had been applied.

4. A very detailed treatment of applying EM algorithms to text processing and fitting them into the MapReduce programming model was given by Jimmy Lin and Chris Dyer (2010). EM fits naturally into the MapReduce model by making each EM iteration one MapReduce job. This work observed that when global data is needed to synchronize Hadoop tasks, doing so is difficult with the current support from the Hadoop platform.

5. The K-means algorithm was applied to remote sensing images in Hadoop by Zhenhua (2010). One important lesson from this experiment is that Hadoop operates on text; when an image has to be represented as text and then processed, the overhead of representation and processing is huge even for small images.

6. Kang and Christos Faloutsos (2012) applied Hadoop to graph mining on social network data. An important observation here is that some graph mining algorithms cannot be parallelized, so approximate solutions are needed.

7. The Apriori algorithm was implemented on the Apache Hadoop platform by Anjan K Koundinya et al. (2012). Contrary to the belief that parallel processing takes less time to find frequent item sets, their experimental observations showed that multi-node Hadoop with differential system configuration (FHDSC) took more time; the reason lay in the way the data had been partitioned across the nodes.

8. Zhanquan Sun (2012) explored the applicability of the SVM (Support Vector Machine) on the MapReduce platform. Through his experiments he concluded that although MapReduce is able to reduce the training and computation time for SVM, the partitioning method remained very unclear.

9. EM is an iterative approach that alternates between an expectation step (E-step) and a maximization step (M-step). An EM variant with frequent updates, proposed by Jiangtao Yin (2012) to make EM a parallel algorithm, showed that the cost of frequent updates is very high in Hadoop clusters. To alleviate this problem, new mechanisms must be explored to reduce frequent updates to block updates. The Apache Mahout implementation of Naive Bayes performs very well and reduces training time, but the platform could still be improved to support a block key-value updating mechanism.

10. Nandakumar and Nandita Yambem (2014) describe how Apache Hadoop has seen worldwide adoption in data centers, which makes it imperative to migrate existing data mining algorithms onto the Hadoop platform for increased parallel processing efficiency.

With the introduction of big data analytics, this trend of migrating existing data mining algorithms to the Hadoop platform has become widespread; in their paper they explore the current migration activities and the challenges in migration.

11. Ranshul Chaudhary et al. (2014) surveyed some well-known data mining algorithms from the clustering family, namely k-means, hierarchical clustering, COBWEB and DBSCAN; the results were compared and analyzed according to their efficiency.

IV. HADOOP

Hadoop is a Java-based programming framework that supports the processing of large data sets in a distributed computing environment; it is part of the Apache project sponsored by the Apache Software Foundation. Hadoop was originally conceived on the basis of Google's MapReduce, in which an application is broken down into numerous small parts. The Apache Hadoop software library can detect and handle failures at the application layer. Hadoop mainly comprises: 1) the Hadoop Distributed File System (HDFS) and 2) Hadoop MapReduce.

4.1 HDFS

HDFS has several features desirable for massively parallel data processing: (1) it works on commodity clusters and copes with hardware failures, (2) it provides streaming data access, (3) it deals with very large data sets, (4) it uses a simple coherency model, and (5) it is portable across heterogeneous hardware and software platforms. HDFS follows a master/slave architecture. An HDFS cluster consists of a single master node, the Name Node, which manages the file system namespace and regulates client access to files, and a set of Data Nodes, which store data as blocks within files. The Name Node is responsible for mapping data blocks to Data Nodes and also manages file system operations such as opening, closing, and renaming files and directories. The Name Node's information must be preserved even if the Name Node machine fails, so copies of the Name Node's metadata are maintained on a number of machines; if the Name Node crashes, these copies can be used for recovery. A node maintaining such a copy is called a secondary Name Node.

4.2 Hadoop MapReduce

MapReduce was originally proposed by Google to handle large-scale web search applications. The approach has proved to be effective for developing machine learning, data mining, and search applications in data centers.

The Hadoop ecosystem consists of the following components:
HBase: a distributed column-oriented database.
Hive: a distributed data warehouse that provides an SQL-based query language.
Pig: a data flow language and execution environment.
Zookeeper: a coordination service that allows distributed processes in large systems to synchronize with each other so that clients receive consistent data.
Sqoop: a tool designed for efficiently transferring bulk data between Hadoop and structured data stores such as relational databases.
Flume: a distributed service for collecting, aggregating, and moving large amounts of log data.

Figure: HDFS Architecture.
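To make the MapReduce programming model concrete in the data mining setting surveyed above, the parallel k-means pattern of Section III (entry 2) can be expressed as one map function and one reduce function. The following is a minimal plain-Python sketch of the contract, not Hadoop code and not the implementation of Zhao et al.; the in-process `run_mapreduce` driver stands in for one Hadoop job, which would run the same functions in parallel across the cluster:

```python
from collections import defaultdict
import math

def run_mapreduce(records, mapper, reducer):
    # Toy, in-process stand-in for one Hadoop MapReduce job:
    # map every record, shuffle/group values by key, reduce each group.
    groups = defaultdict(list)
    for record in records:
        for key, value in mapper(record):
            groups[key].append(value)
    return {key: reducer(key, values) for key, values in groups.items()}

# One k-means iteration as one job; CENTROIDS plays the role of
# state broadcast to every mapper.
CENTROIDS = [(0.0, 0.0), (5.0, 5.0)]

def kmeans_mapper(point):
    # Map: emit (index of the nearest centroid, point).
    idx = min(range(len(CENTROIDS)),
              key=lambda i: math.dist(point, CENTROIDS[i]))
    yield idx, point

def kmeans_reducer(idx, points):
    # Reduce: average the points assigned to centroid idx
    # to obtain the new centroid for the next iteration.
    n = len(points)
    return tuple(sum(coord) / n for coord in zip(*points))

points = [(0.0, 0.0), (0.2, 0.1), (5.0, 5.0), (5.2, 4.9)]
new_centroids = run_mapreduce(points, kmeans_mapper, kmeans_reducer)
```

Each MapReduce job performs a single iteration, and the driver resubmits jobs until the centroids stabilize; every iteration re-reads and re-shuffles the full dataset, which is one way to see why the communication overhead noted for parallel k-means grows with dataset size.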

V. CURRENT APPLICATIONS OF DIABETIC DATA SETS USING DATA MINING

1. Tamilarasi and Sapna (2008) describe how research on complex diseases seems to be approaching its final goal, the prevention and cure of disease, only very slowly.

2. Huy Nguyen Anh Pham and Evangelos Triantaphyllou predict whether a new patient would test positive for diabetes using a new approach, called the Homogeneity-Based Algorithm (HBA), on the Pima Indian diabetes data set; the authors conclude that this is very important both for accurately predicting diabetes and for the data mining community in general.

3. Parthiban et al. (2011) tried to predict the chances of heart disease using attributes from diabetic diagnoses. This can be extended in future to predict other ailments that arise from diabetes, such as visual impairment, and the results of the data analysis can be used in further research to enhance the accuracy of the prediction system.

4. Karthikeyani et al. (2012) conclude that different data mining classification techniques can be used for the identification and prevention of diabetes among patients; among the ten classification techniques compared for predicting diabetes, the CS-CRT algorithm performed best.

5. Several intelligent classifiers, including Bayesian, functional, rule-based, decision tree and ensemble methods, were applied to diabetes mellitus diagnosis on the PID dataset by Najmeh Hosseinpour et al. (2012); they conclude that bagging with a logistic core had the best performance.

6. Parthiban and Srivatsa (2012) tried to predict the chances of heart disease using attributes from diabetic diagnoses and showed that heart disease vulnerability in diabetic patients can be diagnosed with reasonable accuracy; hence the SVM model can be recommended for classification of the diabetic dataset.

VI. CURRENT APPLICATIONS OF DIABETIC DATA SETS USING HADOOP

S.No  Author                                               Feature of algorithm used
1     Sadhana and Savitha Shetty (2014)                    Implementation of Hive and R
2     Peter Augustine (2014)                               Consideration of clinical facts
3     Wullianallur Raghupathi and Viju Raghupathi (2014)   Generating output features at a faster rate

1. Sadhana and Savitha Shetty (2014) carried out a detailed, efficient analysis of a diabetic data set with the help of Hive and R. The facts revealed during the process can be used to develop efficient prediction models.

2. Peter Augustine (2014) concludes that achieving better outcomes at lower cost has become very important for healthcare, and that Big Data analytics and Hadoop are certainly part of the solution for reaching that goal. Although healthcare big data is in its early days, it is clear that big data strategies for integrating R&D data, running efficient clinical trials, and ultimately improving clinical outcomes are foundational to building that solution, lowering costs, and enhancing the accessibility and availability of healthcare for all 1.2 billion Indians.

3. Wullianallur Raghupathi and Viju Raghupathi (2014) describe big data analytics and its applications in healthcare as being at a promising stage of development, with rapid advances in platforms and tools able to accelerate their maturation.

VII. OPEN CHALLENGES

During this survey we came across many open challenges and problems in the respective works. We list some of the significant ones pertinent to our work.

1. Migrating existing data mining algorithms onto the Hadoop platform to increase parallel processing efficiency is a direct challenge. [1]

2. Synchronization problems cannot be solved, and sharing global data is also a problem. [7]

3. Communication overhead increases with the size of the dataset that Hadoop has to process; techniques to reduce this overhead must be devised. [8]

4. Although Zhanquan Sun concluded that MapReduce is able to reduce the training and computation time for SVM, the partitioning method was very unclear, and no relationship between the partitioning technique and performance could be derived. If partitioning heuristics were part of the Hadoop platform, the burden on programmers would be smaller. [12]

5. The cost of frequent updates is very high in Hadoop clusters. To alleviate this problem, mechanisms based on updates to the closest node must be devised, and heuristic methods must be formed to reduce frequent updates to block updates. [13]

These challenges can be overcome by applying assorted data mining techniques and algorithms.

VIII. CONCLUSION

This paper presents an overview of various data mining techniques and of applications of diabetic data sets on the Hadoop platform. It shows how Hadoop can be used, together with data mining techniques, to predict diabetes and its related diseases on huge data sets. From the references considered above we conclude that the existing platforms have handled such data sets with still-immature implementations. We therefore plan to carry out further research on the Hadoop platform with our own clinical data set.

REFERENCES
[1] A. N. Nandakumar and Nandita Yambem, "A Survey on Data Mining Algorithms on Apache Hadoop Platform", ISSN 2250-2459, Volume 4, Issue 1, January 2014.
[2] Sadhana and Savitha Shetty, "Analysis of Diabetic Data Set Using Hive and R", ISSN 2250-2459, Volume 4, Issue 7, July 2014.
[3] D. Peter Augustine, "Leveraging Big Data Analytics and Hadoop in Developing India's Healthcare Services", International Journal of Computer Applications (0975-8887), Volume 89, No. 16, March 2014.
[4] G. Parthiban, A. Rajesh, and S. K. Srivatsa, "Diagnosis of Heart Disease for Diabetic Patients using Naive Bayes Method".
[5] Karthikeyani et al., "Comparative of Data Mining Classification Algorithm (CDMCA) in Diabetes Disease Prediction", International Journal of Computer Applications (0975-8887), Volume 60, No. 12, December 2012.
[6] Najmeh et al., "Diabetes Diagnosis by Using Computational Intelligence Algorithms", International Journal of Advanced Research in Computer Science and Software Engineering, 2(12), December 2012, pp. 71-77.
[7] Jimmy Lin and Chris Dyer, Data-Intensive Text Processing with MapReduce, April 2010.
[8] "Parallel K-Means Clustering Based on MapReduce", in M. G. Jaatun, G. Zhao, and C. Rong (Eds.), CloudCom 2009, LNCS 5931, pp. 674-679, Springer-Verlag Berlin Heidelberg, 2009.
[9] Kang and Christos Faloutsos, "Big Graph Mining: Algorithms and Discoveries", SIGKDD Explorations, Volume 14, Issue 2.
[10] Anjan K Koundinya, Srinath N K, K A K Sharma, Kiran Kumar, Madhu M N, and Kiran U Shanbag, "Map Reduce Design and Implementation of Apriori Algorithm for Handling Voluminous Data", Advanced Computing: An International Journal (ACIJ), Vol. 3, No. 6, November 2012.
[11] Gong-Qing Wu, "MReC4.5: C4.5 Ensemble Classification with MapReduce", ChinaGrid 2009.
[12] Zhanquan Sun, "Study on Parallel SVM Based on MapReduce", WORLDCOMP 2012.
[13] Jiangtao Yin, "Accelerating Expectation-Maximization Algorithms with Frequent Updates", 2012 IEEE International Conference on Cluster Computing.
[14] http://hadoop.apache.org/, 2013.
[15] Sarika Patil and Shyam Deshmukh, "Survey on Task Assignment Techniques in Hadoop", International Journal of Computer Applications (0975-8887), Volume 59, No. 14, December 2012.
[16] G. Parthiban and A. Rajesh, "Diagnosis of Heart Disease for Diabetic Patients using Naive Bayes Method", International Journal of Computer Applications (0975-8887), Volume 24, No. 3, June 2011.
[17] S. Sapna and A. Tamilarasi, "Data mining fuzzy neural genetic algorithm in Predicting diabetes", BOOM 2K8 Research Journal on Computer Engineering, March 2008.
[18] Ranshul Chaudhary, Prabhdeep Singh, and Rajiv Mahajan, "A Survey on Data Mining Techniques", IJARCCE, Vol. 3, Issue 1, January 2014.