Application of Data Mining Techniques For Diabetic DataSet




Computing For Nation Development, February 25-26, 2010, Bharati Vidyapeeth's Institute of Computer Applications and Management, New Delhi

Application of Data Mining Techniques for Diabetic Dataset

Runumi Devi (1) and Vineeta Khemchandani (2)
Dept. of Computer Application, JSS Academy of Technical Education, NOIDA, Uttar Pradesh
E-Mails: (1) runumi@jssaten.ac.in, (2) vkhemchandani@jssaten.ac.in

ABSTRACT
Medical data mining has great potential for exploring the hidden patterns in the data sets of the medical domain. These patterns can be utilized by physicians for fast and better clinical decision making, and also to curb the occurrence of particular diseases. However, the available raw medical data are widely distributed, heterogeneous in nature and voluminous. Data mining and statistics both strive towards discovering hidden patterns and structures in data; statistics deals with heterogeneous numbers only, whereas data mining deals with heterogeneous fields. We have identified one area of healthcare where data mining techniques can be applied for knowledge discovery. In this paper, the impact of two data mining techniques (FP-Growth and Apriori) on a known diabetic dataset is examined. The rules generated by the FP-Growth approach are also matched and correlated with those generated by the Apriori algorithm.

KEYWORDS
Data mining, Knowledge Discovery in Databases (KDD), Association Rules (AR), FP-tree, T-tree.

1.0 INTRODUCTION
The healthcare environment is generally perceived as being information rich yet knowledge poor [6,13]. There is a wealth of data available within healthcare systems; however, there is a lack of effective analysis tools to discover hidden relationships and trends in the data. Knowledge discovery and data mining have found numerous applications in business and scientific domains, and valuable knowledge can be discovered by applying data mining techniques to healthcare systems [6]. The characteristics of clinical data as they originate during the process of clinical documentation, including issues of data availability and complex representation models, can make data mining applications challenging. Data preprocessing and transformation are therefore required before data mining can be applied to clinical data.

In today's Information Technology (IT) driven society, knowledge is the most significant asset of any organization. Knowledge discovery in databases is a well-defined process consisting of several distinct steps. Data mining is the core step, which results in the discovery of hidden but useful knowledge from massive databases [6]. A formal definition of data mining in databases is as follows: it is a process of semi-automatically analyzing large databases to find patterns that are:
valid: they hold on new data with some certainty;
novel: they are non-obvious to the system;
useful: it should be possible to act on them;
understandable: humans should be able to interpret the pattern.

The role of IT in health care is well established. Knowledge management in health care offers many challenges in the creation, dissemination and preservation of health care knowledge using advanced technologies. The healthcare environment is usually information rich but knowledge poor; data mining techniques can, however, be applied to create a knowledge-rich healthcare environment [6]. Application of data mining techniques to Diabetes and Acquired Immune Deficiency Syndrome (AIDS) datasets can therefore be a highly important area.
In a thickly populated country with scarce resources such as India, information dissemination and knowledge discovery from large databases seem to be the only solution to reduce the number of diabetes patients and to check the spread of AIDS [13].

The article is organized as follows: Section 2 describes the diabetic dataset. Section 3 explains how discretization can be performed on a dataset having continuous values for different attributes. Section 4 defines association rule mining as well as the Apriori and FP-Growth approaches. Section 5 discusses the T-tree data structure used in association rule mining. Section 6 presents the number of frequent itemsets and rules generated by the different approaches. Section 7 presents a discussion and conclusion on the rules generated, and Section 8 presents some directions for future work.

2.0 DATASET DESCRIPTION
The dataset used is the Pima Indian Diabetes Database (PIDD).

2.1 THE PIMA INDIAN DIABETIC DATABASE
The Pima Indians may be genetically predisposed to diabetes (Hanson, Ehm et al. 1998), and it was noted that their diabetic rate was 19 times that of a typical town in Minnesota (Knowler, Bennett et al. 1978). The National Institute of Diabetes and Digestive and Kidney Diseases of the NIH originally owned the Pima Indian Diabetes Database (PIDD) [12]. In 1990 it was received by the UC-Irvine Machine Learning Repository and can be downloaded at www.ics.uci.edu/~mlearn/mlrepository.html. The database has n=768 patients, each with 9 numeric variables. Ages range from 21 to 81 and all patients are female. Out of the nine attributes, six describe the results of physical examinations and the rest describe the results of chemical examinations.

There have been many studies applying data mining techniques to the PIDD. The dependent or target variable is diabetes status within 5 years, represented by the 9th variable (class=1). The attributes are:
1. number of times pregnant
2. 2-hour OGTT plasma glucose
3. diastolic blood pressure
4. triceps skin fold thickness
5. 2-hour serum insulin
6. BMI
7. diabetes pedigree function
8. age
9. diabetes onset within 5 years (0, 1)
The goal is to use the first 8 variables to predict the value of the 9th attribute.
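The UCI distribution of the PIDD is a headerless, comma-separated file with the nine attributes in the order listed above. As an illustration only (the paper does not describe its data-handling code), a minimal Python sketch for loading a local copy, with an assumed file name, is:

```python
import pandas as pd

# Column names follow the attribute list above; the local file name is an assumption.
COLUMNS = ["pregnancies", "ogtt_glucose", "diastolic_bp", "triceps_skinfold",
           "serum_insulin", "bmi", "pedigree", "age", "onset_within_5y"]

pidd = pd.read_csv("pima-indians-diabetes.data", header=None, names=COLUMNS)
print(pidd.shape)                              # expected (768, 9)
print(pidd["onset_within_5y"].value_counts())  # class distribution (0 = no onset, 1 = onset)
```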

3.0 DATA DISCRETIZATION
Many real-world data mining tasks involve continuous attributes. Data discretization is the process of converting continuous attribute values into a finite set of intervals and associating a specific data value with each interval. There are no restrictions on the discrete values associated with a given interval, except that these values must induce some ordering on the discretized attribute domain. Discretization significantly improves the quality of discovered knowledge and also reduces the running time of various data mining tasks such as association rule discovery, classification and prediction [4].

Good discretization may lead to new and more accurate knowledge; bad discretization, on the other hand, leads to unnecessary loss of information or, in some cases, to false information with disastrous consequences. Any discretization process generally leads to some loss of information, and the goal of a good discretization algorithm is to minimize this loss. If discretization produces an unreasonably small number of intervals, it may cause significant information loss; if it generates too many intervals, it may lead to false information.

Discretization of continuous attributes has been studied extensively. Methods range from naive, unsupervised methods such as equal-width and equal-frequency binning to much more sophisticated, supervised methods such as Minimum Description Length (MDL) and discretization algorithms based on Pearson's X² or Wilks' G² statistics [3,4]. Both unsupervised and supervised discretization methods can be further subdivided into top-down and bottom-up methods [3]. A top-down method starts with a single interval that includes all attribute values and then generates a set of intervals by splitting the initial interval into two or more intervals. A bottom-up method initially considers each data point as a separate interval and then repeatedly merges adjacent intervals into new ones.

Equal-depth (frequency) partitioning divides the range into N intervals (bins), each containing approximately the same number of samples. Smoothing is then performed by replacing the values in each bin with the bin mean.
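The paper does not publish its binning code; the helper below is only an illustrative Python sketch of the equal-depth partitioning with bin-mean smoothing just described, using pandas.qcut:

```python
import pandas as pd

def equal_frequency_smooth(values, n_bins):
    """Equal-depth (frequency) partitioning followed by smoothing with bin means."""
    s = pd.Series(values, dtype=float)
    # qcut places roughly the same number of samples in each bin;
    # duplicates="drop" merges bins whose quantile edges coincide.
    codes = pd.qcut(s, q=n_bins, labels=False, duplicates="drop")
    return s.groupby(codes).transform("mean")   # every value replaced by its bin mean

bmi = [18.2, 21.5, 23.0, 26.4, 28.1, 30.0, 31.7, 33.6, 35.2, 40.1]
print(equal_frequency_smooth(bmi, n_bins=5).tolist())
```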
4.0 ASSOCIATION RULE MINING
Association rule mining techniques are used to identify relationships among a set of items in a database [1]. These relationships are not based on inherent properties of the data themselves (as with functional dependencies), but rather on the co-occurrence of the data items. Association rules are most appropriate when we search for completely new rules [1,7]. In this context, association rule mining may generate the probable causes of a particular disease (diabetes) in the form of association rules, which can be used for fast and better clinical decision making.

Let I = {i_1, i_2, ..., i_m} be a set of literals, called items, and let D be a set of transactions, where each transaction T is a set of items such that T ⊆ I. Given the set of transactions D, the problem is to find association rules that have support and confidence greater than a user-specified minimum support and minimum confidence [1]. An association rule is an implication of the form X → Y, where X ⊆ I, Y ⊆ I and X ∩ Y = ∅. The rule X → Y holds in the transaction set D with confidence c if c of the transactions in D that contain X also contain Y. The rule X → Y has support s in D if s of the transactions in D contain X ∪ Y [7,8].

Given the set of transactions D, one may be interested in generating all rules that satisfy fixed constraints on support and confidence. Support and confidence are measures of the interestingness of a rule: a high level of support indicates that the rule is frequent enough for the organization to be interested in it, and a high level of confidence shows that the rule is true often enough to justify a decision based on it [14]. Thus, for a rule X → Y,

Support(X → Y) = (number of transactions containing both X and Y) / |D|
Confidence(X → Y) = Support(X → Y) / Support(X)
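To make the two measures concrete, here is a small illustrative Python sketch with made-up transactions of discretized attribute=value items (not data taken from the PIDD):

```python
def support(itemset, transactions):
    """Fraction of transactions that contain every item of the itemset."""
    itemset = set(itemset)
    return sum(1 for t in transactions if itemset <= set(t)) / len(transactions)

def confidence(x, y, transactions):
    """Confidence of the rule X -> Y, i.e. support(X union Y) / support(X)."""
    return support(set(x) | set(y), transactions) / support(x, transactions)

# Toy transaction set D (each transaction is one patient's discretized record).
D = [{"OGTT=127", "pregnancies=3"},
     {"OGTT=127", "pregnancies=3", "BMI=30"},
     {"OGTT=103", "BMI=30"}]

print(support({"OGTT=127"}, D))                        # 0.666...
print(confidence({"OGTT=127"}, {"pregnancies=3"}, D))  # 1.0
```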

4.1 APRIORI
This algorithm may be considered to consist of two parts [10,14]: the first part finds the frequent itemsets and the second part finds the rules.

For finding frequent itemsets the following steps are followed:
Step 1: Scan all transactions and find all frequent items that have support above s. Let this set of frequent items be L1.
Step 2: Build potential sets of k items from L(k-1) by using pairs of itemsets in L(k-1) such that each pair has the first k-2 items in common. The k-2 common items and the one remaining item from each of the two itemsets are combined to form a k-itemset. The set of such potentially frequent k-itemsets is the candidate set Ck. (For k=2, the potential frequent pairs are built by combining each itemset in L1 with every other item in L1; the set so generated is the candidate set C2.)
Step 3: Scan all transactions and find all k-itemsets in Ck that are frequent. The frequent set so obtained is Lk.

The first pass of the Apriori algorithm simply counts item occurrences to determine the large 1-itemsets. A subsequent pass, say pass k, consists of two phases. First, the large itemsets L(k-1) found in the (k-1)th pass are used to generate the candidate itemsets Ck, using the apriori-gen function. Next, the database is scanned and the support of the candidates in Ck is counted. For fast counting, we need to determine efficiently which candidates in Ck are contained in a given transaction t [11,14].

For finding rules we follow this straightforward procedure: take a large frequent itemset, say l, and find each non-empty subset a. For every such subset a, output a rule of the form a → (l - a) if support(l)/support(a) satisfies the minimum confidence.

4.2 FREQUENT PATTERN GROWTH
This algorithm comprises two major steps: first, compress a large database into a compact Frequent Pattern tree (FP-tree) structure; second, develop an efficient FP-tree based frequent pattern mining method. The major difference between FP-Growth and the Apriori algorithm discussed above is that FP-Growth does not generate candidate itemsets and then test them [5,9].
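For illustration, a compact Python sketch of the two Apriori parts described above follows. It uses a simplified join (any two frequent (k-1)-itemsets whose union has k items) rather than the first-(k-2)-items-in-common optimization, omits the prune step, and is not the implementation used in the paper:

```python
from itertools import combinations

def apriori_rules(transactions, min_support, min_confidence):
    """Part 1: level-wise frequent itemset search; Part 2: rules a -> (l - a)."""
    n = len(transactions)
    transactions = [set(t) for t in transactions]

    def support(itemset):
        return sum(1 for t in transactions if itemset <= t) / n

    # Part 1: frequent 1-itemsets, then candidate generation level by level.
    items = {i for t in transactions for i in t}
    frequent = {}
    level = [frozenset([i]) for i in items if support(frozenset([i])) >= min_support]
    frequent.update({s: support(s) for s in level})
    k = 2
    while level:
        # Simplified join step: union any two (k-1)-itemsets that yield a k-itemset.
        candidates = {a | b for a, b in combinations(level, 2) if len(a | b) == k}
        level = [c for c in candidates if support(c) >= min_support]
        frequent.update({c: support(c) for c in level})
        k += 1

    # Part 2: for every frequent itemset l and non-empty proper subset a,
    # emit a -> (l - a) when support(l) / support(a) meets the confidence threshold.
    rules = []
    for l, sup_l in frequent.items():
        for r in range(1, len(l)):
            for a in map(frozenset, combinations(l, r)):
                conf = sup_l / frequent[a] if a in frequent else sup_l / support(a)
                if conf >= min_confidence:
                    rules.append((set(a), set(l - a), sup_l, conf))
    return frequent, rules

# Example on a tiny, made-up transaction set of discretized items.
D = [{"BP=75", "BMI=30", "DPF=0.5"}, {"BP=75", "BMI=30"}, {"BMI=30", "Age=33"}]
itemsets, rules = apriori_rules(D, min_support=0.6, min_confidence=0.8)
print(rules)   # one rule: {BP=75} -> {BMI=30} with confidence 1.0
```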
5.0 DATA STRUCTURE FOR ASSOCIATION RULE MINING
Association Rule Mining (ARM) obtains a set of rules which indicate that the consequent of a rule is likely to apply if the antecedent applies [1]. To generate such rules, the first step is to determine the support for the sets of items (I) that may be present in the data set, i.e. the frequency with which each combination of items occurs. After eliminating those itemsets whose support fails to meet a given minimum support threshold, the remaining large itemsets can be used to produce ARs of the form A → B, where A and B are disjoint subsets of a large itemset. The ARs generated are usually pruned according to some notion of confidence in each AR. To achieve this pruning, however, it is always necessary first to identify the large itemsets contained in the input data, and this in turn requires an effective storage structure. One efficient data storage mechanism for itemsets is the T-tree [2].

5.1 TOTAL SUPPORT TREE (T-TREE)
A T-tree is a set enumeration tree structure used to store frequent itemset information. It differs from other set enumeration tree structures in two ways:
1. Arrays are used to define the levels in each sub-branch of the tree, which permits indexing at all levels and in turn offers computational advantages.
2. To make indexing possible at all levels, the tree is built in reverse: each branch is founded on the last element of the frequent sets to be stored.

The most significant overhead when considering ARM data structures is that the number of possible combinations represented by the items (columns) in the input data scales exponentially with the size of the record. A partial solution is to store only those combinations that actually appear in the data set. A further mechanism is to make use of the downward closure property of itemsets: if any given itemset I is not large, any superset of I will also not be large. This can be used effectively to avoid generating and computing support for all combinations in the input data. However, the approach requires (1) a number of passes over the data set and (2) the construction of candidate sets to be counted in the next pass. The implementation of this structure can be optimized by storing the levels of the tree in the form of arrays, thus reducing the number of links needed and providing direct indexing. For the latter purpose, it is more convenient to build a reverse version of the tree, referred to as a T-tree (Total support tree).

6.0 FIRST RESULT
Table 1 shows how many frequent itemsets and rules were produced using the different approaches and different parameter settings (support thresholds such as 3% combined with varying confidence thresholds).

Table 1: Number of rules per approach

                 FP-Growth                       Apriori
    Frequent itemsets    Rules      Frequent itemsets    Rules
          121              3              121              3
          121             14              121             14
          121             23              121             23
          121             37              121             37
           68          No rules            68          No rules
           68              6               68              6
           68             13               68             13
           68             21               68             21
           49          No rules            49          No rules
           49              4               49              4
           49             11               49             11
           49             15               49             15
          195              8              195              8
          195             74              195             74
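Table 1 was produced with the authors' own T-tree based software. Purely as an illustration of how a similar Apriori versus FP-Growth comparison could be reproduced, the sketch below uses the open-source mlxtend library on an equal-frequency-binned, one-hot encoded version of the PIDD; the file name, bin count and support threshold are assumptions:

```python
import time
import pandas as pd
from mlxtend.frequent_patterns import apriori, fpgrowth

cols = ["pregnancies", "ogtt_glucose", "diastolic_bp", "triceps_skinfold",
        "serum_insulin", "bmi", "pedigree", "age", "onset_within_5y"]
pidd = pd.read_csv("pima-indians-diabetes.data", header=None, names=cols)  # assumed file

# Equal-frequency binning of the eight predictors, class kept as a yes/no item,
# then one-hot encoding so every column is a boolean "attribute=interval" item.
binned = pidd[cols[:-1]].apply(lambda c: pd.qcut(c, q=5, duplicates="drop").astype(str))
binned["onset_within_5y"] = pidd["onset_within_5y"].map({0: "no", 1: "yes"})
items = pd.get_dummies(binned).astype(bool)

for name, algo in [("Apriori", apriori), ("FP-Growth", fpgrowth)]:
    start = time.perf_counter()
    frequent = algo(items, min_support=0.03, use_colnames=True)
    elapsed = time.perf_counter() - start
    print(f"{name}: {len(frequent)} frequent itemsets in {elapsed:.3f} s")
```

Both calls should return the same frequent itemsets, mirroring the equal counts per setting in Table 1.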

Figure 1: Rule generation time (runtime in seconds) versus support threshold (%) for FP-Growth and Apriori.

7.0 DISCUSSION AND CONCLUSION
Our goal was to observe the impact of data mining techniques on a diabetic dataset. In this study we have implemented two association rule mining algorithms, namely Apriori and FP-Growth. In FP-Growth a novel data structure, the frequent pattern tree (FP-tree), is implemented for storing compressed, crucial information about frequent patterns, and the FP-growth method is followed for efficient mining of frequent patterns in a large database. FP-Growth has several advantages over the Apriori approach: (1) it constructs a highly compact FP-tree, which is usually substantially smaller than the original database and thus saves costly database scans in the subsequent mining process; (2) it applies a pattern growth method which avoids costly candidate generation and testing by successively concatenating the frequent 1-itemsets found in the (conditional) FP-trees; (3) it applies a partitioning-based, divide-and-conquer method which dramatically reduces the size of the subsequent conditional pattern bases and conditional FP-trees.

We have observed that both techniques generate the same number of frequent itemsets and, as a consequence, the same number of rules for the same known dataset under the same constraints. There were two exceptions in the rules generated by the two techniques: both FP-Growth and Apriori generate no association rules under the settings (support=5%, confidence=80%) and (support=6%, confidence=80%). Here the constraints seem to be too hard, as we are demanding 80% reliability together with a 5% or 6% occurrence. Moreover, the rule generation time graph (Figure 1) demonstrates the efficiency of the FP-Growth approach in terms of generation time.

Although most of these rules provided valuable knowledge, we describe only some of the most beneficial rules at 100.0% confidence, according to the secondary data source, which are given below:

1. IF (OGTT=127) THEN (number of times pregnant=3)
2. IF (Diastolic Blood Pressure=75) AND (BMI=30) THEN (Diabetes pedigree function=0.5)   [BMI = weight in kg/(height in m)^2]
3. IF (number of times pregnant=3) AND (Age=35) THEN (Diabetes pedigree function=0.33)
4. IF (Diastolic Blood Pressure=50) THEN (Not diabetic)
5. IF (number of times pregnant=6) AND (BMI=34) THEN (Not diabetic)
6. IF (Triceps Skinfold Thickness=22) AND (Diabetes pedigree function=0.5) THEN (Not diabetic)
7. IF (number of times pregnant=5) AND (Diabetes pedigree function=0.66) THEN (Not diabetic)
8. IF (number of times pregnant=7) AND (Diabetes pedigree function=0.66) THEN (Not diabetic)
9. IF (BMI=35) AND (Diabetes pedigree function=0.66) THEN (Not diabetic)
10. IF (OGTT=103) THEN (Diabetic)
11. IF (OGTT=105) THEN (Diabetic)
12. IF (OGTT=119) THEN (Diabetic)
13. IF (OGTT=120) THEN (Diabetic)
14. IF (Diastolic Blood Pressure=63) THEN (Diabetic)
15. IF (number of times pregnant=2) AND (Diastolic Blood Pressure=75) THEN (Diabetic)
16. IF (number of times pregnant=3) AND (Triceps Skinfold Thickness=17) THEN (Diabetic)
17. IF (number of times pregnant=2) AND (BMI=30) THEN (Diabetic)
18. IF (Diastolic Blood Pressure=75) AND (BMI=30) THEN (Diabetic)
19. IF (number of times pregnant=3) AND (BMI=33) THEN (Diabetic)
20. IF (OGTT=113) AND (Diabetes pedigree function=0.33) THEN (Diabetic)
21. IF (OGTT=120) AND (Diabetes pedigree function=0.33) THEN (Diabetic)
22. IF (BMI=32) AND (Diabetes pedigree function=0.33) THEN (Diabetic)
23. IF (OGTT=119) AND (Diabetes pedigree function=0.5) THEN (Diabetic)
24. IF (Diastolic Blood Pressure=75) AND (Diabetes pedigree function=0.5) THEN (Diabetic)
25. IF (Diastolic Blood Pressure=75) AND (BMI=30) THEN (Diabetes pedigree function=0.5) AND (Diabetic)
26. IF (Diastolic Blood Pressure=75) AND (BMI=30) AND (Diabetes pedigree function=0.5) THEN (Diabetic)
27. IF (Diastolic Blood Pressure=75) AND (BMI=30) AND (Diabetic) THEN (Diabetes pedigree function=0.5)
28. IF (BMI=31) AND (Diabetes pedigree function=0.5) THEN (Diabetic)

29. IF (Age=23) THEN (Diabetic)
30. IF (number of times pregnant=1) AND (Age=25) THEN (Diabetic)
31. IF (BMI=31) AND (Age=25) THEN (Diabetic)
32. IF (number of times pregnant=3) AND (Age=26) THEN (Diabetic)
33. IF (Diabetes pedigree function=0.33) AND (Age=28) THEN (Diabetic)
34. IF (number of times pregnant=3) AND (Age=29) THEN (Diabetic)
35. IF (Diabetes pedigree function=0.33) AND (Age=31) THEN (Diabetic)
36. IF (number of times pregnant=2) AND (Age=33) THEN (Diabetic)
37. IF (BMI=30) AND (Age=33) THEN (Diabetic)

Figure 2: Association Rules for Women

The group of rules with 100% confidence shows that if the value of the 2-hour OGTT plasma glucose is between 103 and 120, the person is likely to become diabetic. If the diastolic blood pressure is 75 and the body mass index is 30 and there is a history of diabetes in the family, the chances of becoming diabetic are higher. Triceps skinfold thickness, which is an estimated measure of subcutaneous body fat, also plays a major role in diabetes. Moreover, if a woman has been pregnant 1, 2 or 3 times and her age is 25, 26, 28, 29, 31 or 33, there may be a larger number of diabetic cases. Summarizing, these results show that, apart from the values of chemical examinations, the results of physical examinations are also necessary for better clinical decision making by the medical expert. Some risk factors have less importance because they appear less frequently in the dataset. These rules have the potential to improve expert systems and to support better clinical decision making. In a thickly populated country with scarce resources such as India, public awareness can also be achieved through the dissemination of the above knowledge.

8.0 FUTURE WORK
The application software described above is designed to generate association rules using two different association rule mining techniques, which produce the same set of rules. It is observed that association rule mining techniques generate a large number of rules that are very difficult for users to analyze, and most of the rules are found to be redundant. Therefore, different filtering techniques as well as rule pruning can be applied on top of these two techniques. Moreover, decision tree techniques can be an alternative solution in which knowledge is represented in a more compact and optimized way. Decision trees may also be more convenient in goal-oriented cases, where we look for rules with a fixed consequent or outcome, such as a specific diagnosis, whereas association rule mining techniques are good when we search for completely new rules.

9.0 REFERENCES
[1] M. H. Margahny and A. A. Mitwaly, "Fast Algorithms for Mining Association Rules," AIML 05 Conference, 19-21 December 2005, CICC, Cairo, Egypt.
[2] F. Coenen, P. Leng and S. Ahmed, "Data Structure for Association Rule Mining," IEEE Transactions on Knowledge and Data Engineering, Vol. 16, No. 6, June 2004.
[3] R. Jin, Y. Breitbart and C. Muoh, "Data Discretization Unification," Proceedings of the Seventh IEEE International Conference on Data Mining (ICDM), pp. 183-192, 2007.
[4] M. Boullé, "Khiops: A Statistical Discretization Method of Continuous Attributes," Machine Learning, Vol. 55, 2004.
[5] C. Borgelt, "An Implementation of the FP-growth Algorithm," Proceedings of the 1st International Workshop on Open Source Data Mining: Frequent Pattern Mining Implementations, Chicago, Illinois, pp. 1-5, 2005. ISBN 1-59593-210-0.
[6] H. Kaur and S. K. Wasan, "Empirical Study on Applications of Data Mining Techniques in Healthcare," Journal of Computer Science, 2(2): 194-200, 2006.
[7] M. Zorman, G. Masuda, P. Kokol, R. Yamamoto and B. Stiglic, "Mining Diabetes Database with Decision Trees and Association Rules," Proceedings of the 15th IEEE Symposium on Computer Based Medical Systems (CBMS 2002).
[8] C. Ordonez, "Comparing Association Rules and Decision Trees for Disease Prediction," HIKM '06, November 11, 2006, Arlington, Virginia, USA. ISBN 1-59593-528-2.
[9] J. Han, J. Pei and Y. Yin, "Mining Frequent Patterns without Candidate Generation," Proceedings of the 2000 ACM SIGMOD International Conference on Management of Data.
[10] R. Agrawal and R. Srikant, "Fast Algorithms for Mining Association Rules," Proceedings of the International Conference on Very Large Data Bases (VLDB), September 1994.
[11] R. Agrawal, T. Imielinski and A. Swami, "Mining Association Rules between Sets of Items in Large Databases," Proceedings of the ACM SIGMOD Conference on Management of Data, pp. 207-216, 1993.
[12] J. L. Breault, "Data Mining Databases: Are Rough Sets a Useful Addition?" www.galaxy.gmu.edu/interface/I01/I2001Proceedings/jbreault/jbreault-paper.pdf [last accessed 17/04/2009].
[13] S. K. Wasan, V. Bhatnagar and H. Kaur, "The Impact of Data Mining Techniques on Medical Diagnostics," Data Science Journal, Vol. 5, 19 October 2006.
[14] G. K. Gupta, Introduction to Data Mining with Case Studies, Prentice Hall India, 2006.