

International Journal of Emerging Research in Management & Technology
Research Article, October 2015

Comparative Study of Various Decision Tree Classification Algorithms Using WEKA

Purva Sewaiwar, Kamal Kant Verma
Uttarakhand Technical University, Dehradun, Uttarakhand, India

Abstract- This paper compares various decision tree classification algorithms using the WEKA tool. Data mining techniques such as classification, clustering, association and neural networks solve a large range of problems. These are all available in open source tools, with which we can work directly through the interface or from Java code. This paper discusses the classification technique of data mining. Within classification, several families of methods are available, such as Bayes, functions, lazy, rules and trees. The decision tree is one of the most frequently used classification algorithms, and decision tree classification with the Waikato Environment for Knowledge Analysis (WEKA) is one of the simplest ways to mine information from a large database. This work shows the process of converting files for WEKA analysis, the step-by-step process of WEKA execution, the selection of attributes to be mined, and a comparison with Knowledge Extraction based on Evolutionary Learning. A database [1] was taken and executed in the WEKA software. The paper concludes with a comparison among several types of decision tree algorithms using the WEKA tool.

Keywords- Data mining, Classification algorithm, Decision tree, J48, Random forest, Random tree, LMT, WEKA 3.7.

I. INTRODUCTION
Data mining is a collection of techniques for gleaning information from data and turning it into meaningful trends and rules that improve understanding. The basic principle of data mining is to analyze data from different directions, categorize it, and finally summarize it. Today we live in a digital world where data increases day by day; extracting information from this mountain of data is not only difficult but mind-boggling. To deal with such huge data we need data mining techniques.
Data mining [2] is defined as the process of analysing and searching data in order to find hidden but prospectively useful information. Data mining is used to discover hidden patterns and relationships within large data sets, which is very useful in decision making. Its advantages are that analysis is routine, the results of analysis are objective, and the accuracy of the analysis is consistent. Data mining is also known as knowledge discovery in databases (KDD); it mainly follows these steps: data cleaning, data integration, data selection, data transformation, data mining, pattern evaluation and knowledge presentation. Data mining comprises a number of techniques, each with its own speciality, such as clustering, data processing, pattern recognition, association, visualization, etc.

Fig.1. Data mining techniques. [3]

II. CLASSIFICATION
Classification is possibly the most frequently used data mining technique. Classification [4] is the process of finding a set of models that describe and differentiate data classes and concepts, for the purpose of using the model to predict the class of objects whose label is unknown. Many algorithms can be used for classification, such as decision trees, neural networks, logistic regression, etc. In this work we use decision tree algorithms for classification. The classification process involves the following steps: Create a training data set. Identify the class attribute and the classes.

2015, IJERMT All Rights Reserved Page 87

Identify the useful attributes for classification (relevance analysis). Learn a model using the training examples in the training set. Use the model to classify the unknown data samples.

III. DECISION TREE
Decision trees [5] are a way of representing a sequence of rules that lead to a class or value. A decision tree is a flowchart-like tree structure.

Fig.2. Decision tree.

A decision tree consists of three fundamental elements: the root node, internal nodes and leaf nodes. The topmost element is the root node, the leaf nodes are the terminal elements of the structure, and the nodes in between are called internal nodes. Each internal node denotes a test on an attribute, each branch represents an outcome of the test, and each leaf node holds a class label. Various decision tree algorithms are used in classification, such as ID3, AD Tree, REPTree, J48, FT Tree, LAD Tree, Decision Stump, LMT, random forest, random tree, etc. In this work the following trees are taken for comparison:

A. J48 - J48 [6] is a predictive machine-learning model that decides the target value of a new sample based on the different attribute values of the available data. The internal nodes of the decision tree denote the different attributes, the branches between the nodes tell us the possible values that these attributes can take in the observed samples, and the terminal nodes tell us the final value of the dependent variable.

B. LMT - A logistic model tree (LMT) [7] is a classification model, with an associated supervised training algorithm, that combines logistic regression and decision tree learning. Logistic model trees use a decision tree that has linear regression models at its leaves to provide a piecewise linear regression model.

C. Random Forest - Random forests [8] are an ensemble learning method for classification, regression and other tasks that operates by constructing a multitude of decision trees at training time and outputting the class that is the mode of the classes (or the mean prediction, in the case of regression) of the individual trees. Random forests correct for decision trees' habit of overfitting to their training set. They are a way of averaging multiple deep decision trees, trained on different parts of the same training set, with the goal of reducing the variance. This comes at the expense of a small increase in bias and some loss of interpretability, but generally greatly boosts the performance of the final model.

D. Random Tree - Random trees [9] are a collection of tree predictors that is called a forest. They can deal with both classification and regression problems. Classification works as follows: the random trees classifier takes the input feature vector, classifies it with every tree in the forest, and outputs the class label that received the majority of votes. In the case of regression, the classifier response is the average of the responses over all the trees in the forest. All the trees are trained with the same parameters but on different training sets.

IV. WEKA
WEKA [10] is data mining software developed by the University of Waikato in New Zealand that implements data mining algorithms in the Java language. WEKA is a milestone in the history of the data mining and machine learning research communities, because it is the only toolkit that has gained such widespread adoption. The weka is a bird native to New Zealand. WEKA is a mature environment for developing machine learning (ML) techniques and applying them to real-world data mining problems. It is a collection of machine learning algorithms for data mining tasks. The WEKA project aims to provide a comprehensive collection of machine learning algorithms and data preprocessing tools to researchers.
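As an illustrative sketch (not WEKA's implementation), the tree traversal and majority-voting behaviour described above can be expressed in a few lines of Python. The attribute names and the hand-built trees are hypothetical stand-ins for trees a learner would produce:

```python
from collections import Counter

# A tiny decision tree: an internal node tests an attribute, its branches
# are the attribute values, and a leaf holds a class label (as described above).
def predict(tree, sample):
    # A leaf is represented by a plain string (the class label).
    if isinstance(tree, str):
        return tree
    attribute, branches = tree
    return predict(branches[sample[attribute]], sample)

# Three hand-built (hypothetical) trees standing in for a trained forest.
forest = [
    ("outlook", {"sunny": "no", "overcast": "yes", "rain": "yes"}),
    ("windy",   {"true": "no", "false": "yes"}),
    ("outlook", {"sunny": "no", "overcast": "yes", "rain": "no"}),
]

def forest_predict(forest, sample):
    # Each tree votes; the class with the majority of votes wins.
    votes = Counter(predict(tree, sample) for tree in forest)
    return votes.most_common(1)[0][0]

sample = {"outlook": "rain", "windy": "false"}
print(forest_predict(forest, sample))  # two of the three trees vote "yes"
```

For regression the same loop would average the leaf values instead of counting votes, matching the description of random trees above.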
The algorithms can either be applied directly to a dataset or called from Java code. WEKA implements algorithms for data preprocessing, classification, regression, clustering and association rules; it also includes visualization tools. WEKA provides not only a toolbox of learning algorithms, but also a framework inside which researchers can implement new algorithms without having to be concerned with the supporting infrastructure for data manipulation and scheme evaluation. WEKA is open source software issued under the General Public License [11]. The data file normally used by WEKA is in ARFF file format, which consists of special tags to indicate different things in the data file, foremost: attribute names, attribute types, attribute values and the data. Working with WEKA does not require deep knowledge of data mining, which is one reason it is a very popular data mining tool. WEKA also provides a graphical user interface and many supporting facilities. The GUI Chooser consists of four buttons, one for each of the four major WEKA applications:

Explorer: The main interface in WEKA. It has a set of panels, each of which can be used to perform a certain task. Once a dataset has been loaded, the other panels in the Explorer can be used to perform further analysis.
Experimenter: An environment for performing experiments and conducting statistical tests between learning schemes.
KnowledgeFlow: This environment supports essentially the same functions as the Explorer but with a drag-and-drop interface. One advantage is that it supports incremental learning.
Simple CLI: Provides a simple command-line interface that allows direct execution of WEKA commands, for operating systems that do not provide their own command-line interface.

Fig. 3. WEKA tool front view.

A. Execution in WEKA - Execution in WEKA is a step-by-step process. The first step is data loading. Data can be loaded from various sources, including files, URLs and databases, and WEKA has the capacity to read the ".csv" format. First we take a real-world Excel datasheet, in which the first row contains the attribute names (separated by commas) followed by the data rows with attribute values listed in the same order (also separated by commas), and convert it into a .csv file. Then we go to the Explorer in WEKA and open this .csv file. Once the data is loaded into WEKA, the data set is automatically converted into ARFF format.

Fig. 4. Execution in WEKA tool.

Choosing the data from a file: After the data is loaded, WEKA recognizes the attributes and, during the scan of the data, computes some basic statistics on each attribute. The left panel shows the list of recognized attributes, while the top panels indicate the name of the base relation (table) and the current working relation. Clicking on any attribute shows its basic statistics. For categorical attributes, the frequency of each attribute value is shown, while for continuous attributes we can obtain the min, max, mean, standard deviation, etc.

Preparing the data to be mined and selecting attributes: In the sample data file, each record is individually identified by an attribute, which can be removed using the attribute filter in WEKA. In the "Filters" panel, click on the filter button (to the left of the "Add" button). This shows a popup window with a list of available filters; scroll down the list and select "weka.filters.attributefilter". After setting the filters, go to the Classify tab and click on it. This shows a list of classification algorithms; expand the decision tree section and select the tree you want to experiment with.

2015, IJERMT All Rights Reserved
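The .csv-to-ARFF conversion described above can be sketched in plain Python. This is a minimal illustration, not WEKA's converter: real WEKA handles quoting, missing values and more, and the relation name and sample data here are hypothetical:

```python
import csv
import io

def csv_to_arff(csv_text, relation):
    """Convert a simple header-bearing CSV into ARFF text.

    A column becomes a 'numeric' attribute if every value parses as a
    float; otherwise it becomes a nominal attribute listing the values
    actually seen in the data.
    """
    rows = list(csv.reader(io.StringIO(csv_text)))
    header, data = rows[0], rows[1:]
    lines = ["@relation " + relation, ""]
    for i, name in enumerate(header):
        column = [row[i] for row in data]
        try:
            [float(v) for v in column]          # all numeric?
            lines.append("@attribute %s numeric" % name)
        except ValueError:                      # nominal: enumerate values
            values = sorted(set(column))
            lines.append("@attribute %s {%s}" % (name, ",".join(values)))
    lines += ["", "@data"] + [",".join(row) for row in data]
    return "\n".join(lines)

csv_text = "outlook,temperature,play\nsunny,85,no\nrain,70,yes\novercast,64,yes\n"
print(csv_to_arff(csv_text, "weather"))
```

Running this prints the familiar ARFF layout: an @relation line, one @attribute line per column, then an @data section with the rows, mirroring the tags the text lists above.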

Before implementation, some important terminology:

N: total number of classified instances.
True Positive (TP): instances correctly predicted as the positive class.
True Negative (TN): instances correctly predicted as the negative class.
False Positive (FP): instances wrongly predicted as the positive class.
False Negative (FN): instances wrongly predicted as the negative class.
False Positive Rate (FPR): negatives incorrectly classified / total negatives, i.e. FPR = FP / (FP + TN).
True Positive Rate (TPR): positives correctly classified / total positives, i.e. TPR = TP / (TP + FN).
Accuracy (A): the proportion of the total number of instance predictions that are correct:
A = (TP + TN) / N
Receiver Operating Characteristic (ROC) curve: a graphical approach for displaying the trade-off between the true positive rate (TPR) and the false positive rate (FPR) of a classifier. TPR is plotted along the y axis and FPR along the x axis; the performance of each classifier is represented as a point on the ROC curve.
Precision (P): a measure of exactness, the ratio of predicted positive cases that were correct to the total number of predicted positive cases:
P = TP / (TP + FP)
Recall (R): a measure of completeness, the proportion of positive cases that were correctly recognized out of the total number of positive cases. It is also known as sensitivity or the true positive rate (TPR):
R = TP / (TP + FN)
F-Measure: the harmonic mean of precision and recall, an important measure as it gives equal importance to precision and recall:
F-measure = (2 x precision x recall) / (precision + recall)

V. RESULTS
The cross-validation method was used to analyse the datasets. The performance measures for all the datasets are given in Tables 1, 2 and 3. The comparative analysis of the decision tree classifiers gave the following simulation results.

Table 1. Final statistics of the decision trees

Decision tree  | TP rate | FP rate | Precision | Recall | F-measure | ROC area | Class | Time taken (s)
J48            | 1       | 0       | 1         | 1      | 1         | 1        | Y     | 0.14
               | 1       | 0       | 1         | 1      | 1         | 1        | N     |
Random forest  | 0.838   | 0.014   | 0.969     | 0.838  | 0.899     | 0.964    | Y     | 0.07
               | 0.986   | 0.016   | 0.924     | 0.924  | 0.954     | 0.962    | N     |
Random tree    | 0.838   | 0.014   | 0.969     | 0.838  | 0.899     | 0.976    | Y     | 0.01
               | 0.986   | 0.162   | 0.924     | 0.986  | 0.954     | 0.971    | N     |
LMT            | 1       | 0.014   | 0.974     | 1      | 0.987     | 1        | Y     | 6.9
               | 0.986   | 0       | 1         | 0.986  | 0.993     | 0.99     | N     |
Decision stump | 1       | 0       | 1         | 1      | 1         | 1        | Y     | 0.18
               | 1       | 0       | 1         | 1      | 1         | 1        | N     |

Table 2. Confusion matrix for all decision trees

Decision tree  | Mean absolute error | a  | b  | Classified as
J48            | 0                   | 37 | 0  | a = YES
LMT            | 0.0433              | 37 | 0  | a = YES
               |                     | 1  | 73 | b = NO
Random forest  | 0.2242              | 35 | 2  | a = YES
Random tree    | 0.3216              | 11 | 26 | a = YES
               |                     | 1  | 73 | b = NO
Decision stump | 0                   | 37 | 0  | a = YES

2015, IJERMT All Rights Reserved
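The evaluation measures defined above can be computed directly from confusion-matrix counts. A minimal Python sketch follows; the counts in the example are hypothetical, not taken from the tables:

```python
def metrics(tp, tn, fp, fn):
    """Compute the standard evaluation measures from confusion-matrix counts."""
    n = tp + tn + fp + fn                 # N: total classified instances
    accuracy = (tp + tn) / n              # A = (TP + TN) / N
    precision = tp / (tp + fp)            # P = TP / (TP + FP)
    recall = tp / (tp + fn)               # R = TP / (TP + FN), also the TP rate
    fpr = fp / (fp + tn)                  # FP rate = FP / (FP + TN)
    f_measure = 2 * precision * recall / (precision + recall)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "fpr": fpr, "f_measure": f_measure}

# Hypothetical counts: 37 true positives, 73 true negatives,
# 2 false positives, 1 false negative.
for name, value in metrics(tp=37, tn=73, fp=2, fn=1).items():
    print("%-10s %.3f" % (name, value))
```

Because the F-measure is the harmonic mean, it is pulled toward the smaller of precision and recall, which is why it is reported alongside both in Tables 1 and 3.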

Table 3. Comparison of weighted averages for the decision trees

Decision tree  | TP rate | FP rate | Precision | Recall | F-measure | ROC area
J48            | 1       | 0       | 1         | 1      | 1         | 1
LMT            | 0.991   | 0.005   | 0.991     | 0.991  | 0.991     | 0.993
Random forest  | 0.982   | 0.036   | 0.982     | 0.982  | 0.982     | 0.998
Random tree    | 0.757   | 0.473   | 0.797     | 0.757  | 0.712     | 0.702
Decision stump | 1       | 0       | 1         | 1      | 1         | 1

VI. CONCLUSION
The results show that the Decision Stump classification algorithm takes the minimum time to classify the data but gives lower accuracy. J48 has quite good accuracy with a small increase in classification time. The maximum accuracy is given by LMT, but the time taken to build the classification model is much higher than for the other classifiers; in most cases it is the maximum among all of them. The remaining models lie between the best and worst ones. In this paper, decision tree classification algorithms are analysed, and the evaluation measures used to justify the results are explained. The specific approaches to classification are characterized; we described the WEKA method, based on choosing the file and selecting the attributes after converting the .csv file to a flat file, and discussed features of WEKA's performance. Our work extends to the use of different datasets. Each decision tree classifies the data into correctly and incorrectly classified instances. These decision tree algorithms can be used in medicine, banking, the stock market and various other areas.

ACKNOWLEDGEMENT
I would like to express my deepest thanks to all those who made it possible for me to complete this paper. Special gratitude goes to my guide, Mr. K. K. Verma (Assistant Professor), whose stimulating suggestions and encouragement helped me to coordinate my work, especially in writing this paper. Furthermore, I would like to acknowledge with much appreciation the crucial role of my family and friends, who gave their full effort towards achieving this goal. I am grateful for the guidance given by Mr. Satendra Kumar and Mr. Shushir Sangal, and for permission to use all the necessary equipment to complete the task. Last but not least, many thanks go to God for giving me the strength and courage to complete this paper.

REFERENCES
[1] Database
[2] J. Han and M. Kamber, Data Mining: Concepts and Techniques. Morgan Kaufmann, 2001.
[3] S. Singal and M. Jain, "A study on WEKA tool for data preprocessing, classification and clustering," International Journal of Innovative Technology and Exploring Engineering, 2013.
[4] M. A. King and J. F. Elder, "Evaluation of fourteen desktop data mining tools," in Proceedings of the 1998 IEEE International Conference on Systems, Man and Cybernetics, 1998.
[5] W. Peng, J. Chen and H. Zhou, "An implementation of ID3: decision tree learning algorithm," Project of COMP9417: Machine Learning, School of Computer Science & Engineering, University of New South Wales, Sydney, NSW 2032, Australia.
[6] Wikipedia contributors, "C4.5 algorithm," Wikipedia, The Free Encyclopedia. Wikimedia Foundation, 28 Jan 2015.
[7] N. Landwehr, M. Hall and E. Frank, "Logistic model trees," Machine Learning, vol. 59, no. 1-2, pp. 161-205, 2005.
[8] L. Breiman, "Random forests," Machine Learning, vol. 45, no. 1, pp. 5-32, 2001.
[9] Wikipedia contributors, "Random tree," Wikipedia, The Free Encyclopedia. Wikimedia Foundation, 13 Jul 2014.
[10] E. Frank, M. Hall, G. Holmes, R. Kirkby, B. Pfahringer, I. H. Witten and L. Trigg, "Weka," in Data Mining and Knowledge Discovery Handbook, Springer, 2005, pp. 1305-1314.
[11] Pallavi and S. Godara, "A comparative performance analysis of clustering algorithms," International Journal of Engineering Research and Applications (IJERA), vol. 1, no. 3, pp. 441-445.