Data Mining Classification: Decision Trees




Data Mining Classification: Decision Trees

Topics covered: what decision trees are and how they work; Hunt's (TDIDT) algorithm; how to select the best split; how to handle inconsistent data, continuous attributes, and missing values; overfitting; ID3, C4.5, C5.0, CART; advantages and disadvantages of decision trees; extensions to predict continuous values. Sections 4.1-4.3, 4.4.1, 4.4.2, 4.4.5 of the course book. (TNM033: Introduction to Data Mining)

Classification
Given a collection of records, where each record contains a set of attributes and one of the attributes is the class, find a model for the class attribute as a function of the values of the other attributes. Goals: apply the model to previously unseen records to predict their class (the class should be predicted as accurately as possible), and carry out deployment based on the model (e.g. implement more profitable marketing strategies). The data set can be divided into a training set, used to build the model, and a test set, used to determine the accuracy of the model.

Illustrating Classification Task
A labeled training set (Tid 1-10, each record with a categorical attribute taking the values Small/Medium/Large, a numeric attribute ranging from 55K to 220K, and a class label) is used to learn a model; the model is then applied to unseen records (Tid 11-15) whose class is unknown and must be predicted.

Examples of Classification Tasks
Predicting tumor cells as benign or malignant. Classifying credit card transactions as legitimate or fraudulent. Classifying secondary structures of protein as alpha-helix, beta-sheet, or random coil. Categorizing news stories as finance, weather, entertainment, sports, etc. The UCI Repository of Datasets: http://archive.ics.uci.edu/ml/

Classification Techniques
This lecture introduces decision trees. Another technique will be presented later in this course: rule-based classifiers. But there are other methods as well: nearest-neighbor classifiers, Naïve Bayes, support-vector machines, neural networks.

Example of a Decision Tree
Training data: 10 records (Tid 1-10) with attributes Refund (Yes/No), Marital Status (Single/Married/Divorced), Taxable Income (60K-220K), and the class attribute Cheat. Model: a decision tree whose root tests Refund, the first splitting attribute. If Refund = Yes, predict NO; if Refund = No, test MarSt: for Married, predict NO; for Single or Divorced, test TaxInc, where < 80K predicts NO and > 80K predicts YES.

Another Example of a Decision Tree
The same training data is also fit by a tree that tests MarSt at the root: if Married, predict NO; if Single or Divorced, test Refund and then TaxInc (< 80K predicts NO, > 80K predicts YES). There could be more than one tree that fits the same data! Search for the best tree.

Apply Model to Test Data
Start from the root of the tree. Test record: Refund = No, Marital Status = Married, Taxable Income = 80K, Cheat = ?. The root test Refund = No leads to the MarSt node; MarSt = Married leads directly to a leaf, so assign Cheat = No to the test record.

Decision Tree Induction
How to build a decision tree from a training set? Many existing systems are based on Hunt's algorithm: Top-Down Induction of Decision Trees (TDIDT), which employs a greedy, top-down search through the space of possible decision trees.

General Structure of Hunt's Algorithm
Let D_t be the set of training records that reach a node t. General procedure: if D_t contains only records that belong to the same class y_t, then t is a leaf node labeled y_t. If D_t contains records that belong to more than one class, use an attribute test to split the data into smaller subsets, and recursively apply the procedure to each subset. Which attribute should be tested at each splitting node? Use some heuristic.

Tree Induction Issues
Determine when to stop splitting. Determine how to split the records: which attribute to use in a split node? How to determine the best split? How to specify the attribute test condition, e.g. X < 1 or X + Y < 1? Shall we use a 2-way split or a multi-way split?
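The general procedure above can be sketched in a few lines of Python. This is a minimal illustration, not the course's reference implementation: records are assumed to be given as attribute dictionaries, and for simplicity the attribute to test is taken in a fixed order rather than chosen by an impurity heuristic.

```python
from collections import Counter

def hunt(records, attributes):
    """Recursive sketch of Hunt's (TDIDT) algorithm.

    records    -- list of (attribute_dict, class_label) pairs reaching this node
    attributes -- attribute names still available for testing
    Returns a class label (leaf) or a nested dict keyed by (attribute, value).
    """
    labels = [label for _, label in records]
    if len(set(labels)) == 1:          # all records in D_t share one class y_t
        return labels[0]
    if not attributes:                 # no tests left: majority-class leaf
        return Counter(labels).most_common(1)[0][0]
    attr = attributes[0]               # illustrative choice; real systems use a heuristic
    tree = {}
    for value in sorted({rec[attr] for rec, _ in records}):
        subset = [(rec, lab) for rec, lab in records if rec[attr] == value]
        tree[(attr, value)] = hunt(subset, attributes[1:])
    return tree

# Tiny hypothetical subset of the Refund/MarSt example:
data = [({"Refund": "Yes", "MarSt": "Single"},  "No"),
        ({"Refund": "No",  "MarSt": "Married"}, "No"),
        ({"Refund": "No",  "MarSt": "Single"},  "Yes")]
tree = hunt(data, ["Refund", "MarSt"])
print(tree)
```

The Refund = Yes branch is already pure and becomes a leaf; the Refund = No branch is split again on MarSt, exactly as in the general procedure.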

Splitting of Nominal Attributes
Multi-way split: use as many partitions as distinct values, e.g. CarType with branches Family, Sports, Luxury. Binary split: divide the values into two subsets, e.g. {Sports, Luxury} versus {Family}, or {Family, Luxury} versus {Sports}; one needs to find the optimal partitioning.

Stopping Criteria for Tree Induction
Hunt's algorithm terminates when: all the records in a node belong to the same class; all records in a node have similar attribute values (create a leaf node with the same class label as the majority of the training records reaching the node); or a minimum pre-specified number of records belong to a node.

Which Attribute Corresponds to the Best Split?
Before splitting: 10 records of class 0 and 10 records of class 1. Which test condition is the best (assuming the attributes are categorical)? Nodes with a homogeneous class distribution are preferred: a non-homogeneous node has a high degree of impurity, a homogeneous node a low degree of impurity. We therefore need a measure M of node impurity.

Measures of Node Impurity
Entropy, Gini index, misclassification error.

How to Find the Best Split
Before splitting, the node holds class counts C0: N00 and C1: N01, with impurity M0. Candidate test A? produces child nodes N1 and N2 (impurities M1 and M2, combined into the weighted impurity M12); candidate test B? produces child nodes N3 and N4 (impurities M3 and M4, combined into M34). Compare Gain_split = M0 − M12 for test A against M0 − M34 for test B, and choose the test with the larger gain.

The impurity M_i of each child node N_i can be measured with entropy:

M_i = Entropy(N_i) = − Σ_j p(j|N_i) log₂ p(j|N_i)

Splitting Criteria Based on Entropy
Entropy at a given node t:

Entropy(t) = − Σ_j p(j|t) log₂ p(j|t)

(Note: p(j|t) is the relative frequency of class j at node t.) Entropy measures the homogeneity of a node. It is maximum (log₂ n_c, where n_c is the number of classes) when the records are equally distributed among all classes, implying maximum impurity; it is minimum (0.0) when all records belong to one class, implying least impurity.

Examples for Computing Entropy
With Entropy(t) = − Σ_j p(j|t) log₂ p(j|t):

C1 = 0, C2 = 6: P(C1) = 0/6 = 0, P(C2) = 6/6 = 1; Entropy = −0 log₂ 0 − 1 log₂ 1 = 0 − 0 = 0
C1 = 1, C2 = 5: P(C1) = 1/6, P(C2) = 5/6; Entropy = −(1/6) log₂ (1/6) − (5/6) log₂ (5/6) = 0.65
C1 = 2, C2 = 4: P(C1) = 2/6, P(C2) = 4/6; Entropy = −(2/6) log₂ (2/6) − (4/6) log₂ (4/6) = 0.92

Splitting Based on Information Gain
Information gain:

GAIN_split = Entropy(p) − Σ_{i=1..k} (n_i / n) Entropy(i)

where the parent node p with n records is split into k partitions and n_i is the number of records in partition (node) i. GAIN_split measures the reduction in entropy achieved because of the split; choose the split that achieves the most reduction (maximizes GAIN_split). Used in ID3 and C4.5. Disadvantage: a bias toward attributes with a large number of values, so large trees with many branches are preferred. What happens if there is an ID attribute?
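The entropy values above, and the information gain formula, can be checked with a short Python sketch (the function names are illustrative, not from the course book):

```python
from math import log2

def entropy(counts):
    """Entropy(t) = -sum_j p(j|t) log2 p(j|t); 0 log2 0 is taken as 0."""
    n = sum(counts)
    return sum(-(c / n) * log2(c / n) for c in counts if c > 0)

def information_gain(parent, children):
    """GAIN_split = Entropy(parent) - sum_i (n_i/n) Entropy(child_i)."""
    n = sum(parent)
    return entropy(parent) - sum(sum(ch) / n * entropy(ch) for ch in children)

print(round(entropy([0, 6]), 2))   # 0.0
print(round(entropy([1, 5]), 2))   # 0.65
print(round(entropy([2, 4]), 2))   # 0.92
# Splitting a (10, 10) parent into two pure children gives the maximal gain, 1 bit:
print(information_gain([10, 10], [[10, 0], [0, 10]]))  # 1.0
```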

Splitting Based on GainRATIO
Gain ratio:

GainRATIO_split = GAIN_split / SplitINFO, where SplitINFO = − Σ_{i=1..k} (n_i / n) log₂ (n_i / n)

The parent node p is split into k partitions and n_i is the number of records in partition i. GAIN_split is penalized when a large number of small partitions is produced by the split, because SplitINFO increases as more small partitions are produced. Used in C4.5 (Ross Quinlan); designed to overcome the disadvantage of information gain. Example with n = 32 records split into up to four partitions by an attribute A:

Partition sizes     SplitINFO
32  0  0  0         0
16 16  0  0         1
16  8  8  0         1.5
16  8  4  4         1.75
 8  8  8  8         2
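The SplitINFO table can be reproduced directly from the definition (a sketch with illustrative function names):

```python
from math import log2

def split_info(sizes):
    """SplitINFO = -sum_i (n_i/n) log2(n_i/n); empty partitions contribute 0."""
    n = sum(sizes)
    return sum(-(s / n) * log2(s / n) for s in sizes if s > 0)

def gain_ratio(gain, sizes):
    """GainRATIO_split = GAIN_split / SplitINFO."""
    return gain / split_info(sizes)

# Reproduce the table for 32 records split into up to four partitions:
for sizes in ([32, 0, 0, 0], [16, 16, 0, 0], [16, 8, 8, 0],
              [16, 8, 4, 4], [8, 8, 8, 8]):
    print(sizes, split_info(sizes))
```

Note how SplitINFO grows from 0 (no real split) to 2 bits (four equal partitions), which is exactly the denominator that penalizes many-way splits such as an ID attribute.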

Other Measures of Impurity
Gini index for a given node t:

GINI(t) = 1 − Σ_j [p(j|t)]²

Classification error at a node t:

Error(t) = 1 − max_i P(i|t)

(Note: p(j|t) is the relative frequency of class j at node t.)

Comparison Among Splitting Criteria
For a 2-class problem, all three measures behave similarly: each is zero for a pure node and reaches its maximum when the two classes are equally frequent (p = 0.5).
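Both measures are one-liners; the following sketch evaluates them on the same class counts used in the entropy examples:

```python
def gini(counts):
    """GINI(t) = 1 - sum_j p(j|t)^2."""
    n = sum(counts)
    return 1 - sum((c / n) ** 2 for c in counts)

def classification_error(counts):
    """Error(t) = 1 - max_i P(i|t)."""
    return 1 - max(counts) / sum(counts)

print(gini([6, 0]))                  # 0.0  (pure node: least impurity)
print(gini([3, 3]))                  # 0.5  (maximum impurity for two classes)
print(round(gini([2, 4]), 3))        # 0.444
print(round(classification_error([2, 4]), 3))  # 0.333
```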

Practical Issues in Learning Decision Trees
Conclusion so far: decision trees are built by greedy search algorithms, guided by some heuristic that measures impurity. In real-world applications we also need to consider continuous attributes, missing values, computational efficiency, and overfitted trees.

Splitting of Continuous Attributes
Different ways of handling: discretize once at the beginning; or use a binary decision, (A < v) or (A ≥ v), where A is a continuous attribute: consider all possible splits and find the best cut, which can be more computationally intensive.

Continuous Attributes
There are several choices for the splitting value: the number of possible splitting values equals the number of distinct values n. For each splitting value v: (1) scan the data set, (2) compute the class counts in each of the partitions A < v and A ≥ v, and (3) compute the entropy/Gini index. Choose the value v that gives the lowest entropy/Gini index. The naïve algorithm repeats this work for every candidate: O(n²). An efficient implementation sorts each attribute once and sweeps over the candidate cuts: O(m · n log n), where m is the number of attributes and n is the number of records. For the Taxable Income attribute of the training data, for example, a candidate binary split is TaxInc ≤ 85K versus TaxInc > 85K.

Handling Missing Attribute Values
Missing values affect decision tree construction in three different ways: they affect how impurity measures are computed, how an instance with a missing value is distributed to the child nodes, and how a test instance with a missing value is classified. How to build a decision tree when some records have missing values? Usually, missing values should be handled during the preprocessing phase.
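The sort-and-sweep idea can be sketched as follows. This is an illustration, not the book's algorithm: it uses entropy as the impurity measure, places candidate cuts at midpoints between consecutive distinct values, and updates the class counts incrementally so each attribute costs O(n log n) instead of the naive O(n²). The data are the Taxable Income and Cheat columns of the lecture's example table.

```python
from math import log2

def entropy(counts):
    n = sum(counts)
    return sum(-(c / n) * log2(c / n) for c in counts if c > 0) if n else 0.0

def best_numeric_split(values, labels):
    """Return the cut v minimizing the weighted entropy of A < v vs A >= v."""
    pairs = sorted(zip(values, labels))
    n = len(pairs)
    left = {c: 0 for c in set(labels)}             # class counts, A < v side
    right = {c: labels.count(c) for c in set(labels)}
    best_impurity, best_cut = float("inf"), None
    for i, (v, lab) in enumerate(pairs[:-1]):
        left[lab] += 1                              # move one record leftwards
        right[lab] -= 1
        if pairs[i + 1][0] == v:                    # cut only between distinct values
            continue
        cut = (v + pairs[i + 1][0]) / 2
        w = ((i + 1) / n) * entropy(list(left.values())) \
            + ((n - i - 1) / n) * entropy(list(right.values()))
        if w < best_impurity:
            best_impurity, best_cut = w, cut
    return best_cut

income = [125, 100, 70, 120, 95, 60, 220, 85, 75, 90]   # Taxable Income (K)
cheat  = ["No", "No", "No", "No", "Yes", "No", "No", "Yes", "No", "Yes"]
print(best_numeric_split(income, cheat))  # 97.5
```

The best cut falls between 95K and 100K, where the left partition holds all three Cheat = Yes records and the right partition is pure.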

Distribute Training Instances with Missing Values
In the training data, record Tid = 10 (Marital = Married, Income = 90K) has a missing Refund value. Among the 9 records with Refund known, 3 have Refund = Yes and 6 have Refund = No; before distribution, the Refund = Yes child holds Cheat = Yes: 0 and Cheat = No: 3, while the Refund = No child holds Cheat = Yes: 2 and Cheat = No: 4. Send record Tid = 10 down the Refund = Yes child with weight 3/9 and down the Refund = No child with weight 6/9, updating the class counts with these fractional weights (Cheat = Yes becomes 0 + 3/9 and 2 + 6/9, respectively).

Overfitting
A tree that fits the training data too well may not be a good classifier for new examples. Overfitting results in decision trees that are more complex than necessary. Estimating error rates: use statistical techniques. Re-substitution error: the error on the training data set (training error). Generalization error: the error on a testing data set (test error). Typically, 2/3 of the data set is reserved for model building and 1/3 for error estimation; the disadvantage is that less data is available for training. Overfitted trees may have a low re-substitution error but a high generalization error.
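The fractional-weight bookkeeping above amounts to a couple of lines; this sketch reproduces the counts from the example, assuming (as in the slide) that record Tid = 10 has class Cheat = Yes:

```python
# Counts among the 9 records where Refund is known:
known = {"Yes": 3, "No": 6}
n_known = sum(known.values())                     # 9

# Class counts in each Refund child before distributing Tid=10:
child = {"Yes": {"Cheat=Yes": 0, "Cheat=No": 3},
         "No":  {"Cheat=Yes": 2, "Cheat=No": 4}}

# Send Tid=10 down both branches, weighted by n_i / n_known:
for branch, count in known.items():
    child[branch]["Cheat=Yes"] += count / n_known

print(child["Yes"]["Cheat=Yes"])   # 0 + 3/9
print(child["No"]["Cheat=Yes"])    # 2 + 6/9
```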

Underfitting and Overfitting
Overfitting: when the tree becomes too large, its test error rate begins increasing while its training error rate continues to decrease. What causes overfitting? Noise. Underfitting: when the model is too simple, both training and test errors are large.

How to Address Overfitting
Pre-pruning (early stopping rule): stop the algorithm before it grows a fully-grown tree. Stop if all instances belong to the same class, or if all the attribute values are the same. Additional early stopping conditions: stop if the number of instances is less than some user-specified threshold (e.g. 5 to 10 records per node); stop if the class distribution of the instances is independent of the available attributes (e.g., using a χ² test); stop if splitting the current node improves the impurity measure (e.g. Gini or information gain) by less than a given threshold.

How to Address Overfitting
Post-pruning: grow the decision tree to its entirety, then trim the nodes of the decision tree in a bottom-up fashion. Reduced-error pruning uses a data set not used during training, the pruning set, so three data sets are needed: training set, test set, and pruning set. If the error on the pruning set improves after trimming, then prune the tree. Post-pruning can be achieved in two ways: sub-tree replacement and sub-tree raising. See Section 4.4.5 of the course book.

ID3, C4.5, C5.0, CART
Ross Quinlan: ID3 uses Hunt's algorithm with the information gain criterion; available in WEKA (no discretization, no missing values). C4.5 improves ID3: it uses the gain ratio, handles missing attributes and continuous attributes, and performs tree post-pruning, but it needs the entire data set to fit in memory; available in WEKA as J48. C5.0 is the current commercial successor of C4.5. Breiman et al.: CART builds multivariate (binary) decision trees; available in WEKA as SimpleCART.

Decision Boundary
The border line between two neighboring regions of different classes is known as the decision boundary. The decision boundary is parallel to the axes because each test condition involves a single attribute at a time.

Oblique Decision Trees
A test condition may involve multiple attributes, e.g. x + y < 1 separating Class = + from Class = − (supported, for instance, by CART). This gives a more expressive representation, but finding the optimal test condition is computationally expensive.

Advantages of Decision Trees
Extremely fast at classifying unknown records. Easy to interpret for small-sized trees. Able to handle both continuous and discrete attributes. Work well in the presence of redundant attributes. If methods for avoiding overfitting are provided, decision trees are quite robust in the presence of noise. Robust to the effect of outliers. Provide a clear indication of which fields are most important for prediction.

Disadvantages of Decision Trees
Irrelevant attributes may badly affect the construction of a decision tree (e.g. ID numbers). Decision boundaries are rectilinear. Small variations in the data can result in very different-looking trees being generated. A sub-tree can be replicated several times. Error-prone with too many classes. Not good for predicting the value of a continuous class attribute; this problem is addressed by regression trees and model trees.

Tree Replication
In the example tree rooted at P, with internal tests Q, R, and S, the same subtree appears in multiple branches.