Chapter 18: Learning from Examples (22c:145)

Learning Example

An emergency room in a hospital measures 17 variables (e.g., blood pressure, age, etc.) of newly admitted patients. A decision is needed: whether to put a new patient in an ICU (intensive-care unit). Due to the high cost of the ICU, patients who may survive less than a month are given higher priority. Problem: predict high-risk patients and discriminate them from low-risk patients.

Another Example

A credit card company receives thousands of applications for new cards. Each application contains information about an applicant: age, marital status, annual salary, outstanding debts, credit rating, etc. Problem: decide whether an application should be approved, i.e., classify applications into two categories, approved and not approved.

Machine learning and our focus

Machine learning is like human learning from past experiences, but a computer does not have experiences; a computer system learns from data, which represent some past experiences of an application domain. Our focus: learn a target function that can be used to predict the values of a discrete class attribute, e.g., approved or not-approved, and high-risk or low-risk. The task is commonly called supervised learning, classification, or inductive learning.

The data and the goal

Data: a set of data records (also called examples, instances, or cases) described by k attributes, A_1, A_2, ..., A_k, and a class: each example is labelled with a predefined class.
Goal: learn a classification model from the data that can be used to predict the classes of new (future, or test) cases/instances.

An example: data (loan application)

[Slide shows the loan application data table; the class label column is "Approved or not". The table itself is not reproduced in this transcription.]
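To make the record-plus-class structure concrete, here is a minimal sketch; the attribute names follow the loan example used later in these slides, but the two records shown are hypothetical, since the slide's table is not reproduced here:

```python
# Each labelled example: k attribute values plus a predefined class.
# The records below are hypothetical stand-ins for the slide's table.
training_data = [
    # ({attribute: value, ...},             class)
    ({"Age": "young",  "Own_house": False}, "No"),
    ({"Age": "middle", "Own_house": True},  "Yes"),
]

for attributes, label in training_data:
    print(attributes, "->", label)
```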

An example: the learning task

Learn a classification model from the data; use the model to classify future loan applications into Yes (approved) and No (not approved). What is the class for the following case/instance? [Slide shows a new, unlabelled case.]

Supervised vs. unsupervised learning

Supervised learning: classification is seen as supervised learning from examples.
- Supervision: the data (observations, measurements, etc.) are labeled with pre-defined classes, as if a teacher gives the classes (supervision).
- Test data are classified into these classes too.
Unsupervised learning (clustering):
- Class labels of the data are unknown.
- Given a set of data, the task is to establish the existence of classes or clusters in the data.

Supervised learning process: two steps

Learning (training): learn a model using the training data.
Testing: test the model using unseen test data to assess the model accuracy:

    Accuracy = (Number of correct classifications) / (Total number of test cases)

What do we mean by learning?

Given a data set D, a task T, and a performance measure M, a computer system is said to learn from D to perform the task T if, after learning, the system's performance on T improves as measured by M. In other words, the learned model helps the system to perform T better as compared to no learning.

An example

Data: loan application data.
Task: predict whether a loan should be approved or not.
Performance measure: accuracy.
No learning: classify all future applications (test data) to the majority class (i.e., Yes): Accuracy = 9/15 = 60%. We can do better than 60% with learning.

Fundamental assumption of learning

Assumption: the distribution of training examples is identical to the distribution of test examples (including future unseen examples).
In practice, this assumption is often violated to some degree. Strong violations will clearly result in poor classification accuracy: to achieve good accuracy on the test data, training examples must be sufficiently representative of the test data.
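A minimal sketch of this accuracy measure and the no-learning baseline, assuming the 15-example loan data (9 Yes, 6 No) used throughout these slides:

```python
# Accuracy = correct classifications / total test cases.
def accuracy(predictions, labels):
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

# No-learning baseline: always predict the majority class.
labels = ["Yes"] * 9 + ["No"] * 6          # 15 loan examples, as in the slides
majority = max(set(labels), key=labels.count)
baseline = accuracy([majority] * len(labels), labels)
print(f"majority-class baseline: {baseline:.0%}")   # 60%
```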

Introduction

Decision tree learning is one of the most widely used techniques for classification. Its classification accuracy is competitive with other methods, and it is very efficient. The classification model is a tree, called a decision tree (it consists of only decision points, no chance nodes).

The loan data (reproduced)

[Slide repeats the loan application data table, with class label "Approved or not".]

A decision tree from the loan data

[Slide shows a decision tree built from the loan data, with decision nodes and leaf nodes (classes).]

Use the decision tree

[Slide traces a new case through the tree to the leaf "No".]

Is the decision tree unique?

No. [Slide shows a simpler tree, with leaf counts (3/3) and (6/6).] We want a tree that is both small and accurate: such a tree is easier to understand and performs better. Finding the best tree is NP-hard, so all current tree-building algorithms are heuristic algorithms.

From a decision tree to a set of rules

A decision tree can be converted to a set of rules: each path from the root to a leaf is a rule. [Slide shows the rules read off the simpler tree above, with leaf counts (3/3) and (6/6).]
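A minimal sketch of this path-to-rule idea; the tree encoding (nested tuples) is an assumption of the sketch, and this particular tree is a hypothetical stand-in for the slides' loan-data tree, which is not transcribed:

```python
# A decision tree as nested tuples: (attribute, {value: subtree}), or a
# class label string at a leaf. The tree below is hypothetical.
tree = ("Own_house", {
    True:  "Yes",
    False: ("Age", {"young": "No", "middle": "No", "old": "Yes"}),
})

def tree_to_rules(node, conditions=()):
    """Each root-to-leaf path becomes one IF-THEN rule."""
    if isinstance(node, str):                      # leaf: emit a rule
        yield (conditions, node)
        return
    attribute, branches = node
    for value, subtree in branches.items():
        yield from tree_to_rules(subtree, conditions + ((attribute, value),))

for conds, label in tree_to_rules(tree):
    print("IF", " AND ".join(f"{a} = {v}" for a, v in conds), "THEN", label)
```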

Algorithm for decision tree learning

Basic algorithm (a greedy divide-and-conquer algorithm):
- Assume attributes are discrete for now (continuous attributes can be handled too).
- The tree is constructed in a top-down recursive manner.
- At the start, all the training examples are at the root.
- Examples are partitioned recursively based on selected attributes.
- Attributes are selected on the basis of an impurity function (e.g., information gain).

Conditions for stopping partitioning:
- All examples for a given node belong to the same class.
- There are no remaining attributes for further partitioning; the majority class becomes the leaf.
- There are no examples left.

[Slide shows the decision tree learning algorithm in pseudocode; a sketch is given below.]

Choose an attribute to partition data

The key to building a decision tree is which attribute to choose in order to branch. The objective is to reduce impurity or uncertainty in the data as much as possible. A subset of data is pure if all instances belong to the same class. The heuristic in the algorithm is to choose the attribute with the maximum information gain or gain ratio, based on information theory.

The loan data (reproduced)

[Slide repeats the loan application data table, with class label "Approved or not".]

Two possible roots, which is better?

[Slide shows two candidate roots, (A) and (B).] (B) seems to be better.

Information theory

Information theory provides a mathematical basis for measuring the information content. To understand the notion of information, think about it as providing the answer to a question, for example, whether a coin will come up heads. If one already has a good guess about the answer, then the actual answer is less informative. If one already knows that the coin is rigged so that it will come up heads with probability 0.99, then a message (advance information) about the actual outcome of a flip is worth less than it would be for an honest coin (50-50).
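Before developing information gain formally, here is a minimal sketch of the greedy divide-and-conquer procedure outlined above (the pseudocode slide itself is not transcribed). The (attribute-dict, label) example format is an assumption of this sketch, and the attribute-scoring function is left abstract; it would implement the information gain developed in the following slides:

```python
from collections import Counter

def majority_class(examples):
    return Counter(label for _, label in examples).most_common(1)[0][0]

def build_tree(examples, attributes, best_attribute):
    """Greedy top-down induction. `examples` is a list of
    (attribute-dict, class-label) pairs; `best_attribute` scores
    attributes (e.g., by information gain) and returns the best one."""
    labels = {label for _, label in examples}
    if len(labels) == 1:                 # all examples in one class
        return labels.pop()
    if not attributes:                   # no attributes left: majority leaf
        return majority_class(examples)
    attr = best_attribute(examples, attributes)
    branches = {}
    for value in {ex[attr] for ex, _ in examples}:
        subset = [(ex, y) for ex, y in examples if ex[attr] == value]
        remaining = [a for a in attributes if a != attr]
        branches[value] = build_tree(subset, remaining, best_attribute)
    return (attr, branches)
```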

Information theory (cont.)

For a fair (honest) coin, you have no information, and you are willing to pay more (say, in dollars) for advance information: the less you know, the more valuable the information. Information theory uses this same intuition, but instead of measuring the value of information in dollars, it measures information content in bits. One bit of information is enough to answer a yes/no question about which one has no idea, such as the flip of a fair coin.

Shannon Entropy

[Slide plots the binary entropy function H(p).]

    H(p) = -p log2(p) - (1-p) log2(1-p)

For example: H(.5) = 1, H(0) = 0, H(1) = 0, H(.4) = .97, H(.33) = .9, H(.2) = .71.

Entropy measure

The entropy formula:

    entropy(D) = - Σ_{j=1}^{|C|} P(c_j) log2 P(c_j),   with Σ_{j=1}^{|C|} P(c_j) = 1,

where P(c_j) is the probability of class c_j in data set D. We use entropy as a measure of the impurity or disorder of data set D (or, a measure of information in a tree).

Entropy measure: let us get a feeling

[Slide shows entropy values for data sets of varying purity.] As the data become purer and purer, the entropy value becomes smaller and smaller. This is useful to us!

Information gain

Given a set of examples D, we first compute its entropy, as above. If we make attribute A_i, with v values, the root of the current tree, this will partition D into v subsets D_1, D_2, ..., D_v. The expected entropy if A_i is used as the current root:

    entropy_{A_i}(D) = Σ_{j=1}^{v} (|D_j| / |D|) * entropy(D_j)
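A minimal sketch that reproduces the H(p) values quoted above:

```python
import math

def binary_entropy(p):
    """H(p) = -p log2(p) - (1-p) log2(1-p), with 0*log(0) taken as 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

for p in (0.5, 0.0, 1.0, 0.4, 0.33, 0.2):
    print(f"H({p}) = {binary_entropy(p):.2f}")
# H(0.5)=1.00, H(0)=0.00, H(1)=0.00, H(0.4)=0.97, H(0.33)=0.91, H(0.2)=0.72
```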

Information gain (cont.)

The information gained by selecting attribute A_i to branch or to partition the data is

    gain(D, A_i) = entropy(D) - entropy_{A_i}(D)

We choose the attribute with the highest gain to branch/split the current tree.

The example

For the loan data (15 examples, 9 Yes and 6 No):

    entropy(D) = -(6/15) log2(6/15) - (9/15) log2(9/15) = 0.971

Splitting on Age gives three subsets:

    Age      Yes  No   entropy(D_i)
    young     2    3   0.971
    middle    3    2   0.971
    old       4    1   0.722

    entropy_Age(D) = (5/15)(0.971) + (5/15)(0.971) + (5/15)(0.722) = 0.888

Splitting on Own_house gives two subsets:

    entropy_Own_house(D) = (6/15)(0) + (9/15)(0.918) = 0.551

Own_house is the best choice for the root.

We build the final tree

[Slide shows the final tree, with leaf counts (3/3) and (6/6).]

Another Example

[Slide shows another example; its details are not reproduced in this transcription.]

Handling continuous attributes

Handle a continuous attribute by splitting its range into two intervals (can be more) at each node. How to find the best threshold to divide?
- Use information gain or gain ratio again.
- Sort all the values of the continuous attribute in increasing order: {v_1, v_2, ..., v_r}.
- One possible threshold lies between each two adjacent values v_i and v_{i+1}. Try all possible thresholds and find the one that maximizes the gain (or gain ratio).
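A short sketch, assuming the class counts from the loan-data slide above, reproduces these gain computations; a second function illustrates the threshold search for a continuous attribute (its sample values and labels are hypothetical):

```python
import math

def entropy(counts):
    """entropy(D) = -sum_j P(c_j) log2 P(c_j), from class counts."""
    total = sum(counts)
    return -sum(c / total * math.log2(c / total) for c in counts if c)

def expected_entropy(partition):
    """entropy_Ai(D): weighted entropy of the subsets an attribute induces."""
    total = sum(sum(counts) for counts in partition)
    return sum(sum(counts) / total * entropy(counts) for counts in partition)

# (Yes, No) counts from the loan-data slide: 15 examples, 9 Yes and 6 No.
D = [9, 6]
partitions = {
    "Age":       [[2, 3], [3, 2], [4, 1]],   # young, middle, old
    "Own_house": [[6, 0], [3, 6]],           # true, false
}
for name, part in partitions.items():
    print(f"gain(D, {name}) = {entropy(D) - expected_entropy(part):.3f}")
# Age: 0.083, Own_house: 0.420 -> Own_house is chosen for the root.

def best_threshold(values, labels, classes=("Yes", "No")):
    """Try a threshold between each pair of adjacent sorted values and
    return the one with the highest information gain."""
    pairs = sorted(zip(values, labels))
    base = entropy([labels.count(c) for c in classes])
    best_gain, best_t = -1.0, None
    for (v1, _), (v2, _) in zip(pairs, pairs[1:]):
        if v1 == v2:
            continue
        t = (v1 + v2) / 2
        sides = ([y for v, y in pairs if v <= t], [y for v, y in pairs if v > t])
        part = [[side.count(c) for c in classes] for side in sides]
        gain = base - expected_entropy(part)
        if gain > best_gain:
            best_gain, best_t = gain, t
    return best_t, best_gain

# Hypothetical continuous attribute (e.g., applicant age in years):
print(best_threshold([25, 32, 41, 48, 60], ["No", "No", "Yes", "Yes", "No"]))
```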

An example in a continuous space

[Slide illustrates decision-tree splits in a two-dimensional continuous space.]

Some real-life applications

- Systems biology: gene expression microarray data.
- Text categorization: spam detection.
- Face detection.
- Signature recognition.
- Customer discovery.
- Medicine: predict whether a patient has heart ischemia by a spectral analysis of his/her ECG.

Microarray data

Separate malignant from healthy tissues based on the mRNA expression profile of the tissue.

Face detection

Discriminating human faces from non-faces.

Signature recognition

Recognize signatures by structural similarities, which are difficult to quantify: does a signature belong to a specific person, say Tony Blair, or not?

Character recognition (multi-category)

Identify handwritten characters: classify each image of a character into one of 10 categories, 0, 1, 2, ...
[Slide shows sample images of handwritten digits.]

Customer discovery

Predict whether a customer is likely to purchase certain goods, according to a database of customer profiles and their history of shopping activities.

Regression

[Slide plots a dependent variable (y) against an independent variable (x).]

Regression analysis is a statistical process for estimating the relationships among variables. Regression may explain the variation in a dependent variable using the variation in an independent variable, thus offering an explanation of causation. If the independent variable(s) sufficiently explain the variation in the dependent variable, the model can be used for prediction.

Linear Regression

[Slide plots the dependent variable (y) against the independent variable (x), marking the y-intercept b0 and the error term ε.]

    y = b0 + b1*x ± ε,   where b0 is the y-intercept and b1 = slope = Δy/Δx.

The output of a regression is a function that predicts the dependent variable based upon values of the independent variables. Simple regression fits a straight line to the data.

Least Squares Regression

[Slide shows a fitted line with its prediction errors.]

A least squares regression selects the line with the lowest total sum of squared prediction errors. This value is called the Sum of Squares of Error, or SSE.

Least squares: given n points in the plane, (x_1, y_1), (x_2, y_2), ..., (x_n, y_n), find a line y = ax + b that minimizes the sum of the squared errors:

    SSE = Σ_{i=1}^{n} (y_i - a*x_i - b)^2

The minimizing coefficients are

    a = (n Σ_i x_i y_i - (Σ_i x_i)(Σ_i y_i)) / (n Σ_i x_i^2 - (Σ_i x_i)^2),
    b = (Σ_i y_i - a Σ_i x_i) / n.

Nonlinear Regression

Nonlinear functions can also be fit as regressions. Common choices include power, logarithmic, exponential, and logistic functions, but any continuous function can be used.
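A minimal sketch of these closed-form least-squares formulas (the sample points are hypothetical):

```python
def least_squares(points):
    """Fit y = a*x + b minimizing SSE, using the closed-form formulas above."""
    n = len(points)
    sx  = sum(x for x, _ in points)
    sy  = sum(y for _, y in points)
    sxy = sum(x * y for x, y in points)
    sxx = sum(x * x for x, _ in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Hypothetical sample points lying roughly on y = 2x + 1.
points = [(0, 1.1), (1, 2.9), (2, 5.2), (3, 6.8), (4, 9.1)]
a, b = least_squares(points)
sse = sum((y - a * x - b) ** 2 for x, y in points)
print(f"y = {a:.2f}x + {b:.2f}, SSE = {sse:.3f}")
```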

Overfitting

A common problem: fitting a model to existing data. The task goes by different names in different fields:
- Regression: training
- Neural networks: approximation
- Data mining: learning

Simple Example

[Slide fits a curve to sample points.] Overfitting: the fitted model is too complex.

Simple Example (cont.)

[Slide shows the overfitted curve on new data.] Overfitting leads to poor predictive power.

Cause of overfitting

- Noise in the system: greater variability in the data.
- Complex model: many parameters mean a higher degree of freedom and greater variability.

Avoid overfitting in classification

Overfitting: a tree may overfit the training data, giving good accuracy on the training data but poor accuracy on the test data. Symptoms: the tree is too deep and has too many branches, some of which may reflect anomalies due to noise or outliers.

Two approaches to avoid overfitting (a pruning sketch follows this section):
- Pre-pruning: halt tree construction early. This is difficult to decide, because we do not know what may happen subsequently if we keep growing the tree.
- Post-pruning: remove branches or sub-trees from a fully grown tree. This method is commonly used. C4.5 uses a statistical method to estimate the errors at each node for pruning. A validation set may be used for pruning as well.

An example

[Slide shows a deep, many-branch tree that is likely to overfit the data.]
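As an illustration of post-pruning, here is a minimal sketch of reduced-error pruning with a validation set; this is not C4.5's statistical error-estimation method, just one simple strategy, and the tree encoding and data follow the hypothetical examples in the earlier sketches:

```python
from collections import Counter

def classify(node, example):
    """Follow decision nodes (attribute, branches) down to a leaf label."""
    while not isinstance(node, str):
        attribute, branches = node
        node = branches[example[attribute]]   # assumes the value has a branch
    return node

def errors(node, data):
    return sum(classify(node, ex) != label for ex, label in data)

def prune(node, validation):
    """Bottom-up: replace a subtree by a majority-class leaf whenever that
    does not increase the error on the validation examples reaching it.
    (Simplification: the leaf label comes from those validation examples.)"""
    if isinstance(node, str) or not validation:
        return node
    attribute, branches = node
    pruned = (attribute,
              {v: prune(t, [(ex, y) for ex, y in validation if ex[attribute] == v])
               for v, t in branches.items()})
    leaf = Counter(y for _, y in validation).most_common(1)[0][0]
    return leaf if errors(leaf, validation) <= errors(pruned, validation) else pruned

# Hypothetical fully grown tree and validation set:
tree = ("Own_house", {
    True:  "Yes",
    False: ("Age", {"young": "No", "middle": "No", "old": "Yes"}),
})
validation = [({"Own_house": False, "Age": "young"}, "No"),
              ({"Own_house": True,  "Age": "old"},   "Yes")]
print(prune(tree, validation))
```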

Other issues in decision tree learning

- From tree to rules, and rule pruning
- Handling of missing values
- Handling skewed distributions
- Handling attributes and classes with different costs
- Attribute construction
- Etc.