Machine Learning Techniques for Data Mining




Machine Learning Techniques for Data Mining. Eibe Frank, University of Waikato, New Zealand. 10/25/2000

PART I: What's it all about?

Data vs. information. Society produces huge amounts of data, from sources such as business, science, medicine, economics, geography, the environment, and sports. This is a potentially valuable resource, but raw data is useless: we need techniques to automatically extract information from it. Data: recorded facts. Information: patterns underlying the data.

Information is crucial. Example 1: in vitro fertilization. Given: embryos described by 60 features. Problem: selection of embryos that will survive. Data: historical records of embryos and outcomes. Example 2: cow culling. Given: cows described by 700 features. Problem: selection of cows that should be culled. Data: historical records and farmers' decisions.

Data mining: the extraction of implicit, previously unknown, and potentially useful information from data. Needed: programs that detect patterns and regularities in the data. Strong patterns can be used to make predictions. Problem 1: most patterns are not interesting. Problem 2: patterns may be inexact (or even completely spurious) if the data is garbled or missing.

Machine learning techniques provide the technical basis for data mining: algorithms for acquiring structural descriptions from examples. Structural descriptions represent patterns explicitly. They can be used to predict the outcome in a new situation, and to understand and explain how a prediction is derived (maybe even more important). The methods originate from artificial intelligence, statistics, and research on databases.

Structural descriptions. For example, if-then rules:

If tear production rate = reduced then recommendation = none
Otherwise, if age = young and astigmatic = no then recommendation = soft

Age             Spectacle prescription   Astigmatism   Tear production rate   Recommended lenses
Young           Myope                    No            Reduced                None
Young           Hypermetrope             No            Normal                 Soft
Pre-presbyopic  Hypermetrope             No            Reduced                None
Presbyopic      Myope                    Yes           Normal                 Hard

Can machines really learn? Dictionary definitions of learning: to get knowledge of by study, experience, or being taught; to become aware by information or from observation; to commit to memory; to be informed of, to ascertain; to receive instruction. The first of these are difficult to measure, the rest trivial for computers. An operational definition: things learn when they change their behavior in a way that makes them perform better in the future. But does a slipper learn? Does learning imply intention?

The weather problem: conditions for playing an unspecified game.

Outlook   Temperature  Humidity  Windy  Play
Sunny     Hot          High      False  No
Sunny     Hot          High      True   No
Overcast  Hot          High      False  Yes
Rainy     Mild         Normal    False  Yes

If outlook = sunny and humidity = high then play = no
If outlook = rainy and windy = true then play = no
If outlook = overcast then play = yes
If humidity = normal then play = yes
If none of the above then play = yes
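The rule set above is an ordered list: rules are tried top to bottom and the first match wins, with the last rule acting as a default. A minimal sketch in Python (attribute names are lower-cased for convenience):

```python
# The weather rules as an ordered rule list: the first matching rule decides.
RULES = [
    ({"outlook": "sunny", "humidity": "high"}, "no"),
    ({"outlook": "rainy", "windy": True}, "no"),
    ({"outlook": "overcast"}, "yes"),
    ({"humidity": "normal"}, "yes"),
]

def classify(instance):
    for conditions, play in RULES:
        if all(instance.get(k) == v for k, v in conditions.items()):
            return play
    return "yes"  # "if none of the above then play = yes"

day = {"outlook": "sunny", "temperature": "hot", "humidity": "high", "windy": False}
print(classify(day))  # no
```

Running it over the four rows of the table reproduces the Play column.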

Classification vs. association rules. A classification rule predicts the value of a pre-specified attribute (the classification of an example):

If outlook = sunny and humidity = high then play = no

An association rule predicts the value of an arbitrary attribute or combination of attributes:

If temperature = cool then humidity = normal
If humidity = normal and windy = false then play = yes
If outlook = sunny and play = no then humidity = high
If windy = false and play = no then outlook = sunny and humidity = high
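One way to make the distinction concrete is to score an association rule by its confidence: the fraction of instances matching the antecedent that also match the consequent. A small sketch over the four weather rows shown earlier (the function and variable names are illustrative, not from the slides):

```python
rows = [
    {"outlook": "sunny", "humidity": "high", "windy": False, "play": "no"},
    {"outlook": "sunny", "humidity": "high", "windy": True, "play": "no"},
    {"outlook": "overcast", "humidity": "high", "windy": False, "play": "yes"},
    {"outlook": "rainy", "humidity": "normal", "windy": False, "play": "yes"},
]

def confidence(data, antecedent, consequent):
    """Fraction of rows matching the antecedent that also match the consequent."""
    matches = [r for r in data if all(r[k] == v for k, v in antecedent.items())]
    if not matches:
        return 0.0
    hits = sum(all(r[k] == v for k, v in consequent.items()) for r in matches)
    return hits / len(matches)

# "If outlook = sunny and play = no then humidity = high" holds on every match:
print(confidence(rows, {"outlook": "sunny", "play": "no"}, {"humidity": "high"}))  # 1.0
```

Because the consequent can be any attribute, every attribute combination is a candidate, which is why association rule miners must prune by support and confidence.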

Weather data with mixed attributes: two attributes now take numeric values.

Outlook   Temperature  Humidity  Windy  Play
Sunny     85           85        False  No
Sunny     80           90        True   No
Overcast  83           86        False  Yes
Rainy     75           80        False  Yes

If outlook = sunny and humidity > 83 then play = no
If outlook = rainy and windy = true then play = no
If outlook = overcast then play = yes
If humidity < 85 then play = yes
If none of the above then play = yes

The contact lenses data

Age             Spectacle prescription   Astigmatism   Tear production rate   Recommended lenses
Young           Myope                    No            Reduced                None
Young           Myope                    No            Normal                 Soft
Young           Myope                    Yes           Reduced                None
Young           Myope                    Yes           Normal                 Hard
Young           Hypermetrope             No            Reduced                None
Young           Hypermetrope             No            Normal                 Soft
Young           Hypermetrope             Yes           Reduced                None
Young           Hypermetrope             Yes           Normal                 Hard
Pre-presbyopic  Myope                    No            Reduced                None
Pre-presbyopic  Myope                    No            Normal                 Soft
Pre-presbyopic  Myope                    Yes           Reduced                None
Pre-presbyopic  Myope                    Yes           Normal                 Hard
Pre-presbyopic  Hypermetrope             No            Reduced                None
Pre-presbyopic  Hypermetrope             No            Normal                 Soft
Pre-presbyopic  Hypermetrope             Yes           Reduced                None
Pre-presbyopic  Hypermetrope             Yes           Normal                 None
Presbyopic      Myope                    No            Reduced                None
Presbyopic      Myope                    No            Normal                 None
Presbyopic      Myope                    Yes           Reduced                None
Presbyopic      Myope                    Yes           Normal                 Hard
Presbyopic      Hypermetrope             No            Reduced                None
Presbyopic      Hypermetrope             No            Normal                 Soft
Presbyopic      Hypermetrope             Yes           Reduced                None
Presbyopic      Hypermetrope             Yes           Normal                 None

A complete and correct rule set:

If tear production rate = reduced then recommendation = none
If age = young and astigmatic = no and tear production rate = normal then recommendation = soft
If age = pre-presbyopic and astigmatic = no and tear production rate = normal then recommendation = soft
If age = presbyopic and spectacle prescription = myope and astigmatic = no then recommendation = none
If spectacle prescription = hypermetrope and astigmatic = no and tear production rate = normal then recommendation = soft
If spectacle prescription = myope and astigmatic = yes and tear production rate = normal then recommendation = hard
If age = young and astigmatic = yes and tear production rate = normal then recommendation = hard
If age = pre-presbyopic and spectacle prescription = hypermetrope and astigmatic = yes then recommendation = none
If age = presbyopic and spectacle prescription = hypermetrope and astigmatic = yes then recommendation = none
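Read top to bottom with the first matching rule winning, the rule set can be sketched directly as a function (a hypothetical rendering; the final fallback is a safety net that never fires on the 24 examples):

```python
def recommend(age, prescription, astigmatic, tear_rate):
    """The rule set above, applied top to bottom: first matching rule wins."""
    if tear_rate == "reduced":
        return "none"
    if age == "young" and astigmatic == "no" and tear_rate == "normal":
        return "soft"
    if age == "pre-presbyopic" and astigmatic == "no" and tear_rate == "normal":
        return "soft"
    if age == "presbyopic" and prescription == "myope" and astigmatic == "no":
        return "none"
    if prescription == "hypermetrope" and astigmatic == "no" and tear_rate == "normal":
        return "soft"
    if prescription == "myope" and astigmatic == "yes" and tear_rate == "normal":
        return "hard"
    if age == "young" and astigmatic == "yes" and tear_rate == "normal":
        return "hard"
    if age == "pre-presbyopic" and prescription == "hypermetrope" and astigmatic == "yes":
        return "none"
    if age == "presbyopic" and prescription == "hypermetrope" and astigmatic == "yes":
        return "none"
    return "none"  # fallback; does not occur among the 24 combinations

print(recommend("presbyopic", "myope", "yes", "normal"))  # hard
```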

A decision tree for this problem (figure not reproduced in the transcription).

Classifying iris flowers

     Sepal length  Sepal width  Petal length  Petal width  Type
1    5.1           3.5          1.4           0.2          Iris setosa
2    4.9           3.0          1.4           0.2          Iris setosa
...
51   7.0           3.2          4.7           1.4          Iris versicolor
52   6.4           3.2          4.5           1.5          Iris versicolor
...
101  6.3           3.3          6.0           2.5          Iris virginica
102  5.8           2.7          5.1           1.9          Iris virginica

If petal length < 2.45 then Iris setosa
If sepal width < 2.10 then Iris versicolor
...

Predicting CPU performance. Examples: 209 different computer configurations. Attributes: cycle time in ns (MYCT), minimum and maximum main memory in Kb (MMIN, MMAX), cache in Kb (CACH), minimum and maximum channels (CHMIN, CHMAX); the class is performance (PRP).

     MYCT  MMIN  MMAX   CACH  CHMIN  CHMAX  PRP
1    125   256   6000   256   16     128    198
2    29    8000  32000  32    8      32     269
...
208  480   512   8000   32    0      0      67
209  480   1000  4000   0     0      0      45

Linear regression function:
PRP = -55.9 + 0.0489 MYCT + 0.0153 MMIN + 0.0056 MMAX + 0.6410 CACH - 0.2700 CHMIN + 1.480 CHMAX
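Written out as code, the regression function can be traced on a single configuration. Note that the coefficients were fitted over all 209 machines, so an individual prediction need not match the recorded PRP:

```python
def predicted_prp(myct, mmin, mmax, cach, chmin, chmax):
    """Linear regression function from the slide, applied to one machine."""
    return (-55.9 + 0.0489 * myct + 0.0153 * mmin + 0.0056 * mmax
            + 0.6410 * cach - 0.2700 * chmin + 1.480 * chmax)

# Configuration 1: MYCT=125, MMIN=256, MMAX=6000, CACH=256, CHMIN=16, CHMAX=128
print(round(predicted_prp(125, 256, 6000, 256, 16, 128), 1))  # 336.9 (recorded PRP: 198)
```

The sizable residual on this one machine illustrates why a single global linear model can be a crude fit for such data.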

Data from labor negotiations

Attribute (type):
Duration (number of years)
Wage increase first year (percentage)
Wage increase second year (percentage)
Wage increase third year (percentage)
Cost of living adjustment {none, tcf, tc}
Working hours per week (number of hours)
Pension {none, ret-allw, empl-cntr}
Standby pay (percentage)
Shift-work supplement (percentage)
Education allowance {yes, no}
Statutory holidays (number of days)
Vacation {below-avg, avg, gen}
Long-term disability assistance {yes, no}
Dental plan contribution {none, half, full}
Bereavement assistance {yes, no}
Health plan contribution {none, half, full}
Acceptability of contract {good, bad}

Sample contracts, with values as transcribed (attributes with missing values are omitted):
Contract 1: 1, 2%, none, 28, none, yes, 11, avg, no, none, no, none, bad
Contract 2: 2, 4%, 5%, tcf, 35, 13%, 5%, 15, gen, good
Contract 3: 3, 4.3%, 4.4%, 38, 4%, 12, gen, full, full, good
Contract 40: 2, 4.5, 4.0, none, 40, 4, 12, avg, yes, full, yes, half, good

Decision trees for the labor data (figure not reproduced in the transcription).

Soybean classification

Attribute                          Number of values  Sample value
Environment: time of occurrence    7                 July
Environment: precipitation         3                 Above normal
Seed: condition                    2                 Normal
Seed: mold growth                  2                 Absent
Fruit: condition of fruit pods     4                 Normal
Fruit: fruit spots                 5
Leaves: condition                  2                 Abnormal
Leaves: leaf spot size             3
Stem: condition                    2                 Abnormal
Stem: stem lodging                 2                 Yes
Roots: condition                   3                 Normal
Diagnosis                          19                Diaporthe stem canker

The role of domain knowledge.

If leaf condition is normal and stem condition is abnormal and stem cankers is below soil line and canker lesion color is brown then diagnosis is rhizoctonia root rot

If leaf malformation is absent and stem condition is abnormal and stem cankers is below soil line and canker lesion color is brown then diagnosis is rhizoctonia root rot

Fielded applications: where the result of learning or the learning method itself is deployed in practical applications. Examples: reducing delays in rotogravure printing; autoclave layout for aircraft parts; automatic classification of sky objects; predicting pilot bids; automated completion of repetitive forms; text retrieval.

Processing loan applications. Given: a questionnaire with financial and personal information. Problem: should money be lent? A simple statistical method covers 90% of cases; borderline cases are referred to loan officers. But 50% of accepted borderline cases defaulted! Solution(?): reject all borderline cases. No! Borderline cases are the most active customers.

Enter machine learning. 1000 training examples of borderline cases. 20 attributes: age, years with current employer, years at current address, years with the bank, other credit cards possessed, etc. The learned rules predicted 2/3 of borderline cases correctly! Also, the company liked the rules because they could be used to explain decisions to customers.

Screening images. Given: radar satellite images of coastal waters. Problem: detecting oil slicks in those images. Oil slicks appear as dark regions whose size and shape vary. Not easy: lookalike dark regions can be caused by weather conditions (e.g. high wind). Manual screening is an expensive process requiring highly trained personnel.

Enter machine learning. Dark regions are extracted from a normalized image. Attributes: size of region, shape, area, intensity, sharpness and jaggedness of boundaries, proximity of other regions, and information about the background. Constraints: scarcity of training examples (oil slicks are rare!); unbalanced data: most dark regions aren't oil slicks; regions from the same image form a batch. Requirement: an adjustable false-alarm rate.

Load forecasting. Electricity supply companies require forecasts of future demand for power. Accurate forecasts of minimum and maximum load for each hour result in significant savings. Given: a manually constructed static load model that assumes normal climatic conditions. Problem: adjusting for weather conditions. The static model consists of: a base load for the year, load periodicity over the year, and the effect of holidays.

Enter machine learning. The prediction is corrected using the most similar days. Attributes: temperature, humidity, wind speed, and cloud cover readings, along with the difference between actual load and predicted load. The average difference among the three most similar days is added to the static model. Coefficients of linear regression form the attribute weights in the similarity function.
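A hypothetical sketch of this correction scheme follows; the attribute names, weights, and toy history below are illustrative assumptions, not values from the slides (on real data the weights would come from the regression coefficients, as described above):

```python
import math

# Illustrative attribute weights standing in for regression coefficients.
WEIGHTS = {"temp": 1.0, "humidity": 0.5, "wind": 0.3, "cloud": 0.4}

def distance(day_a, day_b):
    """Attribute-weighted Euclidean distance between two days."""
    return math.sqrt(sum(w * (day_a[k] - day_b[k]) ** 2 for k, w in WEIGHTS.items()))

def corrected_load(static_prediction, today, history):
    """history: list of (weather attributes, actual load - predicted load) pairs.
    Adds the average residual of the three most similar past days."""
    nearest = sorted(history, key=lambda h: distance(today, h[0]))[:3]
    correction = sum(diff for _, diff in nearest) / len(nearest)
    return static_prediction + correction

history = [
    ({"temp": 20, "humidity": 50, "wind": 5, "cloud": 2}, 10.0),
    ({"temp": 21, "humidity": 50, "wind": 5, "cloud": 2}, 20.0),
    ({"temp": 5, "humidity": 90, "wind": 20, "cloud": 8}, -100.0),
    ({"temp": 19, "humidity": 50, "wind": 5, "cloud": 2}, 30.0),
]
today = {"temp": 20, "humidity": 50, "wind": 5, "cloud": 2}
print(corrected_load(100.0, today, history))  # 120.0
```

The cold, windy day is far from today under the weighted distance, so only the three similar warm days contribute to the correction.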

Diagnosis of machine faults. Diagnosis is a classical domain of expert systems. Given: Fourier analysis of vibrations measured at various points of a device's mounting. Problem: which fault is present? Setting: preventative maintenance of electromechanical motors and generators. The information is very noisy. So far: diagnosis by expert/hand-crafted rules.

Enter machine learning. Available: 600 faults with the expert's diagnosis; ~300 were unsatisfactory, and the rest were used for training. Attributes were augmented by intermediate concepts that embodied causal domain knowledge. The expert was not satisfied with the initial rules because they did not relate to his domain knowledge. Further background knowledge resulted in more complex rules that were satisfactory. The learned rules outperformed the hand-crafted ones.

Marketing and sales I. Companies precisely record massive amounts of marketing and sales data. Possible applications: customer loyalty: identifying customers that are likely to defect by detecting changes in their behavior (e.g. banks/phone companies); special offers: identifying profitable customers (e.g. reliable owners of credit cards that need extra money during the holiday season).

Marketing and sales II. Market basket analysis: association techniques to find groups of items that tend to occur together in a transaction (mainly used to analyze checkout data). Historical analysis of purchasing patterns. Identifying prospective customers: focusing promotional mailouts (targeted campaigns are cheaper than mass-marketed ones).

Machine learning and statistics. The historical difference (grossly oversimplified): statistics focused on testing hypotheses, machine learning on finding the right hypothesis. But there is huge overlap: decision trees (C4.5 and CART), nearest-neighbor methods. Today the perspectives have converged, and most ML algorithms employ statistical techniques.

Generalization as search. Inductive learning: finding a concept description that fits the data. Example: rule sets as the description language, giving an enormous, but finite, search space. A simple solution: enumerate the concept space, eliminating descriptions that do not fit the examples; the surviving descriptions contain the target concept.

Enumerating the concept space. The search space for the weather problem: 4 x 4 x 3 x 3 x 2 = 288 possible rules, and with rule sets of no more than 14 rules, roughly 2.7 x 10^34 possible rule sets. Possible remedy: the candidate-elimination algorithm. Other practical problems: more than one description may survive; no description may survive; the language may not be able to describe the target concept; the data may contain noise.
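The counts can be checked directly: each attribute contributes its number of values plus a "don't care" option, times two possible conclusions, and rule sets of up to 14 rules are then on the order of 288^14 (a rough count treating a rule set as an ordered list of 14 rules):

```python
# 3 outlook values + don't care, 3 temperature + don't care,
# 2 humidity + don't care, 2 windy + don't care, 2 class values:
rules = (3 + 1) * (3 + 1) * (2 + 1) * (2 + 1) * 2
print(rules)                  # 288
print(f"{rules ** 14:.1e}")   # 2.7e+34
```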

The version space: the space of consistent concept descriptions. It is completely determined by two sets. L: the most specific descriptions that cover all positive examples and no negative ones. G: the most general descriptions that cover all positive examples and no negative ones. Only L and G need to be maintained and updated. But this is still computationally very expensive, and it does not solve the other practical problems.

Version space example. Given: red or green cows or chickens.

Initially: L = {}, G = {<*,*>}
<green,cow> positive: L = {<green,cow>}, G = {<*,*>}
<red,chicken> negative: L = {<green,cow>}, G = {<green,*>, <*,cow>}
<green,chicken> positive: L = {<green,*>}, G = {<green,*>}

Candidate-elimination algorithm:

Initialize L and G.
For each example e:
  If e is positive:
    Delete all elements from G that do not cover e.
    For each element r in L that does not cover e:
      Replace r by all of its most specific generalizations that cover e and that are more specific than some element in G.
    Remove elements from L that are more general than some other element in L.
  If e is negative:
    Delete all elements from L that cover e.
    For each element r in G that covers e:
      Replace r by all of its most general specializations that do not cover e and that are more general than some element in L.
    Remove elements from G that are more specific than some other element in G.
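A compact sketch of the algorithm for the two-attribute cows-and-chickens example above (S is the specific boundary, called L on these slides; the final pruning of redundant boundary elements is omitted for brevity, as it is not needed on this tiny domain):

```python
# Hypotheses are (color, animal) pairs where "*" means "any value".
DOMAINS = [("red", "green"), ("cow", "chicken")]

def covers(h, x):
    """True if hypothesis h matches example x."""
    return all(hv == "*" or hv == xv for hv, xv in zip(h, x))

def more_general(h1, h2):
    """True if h1 is equally or more general than h2."""
    return all(a == "*" or a == b for a, b in zip(h1, h2))

def candidate_elimination(examples):
    S = set()                      # most specific boundary (L on the slides)
    G = {("*",) * len(DOMAINS)}    # most general boundary
    for x, positive in examples:
        if positive:
            G = {g for g in G if covers(g, x)}
            if not S:
                S = {x}            # first positive example seeds S
            else:
                # minimally generalize: wildcard every mismatching attribute
                S = {tuple(s_i if s_i == x_i else "*" for s_i, x_i in zip(s, x))
                     for s in S}
                S = {s for s in S if any(more_general(g, s) for g in G)}
        else:
            S = {s for s in S if not covers(s, x)}
            new_G = set()
            for g in G:
                if not covers(g, x):
                    new_G.add(g)
                    continue
                # minimally specialize: fix one wildcard to a value != x's
                for i, dom in enumerate(DOMAINS):
                    if g[i] == "*":
                        for v in dom:
                            if v != x[i]:
                                h = g[:i] + (v,) + g[i + 1:]
                                if any(more_general(h, s) for s in S):
                                    new_G.add(h)
            G = new_G
    return S, G

S, G = candidate_elimination([
    (("green", "cow"), True),
    (("red", "chicken"), False),
    (("green", "chicken"), True),
])
print(S, G)  # both boundaries converge to {('green', '*')}
```

Running it reproduces the trace from the version space example: after the negative example G splits into <green,*> and <*,cow>, and the second positive example collapses both boundaries to <green,*>.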

Bias. The most important decisions in learning systems: the concept description language; the order in which the space is searched; the way that overfitting to the particular training data is avoided. These properties form the bias of the search: language bias, search bias, and overfitting-avoidance bias.

Language bias. The most important question: is the language universal, or does it restrict what can be learned? A universal language can express arbitrary subsets of examples; if a language can represent statements involving logical or ("disjunctions"), it is universal. Example: rule sets. Domain knowledge can be used to exclude some concept descriptions a priori from the search.

Search bias. Search heuristic: greedy search performs the best single step; "beam search" keeps several alternatives. Direction of search: general-to-specific (e.g. specializing a rule by adding conditions) or specific-to-general (e.g. generalizing an individual instance into a rule).

Overfitting-avoidance bias. This can be seen as a form of search bias: either a modified evaluation criterion (e.g. balancing simplicity and the number of errors) or a modified search strategy, e.g. pruning (simplifying a description). Pre-pruning stops at a simple description before the search proceeds to an overly complex one; post-pruning generates a complex description first and simplifies it afterwards.

Data mining and ethics I. Many ethical issues arise in practical applications. Data mining is often used to discriminate; e.g. in loan applications, using some information (e.g. sex, religion, race) is unethical. The ethical situation depends on the application: the same information may be acceptable in a medical application. Attributes may also contain problematic information; e.g. an area code may correlate with race.

Data mining and ethics II. Important questions in practical applications: Who is permitted access to the data? For what purpose was the data collected? What kinds of conclusions can be legitimately drawn from it? Caveats must be attached to results: purely statistical arguments are never sufficient! Are resources put to good use?