Chapter 26: Data Mining



Similar documents
Class 10. Data Mining and Artificial Intelligence. Data Mining. We are in the 21st century. So where are the robots?

Introduction to Artificial Intelligence G51IAI. An Introduction to Data Mining

Foundations of Artificial Intelligence. Introduction to Data Mining

Unsupervised Learning and Data Mining. Unsupervised Learning and Data Mining. Clustering. Supervised Learning. Supervised Learning

Social Media Mining. Data Mining Essentials

Decision Trees from large Databases: SLIQ

Introduction to Data Mining

Clustering. Danilo Croce. Web Mining & Retrieval, a.a. 2015/2016, 16/03/2016

An Introduction to Data Mining. Big Data World. Related Fields and Disciplines. What is Data Mining? 2/12/2015

Data Mining Classification: Decision Trees

Data Mining and Knowledge Discovery in Databases (KDD) State of the Art. Prof. Dr. T. Nouri Computer Science Department FHNW Switzerland

DATA MINING TECHNIQUES AND APPLICATIONS

DATA MINING CLUSTER ANALYSIS: BASIC CONCEPTS

Data Mining Techniques Chapter 6: Decision Trees

Chapter ML:XI (continued)

Université de Montpellier 2 Hugo Alatrista-Salas : hugo.alatrista-salas@teledetection.fr

Chapter 7. Cluster Analysis

Medical Information Management & Mining. You Chen Jan,15, 2013 You.chen@vanderbilt.edu

Machine Learning using MapReduce

Chapter 20: Data Analysis

Analytics on Big Data

The Data Mining Process

Clustering. Adrian Groza. Department of Computer Science Technical University of Cluj-Napoca

Cluster Analysis. Isabel M. Rodrigues. Lisboa, Instituto Superior Técnico

Using multiple models: Bagging, Boosting, Ensembles, Forests

Class #6: Non-linear classification. ML4Bio 2012 February 17 th, 2012 Quaid Morris

Chapter 4 Data Mining A Short Introduction. 2006/7, Karl Aberer, EPFL-IC, Laboratoire de systèmes d'informations répartis Data Mining - 1

BIDM Project. Predicting the contract type for IT/ITES outsourcing contracts

Information Management course

Azure Machine Learning, SQL Data Mining and R

Distances, Clustering, and Classification. Heatmaps

Clustering Artificial Intelligence Henry Lin. Organizing data into clusters such that there is

Data Mining - Evaluation of Classifiers

Chapter 12 Discovering New Knowledge Data Mining

1. What are the uses of statistics in data mining? Statistics is used to Estimate the complexity of a data mining problem. Suggest which data mining

Classification and Prediction

Data Mining for Knowledge Management. Classification

Data Mining Analytics for Business Intelligence and Decision Support

Data Mining Algorithms Part 1. Dejan Sarka

Practical Data Science with Azure Machine Learning, SQL Data Mining, and R

Using Data Mining for Mobile Communication Clustering and Characterization

Machine Learning. Chapter 18, 21. Some material adopted from notes by Chuck Dyer

Neural Networks Lesson 5 - Cluster Analysis

MACHINE LEARNING IN HIGH ENERGY PHYSICS

Insurance Analytics - data analysis and predictive modeling in insurance. Pavel Kříž. Seminar on Actuarial Sciences, MFF 4.

Data Mining Project Report. Document Clustering. Meryem Uzun-Per

Unsupervised learning: Clustering

Data Mining: Overview. What is Data Mining?

Statistical Data Mining. Practical Assignment 3 Discriminant Analysis and Decision Trees

Data Mining Methods: Applications for Institutional Research

Data Mining: Concepts and Techniques. Jiawei Han. Micheline Kamber. Simon Fraser University. MORGAN KAUFMANN PUBLISHERS, AN IMPRINT OF Elsevier

Principles of Data Mining by Hand&Mannila&Smyth

Data Warehousing and Data Mining. A.A Datawarehousing & Datamining 1

Classification algorithm in Data mining: An Overview

Learning Example. Machine learning and our focus. Another Example. An example: data (loan application) The data and the goal

How To Cluster

Cluster Analysis: Basic Concepts and Methods

Summary Data Mining & Process Mining (1BM46) Content. Made by S.P.T. Ariesen

KNIME TUTORIAL. Anna Monreale KDD-Lab, University of Pisa

Introduction to Data Mining

Data Mining Cluster Analysis: Basic Concepts and Algorithms. Lecture Notes for Chapter 8. Introduction to Data Mining

Mobile Phone APP Software Browsing Behavior using Clustering Analysis

Introduction. A. Bellaachia Page: 1

Professor Anita Wasilewska. Classification Lecture Notes

Classifying Large Data Sets Using SVMs with Hierarchical Clusters. Presented by :Limou Wang

An Overview of Knowledge Discovery Database and Data mining Techniques

Performance Analysis of Naive Bayes and J48 Classification Algorithm for Data Classification

CI6227: Data Mining. Lesson 11b: Ensemble Learning. Data Analytics Department, Institute for Infocomm Research, A*STAR, Singapore.

Knowledge Discovery and Data Mining

Clustering. Data Mining. Abraham Otero. Data Mining. Agenda

Data Preprocessing. Week 2

EXPLORING & MODELING USING INTERACTIVE DECISION TREES IN SAS ENTERPRISE MINER. Copyright 2013, SAS Institute Inc. All rights reserved.

International Journal of Computer Science Trends and Technology (IJCST) Volume 2 Issue 3, May-Jun 2014

Data Mining with SQL Server Data Tools

!"!!"#$$%&'()*+$(,%!"#$%$&'()*""%(+,'-*&./#-$&'(-&(0*".$#-$1"(2&."3$'45"

Data Mining for Retail Website Design and Enhanced Marketing

Clustering Via Decision Tree Construction

Data Mining. Cluster Analysis: Advanced Concepts and Algorithms

Knowledge Discovery and Data Mining

A Data Mining Tutorial

UNSUPERVISED MACHINE LEARNING TECHNIQUES IN GENOMICS

Data Mining Part 5. Prediction

ARTIFICIAL INTELLIGENCE (CSCU9YE) LECTURE 6: MACHINE LEARNING 2: UNSUPERVISED LEARNING (CLUSTERING)

Linear Threshold Units

Quick Introduction of Data Mining Techniques

Data Mining: Foundation, Techniques and Applications

from Larson Text By Susan Miertschin

Data Mining: An Overview. David Madigan

An Overview and Evaluation of Decision Tree Methodology

Applied Data Mining Analysis: A Step-by-Step Introduction Using Real-World Data Sets

Data Mining with R. Decision Trees and Random Forests. Hugh Murrell

A comparison of various clustering methods and algorithms in data mining

IBM SPSS Direct Marketing 23

Data Mining Techniques

OLAP and Data Mining. Data Warehousing and End-User Access Tools. Introducing OLAP. Introducing OLAP

Data Mining Clustering (2). Sheets are based on those provided by Tan, Steinbach, and Kumar. Introduction to Data Mining

(Some slides courtesy of Rich Caruana, Cornell University)

Definition

Data mining is the exploration and analysis of large quantities of data in order to discover valid, novel, potentially useful, and ultimately understandable patterns in data.

Example pattern (Census Bureau data): If (relationship = husband), then (gender = male). Confidence: 99.6%.

Definition (Cont.)

- Valid: The patterns hold in general.
- Novel: We did not know the pattern beforehand.
- Useful: We can devise actions from the patterns.
- Understandable: We can interpret and comprehend the patterns.

Why Use Data Mining Today?

Human analysis skills are inadequate:
- Volume and dimensionality of the data
- High data growth rate

Availability of:
- Data
- Storage
- Computational power
- Off-the-shelf software
- Expertise

An Abundance of Data

- Supermarket scanners, POS data
- Preferred customer cards
- Credit card transactions
- Direct mail response
- Call center records
- ATMs
- Demographic data
- Sensor networks
- Cameras
- Web server logs
- Customer web site trails

Commercial Support

- Many data mining tools: http://www.kdnuggets.com/software
- Database systems with data mining support
- Visualization tools
- Data mining process support
- Consultants

Why Use Data Mining Today? (Cont.)

Competitive pressure! "The secret of success is to know something that nobody else knows." (Aristotle Onassis)
- Competition on service, not only on price (banks, phone companies, hotel chains, rental car companies)
- Personalization, CRM
- The real-time enterprise
- Security, homeland defense

Types of Data

- Relational data and transactional data
- Spatial and temporal data, spatio-temporal observations
- Time-series data
- Text
- Voice
- Images, video
- Mixtures of data
- Sequence data
- Features from processing other data sources

The Knowledge Discovery Process

Steps:
1. Identify the business problem
2. Data mining
3. Action
4. Evaluation and measurement
5. Deployment and integration into business processes

The Data Mining Step in Detail

1. Data preprocessing
   - Data selection: identify target datasets and relevant fields
   - Data transformation: data cleaning, combining related data sources, creating common units, generating new fields, sampling
2. Data mining model construction
3. Model evaluation

Data Selection

Data sources are expensive:
- Obtaining data
- Loading data into a database
- Maintaining data

Most fields are not useful:
- Names
- Addresses
- Code numbers

Data Cleaning

- Missing data: unknown demographic data; impute missing values when possible
- Incorrect data: hand-typed default values (e.g., 1900 for dates)
- Misplaced fields: data does not always match the documentation
- Missing relationships: foreign keys missing or dangling
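To make the cleaning step concrete, here is a minimal pandas sketch (hypothetical column names and values, not from the slides) that imputes missing demographics and flags hand-typed default dates rather than trusting them:

    import pandas as pd

    # Hypothetical customer table with the kinds of problems listed above.
    df = pd.DataFrame({
        "age": [34, None, 51, None],
        "income": [52000, 48000, None, 61000],
        "signup_date": pd.to_datetime(
            ["1900-01-01", "2001-06-03", "1999-05-01", "2002-07-01"]),
    })

    # Impute missing numeric demographics with the column median.
    for col in ["age", "income"]:
        df[col] = df[col].fillna(df[col].median())

    # Flag hand-typed default dates (e.g., 1900) for review.
    df["date_suspect"] = df["signup_date"].dt.year <= 1900

    print(df)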

Combining Data Sources

- Enterprise data is typically stored in many heterogeneous systems
- Keys to join the systems may or may not be present
- Heuristics must be used when keys are missing: time-based matching, situation-based matching

Create Common Units

- Data exists at different granularity levels: customers, transactions, products
- Data mining requires a common granularity level (often called a case)
- Mining usually occurs at customer or similar granularity

Generate New Fields

- Raw data fields may not be useful by themselves
- Simple transformations can improve mining results dramatically:
  - Customer start date → customer tenure
  - Recency, frequency, and monetary values
- Fields at the wrong granularity level must be aggregated

Sampling

- Most real datasets are too large to mine directly (> 100 million cases)
- Apply random sampling to reduce data size and improve error estimation
- Always sample at analysis granularity (case/customer), never at transaction granularity

Target Formats

- Denormalized table: one row per case/customer, one column per field
- Star schema: customers, transactions, products, services; must join/roll up to the customer level before mining
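A minimal sketch of the roll-up from transaction granularity to a one-row-per-customer case table, deriving recency/frequency/monetary fields (hypothetical schema; pandas assumed):

    import pandas as pd

    # Hypothetical fact table at transaction granularity.
    tx = pd.DataFrame({
        "custid": [201, 201, 105, 106, 201],
        "amount": [12.0, 30.0, 8.5, 4.0, 20.0],
        "date": pd.to_datetime(
            ["1999-05-01", "1999-05-01", "1999-06-03",
             "1999-06-05", "1999-07-01"]),
    })

    # Roll up to one row per customer (the "case") before mining.
    now = tx["date"].max()
    cases = tx.groupby("custid").agg(
        recency_days=("date", lambda d: (now - d.max()).days),
        frequency=("date", "count"),
        monetary=("amount", "sum"),
    ).reset_index()

    print(cases)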

Data Transformation Example

- Client: major health insurer
- Business problem: determine when the web is effective at deflecting call volume
- Data sources: call center records, web data, claims, customer and provider database

Data Transformation Example: Cleaning Required

- Dirty reason codes in call center records
- Missing customer IDs in some web records
- No session information in web records
- Incorrect date fields in claims
- Missing values in customer and provider records
- Some customer records missing entirely

Data Transformation Example: Combining Data Sources

- Systems use different keys. Mappings were provided, but not all rows joined properly.
- Web data was difficult to match due to missing customer IDs on certain rows.
- Call center rows incorrectly combined portions of different calls.

Data Transformation Example: Creating Common Units

- Symptom: a combined reason code that could be applied to both web and call data
- Interaction: a unit of work in servicing a customer, comparable between web and call
- Roll up to customer granularity

Data Transformation Example: New Fields

- Follow-up call: was a web interaction followed by a call on a similar topic within a given timeframe?
- Repeat call: did a customer call more than once about the same topic?
- Web adoption rate: to what degree did a customer or group use the web?

Data Transformation Example: Implementation

- Implementation took six man-months: two full-time employees working for three months
- Time extended due to changes in the problem definition and delays in obtaining data
- Transformations take time: one week to run all transformations on a full dataset (100 GB)
- The transformation run needed to be monitored continuously
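A sketch of how a derived field like "follow-up call" might be computed. The schema and the seven-day window are our assumptions for illustration, not details from the case study:

    import pandas as pd

    web = pd.DataFrame({
        "custid": [201, 105],
        "symptom": ["billing", "claims"],
        "ts": pd.to_datetime(["1999-06-01 10:00", "1999-06-02 09:00"]),
    })
    calls = pd.DataFrame({
        "custid": [201, 106],
        "symptom": ["billing", "claims"],
        "ts": pd.to_datetime(["1999-06-03 15:00", "1999-06-04 11:00"]),
    })

    # A web interaction counts as "followed up" if the same customer called
    # about the same symptom within the chosen timeframe (here: 7 days).
    merged = web.merge(calls, on=["custid", "symptom"],
                       suffixes=("_web", "_call"))
    window = pd.Timedelta(days=7)
    merged["followup_call"] = (
        (merged["ts_call"] > merged["ts_web"])
        & (merged["ts_call"] - merged["ts_web"] <= window)
    )
    print(merged[["custid", "symptom", "followup_call"]])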

What is a Data Mining Model?

A data mining model is a description of a specific aspect of a dataset. It produces output values for an assigned set of input values.
Examples: linear regression models, classification models, clustering.

Data Mining Models (Cont.)

A data mining model can be described at two levels:
- Functional level: describes the model in terms of its intended usage. Examples: classification, clustering.
- Representational level: a specific representation of the model. Examples: log-linear model, classification tree, nearest-neighbor method.
Black-box models versus transparent models.

Types of Variables

- Numerical: The domain is ordered and can be represented on the real line (e.g., age, income)
- Nominal or categorical: The domain is a finite set without any natural ordering (e.g., occupation, marital status, race)
- Ordinal: The domain is ordered, but absolute differences between values are unknown (e.g., preference scale, severity of an injury)

Data Mining Techniques

- Supervised learning: classification and regression
- Unsupervised learning: clustering and association rules
- Dependency modeling
- Outlier and deviation detection
- Trend analysis and change detection

Supervised Learning

- F(x): true function (usually not known)
- D: training sample drawn from F(x), given as pairs (x, f(x))

[Figure: a training sample of patient records, each a vector of attributes (age, sex, vital measurements, and many binary indicators), shown once without labels and once with a 0/1 class label per record; a new, unlabeled record is then presented for prediction.]

- G(x): model learned from D
- Goal: E[(F(x) − G(x))^2] is small (near zero) for future samples

Supervised vs. Unsupervised Learning

Supervised:
- Well-defined goal: learn G(x) that is a good approximation to F(x) from the training sample D
- y = F(x): true function; D: labeled training set, D = {(x_i, F(x_i))}
- Learn G(x): a model trained to predict the labels of D
- Goal: E[(F(x) − G(x))^2] ≈ 0
- Well-defined error metrics: accuracy, RMSE, ROC, ...

Unsupervised:
- Generator: true model unknown
- D: unlabeled data sample, D = {x_i}
- Learn: ?
- Goal: ?
- Well-defined criteria: ?

Classification Example

Example training database:
- Two predictor attributes: Age and Car-type (Sport, Minivan, and Truck)
- Age is ordered; Car-type is a categorical attribute
- The class label indicates whether the person bought the product
- The dependent attribute is categorical

Age  Car  Class
20   M    Yes
30   M    Yes
25   T    No
30   S    Yes
40   S    Yes
20   T    No
30   M    Yes
25   M    Yes
40   M    Yes
20   S    No

Regression Example

Example training database:
- Two predictor attributes: Age and Car-type (Sport, Minivan, and Truck)
- Spent indicates how much the person spent during a recent visit to the web site
- The dependent attribute is numerical

[Table: the same ten (Age, Car) records with a numerical Spent amount per record.]

Types of Variables (Review)

- Numerical: The domain is ordered and can be represented on the real line (e.g., age, income)
- Nominal or categorical: The domain is a finite set without any natural ordering (e.g., occupation, marital status, race)
- Ordinal: The domain is ordered, but absolute differences between values are unknown (e.g., preference scale, severity of an injury)

Goals and Requirements

Goals:
- To produce an accurate classifier/regression function
- To understand the structure of the problem

Requirements on the model:
- High accuracy
- Understandable by humans, interpretable
- Fast construction for very large training databases
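For illustration, a scikit-learn sketch that fits a decision tree to the toy classification table above (the one-hot encoding of Car-type is our choice, not part of the slides):

    from sklearn.tree import DecisionTreeClassifier, export_text

    # Toy training database from the slides: Age, Car-type, Class.
    ages = [20, 30, 25, 30, 40, 20, 30, 25, 40, 20]
    cars = ["M", "M", "T", "S", "S", "T", "M", "M", "M", "S"]
    bought = ["Yes", "Yes", "No", "Yes", "Yes",
              "No", "Yes", "Yes", "Yes", "No"]

    # One-hot encode the categorical Car-type attribute by hand.
    X = [[a, int(c == "M"), int(c == "S"), int(c == "T")]
         for a, c in zip(ages, cars)]

    clf = DecisionTreeClassifier(criterion="gini", random_state=0).fit(X, bought)
    print(export_text(clf, feature_names=["Age", "Minivan", "Sport", "Truck"]))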

Different Types of Classifiers

- Decision trees
- Simple Bayesian models
- Nearest-neighbor methods
- Logistic regression
- Neural networks
- Linear discriminant analysis (LDA)
- Quadratic discriminant analysis (QDA)
- Density estimation methods

Decision Trees

A decision tree T encodes d (a classifier or regression function) in the form of a tree. A node t in T without children is called a leaf node; otherwise t is called an internal node.

What are Decision Trees?

[Figure: a decision tree with root split Age (< 30 / >= 30); the < 30 branch splits on Car Type (Minivan → YES; Sports, Truck → NO), and the >= 30 branch is a YES leaf. Beside it, the corresponding partition of the (Age, Car Type) plane, with the Age axis running from 0 to 60.]

Internal Nodes

Each internal node has an associated splitting predicate. The most common are binary predicates. Example predicates:
- Age <= 20
- Profession in {student, teacher}
- 5000*Age + 3*Salary − 10000 > 0

Leaf Nodes

Consider a leaf node t:
- Classification problem: node t is labeled with one class label c in dom(C)
- Regression problem: two choices
  - Piecewise constant model: t is labeled with a constant y in dom(Y)
  - Piecewise linear model: t is labeled with a linear model Y = y_t + Σ a_i X_i

Example

Encoded classifier:
- If (age < 30 and cartype = minivan), then YES
- If (age < 30 and (cartype = sports or cartype = truck)), then NO
- If (age >= 30), then YES
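The encoded classifier is just nested conditionals; a direct Python transcription of the rules above:

    def bought_product(age: int, cartype: str) -> str:
        """Decision tree from the example, written out as rules."""
        if age < 30:
            # The < 30 branch splits on car type.
            return "YES" if cartype == "minivan" else "NO"
        # The >= 30 branch is a pure YES leaf in the training data.
        return "YES"

    print(bought_product(25, "minivan"))  # YES
    print(bought_product(25, "truck"))    # NO
    print(bought_product(40, "sports"))   # YES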

Decision Tree Construction

Top-down tree construction schema:
- Examine the training database and find the best splitting predicate for the root node
- Partition the training database
- Recurse on each child node

Top-Down Tree Construction

BuildTree(Node t, Training database D, Split Selection Method S)
(1) Apply S to D to find the splitting criterion
(2) if (t is not a leaf node)
(3)   Create children nodes of t
(4)   Partition D into children partitions
(5)   Recurse on each partition
(6) endif

Decision Tree Construction

Three algorithmic components:
- Split selection (CART, C4.5, QUEST, CHAID, CRUISE, ...)
- Pruning (direct stopping rule, test dataset pruning, cost-complexity pruning, statistical tests, bootstrapping)
- Data access (CLOUDS, SLIQ, SPRINT, RainForest, BOAT, UnPivot operator)

Split Selection Method

Numerical or ordered attributes: find a split point that separates the (two) classes.

[Figure: Yes/No class labels plotted along the Age axis, with a candidate split point between 30 and 35.]

Split Selection Method (Cont.)

Categorical attributes: how to group?
- (Sport, Truck) vs. (Minivan)
- (Sport) vs. (Truck, Minivan)
- (Sport, Minivan) vs. (Truck)

Pruning Method

- For a tree T, the misclassification rate R(T, P) and the mean-squared error rate depend on P, but not on D.
- The goal is to do well on records randomly drawn from P, not to do well on the records in D.
- If the tree is too large, it overfits D and does not model P. The pruning method selects the tree of the right size.
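A runnable sketch of the BuildTree schema, with a deliberately simple split selection method S (best Gini split on a single numeric attribute; this is our stand-in, not one of the named algorithms):

    # Top-down tree construction, following the BuildTree schema above.
    def gini(labels):
        n = len(labels)
        return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

    def build_tree(D, depth=0, max_depth=3):
        X, y = [r[0] for r in D], [r[1] for r in D]
        # Leaf if pure or if the depth limit is reached.
        if len(set(y)) == 1 or depth == max_depth:
            return {"leaf": max(set(y), key=y.count)}
        # Split selection S: try each threshold, keep the lowest weighted Gini.
        best = None
        for s in sorted(set(X))[:-1]:
            left = [r for r in D if r[0] <= s]
            right = [r for r in D if r[0] > s]
            score = (len(left) * gini([r[1] for r in left])
                     + len(right) * gini([r[1] for r in right])) / len(D)
            if best is None or score < best[0]:
                best = (score, s, left, right)
        if best is None:  # no usable split: make a leaf
            return {"leaf": max(set(y), key=y.count)}
        _, s, left, right = best
        # Partition D and recurse on each child partition.
        return {"split": s,
                "left": build_tree(left, depth + 1, max_depth),
                "right": build_tree(right, depth + 1, max_depth)}

    D = [(20, "No"), (25, "No"), (30, "Yes"), (40, "Yes")]
    print(build_tree(D))  # splits at Age <= 25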

Data Access Method

- Recent development: very large training databases, both in-memory and on secondary storage
- Goal: fast, efficient, and scalable decision tree construction that uses the complete training database

Decision Trees: Summary

- Many applications of decision trees
- Many algorithms are available for split selection, pruning, handling missing values, and data access
- Decision tree construction is still an active research area (decades on!)
- Challenges: performance, scalability, evolving datasets, new applications

Evaluation of Misclassification Error

Problem: in order to quantify the quality of a classifier d, we need to know its misclassification rate RT(d, P). But unless we know P, RT(d, P) is unknown, so we need to estimate it as well as possible.

Resubstitution Estimate

The resubstitution estimate R(d, D) estimates RT(d, P) of a classifier d using D. Let D be the training database with N records:

R(d, D) = (1/N) Σ I(d(r.X) != r.C)

Intuition: R(d, D) is the proportion of training records misclassified by d.
Problem with the resubstitution estimate: it is overly optimistic; classifiers that overfit the training dataset will have a very low resubstitution error.

Test Sample Estimate

- Divide D into D1 and D2
- Use D1 to construct the classifier d
- Then use the resubstitution estimate R(d, D2) to calculate the estimated misclassification error of d
- Unbiased and efficient, but removes D2 from the training dataset

V-fold Cross-Validation

Procedure:
- Construct classifier d from D
- Partition D into V datasets D_1, ..., D_V
- Construct classifier d_i using D \ D_i
- Calculate the estimated misclassification error R(d_i, D_i) of d_i using the test sample D_i

Final misclassification estimate: the weighted combination of the individual misclassification errors:

R(d, D) = (1/V) Σ R(d_i, D_i)
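A sketch of the V-fold procedure using scikit-learn on synthetic data (any classifier could stand in for d; the dataset and parameters are ours):

    from sklearn.datasets import make_classification
    from sklearn.model_selection import KFold
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=200, n_features=5, random_state=0)

    # V-fold cross-validation: train d_i on D \ D_i, test on D_i.
    V = 5
    errors = []
    folds = KFold(n_splits=V, shuffle=True, random_state=0)
    for train_idx, test_idx in folds.split(X):
        d_i = DecisionTreeClassifier(random_state=0).fit(X[train_idx], y[train_idx])
        # R(d_i, D_i): misclassification rate on the held-out fold.
        errors.append(1.0 - d_i.score(X[test_idx], y[test_idx]))

    # Final estimate: average of the per-fold misclassification errors.
    print(sum(errors) / V)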

Cross-Validation: Example

[Figure: the dataset partitioned into folds, with classifiers d_1, d_2, d_3, ... each trained on the remaining folds.]

Cross-Validation

- The misclassification estimate obtained through cross-validation is usually nearly unbiased
- Costly computation (we need to compute d and d_1, ..., d_V); computing each d_i is nearly as expensive as computing d
- The preferred method for estimating the quality of learning algorithms in the machine learning literature

Clustering: Unsupervised Learning

Given:
- A data set D (training set)
- Similarity/distance metric/information

Find: a partitioning of the data into groups of similar/close items.

Similarity?

- Groups of similar customers: similar demographics, similar buying behavior, similar health
- Similar products: similar cost, similar function, similar store
- Similarity is usually domain/problem specific

Distance Between Records

d-dimensional vector space representation and distance metric:

[Figure: records r_1, r_2, ..., r_N shown as d-dimensional attribute vectors, with the question Distance(r_1, r_2) = ?]

Pairwise distances between points (no d-dimensional space):
- Similarity/dissimilarity matrix (upper or lower diagonal)
- Distance: 0 = near, 1 = far. Similarity: 0 = far, 1 = near.

[Figure: an upper-triangular matrix of pairwise distances d(i, j) between records 1 through 10.]

Properties of Distances: Metric Spaces

A metric space is a set S with a global distance function d. For every two points x, y in S, the distance d(x, y) is a nonnegative real number. A metric space must also satisfy:
- d(x, y) = 0 iff x = y
- d(x, y) = d(y, x) (symmetry)
- d(x, y) + d(y, z) >= d(x, z) (triangle inequality)

Minkowski Distance (L_p Norm)

Consider two records x = (x_1, ..., x_d) and y = (y_1, ..., y_d):

d(x, y) = (|x_1 − y_1|^p + |x_2 − y_2|^p + ... + |x_d − y_d|^p)^(1/p)

Special cases:
- p = 1: Manhattan distance, d(x, y) = |x_1 − y_1| + |x_2 − y_2| + ... + |x_d − y_d|
- p = 2: Euclidean distance, d(x, y) = sqrt((x_1 − y_1)^2 + (x_2 − y_2)^2 + ... + (x_d − y_d)^2)

Only Binary Variables

Summarize the two records in a 2x2 contingency table: a = # of variables where both x and y are 1, b = # where x is 1 and y is 0, c = # where x is 0 and y is 1, d = # where both are 0.
- Simple matching coefficient (symmetric): d(x, y) = (b + c) / (a + b + c + d)
- Jaccard coefficient (asymmetric): d(x, y) = (b + c) / (a + b + c)

Nominal and Ordinal Variables

- Nominal: count the number of matching variables. With m = # of matches and d = total # of variables: d(x, y) = (d − m) / d
- Ordinal: bucketize and transform to numerical. For the value x_i of the i-th attribute of record x, use the new value x_i' = x_i / |dom(X_i)|

Mixtures of Variables

- Weigh each variable differently
- This can take the importance of a variable into account (although importance is usually hard to quantify in practice)

Clustering: Informal Problem Definition

Input: a data set of N records, each given as a d-dimensional data feature vector.
Output: a natural, useful partitioning of the data set into a number (k) of clusters and noise, such that we have:
- High similarity of records within each cluster (intra-cluster similarity)
- Low similarity of records between clusters (inter-cluster similarity)

Types of Clustering

- Hard clustering: each object is in one and only one cluster
- Soft clustering: each object has a probability of being in each cluster
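A plain-Python sketch of the distance functions above (function names are ours):

    def minkowski(x, y, p):
        """L_p distance; p=1 is Manhattan, p=2 is Euclidean."""
        return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1.0 / p)

    def jaccard_distance(x, y):
        """Asymmetric binary distance: 0-0 matches (d) carry no information."""
        a = sum(1 for u, v in zip(x, y) if u == 1 and v == 1)
        b = sum(1 for u, v in zip(x, y) if u == 1 and v == 0)
        c = sum(1 for u, v in zip(x, y) if u == 0 and v == 1)
        return (b + c) / (a + b + c) if (a + b + c) else 0.0

    print(minkowski([0, 0], [3, 4], p=2))                # 5.0 (Euclidean)
    print(minkowski([0, 0], [3, 4], p=1))                # 7.0 (Manhattan)
    print(jaccard_distance([1, 1, 0, 0], [1, 0, 1, 0]))  # 2/3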

Clustering Algorithms

Partitioning-based clustering:
- K-means clustering
- K-medoids clustering
- EM (expectation maximization) clustering

Hierarchical clustering:
- Divisive clustering (top down)
- Agglomerative clustering (bottom up)

Density-based methods:
- Regions of dense points separated by sparser regions of relatively low density

K-Means Clustering Algorithm

Initialize k cluster centers
Do
  Assignment step: assign each data point to its closest cluster center
  Re-estimation step: re-compute the cluster centers
While (there are still changes in the cluster centers)

Visualization at: http://www.delft-cluster.nl/textminer/theory/kmeans/kmeans.html

Issues

- Why does K-means work: how does it find the cluster centers?
- Does it find an optimal clustering?
- What are good starting points for the algorithm?
- What is the right number of cluster centers?
- How do we know it will terminate?

K-Means: Distortion

Think of communication between a sender and a receiver. The sender encodes the dataset: x_i → {1, ..., k}; the receiver decodes: j → center_j.

Distortion = Σ_{i=1}^{N} (x_i − center_{encode(i)})^2

A good clustering has minimal distortion.

Properties of the Minimal Distortion

- Property 1: Each data point x_i is encoded by its nearest cluster center center_j. (Why?)
- Property 2: When the algorithm stops, the partial derivative of the distortion with respect to each center attribute is zero.

Property 2 Followed Through

Calculating the partial derivative:

∂/∂center_j Σ_{i=1}^{N} (x_i − center_{encode(i)})^2
  = ∂/∂center_j Σ_{i ∈ Cluster(center_j)} (x_i − center_j)^2
  = Σ_{i ∈ Cluster(center_j)} 2 (center_j − x_i) = 0

Thus at the minimum:

center_j = ( Σ_{i ∈ Cluster(center_j)} x_i ) / |{i ∈ Cluster(center_j)}|
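A minimal NumPy sketch of the k-means loop described above (random initialization from the data points; toy input):

    import numpy as np

    def kmeans(X, k, iters=100, seed=0):
        rng = np.random.default_rng(seed)
        # Initialize k cluster centers at randomly chosen data points.
        centers = X[rng.choice(len(X), size=k, replace=False)]
        for _ in range(iters):
            # Assignment step: each point goes to its nearest center.
            d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
            assign = d.argmin(axis=1)
            # Re-estimation step: each center becomes its cluster's centroid
            # (empty clusters keep their old center).
            new_centers = np.array(
                [X[assign == j].mean(axis=0) if np.any(assign == j)
                 else centers[j] for j in range(k)])
            if np.allclose(new_centers, centers):
                break  # no more changes in the centers: terminate
            centers = new_centers
        distortion = ((X - centers[assign]) ** 2).sum()
        return centers, assign, distortion

    X = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.2, 4.9]])
    print(kmeans(X, k=2))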

K-Means: Minimal Distortion Property

- Property 1: Each data point x_i is encoded by its nearest cluster center center_j.
- Property 2: Each center is the centroid of its cluster.

How do we improve a configuration?
- Change the encoding (encode each point by its nearest cluster center)
- Change the cluster centers (make each center the centroid of its cluster)

K-Means: Minimal Distortion Property (Cont.)

- Termination? Count the number of distinct configurations.
- Optimality? We might get stuck in a local optimum. Try different starting configurations; choose the starting centers smartly.
- Choosing the number of centers? Hard problem. Usually choose the number of clusters that minimizes some criterion.

K-Means: Summary

Advantages:
- Good for exploratory data analysis
- Works well for low-dimensional data
- Reasonably scalable

Disadvantages:
- Hard to choose k
- Often clusters are non-spherical

Market Basket Analysis

Consider a shopping cart filled with several items. Market basket analysis tries to answer the following questions:
- Who makes purchases?
- What do customers buy together?
- In what order do customers purchase items?

Market Basket Analysis (Cont.)

Given: a database of customer transactions, where each transaction is a set of items.
Example: transaction 111 contains the items {Pen, Ink, Milk, Juice}.

TID  CID  Date    Item   Qty
111  201  5/1/99  Pen    2
111  201  5/1/99  Ink    1
111  201  5/1/99  Milk   3
111  201  5/1/99  Juice  6
112  105  6/3/99  Pen    1
112  105  6/3/99  Ink    1
112  105  6/3/99  Milk   1
113  106  6/5/99  Pen    1
113  106  6/5/99  Milk   1
114  201  7/1/99  Pen    2
114  201  7/1/99  Ink    2
114  201  7/1/99  Juice  4

Market Basket Analysis (Cont.)

- Co-occurrences: 80% of all customers purchase items X, Y, and Z together.
- Association rules: 60% of all customers who purchase X and Y also buy Z.
- Sequential patterns: 60% of customers who first buy X also purchase Y within three weeks.

Confidence and Support

We prune the set of all possible association rules using two interestingness measures:
- Confidence of a rule: X => Y has confidence c if P(Y | X) = c
- Support of a rule: X => Y has support s if P(XY) = s

We can also define the support of an itemset (a co-occurrence) XY: XY has support s if P(XY) = s.

Example

Using the transaction table above:
- {Pen} => {Milk}: support 75%, confidence 75%
- {Ink} => {Pen}: support 75%, confidence 100%

Example (Cont.)

- Find all itemsets with support >= 75%.
- Can you find all association rules with support >= 50%?

Market Basket Analysis: Applications

Sample applications:
- Direct marketing
- Fraud detection for medical insurance
- Floor/shelf planning
- Web site layout
- Cross-selling

Applications of Frequent Itemsets

- Market basket analysis
- Association rules
- Classification (especially: text, rare classes)
- Seeds for the construction of Bayesian networks
- Web log analysis
- Collaborative filtering
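A sketch that computes support and confidence directly from the transactions (plain Python; the four itemsets are transcribed from the table above):

    from itertools import combinations

    # The four transactions from the table, as itemsets.
    transactions = [
        {"Pen", "Ink", "Milk", "Juice"},   # TID 111
        {"Pen", "Ink", "Milk"},            # TID 112
        {"Pen", "Milk"},                   # TID 113
        {"Pen", "Ink", "Juice"},           # TID 114
    ]

    def support(itemset):
        return sum(itemset <= t for t in transactions) / len(transactions)

    def confidence(lhs, rhs):
        return support(lhs | rhs) / support(lhs)

    print(support({"Pen", "Milk"}))       # 0.75
    print(confidence({"Ink"}, {"Pen"}))   # 1.0

    # Exercise: all itemsets with support >= 75%.
    items = set().union(*transactions)
    for k in range(1, len(items) + 1):
        for c in combinations(sorted(items), k):
            if support(set(c)) >= 0.75:
                print(set(c), support(set(c)))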

Association Rule Algorithms

- The abstract problem, redux
- Breadth-first search
- Depth-first search

Problem Redux

Abstract:
- A set of items {1, 2, ..., k}
- A database of transactions (itemsets) D = {T_1, T_2, ..., T_n}, with each T_j a subset of {1, 2, ..., k}
- GOAL: Find all itemsets that appear in at least x transactions. ("Appears in" == "is a subset of": if I is a subset of T, then T supports I.)
- For an itemset I, the number of transactions it appears in is called the support of I. x is called the minimum support.

Concrete:
- I = {milk, bread, cheese, ...}
- D = { {milk, bread, cheese}, {bread, cheese, juice}, ... }
- GOAL: Find all itemsets that appear in at least 1000 transactions
- {milk, bread, cheese} supports {milk, bread}

Problem Redux (Cont.)

Definitions:
- An itemset is frequent if it is a subset of at least x transactions. (FI.)
- An itemset is maximally frequent if it is frequent and it does not have a frequent superset. (MFI.)
- GOAL: Given x, find all frequent (maximally frequent) itemsets (to be stored in the FI (MFI)).
- Obvious relationship: MFI is a subset of FI.

Example: D = { {1,2}, {1,2}, {1,2}, {1,2,4} } with minimum support x = 3. {1,2} is frequent and maximally frequent, with Support({1,2}) = 4. All maximal frequent itemsets: {1,2}.

The Itemset Lattice

[Figure: the lattice of all itemsets over {1, 2, 3, 4}, from the empty set down to {1,2,3,4}.]

Frequent Itemsets

[Figure: the same lattice with the frequent itemsets in the upper part and the infrequent itemsets in the lower part.]

Breadth-First Search: 1-Itemsets

[Figure: the lattice with the 1-itemsets currently being examined; legend: infrequent, frequent, currently examined, don't know.]

The Apriori Principle: if I is infrequent, then (I ∪ {x}) is infrequent.
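A compact sketch of level-wise (breadth-first) frequent-itemset mining in the spirit of Apriori, pruning candidates that have an infrequent subset (plain Python; naming is ours):

    from itertools import combinations

    def apriori(D, minsup):
        """Level-wise search: count k-itemsets, prune, extend to (k+1)."""
        items = sorted(set().union(*D))
        freq, level = {}, [frozenset([i]) for i in items]
        while level:
            # Count the support of every candidate at this level.
            counts = {c: sum(c <= T for T in D) for c in level}
            survivors = {c: n for c, n in counts.items() if n >= minsup}
            freq.update(survivors)
            # Apriori principle: extend only frequent itemsets, and keep a
            # candidate only if all of its one-smaller subsets are frequent.
            keys = list(survivors)
            candidates = {a | b for a in keys for b in keys
                          if len(a | b) == len(a) + 1}
            level = [c for c in candidates
                     if all(frozenset(s) in survivors
                            for s in combinations(c, len(c) - 1))]
        return freq

    D = [frozenset(t) for t in
         [{"pen", "ink", "milk", "juice"}, {"pen", "ink", "milk"},
          {"pen", "milk"}, {"pen", "ink", "juice"}]]
    print(apriori(D, minsup=3))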

Breadth-First Search: 2-Itemsets

[Figure: the lattice with the 2-itemsets currently being examined.]

Breadth-First Search: 3-Itemsets

[Figure: the lattice with the 3-itemsets currently being examined.]

Breadth-First Search: Remarks

- We prune infrequent itemsets and avoid counting them
- To find a frequent itemset with k items, we need to count all 2^k of its subsets

Depth-First Search (1)-(3)

[Figures: successive states of a depth-first walk down the lattice; at each step the legend marks itemsets as infrequent, frequent, currently examined, or don't know.]

Depth-First Search (4)-(5)

[Figures: the final states of the depth-first walk down the lattice.]

Depth-First Search: Remarks

- We prune frequent itemsets and avoid counting them (works only for maximal frequent itemsets)
- To find an itemset with k items, we need to count its k prefixes

BFS Versus DFS

- Breadth-first search prunes infrequent itemsets, using antimonotonicity: every superset of an infrequent itemset is infrequent.
- Depth-first search prunes frequent itemsets, using monotonicity: every subset of a frequent itemset is frequent.

Extensions

Imposing constraints:
- Only find rules involving the dairy department
- Only find rules involving expensive products
- Only find expensive rules
- Only find rules with whiskey on the right-hand side
- Only find rules with milk on the left-hand side

Hierarchies on the items.
Calendars (every Sunday, every 1st of the month).

Itemset Constraints

Definition: A constraint is an arbitrary property of itemsets.

Examples:
- The itemset has support greater than 1000.
- No element of the itemset costs more than $40.
- The items in the set average more than $10.

Goal: Find all itemsets satisfying a given constraint P.
Solution 1: If P is a support constraint, use the Apriori algorithm.