Data Mining Cluster Analysis: Basic Concepts and Algorithms
Lecture Notes for Chapter 8, Introduction to Data Mining, by Tan, Steinbach, Kumar

Clustering Algorithms
- K-means and its variants
- Hierarchical clustering
- Density-based clustering

Hierarchical Clustering
- Produces a set of nested clusters organized as a hierarchical tree
- Can be visualized as a dendrogram: a tree-like diagram that records the sequences of merges or splits

Strengths of Hierarchical Clustering
- Do not have to assume any particular number of clusters: any desired number of clusters can be obtained by cutting the dendrogram at the proper level
- The clusters may correspond to meaningful taxonomies: examples in the biological sciences (e.g., animal kingdom, phylogeny reconstruction, ...)

Hierarchical Clustering
- Two main types of hierarchical clustering:
  - Agglomerative: start with the points as individual clusters; at each step, merge the closest pair of clusters until only one cluster (or k clusters) is left
  - Divisive: start with one, all-inclusive cluster; at each step, split a cluster until each cluster contains a single point (or there are k clusters)
- Traditional hierarchical algorithms use a similarity or distance matrix and merge or split one cluster at a time

Agglomerative Clustering Algorithm
- Most popular hierarchical clustering technique
- Basic algorithm is straightforward (a sketch follows below):
  1. Compute the proximity matrix
  2. Let each data point be a cluster
  3. Repeat
  4.   Merge the two closest clusters
  5.   Update the proximity matrix
  6. Until only a single cluster remains
- Key operation is the computation of the proximity of two clusters
- Different approaches to defining the distance between clusters distinguish the different algorithms
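A minimal sketch of the basic agglomerative algorithm above, using SciPy (a library choice assumed here, not prescribed by the slides): pdist computes the proximity matrix, linkage performs the repeat/merge/update loop, and fcluster cuts the resulting dendrogram at a desired number of clusters.

import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

# illustrative 2-D points (not the slides' data)
points = np.array([[1.0, 1.0], [1.5, 1.0], [5.0, 4.0],
                   [5.5, 4.5], [3.0, 3.0]])

proximity = pdist(points, metric='euclidean')    # step 1: proximity matrix (condensed form)
Z = linkage(proximity, method='single')          # steps 2-6: repeatedly merge the closest clusters
labels = fcluster(Z, t=2, criterion='maxclust')  # cut the dendrogram to obtain 2 clusters
print(labels)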
Starting Situation
- Start with clusters of individual points and a proximity matrix
[Figure: individual points p1, p2, ... and their proximity matrix]

Intermediate Situation
- After some merging steps, we have some clusters
[Figure: intermediate clusters C1, C2, ... and the corresponding proximity matrix]
- We want to merge the two closest clusters and update the proximity matrix

After Merging
- The question is: how do we update the proximity matrix? (A single-link update sketch follows below.)
[Figure: proximity matrix with the merged cluster's row and column marked with "?"]

How to Define Inter-Cluster Similarity
- MIN
- MAX
- Group Average
- Distance Between Centroids
- Other methods driven by an objective function (Ward's Method uses squared error)
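To make the proximity-matrix update concrete, here is a small illustration of my own (not from the slides) of the single-link (MIN) update rule: after merging clusters i and j, their distance to any other cluster k is the minimum of the two old distances. The helper name merge_update_single_link is hypothetical.

import numpy as np

def merge_update_single_link(D, i, j):
    """Return the proximity matrix after merging clusters i and j (single link / MIN rule)."""
    keep = [k for k in range(D.shape[0]) if k != j]   # drop cluster j's row and column
    D_new = D[np.ix_(keep, keep)].copy()
    new_i = keep.index(i)                             # merged cluster takes cluster i's slot
    for new_k, k in enumerate(keep):
        if k == i:
            continue
        d = min(D[i, k], D[j, k])                     # MIN rule; complete link (MAX) would use max()
        D_new[new_i, new_k] = D_new[new_k, new_i] = d
    return D_new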
How to Define Inter-Cluster Similarity (continued)
[The list of methods above is repeated on the following slides, each one illustrated on the example proximity matrix]

Cluster Similarity: MIN or Single Link
- Similarity of two clusters is based on the two most similar (closest) points in the different clusters (a sketch follows below)
[Example similarity matrix for points I1, I2, ...]

Hierarchical Clustering: MIN
[Figures: original points, nested clusters, and the corresponding dendrogram]

Strength of MIN
- Can handle non-elliptical shapes
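A short illustrative sketch (assuming SciPy) of the MIN and MAX definitions: both are computed from all pairwise distances between the two clusters, taking the minimum or the maximum respectively. The function names are mine.

import numpy as np
from scipy.spatial.distance import cdist

def single_link(cluster_i, cluster_j):
    return cdist(cluster_i, cluster_j).min()   # distance between the two closest points

def complete_link(cluster_i, cluster_j):
    return cdist(cluster_i, cluster_j).max()   # distance between the two most distant points

ci = np.array([[0.0, 0.0], [1.0, 0.0]])
cj = np.array([[3.0, 0.0], [5.0, 0.0]])
print(single_link(ci, cj), complete_link(ci, cj))   # 2.0 and 5.0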
Limitations of MIN
- Sensitive to noise and outliers
[Figures: original points and the two clusters found]

Cluster Similarity: MAX or Complete Linkage
- Similarity of two clusters is based on the two least similar (most distant) points in the different clusters
[Example similarity matrix for points I1, I2, ...]

Hierarchical Clustering: MAX
[Figures: original points, nested clusters, and the corresponding dendrogram]

Strength of MAX
- Less susceptible to noise and outliers

Limitations of MAX
- Tends to break large clusters
- Biased towards globular clusters
[Figures: original points and the two clusters found]

Cluster Similarity: Group Average
- Proximity of two clusters is the average of pairwise proximity between points in the two clusters:

  proximity(Cluster_i, Cluster_j) = Σ_{p_i ∈ Cluster_i, p_j ∈ Cluster_j} proximity(p_i, p_j) / ( |Cluster_i| × |Cluster_j| )

[Example similarity matrix for points I1, I2, ...]
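A minimal sketch of the group-average formula above (again assuming SciPy; the function name is mine): average all |Cluster_i| × |Cluster_j| pairwise distances.

import numpy as np
from scipy.spatial.distance import cdist

def group_average_proximity(cluster_i, cluster_j):
    pairwise = cdist(cluster_i, cluster_j)              # |Ci| x |Cj| pairwise distances
    return pairwise.sum() / (len(cluster_i) * len(cluster_j))

ci = np.array([[0.0, 0.0], [1.0, 0.0]])
cj = np.array([[3.0, 0.0], [5.0, 0.0]])
print(group_average_proximity(ci, cj))                  # (3 + 5 + 2 + 4) / 4 = 3.5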
Hierarchical Clustering: Group Average
[Figures: nested clusters and the corresponding dendrogram]
- Compromise between single and complete linkage
- Strengths: less susceptible to noise and outliers
- Limitations: biased towards globular clusters

Cluster Similarity: Ward's Method
- Similarity of two clusters is based on the increase in squared error when the two clusters are merged
- Similar to group average if the distance between points is the squared Euclidean distance
- Less susceptible to noise and outliers
- Biased towards globular clusters
- Hierarchical analogue of K-means; can be used to initialize K-means

Hierarchical Clustering: Comparison
[Figures: nested clusters produced by MIN, MAX, Ward's Method, and Group Average on the same data]

Hierarchical Clustering: Example
[Five example points and their Manhattan distance matrix D]
- Use Manhattan distance as the distance measure, e.g., d(i, j) = |x_i − x_j| + |y_i − y_j|
- Look for the smallest non-diagonal entry in the matrix
- Merge the corresponding clusters (a sketch of this step follows below)
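A sketch of the first step of the worked example: build a Manhattan (city-block) distance matrix and find its smallest non-diagonal entry. The coordinates below are illustrative stand-ins, not the slide's data, and SciPy is an assumed library choice.

import numpy as np
from scipy.spatial.distance import pdist, squareform

points = np.array([[2, 0], [7, 4], [5, 1], [1, 3], [4, 6]])   # illustrative, not the slide's data
D = squareform(pdist(points, metric='cityblock'))             # Manhattan distance matrix

M = D.copy()
np.fill_diagonal(M, np.inf)                                   # ignore the zero diagonal
i, j = np.unravel_index(np.argmin(M), M.shape)
print(D)
print('merge clusters', i, 'and', j)                          # the most similar pair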
Hierarchical Clustering: Example (continued)
- The two objects whose entry in the distance matrix has the smallest non-diagonal value are the most similar
- Make a new cluster from them and remove the two single-point clusters
- Compute the distance between the newly formed cluster and each remaining cluster
- For example (single linkage): d({k}, {i, j}) = min{ d(k, i), d(k, j) }
[Updated distance matrices D after each merge]

Example: Data Points and Dendrogram
[Figures: scatter plot of the example points and the resulting single-linkage dendrogram, produced with hclust (*, "single"); a comparable sketch follows below]

Hierarchical Clustering: Time and Space Requirements
- O(N^2) space, since it uses the proximity matrix (N is the number of points)
- O(N^3) time in many cases: there are N steps, and at each step the proximity matrix, of size N^2, must be updated and searched
- Complexity can be reduced to O(N^2 log(N)) time for some approaches

Hierarchical Clustering: Problems and Limitations
- Once a decision is made to combine two clusters, it cannot be undone
- No objective function is directly minimized
- Different schemes have problems with one or more of the following:
  - Sensitivity to noise and outliers
  - Difficulty handling different-sized clusters and convex shapes
  - Breaking large clusters
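A sketch of producing a single-linkage dendrogram. The slides' figure was generated with R's hclust; here SciPy and matplotlib are used as an assumed equivalent, on the same illustrative points as above.

import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram

points = np.array([[2, 0], [7, 4], [5, 1], [1, 3], [4, 6]])   # illustrative points
Z = linkage(points, method='single', metric='cityblock')      # single linkage, Manhattan distance
dendrogram(Z, labels=[str(i + 1) for i in range(len(points))])
plt.ylabel('Height')
plt.title('Single Linkage')
plt.show()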
DBSCAN
- DBSCAN is a density-based algorithm
- Density = number of points within a specified radius (Eps)

DBSCAN: Core, Border, and Noise Points
- A point is a core point if it has at least a specified number of points (MinPts) within Eps, including the point itself; these are points in the interior of a cluster
- A border point has fewer than MinPts within Eps, but is in the neighborhood of a core point
- A noise point is any point that is not a core point or a border point

DBSCAN Algorithm
1. Label all points as core, border, or noise (a sketch follows below)
2. Eliminate noise points
3. Perform clustering on the remaining points:
   - Put an edge between all core points within Eps of each other
   - Make each group of connected core points into a separate cluster
   - Assign each border point to the cluster of one of its associated core points
[Figure: original points and their point types (core, border, and noise) for a given Eps and MinPts]

When DBSCAN Works Well
- Resistant to noise
- Can handle clusters of different shapes and sizes
[Figures: original points and the clusters found]

Potential Problem with DBSCAN
[Figure: clusters A, B, C, D with surrounding noise]
- If we choose MinPts small enough that C and D are found as separate clusters, then A, B, and the surrounding noise will become a single cluster
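A minimal sketch of the labelling step using scikit-learn's DBSCAN (an assumed library choice; the Eps and MinPts values are illustrative): labels_ marks noise as -1 and core_sample_indices_ identifies core points, so border points are the clustered points that are not core.

import numpy as np
from sklearn.cluster import DBSCAN

X = np.random.RandomState(0).rand(200, 2)          # illustrative data
db = DBSCAN(eps=0.1, min_samples=4).fit(X)         # eps = Eps, min_samples = MinPts

labels = db.labels_                                # -1 marks noise points
core = np.zeros(len(X), dtype=bool)
core[db.core_sample_indices_] = True
border = (labels != -1) & ~core                    # in a cluster but not core
print('core:', core.sum(), 'border:', border.sum(), 'noise:', (labels == -1).sum())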
DBSCAN: Determining Eps and MinPts
- Idea: for points in a cluster, their k-th nearest neighbors are at roughly the same distance
- Noise points have their k-th nearest neighbor at a farther distance
- So, plot the sorted distance of every point to its k-th nearest neighbor
- Choose Eps where there is a sharp increase in the distance to the k-th nearest neighbour

Cluster Validity
- For supervised classification we have a variety of measures to evaluate how good our model is: accuracy, precision, recall
- For cluster analysis, the analogous question is how to evaluate the "goodness" of the resulting clusters
- But "clusters are in the eye of the beholder"!
- Then why do we want to evaluate them?
  - To avoid finding patterns in noise
  - To compare clustering algorithms
  - To compare two sets of clusters
  - To compare two clusters

Clusters Found in Random Data
[Figures: random data and the clusters found in it by K-means, DBSCAN, and complete link]

Different Aspects of Cluster Validation
1. Determining the clustering tendency of a set of data, i.e., distinguishing whether non-random structure actually exists in the data
2. Comparing the results of a cluster analysis to externally known results, e.g., to externally given class labels
3. Evaluating how well the results of a cluster analysis fit the data without reference to external information (use only the data)
4. Comparing two sets of clusters to determine which is better
5. Determining the "correct" number of clusters
- For 2, 3, and 4, we can further distinguish whether we want to evaluate the entire clustering or just individual clusters

Measures of Cluster Validity
- Numerical measures that are applied to judge various aspects of cluster validity are classified into the following three types:
  - External Index: used to measure the extent to which cluster labels match externally supplied class labels (e.g., entropy)
  - Internal Index: used to measure the goodness of a clustering structure without reference to external information (e.g., Sum of Squared Error, SSE)
  - Relative Index: used to compare two different clusterings or clusters; often an external or internal index is used for this function, e.g., SSE or entropy

Measuring Cluster Validity via Correlation
- Two matrices:
  - Proximity matrix
  - Incidence matrix: one row and one column for each data point; an entry is 1 if the associated pair of points belongs to the same cluster, and 0 if the pair belongs to different clusters
- Compute the correlation between the two matrices (a sketch follows below)
- Since the matrices are symmetric, only the correlation between n(n−1)/2 entries needs to be calculated
- Strong correlation (negative, when a distance matrix is used) indicates that points that belong to the same cluster are close to each other
- Not a good measure for some density- or contiguity-based clusters
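A sketch of the correlation measure just described, assuming scikit-learn and SciPy: build the 0/1 incidence matrix from the cluster labels, then correlate its upper-triangular entries with the corresponding entries of the distance matrix. With a distance matrix, a good clustering gives a strongly negative correlation.

import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.cluster import KMeans

X = np.random.RandomState(0).rand(100, 2)                        # illustrative data
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

D = squareform(pdist(X))                                          # proximity (distance) matrix
incidence = (labels[:, None] == labels[None, :]).astype(float)    # 1 if same cluster, else 0

iu = np.triu_indices_from(D, k=1)                                 # only the n(n-1)/2 upper entries
corr = np.corrcoef(D[iu], incidence[iu])[0, 1]
print(corr)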
Measuring Cluster Validity via Correlation: Example
- Correlation of incidence and distance matrices for the K-means clusterings of the following two data sets (one with well-separated clusters, one with random data)
[Figures: the two data sets and their correlations]

Correlation Example
[Small worked example: a distance matrix D, two clusters C1 and C2, the corresponding incidence entries, and the resulting strongly negative correlation]

Using Similarity Matrix for Cluster Validation
- Order the similarity matrix with respect to cluster labels and inspect visually (a sketch follows below)
- Clusters in random data are not so crisp
[Figures: reordered similarity matrices for well-separated clusters and for the DBSCAN, K-means, and complete-link clusterings of random data]
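A sketch of the visual check above, assuming scikit-learn, SciPy, and matplotlib: reorder a similarity matrix by cluster label and display it as an image. Well-separated clusters appear as bright blocks along the diagonal, while clusters found in random data look much less crisp.

import numpy as np
import matplotlib.pyplot as plt
from scipy.spatial.distance import pdist, squareform
from sklearn.cluster import KMeans

X = np.random.RandomState(0).rand(100, 2)                      # illustrative (random) data
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

order = np.argsort(labels)                                     # group points by cluster label
D = squareform(pdist(X))
similarity = 1 - D / D.max()                                   # simple distance-to-similarity transform
plt.imshow(similarity[np.ix_(order, order)])
plt.colorbar(label='Similarity')
plt.show()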
Internal Measures: SSE
- Clusters in more complicated figures aren't well separated
- Internal index: used to measure the quality of a clustering without respect to external information, e.g., SSE
- SSE is good for comparing two clusterings or two clusters (average SSE)
- Can also be used to estimate the number of clusters
[Figure: SSE versus K curve for a data set clustered with K-means]

Internal Measures: SSE (continued)
- SSE curve for a more complicated data set
[Figure: the data set and the SSE of the clusters found using K-means]

Framework for Cluster Validity
- Need a framework to interpret any measure: for example, if our measure of evaluation has some particular value, is that good, fair, or poor?
- Statistics provide a framework for cluster validity:
  - The more "atypical" a clustering result is, the more likely it represents valid structure in the data
  - Can compare the values of an index that result from random data or clusterings to those of a clustering result: if the value of the index is unlikely, then the cluster results are valid
  - These approaches are more complicated and harder to understand
- For comparing the results of two different sets of clusters, a framework is less necessary; however, there is the question of whether the difference between two index values is significant

Statistical Framework for SSE
- Example: compare the SSE of a clustering against the SSE of three clusters in random data
[Figure: histogram of the SSE of three clusters found in many sets of random data points drawn over the same range of x and y values]

Statistical Framework for Correlation
- Correlation of incidence and proximity matrices for the K-means clusterings of the following two data sets
[Figures: the two data sets; the correlation is strongly negative for the well-separated data and considerably weaker for the random data]

Internal Measures: Cohesion and Separation
- Cluster cohesion: measures how closely related the objects in a cluster are (example: SSE)
- Cluster separation: measures how distinct or well-separated a cluster is from other clusters (example: squared error)
- Cohesion is measured by the within-cluster sum of squares (SSE):

  WSS = Σ_i Σ_{x ∈ C_i} (x − m_i)²

- Separation is measured by the between-cluster sum of squares:

  BSS = Σ_i |C_i| (m − m_i)²

  where |C_i| is the size of cluster i, m_i is its mean, and m is the overall mean
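A minimal sketch of cohesion (WSS) and separation (BSS) as defined above; the one-dimensional points 1, 2, 4, 5 match the worked example that follows, and the function name is my own.

import numpy as np

def wss_bss(X, labels):
    m = X.mean(axis=0)                                    # overall mean
    wss = bss = 0.0
    for c in np.unique(labels):
        pts = X[labels == c]
        m_c = pts.mean(axis=0)                            # cluster mean
        wss += ((pts - m_c) ** 2).sum()                   # within-cluster sum of squares (cohesion)
        bss += len(pts) * ((m_c - m) ** 2).sum()          # between-cluster sum of squares (separation)
    return wss, bss

X = np.array([[1.0], [2.0], [4.0], [5.0]])
print(wss_bss(X, np.array([0, 0, 0, 0])))   # K = 1: WSS = 10, BSS = 0
print(wss_bss(X, np.array([0, 0, 1, 1])))   # K = 2: WSS = 1,  BSS = 9; WSS + BSS is constant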
Internal Measures: Cohesion and Separation (continued)
- Example (SSE): BSS + WSS = constant
- For the points 1, 2, 4, 5 on a line, with overall mean m = 3 and cluster means m1 = 1.5 and m2 = 4.5:
  - K = 1 cluster: WSS = (1 − 3)² + (2 − 3)² + (4 − 3)² + (5 − 3)² = 10; BSS = 4 × (3 − 3)² = 0; Total = 10 + 0 = 10
  - K = 2 clusters: WSS = (1 − 1.5)² + (2 − 1.5)² + (4 − 4.5)² + (5 − 4.5)² = 1; BSS = 2 × (3 − 1.5)² + 2 × (4.5 − 3)² = 9; Total = 1 + 9 = 10

Internal Measures: Cohesion and Separation (graph-based view)
- A proximity-graph-based approach can also be used for cohesion and separation:
  - Cluster cohesion is the sum of the weights of all links within a cluster
  - Cluster separation is the sum of the weights of links between nodes in the cluster and nodes outside the cluster

Internal Measures: Silhouette Coefficient
- The silhouette coefficient combines ideas of both cohesion and separation, but for individual points, as well as clusters and clusterings
- For an individual point i:
  - Calculate a = average distance of i to the points in its cluster
  - Calculate b = min(average distance of i to the points in another cluster)
  - The silhouette coefficient for the point is then s = 1 − a/b if a < b (or s = b/a − 1 if a ≥ b, not the usual case)
  - Typically between 0 and 1; the closer to 1, the better
- Can calculate the average silhouette width for a cluster or a clustering; it does not necessarily increase with k
- (A sketch of this computation appears at the end of these notes)

External Measures of Cluster Validity: Entropy and Purity
[Table of entropy and purity values for an example clustering]

Final Comment on Cluster Validity
"The validation of clustering structures is the most difficult and frustrating part of cluster analysis. Without a strong effort in this direction, cluster analysis will remain a black art accessible only to those true believers who have experience and great courage."
- Algorithms for Clustering Data, Jain and Dubes
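To close the internal-measures discussion with something runnable, here is a sketch of the silhouette coefficient using scikit-learn (an assumed library choice; data and K are illustrative), computing per-point values and the average silhouette width for the clustering.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_samples, silhouette_score

X = np.random.RandomState(0).rand(200, 2)                      # illustrative data
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

s = silhouette_samples(X, labels)          # per-point coefficients: s = 1 - a/b when a < b
print(s.min(), s.max())
print(silhouette_score(X, labels))         # average silhouette width for the whole clustering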