Clustering and Outlier Detection
Application Examples. Customer segmentation: how to partition customers into groups so that customers within each group are similar, while customers in different groups are dissimilar? Pattern recognition in images: how to identify objects in a satellite image? The pixels of an object are similar to one another in some way.
What Is Clustering? Group data into clusters: objects are similar to one another within the same cluster and dissimilar to the objects in other clusters. Unsupervised learning: no predefined classes. (Figure: two clusters, Cluster 1 and Cluster 2, plus a few outliers.)
More Application Examples. A stand-alone tool: exploring data distribution. A preprocessing step for other algorithms. Pattern recognition, spatial data analysis, image processing, market research, the WWW: clustering documents, clustering web log data to discover groups of similar access patterns.
What Is Good Clustering? High intra-class similarity and low inter-class similarity, depending on the similarity measure. The ability to discover some or all of the hidden patterns.
Requirements of Clustering. Scalability. Ability to deal with various types of attributes. Discovery of clusters with arbitrary shape. Minimal requirements for domain knowledge to determine input parameters.
Requirements of Clustering (Cont'd). Can deal with noise and outliers. Insensitive to the order of input records. Can handle high dimensionality. Incorporation of user-specified constraints. Interpretability and usability.
Data Matrix. For memory-based clustering; also called object-by-variable structure. Represents n objects with p variables (attributes, measures) as a relational table:
$$\begin{pmatrix} x_{11} & \cdots & x_{1f} & \cdots & x_{1p} \\ \vdots & & \vdots & & \vdots \\ x_{i1} & \cdots & x_{if} & \cdots & x_{ip} \\ \vdots & & \vdots & & \vdots \\ x_{n1} & \cdots & x_{nf} & \cdots & x_{np} \end{pmatrix}$$
Dissimilarity Matrix. For memory-based clustering; also called object-by-object structure. Stores proximities of pairs of objects: d(i, j) is the dissimilarity between objects i and j — nonnegative, and close to 0 when the objects are similar.
$$\begin{pmatrix} 0 & & & & \\ d(2,1) & 0 & & & \\ d(3,1) & d(3,2) & 0 & & \\ \vdots & \vdots & \vdots & \ddots & \\ d(n,1) & d(n,2) & \cdots & \cdots & 0 \end{pmatrix}$$
How Good Is Clustering? Dissimilarity/similarity depends on the distance function, and different applications use different functions. Judgment of clustering quality is typically highly subjective.
Types of Data in Clustering. Interval-scaled variables; binary variables; nominal, ordinal, and ratio variables; variables of mixed types.
Interval-valued Variables. Continuous measurements on a roughly linear scale: weight, height, latitude and longitude coordinates, temperature, etc. Measurement units affect the result: a smaller unit → a larger variable range → a larger effect on the result. Remedy: standardization plus background knowledge.
Standardization. Calculate the mean absolute deviation
$$s_f = \frac{1}{n}\left(|x_{1f} - m_f| + |x_{2f} - m_f| + \cdots + |x_{nf} - m_f|\right), \quad m_f = \frac{1}{n}\left(x_{1f} + x_{2f} + \cdots + x_{nf}\right),$$
then calculate the standardized measurement (z-score)
$$z_{if} = \frac{x_{if} - m_f}{s_f}.$$
The mean absolute deviation is more robust than the standard deviation: the effect of outliers is reduced but remains detectable.
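To make the standardization step concrete, here is a minimal Python sketch (not from the slides) that computes the mean absolute deviation and the z-scores column by column; the function name and sample data are illustrative only.

```python
import numpy as np

def standardize(X):
    """Standardize each column of an n-by-p data matrix X using the
    mean absolute deviation rather than the standard deviation."""
    m = X.mean(axis=0)               # m_f: mean of each variable
    s = np.abs(X - m).mean(axis=0)   # s_f: mean absolute deviation
    return (X - m) / s               # z_if = (x_if - m_f) / s_f

X = np.array([[170.0, 60.0], [180.0, 80.0], [160.0, 50.0]])
print(standardize(X))
```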
Similarity and Dissimilarity. Distances are the normally used measures. The Minkowski distance is a generalization:
$$d(i,j) = \sqrt[q]{|x_{i1} - x_{j1}|^q + |x_{i2} - x_{j2}|^q + \cdots + |x_{ip} - x_{jp}|^q} \quad (q > 0).$$
If q = 2, d is the Euclidean distance; if q = 1, the Manhattan distance; if q = ∞, the Chebyshev distance. The weighted distance is
$$d(i,j) = \sqrt[q]{w_1|x_{i1} - x_{j1}|^q + w_2|x_{i2} - x_{j2}|^q + \cdots + w_p|x_{ip} - x_{jp}|^q} \quad (q > 0).$$
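A small sketch of the (unweighted) Minkowski distance and its special cases, assuming NumPy; the values in the usage lines are just a worked example.

```python
import numpy as np

def minkowski(x, y, q):
    """Minkowski distance of order q between two p-dimensional points."""
    if np.isinf(q):
        return np.abs(x - y).max()              # q = infinity: Chebyshev
    return (np.abs(x - y) ** q).sum() ** (1.0 / q)

x, y = np.array([1.0, 2.0]), np.array([4.0, 6.0])
print(minkowski(x, y, 1))       # 7.0: Manhattan
print(minkowski(x, y, 2))       # 5.0: Euclidean
print(minkowski(x, y, np.inf))  # 4.0: Chebyshev
```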
Manhattan and Chebyshev Distance. (Figures: Manhattan distance; Chebyshev distance.) In 2-d, the Chebyshev distance is the chessboard distance: the number of moves a king needs. Pictures from Wikipedia and http://brainking.com/images/rules/chess/02.gif
Properties of Minkowski Distance. Nonnegative: d(i,j) ≥ 0. The distance of an object to itself is 0: d(i,i) = 0. Symmetric: d(i,j) = d(j,i). Triangle inequality: d(i,j) ≤ d(i,k) + d(k,j).
Binary Variables. A contingency table for binary data (rows: object i; columns: object j):

          j = 1   j = 0   Sum
  i = 1     q       r     q+r
  i = 0     s       t     s+t
  Sum      q+s     r+t     p

Symmetric variable: each state carries the same weight; use the invariant dissimilarity $d(i,j) = \frac{r+s}{q+r+s+t}$. Asymmetric variable: the positive value carries more weight; use the noninvariant (Jaccard) dissimilarity $d(i,j) = \frac{r+s}{q+r+s}$.
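The two dissimilarities can be computed directly from the contingency-table counts; the sketch below (illustrative, not from the slides) derives q, r, s, t from two 0/1 vectors.

```python
import numpy as np

def binary_dissimilarity(i, j, symmetric=True):
    """Dissimilarity between two binary objects i and j."""
    i, j = np.asarray(i), np.asarray(j)
    q = np.sum((i == 1) & (j == 1))  # both 1
    r = np.sum((i == 1) & (j == 0))  # 1 in i, 0 in j
    s = np.sum((i == 0) & (j == 1))  # 0 in i, 1 in j
    t = np.sum((i == 0) & (j == 0))  # both 0
    if symmetric:
        return (r + s) / (q + r + s + t)  # invariant dissimilarity
    return (r + s) / (q + r + s)          # Jaccard: 0-0 matches ignored

print(binary_dissimilarity([1, 0, 1, 1], [1, 1, 0, 1], symmetric=False))  # 0.5
```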
Nominal Variables. A generalization of the binary variable in that it can take more than 2 states, e.g., red, yellow, blue, green. Method 1: simple matching, $d(i,j) = \frac{p - m}{p}$, where m is the number of matches and p the total number of variables. Method 2: use a large number of binary variables, creating a new binary variable for each of the M nominal states.
Ordinal Variables. An ordinal variable can be discrete or continuous, and order is important (e.g., rank). It can be treated like an interval-scaled variable: replace $x_{if}$ by its rank $r_{if} \in \{1, \dots, M_f\}$; map the range of each variable onto [0, 1] by replacing the i-th object in the f-th variable by
$$z_{if} = \frac{r_{if} - 1}{M_f - 1};$$
then compute the dissimilarity using methods for interval-scaled variables.
Ratio-scaled Variables. A ratio-scaled variable is a positive measurement on a nonlinear scale, e.g., approximately exponential, such as $Ae^{Bt}$. Treating it like an interval-scaled variable is not a good choice: the scale can be distorted! Instead, apply a logarithmic transformation, $y_{if} = \log(x_{if})$, or treat the values as continuous ordinal data and their ranks as interval-scaled.
Variables of Mixed Types. A database may contain all six types of variables: symmetric binary, asymmetric binary, nominal, ordinal, interval, and ratio. One may use a weighted formula to combine their effects:
$$d(i,j) = \frac{\sum_{f=1}^{p} \delta_{ij}^{(f)} d_{ij}^{(f)}}{\sum_{f=1}^{p} \delta_{ij}^{(f)}}.$$
Dimensionality Reduction. Clustering a high dimensional data set is challenging: the distance between two points could be dominated by noise. Dimensionality reduction: choosing the informative dimensions for clustering analysis. Feature selection: choosing a subset of existing dimensions. Feature construction: constructing a new (small) set of informative attributes.
Variance and Covariance. Given a set of 1-d points, how different are those points?
Variance: $s^2 = \frac{1}{n-1}\sum_{i=1}^{n}(X_i - \bar{X})^2$; standard deviation: $s = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}(X_i - \bar{X})^2}$.
Given a set of 2-d points, are the two dimensions correlated?
Covariance: $\mathrm{cov}(X, Y) = \frac{1}{n-1}\sum_{i=1}^{n}(X_i - \bar{X})(Y_i - \bar{Y})$.
Principal Components. Art work and example from http://csnet.otago.ac.nz/cosc453/student_tutorials/principal_components.pdf
Step 1: Mean Subtraction. Subtract the mean from each dimension for each data point. Intuition: centralizing the data set.
Step 2: Covariance Matrix.
$$C = \begin{pmatrix} \mathrm{cov}(D_1, D_1) & \mathrm{cov}(D_1, D_2) & \cdots & \mathrm{cov}(D_1, D_n) \\ \mathrm{cov}(D_2, D_1) & \mathrm{cov}(D_2, D_2) & \cdots & \mathrm{cov}(D_2, D_n) \\ \vdots & \vdots & \ddots & \vdots \\ \mathrm{cov}(D_n, D_1) & \mathrm{cov}(D_n, D_2) & \cdots & \mathrm{cov}(D_n, D_n) \end{pmatrix}$$
Step 3: Eigenvectors and Eigenvalues. Compute the eigenvectors and the eigenvalues of the covariance matrix. Intuition: find the direction-invariant vectors as candidates for new attributes. The eigenvalues indicate how much the direction-invariant vectors are scaled: the larger the eigenvalue, the better the direction manifests the data variance.
Step 4: Forming New Features. Choose the principal components and form new features. Typically, choose the top-k components.
New Features. NewData = RowFeatureVector x RowDataAdjust. The first principal component is used.
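Putting the four steps together, a minimal sketch (assuming NumPy; the toy data is illustrative):

```python
import numpy as np

def pca(X, k):
    X_adj = X - X.mean(axis=0)           # Step 1: mean subtraction
    C = np.cov(X_adj, rowvar=False)      # Step 2: covariance matrix
    vals, vecs = np.linalg.eigh(C)       # Step 3: eigenvalues/eigenvectors
    order = np.argsort(vals)[::-1]       # sort by decreasing eigenvalue
    feature_vector = vecs[:, order[:k]]  # Step 4: keep the top-k components
    return X_adj @ feature_vector        # new features, one row per object

X = np.array([[2.5, 2.4], [0.5, 0.7], [2.2, 2.9], [1.9, 2.2], [3.1, 3.0]])
print(pca(X, 1))                         # data projected on the 1st component
```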
Clustering Methods. K-means and partitioning methods. Hierarchical clustering. Density-based clustering. Grid-based clustering. Pattern-based clustering. Other clustering methods.
Partitioning Algorithms: Ideas. Partition n objects into k clusters so as to optimize a chosen partitioning criterion. Global optimum: examine all possible partitions — exponentially many, far too expensive! Heuristic methods: k-means and k-medoids. K-means: a cluster is represented by its center. K-medoids or PAM (partition around medoids): each cluster is represented by one of the objects in the cluster.
K-means. Arbitrarily choose k objects as the initial cluster centers. Until no change, do: (re)assign each object to the cluster to which the object is most similar, based on the mean value of the objects in the cluster; update the cluster means, i.e., calculate the mean value of the objects for each cluster.
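A compact sketch of this loop (assuming NumPy, taking the first k points as the arbitrary initial centers, and assuming no cluster becomes empty during the iterations):

```python
import numpy as np

def kmeans(X, k, max_iter=100):
    centers = X[:k].copy()  # arbitrary initial cluster centers
    for _ in range(max_iter):
        # (Re)assign each object to the nearest center
        labels = ((X[:, None] - centers) ** 2).sum(axis=2).argmin(axis=1)
        # Update the cluster means
        new_centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new_centers, centers):  # no change: stop
            break
        centers = new_centers
    return labels, centers
```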
K-Means: Example (figure). With K = 2: arbitrarily choose K objects as initial cluster centers; assign each object to the most similar center; update the cluster means; reassign objects; update the means again, repeating until assignments no longer change.
Pros and Cons of K-means. Relatively efficient: O(tkn), where n is the number of objects, k the number of clusters, and t the number of iterations; normally k, t << n. Often terminates at a local optimum. Applicable only when the mean is defined — what about categorical data? Needs the number of clusters to be specified. Unable to handle noisy data and outliers. Unsuitable for discovering non-convex clusters.
Variations of the K-means. Aspects of variation: selection of the initial k means, dissimilarity calculations, strategies to calculate cluster means. Handling categorical data: k-modes, using the mode instead of the mean (the mode is the most frequent item(s)). A mixture of categorical and numerical data: the k-prototype method. EM (expectation maximization): assign each object a probability of belonging to each cluster.
A Problem of K-means. Sensitive to outliers: objects with extremely large values may substantially distort the distribution of the data. K-medoids: represent a cluster by its most centrally located object. (Figure: an outlier pulls a cluster mean away, while the medoid stays inside the cluster.)
PAM: A K-medoids Method. PAM: Partitioning Around Medoids. Arbitrarily choose k objects as the initial medoids. Until no change, do: (re)assign each object to the cluster of its nearest medoid; randomly select a non-medoid object o′ and compute the total cost S of swapping a medoid o with o′; if S < 0, swap o with o′ to form the new set of k medoids.
Swapping Cost. Measure whether o′ is better than o as a medoid, using the squared-error criterion
$$E = \sum_{i=1}^{k} \sum_{p \in C_i} d(p, o_i)^2.$$
Compute $E_{o'} - E_o$; if it is negative, swapping brings benefit.
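A sketch of the swapping-cost test (illustrative names; medoids are held as indices into the data array):

```python
import numpy as np

def total_cost(X, medoids):
    """E: sum of squared distances of objects to their nearest medoid."""
    d2 = ((X[:, None] - X[medoids]) ** 2).sum(axis=2)
    return d2.min(axis=1).sum()

def swap_if_better(X, medoids, o, o_prime):
    """Try replacing medoid o by non-medoid o_prime; swap only if S < 0."""
    trial = [o_prime if m == o else m for m in medoids]
    S = total_cost(X, trial) - total_cost(X, medoids)  # E_o' - E_o
    return (trial, True) if S < 0 else (medoids, False)
```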
PAM: Example (figure). With K = 2: arbitrarily choose k objects as initial medoids and assign each remaining object to the nearest medoid (total cost = 20); randomly select a non-medoid object o_random and compute the total cost of swapping (total cost = 26); if the quality is improved, swap o and o_random; loop until no change.
Pros and Cons of PAM. PAM is more robust than k-means in the presence of noise and outliers, since medoids are less influenced by outliers. PAM is efficient for small data sets but does not scale well to large data sets: O(k(n−k)²) per iteration. Sampling-based method: CLARA.
CLARA: Clustering LARge Applications (Kaufmann and Rousseeuw, 1990). Built into statistical analysis packages such as S+. Draw multiple samples of the data set, apply PAM on each sample, and return the best clustering. Performs better than PAM on larger data sets. Efficiency depends on the sample size, and a good clustering on samples may not be a good clustering of the whole data set.
CLARANS: Clustering Large Applications based upon RANdomized Search. The problem space is a graph of clusterings: a vertex is a set of k medoids chosen from the n objects, so there are $\binom{n}{k}$ vertices in total. PAM searches the whole graph; CLARA searches some random subgraphs; CLARANS climbs hills: randomly sample a set and select k medoids, consider neighbors of the medoids as candidates for new medoids, use the sample set to verify, and repeat multiple times to avoid bad samples.
Hierarchical Clustering. Group data objects into a tree of clusters. (Figure: objects a, b, c, d, e are merged bottom-up over steps 0-4 in agglomerative clustering (AGNES), and split top-down in divisive clustering (DIANA).)
AGNES (Agglomerative Nesting). Initially, each object is a cluster. Step-by-step cluster merging, until all objects form a cluster. Single-link approach: each cluster is represented by all of the objects in the cluster, and the similarity between two clusters is measured by the similarity of the closest pair of data points belonging to different clusters.
Dendrogram. Shows how clusters are merged hierarchically: it decomposes the data objects into a multilevel nested partitioning (a tree of clusters). A clustering of the data objects: cut the dendrogram at the desired level; each connected component forms a cluster.
DIANA (DIvisive ANAlysis). Initially, all objects are in one cluster. Step-by-step splitting of clusters until each cluster contains only one object. (Figure: a data set is progressively split into finer clusters.)
Distance Measures. Let $m_i$ denote the mean of cluster $C_i$ and $n_i$ the number of objects in it.
Minimum distance: $d_{\min}(C_i, C_j) = \min_{p \in C_i, q \in C_j} d(p, q)$
Maximum distance: $d_{\max}(C_i, C_j) = \max_{p \in C_i, q \in C_j} d(p, q)$
Mean distance: $d_{\mathrm{mean}}(C_i, C_j) = d(m_i, m_j)$
Average distance: $d_{\mathrm{avg}}(C_i, C_j) = \frac{1}{n_i n_j} \sum_{p \in C_i} \sum_{q \in C_j} d(p, q)$
Challenges of Hierarchical Clustering Methods. Hard to choose merge/split points, and merging/splitting can never be undone, so those decisions are critical. Does not scale well: O(n²). Remedy: integrate hierarchical clustering with other techniques — BIRCH, CURE, CHAMELEON, ROCK.
BIRCH: Balanced Iterative Reducing and Clustering using Hierarchies. CF (Clustering Feature) tree: a hierarchical data structure summarizing object information. Clustering objects → clustering the leaf nodes of the CF tree.
Clustering Feature Vector. Clustering Feature: CF = (N, LS, SS), where N is the number of data points, $LS = \sum_{i=1}^{N} O_i$, and $SS = \sum_{i=1}^{N} O_i^2$. Example: for the points (3,4), (2,6), (4,5), (4,7), (3,8), CF = (5, (16,30), (54,190)).
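A tiny sketch reproducing this CF and its additivity (SS is kept per dimension here, matching the example):

```python
import numpy as np

def cf(points):
    P = np.asarray(points, dtype=float)
    return len(P), P.sum(axis=0), (P ** 2).sum(axis=0)  # (N, LS, SS)

points = [(3, 4), (2, 6), (4, 5), (4, 7), (3, 8)]
N, LS, SS = cf(points)
print(N, LS, SS)            # 5 [16. 30.] [ 54. 190.]

# Additivity: the CF of a union is the component-wise sum of the two CFs
N1, LS1, SS1 = cf(points[:2])
N2, LS2, SS2 = cf(points[2:])
print(N1 + N2, LS1 + LS2, SS1 + SS2)  # same as cf(points)
```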
CF-tree in BIRCH. A clustering feature summarizes the statistics for a cluster; many cluster quality measures (e.g., radius, distance) can be derived from it. Additivity: $CF_1 + CF_2 = (N_1 + N_2, LS_1 + LS_2, SS_1 + SS_2)$. A CF tree is a height-balanced tree storing the clustering features for a hierarchical clustering: a nonleaf node has descendants (children) and stores the sums of the CFs of its children.
CF Tree (figure). The root and nonleaf nodes hold entries $CF_i$ with child pointers (branching factor B = 7); leaf nodes hold up to L = 6 CF entries and are chained by prev/next pointers.
Parameters of a CF-tree. Branching factor: the maximum number of children. Threshold: the maximum diameter of the sub-clusters stored at the leaf nodes.
BIRCH Clustering. Phase 1: scan the database to build an initial in-memory CF tree (a multi-level compression of the data that tries to preserve its inherent clustering structure). Phase 2: use an arbitrary clustering algorithm to cluster the leaf nodes of the CF-tree.
Pros & Cons of BIRCH. Linear scalability: good clustering with a single scan, and quality can be further improved by a few additional scans. Can handle only numeric data, and is sensitive to the order of the data records.
Drawbacks of Square-Error-Based Methods. One representative per cluster: good only for convex-shaped clusters of similar size and density. k, the number of clusters, is a parameter: good only if k can be reasonably estimated.
CURE: the Ideas. Each cluster has c representatives: choose c well-scattered points in the cluster and shrink them towards the mean of the cluster by a fraction α. The representatives capture the physical shape and geometry of the cluster. Merge the closest two clusters, where the distance between two clusters is the distance between their two closest representatives.
CURE: the Algorithm. Draw a random sample S. Partition the sample into p partitions and partially cluster each partition. Eliminate outliers: random sampling plus removing clusters that grow too slowly. Cluster the partial clusters until only k clusters are left. Shrink the representatives of the clusters towards the cluster center.
Data Partitioning and Clustering (figure): a sample is partitioned, each partition is partially clustered, and the partial clusters are then merged.
CURE: Shrinking Representative Points (figure). Shrink the multiple representative points towards the gravity center by a fraction α; the representatives capture the shape of the cluster.
Clustering Categorical Data: ROCK. RObust Clustering using linKs, where a link is the number of common neighbors between two points. Uses links, not distance, to measure similarity/proximity. Complexity: $O(n^2 + n\, m_m m_a + n^2 \log n)$, where $m_a$ and $m_m$ are the average and maximum numbers of neighbors. Basic ideas — similarity function and neighbors: let T1 = {1,2,3} and T2 = {3,4,5}; then
$$Sim(T_1, T_2) = \frac{|T_1 \cap T_2|}{|T_1 \cup T_2|} = \frac{|\{3\}|}{|\{1,2,3,4,5\}|} = \frac{1}{5} = 0.2.$$
Limitations. Merging decisions are based on static modeling: the special characteristics of individual clusters are not considered. (Figure: CURE and BIRCH merge one pair of clusters C1 and C2, although the other pair of clusters is more appropriate for merging.)
Chameleon: hierarchical clustering using dynamic modeling. Measures the similarity based on a dynamic model: the interconnectivity and closeness (proximity) between two clusters versus the internal interconnectivity of the clusters and the closeness of items within them. A two-phase algorithm: (1) use a graph partitioning algorithm to cluster objects into a large number of relatively small sub-clusters; (2) find the genuine clusters by repeatedly combining sub-clusters.
Overall Framework of CHAMELEON (figure): Data Set → Construct Sparse Graph → Partition the Graph → Merge Partitions → Final Clusters.
Drawback of Distance-based Methods. Hard to find clusters with irregular shapes. Hard to specify the number of clusters. Heuristic: a cluster must be dense.
Directly Density-Reachable. Eps: maximum radius of the neighborhood; MinPts: minimum number of points in an Eps-neighborhood of a point (e.g., Eps = 1 cm, MinPts = 3). $N_{Eps}(p) = \{q \mid dist(p, q) \le Eps\}$. Core object p: $|N_{Eps}(p)| \ge MinPts$. A point q is directly density-reachable from p iff $q \in N_{Eps}(p)$ and p is a core object.
Density-Based Clustering. Density-reachable: if $p_1 \to p_2$, $p_2 \to p_3$, ..., $p_{n-1} \to p_n$ are each directly density-reachable, then $p_n$ is density-reachable from $p_1$. Density-connected: if points p and q are both density-reachable from some object o, then p and q are density-connected.
DBSCAN. A cluster: a maximal set of density-connected points. Discovers clusters of arbitrary shape in spatial databases with noise. (Figure: core, border, and outlier points; Eps = 1 cm, MinPts = 5.)
DBSCAN: the Algorithm. Arbitrarily select a point p and retrieve all points density-reachable from p w.r.t. Eps and MinPts. If p is a core point, a cluster is formed. If p is a border point, no points are density-reachable from p, and DBSCAN visits the next point of the database. Continue the process until all of the points have been processed.
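A minimal sketch of this procedure (brute-force neighborhood queries for clarity; real implementations use a spatial index; label 0 marks noise/unassigned points):

```python
import numpy as np

def dbscan(X, eps, min_pts):
    n = len(X)
    labels = np.zeros(n, dtype=int)     # 0: noise or not yet assigned
    dist = np.sqrt(((X[:, None] - X[None]) ** 2).sum(axis=2))
    cluster = 0
    for p in range(n):
        if labels[p] != 0:
            continue
        seeds = list(np.where(dist[p] <= eps)[0])
        if len(seeds) < min_pts:
            continue                    # p is not a core point
        cluster += 1
        labels[p] = cluster
        while seeds:                    # expand the cluster
            q = seeds.pop()
            if labels[q] == 0:
                labels[q] = cluster
                neighbors = np.where(dist[q] <= eps)[0]
                if len(neighbors) >= min_pts:   # q is also a core point
                    seeds.extend(neighbors)
    return labels
```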
Problems of DBSCAN. Different clusters may have very different densities. Clusters may be in hierarchies.
OPTICS: A Cluster-ordering Method. OPTICS: Ordering Points To Identify the Clustering Structure. Group points by density connectivity. Hierarchies of clusters. Visualize clusters and the hierarchy.
Ordering Points. Points strongly density-connected should be close to one another. Clusters that are density-connected should be close to one another and form a cluster of clusters.
OPTICS: An Example (figure): a reachability-distance plot (some values undefined, bounded by ε) over the cluster order of the objects; valleys in the plot correspond to clusters.
DENCLUE: Using Density Functions. DENsity-based CLUstEring. Major features: a solid mathematical foundation; good for data sets with large amounts of noise; allows a compact mathematical description of arbitrarily shaped clusters in high-dimensional data sets; significantly faster than existing algorithms (faster than DBSCAN by a factor of up to 45) — but needs a large number of parameters.
DENCLUE: Techniques. Use grid cells, keeping only the cells that actually contain data points, and manage the cells in a tree-based access structure. Influence function: describes the impact of a data point on its neighborhood. The overall density of the data space is the sum of the influence functions of all data points. Cluster by identifying density attractors: the local maxima of the overall density function.
Density Attractor (figure).
Center-defined and Arbitrary Clusters (figure).
A Shrinking-based Approach. Difficulties of multi-dimensional clustering: noise (outliers), clusters of various densities, and shapes that are not well defined. A novel preprocessing concept: shrinking, and a shrinking-based clustering approach.
Intuition & Purpose. For the data points in a data set, what if we could make them move towards the centroid of the natural subgroup they belong to? Natural sparse subgroups become denser, and thus easier to detect; noise is further isolated.
Inspiration: Newton's Universal Law of Gravitation. Any two objects exert a gravitational force of attraction on each other. The direction of the force is along the line joining the objects. The magnitude of the force is directly proportional to the product of the gravitational masses of the objects and inversely proportional to the square of the distance between them:
$$F_g = \frac{G\, m_1 m_2}{r^2},$$
where G is the universal gravitational constant, G = 6.67 × 10⁻¹¹ N·m²/kg².
The Concept of Shrinking. A data preprocessing technique that aims to optimize the inner structure of real data sets: each data point is attracted by other data points and moves in the direction in which the attraction is strongest. Can be applied in different fields.
Applying Shrinking to Clustering. Shrink the natural sparse clusters in the multiattribute hyperspace to make them much denser, so as to facilitate the subsequent cluster-detection process.
Data Shrinking. Each data point moves along the direction of the density gradient, and the data set shrinks towards the inside of the clusters: points are attracted by their neighbors and move to create denser clusters. The process proceeds iteratively, repeated until the data are stabilized or the number of iterations exceeds a threshold.
Approximation & Simplification. Problem: computing the mutual attraction of each pair of data points is too time-consuming, O(n²). Solution: drop Newton's constant G and set m1 and m2 to unit mass; only aggregate the gravitation surrounding each data point; use grids to simplify the computation.
Termination Condition. The average movement of all points in the current iteration is less than a threshold, or the number of iterations exceeds a threshold.
OPTICS on Pendigits Data (figures: before data shrinking; after data shrinking).
Grid-based Clustering Methods. Ideas: using multi-resolution grid data structures; using dense grid cells to form clusters. Several interesting methods: STING, WaveCluster, CLIQUE.
STING: A Statistical Information Grid Approach. Complexity of spatial query answering and clustering: at least O(n) if each point has to be accessed; summarization gives lower complexity. The spatial area is divided into rectangular cells, and levels of cells correspond to different levels of resolution.
Grid and Cells in STING (figure).
STING: Hierarchical Structure of Cells. A cell at a high level is partitioned into a number of smaller cells at the next lower level. Statistical information of each cell is pre-computed and stored for query answering. Parameters of higher-level cells can be easily calculated from the parameters of lower-level cells: count, mean, standard deviation, min, max; type of distribution (normal, uniform, etc.). For each cell in the current level, compute the confidence interval.
STING: Query Answering. A top-down approach: start from a pre-selected layer, typically with a small number of cells; remove the irrelevant cells from further consideration; when finished examining the current layer, proceed to the next lower level; repeat this process until the bottom layer is reached.
STING: Pros and Cons. Complexity O(k), where k is the number of grid cells at the lowest level. Query-independent, easy to parallelize, incremental update. However, all cluster boundaries are either horizontal or vertical; no diagonal boundary is detected.
WaveCluster. A multi-resolution clustering approach. Apply wavelet transformation to the feature space. Both grid-based and density-based. Input parameters: the number of grid cells for each dimension, the wavelet, and the number of applications of the wavelet transform.
Wavelet Decomposition. Wavelets: a math tool for space-efficient hierarchical decomposition of functions. S = [2, 2, 0, 2, 3, 5, 4, 4] can be transformed to Ŝ = [2.75, −1.25, 0.5, 0, 0, −1, −1, 0]. Compression: many small detail coefficients can be replaced by 0's, and only the significant coefficients are retained.
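The transform averages and differences adjacent pairs level by level; this sketch reproduces Ŝ from S (pure Python, signal length assumed to be a power of two):

```python
def haar(signal):
    s = list(signal)
    detail = []
    while len(s) > 1:
        averages = [(s[i] + s[i + 1]) / 2 for i in range(0, len(s), 2)]
        details  = [(s[i] - s[i + 1]) / 2 for i in range(0, len(s), 2)]
        detail = details + detail   # finer coefficients go to the back
        s = averages
    return s + detail               # overall average first

print(haar([2, 2, 0, 2, 3, 5, 4, 4]))
# [2.75, -1.25, 0.5, 0.0, 0.0, -1.0, -1.0, 0.0]
```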
Haar Wavelet Coefficients (figure): the hierarchical decomposition structure (a.k.a. "error tree") for the original frequency distribution 2, 2, 0, 2, 3, 5, 4, 4, with coefficients 2.75, −1.25, 0.5, 0, 0, −1, −1, 0 and +/− signs marking each coefficient's support.
What Is Wavelet Transform? Decomposes a signal into different frequency subbands. Applicable to n-dimensional signals. Data are transformed to preserve relative distance between objects at different levels of resolution. Allows natural clusters to become more distinguishable.
Wavelet Transformation (figure).
Why Wavelet Transform? Uses hat-shaped filters that emphasize regions where points cluster and suppress weaker information on their boundaries: effective removal of outliers; insensitive to noise and to input order. Multi-resolution: detects arbitrarily shaped clusters at different scales. Efficient: complexity O(N). Only applicable to low-dimensional data.
WaveCluster: Method. Summarize the data by imposing a multidimensional grid structure onto the data space: multidimensional spatial data objects are represented in an n-dimensional feature space. Apply the wavelet transform on the feature space to find the dense regions. Applying the wavelet transform multiple times results in clusters at different scales, from fine to coarse.
CLIQUE: Clustering In QUEst. Automatically identifies subspaces of a high-dimensional data space. Both density-based and grid-based.
CLIQUE: the Ideas. Partition each dimension into the same number of equal-length intervals, partitioning an m-dimensional data space into non-overlapping rectangular units. A unit is dense if the number of data points in it exceeds a threshold. A cluster is a maximal set of connected dense units within a subspace.
CLIQUE: the Method. Partition the data space and find the number of points in each cell of the partition. Apriori property: a k-d cell cannot be dense if one of its (k−1)-d projections is not dense. Identify clusters: determine dense units in all subspaces of interest, and connected dense units in those subspaces. Generate a minimal description for the clusters: determine the minimal cover for each cluster.
CLIQUE: An Example (figure): dense units in the (age, salary) and (age, vacation) planes intersect on the age dimension, yielding a candidate cluster in the 3-d (age, salary, vacation) space.
CLIQUE: Pros and Cons. Automatically finds subspaces of the highest dimensionality containing high-density clusters. Insensitive to the order of input; does not presume any canonical data distribution. Scales linearly with the size of the input and scales well with the number of dimensions. However, the clustering result may be degraded at the expense of the simplicity of the method.
Bad Results from CLIQUE. Parts of a cluster may be missed, and a cluster from CLIQUE may contain noise.
Pattern-based Clustering. How to cluster the five objects? Hard to define a global similarity measure.
Pattern-based Clustering. A cluster: a set of objects following the same pattern in a subset of dimensions (Wang et al., 2002).
Is That Subspace Clustering? Looks like a subset of dimensions! Not really!!! Subspace clustering uses a global distance/similarity measure, while pattern-based clustering looks at patterns: a subspace cluster according to a globally defined similarity measure may not follow the same pattern.
Two Distinct Features. No globally defined similarity/distance measure, so it can be used in many cases. Clusters may not be exclusive: an object can appear in multiple clusters. Application: DNA micro-array data analysis using pattern-based clusters — identify subsets of genes whose expression levels change coherently under a subset of conditions; critical in revealing the significant connections in gene regulatory networks.
Objects Follow the Same Pattern? (Figure: the pScore of two objects on dimensions D1 and D2.) The smaller the pScore, the more consistent the objects.
Pattern-based Clusters. pScore: the similarity between two objects $r_x$, $r_y$ on two attributes $a_u$, $a_v$:
$$pScore\left(\begin{bmatrix} r_x.a_u & r_x.a_v \\ r_y.a_u & r_y.a_v \end{bmatrix}\right) = \left| (r_x.a_u - r_y.a_u) - (r_x.a_v - r_y.a_v) \right|.$$
δ-pCluster (R, D): for any objects $r_x, r_y \in R$ and any attributes $a_u, a_v \in D$, $pScore \le \delta$ (δ ≥ 0).
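A direct sketch of this definition (objects stored as dicts of attribute values; names and data are illustrative, taken from the MDS example below):

```python
from itertools import combinations

def pscore(rx, ry, au, av):
    return abs((rx[au] - ry[au]) - (rx[av] - ry[av]))

def is_delta_pcluster(data, R, D, delta):
    """(R, D) is a delta-pCluster iff every object pair and every
    attribute pair has pScore at most delta."""
    return all(pscore(data[x], data[y], u, v) <= delta
               for x, y in combinations(R, 2)
               for u, v in combinations(D, 2))

data = {'x': {'a': 13, 'b': 11, 'c': 9}, 'y': {'a': 7, 'b': 4, 'c': 10}}
print(is_delta_pcluster(data, ['x', 'y'], ['a', 'b'], delta=2))  # True
```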
Maximal pCluster. If (R, D) is a δ-pCluster, then every sub-cluster (R′, D′) with R′ ⊆ R and D′ ⊆ D is a δ-pCluster — an anti-monotonic property. A large pCluster is thus accompanied by many small pClusters, which is inefficacious to enumerate. Idea: mine only the maximal pClusters! A δ-pCluster is maximal if no proper super-cluster is also a δ-pCluster.
Mining Maximal pClusters. Given a cluster threshold δ, an attribute threshold min_a, and an object threshold min_o, the task is to mine the complete set of significant maximal δ-pClusters: a significant δ-pCluster has at least min_o objects on at least min_a attributes.
pClusters and Frequent Itemsets. A transaction database can be modeled as a binary matrix, and a frequent itemset is a sub-matrix of all 1's: a 0-pCluster on binary data, with min_o as the support threshold and min_a requiring no fewer than min_a attributes. Maximal pClusters correspond to closed itemsets. However, frequent itemset mining algorithms cannot be extended straightforwardly to mine pClusters on numeric data.
Where Should We Start from? How about the pClusters having only 2 objects or 2 attributes? A pCluster must have at least 2 objects and 2 attributes, so start by finding the MDSs (maximal dimension sets) of object pairs:

  Attribute:   a   b   c   d   e   f   g   h
  x:          13  11   9   7   9  13   2  15
  y:           7   4  10   1  12   3   4   7
  x − y:       6   7  −1   6  −3  10  −2   8
How to Assemble Larger pClusters? Systematically enumerate every combination of attributes D; for each attribute subset, find the maximal subsets of objects R such that (R, D) is a pCluster; check whether (R, D) is maximal; prune search branches as early as possible. Why attribute-first-object-later? The number of objects >> the number of attributes. Algorithm MaPle (Pei et al., 2003).
Pruning MDSs. Let (R, D) be a significant pCluster. Every attribute in D should appear in the object-pair MDS of every pair $r_x, r_y \in R$ — at least min_o(min_o − 1)/2 object-pair MDSs. Similarly, every object should appear in at least min_a(min_a − 1)/2 attribute-pair MDSs. Objects and attributes less frequent than stated above can be pruned, and the pruning can be applied repeatedly until no more objects or attributes are pruned.
More Pruning Techniques. Only possible attributes should be considered to grow larger pClusters: prune local maximal pClusters having insufficient possible attributes; extract common attributes from the possible attribute set directly; prune non-maximal pClusters.
Gene-Sample-Time Series Data (figure): a 3-d cube of gene expression levels — the level of gene i on sample j at time k — with gene-sample, gene-time, and sample-time matrix views.
Mining GST Microarray Data. Reduce the gene-sample-time series data to gene-sample data, using Pearson's correlation coefficient as the coherence measure.
Basic Approaches. Sample-gene search: enumerate the subsets of samples systematically; for each subset of samples, find the genes that are coherent on those samples. Gene-sample search: enumerate the subsets of genes systematically; for each subset of genes, find the samples on which the genes are coherent.
Basic Tools. A set enumeration tree. Sample-gene search and gene-sample search are not symmetric: there are many genes but only a few samples, and there is no coherence requirement on the samples themselves.
Phenotypes and Informative Genes (figure): across samples 1-7, the informative genes (genes 1-4) show expression patterns that distinguish the phenotypes, while the noninformative genes (genes 5-7) do not.
The Phenotype Mining Problem. Input: a microarray matrix and k. Output: phenotypes and informative genes — a partition of the samples into k exclusive subsets (the phenotypes) and the informative genes discriminating the phenotypes. Methods: machine learning, heuristic search, mutual reinforcing adjustment.
Requirements. The expression levels of each informative gene should be similar over the samples within each phenotype, and should display a clear dissimilarity between each pair of phenotypes.
Intra-phenotype Consistency. In a subset of genes (candidate informative genes), does every gene have good consistency on a set of samples? Average of the variances of the genes — the smaller the intra-phenotype consistency value, the better:
$$Con(G', S') = \frac{1}{|G'|\,(|S'|-1)} \sum_{g_i \in G'} \sum_{s_j \in S'} \left( w_{i,j} - \overline{w_{i,S'}} \right)^2.$$
Inter-phenotype Divergence. How well can a subset of genes (candidate informative genes) discriminate two phenotypes of samples? The average difference between the phenotypes — the larger the inter-phenotype divergence, the better:
$$Div(G', S_1, S_2) = \frac{\sum_{g_i \in G'} \left| \overline{w_{i,S_1}} - \overline{w_{i,S_2}} \right|}{|G'|}.$$
Quality of Phenotypes and Informative Genes.
$$\Omega = \sum_{S_i, S_j\ (1 \le i, j \le K;\ i \ne j)} \frac{Div(G', S_i, S_j)}{Con(G', S_i) + Con(G', S_j)}.$$
The higher the value, the better the quality.
Heuristic Search. Start from a random subset of genes and an arbitrary partition of the samples, then iteratively adjust the partition and the gene set toward a better solution. For each possible adjustment, compute ΔΩ: for each gene, try a possible insert/remove; for each sample, try the best movement. If ΔΩ > 0, conduct the adjustment; if ΔΩ < 0, conduct the adjustment with probability $e^{\Delta\Omega / T(i)}$, where T(i) is a decreasing simulated-annealing function and i is the iteration number; T(i) = 1/(i+1) in our implementation.
Possible Adjustments. Insert a gene; remove a gene; move a sample.
Disadvantages of Heuristic Search. Samples and genes are examined and adjusted with equal chances, but the number of samples << the number of genes, so samples should play a more important role. Outliers among the samples should be handled specifically, since they highly interfere with the quality measure and the adjustment decisions.
Mutual Reinforcing Adjustment. A two-phase approach: an iteration phase and a refinement phase. Mutual reinforcement: use the gene partition to improve the sample partition, and use the sample partition to improve the gene partition.
Fuzzy Clustering. Each point $x_i$ has a probability $w_{ij}$ of belonging to cluster $C_j$. Requirements: for each point $x_i$, $\sum_{j=1}^{k} w_{ij} = 1$; for each cluster $C_j$, $0 < \sum_{i=1}^{m} w_{ij} < m$.
Fuzzy C-Means (FCM). Select an initial fuzzy pseudo-partition, i.e., assign values to all the $w_{ij}$. Repeat: compute the centroid of each cluster using the fuzzy pseudo-partition; recompute the fuzzy pseudo-partition, i.e., the $w_{ij}$. Until the centroids do not change (or the change is below some threshold).
Critical Details. Optimization is on the sum of the squared error (SSE):
$$SSE(C_1, \dots, C_k) = \sum_{j=1}^{k} \sum_{i=1}^{m} w_{ij}^p\, dist(x_i, c_j)^2.$$
Computing centroids:
$$c_j = \sum_{i=1}^{m} w_{ij}^p x_i \Big/ \sum_{i=1}^{m} w_{ij}^p.$$
Updating the fuzzy pseudo-partition:
$$w_{ij} = \frac{\left(1/dist(x_i, c_j)^2\right)^{\frac{1}{p-1}}}{\sum_{q=1}^{k} \left(1/dist(x_i, c_q)^2\right)^{\frac{1}{p-1}}},$$
which for p = 2 reduces to
$$w_{ij} = \frac{1/dist(x_i, c_j)^2}{\sum_{q=1}^{k} 1/dist(x_i, c_q)^2}.$$
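A sketch of the FCM loop implementing these updates (assuming NumPy; the small constant guards against division by zero when a point coincides with a centroid):

```python
import numpy as np

def fcm(X, k, p=2, max_iter=100, tol=1e-6):
    rng = np.random.default_rng(0)
    W = rng.dirichlet(np.ones(k), size=len(X))   # rows sum to 1
    for _ in range(max_iter):
        Wp = W ** p
        centroids = (Wp.T @ X) / Wp.sum(axis=0)[:, None]
        d2 = ((X[:, None] - centroids[None]) ** 2).sum(axis=2) + 1e-12
        W_new = (1.0 / d2) ** (1.0 / (p - 1))    # update rule above
        W_new /= W_new.sum(axis=1, keepdims=True)
        if np.abs(W_new - W).max() < tol:        # pseudo-partition stable
            return W_new, centroids
        W = W_new
    return W, centroids
```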
Choice of p. As p → 1, FCM behaves like traditional k-means. When p is larger, the cluster centroids approach the global centroid of all data points: the partition becomes fuzzier as p increases.
Effectiveness (figure).
Mixture Models. A cluster can be modeled as a probability distribution; practically, assume a distribution can be approximated well by a multivariate normal distribution. Multiple clusters form a mixture of different probability distributions, and a data set is a set of observations from such a mixture of models.
Object Probability. Suppose there are k clusters and a set X of m objects. Let the j-th cluster have parameters $\theta_j = (\mu_j, \sigma_j)$, and let the probability that a point belongs to the j-th cluster be $w_j$, with $w_1 + \cdots + w_k = 1$. The probability of an object x is
$$prob(x \mid \Theta) = \sum_{j=1}^{k} w_j\, p_j(x \mid \theta_j),$$
and the probability of the data set is
$$prob(X \mid \Theta) = \prod_{i=1}^{m} prob(x_i \mid \Theta) = \prod_{i=1}^{m} \sum_{j=1}^{k} w_j\, p_j(x_i \mid \theta_j).$$
Example. With Gaussian components
$$prob(x_i \mid \theta) = \frac{1}{\sqrt{2\pi}\,\sigma} e^{-\frac{(x-\mu)^2}{2\sigma^2}}$$
and $\theta_1 = (-4, 2)$, $\theta_2 = (4, 2)$:
$$prob(x \mid \Theta) = \frac{1}{2\sqrt{2\pi}} e^{-\frac{(x+4)^2}{8}} + \frac{1}{2\sqrt{2\pi}} e^{-\frac{(x-4)^2}{8}}.$$
Maximum Likelihood Estimation. Maximum likelihood principle: if we know a set of objects comes from one distribution but do not know the parameters, we can choose the parameters maximizing the probability. Maximize
$$prob(X \mid \Theta) = \prod_{i=1}^{m} \frac{1}{\sqrt{2\pi}\,\sigma} e^{-\frac{(x_i-\mu)^2}{2\sigma^2}},$$
or, equivalently, maximize
$$\log prob(X \mid \Theta) = -\sum_{i=1}^{m} \frac{(x_i - \mu)^2}{2\sigma^2} - 0.5\, m \log 2\pi - m \log \sigma.$$
EM Algorithm. Expectation-Maximization algorithm. Select an initial set of model parameters. Repeat: Expectation step — for each object, calculate the probability that it belongs to each distribution $\theta_j$, i.e., $prob(x_i \mid \theta_j)$; Maximization step — given the probabilities from the expectation step, find the new estimates of the parameters that maximize the expected likelihood. Until the parameters are stable.
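For the 1-d Gaussian mixture of the earlier example, the two steps look like this (a sketch; the initialization and iteration count are illustrative choices, not prescribed by the slides):

```python
import numpy as np

def em_gmm_1d(x, k, n_iter=100):
    rng = np.random.default_rng(0)
    mu = rng.choice(x, k, replace=False)   # initial model parameters
    sigma = np.full(k, x.std())
    w = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: probability that each object belongs to each distribution
        dens = w * np.exp(-(x[:, None] - mu) ** 2 / (2 * sigma ** 2)) \
                 / (np.sqrt(2 * np.pi) * sigma)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from the expected memberships
        Nk = resp.sum(axis=0)
        w = Nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / Nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / Nk)
    return w, mu, sigma
```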
Advantages and Disadvantages. Mixture models are more general than k-means and fuzzy c-means: clusters can be characterized by a small number of parameters, and the results may satisfy the statistical assumptions of the generative models. However: computationally expensive; needs large data sets; hard to estimate the number of clusters.
Constrained Clustering. Constraints exist in data space or in user queries. Example: ATM allocation with bridges and highways — people can cross a highway by a bridge.
Clustering With Obstacle Objects (figures: not taking obstacles into account vs. taking obstacles into account).
Outlier Analysis. "One person's noise is another person's signal." Outliers: objects considerably dissimilar from the remainder of the data. Examples: credit card fraud, Michael Jordan, intrusions, etc. Applications: credit card fraud detection, telecom fraud detection, intrusion detection, customer segmentation, medical analysis, etc.
Statistical Outlier Analysis. Discordancy/outlier tests: 100+ tests have been proposed, varying by the data distribution, the distribution parameters, the number of outliers, and the types of expected outliers (e.g., upper or lower outliers in an ordered sample).
Drawbacks of Statistical Approaches. Most tests are univariate, unsuitable for multidimensional datasets. All are distribution-based, but the distribution is unknown in many applications.
Depth-based Methods. Organize data objects in layers with various depths; the shallow layers are more likely to contain outliers. Examples: peeling, depth contours. Complexity $O(N^{k/2})$ for k-d datasets: unacceptable for k > 2.
Depth-based Outliers: Example (figure).
Distance-based Outliers. A DB(p, D)-outlier is an object O in a dataset T such that at least a fraction p of the objects in T lie at a distance greater than D from O. Algorithms for mining distance-based outliers: the index-based algorithm, the nested-loop algorithm, and the cell-based algorithm.
Index-based Algorithms. To find the DB(p, D)-outliers in T with n objects, find the objects having at most n(1−p) neighbors within radius D. Algorithm: build a standard multidimensional index; search around every object O with radius D; if O has at least n(1−p) neighbors, O is not an outlier; else, output O.
Pros and Cons of Index-based Algorithms. Complexity of search: O(kN²); more scalable with dimensionality than depth-based approaches. But building the right index is very costly: the index-building cost renders index-based algorithms non-competitive.
A Naïve Nested-loop Algorithm. For j = 1 to n: set count_j = 0; for k = 1 to n, if dist(j, k) < D then count_j++; if count_j ≤ n(1−p), output j as an outlier. No explicit index construction, but O(N²) and many database scans.
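A runnable version of this pseudocode (assuming NumPy and Euclidean distance):

```python
import numpy as np

def nested_loop_outliers(X, p, D):
    """DB(p, D)-outliers: objects with at most n(1-p) neighbors within D."""
    n = len(X)
    outliers = []
    for j in range(n):
        count = 0
        for k in range(n):
            if k != j and np.linalg.norm(X[j] - X[k]) < D:
                count += 1
        if count <= n * (1 - p):
            outliers.append(j)
    return outliers
```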
Optimizations of the Nested-loop Algorithm. Once an object has at least n(1−p) neighbors within radius D, there is no need to count further. Use the data in main memory as much as possible, reducing the number of database scans.
A Block-based Nested-loop Algorithm. Partition the available memory into two blocks of equal size. Fill the first block, compare the objects within it, and mark non-outliers. Read the remaining objects into the second block and compare objects across the first and second blocks, marking non-outliers; only the potential outliers in the first block need comparing. Output the unmarked objects in the first block as outliers. Swap the names of the first and second blocks, until all objects have been processed.
Example (figure). The dataset has four blocks A, B, C, and D. Compare objects within A (1 read); compare A to B, C, and D (3 reads); compare B to B, C, A, and D (2 reads); compare C to C, D, A, and B (2 reads); compare D to itself and to A (0 reads); compare D to B and C (2 reads). 10 blocks are read in total: 10/4 = 2.5 passes over T.
Analysis of the Nested-loop Algorithm. With the data set partitioned into n blocks, the total number of block reads is n + (n−2)(n−1) = n² − 2n + 2, i.e., about n−2 passes over the dataset — many passes for large datasets.
A Cell-based Approach. Partition the space into cells of side length $\frac{D}{2\sqrt{2}}$. The first and second layers of neighbors of cell $C_{x,y}$ are
$$L_1(C_{x,y}) = \{C_{u,v} \mid |u - x| \le 1,\ |v - y| \le 1,\ C_{u,v} \ne C_{x,y}\},$$
$$L_2(C_{x,y}) = \{C_{u,v} \mid |u - x| \le 3,\ |v - y| \le 3,\ C_{u,v} \notin L_1(C_{x,y}),\ C_{u,v} \ne C_{x,y}\}.$$
Let M = n(1−p). More than M objects in $C_{x,y}$ → no outlier in $C_{x,y}$. More than M objects in $C_{x,y} \cup L_1(C_{x,y})$ → no outlier in $C_{x,y}$. At most M objects in $C_{x,y} \cup L_1(C_{x,y}) \cup L_2(C_{x,y})$ → all objects in $C_{x,y}$ are outliers.
The Algorithm. Quantize each object to its appropriate cell. Label all cells having more than M objects red: no outlier in red cells. Label the L1 neighbors of red cells, and cells having more than M objects in $C_{x,y} \cup L_1(C_{x,y})$, pink: no outlier in pink cells. Output the objects in cells having at most M objects in $C_{x,y} \cup L_1(C_{x,y}) \cup L_2(C_{x,y})$ as outliers. For the remaining cells, check their objects one by one.
Analysis of the Cell-based Approach. A typical cell has 8 L1 neighbors and 40 L2 neighbors. Complexity: O(m + N), where m is the number of cells. In the worst case there is no red/pink cell at all, but in practice there are many. The method can be easily generalized to k-d space and to other distance functions.
Handling Large Datasets. Where do we need page reads? Quantizing objects to cells takes 1 pass; the object-pairwise step takes many passes. Idea: keep only the white objects in main memory, where white objects are those in cells that are neither red nor pink.
Reducing Disk Reads. Classify the pages in the dataset: A — contain some white objects; B — contain no white objects but contain L2 neighbors of white objects; C — other pages. The object-pairwise step does not need class C pages. By scheduling pages of classes A and B properly, at most 3 passes are needed.
Density-based Local Outlier (figure). Both o1 and o2 are outliers; distance-based methods can detect o1, but not o2.
K-Distance. The k-distance of p is the distance between p and its k-th nearest neighbor. In a set D of points, for any positive integer k, the k-distance of object p, denoted k-distance(p), is the distance d(p, o) between p and an object o such that: for at least k objects o′ ∈ D \ {p}, d(p, o′) ≤ d(p, o); and for at most (k−1) objects o′ ∈ D \ {p}, d(p, o′) < d(p, o).
K-distance Neighborhood. Given the k-distance of p, the k-distance neighborhood of p contains every object whose distance from p is not greater than the k-distance: $N_{k\text{-distance}(p)}(p) = \{q \in D \setminus \{p\} \mid d(p, q) \le k\text{-distance}(p)\}$, also written $N_k(p)$.
Reachability Distance. The reachability distance of object p with respect to object o is reach-dist_k(p, o) = max{k-distance(o), d(p, o)}. If p and o are close to each other, reach-dist(p, o) is the k-distance of o; otherwise, it is the real distance.
Local Reachability Density and Local Outlier Factor. The local reachability density of p is the inverse of the average reachability distance from p to its k-distance neighbors:
$$lrd_k(p) = 1 \Big/ \frac{\sum_{o \in N_k(p)} \text{reach-dist}_k(p, o)}{|N_k(p)|},$$
and the local outlier factor of p compares the density of p's neighbors with p's own density:
$$LOF_k(p) = \frac{\sum_{o \in N_k(p)} \frac{lrd_k(o)}{lrd_k(p)}}{|N_k(p)|}.$$
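A sketch of the whole LOF pipeline (brute-force distances; ties in the k-th distance are ignored for simplicity):

```python
import numpy as np

def lof(X, k):
    n = len(X)
    dist = np.sqrt(((X[:, None] - X[None]) ** 2).sum(axis=2))
    np.fill_diagonal(dist, np.inf)
    knn = np.argsort(dist, axis=1)[:, :k]       # k nearest neighbors of each p
    k_dist = dist[np.arange(n), knn[:, -1]]     # k-distance(p)
    lrd = np.empty(n)
    for p in range(n):
        # reach-dist_k(p, o) = max(k-distance(o), d(p, o))
        reach = np.maximum(k_dist[knn[p]], dist[p, knn[p]])
        lrd[p] = k / reach.sum()                # local reachability density
    # LOF: average ratio of the neighbors' lrd to p's own lrd
    return np.array([lrd[knn[p]].mean() / lrd[p] for p in range(n)])
```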
Examples (figures).
Summary. Clustering: group objects based on similarity. Various similarity measures. Clustering algorithms — Partitioning: k-means, k-medoids, CLARANS; Hierarchical: BIRCH, CURE; Density-based: DBSCAN, CLIQUE, OPTICS; Grid-based: STING, WaveCluster; Pattern-based: pClustering, MaPle; Model-based: AutoClass, DENCLUE, COBWEB. Outlier detection and analysis.
Reading List
R. Ng and J. Han. Efficient and Effective Clustering Methods for Spatial Data Mining. VLDB'94.
T. Zhang, R. Ramakrishnan, and M. Livny. BIRCH: An Efficient Data Clustering Method for Very Large Databases. SIGMOD'96.
G. Karypis, E.-H. Han, and V. Kumar. CHAMELEON: A Hierarchical Clustering Algorithm Using Dynamic Modeling. IEEE Computer, Aug. 1999.
W. Wang, J. Yang, and R. Muntz. STING: A Statistical Information Grid Approach to Spatial Data Mining. VLDB'97.
M. Ankerst, M. Breunig, H.-P. Kriegel, and J. Sander. OPTICS: Ordering Points To Identify the Clustering Structure. SIGMOD'99.
T. Johnson, I. Kwok, and R. Ng. Fast Computation of 2-Dimensional Depth Contours. KDD'98, pages 224-228.
E. M. Knorr, R. T. Ng, and V. Tucakov. Distance-Based Outliers: Algorithms and Applications. VLDB Journal, 8(3-4):237-253, 2000.
D. Jiang, J. Pei, M. Ramanathan, C. Tang, and A. Zhang. Mining Coherent Gene Clusters from Gene-Sample-Time Microarray Data. KDD'04.
J. Pei, X. Zhang, M. Cho, H. Wang, and P. S. Yu. MaPle: A Fast Algorithm for Maximal Pattern-based Clustering. ICDM'03.