Data Mining Cluster Analysis: Advanced Concepts and Algorithms. Lecture Notes for Chapter 9 of Introduction to Data Mining, by Tan, Steinbach, and Kumar.



Hierarchical Clustering: Revisited
- Creates nested clusters.
- Agglomerative clustering algorithms vary in how the proximity of two clusters is computed:
  - MIN (single link): susceptible to noise and outliers.
  - MAX / group average: may not work well with non-globular clusters.
- The CURE algorithm tries to handle both problems.
- Often starts with a proximity matrix; a type of graph-based algorithm.
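As an aside (not in the original slides), MIN, MAX, and group average correspond to the "single", "complete", and "average" methods in SciPy's hierarchical clustering API; a minimal sketch on a toy data set of our own:

```python
# Minimal sketch comparing single-link (MIN), complete-link (MAX), and
# group-average agglomerative clustering with SciPy; the toy data
# (two elongated blobs plus an outlier) is ours, not the slides'.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
X = np.vstack([rng.normal([0, 0], [3.0, 0.3], (50, 2)),   # elongated blob 1
               rng.normal([0, 5], [3.0, 0.3], (50, 2)),   # elongated blob 2
               [[20.0, 2.5]]])                             # a single outlier

for method in ("single", "complete", "average"):           # MIN, MAX, group avg
    Z = linkage(X, method=method)
    labels = fcluster(Z, t=2, criterion="maxclust")        # cut into 2 clusters
    print(method, np.bincount(labels)[1:])                 # cluster sizes
```

Single link typically splits off the lone outlier while chaining the two elongated blobs together, illustrating the MIN/MAX trade-off the slide describes.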

CURE: Another Hierarchical Approach
- Uses a number of points to represent a cluster.
- Representative points are found by selecting a constant number of points from a cluster and then shrinking them toward the center of the cluster.
- Cluster similarity is the similarity of the closest pair of representative points from different clusters.

CURE
- Shrinking the representative points toward the center helps avoid problems with noise and outliers.
- CURE is better able to handle clusters of arbitrary shapes and sizes.
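The shrinking step is simple to sketch. Below is an illustration of CURE's representative-point idea, not the full algorithm; the parameter names n_rep and alpha are ours:

```python
# Illustrative sketch of CURE's representative points: pick a fixed number
# of well-scattered points per cluster, shrink them toward the centroid by
# a factor alpha, and measure cluster distance between closest representatives.
import numpy as np

def representative_points(cluster, n_rep=5, alpha=0.3):
    centroid = cluster.mean(axis=0)
    # Greedy scatter: start with the point farthest from the centroid, then
    # repeatedly add the point farthest from the representatives so far.
    reps = [cluster[np.argmax(np.linalg.norm(cluster - centroid, axis=1))]]
    while len(reps) < min(n_rep, len(cluster)):
        d = np.min([np.linalg.norm(cluster - r, axis=1) for r in reps], axis=0)
        reps.append(cluster[np.argmax(d)])
    reps = np.asarray(reps)
    return reps + alpha * (centroid - reps)   # shrink; damps noise/outliers

def cluster_distance(c1, c2):
    r1, r2 = representative_points(c1), representative_points(c2)
    # Similarity of two clusters = distance of their closest representatives.
    return np.min(np.linalg.norm(r1[:, None, :] - r2[None, :, :], axis=2))
```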

Experimental Results: CURE
- [Figure: CURE clustering results; picture from the CURE paper by Guha, Rastogi, and Shim.]

Experimental Results: CURE (centroid, single link)
- [Figure: comparison of CURE against centroid-based and single-link hierarchical clustering; picture from the CURE paper by Guha, Rastogi, and Shim.]

CURE Cannot Handle Differing Densities
- [Figure: original points vs. the CURE result on clusters of differing density.]

Graph-Based Clustering
- Graph-based clustering uses the proximity graph:
  - Start with the proximity matrix.
  - Consider each point as a node in a graph.
  - Each edge between two nodes has a weight equal to the proximity between the two points.
  - Initially the proximity graph is fully connected.
  - MIN (single link) and MAX (complete link) can be viewed as starting with this graph.
- In the simplest case, clusters are the connected components of the graph (see the sketch below).
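A minimal sketch of that simplest case, thresholding the proximity graph at an assumed radius eps and taking connected components as clusters:

```python
# Threshold the proximity graph and take connected components as clusters.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def graph_clusters(X, eps):
    # Dense pairwise Euclidean distances: the proximity matrix.
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    # Keep only edges between distinct points closer than eps.
    A = csr_matrix((D < eps) & ~np.eye(len(X), dtype=bool))
    n_clusters, labels = connected_components(A, directed=False)
    return n_clusters, labels
```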

Graph-Based Clustering: Sparsification
- The amount of data that needs to be processed is drastically reduced:
  - Sparsification can eliminate more than 99% of the entries in a proximity matrix.
  - The amount of time required to cluster the data is drastically reduced.
  - The size of the problems that can be handled is increased.
- Clustering may work better:
  - Sparsification techniques keep the connections to a point's most similar (nearest) neighbors while breaking the connections to less similar points.
  - The nearest neighbors of a point tend to belong to the same class as the point itself.
  - This reduces the impact of noise and outliers and sharpens the distinction between clusters.
- Sparsification facilitates the use of graph partitioning algorithms (or algorithms based on them), e.g., Chameleon and hypergraph-based clustering.
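Sparsification by k nearest neighbors is a few lines of NumPy; a sketch, with k as the only parameter:

```python
# Keep each point's k strongest links in a similarity matrix, zero the rest.
import numpy as np

def sparsify(similarity, k):
    S = similarity.astype(float).copy()
    np.fill_diagonal(S, -np.inf)             # ignore self-similarity
    keep = np.argsort(S, axis=1)[:, -k:]     # k most similar neighbors per row
    sparse = np.zeros_like(S)
    rows = np.arange(len(S))[:, None]
    sparse[rows, keep] = S[rows, keep]       # retain only those entries
    return sparse                            # note: not symmetric in general
```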

Sparsification in the Clustering Process
- [Figure: where sparsification fits in the end-to-end clustering pipeline.]

Limitations of Current Merging Schemes
- Existing merging schemes in hierarchical clustering algorithms are static in nature:
  - MIN or CURE: merge two clusters based on their closeness (minimum distance).
  - GROUP AVERAGE: merge two clusters based on their average connectivity.

Limitations of Current Merging Schemes (continued)
- [Figure: four example cluster pairs, (a) through (d).]
- Closeness schemes will merge (a) and (b).
- Average connectivity schemes will merge (c) and (d).

Chameleon: Clustering Using Dynamic Modeling
- Adapts to the characteristics of the data set to find the natural clusters.
- Uses a dynamic model to measure the similarity between clusters:
  - The main properties are the relative closeness and relative interconnectivity of the clusters.
  - Two clusters are combined if the resulting cluster shares certain properties with the constituent clusters.
  - The merging scheme preserves self-similarity.
- One area of application is spatial data.

Characteristics of Spatial Data Sets
- Clusters are defined as densely populated regions of the space.
- Clusters have arbitrary shapes and orientations and non-uniform sizes.
- Densities differ across clusters, and density varies within a cluster.
- Special artifacts (streaks) and noise are present.
- The clustering algorithm must address these characteristics while requiring minimal supervision.

Chameleon: Steps
- Preprocessing step: represent the data by a graph.
  - Given a set of points, construct the k-nearest-neighbor (k-nn) graph to capture the relationship between a point and its k nearest neighbors.
  - The concept of neighborhood is captured dynamically (even if a region is sparse).
- Phase 1: use a multilevel graph partitioning algorithm on the graph to find a large number of clusters of well-connected vertices.
  - Each cluster should contain mostly points from one true cluster, i.e., be a sub-cluster of a real cluster.
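The k-nn graph of the preprocessing step is one call in scikit-learn (an assumed dependency, not something the slides use):

```python
# Build the k-nearest-neighbor graph for Chameleon's preprocessing step.
import numpy as np
from sklearn.neighbors import kneighbors_graph

X = np.random.rand(200, 2)     # toy 2-D spatial data
# mode="distance" stores edge weights; because each point links to its k
# nearest neighbors, neighborhood size adapts to local density, so even
# sparse regions get meaningful edges.
knn_graph = kneighbors_graph(X, n_neighbors=10, mode="distance")
```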

Chameleon: Steps (continued)
- Phase 2: use hierarchical agglomerative clustering to merge sub-clusters.
  - Two clusters are combined if the resulting cluster shares certain properties with the constituent clusters.
  - Two key properties are used to model cluster similarity (written out below):
    - Relative interconnectivity: the absolute interconnectivity of two clusters normalized by the internal connectivity of the clusters.
    - Relative closeness: the absolute closeness of two clusters normalized by the internal closeness of the clusters.
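For reference, the standard definitions from the Chameleon paper (Karypis, Han, and Kumar, 1999), which the slide paraphrases; EC denotes edge-cut weight in the k-nn graph and \bar{S}_{EC} the average weight of the cut edges:

```latex
RI(C_i, C_j) = \frac{\lvert EC(C_i, C_j)\rvert}
                    {\tfrac{1}{2}\bigl(\lvert EC(C_i)\rvert + \lvert EC(C_j)\rvert\bigr)}
\qquad
RC(C_i, C_j) = \frac{\bar{S}_{EC}(C_i, C_j)}
                    {\frac{\lvert C_i\rvert}{\lvert C_i\rvert + \lvert C_j\rvert}\,\bar{S}_{EC}(C_i)
                   + \frac{\lvert C_j\rvert}{\lvert C_i\rvert + \lvert C_j\rvert}\,\bar{S}_{EC}(C_j)}
```

Here EC(C_i) is the edge cut that bisects cluster C_i (its internal connectivity) and EC(C_i, C_j) is the cut separating the two clusters; Chameleon merges the pair maximizing a combination such as RI(C_i, C_j) · RC(C_i, C_j)^α.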

Experimental Results: CHAMELEON vs. CURE
- [Figures: CHAMELEON results on several spatial data sets, compared with CURE run for 10, 15, 9, and 15 clusters on the same data.]

Shared Near Neighbor Approach
- SNN graph: the weight of an edge is the number of shared neighbors between its two endpoints, given that the endpoints are connected.
- [Figure: points i and j are connected and share 4 neighbors, so edge (i, j) gets weight 4.]

Creating the SNN Graph
- [Figure: a sparse similarity graph next to the shared near neighbor graph derived from it.]
- Sparse graph: link weights are similarities between neighboring points.
- Shared near neighbor graph: link weights are numbers of shared nearest neighbors.
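A direct sketch of the construction, assuming the common convention that two points must be in each other's k-nn lists before shared neighbors are counted:

```python
# SNN graph: edge weight = number of shared k-nearest neighbors, kept only
# between points that appear in each other's neighbor lists.
import numpy as np

def snn_graph(X, k):
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(D, np.inf)
    knn = np.argsort(D, axis=1)[:, :k]          # k nearest neighbors per point
    neighbor_sets = [set(row) for row in knn]
    W = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            # Require mutual k-nn membership before counting shared neighbors.
            if j in neighbor_sets[i] and i in neighbor_sets[j]:
                W[i, j] = W[j, i] = len(neighbor_sets[i] & neighbor_sets[j])
    return W
```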

ROCK (RObust Clustering using linKs)
- A clustering algorithm for data with categorical and Boolean attributes.
  - A pair of points is defined to be neighbors if their similarity is greater than some threshold.
  - Uses a hierarchical clustering scheme to cluster the data:
    1. Obtain a sample of points from the data set.
    2. Compute the link value for each pair of points, i.e., transform the original similarities (computed by the Jaccard coefficient) into similarities that reflect the number of shared neighbors between points.
    3. Perform agglomerative hierarchical clustering on the data, using the number of shared neighbors as the similarity measure and maximizing the shared-neighbors objective function.
    4. Assign the remaining points to the clusters that have been found.

Jarvis-Patrick Clustering
- First, the k-nearest neighbors of all points are found.
  - In graph terms, this can be regarded as breaking all but the k strongest links from each point to other points in the proximity graph.
- A pair of points is placed in the same cluster if the two points share more than T neighbors and each is in the other's k-nearest-neighbor list.
  - For instance, we might choose a nearest-neighbor list of size 20 and put points in the same cluster if they share more than 10 near neighbors.
- Jarvis-Patrick clustering is too brittle: as the next two slides show, moving the threshold by one can change the clustering dramatically. (A sketch follows this list.)
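Jarvis-Patrick reduces to a few lines once the SNN graph exists; a sketch reusing snn_graph from above, with k and T as in the slide's example:

```python
# Jarvis-Patrick: connect points sharing more than T of their k nearest
# neighbors (with mutual k-nn membership), then take connected components.
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def jarvis_patrick(X, k=20, T=10):
    W = snn_graph(X, k)     # shared-neighbor counts (defined above)
    A = csr_matrix(W > T)   # keep edges with more than T shared neighbors
    _, labels = connected_components(A, directed=False)
    return labels
```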

When Jarvis-Patrick Works Reasonably Well
- [Figure: original points vs. Jarvis-Patrick clustering with 6 shared neighbors required out of 20.]

When Jarvis-Patrick Does NOT Work Well
- [Figure, two panels: the smallest threshold T that does not merge the clusters, and a threshold of T - 1.]

SNN Clustering Algorithm
1. Compute the similarity matrix. This corresponds to a similarity graph with data points for nodes and edges whose weights are the similarities between data points.
2. Sparsify the similarity matrix by keeping only the k most similar neighbors. This corresponds to keeping only the k strongest links of the similarity graph.
3. Construct the shared nearest neighbor graph from the sparsified similarity matrix. (At this point, we could apply a similarity threshold and find the connected components to obtain the clusters: the Jarvis-Patrick algorithm.)
4. Find the SNN density of each point. Using a user-specified parameter, Eps, count the points that have an SNN similarity of Eps or greater to that point. This count is the SNN density of the point.
5. Find the core points. Using a user-specified parameter, MinPts, find the core points, i.e., all points that have an SNN density greater than MinPts.
6. Form clusters from the core points. If two core points are within a radius Eps of each other, they are placed in the same cluster.
7. Discard all noise points. All non-core points that are not within a radius Eps of a core point are discarded.
8. Assign all non-noise, non-core points to clusters. This can be done by assigning such points to the nearest core point.
(Note that steps 4-8 are DBSCAN, run on SNN similarity.)
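A compact sketch of steps 1-8, reusing snn_graph from above; here Eps is an SNN-similarity threshold (a shared-neighbor count), not a Euclidean radius:

```python
# SNN clustering sketch: SNN density plus DBSCAN-style core/noise logic.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def snn_dbscan(X, k=20, eps=7, min_pts=5):
    W = snn_graph(X, k)                      # steps 1-3: SNN graph
    density = (W >= eps).sum(axis=1)         # step 4: SNN density per point
    core = density >= min_pts                # step 5: core points
    # Step 6: cluster core points that are SNN-similar by at least eps.
    core_idx = np.flatnonzero(core)
    A = csr_matrix(W[np.ix_(core_idx, core_idx)] >= eps)
    _, core_labels = connected_components(A, directed=False)
    labels = np.full(len(X), -1)             # -1 marks noise (step 7)
    labels[core_idx] = core_labels
    # Step 8: attach each non-core point to its most similar core point,
    # provided that similarity reaches eps; otherwise it stays noise.
    for i in np.flatnonzero(~core):
        if len(core_idx):
            j = core_idx[np.argmax(W[i, core_idx])]
            if W[i, j] >= eps:
                labels[i] = labels[j]
    return labels
```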

SNN Density
- [Figure, four panels: (a) all points, (b) high SNN density, (c) medium SNN density, (d) low SNN density.]

SNN Clustering Can Handle Differing Densities
- [Figure: original points vs. SNN clustering on clusters of differing density.]

SNN Clustering Can Handle Other Difficult Situations
- [Figure: SNN clustering on a data set with complex cluster shapes.]

Finding Clusters of Time Series in Spatio-Temporal Data
- [Figures: "26 SLP clusters via shared nearest neighbor clustering (100 NN, 1982-1994)" and "SNN density of SLP time series data", plotted on latitude-longitude world maps.]

Features and Limitations of SNN Clustering
- Does not cluster all the points.
- The complexity of SNN clustering is high:
  - O(n × time to find each point's neighbors within Eps), which in the worst case is O(n²).
  - For lower dimensions, there are more efficient ways to find the nearest neighbors, e.g., R*-trees and k-d trees (see the sketch below).
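For example, in low dimensions a k-d tree brings the neighbor-finding phase down to roughly O(n log n) overall; a sketch using SciPy's standard cKDTree:

```python
# k-d tree neighbor search: avoids the O(n^2) all-pairs distance matrix.
import numpy as np
from scipy.spatial import cKDTree

X = np.random.rand(10_000, 2)      # toy low-dimensional data
tree = cKDTree(X)
dists, idx = tree.query(X, k=21)   # 20 nearest neighbors per point, plus itself
knn = idx[:, 1:]                   # drop the self-match in column 0
```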