L15: Statistical Clustering
CSCE 666 Pattern Analysis - Ricardo Gutierrez-Osuna - CSE@TAMU

Outline
- Similarity measures
- Criterion functions
- Cluster validity
- Flat clustering algorithms: k-means, ISODATA
- Hierarchical clustering algorithms: divisive, agglomerative

Non-parametric unsupervised learning
- In L14 we introduced the concept of unsupervised learning: a collection of pattern recognition methods that learn without a teacher
- Two types of clustering methods were mentioned: parametric and non-parametric
  - Parametric unsupervised learning is equivalent to density estimation with a mixture of (Gaussian) components; through the use of EM, the identity of the component that originated each data point was treated as a missing feature
  - Non-parametric unsupervised learning considers no density functions; instead, we are concerned with finding natural groupings (clusters) in a dataset
- Non-parametric clustering involves three steps
  - Defining a measure of (dis)similarity between examples
  - Defining a criterion function for clustering
  - Defining an algorithm to minimize (or maximize) the criterion function

Proximity measures
- Definition of metric
  - A measuring rule d(x, y) for the distance between two vectors x and y is considered a metric if it satisfies the following properties
    - $d(x, y) \ge 0$
    - $d(x, y) = 0$ iff $x = y$
    - $d(x, y) = d(y, x)$
    - $d(x, y) \le d(x, z) + d(z, y)$
  - If the metric has the property $d(ax, ay) = |a| \, d(x, y)$, then it is called a norm and denoted $d(x, y) = \lVert x - y \rVert$
- The most general form of distance metric is the power norm
  $\lVert x - y \rVert_{p/r} = \left( \sum_{i=1}^{D} |x_i - y_i|^p \right)^{1/r}$
  - p controls the weight placed on any dimension dissimilarity, whereas r controls the distance growth of patterns that are further apart
  - Notice that the definition of norm must be relaxed, allowing a power factor for |a| [Marques de Sá, 2001]

Most commonly used metrics are derived from the power norm
- Minkowski metric ($L_k$ norm): $\lVert x - y \rVert_k = \left( \sum_{i=1}^{D} |x_i - y_i|^k \right)^{1/k}$
  - The choice of an appropriate value of k depends on the amount of emphasis that you would like to give to the larger differences between dimensions
- Manhattan or city-block distance ($L_1$ norm): $\lVert x - y \rVert_{cb} = \sum_{i=1}^{D} |x_i - y_i|$
  - When used with binary vectors, the $L_1$ norm is known as the Hamming distance
- Euclidean norm ($L_2$ norm): $\lVert x - y \rVert_e = \left( \sum_{i=1}^{D} |x_i - y_i|^2 \right)^{1/2}$
- Chebyshev distance ($L_\infty$ norm): $\lVert x - y \rVert_c = \max_{1 \le i \le D} |x_i - y_i|$
- [Figure: contours of equal distance in the $(x_1, x_2)$ plane for the $L_1$, $L_2$, and $L_\infty$ norms]
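
The slide gives only the formulas; as a small illustration (not from the original deck), the NumPy sketch below evaluates the Minkowski family on a pair of vectors. The function name minkowski and the example vectors are arbitrary.

    import numpy as np

    def minkowski(x, y, k):
        # L_k (Minkowski) distance: (sum_i |x_i - y_i|^k)^(1/k)
        return np.sum(np.abs(x - y) ** k) ** (1.0 / k)

    x = np.array([1.0, 2.0, 3.0])
    y = np.array([2.0, 0.0, 3.5])

    print(minkowski(x, y, 1))        # Manhattan / city-block (L1)  -> 3.5
    print(minkowski(x, y, 2))        # Euclidean (L2)               -> ~2.29
    print(np.max(np.abs(x - y)))     # Chebyshev (L-infinity)       -> 2.0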

Other metrics are also popular
- Quadratic distance: $d(x, y) = (x - y)^T B (x - y)$
  - The Mahalanobis distance is a particular case of this distance
- Canberra metric (for non-negative features): $d_{CA}(x, y) = \sum_{i=1}^{D} \frac{|x_i - y_i|}{x_i + y_i}$
- Non-linear distance: $d_N(x, y) = \begin{cases} 0 & \text{if } d_e(x, y) < T \\ H & \text{otherwise} \end{cases}$
  - where T is a threshold and H is a distance
  - An appropriate choice for H and T for feature selection is that they should satisfy $H = \frac{\Gamma(P/2)}{2\,\pi^{P/2}\,T^{P}}$ and that T satisfies the unbiasedness and consistency conditions of the Parzen estimator: $T^P N \to \infty$ and $T \to 0$ as $N \to \infty$ [Webb, 1999]

The above distance metrics are measures of dissimilarity; some measures of similarity also exist
- Inner product: $s_{INNER}(x, y) = x^T y$
  - The inner product is used when the vectors x and y are normalized, so that they have the same length
- Correlation coefficient: $s_{CORR}(x, y) = \frac{\sum_{i=1}^{D} (x_i - \bar{x})(y_i - \bar{y})}{\left[ \sum_{i=1}^{D} (x_i - \bar{x})^2 \, \sum_{i=1}^{D} (y_i - \bar{y})^2 \right]^{1/2}}$
- Tanimoto measure (for binary-valued vectors): $s_T(x, y) = \frac{x^T y}{\lVert x \rVert^2 + \lVert y \rVert^2 - x^T y}$
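
As a companion sketch (again not part of the slides), the correlation coefficient and Tanimoto measure can be written directly from the formulas above; the helper names are mine.

    import numpy as np

    def correlation(x, y):
        # Pearson correlation coefficient between two feature vectors
        xc, yc = x - x.mean(), y - y.mean()
        return np.dot(xc, yc) / np.sqrt(np.dot(xc, xc) * np.dot(yc, yc))

    def tanimoto(x, y):
        # Tanimoto similarity for binary-valued vectors
        xy = np.dot(x, y)
        return xy / (np.dot(x, x) + np.dot(y, y) - xy)

    a = np.array([1, 0, 1, 1, 0], dtype=float)
    b = np.array([1, 0, 1, 0, 1], dtype=float)
    print(tanimoto(a, b))      # 0.5
    print(correlation(a, b))   # ~0.17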

Criterion function for clustering
- Once a (dis)similarity measure has been determined, we need to define a criterion function to be optimized
- The most widely used clustering criterion is the sum-of-square-error
  $J_{MSE} = \sum_{i=1}^{C} \sum_{x \in \omega_i} \lVert x - \mu_i \rVert^2 \quad \text{where} \quad \mu_i = \frac{1}{N_i} \sum_{x \in \omega_i} x$
  - This criterion measures how well the data set $X = \{x^{(1)}, \dots, x^{(N)}\}$ is represented by the cluster centers $\mu = \{\mu^{(1)}, \dots, \mu^{(C)}\}$ (C < N)
  - Clustering methods that use this criterion are called minimum variance
- Other criterion functions exist, based on the scatter matrices used in Linear Discriminant Analysis
  - For details, refer to [Duda, Hart and Stork, 2001]
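
A direct transcription of $J_{MSE}$ into code can help make the notation concrete; the sketch below (NumPy, with a hypothetical j_mse helper, not from the slides) sums the squared distances of every example to the mean of its assigned cluster.

    import numpy as np

    def j_mse(X, labels):
        # Sum-of-square-error: squared distance of each example to the
        # mean of the cluster it is assigned to, summed over all clusters
        total = 0.0
        for c in np.unique(labels):
            members = X[labels == c]
            mu = members.mean(axis=0)
            total += np.sum((members - mu) ** 2)
        return total

    X = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [5.0, 6.0]])
    labels = np.array([0, 0, 1, 1])
    print(j_mse(X, labels))   # 0.25 + 0.25 + 0.25 + 0.25 = 1.0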

Cluster validity
- The validity of the final cluster solution is highly subjective
  - This is in contrast with supervised training, where a clear objective function is known: the Bayes risk
  - Note that the choice of (dis)similarity measure and criterion function will have a major impact on the final clustering produced by the algorithms
- Example
  - Which are the meaningful clusters in these cases? How many clusters should be considered?
- A number of quantitative methods for cluster validity are proposed in [Theodoridis and Koutroumbas, 1999]

Iterative optimization
- Once a criterion function has been defined, we must find a partition of the data set that minimizes the criterion
  - Exhaustive enumeration of all partitions, which guarantees the optimal solution, is unfeasible
    - For example, a problem with 5 clusters and 100 examples yields about $10^{67}$ partitionings
- The common approach is to proceed in an iterative fashion
  1) Find some reasonable initial partition, and then
  2) Move samples from one cluster to another in order to reduce the criterion function
  - These iterative methods produce sub-optimal solutions but are computationally tractable
- We will consider two groups of iterative methods
  - Flat clustering algorithms
    - These algorithms produce a set of disjoint clusters
    - Two algorithms are widely used: k-means and ISODATA
  - Hierarchical clustering algorithms
    - The result is a hierarchy of nested clusters
    - These algorithms can be broadly divided into agglomerative and divisive approaches
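
The $10^{67}$ figure is not derived on the slide, but it is consistent with the Stirling number of the second kind, which counts the ways to partition N examples into C non-empty clusters:

    S(N, C) = \frac{1}{C!}\sum_{k=0}^{C}(-1)^{k}\binom{C}{k}(C-k)^{N}
            \;\approx\; \frac{C^{N}}{C!} \quad (N \gg C),
    \qquad
    S(100, 5) \approx \frac{5^{100}}{5!} \approx 6.6 \times 10^{67}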

The k-means algorithm
- Method
  - k-means is a simple clustering procedure that attempts to minimize the criterion function $J_{MSE}$ in an iterative fashion
    $J_{MSE} = \sum_{i=1}^{C} \sum_{x \in \omega_i} \lVert x - \mu_i \rVert^2 \quad \text{where} \quad \mu_i = \frac{1}{N_i} \sum_{x \in \omega_i} x$
  1. Define the number of clusters
  2. Initialize clusters by an arbitrary assignment of examples to clusters, or an arbitrary set of cluster centers (some examples used as centers)
  3. Compute the sample mean of each cluster
  4. Reassign each example to the cluster with the nearest mean
  5. If the classification of all samples has not changed, stop; else go to step 3
- It can be shown (L14) that k-means is a particular case of the EM algorithm for mixture models
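
A minimal NumPy sketch of the five steps above (Lloyd-style iteration; not the course's reference implementation, and the initialization and empty-cluster handling are deliberately naive):

    import numpy as np

    def kmeans(X, C, max_iter=100, seed=0):
        # Steps 1-2: choose C and initialize the centers with C random examples
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), size=C, replace=False)]
        labels = np.full(len(X), -1)
        for _ in range(max_iter):
            # Step 4: reassign each example to the cluster with the nearest mean
            dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
            new_labels = dists.argmin(axis=1)
            # Step 5: stop when the classification of the samples no longer changes
            if np.array_equal(new_labels, labels):
                break
            labels = new_labels
            # Step 3: recompute the sample mean of each (non-empty) cluster
            for k in range(C):
                if np.any(labels == k):
                    centers[k] = X[labels == k].mean(axis=0)
        return labels, centers

    # Toy usage: two well-separated Gaussian blobs
    X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 5])
    labels, centers = kmeans(X, C=2)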

Vector quantization
- An application of k-means to signal processing and communication
  - Univariate signal values are usually quantized into a number of levels, typically a power of 2 so the signal can be transmitted in binary format
  - The same idea can be extended for multiple channels
    - We could quantize each separate channel
    - Instead, we can obtain a more efficient coding if we quantize the overall multidimensional vector by finding a number of multidimensional prototypes (cluster centers)
  - The set of cluster centers is called a codebook, and the problem of finding this codebook is normally solved using the k-means algorithm
- [Figure: a continuous signal and its quantized version over time, with codewords, quantization noise, signal vectors, and the Voronoi region around each codeword]
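
SciPy's scipy.cluster.vq module implements exactly this codebook idea; the toy example below (my own, not from the slides) learns a 16-codeword codebook for a two-channel signal and measures the quantization error.

    import numpy as np
    from scipy.cluster.vq import kmeans2, vq

    # Toy "signal": 1000 two-channel samples
    X = np.random.randn(1000, 2)

    # Learn a codebook of 16 codewords (4 bits per 2-D vector) with k-means
    codebook, _ = kmeans2(X, 16)

    # Encode: map every vector to the index of its nearest codeword
    indices, _ = vq(X, codebook)

    # Decode: reconstruct each vector as its codeword
    X_hat = codebook[indices]
    print("mean squared quantization error:",
          np.mean(np.sum((X - X_hat) ** 2, axis=1)))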

ISODATA
- Iterative Self-Organizing Data Analysis (ISODATA) is an extension of the k-means algorithm with some heuristics to automatically select the number of clusters
- ISODATA requires the user to select a number of parameters
  - N_MIN_EX: minimum number of examples per cluster
  - N_D: desired (approximate) number of clusters
  - $\sigma_S^2$: maximum spread parameter for splitting
  - D_MERGE: maximum distance separation for merging
  - N_MERGE: maximum number of clusters that can be merged
- The algorithm works in an iterative fashion
  1) Perform k-means clustering
  2) Split any clusters whose samples are sufficiently dissimilar
  3) Merge any two clusters sufficiently close
  4) Go to 1)

1. Select an initial number of clusters N_C and use the first N_C examples as cluster centers $\mu_k$, k = 1..N_C
2. Assign each example to the closest cluster
   a. Exit the algorithm if the classification of the examples has not changed
3. Eliminate clusters that contain fewer than N_MIN_EX examples
   a. Assign their examples to the remaining clusters based on minimum distance
   b. Decrease N_C accordingly
4. For each cluster k
   a. Compute the center $\mu_k$ as the sample mean of all the examples assigned to that cluster
   b. Compute the average distance between examples and cluster centers
      $d_{avg} = \frac{1}{N} \sum_{k=1}^{N_C} N_k d_k$ where $d_k = \frac{1}{N_k} \sum_{x \in \omega_k} \lVert x - \mu_k \rVert$
   c. Compute the variance of each axis and find the axis n with maximum variance $\sigma_k^2(n)$
5. For each cluster k with $\sigma_k^2(n) > \sigma_S^2$, if {$d_k > d_{avg}$ and $N_k > 2 N_{MIN\_EX} + 1$} or {$N_C < N_D/2$}
   a. Split that cluster into two clusters whose centers $\mu_{k1}$ and $\mu_{k2}$ differ only in coordinate n
      i. $\mu_{k1}(n) = \mu_k(n) + \delta\,\mu_k(n)$ (all other coordinates remain the same, $0 < \delta < 1$)
      ii. $\mu_{k2}(n) = \mu_k(n) - \delta\,\mu_k(n)$ (all other coordinates remain the same, $0 < \delta < 1$)
   b. Increment N_C accordingly
   c. Reassign the cluster's examples to one of the two new clusters based on minimum distance to the cluster centers
6. If $N_C > 2 N_D$ then
   a. Compute all pairwise distances $D_{ij} = d(\mu_i, \mu_j)$
   b. Sort the $D_{ij}$ in decreasing order
   c. For each pair of clusters sorted by $D_{ij}$, if (1) neither cluster has already been merged, (2) $D_{ij} < D_{MERGE}$, and (3) no more than N_MERGE pairs of clusters have been merged in this loop, then
      i. Merge the i-th and j-th clusters
      ii. Compute the merged cluster center $\mu = \frac{N_i \mu_i + N_j \mu_j}{N_i + N_j}$
      iii. Decrement N_C accordingly
7. Go to step 1 [Therrien, 1989]
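
The listing below is a much-simplified NumPy sketch of these steps (my own, under several assumptions: no early-exit test, an unweighted merge, fixed parameter defaults, and each initial cluster assumed to capture at least N_MIN_EX examples); it is meant to show where each heuristic fits, not to reproduce [Therrien, 1989] exactly.

    import numpy as np

    def isodata(X, n_desired=4, n_min_ex=5, sigma_s2=1.0, d_merge=1.0,
                n_merge=2, delta=0.3, max_iter=20):
        # Parameter names mirror the slide (N_D, N_MIN_EX, sigma_S^2, D_MERGE, N_MERGE, delta)
        centers = X[:n_desired].copy()                 # step 1: first N_C examples
        for _ in range(max_iter):
            # step 2: assign every example to the closest center
            d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
            labels = d.argmin(axis=1)
            # step 3: discard clusters with fewer than N_MIN_EX examples, reassign
            keep = [k for k in range(len(centers)) if np.sum(labels == k) >= n_min_ex]
            centers = centers[keep]
            d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
            labels = d.argmin(axis=1)
            # step 4: recompute centers, per-cluster and overall average distances
            sizes = np.array([np.sum(labels == k) for k in range(len(centers))])
            centers = np.array([X[labels == k].mean(axis=0) for k in range(len(centers))])
            d_k = np.array([np.linalg.norm(X[labels == k] - centers[k], axis=1).mean()
                            for k in range(len(centers))])
            d_avg = np.sum(sizes * d_k) / len(X)
            # step 5: split clusters that are too spread out along their widest axis
            new_centers = []
            for k in range(len(centers)):
                var = X[labels == k].var(axis=0)
                n = int(var.argmax())
                if var[n] > sigma_s2 and ((d_k[k] > d_avg and sizes[k] > 2 * n_min_ex + 1)
                                          or len(centers) < n_desired / 2):
                    for sign in (+1.0, -1.0):
                        c = centers[k].copy()
                        c[n] = c[n] + sign * delta * c[n]
                        new_centers.append(c)
                else:
                    new_centers.append(centers[k])
            centers = np.array(new_centers)
            # step 6: if there are too many clusters, merge the closest pairs
            if len(centers) > 2 * n_desired:
                merged = 0
                while merged < n_merge and len(centers) > 2:
                    dc = np.linalg.norm(centers[:, None] - centers[None, :], axis=2)
                    np.fill_diagonal(dc, np.inf)
                    i, j = np.unravel_index(dc.argmin(), dc.shape)
                    if dc[i, j] >= d_merge:
                        break
                    centers[i] = (centers[i] + centers[j]) / 2   # unweighted (simplified)
                    centers = np.delete(centers, j, axis=0)
                    merged += 1
        # final assignment against the final set of centers
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        return d.argmin(axis=1), centers

    labels, centers = isodata(np.vstack([np.random.randn(100, 2),
                                         np.random.randn(100, 2) + 6]))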

ISODATA has been shown to be an extremely powerful heuristic
- Some of its advantages are
  - Self-organizing capabilities
  - Flexibility in eliminating clusters that have very few examples
  - Ability to divide clusters that are too dissimilar
  - Ability to merge clusters that are sufficiently similar
- However, it suffers from the following limitations
  - Data must be linearly separable (long, narrow, or curved clusters are not handled properly)
  - It is difficult to know a priori the "optimal" parameters
  - Performance is highly dependent on these parameters
  - For large datasets and a large number of clusters, ISODATA is less efficient than other linear methods
  - Convergence is unknown, although it appears to work well for non-overlapping clusters
- In practice, ISODATA is run multiple times with different values of the parameters and the clustering with minimum SSE is selected

Hierarchical clustering
- k-means and ISODATA create disjoint clusters, resulting in a flat data representation
- However, sometimes it is desirable to obtain a hierarchical representation of the data, with clusters and sub-clusters arranged in a tree-structured fashion
  - Hierarchical representations are commonly used in the sciences (e.g., biological taxonomy)
- Hierarchical clustering methods can be grouped in two general classes
  - Agglomerative (also known as bottom-up or merging): starting with N singleton clusters, successively merge clusters until one cluster is left
  - Divisive (also known as top-down or splitting): starting with a unique cluster, successively split the clusters until N singleton examples are left

Dendrograms
- The preferred representation for hierarchical clusters is the dendrogram, a binary tree that shows the structure of the clusters
  - In addition to the binary tree, the dendrogram provides the similarity measure between clusters (the vertical axis)
- An alternative representation is based on sets: {{x1, {x2, x3}}, {{{x4, x5}, {x6, x7}}, x8}}
  - However, unlike the dendrogram, sets cannot express quantitative information
- [Figure: dendrogram over examples x1..x8, with the vertical axis running from high similarity at the bottom to low similarity at the top]
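
Not part of the slides, but dendrograms are easy to produce with SciPy's hierarchical clustering utilities; the toy data and label names below are my own.

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.cluster.hierarchy import linkage, dendrogram

    # Toy data: three loose groups in 2-D
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(loc=c, scale=0.3, size=(5, 2)) for c in (0, 3, 6)])

    Z = linkage(X, method='single')   # single-linkage (nearest-neighbor) merges
    dendrogram(Z, labels=[f"x{i+1}" for i in range(len(X))])
    plt.ylabel("merge distance")
    plt.show()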

Divisive clustering
- Define
  - N_C: number of clusters
  - N_EX: number of examples
- Algorithm
  1. Start with one large cluster
  2. Find the "worst" cluster
  3. Split it
  4. If N_C < N_EX, go to 2
- How to choose the worst cluster
  - Largest number of examples
  - Largest variance
  - Largest sum-squared-error
- How to split clusters
  - Mean-median in one feature direction
  - Perpendicular to the direction of largest variance
- The computations required by divisive clustering are more intensive than for agglomerative clustering methods; for this reason, agglomerative approaches are more popular
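
As an illustration of one concrete choice (largest sum-squared error as the "worst" cluster, split with a 2-means step), the sketch below reuses the kmeans() helper defined in the k-means section; none of this is prescribed by the slide.

    import numpy as np

    def divisive_step(clusters):
        # One divisive pass: pick the cluster with the largest sum-squared
        # error and split it in two with the kmeans() sketch defined earlier
        sse = [np.sum((c - c.mean(axis=0)) ** 2) for c in clusters]
        worst = int(np.argmax(sse))
        labels, _ = kmeans(clusters[worst], C=2)
        a, b = clusters[worst][labels == 0], clusters[worst][labels == 1]
        return clusters[:worst] + [a, b] + clusters[worst + 1:]

    # Keep splitting until the desired number of clusters remains
    clusters = [np.random.randn(60, 2) * 2]
    while len(clusters) < 4:
        clusters = divisive_step(clusters)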

Agglomerative clustering
- Define
  - N_C: number of clusters
  - N_EX: number of examples
- Algorithm
  1. Start with N_EX singleton clusters
  2. Find the nearest pair of clusters
  3. Merge them
  4. If N_C > 1, go to 2
- How to find the nearest pair of clusters
  - Minimum distance: $d_{min}(\omega_i, \omega_j) = \min_{x \in \omega_i,\, y \in \omega_j} \lVert x - y \rVert$
  - Maximum distance: $d_{max}(\omega_i, \omega_j) = \max_{x \in \omega_i,\, y \in \omega_j} \lVert x - y \rVert$
  - Average distance: $d_{avg}(\omega_i, \omega_j) = \frac{1}{N_i N_j} \sum_{x \in \omega_i} \sum_{y \in \omega_j} \lVert x - y \rVert$
  - Mean distance: $d_{mean}(\omega_i, \omega_j) = \lVert \mu_i - \mu_j \rVert$
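
For reference (not from the slides), a brute-force implementation of this loop with a selectable between-cluster distance might look like the following; it simply mirrors the four definitions above and makes no effort to be efficient.

    import numpy as np

    def agglomerate(X, n_clusters, linkage="min"):
        # Start from N_EX singleton clusters and repeatedly merge the nearest
        # pair under the chosen between-cluster distance (d_min, d_max, d_avg, d_mean)
        def dist(A, B):
            d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
            if linkage == "min":  return d.min()              # single linkage
            if linkage == "max":  return d.max()              # complete linkage
            if linkage == "avg":  return d.mean()             # average linkage
            return np.linalg.norm(A.mean(axis=0) - B.mean(axis=0))  # mean distance

        clusters = [X[i:i + 1] for i in range(len(X))]
        while len(clusters) > n_clusters:
            pairs = [(i, j) for i in range(len(clusters))
                            for j in range(i + 1, len(clusters))]
            i, j = min(pairs, key=lambda p: dist(clusters[p[0]], clusters[p[1]]))
            clusters[i] = np.vstack([clusters[i], clusters[j]])
            del clusters[j]
        return clusters

    groups = agglomerate(np.random.randn(30, 2), n_clusters=3, linkage="min")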

- Minimum distance
  - When $d_{min}$ is used to measure the distance between clusters, the algorithm is called the nearest-neighbor or single-linkage clustering algorithm
  - If the algorithm is allowed to run until only one cluster remains, the result is a minimum spanning tree (MST)
  - This algorithm favors elongated classes
- Maximum distance
  - When $d_{max}$ is used to measure the distance between clusters, the algorithm is called the farthest-neighbor or complete-linkage clustering algorithm
  - From a graph-theoretic point of view, each cluster constitutes a complete sub-graph
  - This algorithm favors compact classes
- Average and mean distance
  - $d_{min}$ and $d_{max}$ are extremely sensitive to outliers since their measurement of between-cluster distance involves minima or maxima
  - $d_{avg}$ and $d_{mean}$ are more robust to outliers
  - Of the two, $d_{mean}$ is more attractive computationally; notice that $d_{avg}$ involves the computation of $N_i N_j$ pairwise distances

Example
- Perform agglomerative clustering on X using the single-linkage metric
  X = {1, 3, 4, 9, 10, 13, 21, 23, 28, 29}
  - In case of ties, always merge the pair of clusters with the largest mean
  - Indicate the order in which the merging operations occur
- [Figure: solution dendrogram drawn over a number line from 0 to 33, with the data points as leaves, the vertical axis showing the merge distance, and each merge labeled with the order in which it occurs]
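
Not part of the slide, but useful for checking an answer: SciPy's linkage routine performs the same single-linkage merges on this one-dimensional data set. Note that SciPy breaks ties by its own internal ordering, so merges at equal distance may be listed in a different order than the slide's largest-mean rule.

    import numpy as np
    from scipy.cluster.hierarchy import linkage

    X = np.array([1, 3, 4, 9, 10, 13, 21, 23, 28, 29], dtype=float).reshape(-1, 1)
    Z = linkage(X, method='single')   # each row: cluster i, cluster j, distance, size
    for step, (i, j, dist, size) in enumerate(Z, start=1):
        print(f"merge {step}: clusters {int(i)} and {int(j)} "
              f"at distance {dist:g} ({int(size)} points)")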

d_min vs. d_max
- Example: pairwise distances between nine US cities

          BOS    NY    DC   MIA   CHI   SEA    SF    LA   DEN
    BOS     0   206   429  1504   963  2976  3095  2979  1949
    NY    206     0   233  1308   802  2815  2934  2786  1771
    DC    429   233     0  1075   671  2684  2799  2631  1616
    MIA  1504  1308  1075     0  1329  3273  3053  2687  2037
    CHI   963   802   671  1329     0  2013  2142  2054   996
    SEA  2976  2815  2684  3273  2013     0   808  1131  1307
    SF   3095  2934  2799  3053  2142   808     0   379  1235
    LA   2979  2786  2631  2687  2054  1131   379     0  1059
    DEN  1949  1771  1616  2037   996  1307  1235  1059     0

- [Figure: two dendrograms built from this distance matrix, with (dis)similarity on the vertical axis. Single-linkage leaf order: BOS, NY, DC, CHI, DEN, SEA, SF, LA, MIA. Complete-linkage leaf order: BOS, NY, DC, CHI, MIA, SEA, SF, LA, DEN.]
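
Not on the slide, but the two dendrograms can be reproduced with SciPy directly from the distance matrix; the leaf order SciPy reports may be a mirror image of the one drawn on the slide, since the left/right orientation within a merge is arbitrary.

    import numpy as np
    from scipy.spatial.distance import squareform
    from scipy.cluster.hierarchy import linkage, dendrogram

    cities = ["BOS", "NY", "DC", "MIA", "CHI", "SEA", "SF", "LA", "DEN"]
    D = np.array([
        [   0,  206,  429, 1504,  963, 2976, 3095, 2979, 1949],
        [ 206,    0,  233, 1308,  802, 2815, 2934, 2786, 1771],
        [ 429,  233,    0, 1075,  671, 2684, 2799, 2631, 1616],
        [1504, 1308, 1075,    0, 1329, 3273, 3053, 2687, 2037],
        [ 963,  802,  671, 1329,    0, 2013, 2142, 2054,  996],
        [2976, 2815, 2684, 3273, 2013,    0,  808, 1131, 1307],
        [3095, 2934, 2799, 3053, 2142,  808,    0,  379, 1235],
        [2979, 2786, 2631, 2687, 2054, 1131,  379,    0, 1059],
        [1949, 1771, 1616, 2037,  996, 1307, 1235, 1059,    0],
    ], dtype=float)

    condensed = squareform(D)   # linkage() expects the condensed (upper-triangle) form
    for method in ("single", "complete"):
        Z = linkage(condensed, method=method)
        leaves = dendrogram(Z, no_plot=True, labels=cities)["ivl"]
        print(method, "leaf order:", leaves)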