# L15: statistical clustering


## Outline

- Similarity measures
- Criterion functions
- Cluster validity
- Flat clustering algorithms: k-means, ISODATA
- Hierarchical clustering algorithms: divisive, agglomerative

CSCE 666 Pattern Analysis, Ricardo Gutierrez-Osuna

## Non-parametric unsupervised learning

In L14 we introduced the concept of unsupervised learning: a collection of pattern recognition methods that learn without a teacher. Two types of clustering methods were mentioned: parametric and non-parametric.

- Parametric unsupervised learning
  - Equivalent to density estimation with a mixture of (Gaussian) components
  - Through the use of EM, the identity of the component that originated each data point was treated as a missing feature
- Non-parametric unsupervised learning
  - No density functions are considered in these methods
  - Instead, we are concerned with finding natural groupings (clusters) in a dataset

Non-parametric clustering involves three steps:

1. Defining a measure of (dis)similarity between examples
2. Defining a criterion function for clustering
3. Defining an algorithm to minimize (or maximize) the criterion function

## Proximity measures

**Definition of metric.** A measuring rule $d(x,y)$ for the distance between two vectors $x$ and $y$ is considered a metric if it satisfies the following properties:

- $d(x,y) \ge d_0$
- $d(x,y) = d_0$ iff $x = y$
- $d(x,y) = d(y,x)$
- $d(x,y) \le d(x,z) + d(z,y)$

If the metric has the property $d(ax, ay) = |a|\,d(x,y)$, then it is called a norm and denoted $d(x,y) = \lVert x - y \rVert$.

The most general form of distance metric is the power norm:

$$\lVert x - y \rVert_{p/r} = \left( \sum_{i=1}^{D} |x_i - y_i|^p \right)^{1/r}$$

- $p$ controls the weight placed on any dimension's dissimilarity, whereas $r$ controls the distance growth of patterns that are further apart
- Notice that the definition of norm must be relaxed, allowing a power factor for $|a|$

[Marques de Sá, 2001]

## Common metrics

The most commonly used metrics are derived from the power norm.

**Minkowski metric ($L_k$ norm)**

$$\lVert x - y \rVert_k = \left( \sum_{i=1}^{D} |x_i - y_i|^k \right)^{1/k}$$

The choice of an appropriate value of $k$ depends on the amount of emphasis that you would like to give to the larger differences between dimensions.

**Manhattan or city-block distance ($L_1$ norm)**

$$\lVert x - y \rVert_{cb} = \sum_{i=1}^{D} |x_i - y_i|$$

When used with binary vectors, the $L_1$ norm is known as the Hamming distance.

**Euclidean norm ($L_2$ norm)**

$$\lVert x - y \rVert_e = \left( \sum_{i=1}^{D} (x_i - y_i)^2 \right)^{1/2}$$

**Chebyshev distance ($L_\infty$ norm)**

$$\lVert x - y \rVert_\infty = \max_{1 \le i \le D} |x_i - y_i|$$

(Figure: contours of equal distance for the $L_1$, $L_2$ and $L_\infty$ norms in the $(x_1, x_2)$ plane.)
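As a concrete illustration, the four norms above can be computed directly; this is a plain-Python sketch (the function names are ours, not from the lecture):

```python
import math

def minkowski(x, y, k):
    """Minkowski (L_k) distance between two equal-length vectors."""
    return sum(abs(a - b) ** k for a, b in zip(x, y)) ** (1.0 / k)

def manhattan(x, y):
    """L_1 norm; equals the Hamming distance for binary vectors."""
    return sum(abs(a - b) for a, b in zip(x, y))

def euclidean(x, y):
    """L_2 norm."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def chebyshev(x, y):
    """L_infinity norm: the largest per-dimension difference."""
    return max(abs(a - b) for a, b in zip(x, y))

x, y = [0.0, 0.0], [3.0, 4.0]
print(manhattan(x, y), euclidean(x, y), chebyshev(x, y))  # 7.0 5.0 4.0
```

Note how the same pair of points is at distance 7, 5 or 4 depending on the norm: larger $k$ places more weight on the single largest coordinate difference.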

## Other metrics

Other metrics are also popular.

**Quadratic distance**

$$d(x,y) = (x - y)^T B (x - y)$$

The Mahalanobis distance is a particular case of this distance.

**Canberra metric** (for non-negative features)

$$d_{ca}(x,y) = \sum_{i=1}^{D} \frac{|x_i - y_i|}{x_i + y_i}$$

**Non-linear distance**

$$d_N(x,y) = \begin{cases} 0 & \text{if } d_e(x,y) < T \\ H & \text{otherwise} \end{cases}$$

where $T$ is a threshold and $H$ is a distance. An appropriate choice of $H$ and $T$ for feature selection is that they satisfy

$$H = \frac{\Gamma(P/2)}{2\,\pi^{P/2}\,T^{P}}$$

and that $T$ satisfies the unbiasedness and consistency conditions of the Parzen estimator: $T^P N \to \infty$ and $T \to 0$ as $N \to \infty$.

[Webb, 1999]
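The quadratic and Canberra metrics can be sketched in a few lines of plain Python (function names are ours; with $B$ set to the inverse covariance matrix, the quadratic distance becomes the squared Mahalanobis distance):

```python
def quadratic_distance(x, y, B):
    """d(x, y) = (x - y)^T B (x - y), for B given as a list of rows."""
    d = [a - b for a, b in zip(x, y)]
    Bd = [sum(B[i][j] * d[j] for j in range(len(d))) for i in range(len(d))]
    return sum(di * bdi for di, bdi in zip(d, Bd))

def canberra(x, y):
    """Canberra metric for non-negative features; terms with
    x_i + y_i = 0 contribute nothing and are skipped."""
    return sum(abs(a - b) / (a + b) for a, b in zip(x, y) if a + b > 0)

# With B = identity, the quadratic distance reduces to squared Euclidean
print(quadratic_distance([0, 0], [3, 4], [[1, 0], [0, 1]]))  # 25
```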

## Similarity measures

The above distance metrics are measures of dissimilarity. Some measures of similarity also exist.

**Inner product**

$$s_{INNER}(x,y) = x^T y$$

The inner product is used when the vectors $x$ and $y$ are normalized, so that they have the same length.

**Correlation coefficient**

$$s_{CORR}(x,y) = \frac{\sum_{i=1}^{D} (x_i - \bar{x})(y_i - \bar{y})}{\left[ \sum_{i=1}^{D} (x_i - \bar{x})^2 \sum_{i=1}^{D} (y_i - \bar{y})^2 \right]^{1/2}}$$

**Tanimoto measure** (for binary-valued vectors)

$$s_T(x,y) = \frac{x^T y}{\lVert x \rVert^2 + \lVert y \rVert^2 - x^T y}$$
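The three similarity measures translate directly into code; a plain-Python sketch (names are ours):

```python
import math

def s_inner(x, y):
    """Inner product similarity x^T y."""
    return sum(a * b for a, b in zip(x, y))

def s_corr(x, y):
    """Correlation coefficient between two feature vectors."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den

def s_tanimoto(x, y):
    """Tanimoto measure: for binary vectors, the ratio of shared
    attributes to attributes present in either vector."""
    xy = s_inner(x, y)
    return xy / (s_inner(x, x) + s_inner(y, y) - xy)
```

For example, two binary vectors sharing one of their two set bits get a Tanimoto similarity of 0.5, and perfectly proportional vectors get a correlation of 1.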

## Criterion function for clustering

Once a (dis)similarity measure has been determined, we need to define a criterion function to be optimized.

The most widely used clustering criterion is the sum-of-square-error:

$$J_{MSE} = \sum_{i=1}^{C} \sum_{x \in \omega_i} \lVert x - \mu_i \rVert^2 \quad \text{where} \quad \mu_i = \frac{1}{N_i} \sum_{x \in \omega_i} x$$

- This criterion measures how well the data set $X = \{x^{(1)} \dots x^{(N)}\}$ is represented by the cluster centers $\mu = \{\mu^{(1)} \dots \mu^{(C)}\}$ ($C < N$)
- Clustering methods that use this criterion are called minimum-variance methods

Other criterion functions exist, based on the scatter matrices used in Linear Discriminant Analysis. For details, refer to [Duda, Hart and Stork, 2001].
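The criterion is straightforward to evaluate for any candidate partition; a minimal sketch (the function name is ours), which also shows why $J_{MSE}$ prefers tight clusters around their means:

```python
def j_mse(clusters):
    """Sum-of-squared-error of a partition, given as a list of clusters
    (each cluster a list of points). Each cluster's center is its mean."""
    total = 0.0
    for cluster in clusters:
        dim = len(cluster[0])
        mu = [sum(p[d] for p in cluster) / len(cluster) for d in range(dim)]
        total += sum((p[d] - mu[d]) ** 2 for p in cluster for d in range(dim))
    return total

# Two tight clusters give a much lower J_MSE than one lumped cluster
points = [(0, 0), (2, 0), (10, 0), (12, 0)]
print(j_mse([points[:2], points[2:]]))  # 4.0
print(j_mse([points]))                  # 104.0
```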

## Cluster validity

The validity of the final cluster solution is highly subjective. This is in contrast with supervised training, where a clear objective function is known: the Bayes risk.

Note that the choice of (dis)similarity measure and criterion function will have a major impact on the final clustering produced by the algorithms.

Example: which are the meaningful clusters in these cases? How many clusters should be considered?

A number of quantitative methods for cluster validity are proposed in [Theodoridis and Koutroumbas, 1999].

## Iterative optimization

Once a criterion function has been defined, we must find a partition of the data set that minimizes the criterion.

Exhaustive enumeration of all partitions, which guarantees the optimal solution, is unfeasible. For example, a problem with 5 clusters and 100 examples yields on the order of $5^{100}/5! \approx 10^{67}$ partitionings.

The common approach is to proceed in an iterative fashion:

1. Find some reasonable initial partition, and then
2. Move samples from one cluster to another in order to reduce the criterion function

These iterative methods produce sub-optimal solutions but are computationally tractable.

We will consider two groups of iterative methods:

- Flat clustering algorithms
  - These algorithms produce a set of disjoint clusters
  - Two algorithms are widely used: k-means and ISODATA
- Hierarchical clustering algorithms
  - The result is a hierarchy of nested clusters
  - These algorithms can be broadly divided into agglomerative and divisive approaches

## The k-means algorithm

k-means is a simple clustering procedure that attempts to minimize the criterion function $J_{MSE}$ in an iterative fashion:

$$J_{MSE} = \sum_{i=1}^{C} \sum_{x \in \omega_i} \lVert x - \mu_i \rVert^2 \quad \text{where} \quad \mu_i = \frac{1}{N_i} \sum_{x \in \omega_i} x$$

**Method**

1. Define the number of clusters
2. Initialize clusters by an arbitrary assignment of examples to clusters, or an arbitrary set of cluster centers (some examples used as centers)
3. Compute the sample mean of each cluster
4. Reassign each example to the cluster with the nearest mean
5. If the classification of all samples has not changed, stop; else go to step 3

It can be shown (L14) that k-means is a particular case of the EM algorithm for mixture models.
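The five steps above can be sketched as follows; this is a minimal plain-Python implementation, assuming the first $C$ examples are used as initial centers (one of the arbitrary initializations mentioned in step 2):

```python
def kmeans(X, C, max_iter=100):
    """Seed the C centers with the first C examples, then alternate
    mean computation (step 3) and nearest-mean reassignment (step 4)
    until no assignment changes (step 5)."""
    centers = [list(x) for x in X[:C]]
    assign = None
    for _ in range(max_iter):
        new_assign = [
            min(range(C),
                key=lambda k: sum((a - b) ** 2 for a, b in zip(x, centers[k])))
            for x in X
        ]
        if new_assign == assign:   # step 5: converged
            break
        assign = new_assign
        for k in range(C):         # step 3: sample mean of each cluster
            members = [x for x, a in zip(X, assign) if a == k]
            if members:
                centers[k] = [sum(col) / len(members) for col in zip(*members)]
    return centers, assign

centers, labels = kmeans([(0, 0), (1, 0), (10, 0), (11, 0)], C=2)
print(labels)   # [0, 0, 1, 1]
print(centers)  # [[0.5, 0.0], [10.5, 0.0]]
```

Like the lecture notes say, this converges to a local minimum of $J_{MSE}$; the result depends on the initialization.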

## Vector quantization

An application of k-means to signal processing and communication.

- Univariate signal values are usually quantized into a number of levels, typically a power of 2, so the signal can be transmitted in binary format
- The same idea can be extended to multiple channels
  - We could quantize each separate channel
  - Instead, we can obtain a more efficient coding if we quantize the overall multidimensional vector by finding a number of multidimensional prototypes (cluster centers)
- The set of cluster centers is called a codebook, and the problem of finding this codebook is normally solved using the k-means algorithm

(Figure: a continuous signal, its quantized version and the quantization noise over time; vectors, codewords and Voronoi regions in feature space.)
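Once the codebook has been trained (e.g., with k-means), transmission only requires sending codeword indices; a minimal sketch of the encoder/decoder pair (function names are ours):

```python
def encode(vectors, codebook):
    """Map each vector to the index of its nearest codeword
    (i.e., the Voronoi region it falls in)."""
    def nearest(v):
        return min(range(len(codebook)),
                   key=lambda k: sum((a - b) ** 2 for a, b in zip(v, codebook[k])))
    return [nearest(v) for v in vectors]

def decode(indices, codebook):
    """Reconstruct the signal from transmitted indices and the shared codebook."""
    return [codebook[i] for i in indices]

codebook = [(0.0, 0.0), (5.0, 5.0)]
idx = encode([(0.1, -0.2), (4.8, 5.1)], codebook)
print(idx)                    # [0, 1]
print(decode(idx, codebook))  # [(0.0, 0.0), (5.0, 5.0)]
```

The difference between the input vectors and their reconstruction is the quantization noise; a larger codebook reduces it at the cost of more bits per index.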

## ISODATA

Iterative Self-Organizing Data Analysis (ISODATA) is an extension of the k-means algorithm with some heuristics to automatically select the number of clusters.

ISODATA requires the user to select a number of parameters:

- $N_{MIN\_EX}$: minimum number of examples per cluster
- $N_D$: desired (approximate) number of clusters
- $\sigma_S^2$: maximum spread parameter for splitting
- $D_{MERGE}$: maximum distance separation for merging
- $N_{MERGE}$: maximum number of clusters that can be merged

The algorithm works in an iterative fashion:

1. Perform k-means clustering
2. Split any clusters whose samples are sufficiently dissimilar
3. Merge any two clusters that are sufficiently close
4. Go to 1

The complete ISODATA algorithm:

1. Select an initial number of clusters $N_C$ and use the first $N_C$ examples as cluster centers $\mu_k$, $k = 1 \dots N_C$
2. Assign each example to the closest cluster
   - Exit the algorithm if the classification of examples has not changed
3. Eliminate clusters that contain fewer than $N_{MIN\_EX}$ examples
   - Assign their examples to the remaining clusters based on minimum distance
   - Decrease $N_C$ accordingly
4. For each cluster $k$:
   - Compute the center $\mu_k$ as the sample mean of all the examples assigned to that cluster
   - Compute the average distance between examples and cluster centers:
     $$d_{avg} = \frac{1}{N} \sum_{k=1}^{N_C} N_k d_k \quad \text{where} \quad d_k = \frac{1}{N_k} \sum_{x \in \omega_k} \lVert x - \mu_k \rVert$$
   - Compute the variance of each axis and find the axis $n$ with maximum variance $\sigma_k^2(n)$
5. For each cluster $k$ with $\sigma_k^2(n) > \sigma_S^2$, if $\{d_k > d_{avg}$ and $N_k > 2 N_{MIN\_EX} + 1\}$ or $\{N_C < N_D / 2\}$:
   - Split that cluster into two clusters whose centers $\mu_{k1}$ and $\mu_{k2}$ differ only in coordinate $n$:
     - $\mu_{k1}(n) = \mu_k(n) + \varepsilon\,\mu_k(n)$ (all other coordinates remain the same, $0 < \varepsilon < 1$)
     - $\mu_{k2}(n) = \mu_k(n) - \varepsilon\,\mu_k(n)$ (all other coordinates remain the same, $0 < \varepsilon < 1$)
   - Increment $N_C$ accordingly
   - Reassign the cluster's examples to one of the two new clusters, based on minimum distance to the cluster centers
6. If $N_C > 2 N_D$:
   - Compute all pairwise distances $D_{ij} = d(\mu_i, \mu_j)$ and sort them in decreasing order
   - For each pair of clusters sorted by $D_{ij}$, if (1) neither cluster has already been merged, (2) $D_{ij} < D_{MERGE}$, and (3) not more than $N_{MERGE}$ pairs of clusters have been merged in this loop, then:
     - Merge the $i$-th and $j$-th clusters
     - Compute the new cluster center $\mu = \frac{N_i \mu_i + N_j \mu_j}{N_i + N_j}$
     - Decrement $N_C$ accordingly
7. Go to step 1

[Therrien, 1989]
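The two center updates that distinguish ISODATA from k-means are the split and the size-weighted merge; a minimal sketch of just those updates (helper names and the default $\varepsilon = 0.5$ are ours):

```python
def merge_centers(mu_i, n_i, mu_j, n_j):
    """Merge: replace two cluster centers by their size-weighted mean,
    mu = (N_i * mu_i + N_j * mu_j) / (N_i + N_j)."""
    return [(n_i * a + n_j * b) / (n_i + n_j) for a, b in zip(mu_i, mu_j)]

def split_center(mu, axis, eps=0.5):
    """Split: produce two centers that differ only in the maximum-variance
    coordinate `axis`, perturbed by +/- eps * mu(axis)."""
    mu1, mu2 = list(mu), list(mu)
    mu1[axis] += eps * mu[axis]
    mu2[axis] -= eps * mu[axis]
    return mu1, mu2

print(merge_centers([0.0, 0.0], 2, [3.0, 3.0], 1))  # [1.0, 1.0]
print(split_center([2.0, 4.0], axis=1))             # ([2.0, 6.0], [2.0, 2.0])
```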

## Properties of ISODATA

ISODATA has been shown to be an extremely powerful heuristic. Some of its advantages are:

- Self-organizing capabilities
- Flexibility in eliminating clusters that have very few examples
- Ability to divide clusters that are too dissimilar
- Ability to merge clusters that are sufficiently similar

However, it suffers from the following limitations:

- Data must be linearly separable; long, narrow or curved clusters are not handled properly
- It is difficult to know the optimal parameters a priori
- Performance is highly dependent on these parameters
- For large datasets and large numbers of clusters, ISODATA is less efficient than other linear methods
- Convergence is unknown, although it appears to work well for non-overlapping clusters

In practice, ISODATA is run multiple times with different values of the parameters, and the clustering with minimum SSE is selected.

## Hierarchical clustering

k-means and ISODATA create disjoint clusters, resulting in a flat data representation. However, it is sometimes desirable to obtain a hierarchical representation of the data, with clusters and sub-clusters arranged in a tree-structured fashion. Hierarchical representations are commonly used in the sciences (e.g., biological taxonomy).

Hierarchical clustering methods can be grouped into two general classes:

- Agglomerative
  - Also known as bottom-up or merging
  - Starting with $N$ singleton clusters, successively merge clusters until one cluster is left
- Divisive
  - Also known as top-down or splitting
  - Starting with a unique cluster, successively split the clusters until $N$ singleton examples are left

## Dendrograms

A dendrogram is a binary tree that shows the structure of the clusters. Dendrograms are the preferred representation for hierarchical clusters: in addition to the binary tree, the dendrogram provides the similarity measure between clusters (the vertical axis).

An alternative representation is based on sets:

$$\{\{x_1, \{x_2, x_3\}\}, \{\{\{x_4, x_5\}, \{x_6, x_7\}\}, x_8\}\}$$

However, unlike the dendrogram, sets cannot express quantitative information.

(Figure: a dendrogram over $x_1 \dots x_8$, with the vertical axis ranging from high similarity to low similarity.)

## Divisive clustering

Define:

- $N_C$: number of clusters
- $N_{EX}$: number of examples

**Method**

1. Start with one large cluster
2. Find the worst cluster
3. Split it
4. If $N_C < N_{EX}$, go to 2

How to choose the worst cluster:

- Largest number of examples
- Largest variance
- Largest sum-squared-error

How to split clusters:

- Mean-median in one feature direction
- Perpendicular to the direction of largest variance

The computations required by divisive clustering are more intensive than for agglomerative clustering methods, and for this reason agglomerative approaches are more popular.

## Agglomerative clustering

Define:

- $N_C$: number of clusters
- $N_{EX}$: number of examples

**Method**

1. Start with $N_{EX}$ singleton clusters
2. Find the nearest pair of clusters
3. Merge them
4. If $N_C > 1$, go to 2

How to find the nearest pair of clusters:

- Minimum distance: $d_{min}(\omega_i, \omega_j) = \min_{x \in \omega_i,\, y \in \omega_j} \lVert x - y \rVert$
- Maximum distance: $d_{max}(\omega_i, \omega_j) = \max_{x \in \omega_i,\, y \in \omega_j} \lVert x - y \rVert$
- Average distance: $d_{avg}(\omega_i, \omega_j) = \frac{1}{N_i N_j} \sum_{x \in \omega_i} \sum_{y \in \omega_j} \lVert x - y \rVert$
- Mean distance: $d_{mean}(\omega_i, \omega_j) = \lVert \mu_i - \mu_j \rVert$
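The four between-cluster distances translate directly into code; a plain-Python sketch (names are ours), where each cluster is a list of points:

```python
import math

def d_min(ci, cj):
    """Single-linkage distance: closest pair across the two clusters."""
    return min(math.dist(x, y) for x in ci for y in cj)

def d_max(ci, cj):
    """Complete-linkage distance: farthest pair across the two clusters."""
    return max(math.dist(x, y) for x in ci for y in cj)

def d_avg(ci, cj):
    """Average of all N_i * N_j pairwise distances."""
    return sum(math.dist(x, y) for x in ci for y in cj) / (len(ci) * len(cj))

def d_mean(ci, cj):
    """Distance between the two cluster means (one distance, not N_i * N_j)."""
    mi = [sum(c) / len(ci) for c in zip(*ci)]
    mj = [sum(c) / len(cj) for c in zip(*cj)]
    return math.dist(mi, mj)
```

Note that `d_mean` computes two means and a single distance, while `d_avg` loops over every cross-cluster pair, which is why the notes call $d_{mean}$ the more attractive choice computationally.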

## Linkage properties

**Minimum distance**

- When $d_{min}$ is used to measure the distance between clusters, the algorithm is called the nearest-neighbor or single-linkage clustering algorithm
- If the algorithm is allowed to run until only one cluster remains, the result is a minimum spanning tree (MST)
- This algorithm favors elongated classes

**Maximum distance**

- When $d_{max}$ is used to measure the distance between clusters, the algorithm is called the farthest-neighbor or complete-linkage clustering algorithm
- From a graph-theoretic point of view, each cluster constitutes a complete sub-graph
- This algorithm favors compact classes

**Average and mean distance**

- $d_{min}$ and $d_{max}$ are extremely sensitive to outliers, since their measurement of between-cluster distance involves minima or maxima
- $d_{avg}$ and $d_{mean}$ are more robust to outliers
- Of the two, $d_{mean}$ is more attractive computationally; notice that $d_{avg}$ involves the computation of $N_i N_j$ pairwise distances

## Example

Perform agglomerative clustering on $X$ using the single-linkage (minimum-distance) metric:

$$X = \{1, 3, 4, 9, 10, 13, 21, 23, 28, 29\}$$

- In case of ties, always merge the pair of clusters with the largest mean
- Indicate the order in which the merging operations occur
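The exercise can be checked mechanically; a brute-force single-linkage sketch (the function name is ours) implementing the stated tie-break, i.e., among candidate pairs at the same minimum distance, merge the pair whose union has the largest mean:

```python
def single_linkage_order(points):
    """Agglomerative single-linkage on scalars; returns the merged
    clusters in the order the merges occur."""
    clusters = [[p] for p in points]
    order = []
    while len(clusters) > 1:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(abs(a - b) for a in clusters[i] for b in clusters[j])
                union = clusters[i] + clusters[j]
                # smaller distance first; on ties, larger merged mean first
                key = (d, -sum(union) / len(union))
                if best is None or key < best[0]:
                    best = (key, i, j)
        _, i, j = best
        merged = sorted(clusters[i] + clusters[j])
        order.append(tuple(merged))
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)]
        clusters.append(merged)
    return order

X = [1, 3, 4, 9, 10, 13, 21, 23, 28, 29]
for step, cluster in enumerate(single_linkage_order(X), 1):
    print(step, cluster)
```

Under these rules the first four merges are {28, 29}, then {9, 10}, then {3, 4}, then {21, 23}: three singleton pairs tie at distance 1 and one tie at distance 2, and in each case the pair with the larger mean wins.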

## d_min vs. d_max

(Figure: single-linkage and complete-linkage dendrograms over the cities BOS, NY, DC, MIA, CHI, SEA, SF, LA and DEN, with (dis)similarity on the vertical axis.)


Classifiers & Classification Forsyth & Ponce Computer Vision A Modern Approach chapter 22 Pattern Classification Duda, Hart and Stork School of Computer Science & Statistics Trinity College Dublin Dublin

### L10: Probability, statistics, and estimation theory

L10: Probability, statistics, and estimation theory Review of probability theory Bayes theorem Statistics and the Normal distribution Least Squares Error estimation Maximum Likelihood estimation Bayesian

### Unsupervised Learning and Data Mining. Unsupervised Learning and Data Mining. Clustering. Supervised Learning. Supervised Learning

Unsupervised Learning and Data Mining Unsupervised Learning and Data Mining Clustering Decision trees Artificial neural nets K-nearest neighbor Support vectors Linear regression Logistic regression...

### Data visualization and clustering. Genomics is to no small extend a data science

Data visualization and clustering Genomics is to no small extend a data science [www.data2discovery.org] Data visualization and clustering Genomics is to no small extend a data science [Andersson et al.,

### ARTIFICIAL INTELLIGENCE (CSCU9YE) LECTURE 6: MACHINE LEARNING 2: UNSUPERVISED LEARNING (CLUSTERING)

ARTIFICIAL INTELLIGENCE (CSCU9YE) LECTURE 6: MACHINE LEARNING 2: UNSUPERVISED LEARNING (CLUSTERING) Gabriela Ochoa http://www.cs.stir.ac.uk/~goc/ OUTLINE Preliminaries Classification and Clustering Applications

### Hierarchical Clustering. Clustering Overview

lustering Overview Last lecture What is clustering Partitional algorithms: K-means Today s lecture Hierarchical algorithms ensity-based algorithms: SN Techniques for clustering large databases IRH UR ata

### Efficient visual search of local features. Cordelia Schmid

Efficient visual search of local features Cordelia Schmid Visual search change in viewing angle Matches 22 correct matches Image search system for large datasets Large image dataset (one million images

### Text Analytics. Text Clustering. Ulf Leser

Text Analytics Text Clustering Ulf Leser Content of this Lecture (Text) clustering Cluster quality Clustering algorithms Application Ulf Leser: Text Analytics, Winter Semester 2010/2011 2 Clustering Clustering

### Cluster Analysis: Basic Concepts and Algorithms

Cluster Analsis: Basic Concepts and Algorithms What does it mean clustering? Applications Tpes of clustering K-means Intuition Algorithm Choosing initial centroids Bisecting K-means Post-processing Strengths

### Introduction to Machine Learning

Introduction to Machine Learning Prof. Alexander Ihler Prof. Max Welling icamp Tutorial July 22 What is machine learning? The ability of a machine to improve its performance based on previous results:

### CLUSTERING LARGE DATA SETS WITH MIXED NUMERIC AND CATEGORICAL VALUES *

CLUSTERING LARGE DATA SETS WITH MIED NUMERIC AND CATEGORICAL VALUES * ZHEUE HUANG CSIRO Mathematical and Information Sciences GPO Box Canberra ACT, AUSTRALIA huang@cmis.csiro.au Efficient partitioning

### Unsupervised Learning. What is clustering for? What is clustering for? (cont )

Unsupervised Learning c: Artificial Intelligence The University of Iowa Supervised learning vs. unsupervised learning Supervised learning: discover patterns in the data that relate data attributes with

### There are a number of different methods that can be used to carry out a cluster analysis; these methods can be classified as follows:

Statistics: Rosie Cornish. 2007. 3.1 Cluster Analysis 1 Introduction This handout is designed to provide only a brief introduction to cluster analysis and how it is done. Books giving further details are

### Data Mining Cluster Analysis: Advanced Concepts and Algorithms. Lecture Notes for Chapter 9. Introduction to Data Mining

Data Mining Cluster Analysis: Advanced Concepts and Algorithms Lecture Notes for Chapter 9 Introduction to Data Mining by Tan, Steinbach, Kumar Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004

### Linear Threshold Units

Linear Threshold Units w x hx (... w n x n w We assume that each feature x j and each weight w j is a real number (we will relax this later) We will study three different algorithms for learning linear

### Steven M. Ho!and. Department of Geology, University of Georgia, Athens, GA 30602-2501

CLUSTER ANALYSIS Steven M. Ho!and Department of Geology, University of Georgia, Athens, GA 30602-2501 January 2006 Introduction Cluster analysis includes a broad suite of techniques designed to find groups

### Data Warehousing and Data Mining

Data Warehousing and Data Mining Lecture Clustering Methods CITS CITS Wei Liu School of Computer Science and Software Engineering Faculty of Engineering, Computing and Mathematics Acknowledgement: The

### Segmentation & Clustering

EECS 442 Computer vision Segmentation & Clustering Segmentation in human vision K-mean clustering Mean-shift Graph-cut Reading: Chapters 14 [FP] Some slides of this lectures are courtesy of prof F. Li,

### Face Recognition using SIFT Features

Face Recognition using SIFT Features Mohamed Aly CNS186 Term Project Winter 2006 Abstract Face recognition has many important practical applications, like surveillance and access control.

### Statistical Databases and Registers with some datamining

Unsupervised learning - Statistical Databases and Registers with some datamining a course in Survey Methodology and O cial Statistics Pages in the book: 501-528 Department of Statistics Stockholm University

### SoSe 2014: M-TANI: Big Data Analytics

SoSe 2014: M-TANI: Big Data Analytics Lecture 4 21/05/2014 Sead Izberovic Dr. Nikolaos Korfiatis Agenda Recap from the previous session Clustering Introduction Distance mesures Hierarchical Clustering

### An Adaptive Index Structure for High-Dimensional Similarity Search

An Adaptive Index Structure for High-Dimensional Similarity Search Abstract A practical method for creating a high dimensional index structure that adapts to the data distribution and scales well with

### Data Mining Cluster Analysis: Basic Concepts and Algorithms. Clustering Algorithms. Lecture Notes for Chapter 8. Introduction to Data Mining

Data Mining Cluster Analsis: Basic Concepts and Algorithms Lecture Notes for Chapter 8 Introduction to Data Mining b Tan, Steinbach, Kumar Clustering Algorithms K-means and its variants Hierarchical clustering

### K-Means Cluster Analysis. Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004 1

K-Means Cluster Analsis Chapter 3 PPDM Class Tan,Steinbach, Kumar Introduction to Data Mining 4/18/4 1 What is Cluster Analsis? Finding groups of objects such that the objects in a group will be similar

### PCA, Clustering and Classification. By H. Bjørn Nielsen strongly inspired by Agnieszka S. Juncker

PCA, Clustering and Classification By H. Bjørn Nielsen strongly inspired by Agnieszka S. Juncker Motivation: Multidimensional data Pat1 Pat2 Pat3 Pat4 Pat5 Pat6 Pat7 Pat8 Pat9 209619_at 7758 4705 5342

### CS 2750 Machine Learning. Lecture 1. Machine Learning. http://www.cs.pitt.edu/~milos/courses/cs2750/ CS 2750 Machine Learning.

Lecture Machine Learning Milos Hauskrecht milos@cs.pitt.edu 539 Sennott Square, x5 http://www.cs.pitt.edu/~milos/courses/cs75/ Administration Instructor: Milos Hauskrecht milos@cs.pitt.edu 539 Sennott