Clustering. Data Mining. Abraham Otero.


Agenda
- Introduction
- Distance
- K-nearest neighbors
- K-means
- DBSCAN
- Hierarchical clustering
- Quick reference
- Open problems
- References
Introduction

It seems logical that in a new situation we should act as we did in previous similar situations, provided we succeeded in them. To take advantage of this strategy it is necessary to define what is meant by "similar", or the equivalent mathematical concept of "distance". It will also be necessary to determine when we take advantage of this similarity:
- In an eager mode, processing the available data before starting the process.
- In a lazy mode, processing the data as it arrives.

Problem formulation: (figure on the slide)
Distance

Several common distances come from the p-norm (Minkowski) family:
- Manhattan (p = 1)
- Euclidean (p = 2)
- Chebyshev (the limit as p goes to infinity)
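As an illustration (not code from the slides), these distances can be sketched in a few lines of Python; the helper names are my own:

```python
import math

def minkowski(x, y, p):
    """p-norm (Minkowski) distance between two equal-length vectors."""
    return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1.0 / p)

def manhattan(x, y):        # the p = 1 case
    return sum(abs(a - b) for a, b in zip(x, y))

def euclidean(x, y):        # the p = 2 case
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def chebyshev(x, y):        # the limit as p -> infinity
    return max(abs(a - b) for a, b in zip(x, y))

p1, p2 = (0.0, 0.0), (3.0, 4.0)
print(manhattan(p1, p2))    # 7.0
print(euclidean(p1, p2))    # 5.0
print(chebyshev(p1, p2))    # 4.0
```

Note how the three classic distances are just special cases of the same formula, which is why the choice of p is itself a modeling decision.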
Be careful when applying distances: (the slides show two cautionary examples)
Always normalize first. But when normalizing, beware of outliers!
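A small sketch (my own, not from the slides) of why outliers matter when normalizing: min-max scaling maps the extremes to 0 and 1, so a single outlier crams every other value into a tiny range, while z-score standardization is less drastically affected.

```python
def min_max(values):
    """Rescale values to [0, 1]; a single outlier squashes the rest."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def z_score(values):
    """Standardize to zero mean and unit (population) standard deviation."""
    n = len(values)
    mean = sum(values) / n
    std = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return [(v - mean) / std for v in values]

data = [1.0, 2.0, 3.0, 4.0, 1000.0]   # note the outlier
print(min_max(data))   # the first four values end up almost identical
```

After min-max scaling, the four "normal" values all land below 0.004, so any distance computed on this attribute is dominated by the outlier.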
Sometimes we need to calculate the distance between a point and a set of points, for example the minimum, maximum, or average distance to the set's members.
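A minimal sketch of the three common conventions for point-to-set distance (the function names and the `mode` parameter are my own, not from the slides):

```python
def euclidean(x, y):
    return sum((a - b) ** 2 for a, b in zip(x, y)) ** 0.5

def point_to_set(p, points, mode="min"):
    """Distance from point p to a set of points: the nearest member,
    the farthest member, or the average over all members."""
    dists = [euclidean(p, q) for q in points]
    if mode == "min":
        return min(dists)
    if mode == "max":
        return max(dists)
    return sum(dists) / len(dists)

cluster = [(0.0, 0.0), (0.0, 2.0), (2.0, 0.0)]
print(point_to_set((0.0, 1.0), cluster, "min"))   # 1.0
```

Which convention is appropriate depends on the algorithm; hierarchical clustering, for instance, uses exactly these choices as its linkage criteria.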
K-nearest neighbors

The k-nearest neighbors algorithm (kNN) is a method for classifying objects based on the closest training examples in the feature space. It is an instance-based, lazy learning algorithm. An object is classified by a majority vote of its neighbors: it is assigned to the class most common amongst its k nearest neighbors.

It is one of the simplest classification methods. It requires an initial set of labeled points, and it is critical to determine an appropriate value for k: try several values. (The slides illustrate this with points labeled "circle" and "square".)
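The majority-vote idea fits in a few lines. This is an illustrative sketch (the training data and function names are my own, echoing the circle/square example on the slides):

```python
from collections import Counter

def euclidean(x, y):
    return sum((a - b) ** 2 for a, b in zip(x, y)) ** 0.5

def knn_classify(query, labeled_points, k):
    """Majority vote among the k closest labeled training points."""
    neighbors = sorted(labeled_points,
                       key=lambda pl: euclidean(query, pl[0]))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

training = [((1.0, 1.0), "circle"), ((1.0, 2.0), "circle"),
            ((2.0, 1.5), "circle"), ((6.0, 6.0), "square"),
            ((7.0, 6.5), "square"), ((6.5, 7.0), "square")]
print(knn_classify((2.0, 2.0), training, 3))   # circle
```

Trying several k values is as simple as calling `knn_classify` in a loop; with an even k, ties are possible, which is one reason odd values are often preferred.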
K-means

K-means is a prototype-based clustering algorithm. Each of the existing classes is represented by a prototype vector (a fictitious instance of the class) called a centroid. Once the centroids have been calculated, to classify a new element we simply find its closest centroid; this gives us its class. The centroids partition the space into a set of regions called Voronoi regions.
Centroid calculation: each centroid is the mean of the points currently assigned to its cluster.

The K-means algorithm:
1. Choose k initial centroids.
2. Assign each point to its closest centroid.
3. Recompute each centroid as the mean of its assigned points.
4. Repeat steps 2 and 3 until the assignments no longer change.
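A minimal sketch of the algorithm above (Lloyd's algorithm); the random initialization from sampled data points and the empty-cluster handling are my own assumptions, not prescribed by the slides:

```python
import random

def kmeans(points, k, iters=100, seed=0):
    """Repeatedly assign each point to its closest centroid, then move
    each centroid to the mean of the points assigned to it."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)        # random initial centroids
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(p, centroids[c])))
            clusters[nearest].append(p)
        # recompute each centroid as the mean of its cluster
        # (keep the old centroid if a cluster ends up empty)
        new = [tuple(sum(dim) / len(c) for dim in zip(*c)) if c else centroids[i]
               for i, c in enumerate(clusters)]
        if new == centroids:                 # assignments stabilized
            break
        centroids = new
    return centroids, clusters

pts = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1),
       (5.0, 5.0), (5.1, 4.9), (4.9, 5.2)]
cents, _ = kmeans(pts, 2)
print(sorted(cents))   # one centroid near each of the two blobs
```

Running with different seeds demonstrates the "initialization matters" point from the next slide: with unlucky starting centroids the algorithm can need more iterations or, on harder data, settle in a poor local optimum.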
A sample (successful) run is shown on the slides. Initialization matters: try different initial values.
The selection of K is critical: try different K values (the slides compare K = 3 and K = 4).

Limitations of K-means:
- Clusters of different sizes
- Clusters of different densities
- Non-globular shapes
One possible solution is to use many clusters, so that each one finds part of a true cluster, and then merge those parts together afterwards.

What about nominal attributes? We can define a function δ(a, b) that is 0 if a = b and 1 otherwise. The distance between two instances is then the sum of δ over all their nominal attributes.
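This nominal-attribute distance (a simple mismatch count, sometimes called the overlap or Hamming distance) can be sketched directly; the attribute values below are invented for illustration:

```python
def delta(a, b):
    """0 if the two nominal values match, 1 otherwise."""
    return 0 if a == b else 1

def nominal_distance(x, y):
    """Distance between two instances with nominal attributes:
    the number of attributes on which they disagree."""
    return sum(delta(a, b) for a, b in zip(x, y))

print(nominal_distance(("red", "small", "round"),
                       ("red", "large", "round")))   # 1
```

For mixed data, one can sum this mismatch count over the nominal attributes and a normalized numeric distance over the real-valued ones.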
K-means demos:
- ering/tutorial_html/appletkm.html
- Applet/Code/Cluster.html
DBSCAN

DBSCAN (Density-Based Spatial Clustering of Applications with Noise) is a data clustering algorithm that is not prototype-based: it finds a number of clusters starting from the estimated density distribution of the data points. It classifies points into three categories:
- A core point has more than a specified number of points (MinPts) within a radius Eps (these points are in the interior of a cluster).
- A border point has fewer than MinPts within Eps, but is in the neighborhood of a core point.
- A noise point is any point that is neither a core point nor a border point.

Example: (figure on the slide)
Algorithm:
1. Classify points as core, border, or noise.
2. Eliminate noise points.
3. Perform clustering on the remaining points: connect core points that lie within Eps of each other, and assign each border point to the cluster of a nearby core point.
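The steps above can be sketched compactly (an illustrative implementation, not the slides' code; the convention that a point counts itself among its Eps-neighbors, and the label -1 for noise, are my assumptions):

```python
def dbscan(points, eps, min_pts):
    """Label core points, grow clusters by connecting points that lie in
    each other's eps-neighborhoods; unreached points stay noise (-1)."""
    def neighbors(i):
        return [j for j, q in enumerate(points)
                if sum((a - b) ** 2
                       for a, b in zip(points[i], q)) ** 0.5 <= eps]

    labels = [None] * len(points)          # None = unvisited
    cluster = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:            # not a core point
            labels[i] = -1                 # provisionally noise
            continue
        labels[i] = cluster
        frontier = list(nbrs)
        while frontier:
            j = frontier.pop()
            if labels[j] == -1:            # border point: reclaim from noise
                labels[j] = cluster
            if labels[j] is not None:
                continue
            labels[j] = cluster
            j_nbrs = neighbors(j)
            if len(j_nbrs) >= min_pts:     # j is also core: keep expanding
                frontier.extend(j_nbrs)
        cluster += 1
    return labels

pts = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0),
       (10.0, 10.0), (10.0, 11.0), (11.0, 10.0), (11.0, 11.0),
       (5.0, 5.0)]
print(dbscan(pts, 1.5, 3))   # two clusters, and (5, 5) flagged as noise
```

Note how border points never expand the frontier: only core points grow a cluster, which is what makes DBSCAN resistant to noise.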
Strong points:
- Resistant to noise.
- Can handle clusters of different shapes and sizes.

Weak points:
- Clusters with varying densities.
- High-dimensional data (the space usually becomes too sparse).
Parameter determination: for MinPts, a small number is usually employed; for two-dimensional experimental data it has been shown that 4 is the most reasonable value. Eps is trickier, as we have seen. A possible solution: for points in a cluster, the k-th nearest neighbor is at roughly the same distance, while noise points have their k-th nearest neighbor at a farther distance. So, plot the sorted distance of every point to its k-th nearest neighbor and look for a sharp increase (a "knee") in the curve; that distance is a good candidate for Eps.
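Computing the curve behind that k-distance plot is straightforward (an illustrative sketch with invented data; plotting itself is left out):

```python
def kth_neighbor_distances(points, k):
    """Sorted distance from every point to its k-th nearest neighbor.
    A sharp jump (a "knee") in this curve suggests a good Eps."""
    out = []
    for i, p in enumerate(points):
        dists = sorted(sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
                       for j, q in enumerate(points) if j != i)
        out.append(dists[k - 1])
    return sorted(out)

pts = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0), (10.0, 10.0)]
print(kth_neighbor_distances(pts, 2))   # four small values, then a big jump
```

Here the four clustered points all have their 2nd nearest neighbor at distance 1.0, while the isolated point's is beyond 13, so any Eps between those values separates cluster from noise.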
DBSCAN demo: de/cluster.html
Hierarchical clustering

Hierarchical clustering builds a hierarchy of clusters based on distance measurements. The traditional representation of this hierarchy is a tree (called a dendrogram), with the individual elements at the leaves and a single cluster containing every element at the root. The tree-like diagram can be interpreted as a sequence of merges or splits, and any desired number of clusters can be obtained by cutting the dendrogram at the proper level.

There are two main types of hierarchical clustering:
- Agglomerative (AGNES, AGglomerative NESting): starts with the points as individual clusters; at each step, the closest pair of clusters is merged, until only one cluster (or k clusters) remains.
- Divisive (DIANA, DIvisive ANAlysis clustering): starts with one, all-inclusive cluster; at each step, a cluster is split, until each cluster contains a single point (or there are k clusters).

In both cases, once a decision is made to combine or split two clusters, it cannot be undone: there is no global minimization.
How do we define inter-cluster distance? The main options:
- Single link: can handle non-elliptical clusters, but is sensitive to noise and outliers.
- Complete link: less sensitive to noise and outliers, but tends to break large clusters and is biased towards globular clusters.
- Group average and centroid: less sensitive to noise and outliers, but biased towards globular clusters.
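The three linkage criteria differ only in how pairwise distances are aggregated, as this sketch (my own, not from the slides) makes explicit:

```python
def euclidean(x, y):
    return sum((a - b) ** 2 for a, b in zip(x, y)) ** 0.5

def single_link(c1, c2):
    """Minimum pairwise distance between the two clusters."""
    return min(euclidean(p, q) for p in c1 for q in c2)

def complete_link(c1, c2):
    """Maximum pairwise distance between the two clusters."""
    return max(euclidean(p, q) for p in c1 for q in c2)

def group_average(c1, c2):
    """Average of all pairwise distances between the two clusters."""
    return sum(euclidean(p, q) for p in c1 for q in c2) / (len(c1) * len(c2))

a = [(0.0, 0.0), (0.0, 1.0)]
b = [(3.0, 0.0), (4.0, 0.0)]
print(single_link(a, b))     # 3.0
print(complete_link(a, b))   # sqrt(17), about 4.12
```

An agglomerative algorithm simply repeats "merge the two clusters with the smallest linkage distance" until the desired number of clusters remains; swapping one linkage function for another changes the shape bias described above.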
Hierarchical clustering demo: al_html/appleth.html
Quick reference

Some general tips for choosing a clustering algorithm:
- Prototype-based and hierarchical clustering (except single-link) tend to form globular clusters. This is good for vector quantization but not for other kinds of data.
- Density-based and graph-based methods (plus single-link hierarchical clustering) tend to form non-globular clusters.
- Most clustering algorithms work well in low-dimensional spaces. If the dimensionality of the data is very large, consider reducing it beforehand (e.g., with PCA).
- If a taxonomy is to be created, consider hierarchical clustering. If a summarization of the data is needed, consider a partitional clustering.
- Can we allow the algorithm to discard outliers? They might represent unusually profitable customers. Is it necessary to classify all the data? (Ex: we have to classify all documents in the database.)
- Computing the mean makes sense only for real-valued attributes (K-means). Define an appropriate distance (Ex: Euclidean distance is valid for real-valued attributes only).