Triangular Visualization


Tomasz Maszczyk and Włodzisław Duch
Department of Informatics, Nicolaus Copernicus University, Toruń, Poland

Abstract. The TriVis algorithm for visualization of multidimensional data proximities in two dimensions is presented. The algorithm preserves the maximum number of exact distances, has a simple interpretation, and unlike multidimensional scaling (MDS) does not require costly minimization. It may also provide an excellent starting point, significantly reducing the number of iterations required in MDS.

1 Introduction

Almost all datasets in real applications have many input variables that may be inter-related in subtle ways. Such datasets can be analyzed using conventional methods based on statistics, providing a numerical indication of the contribution of each feature to a specific category. Frequently, exploratory data analysis is more informative when visual analysis and pattern recognition are used rather than direct analysis of numerical data. The visual interpretation of datasets is limited by human perception to two or three dimensions. Projection methods and non-linear mapping methods that show interesting aspects of multidimensional data are therefore highly desirable [1]. Methods that represent topographical proximity of data are of great importance. They are usually variants of multidimensional scaling (MDS) techniques [2]. Here a simpler and more effective approach called TriVis is introduced, based on growing structures of triangles that preserve exactly as many distances as possible. Mappings obtained in this way are easy to understand, and may also be used for MDS initialization. Other methods do not seem to find significantly better mappings.

In the next section a few linear and non-linear visualization methods are described, and TriVis visualization based on sequential construction of triangles is introduced. Illustrative examples for several datasets are presented in Section 3. Conclusions are given in the final section.
2 Visualization algorithms

First a short description of the two most commonly used methods, principal component analysis (PCA) [1] and multidimensional scaling (MDS) [2], is given. After this, our TriVis approach is presented. A comparison of these three methods is presented in the next section.

2.1 Principal component analysis

PCA is a linear projection method that finds orthogonal combinations of input features X = {x_1, x_2, ..., x_N} preserving most variation in the data. Principal component directions P_i result from diagonalization of the data covariance matrix [3]. They can be ordered from the most to the least important, according to the size of the corresponding eigenvalues. Directions with maximum variability of data points guarantee minimal loss of information when positions of points are recreated from their low-dimensional projections. Visualization can be done by taking the first two principal components and projecting the data into the space defined by these components, y_ij = P_i · X_j. PCA complexity is dominated by the covariance matrix diagonalization; for the two highest eigenvalues it is at least O(N²). For some data distributions this algorithm shows informative structures. A kernelized version of standard PCA may easily be formulated [4], finding directions of maximum variance for vectors mapped to an extended space. This space is not constructed explicitly; the only condition is that the kernel mapping K(X, X′) of the original vectors should be a scalar product Φ(X) · Φ(X′) in an extended space Φ(X). This enables interesting visualizations of data, although their interpretation is rather difficult.

2.2 Multidimensional scaling

MDS is perhaps the most popular non-linear technique for proximity visualization. The main idea, how to decrease dimensionality while preserving original distances in the high-dimensional space, has been rediscovered several times [5-7], and is realized either by minimization of specific cost functions [2] or by solving a system of cubic equations [7]. MDS methods need only similarities between objects as inputs, so an explicit vector representation of objects is not necessary.
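The two-component projection described above can be sketched in a few lines of NumPy; the function name and the random data are illustrative, not from the paper:

```python
import numpy as np

def pca_2d(X):
    """Project rows of X onto the two leading principal components."""
    Xc = X - X.mean(axis=0)                 # center the data
    cov = np.cov(Xc, rowvar=False)          # data covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigh returns ascending order
    order = np.argsort(eigvals)[::-1]       # most to least important
    P = eigvecs[:, order[:2]]               # directions P_1, P_2
    return Xc @ P                           # y_ij = P_i . X_j

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
Y = pca_2d(X)                               # shape (100, 2)
```

The first output column then carries at least as much variance as the second, matching the eigenvalue ordering.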
Qualitative information about pairwise similarities is sufficient for non-metric MDS, but here only quantitative evaluation of similarity calculated by numerical functions is used. MDS methods differ by their cost functions, optimization algorithms, the type of similarity functions, and the use of feature weighting. There are many measures of topographical distortions due to the reduction of dimensionality, most of them weighted variants of the simple stress function [2]:

S(d) = Σ_{i>j} W_ij (D_ij − d_ij)²   (1)

where d_ij are distances (dissimilarities) in the target (low-dimensional) space, and D_ij are distances in the input space. Weights are W_ij = 1 for the simple stress function, or, to reduce the effect of large distances, W_ij = 1/D_ij or W_ij = 1/D_ij². The sum runs over all pairs of objects and thus contributes O(n²) terms. In the k-dimensional target space there are k·n parameters for minimization. For visualization purposes the dimension of the target space is k = 1, 2 or 3 (usually k = 2).
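Equation (1) translates directly into code; the following helper is our own minimal sketch, not from the paper:

```python
import numpy as np

def stress(D, Y, W=None):
    """Weighted stress S(d) = sum_{i>j} W_ij (D_ij - d_ij)^2.

    D: input-space distances D_ij; Y: target-space coordinates;
    W=None gives the simple stress (all W_ij = 1).
    """
    d = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1)
    i, j = np.triu_indices(len(D), k=1)     # each pair i > j counted once
    if W is None:
        W = np.ones_like(D)
    return float(np.sum(W[i, j] * (D[i, j] - d[i, j]) ** 2))

# A configuration that reproduces its own distances has zero stress.
Y = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 4.0]])
D = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1)
print(stress(D, Y))  # 0.0
```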

MDS cost functions are not easy to minimize, with multiple local minima representing different mappings. The initial configuration is either selected randomly or is based on a PCA projection. Dissimilar objects are represented by points that are far apart, and similar objects by points that are close, showing clusters in the data. The orientation of axes in an MDS mapping is arbitrary, and the values of coordinates do not have any meaning, as only relative distances are preserved. Kohonen networks [8] are also a popular tool combining clusterization with visualization, but they do not directly minimize any measure of topographical distortion, therefore their visualization is not as good as that provided by MDS.

2.3 Triangular visualization

The TriVis algorithm creates a representation of data points in two-dimensional space that exactly preserves as many distances between points as possible. Distances between any 3 vectors forming a triangle may always be reproduced exactly; a new point is iteratively added relative to one side of an existing triangle, forming a new triangle that exactly preserves two original distances. There are many possibilities for adding such points in relation to the existing triangle sides. To preserve the overall cluster structure, the 3 farthest points are selected for the start (an alternative is to use the centers of 3 clusters), and each new point is chosen to minimize the MDS stress function S(d) = Σ_{i>j} (D_ij − d_ij)². This gives a mapping that preserves exactly 2n−3 out of the n(n−1)/2 original distances, minimizing the overall stress.

Algorithm 1
1: Find the three farthest vectors and mark them (preserving original distances) as points of the initial triangle.
2: Mark segments (pairs of points) forming the triangle sides as available.
3: for i = 1 to n−3 do
4:   Find the segment AB for which vector X_i, added as the point C = C(X_i), forms a triangle ABC preserving the two original distances AC and BC, and gives the smallest increase of the stress S_i = Σ_{j=1}^{m} (D_ij − d_ij)².
5:   Delete the AB segment from the list of available segments, and add to it segments AC and BC.
6: end for

The complexity of this algorithm grows like O(n²), but MDS has to perform minimization over the positions of these points while TriVis simply calculates the positions. To speed up the visualization process this algorithm may be applied first to K vectors selected as the points nearest to the centers of K clusters (found, for example, using K-means or dendrogram clusterization). Plotting these points should preserve the overall structure of the data, and applying TriVis to the points within each cluster decomposes the problem into K smaller O(n_k²) problems. For a large number of vectors the use of a jitter technique may be sufficient, plotting the vectors that belong to one specific cluster near the center of this cluster, with dispersion equal to the mean distance between these points and the center.
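Algorithm 1 can be sketched as follows. The helper names, the insertion order (input order), and the reading of "three farthest vectors" as the triple with the largest sum of pairwise distances are our assumptions, since the paper does not fix these details:

```python
import numpy as np
from itertools import combinations

def _third_point(A, B, rA, rB):
    """Both points at distance rA from A and rB from B (mirror pair)."""
    dab = np.linalg.norm(B - A)
    ex = (B - A) / dab
    ey = np.array([-ex[1], ex[0]])          # perpendicular direction
    x = (rA**2 - rB**2 + dab**2) / (2 * dab)
    y = np.sqrt(max(0.0, rA**2 - x**2))     # clamp for rounding errors
    return [A + x * ex + y * ey, A + x * ex - y * ey]

def trivis(D):
    """2-D layout preserving 2n-3 of the n(n-1)/2 original distances."""
    n = len(D)
    # Step 1: initial triangle from the three mutually farthest vectors.
    a, b, c = max(combinations(range(n), 3),
                  key=lambda t: D[t[0], t[1]] + D[t[0], t[2]] + D[t[1], t[2]])
    Y = np.zeros((n, 2))
    Y[b] = (D[a, b], 0.0)
    x = (D[a, b]**2 + D[a, c]**2 - D[b, c]**2) / (2 * D[a, b])
    Y[c] = (x, np.sqrt(max(0.0, D[a, c]**2 - x**2)))
    placed = [a, b, c]
    segments = [(a, b), (a, c), (b, c)]     # step 2: available sides
    for i in range(n):                      # steps 3-6
        if i in placed:
            continue
        best = None
        for (p, q) in segments:
            for C in _third_point(Y[p], Y[q], D[p, i], D[q, i]):
                s = sum((D[i, j] - np.linalg.norm(C - Y[j]))**2
                        for j in placed)    # stress increase S_i
                if best is None or s < best[0]:
                    best = (s, C, (p, q))
        _, Y[i], seg = best
        segments.remove(seg)                # step 5: replace AB by AC, BC
        segments += [(seg[0], i), (seg[1], i)]
        placed.append(i)
    return Y

rng = np.random.default_rng(1)
P = rng.normal(size=(10, 2))
D = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)
Y = trivis(D)   # at least 2n-3 of the distances in D are matched exactly
```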

To measure what can be gained by full MDS minimization, the TriVis mapping should be used as a starting configuration for MDS. This should provide much lower stress at the beginning, reducing the number of iterations.

3 Illustrative examples

The usefulness of the TriVis sequential visualization method has been evaluated on four datasets downloaded from the UCI Machine Learning Repository [9] and from [10]. A summary of these datasets is presented in Tab. 1; a short description follows:

1. Iris: the most popular dataset; it contains 3 classes of 50 instances each, where each class refers to a type of Iris flower.
2. Heart disease: the dataset consists of 270 samples, each described by 13 attributes; 150 cases belong to the "absence" group and 120 to the "presence" of heart disease.
3. Wine: the wine data are the results of a chemical analysis of wines grown in the same region of Italy but derived from three different cultivars; 13 features characterizing each wine are provided, and the data contain 178 examples.
4. Leukemia: microarray gene expressions for two types of leukemia (ALL and AML), with a total of 47 ALL and 25 AML samples measured with 7129 probes [10]. Visualization is based on the 100 best features from a simple feature ranking using the FDA index [1].

For each dataset two-dimensional mappings have been created using PCA, TriVis, MDS starting from a random configuration, and MDS starting from the TriVis configuration (Figs. 1-4). Visualization is sensitive to feature selection and weighting, and frequently linear projections discovered through supervised learning may be more informative [11, 12]. Since our goal here is not the best visualization but rather a comparison of the TriVis algorithm with the PCA and MDS methods, all features have been used.

Table 1. Summary of datasets used for illustrations

Title    | #Features     | #Samples | #Samples per class                     | Source
Iris     | 4             | 150      | 50 Setosa, 50 Virginica, 50 Versicolor | [9]
Heart    | 13            | 270      | 150 absence, 120 presence              | [9]
Wine     | 13            | 178      | 59 C1, 71 C2, 48 C3                    | [9]
Leukemia | 100 (of 7129) | 72       | 47 ALL, 25 AML                         | [10]

Mappings of both the Iris and Cleveland Heart datasets are rather similar for all 4 techniques (selecting only relevant features would show a better separation between classes); PCA shows a bit more overlap, and MDS optimization of the TriVis configuration does not provide more information than the initial configuration. The Wine dataset does not map well with PCA, and the different classes are somewhat better separated using MDS with TriVis initialization. This is a good example showing that using the TriVis configuration as the start for MDS leads to faster and better convergence.
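The effect of a good starting configuration can be reproduced even with a crude stress minimizer. The gradient-descent loop below is a toy sketch (the paper does not specify its MDS optimizer); Y0 would be the TriVis or PCA layout instead of a random start:

```python
import numpy as np

def mds_descent(D, Y0, lr=1e-3, iters=300):
    """Minimize the simple stress sum_{i>j} (D_ij - d_ij)^2 by plain
    gradient descent from the initial configuration Y0."""
    Y = Y0.astype(float).copy()
    for _ in range(iters):
        diff = Y[:, None, :] - Y[None, :, :]
        d = np.linalg.norm(diff, axis=-1)
        np.fill_diagonal(d, 1.0)            # diff is zero there anyway
        # dS/dY_i = sum_j 2 (d_ij - D_ij) (Y_i - Y_j) / d_ij
        grad = 2.0 * (((d - D) / d)[:, :, None] * diff).sum(axis=1)
        Y -= lr * grad
    return Y

def simple_stress(D, Y):
    d = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1)
    i, j = np.triu_indices(len(D), 1)
    return float(np.sum((D[i, j] - d[i, j]) ** 2))

rng = np.random.default_rng(2)
P = rng.normal(size=(12, 3))                # 3-D data to embed in 2-D
D = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)
Y0 = rng.normal(size=(12, 2))               # random start
Y = mds_descent(D, Y0)                      # stress drops from the start
```

A start closer to the minimum (TriVis) simply shortens this loop, which is the convergence effect compared in Fig. 5.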

Fig. 1. Iris dataset, top row: PCA and TriVis; bottom row: typical (randomly initialized) MDS and MDS initialized by TriVis.
Fig. 2. Cleveland Heart dataset, top row: PCA and TriVis; bottom row: typical MDS and MDS initialized by TriVis.
Fig. 3. Wine dataset, top row: PCA and TriVis; bottom row: typical MDS and MDS initialized by TriVis.
Fig. 4. Leukemia dataset, top row: PCA and TriVis; bottom row: typical MDS and MDS initialized by TriVis.
Fig. 5. Comparison of 3 types of MDS initialization on the Wine dataset (stress vs. iterations): solid blue line - random, dotted green line - PCA, dashed red line - TriVis.

Leukemia shows good separation using the TriVis projection (Fig. 4), providing a somewhat more interesting projection than the other methods. To show the influence of TriVis initialization on MDS, a comparison of the convergence starting from random, PCA and TriVis configurations is presented in Fig. 5. TriVis initialization clearly works best, leading to convergence in a few iterations and achieving the lowest stress values. This type of initialization may prevent MDS from getting stuck in a poor local minimum.

4 Conclusions

In exploratory data analysis PCA and MDS are the most popular methods for data visualization. Visualization based on proximity helps to understand the structure of the data, to see the outliers, and to place interesting cases in their most similar context; it may also help to understand what black-box classifiers really do [13, 14]. In safety-critical areas visual interpretation may be crucial for acceptance of proposed methods of data analysis. The TriVis algorithm presented in this paper has several advantages: it enables visualization of proximities in two dimensions; preserves the maximum number of exact distances, reducing distortions of the others; has a simple interpretation; allows for simple assessment of various similarity functions and feature selection and weighting techniques; and it may unfold various manifolds [15] (hypersurfaces embedded in high-dimensional spaces). For large datasets it may be coupled with hierarchical dendrogram clusterization methods to represent with high accuracy the relations between clusters. PCA does not preserve proximity information, while MDS is much more costly and does not seem to have advantages over TriVis. If MDS visualization is desired, TriVis gives an excellent starting point, significantly reducing the number of required iterations.

References

1. Webb, A.: Statistical Pattern Recognition. J. Wiley & Sons (2002)
2. Cox, T., Cox, M.: Multidimensional Scaling, 2nd Ed. Chapman and Hall (2001)
3. Jolliffe, I.: Principal Component Analysis. Springer-Verlag, Berlin (1986)
4. Schölkopf, B., Smola, A.: Learning with Kernels. Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, Cambridge, MA (2001)
5. Torgerson, W.: Multidimensional scaling. I. Theory and method. Psychometrika 17 (1952)
6. Sammon, J.: A nonlinear mapping for data structure analysis. IEEE Transactions on Computers C-18 (1969)
7. Duch, W.: Quantitative measures for the self-organized topographical mapping. Open Systems and Information Dynamics 2 (1995)
8. Kohonen, T.: Self-organizing Maps. Springer-Verlag, Heidelberg, Berlin (1995)
9. Asuncion, A., Newman, D.: UCI machine learning repository. mlearn/mlrepository.html (2009)
10. Golub, T.: Molecular classification of cancer: Class discovery and class prediction by gene expression monitoring. Science 286 (1999)
11. Maszczyk, T., Duch, W.: Support vector machines for visualization and dimensionality reduction. Lecture Notes in Computer Science 5163 (2008)
12. Maszczyk, T., Grochowski, M., Duch, W.: Discovering data structures using meta-learning, visualization and constructive neural networks. In: Studies in Computational Intelligence. Springer (2010) (in print)
13. Duch, W.: Visualization of hidden node activity in neural networks: I. Visualization methods. In: Rutkowski, L., Siekmann, J., Tadeusiewicz, R., Zadeh, L. (eds.): Lecture Notes in Artificial Intelligence. Physica Verlag, Springer, Berlin, Heidelberg, New York (2004)
14. Duch, W.: Coloring black boxes: visualization of neural network decisions. In: Int. Joint Conf. on Neural Networks, Portland, Oregon, Vol. I. IEEE Press (2003)
15. Tenenbaum, J., de Silva, V., Langford, J.C.: A global geometric framework for nonlinear dimensionality reduction. Science 290 (2000)


Dimensionality Reduction by Learning an Invariant Mapping

Dimensionality Reduction by Learning an Invariant Mapping Raia Hadsell, Sumit Chopra, Yann LeCun The Courant Institute of Mathematical Sciences New York University, 719 Broadway, New York, NY 1003, USA.

Intelligent Data Analysis. Topographic Maps of Vectorial Data. School of Computer Science University of Birmingham

Intelligent Data Analysis Topographic Maps of Vectorial Data Peter Tiňo School of Computer Science University of Birmingham Discovering low-dimensional spatial layout in higher dimensional spaces - 1-D/3-D

Clustering & Association

Clustering - Overview What is cluster analysis? Grouping data objects based only on information found in the data describing these objects and their relationships Maximize the similarity within objects

Fraudulent Behavior Forecast in Telecom Industry Based on Data Mining Technology

Communications of the IIMA Volume 7 Issue 4 Article 1 2007 Fraudulent Behavior Forecast in Telecom Industry Based on Data Mining Technology Sen Wu School of Economics and Management University of Science

USING THE AGGLOMERATIVE METHOD OF HIERARCHICAL CLUSTERING AS A DATA MINING TOOL IN CAPITAL MARKET 1. Vera Marinova Boncheva

382 [7] Reznik, A, Kussul, N., Sokolov, A.: Identification of user activity using neural networks. Cybernetics and computer techniques, vol. 123 (1999) 70 79. (in Russian) [8] Kussul, N., et al. : Multi-Agent

An Enhanced Clustering Algorithm to Analyze Spatial Data

International Journal of Engineering and Technical Research (IJETR) ISSN: 2321-0869, Volume-2, Issue-7, July 2014 An Enhanced Clustering Algorithm to Analyze Spatial Data Dr. Mahesh Kumar, Mr. Sachin Yadav

The Value of Visualization 2

The Value of Visualization 2 G Janacek -0.69 1.11-3.1 4.0 GJJ () Visualization 1 / 21 Parallel coordinates Parallel coordinates is a common way of visualising high-dimensional geometry and analysing multivariate

A Modified K-Means Clustering with a Density-Sensitive Distance Metric

A Modified K-Means Clustering with a Density-Sensitive Distance Metric Ling Wang, Liefeng Bo, Licheng Jiao Institute of Intelligent Information Processing, Xidian University Xi an 710071, China {wliiip,blf0218}@163.com,

10-810 /02-710 Computational Genomics. Clustering expression data

10-810 /02-710 Computational Genomics Clustering expression data What is Clustering? Organizing data into clusters such that there is high intra-cluster similarity low inter-cluster similarity Informally,

Data Mining: Exploring Data. Lecture Notes for Chapter 3. Introduction to Data Mining

Data Mining: Exploring Data Lecture Notes for Chapter 3 Introduction to Data Mining by Tan, Steinbach, Kumar What is data exploration? A preliminary exploration of the data to better understand its characteristics.

NEW VERSION OF DECISION SUPPORT SYSTEM FOR EVALUATING TAKEOVER BIDS IN PRIVATIZATION OF THE PUBLIC ENTERPRISES AND SERVICES

NEW VERSION OF DECISION SUPPORT SYSTEM FOR EVALUATING TAKEOVER BIDS IN PRIVATIZATION OF THE PUBLIC ENTERPRISES AND SERVICES Silvija Vlah Kristina Soric Visnja Vojvodic Rosenzweig Department of Mathematics

203.4770: Introduction to Machine Learning Dr. Rita Osadchy

203.4770: Introduction to Machine Learning Dr. Rita Osadchy 1 Outline 1. About the Course 2. What is Machine Learning? 3. Types of problems and Situations 4. ML Example 2 About the course Course Homepage:

Graph-based normalization and whitening for non-linear data analysis

Neural Networks 19 (2006) 864 876 www.elsevier.com/locate/neunet 2006 Special Issue Graph-based normalization and whitening for non-linear data analysis Catherine Aaron SAMOS-MATISSE, Paris, France Abstract

Social Media Mining. Data Mining Essentials

Introduction Data production rate has been increased dramatically (Big Data) and we are able store much more data than before E.g., purchase data, social media data, mobile phone data Businesses and customers

Data Mining Cluster Analysis: Basic Concepts and Algorithms. Lecture Notes for Chapter 8. Introduction to Data Mining

Data Mining Cluster Analysis: Basic Concepts and Algorithms Lecture Notes for Chapter 8 by Tan, Steinbach, Kumar 1 What is Cluster Analysis? Finding groups of objects such that the objects in a group will

Visualization of Large Font Databases

Visualization of Large Font Databases Martin Solli and Reiner Lenz Linköping University, Sweden ITN, Campus Norrköping, Linköping University, 60174 Norrköping, Sweden Martin.Solli@itn.liu.se, Reiner.Lenz@itn.liu.se

Efficient visual search of local features. Cordelia Schmid

Efficient visual search of local features Cordelia Schmid Visual search change in viewing angle Matches 22 correct matches Image search system for large datasets Large image dataset (one million images

Manifold Learning Examples PCA, LLE and ISOMAP

Manifold Learning Examples PCA, LLE and ISOMAP Dan Ventura October 14, 28 Abstract We try to give a helpful concrete example that demonstrates how to use PCA, LLE and Isomap, attempts to provide some intuition

Making Better Features

Making Better Features 36-350: Data Mining 16 September 2009 Contents Reading: Sections 2.4, 3.4 and 3.5 in the textbook. 1 Standardizing and Transforming 1 2 Relationships Among Features and Low-Dimensional

Machine Learning and Pattern Recognition Logistic Regression

Machine Learning and Pattern Recognition Logistic Regression Course Lecturer:Amos J Storkey Institute for Adaptive and Neural Computation School of Informatics University of Edinburgh Crichton Street,

Quantitative measures for the Self-Organizing Topographic Maps

Open Systems & Information Dynamics Vol. x No x (1994) xx-xx Nicholas Copernicus University Press Quantitative measures for the Self-Organizing Topographic Maps W ODZIS AW DUCH 1 Department of Computer

Visual analysis of self-organizing maps

488 Nonlinear Analysis: Modelling and Control, 2011, Vol. 16, No. 4, 488 504 Visual analysis of self-organizing maps Pavel Stefanovič, Olga Kurasova Institute of Mathematics and Informatics, Vilnius University

New Ensemble Combination Scheme

New Ensemble Combination Scheme Namhyoung Kim, Youngdoo Son, and Jaewook Lee, Member, IEEE Abstract Recently many statistical learning techniques are successfully developed and used in several areas However,

Standardization and Its Effects on K-Means Clustering Algorithm

Research Journal of Applied Sciences, Engineering and Technology 6(7): 399-3303, 03 ISSN: 040-7459; e-issn: 040-7467 Maxwell Scientific Organization, 03 Submitted: January 3, 03 Accepted: February 5, 03

Using Data Mining for Mobile Communication Clustering and Characterization

Using Data Mining for Mobile Communication Clustering and Characterization A. Bascacov *, C. Cernazanu ** and M. Marcu ** * Lasting Software, Timisoara, Romania ** Politehnica University of Timisoara/Computer

CSE 494 CSE/CBS 598 (Fall 2007): Numerical Linear Algebra for Data Exploration Clustering Instructor: Jieping Ye

CSE 494 CSE/CBS 598 Fall 2007: Numerical Linear Algebra for Data Exploration Clustering Instructor: Jieping Ye 1 Introduction One important method for data compression and classification is to organize