Data visualization using nonlinear dimensionality reduction techniques: method review and quality assessment John A. Lee Michel Verleysen

Data visualization using nonlinear dimensionality reduction techniques: method review and quality assessment. John A. Lee, Michel Verleysen. Machine Learning Group, Université catholique de Louvain, Louvain-la-Neuve, Belgium.

How can we detect structure in data? Hopefully the data convey some information. Informal definition of structure: we assume that we have vectorial data in some space. General probabilistic model: data are distributed w.r.t. some distribution. Two particular cases: manifold data and clustered data.

How can we detect structure in data? Two main solutions. Visualize data (the user's eyes play a central part): the data are left unchanged, many views are proposed, interactivity is inherent; examples: scatter plots, projection pursuit. Represent data (the software does a data-processing job): the data are appropriately modified and a single interesting representation is to be found: (nonlinear) dimensionality reduction.

High-dimensional spaces. The curse of dimensionality: the empty space phenomenon (function approximation requires an exponential number of points) and the norm concentration phenomenon (distances in a normal distribution have a chi distribution). Unexpected consequences: a hypercube looks like a sea urchin (many spiky corners!); hypercube corners collapse towards the center in any projection; the volume of a unit hypersphere tends to zero; the sphere volume concentrates in a thin shell; tails of a Gaussian get heavier than the central bell. Dimensionality reduction can hopefully address some of those issues. [Figure: 3D vs. 2D illustration]

The manifold hypothesis. The key idea behind dimensionality reduction: data live in a D-dimensional space but lie on some P-dimensional subspace. Usual hypothesis: the subspace is a smooth manifold. The manifold can be a linear subspace or any other function of some latent variables. Dimensionality reduction aims at inverting the latent variable mapping and unfolding the manifold (topology allows us to deform it). An appropriate noise model makes the connection with the general probabilistic model. In practice, P is unknown: we need an estimator of the intrinsic dimensionality.

Estimator of the intrinsic dimensionality. General idea: estimate the fractal dimension. Box counting (or capacity dimension): create bins of width ε along each dimension; data sampled on a P-dimensional manifold occupy $N(\varepsilon) \propto \varepsilon^{-P}$ boxes; compute the slope in a log-log diagram of N(ε) w.r.t. ε. Simple, but a subjective method (the slope must be estimated at some scale), not robust against noise, and computationally expensive.
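
A minimal NumPy sketch of the box-counting idea (the helper name box_counting_dimension and the way the grid and eps_values are chosen are illustrative assumptions, not part of the slides):

```python
import numpy as np

def box_counting_dimension(X, eps_values):
    """Capacity dimension estimate: slope of log N(eps) versus log(1/eps)."""
    counts = []
    for eps in eps_values:
        cells = np.floor(X / eps).astype(int)          # bin index of each point along each dimension
        counts.append(len(np.unique(cells, axis=0)))   # number of occupied boxes N(eps)
    # N(eps) scales like eps**(-P), so the log-log slope estimates P
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(eps_values)), np.log(counts), 1)
    return slope
```

As the slide warns, the slope must be read in a suitable scale range, which is the subjective part of the method.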

Estimator of the intrinsic dimensionality. Correlation dimension: any datum of a P-dimensional manifold is surrounded by $C_2(\varepsilon) \propto \varepsilon^{P}$ neighbours, where ε is a small neighbourhood radius; compute the slope of the correlation sum in a log-log diagram. [Figure: noisy spiral and log-log plot of the correlation sum; the slope estimates the intrinsic dimension.]
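
For comparison, a sketch of the correlation-dimension (Grassberger-Procaccia) estimate; correlation_dimension and eps_values are again illustrative names, and eps_values must lie in the scaling region where the correlation sum is nonzero:

```python
import numpy as np
from scipy.spatial.distance import pdist

def correlation_dimension(X, eps_values):
    """Slope of log C2(eps) versus log eps, where C2 is the correlation sum."""
    d = pdist(X)                                               # all pairwise Euclidean distances
    C2 = np.array([np.mean(d < eps) for eps in eps_values])    # fraction of pairs closer than eps
    # C2(eps) scales like eps**P in the scaling region, so the log-log slope estimates P
    slope, _ = np.polyfit(np.log(eps_values), np.log(C2), 1)
    return slope
```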

Estimator of the intrinsic dimensionality. Other techniques. Local PCAs: split the manifold into small patches (the manifold is locally linear) and apply PCA on each patch. Trial-and-error: pick an appropriate DR method, run it for P = 1, …, D and record the value E*(P) of the cost function after optimisation, then draw the curve of E*(P) w.r.t. P and detect its elbow. [Figure: curve of E*(P) versus P]

Historical review of some NLDR methods (timeline from 1900 to 2009): principal component analysis; classical metric multidimensional scaling; stress-based MDS & Sammon mapping; nonmetric multidimensional scaling; self-organizing map; auto-encoder; curvilinear component analysis; spectral methods (kernel PCA, Isomap, locally linear embedding, Laplacian eigenmaps, maximum variance unfolding); similarity-based embedding (stochastic neighbor embedding, Simbed & CCA revisited).

A technical slide (some reminders)

Yet another bad guy

Jamais deux sans trois (never two without three)

Principal component analysis. Pearson, 1901; Hotelling, 1933; Karhunen, 1946; Loève, 1948. Idea: decorrelate zero-mean data and keep the large-variance axes, i.e. fit a plane through the data cloud and project onto it. Details: maximise the projected variance.

Principal component analysis. Details: equivalently, minimise the reconstruction error.

Principal component analysis. Implementation: center the data by removing the sample mean, then multiply the data set by the top eigenvectors of the sample covariance matrix. Salient features: spectral method; incremental embeddings; built-in estimator of the intrinsic dimensionality (covariance eigenvalues = variance along the projection axes); parametric mapping model.
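
A compact NumPy sketch of the implementation described above (pca is an illustrative helper name):

```python
import numpy as np

def pca(X, n_components):
    """Center, eigendecompose the sample covariance, project on the top axes."""
    Xc = X - X.mean(axis=0)                       # remove the sample mean
    eigval, eigvec = np.linalg.eigh(np.cov(Xc, rowvar=False))
    order = np.argsort(eigval)[::-1]              # sort eigenvalues in decreasing order
    W = eigvec[:, order[:n_components]]           # top eigenvectors = projection axes
    return Xc @ W, eigval[order]                  # embedding + variance along each axis
```

The returned eigenvalue spectrum is the intrinsic-dimensionality indicator mentioned on the slide: the variance along each projection axis.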

Classical metric multidimensional scaling. Young & Householder, 1938; Torgerson, 1952. Idea: fit a plane through the data cloud and project; inner product preservation (equivalent to distance preservation). Details.

Classical metric multidimensional scaling. Details (cont'd).

Classical metric multidimensional scaling. Implementation: double centering (it converts distances into inner products and indirectly cancels the sample mean in the Gram matrix), then eigenvalue decomposition of the centered Gram matrix; the scaled top eigenvectors provide the projected coordinates. Salient features: provides the same solution as PCA iff the dissimilarity is the Euclidean distance; nonparametric model (out-of-sample extension is possible with the Nyström formula).
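
A sketch with explicit double centering (classical_mds is an illustrative name; D is a matrix of pairwise dissimilarities):

```python
import numpy as np

def classical_mds(D, n_components=2):
    """Double centering of squared dissimilarities, then EVD of the Gram matrix."""
    N = D.shape[0]
    J = np.eye(N) - np.ones((N, N)) / N               # centering matrix
    G = -0.5 * J @ (D ** 2) @ J                       # distances -> centered inner products
    eigval, eigvec = np.linalg.eigh(G)
    order = np.argsort(eigval)[::-1][:n_components]
    scale = np.sqrt(np.maximum(eigval[order], 0.0))   # clip small negative eigenvalues
    return eigvec[:, order] * scale                   # scaled top eigenvectors = coordinates
```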

Stress-based MDS & Sammon mapping. Kruskal, 1964; Sammon, 1969; de Leeuw, 1977. Idea: true distance preservation, quantified by a cost function; Sammon mapping is a particular case of stress-based MDS. Details: distances and objective functions (strain, stress, Sammon's stress).

Stress-based MDS & Sammon mapping. Implementation: steepest descent of the stress function (Kruskal, 1964); pseudo-Newton minimization of the stress function (diagonal approximation of the Hessian, used in Sammon, 1969); SMACOF for weighted stress (scaling by majorizing a complicated function; de Leeuw, 1977). Salient features: nonparametric mapping; the main metaparameters are the distance weights w_ij. How can we choose them? Give more importance to small distances, e.g. pick a decreasing function of the distance δ_ij as in Sammon mapping, so that Sammon mapping has almost no metaparameters. Any distance in the high-dim space can be used (e.g. geodesic distances; see Isomap). The optimization procedure can get stuck in local minima.
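
A plain gradient-descent sketch of Sammon's stress (the helper name sammon, the learning rate and the iteration count are illustrative assumptions; Sammon's 1969 algorithm uses the pseudo-Newton step instead):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def sammon(X, n_components=2, n_iter=500, lr=0.3, seed=0):
    """Minimise the normalised Sammon stress sum_{i<j} (delta_ij - d_ij)^2 / delta_ij."""
    rng = np.random.default_rng(seed)
    Delta = squareform(pdist(X))                  # high-dim distances delta_ij
    np.fill_diagonal(Delta, 1.0)                  # avoid division by zero on the diagonal
    c = np.sum(np.triu(Delta, k=1))               # normalisation constant
    Y = 1e-2 * rng.standard_normal((len(X), n_components))
    for _ in range(n_iter):
        d = squareform(pdist(Y))
        np.fill_diagonal(d, 1.0)
        W = (Delta - d) / (Delta * d)             # pairwise weights appearing in the gradient
        np.fill_diagonal(W, 0.0)
        grad = -(2.0 / c) * (W.sum(axis=1)[:, None] * Y - W @ Y)
        Y -= lr * grad                            # steepest-descent step
    return Y
```

Like the slide says, such a descent can get stuck in local minima; several restarts or a PCA initialisation help in practice.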

Nonmetric multidimensional scaling. Shepard, 1962; Kruskal, 1964. Idea: stress-based MDS for ordinal (nonmetric) data; try to preserve monotonically transformed distances (and optimise the transformation). Details: cost function. Implementation: monotone regression. Salient features: ad hoc optimization; nonparametric model.

Self-organizing map. von der Malsburg, 1973; Kohonen, 1982. Idea: biological inspiration (brain cortex); a nonlinear version of PCA: replace the PCA plane with an articulated grid and fit the grid through the data cloud (akin to K-means with an a priori topology and a winner-takes-most rule). Details: a grid of nodes is defined in the low-dim space; the grid nodes have high-dim coordinates (prototypes) as well; the high-dim coordinates are updated in an adaptive procedure (at each epoch, all data vectors are presented one by one in random order): for each data vector $\mathbf{x}$, find the best matching node $c = \arg\min_i \|\mathbf{x} - \mathbf{m}_i\|$ and update the coordinates of all nodes as $\mathbf{m}_i \leftarrow \mathbf{m}_i + \alpha\, h_\lambda(i, c)\, (\mathbf{x} - \mathbf{m}_i)$, where $h_\lambda$ is a neighbourhood function on the grid with width λ.
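
A minimal on-line SOM training loop following the update rule above (train_som, the Gaussian neighbourhood function and the exponential decay schedules are illustrative choices, not the slide's exact settings):

```python
import numpy as np

def train_som(X, grid_shape=(10, 10), n_epochs=30, alpha0=0.5, lam0=3.0, seed=0):
    """Winner-takes-most updates of high-dim prototypes attached to a low-dim grid."""
    rng = np.random.default_rng(seed)
    rows, cols = grid_shape
    grid = np.array([(i, j) for i in range(rows) for j in range(cols)], dtype=float)
    M = rng.standard_normal((rows * cols, X.shape[1]))        # high-dim node coordinates
    for epoch in range(n_epochs):
        alpha = alpha0 * np.exp(-epoch / n_epochs)            # decaying learning rate
        lam = lam0 * np.exp(-epoch / n_epochs)                # decaying neighbourhood width
        for x in X[rng.permutation(len(X))]:                  # present data in random order
            bmu = np.argmin(np.linalg.norm(M - x, axis=1))    # best matching node
            h = np.exp(-np.sum((grid - grid[bmu]) ** 2, axis=1) / (2 * lam ** 2))
            M += alpha * h[:, None] * (x - M)                 # pull neighbouring nodes towards x
    return grid, M
```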

Self-organizing map. Illustrations in the high-dim space (cactus dataset). [Figure: the grid unfolding over successive epochs]

Self-organizing map. Visualisations in the grid space. Salient features: nonparametric model; many metaparameters (grid topology and decay laws for α and λ); performs a vector quantization; batch (non-adaptive) versions exist; popular in visualization and exploratory data analysis; the low-dim coordinates are fixed, but the principle can be reversed (Isotop, XOM).

Auto-encoder. Kramer, 1991; DeMers & Cottrell, 1993; Hinton & Salakhutdinov, 2006. Idea: based on the TLS reconstruction error, like PCA; a cascaded codec with a bottleneck (as in an hourglass); replace the linear PCA mapping with a nonlinear one. Details: depends on the chosen function approximator (often a feed-forward ANN such as a multilayer perceptron). Implementation: apply the learning procedure to the cascaded networks and take the output value of the bottleneck layer. Salient features: parametric model (out-of-sample extension is straightforward); provides both backward and forward mappings; the cascaded networks have a deep architecture, so learning can be inefficient; solution: initialize backpropagation with restricted Boltzmann machines.
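
A small PyTorch sketch of such an hourglass auto-encoder (the input dimension D = 100, the hidden width of 32 and the 2-dimensional bottleneck are arbitrary illustrative choices; the RBM pre-training mentioned on the slide is not shown):

```python
import torch
import torch.nn as nn

D, P = 100, 2                              # input dimension and bottleneck size (assumed)

autoencoder = nn.Sequential(               # cascaded codec with a bottleneck
    nn.Linear(D, 32), nn.Tanh(),
    nn.Linear(32, P),                      # bottleneck = low-dimensional codes
    nn.Linear(P, 32), nn.Tanh(),
    nn.Linear(32, D),
)
optimizer = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()                     # reconstruction error, as in PCA

def train(X, n_epochs=200):
    """X: (N, D) float tensor of data vectors."""
    for _ in range(n_epochs):
        optimizer.zero_grad()
        loss = loss_fn(autoencoder(X), X)  # reconstruct the input from itself
        loss.backward()
        optimizer.step()

def embed(X):
    with torch.no_grad():
        return autoencoder[:3](X)          # output of the bottleneck layer = embedding
```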

Auto-encoder Original figure in Kramer, 1991. Original figure in Salakhutdinov, 2006.

Curvilinear component analysis. Demartines & Hérault, 1995. Idea: distance preservation; change Sammon's weighting scheme (use a decreasing function of the low-dim distance instead of a decreasing function of the high-dim distance). Cost function. Implementation: stochastic gradient descent (or pin-point radial update). Salient features: nonparametric mapping; the metaparameters are the decay laws of α and λ; can be used with geodesic distances (Lee & Verleysen, 2000); able to tear manifolds.

CCA can tear manifolds. [Figure: a sphere embedded by Sammon's NLM versus CCA]

Historical review of some NLDR methods (timeline from 1900 to 2009): principal component analysis; classical metric multidimensional scaling; stress-based MDS & Sammon mapping; nonmetric multidimensional scaling; self-organizing map; auto-encoder; curvilinear component analysis; spectral methods (kernel PCA, Isomap, locally linear embedding, Laplacian eigenmaps, maximum variance unfolding); similarity-based embedding (stochastic neighbor embedding, Simbed & CCA revisited).

Kernel PCA. Schölkopf, Smola & Müller, 1996. Idea: apply the kernel trick to classical metric MDS (and not to PCA!), i.e. apply MDS in an (unknown) feature space. Details.

Kernel PCA. Implementation: compute the kernel matrix K (starting from pairwise distances or inner products); perform double centering of K; run classical metric MDS on the centered K; new kernels can be built from existing kernels. Salient features: nonparametric mapping (the Nyström formula can be used); choice of the kernel? how to adjust its parameter(s)?; an important milestone in the history of spectral embedding.
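
With scikit-learn this amounts to one call; the RBF kernel and the gamma value below are illustrative choices (the slide's open question of kernel selection remains):

```python
from sklearn.decomposition import KernelPCA

# X is an (N, D) data matrix; the kernel and gamma are the metaparameters to choose
kpca = KernelPCA(n_components=2, kernel="rbf", gamma=0.1)
Y = kpca.fit_transform(X)                  # 2-D embedding
```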

Isomap. Tenenbaum, 1998, 2000. Idea: apply classical metric MDS with a smart metric; replace the Euclidean distance with the geodesic distance; data-driven approximation of the geodesic distances with shortest paths in a graph of K-ary neighbourhoods. Original figure in Tenenbaum, 2000.

Isomap. Model: classical metric MDS is optimal for linear manifolds; Isomap is optimal for Euclidean manifolds (a P-dimensional manifold is Euclidean iff it is isometric to a P-dimensional Euclidean space). Implementation: compute/collect pairwise distances; compute the graph of K-ary neighbourhoods; compute the weighted shortest paths in the graph; apply classical metric MDS on the pairwise geodesic distances. Salient features: nonparametric mapping (the Nyström formula is not directly applicable); the double-centred Gram matrix is not positive semidefinite (but fortunately not far from being so); the manifold must be convex; the parameter K is critical with noisy data ("short circuits").
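
A sketch of that pipeline (isomap is an illustrative helper name; it assumes the K-NN graph is connected, otherwise some geodesic distances come out infinite):

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path
from sklearn.neighbors import kneighbors_graph

def isomap(X, K=8, n_components=2):
    """K-ary neighbourhood graph -> shortest-path geodesics -> classical metric MDS."""
    A = kneighbors_graph(X, n_neighbors=K, mode="distance")   # weighted K-NN graph
    Dg = shortest_path(A, directed=False)                     # approximate geodesic distances
    N = len(X)
    J = np.eye(N) - np.ones((N, N)) / N
    G = -0.5 * J @ (Dg ** 2) @ J                              # double centering, as in CM MDS
    eigval, eigvec = np.linalg.eigh(G)
    order = np.argsort(eigval)[::-1][:n_components]
    return eigvec[:, order] * np.sqrt(np.maximum(eigval[order], 0.0))
```

scikit-learn also ships a ready-made sklearn.manifold.Isomap if a tested implementation is preferred.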

Locally linear embedding. Roweis & Saul, 2000. Idea: each datum can be approximated by a (regularised) linear combination of its K nearest neighbours; LLE tries to reproduce similar linear combinations in a lower-dimensional space. Original figure in Roweis, 2000.

Locally linear embedding. Details: step 1 and step 2. Implementation: approximate each datum with a regularised linear combination of its K nearest neighbours; build the sparse matrix of neighbour weights W; compute the eigenvalue decomposition of $M = (I - W)^\top (I - W)$; the bottom eigenvectors provide the embedding coordinates. Salient features: the metaparameters are K and the regularization coefficient; EVD as in MDS, but the bottom eigenvectors are used; nonparametric mapping (the Nyström formula can be used).
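
Using the scikit-learn implementation (the values of K and of the regularisation coefficient are illustrative):

```python
from sklearn.manifold import LocallyLinearEmbedding

lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2, reg=1e-3)
Y = lle.fit_transform(X)                   # bottom eigenvectors of M give the coordinates
```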

Laplacian eigenmaps. Belkin & Niyogi, 2002. Idea: embed neighbouring points close to each other, i.e. shrink the distances between neighbours in the embedding; avoid trivial solutions and indeterminacies by constraining the covariance matrix of the embedding. Details: symmetric affinity matrix; cost function; constrained optimization.

Laplacian eigenmaps. Implementation: collect distances and compute the K-ary neighbourhoods; compute the adjacency matrix and the corresponding weight matrix; compute the eigenvalue decomposition of the Laplacian matrix; the bottom eigenvectors provide the embedding coordinates. Salient features: nonparametric mapping (the Nyström formula can be used); connections with LLE (Laplacian operator applied twice), with spectral clustering and graph min-cut (the Laplacian matrix normalization is different), with diffusion maps, and with classical metric MDS using the commute time distance; the metaparameters are K and/or the soft neighbourhood kernel parameters.
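
In scikit-learn, Laplacian eigenmaps are available as SpectralEmbedding (the value of K is an illustrative choice):

```python
from sklearn.manifold import SpectralEmbedding

le = SpectralEmbedding(n_components=2, n_neighbors=10, affinity="nearest_neighbors")
Y = le.fit_transform(X)                    # bottom eigenvectors of the graph Laplacian
```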

Maximum variance unfolding (a.k.a. semidefinite embedding). Weinberger & Saul, 2004. Idea: do the opposite of Laplacian eigenmaps and try to unfold the data; stretch the distances between non-neighbouring points; classical metric MDS with missing pairwise distances; use semidefinite programming (SDP) to maintain the properties of the Gram matrix. Original figure in Weinberger, 2004.

Maximum variance unfolding Details.

Maximum variance unfolding. Implementation: collect pairwise distances and compute the K-ary neighbourhoods; deduce the corresponding constraints on pairwise distances; formulate everything with inner products and run the SDP engine; apply classical metric MDS. Variants: distances between neighbours can shrink (and distances between non-neighbours can only grow, as usual); introduction of slack variables to soften the constraints. Salient features: MVU ≈ KPCA with data-driven local kernels; MVU ≈ a smart Isomap; semidefinite programming is computationally demanding; the metaparameters are K and all the flags of the SDP engine.

t-distributed stochastic neighbor embedding. Hinton & Roweis, 2005; van der Maaten & Hinton, 2008. Idea: try to reproduce the pairwise probabilities of being neighbours (similarities). Details: probability of being a neighbour in the high-dim space: $p_{j|i} = \frac{\exp(-\|\mathbf{x}_i - \mathbf{x}_j\|^2 / 2\sigma_i^2)}{\sum_{k \neq i} \exp(-\|\mathbf{x}_i - \mathbf{x}_k\|^2 / 2\sigma_i^2)}$, where σ_i is tuned to reach a user-defined perplexity; symmetric similarity in the high-dim space: $p_{ij} = \frac{p_{j|i} + p_{i|j}}{2N}$; similarity in the low-dim space: $q_{ij} = \frac{(1 + \|\mathbf{y}_i - \mathbf{y}_j\|^2)^{-1}}{\sum_{k \neq l} (1 + \|\mathbf{y}_k - \mathbf{y}_l\|^2)^{-1}}$ (a Student t with one degree of freedom).

t-distributed stochastic neighbor embedding. Details (cont'd). Cost function: the Kullback-Leibler divergence $C = \sum_{i \neq j} p_{ij} \log \frac{p_{ij}}{q_{ij}}$. Implementation: gradient descent of the cost function. Salient features: can get stuck in local minima; SNE amounts to t-SNE with a Gaussian instead of a Student t in the low-dim space; t-SNE is much more efficient than SNE, especially for clustered data; similarity-based embedding (similarity preservation); can also be related to distance preservation (with a specific weighting scheme and a distance transformation).
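
A typical scikit-learn call (the perplexity and the initialisation are the metaparameters to tune; the values below are illustrative):

```python
from sklearn.manifold import TSNE

tsne = TSNE(n_components=2, perplexity=30.0, init="pca", random_state=0)
Y = tsne.fit_transform(X)                  # gradient descent on the KL divergence
```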

Simbed (similarity-based embedding). Lee & Verleysen, 2009. Idea: a probabilistic definition of the pairwise similarities that takes into account the properties of high-dim spaces. Details: distances in a multivariate Gaussian are chi-distributed; a similarity function in the high-dim space and a similarity function in the low-dim space, where Q is the regularized upper incomplete Gamma function.

Simbed (similarity-based embedding). Details (cont'd). Cost function: the SSE of the high-dim and low-dim similarities. Implementation: stochastic gradient descent (CCA pin-point gradient). Salient features: close relationship with CCA (CCA is nearly equivalent to Simbed for a specific definition of the similarities); close relationship with t-SNE (similar gradients).

Theoretical method comparisons. Purpose: visualization / data preprocessing; hard / soft dimensionality reduction. Model characteristics: backward / forward (generative); linear / nonlinear; parametric / nonparametric (data embedding); with / without vector quantization. Algorithmic criteria: spectral / non-spectral (soft computing, ANN, etc.); among spectral methods: dense / sparse matrix. Several unifying paradigms or frameworks: distance preservation, force-directed placement, rank preservation.

Spectral methods: duality Two types of spectral NLDR methods Dense matrix of dissimilarities (e.g. distances) Top eigenvectors (CM MDS, Isomap, MVU) Sparse matrix of similarities (or affinities ) Bottom eigenvectors (except last one) (LLE, LE, diff.maps, spectral clustering) Duality Pseudo-inverse of sparse matrix Yields a dense matrix Inverts and therefore flips the eigenvalue spectrum (bottom eigenvectors become leading ones and vice versa) Corollary All spectral methods (both sparse and dense) can be reformulated as applying CM MDS on a dense matrix Example: Laplacian eigenmaps = CM MDS with commute time distances (CTDs are related to the pseudo-inverse of the Laplacian matrix)

Spectral versus non-spectral NLDR. Spectral NLDR: the cost function is convex; convex optimization (spectral decomposition); global optimum; incremental embeddings; the intrinsic dimensionality can be estimated; the cost function must fit within the spectral framework, which often amounts to applying (1) an a priori nonlinear distance transformation and (2) classical metric MDS (the dense/sparse duality!); the eigenspectrum tail of sparse methods tends to be flat, leading to spiky embeddings. Non-spectral NLDR: the cost function is not convex; ad hoc optimization (e.g. gradient descent); local optima; independent embeddings; no simple way to estimate the intrinsic dimensionality; more freedom is granted in the choice of the cost function, which is often fully data-driven.

Spiky spectral embeddings: examples. [Figure: spiky embeddings obtained with LLE and MVU]

Taxonomy (linear and spectral). [Diagram organised along linear vs. nonlinear and classification / clustering / manifold / clusters axes, placing LDA, FA/PP, ICA/BSS, SVM, PCA, spectral clustering, CM MDS, LE, diffusion maps, KPCA, Isomap, MVU, LLE, non-spectral NLDR, and latent variable separation.]

Taxonomy (DR only). [Diagram organised by preserved quantity (inner products, distances, reconstruction error, similarities), placing spectral NLDR, stress-based MDS, CM MDS, PCA, CCA, nonlinear auto-encoder, t-SNE, SOM, principal curves, and Simbed.]

Quality Assessment: Intuition. [Figure: a 3D data set and two 2D embeddings, one bad and one good]

Quality Assessment: Quantification. We have an NLDR method to assess. Some ideas: use its objective function; quantify the distance preservation; quantify the topology preservation. Topology in practice: K-ary neighbourhoods and neighbourhood ranks. Literature: 1962, Shepard: Shepard diagram (a.k.a. dy-dx); 1992, Bauer & Pawelzik: topographic product; 1997, Villmann et al.: topographic function; 2001, Venna & Kaski: trustworthiness & continuity (T&C); 2006, Chen & Buja: local continuity meta-criterion (LCMC); 2007, Lee & Verleysen: mean relative rank errors (MRREs).

Distances, Ranks, and Neighbourhoods. Distances: δ_ij in the high-dim space and d_ij in the low-dim space. Ranks: ρ_ij is the rank of the j-th point among the neighbours of the i-th point in the high-dim space, and r_ij the corresponding rank in the low-dim space. Neighbourhoods: the K-ary neighbourhood of a point gathers its K nearest neighbours. Co-ranking matrix: $q_{kl} = |\{(i,j) : \rho_{ij} = k \text{ and } r_{ij} = l\}|$ (Q is a sum of N permutation matrices of size N-1).
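
A NumPy sketch of the co-ranking matrix (coranking_matrix is an illustrative name; it assumes no duplicate points, so that ties in the ranks do not occur):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def coranking_matrix(X, Y):
    """Q[k-1, l-1] counts the pairs (i, j) with high-dim rank k and low-dim rank l."""
    def ranks(Z):
        D = squareform(pdist(Z))
        return D.argsort(axis=1).argsort(axis=1)   # rank of j among the neighbours of i
    R_hd, R_ld = ranks(X), ranks(Y)
    N = len(X)
    Q = np.zeros((N - 1, N - 1), dtype=int)
    mask = ~np.eye(N, dtype=bool)                  # drop the trivial i == j pairs
    np.add.at(Q, (R_hd[mask] - 1, R_ld[mask] - 1), 1)
    return Q
```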

Co-ranking Matrix: Blocks. [Figure: block decomposition of the co-ranking matrix with axes ρ_ij and r_ij and cut-off K, distinguishing mild and hard K-intrusions, mild and hard K-extrusions, and positive versus negative rank errors (ρ_ij − r_ij = 0 on the diagonal).]

Trustworthiness & Continuity. Formulas: weighted sums of rank errors over the points that erroneously enter or quit K-ary neighbourhoods. Properties: they distinguish between points that erroneously enter a neighbourhood (trustworthiness, roughly 1 minus a false-positive rate) and points that erroneously quit a neighbourhood (continuity, roughly 1 minus a false-negative rate); functions of K (higher is better); range [0,1] (typically [0.7,1]); the elements q_kl are weighted.
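
Trustworthiness is available directly in scikit-learn (the neighbourhood size below is an illustrative choice); as a hedged note, the continuity counterpart can be obtained by swapping the roles of the two spaces:

```python
from sklearn.manifold import trustworthiness

K = 12                                          # neighbourhood size (illustrative)
T = trustworthiness(X, Y, n_neighbors=K)        # penalises points that wrongly enter a neighbourhood
C = trustworthiness(Y, X, n_neighbors=K)        # swapping the spaces penalises points that quit one
```

Both values lie in [0, 1], higher being better, as on the slide.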

Mean Relative Rank Errors. Formulas: averages of the relative rank errors |ρ_ij − r_ij| (suitably normalised) over K-ary neighbourhoods, one per error type. Properties: two error types (same idea as in T&C); functions of K (lower is better); range [0,1] (typically [0,0.3]); stricter than T&C: all rank errors are counted; different weighting of q_kl.

Local Continuity Meta-Criterion. Formula: the average fraction of points shared by the K-ary high-dim and low-dim neighbourhoods, minus a baseline term. Properties: a single measure; function of K (higher is better); range [0,1]; a priori milder than T&C and MRREs; presence of a baseline term (the random neighbourhood overlap); no weighting of q_kl.
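
Building on the coranking_matrix sketch above, the LCMC reduces to a sum over the upper-left K x K block of Q minus the random-overlap baseline (q_nx and lcmc are illustrative names):

```python
import numpy as np

def q_nx(Q, K):
    """Average fraction of preserved K-ary neighbours (upper-left K x K block of Q)."""
    N = Q.shape[0] + 1
    return Q[:K, :K].sum() / (K * N)

def lcmc(Q, K):
    """Local continuity meta-criterion: Q_NX(K) minus the baseline K / (N - 1)."""
    N = Q.shape[0] + 1
    return q_nx(Q, K) - K / (N - 1)
```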

Unifying Framework. [Diagram: MRREs, LCMC and T&C expressed in terms of blocks of the co-ranking matrix.] Only the upper-left block is important!

Unifying Framework. Connection with LCMC: $Q_{NX}(K) = \mathrm{LCMC}(K) + \frac{K}{N-1}$, where $Q_{NX}(K)$ is the fraction of the co-ranking matrix mass lying in the upper-left $K \times K$ block.

Q&B: Quality and behavior. Up to now, we have distinguished three fractions: mild extrusions, mild intrusions, and correct ranks. All three are added up in the LCMC sum, which indicates the overall quality. What about the difference between the proportions of mild intrusions and extrusions? Definition of the quality & behavior criterion: similar reformulations exist for T&C and MRREs; it is positive for intrusive embeddings and negative for extrusive embeddings.

Why are weightless criteria sufficient? Any hard in/extrusion is compensated for by several mild ex/intrusions (and vice versa). [Figure: block decomposition of the co-ranking matrix, as before.] The fractions of mild in/extrusions reveal the severity of the hard ex/intrusions, so no arbitrary weighting is needed.

Illustration: B. Frey's face database.

Conclusions about QA. Rank preservation is useful in NLDR QA: more powerful than distance preservation; reflects the appealing idea of topology preservation. Unifying framework: connects existing criteria (LCMC, MRREs, T&C); relies on the co-ranking matrix (a Shepard diagram with ranks instead of distances); accounts for different types of rank errors: a global error (like LCMC) and type I and II errors (like T&C and MRREs). Our proposal: an overall quality criterion plus an embedding-specific behavior criterion; involves no (arbitrary) weighting; focuses on the inside of K-ary neighbourhoods (mild intrusions and extrusions).

Final thoughts & perspectives. In practice, you need: appropriate data preprocessing steps; an estimator of the intrinsic dimensionality; an NLDR method; method-independent quality criteria. Main take-home messages: carefully adjust your model complexity; beware of (hidden) metaparameters; convex methods are not a panacea; extrusive methods work better than intrusive ones; always try PCA first. Future: interest in spectral methods seems to be diminishing; will auto-encoders emerge again thanks to deep learning?; similarity-based NLDR is a hot topic; tighter connections are expected with the domains of data mining and visualization, and with graph embedding.

Thanks for your attention. If you have any questions: John.Lee@uclouvain.be. Nonlinear Dimensionality Reduction, Springer, Series: Information Science and Statistics, John A. Lee and Michel Verleysen, 2007, 300 pp., ISBN 978-0-387-39350-6.