Exploratory Data Analysis Using Radial Basis Function Latent Variable Models
Alan D. Marrs and Andrew R. Webb
DERA, St Andrews Road, Malvern, Worcestershire, U.K. WR14
Crown Copyright 1998

Abstract

Two developments of nonlinear latent variable models based on radial basis functions are discussed: in the first, the use of priors or constraints on allowable models is considered as a means of preserving data structure in low-dimensional representations for visualisation purposes; in the second, a resampling approach is introduced which makes more effective use of the latent samples in evaluating the likelihood.

1 INTRODUCTION

Radial basis functions (RBFs) have been used extensively for problems in discrimination and regression. Here we consider their application to obtaining low-dimensional representations of high-dimensional data as part of the exploratory data analysis process.

There has been a great deal of research over the years into linear and nonlinear techniques for dimensionality reduction. The technique most commonly used is principal components analysis (PCA), and there have been several nonlinear generalisations, each taking a particular definition of PCA and generalising it to the nonlinear situation. One approach is to find surfaces of closest fit (generalising the definition of PCA due to Pearson (1901) for finding lines and planes of closest fit). This has been explored by Hastie and Stuetzle (1989), Tibshirani (1992) (and further by LeBlanc and Tibshirani, 1994) and by various authors using a neural network approach (for example, Kramer, 1991). Another approach is one of variance maximisation subject to constraints on the transformation (Hotelling, 1933). This has been investigated by Webb (1996), using a transformation modelled as an RBF network, and in a supervised context in Webb (1998). An alternative strategy, also using RBFs and based on metric multidimensional scaling, is described by Webb (1995) and Lowe and Tipping (1996). Here an optimisation criterion, termed stress, is defined in the transformed space, and the weights in an RBF model are determined by minimising the stress.

The above methods use a radial basis function to model a transformation from the high-dimensional data space to a low-dimensional representation space. A complementary approach is provided by Bishop et al (1998), in which the structure of the data is modelled as a function of hidden or latent variables. Termed the generative topographic mapping (GTM), the model may be regarded as a nonlinear generalisation of factor analysis in which the mapping from latent space to data space is characterised by an RBF. Such generative models are relevant to a wide range of applications, including radar target modelling, speech recognition and handwritten character recognition. However, one of the problems with GTM that limits its practical use for visualising data on manifolds in high-dimensional space arises from distortions in the structure that it imposes. This is acknowledged in Bishop et al (1997), where 'magnification factors' are introduced to correct for the GTM's deficiency as a means of data visualisation.

This paper considers two developments: constraints on the permissible models, and resampling of the latent space. Section 2 presents the background to latent variable models; model constraints are discussed in Section 3; Section 4 describes a resampling approach to estimation of the posterior pdf of the latent samples; an illustration is provided in Section 5.

2 BACKGROUND

Briefly, we re-state the basic GTM model, retaining the notation of Bishop et al (1998). Let {t_i, i = 1, ..., N}, t_i ∈ R^D, represent measurements on the data space variables, and let z ∈ R^L represent the latent variables. Let t be normally distributed with mean y(z; W) and covariance matrix β^{-1} I, where y(z; W) is a nonlinear transformation that depends on a set of parameters W. Specifically, we assume a basis function model

    y(z; W) = \sum_{i=1}^{M} w_i \phi_i(z)

where the vectors w_i ∈ R^D are to be determined through optimisation and {φ_i, i = 1, ..., M} is a set of basis functions defined on the latent space. The data distribution may be written

    p(t | W, \beta) = \int p(t | z; W, \beta) \, p(z) \, dz                      (1)

where, under the assumptions of normality,

    p(t | z; W, \beta) = (\beta / 2\pi)^{D/2} \exp\{ -(\beta/2) \, \| y(z; W) - t \|^2 \}

Approximating the integral by a finite sum (assuming the functions p(z) and y(z) do not vary too greatly compared with the sample spacing), we have

    p(t | W, \beta) = \sum_{i=1}^{K} P_i \, p(t | z_i; W, \beta)                 (2)

which may be regarded as a function of the parameters W and β that characterise y.
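The finite-sum density (2) is straightforward to evaluate numerically. The following sketch is our own NumPy illustration, not code from the original work; the array names and shapes are assumptions. It computes the log-likelihood of a data set under the model:

```python
import numpy as np
from scipy.special import logsumexp

def log_likelihood(T, Phi, W, beta, P):
    """Log-likelihood of data T under the finite-sum model (2).

    T    : (N, D) array of data points t_n
    Phi  : (K, M) array, Phi[i] = phi(z_i) for the latent samples z_i
    W    : (D, M) weight matrix, so y(z_i; W) = W @ Phi[i]
    beta : inverse noise variance
    P    : (K,) weights P_i on the latent samples (uniform in standard GTM)
    """
    N, D = T.shape
    Y = Phi @ W.T                                        # (K, D): y(z_i; W)
    sq = ((T[:, None, :] - Y[None, :, :]) ** 2).sum(-1)  # (N, K): ||t_n - y_i||^2
    log_norm = 0.5 * D * np.log(beta / (2.0 * np.pi))    # Gaussian normaliser
    log_comp = log_norm - 0.5 * beta * sq                # log p(t_n | z_i; W, beta)
    # log p(t_n | W, beta) = log sum_i P_i p(t_n | z_i; W, beta), summed over n
    return logsumexp(log_comp + np.log(P), axis=1).sum()
```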
Given the data set {t_i, i = 1, ..., N}, the log-likelihood is given by

    L(W, \beta) = \sum_{n=1}^{N} \ln[ p(t_n | W, \beta) ]

which may be maximised using a standard EM approach (Bishop et al, 1998). In this case we have

    P_j^{(m+1)} = \frac{1}{N} \sum_{n=1}^{N} R_{jn}                              (3)

as the re-estimate of the mixture component weights P_j at the (m+1)th step, where

    R_{jn} = \frac{ P_j^{(m)} \, p(t_n | z_j; W^{(m)}, \beta^{(m)}) }{ \sum_i P_i^{(m)} \, p(t_n | z_i; W^{(m)}, \beta^{(m)}) }     (4)

and (·)^{(m)} denotes values at the mth step. Note that Bishop et al (1998) do not re-estimate P_j; all values are taken to be equal. The number of P_j terms to be re-estimated is K, the number of terms used to approximate the integral (1). We might expect that the density is smoothly varying and governed by far fewer parameters (not dependent on K).

The re-estimation equation for the D × M matrix W = [w_1 | ... | w_M] is

    W^{(m+1)} = T^T R^T \Phi \, [\Phi^T G \Phi]^{-1}                             (5)

where G is the K × K diagonal matrix with

    G_{jj} = \sum_{n=1}^{N} R_{jn}

and T^T = [t_1 | ... | t_N], \Phi^T = [\phi(z_1) | ... | \phi(z_K)]. The term β is re-estimated as

    1/\beta^{(m+1)} = \frac{1}{ND} \sum_{n=1}^{N} \sum_{j=1}^{K} R_{jn} \, \| t_n - W^{(m+1)} \phi(z_j) \|^2

Once we have determined the parameters of the transformation, we may invert the model by asking for the distribution of z given a measurement t_i. That is, we require

    p(z | t_i) = \frac{ p(t_i | z) \, p(z) }{ \int p(t_i | z) \, p(z) \, dz }

For example, we may plot the position of the peak of the distribution p(z | t_i) for each data sample t_i.
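In practice the E- and M-steps above reduce to a few matrix operations. The following sketch is our own illustration under the definitions above, not code from the paper; solving (5) with a plain linear solve (no ridge term) is an assumption:

```python
import numpy as np
from scipy.special import logsumexp

def em_step(T, Phi, W, beta, P):
    """One EM iteration: responsibilities (4), weights (3), W (5), beta update."""
    N, D = T.shape
    Y = Phi @ W.T                                        # (K, D)
    sq = ((T[:, None, :] - Y[None, :, :]) ** 2).sum(-1)  # (N, K)
    log_r = np.log(P) + 0.5 * D * np.log(beta / (2.0 * np.pi)) - 0.5 * beta * sq
    log_r -= logsumexp(log_r, axis=1, keepdims=True)     # normalise over i, eq. (4)
    R = np.exp(log_r).T                                  # (K, N): R[j, n] = R_jn
    P_new = R.sum(axis=1) / N                            # eq. (3)
    G = np.diag(R.sum(axis=1))                           # G_jj = sum_n R_jn
    # Eq. (5): W = T^T R^T Phi [Phi^T G Phi]^{-1}, computed via a linear solve
    W_new = np.linalg.solve(Phi.T @ G @ Phi, Phi.T @ R @ T).T    # (D, M)
    resid = ((T[:, None, :] - (Phi @ W_new.T)[None, :, :]) ** 2).sum(-1)  # (N, K)
    beta_new = N * D / (R.T * resid).sum()               # inverts the 1/beta update
    return W_new, beta_new, P_new
```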
3 APPLYING A CONSTRAINT

One way to retain structure is to impose a condition which ensures that a unit step in the latent space corresponds (more or less) to a unit step in the data space. For a single latent variable, x_1, we may impose the constraint that the derivative of the data space variable with respect to the latent variable has unit magnitude, which may be written in terms of W as

    | \partial y / \partial x_1 |^2 = \psi_1^T W^T W \psi_1 = 1                  (6)

where \psi_1 = \partial\phi / \partial x_1. The derivative is of course a function of x_1, and imposing such a condition at each sample point in latent space would not be possible owing to the smoothness of the RBF model. However, we may average over the latent space,

    \langle \psi_1^T W^T W \psi_1 \rangle = 1

where ⟨·⟩ denotes the average over the latent space. In general, for L latent variables we may impose the constraint

    \langle J^T W^T W J \rangle = I_L

leading to the penalty term Tr{ \Lambda (\langle J^T W^T W J \rangle - I_L) }, where J is an M × L matrix with jth column \partial\phi / \partial x_j and Λ is a symmetric L × L matrix of Lagrange multipliers. This is very similar to a regularisation term: it is a condition on the norm of W that incorporates the Jacobian matrix J. The re-estimation solution for W may be written as a modified form of (5), with Λ chosen so that the constraint ⟨J^T W^T W J⟩ = I_L is satisfied.

We may also use the derivatives of the transformation to define a distortion measure or magnification factor,

    M(z; W) = \| J^T W^T W J - I_L \|

which is a function of the latent variables and the model parameters. A value of zero shows that there is no distortion.¹

¹ Note that this differs from the measure in the paper by Bishop et al, where a ratio-of-areas criterion is used: a factor which is unity for zero distortion, but may also be unity for some distortions.

An alternative to the constraint approach above is to introduce a prior on the allowable transformations using the magnification factor; for example,

    p(W) \propto \exp( -\lambda M(z; W) )                                        (8)

where λ is a regularisation parameter. This leads to a modification of the M-step re-estimation equation for W, providing a maximum a posteriori estimate. Equation (8) provides a natural generalisation of PCA, since for the special case of a linear transformation (\phi_i = x_i, M = L) the solution for W is the PCA space as λ → ∞.
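The distortion measure is easy to evaluate for a given basis. The sketch below is our own illustration; the spherically-symmetric Gaussian basis (as used in Section 5), the use of the Frobenius norm, and the function names are assumptions:

```python
import numpy as np

def magnification(z, W, centres, sigma):
    """M(z; W) = ||J^T W^T W J - I_L||_F for Gaussian basis functions
    phi_i(z) = exp(-||z - c_i||^2 / (2 sigma^2)).

    z       : (L,) latent point
    W       : (D, M) weight matrix
    centres : (M, L) basis function centres c_i
    sigma   : common basis function width
    """
    diff = z - centres                                   # (M, L): rows z - c_i
    phi = np.exp(-0.5 * (diff ** 2).sum(axis=1) / sigma ** 2)   # (M,)
    # J has j-th column dphi/dx_j; here dphi_i/dz = -phi_i (z - c_i) / sigma^2
    J = -(phi[:, None] * diff) / sigma ** 2              # (M, L)
    A = J.T @ W.T @ W @ J                                # (L, L)
    return np.linalg.norm(A - np.eye(z.shape[0]))        # zero iff no distortion
```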
4 RESAMPLING THE LATENT SPACE

Having obtained a mapping from latent space to data space using the above constraint, we seek a better estimate of the posterior pdf of the latent samples. Current versions of GTM require the latent samples to be uniformly distributed in the latent space, which leads to distortions when the data of interest are projected into the latent space for visualisation. Since the responsibility matrix R can be used to determine a weight for each of the latent samples, it is possible to update these samples using a resampling scheme.

We propose a resampling scheme based upon adaptive kernel density estimation. The basic procedure places a Gaussian kernel on each latent sample, resulting in a Gaussian mixture representation of the pdf of the latent samples,

    p(x | t) = \sum_{i=1}^{K} P_i \, N(\mu_i, \Sigma_i)                          (9)

where each mixture component is weighted according to the latent sample weight P_i. Initially the Σ_i are all equal, taking their value from the standard formula of Silverman (1986),

    \Sigma_i = h^2 V                                                             (10)

where the matrix V is an estimate of the covariance of p(x) and h is the window width given by Silverman's standard formula. If the kernels are centred exactly on the latent samples, this model artificially inflates the variance of the latent samples. Following West (1993), we perform kernel shrinkage by making the μ_i take the values

    \mu_i = a z_i + (1 - a) \bar{z}                                              (11)

    a = (1 - h^2)^{1/2}                                                          (12)

where \bar{z} is the mean of the latent samples. This ensures that there is no artificial inflation of the variance.

To reduce the redundancy in our initially large number of mixture components, we propose a kernel reduction scheme in a similar manner to West. However, the scheme used here differs from that of West and follows a scheme proposed by Salmond (1990). Essentially, we choose the component with the smallest weight and its nearest neighbour, denoting these with subscripts 1 and 2 respectively. These components are then combined into a single component, denoted with subscript c, as follows (a code sketch of the procedure is given at the end of this section):

    P_c = P_1 + P_2                                                              (13)

    \mu_c = ( P_1 \mu_1 + P_2 \mu_2 ) / P_c                                      (14)

    \Sigma_c = \{ P_1 [\Sigma_1 + (\mu_c - \mu_1)(\mu_c - \mu_1)^T] + P_2 [\Sigma_2 + (\mu_c - \mu_2)(\mu_c - \mu_2)^T] \} / P_c    (15)

This procedure is repeated until some stopping criterion is met. The stopping criterion could be a simple limit upon the number of mixture components, i.e. smaller than K but sufficiently large to model the data structure. Alternatively, the average kernel covariance and the between-kernel covariance can be monitored, and the reduction stopped before some multiple (e.g. 10) of the average kernel covariance exceeds the between-kernel covariance.

Once a final mixture density estimate is obtained, a new set of equally weighted latent samples can be drawn from it. The new latent samples represent a better estimate of the posterior pdf of the latent samples and can be used, along with the existing RBF mapping, to calculate a new responsibility matrix R. This procedure can be repeated; after only a couple of iterations it yields good estimates of the posterior pdf which further iterations fail to improve upon.
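The following compact sketch is our own illustration of the shrink-and-merge procedure in equations (9)-(15); the shrinkage form in (11)-(12) is as reconstructed above, and names such as reduce_mixture are assumptions:

```python
import numpy as np

def shrink_kernels(Z, P, h):
    """Kernel shrinkage, eqs. (11)-(12) as reconstructed: mu_i = a z_i + (1 - a) z_bar
    with a = sqrt(1 - h^2), so the mixture variance is not artificially inflated."""
    z_bar = P @ Z                            # weighted mean of the latent samples
    a = np.sqrt(1.0 - h ** 2)
    return a * Z + (1.0 - a) * z_bar         # (K, L) shrunken kernel centres mu_i

def merge_pair(P, Mu, Sigma, j, k):
    """Salmond-style moment-preserving merge of components j and k, eqs. (13)-(15)."""
    pc = P[j] + P[k]
    mc = (P[j] * Mu[j] + P[k] * Mu[k]) / pc
    d1, d2 = (mc - Mu[j])[:, None], (mc - Mu[k])[:, None]
    Sc = (P[j] * (Sigma[j] + d1 @ d1.T) + P[k] * (Sigma[k] + d2 @ d2.T)) / pc
    return pc, mc, Sc

def reduce_mixture(P, Mu, Sigma, n_target):
    """Repeatedly merge the smallest-weight component with its nearest
    neighbour until only n_target components remain."""
    P, Mu, Sigma = list(P), list(Mu), list(Sigma)
    while len(P) > n_target:
        j = int(np.argmin(P))                # smallest-weight component
        dists = [np.inf if i == j else np.linalg.norm(Mu[i] - Mu[j])
                 for i in range(len(P))]
        k = int(np.argmin(dists))            # its nearest neighbour
        pc, mc, Sc = merge_pair(P, Mu, Sigma, j, k)
        for idx in sorted((j, k), reverse=True):
            del P[idx], Mu[idx], Sigma[idx]
        P.append(pc); Mu.append(mc); Sigma.append(Sc)
    return np.array(P), np.array(Mu), np.array(Sigma)
```

New equally weighted latent samples can then be drawn from the reduced mixture by sampling component indices according to the weights P and drawing from the corresponding Gaussians.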
5 RESULTS

A latent variable model based on a spherically-symmetric Gaussian RBF has been implemented. The weights and the centres of the RBF were initialised so that the solution best approximated the zero-distortion principal components solution for a two-dimensional projection.

For our example we chose to construct a simulated data set with easily identifiable structure. Four hundred points lying on the letters "NIPS" were sampled and projected onto a sphere of radius 50, such that the points lay between 25° and 75° in longitude and latitude, with Gaussian noise of variance 4.0 on the radius of each point. The resulting data are shown in Figure 1.

Figure 1: Simulated ('toy') data set.

Figure 2 shows results for the standard GTM (uniform grid of latent samples) projection of the data to two dimensions. The central figure shows the projection onto the latent space, exhibiting significant distortion. The left figure shows the projection of the regular grid of latent samples (red points) into the data space; distortion of this grid can easily be seen. The right figure is a plot of the magnification factor as defined in Section 3; for this data set most stretching occurs at the edges of the latent variable space.

Figure 2: Results for the standard GTM model.

Figure 3 shows results for the regularised/resampled version of the latent variable model for λ = 1.0. Again, the central figure shows the projection onto the latent space, after 2 iterations of the resampling procedure. The left-hand figure shows the projection of the initial regular grid of latent samples into the data space; the effect of regularisation is evident in the lack of severe distortions. Finally, the magnification factors, shown in the right-hand figure, are lower on average than for the standard model.

Figure 3: Results for the regularised/resampled model.
6 DISCUSSION

We have considered two developments of the GTM latent variable model: the incorporation of priors on the allowable model, and a resampling approach to maximum likelihood parameter estimation. Results have been presented for this regularised/resampled approach, and magnification factors lower than those of the standard model were achieved using the same RBF model. Further reduction in the magnification factor is possible with different RBF models, but the example illustrates that resampling offers a more robust approach. Current work is aimed at assessing the approach on realistic data sets.

References

Bishop, C.M., Svensén, M. and Williams, C.K.I. (1997). Magnification factors for the GTM algorithm. IEE International Conference on Artificial Neural Networks.

Bishop, C.M., Svensén, M. and Williams, C.K.I. (1998). GTM: the generative topographic mapping. Neural Computation, 10.

Hastie, T. and Stuetzle, W. (1989). Principal curves. Journal of the American Statistical Association, 84.

Hotelling, H. (1933). Analysis of a complex of statistical variables into principal components. Journal of Educational Psychology, 24.

Kramer, M.A. (1991). Nonlinear principal component analysis using autoassociative neural networks. American Institute of Chemical Engineers Journal, 37(2).

LeBlanc, M. and Tibshirani, R. (1994). Adaptive principal surfaces. Journal of the American Statistical Association, 89(425).

Lowe, D. and Tipping, M. (1996). Feed-forward neural networks and topographic mappings for exploratory data analysis. Neural Computing and Applications, 4.

Pearson, K. (1901). On lines and planes of closest fit. Philosophical Magazine, 6.

Salmond, D.J. (1990). Mixture reduction algorithms for target tracking in clutter. Signal and Data Processing of Small Targets, edited by O. Drummond, SPIE.

Silverman, B.W. (1986). Density Estimation for Statistics and Data Analysis. Chapman & Hall.

Tibshirani, R. (1992). Principal curves revisited. Statistics and Computing, 2(4).

Webb, A.R. (1995). Multidimensional scaling by iterative majorisation using radial basis functions. Pattern Recognition, 28(5).

Webb, A.R. (1996). An approach to nonlinear principal components analysis using radially-symmetric kernel functions. Statistics and Computing, 6.

Webb, A.R. (1997). Radial basis functions for exploratory data analysis: an iterative majorisation approach for Minkowski distances based on multidimensional scaling. Journal of Classification, 14(2).

Webb, A.R. (1998). Supervised nonlinear principal components analysis. Submitted for publication.

West, M. (1993). Approximating posterior distributions by mixtures. Journal of the Royal Statistical Society B, 55(2).