Introduction to the R package TDA
Brittany T. Fasy, Jisu Kim, Fabrizio Lecci, Clément Maria, and Vincent Rouvreau
In collaboration with the CMU TopStat Group

Abstract

We present a short tutorial and introduction to using the R package TDA, which provides some tools for Topological Data Analysis. In particular, it includes implementations of functions that, given some data, provide topological information about the underlying space, such as the distance function, the distance to a measure, the k-nearest-neighbor density estimator, the kernel density estimator, and the kernel distance. The salient topological features of the sublevel sets (or superlevel sets) of these functions can be quantified with persistent homology. We provide an R interface for the efficient algorithms of the C++ libraries GUDHI, Dionysus, and PHAT, including a function for the persistent homology of the Rips filtration, and one for the persistent homology of sublevel sets (or superlevel sets) of arbitrary functions evaluated over a grid of points. The significance of the features in the resulting persistence diagrams can be analyzed with functions that implement the methods discussed in Fasy, Lecci, Rinaldo, Wasserman, Balakrishnan, and Singh (2014), Chazal, Fasy, Lecci, Rinaldo, and Wasserman (2014c), and Chazal, Fasy, Lecci, Michel, Rinaldo, and Wasserman (2014a). The R package TDA also includes the implementation of an algorithm for density clustering, which allows us to identify the spatial organization of the probability mass associated with a density function and to visualize it by means of a dendrogram, the cluster tree.

Keywords: Topological Data Analysis, Persistent Homology, Density Clustering.

1. Introduction

Topological Data Analysis (TDA) refers to a collection of methods for finding topological structure in data (Carlsson 2009). Recent advances in computational topology have made it possible to actually compute topological invariants from data.
The input of these procedures typically takes the form of a point cloud, regarded as possibly noisy observations from an unknown lower-dimensional set S whose interesting topological features were lost during sampling. The output is a collection of data summaries that are used to estimate the topological features of S. One approach to TDA is persistent homology (Edelsbrunner and Harer 2010), a method for studying homology at multiple scales simultaneously. More precisely, it provides a framework and efficient algorithms to quantify the evolution of the topology of a family of nested topological spaces. Given a real-valued function f, such as the ones described in Section 2, persistent homology describes how the topology of the sublevel sets {x : f(x) ≤ t} (or superlevel sets {x : f(x) ≥ t}) changes as t increases from −∞ to ∞ (or decreases from ∞ to −∞). This information is encoded in the persistence diagram, a multiset of points in the plane, each corresponding to the birth and death of a homological feature that existed for
some interval of t.

This paper is devoted to the presentation of the R package TDA, which provides a user-friendly interface for the efficient algorithms of the C++ libraries GUDHI (Maria 2014), Dionysus (Morozov 2007), and PHAT (Bauer, Kerber, and Reininghaus 2012). In Section 2, we describe how to compute some widely studied functions that, starting from a point cloud, provide some topological information about the underlying space: the distance function (distFct), the distance to a measure function (dtm), the k-nearest-neighbor density estimator (knnDE), the kernel density estimator (kde), and the kernel distance (kernelDist). Section 3 is devoted to the computation of persistence diagrams: the function gridDiag can be used to compute the persistent homology of sublevel sets (or superlevel sets) of functions evaluated over a grid of points; the function ripsDiag returns the persistence diagram of the Rips filtration built on top of a point cloud.

One of the key challenges in persistent homology is to find a way to isolate the points of the persistence diagram that represent topological noise. Statistical methods for persistent homology provide an alternative to its exact computation: knowing with high confidence that an approximate persistence diagram is close to the true, computationally infeasible diagram is often enough for practical purposes. Fasy et al. (2014), Chazal et al. (2014c), and Chazal et al. (2014a) propose several statistical methods to construct confidence sets for persistence diagrams and other summary functions that allow us to separate topological signal from topological noise. These methods are implemented in the TDA package and described in Section 3.

Finally, the TDA package provides the implementation of an algorithm for density clustering.
This method allows us to identify and visualize the spatial organization of the data, without specific knowledge about the data-generating mechanism and, in particular, without any a priori information about the number of clusters. In Section 4, we describe the function clusterTree, which, given a density estimator, encodes the hierarchy of the connected components of its superlevel sets into a dendrogram, the cluster tree (Kpotufe and von Luxburg 2011; Kent 2013).

2. Distance Functions and Density Estimators

As a first toy example of using the TDA package, we show how to compute distance functions and density estimators over a grid of points. The setting is the typical one in TDA: a set of points X = {x_1, ..., x_n} ⊂ R^d has been sampled from some distribution P, and we are interested in recovering the topological features of the underlying space by studying some functions of the data. The following code generates a sample of 400 points from the unit circle and constructs a grid of points over which we will evaluate the functions.

> library("TDA")
> X <- circleUnif(400)
> Xlim <- c(-1.6, 1.6); Ylim <- c(-1.7, 1.7); by <- 0.065
> Xseq <- seq(Xlim[1], Xlim[2], by = by)
> Yseq <- seq(Ylim[1], Ylim[2], by = by)
> Grid <- expand.grid(Xseq, Yseq)

The TDA package provides implementations of the following functions:
The distance function is defined for each y ∈ R^d as Δ(y) = inf_{x ∈ X} ‖x − y‖_2 and is computed for each point of the Grid with the following code:

> Distance <- distFct(X = X, Grid = Grid)

Given a probability measure P, the distance to measure (DTM) is defined for each y ∈ R^d as

d_{m0}(y) = ( (1/m0) ∫_0^{m0} (G_y^{-1}(u))^r du )^{1/r},

where G_y(t) = P(‖X − y‖ ≤ t), and m0 ∈ (0, 1) and r ∈ [1, ∞) are tuning parameters. As m0 increases, the DTM function becomes smoother, so m0 can be understood as a smoothing parameter. The parameter r has a milder effect, but it also changes the DTM function. The default value of r is 2. The DTM can be seen as a smoothed version of the distance function. See (Chazal, Cohen-Steiner, and Mérigot 2011, Definition 3.2) and (Chazal, Massart, and Michel 2015, Equation (2)) for a formal definition of the distance to measure function.

Given X = {x_1, ..., x_n}, the empirical version of the DTM is

d̂_{m0}(y) = ( (1/k) Σ_{x_i ∈ N_k(y)} ‖x_i − y‖^r )^{1/r},

where k = ⌈m0 · n⌉ and N_k(y) is the set containing the k nearest neighbors of y among x_1, ..., x_n. For more details, see (Chazal et al. 2011) and (Chazal et al. 2015). The DTM is computed for each point of the Grid with the following code:

> m0 <- 0.1
> DTM <- dtm(X = X, Grid = Grid, m0 = m0)

The k-nearest-neighbor density estimator, for each y ∈ R^d, is defined as

δ̂_k(y) = k / (n · v_d · r_k^d(y)),

where v_d is the volume of the Euclidean d-dimensional unit ball and r_k(y) is the Euclidean distance from the point y to its kth closest neighbor among the points of X. It is computed for each point of the Grid with the following code:

> k <- 60
> kNN <- knnDE(X = X, Grid = Grid, k = k)
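As a sanity check on the empirical DTM formula above, the computation can be sketched in a few lines of base R. This is illustrative only (the package's dtm is implemented in C++ and should be used in practice); the helper name dtmManual and the choice k = ⌈m0 · n⌉ are ours.

```r
# Illustrative base-R sketch of the empirical DTM (not the package's dtm()):
# dtm_hat(y) = ( (1/k) * sum over the k nearest x_i of ||x_i - y||^r )^(1/r)
dtmManual <- function(X, y, m0, r = 2) {
  n <- nrow(X)
  k <- ceiling(m0 * n)                               # assumed rounding rule
  dists <- sqrt(rowSums((X - matrix(y, n, ncol(X), byrow = TRUE))^2))
  knnDists <- sort(dists)[seq_len(k)]                # k nearest neighbors of y
  mean(knnDists^r)^(1 / r)
}

# Toy check: three collinear points, query at the origin, m0 chosen so k = 2
Xtoy <- cbind(c(0, 1, 3), c(0, 0, 0))
dtmManual(Xtoy, c(0, 0), m0 = 2/3)                   # sqrt(mean(c(0, 1)^2)) = sqrt(0.5)
```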
The Gaussian kernel density estimator (KDE), for each y ∈ R^d, is defined as

p̂_h(y) = (1 / (n (√(2π) h)^d)) Σ_{i=1}^n exp( −‖y − x_i‖^2 / (2 h^2) ),

where h is a smoothing parameter. It is computed for each point of the Grid with the following code:

> h <- 0.3
> KDE <- kde(X = X, Grid = Grid, h = h)

The kernel distance estimator, for each y ∈ R^d, is defined as

κ̂_h(y) = [ (1/n^2) Σ_{i=1}^n Σ_{j=1}^n K_h(x_i, x_j) + K_h(y, y) − (2/n) Σ_{i=1}^n K_h(y, x_i) ]^{1/2},

where K_h(x, y) = exp( −‖x − y‖^2 / (2 h^2) ) is the Gaussian kernel with smoothing parameter h. The kernel distance is computed for each point of the Grid with the following code:

> h <- 0.3
> Kdist <- kernelDist(X = X, Grid = Grid, h = h)

For this 2-dimensional example, we can visualize the functions using persp from the graphics package. For example, the following code produces the KDE plot in Figure 1:

> persp(Xseq, Yseq,
+   matrix(KDE, ncol = length(Yseq), nrow = length(Xseq)), xlab = "",
+   ylab = "", zlab = "", theta = -20, phi = 35, ltheta = 50,
+   col = 2, border = NA, main = "KDE", d = 0.5, scale = FALSE,
+   expand = 3, shade = 0.9)
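The KDE formula can likewise be evaluated directly in base R at a single query point. This is a sketch of the definition rather than of the package's kde (which evaluates an entire grid in C++); the helper name kdeManual is ours.

```r
# Illustrative base-R sketch of the Gaussian KDE formula at one point y:
# p_hat_h(y) = (1 / (n * (sqrt(2*pi) * h)^d)) * sum_i exp(-||y - x_i||^2 / (2*h^2))
kdeManual <- function(X, y, h) {
  n <- nrow(X); d <- ncol(X)
  sqDists <- rowSums((X - matrix(y, n, d, byrow = TRUE))^2)
  sum(exp(-sqDists / (2 * h^2))) / (n * (sqrt(2 * pi) * h)^d)
}

# Toy check in d = 1: one sample point at 0, evaluated at 0, h = 1
kdeManual(matrix(0), y = 0, h = 1)   # 1 / sqrt(2 * pi), about 0.3989
```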
Figure 1: distance functions and density estimators evaluated over a grid of points.

2.1. Bootstrap Confidence Bands

We can construct a (1 − α) confidence band for a function using the bootstrap algorithm, which we briefly describe using the kernel density estimator:

1. Given a sample X = {x_1, ..., x_n}, compute the kernel density estimator p̂_h;
2. Draw X* = {x*_1, ..., x*_n} from X = {x_1, ..., x_n} (with replacement), and compute θ* = √n ‖p̂_h − p̂*_h‖_∞, where p̂*_h is the density estimator computed using X*;
3. Repeat the previous step B times to obtain θ*_1, ..., θ*_B;
4. Compute q_α = inf{ q : (1/B) Σ_{j=1}^B I(θ*_j > q) ≤ α };
5. The (1 − α) confidence band for E[p̂_h] is [ p̂_h − q_α/√n, p̂_h + q_α/√n ].

Fasy et al. (2014) and Chazal et al. (2014a) prove the validity of the bootstrap algorithm for kernel density estimators, distance to measure, and kernel distance, and use it in the framework of persistent homology. The bootstrap algorithm is implemented in the function bootstrapBand, which provides the option of parallelizing the algorithm (parallel = TRUE) using the package parallel. The following code computes a 90% confidence band for E[p̂_h], shown in Figure 2.
> band <- bootstrapBand(X = X, FUN = kde, Grid = Grid, B = 100,
+   parallel = FALSE, alpha = 0.1, h = h)

Figure 2: the 90% confidence band for E[p̂_h] has the form [l, u] = [p̂_h − q_α/√n, p̂_h + q_α/√n]. The plot on the right shows a section of the functions: the red surface is the KDE p̂_h; the pink surfaces are l and u.
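The five bootstrap steps above can be sketched in base R for a generic estimator. This is an illustration of the algorithm, not the implementation inside bootstrapBand; the helper names are ours, and we approximate the quantile of step 4 with R's quantile function.

```r
# Minimal sketch of steps 1-5 for a generic estimator estFun(sample, grid):
bootstrapWidth <- function(X, Grid, estFun, B = 100, alpha = 0.1) {
  n <- nrow(X)
  hatF <- estFun(X, Grid)                          # step 1: estimate on the data
  theta <- replicate(B, {                          # steps 2-3: B bootstrap draws
    Xstar <- X[sample(n, n, replace = TRUE), , drop = FALSE]
    sqrt(n) * max(abs(hatF - estFun(Xstar, Grid))) # sup-norm deviation
  })
  qAlpha <- quantile(theta, 1 - alpha)             # step 4 (empirical quantile)
  qAlpha / sqrt(n)                                 # step 5: half-width of the band
}
```

With a degenerate sample (all points identical) every bootstrap estimate equals the original one, so the returned half-width is exactly 0, which is a quick way to check the plumbing.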
3. Persistent Homology

We provide an informal description of the implemented methods of persistent homology. We assume the reader is familiar with the basic concepts and, for a rigorous exposition, we refer to the textbook by Edelsbrunner and Harer (2010).

3.1. Persistent Homology Over a Grid

In this section, we describe how to use the gridDiag function to compute the persistent homology of sublevel (and superlevel) sets of the functions described in Section 2. The function gridDiag evaluates a given real-valued function over a triangulated grid, constructs a filtration of simplices using the values of the function, and computes the persistent homology of the filtration. From version 1.2, gridDiag works in arbitrary dimension. The core of the function is written in C++, and the user can choose to compute persistence diagrams using either the Dionysus library or the PHAT library.

The following code computes the persistent homology of the superlevel sets (sublevel = FALSE) of the kernel density estimator (FUN = kde, h = 0.3) using the point cloud stored in the matrix X from the previous example. The same code would work for the other functions defined in Section 2 (it is sufficient to replace kde and its smoothing parameter h with another function and the corresponding parameter). The function gridDiag returns an object of the class "diagram". The other inputs are the features of the grid over which the kde is evaluated (lim and by), the smoothing parameter h, and a logical variable that indicates whether a progress bar should be printed (printProgress).

> Diag <- gridDiag(X = X, FUN = kde, h = 0.3, lim = cbind(Xlim, Ylim),
+   by = by, sublevel = FALSE, library = "Dionysus",
+   printProgress = FALSE)

We plot the data and the diagram, using the function plot, implemented as a standard S3 method for objects of the class "diagram". The following command produces the third plot in Figure 3.
> plot(Diag[["diagram"]], band = 2 * band[["width"]],
+   main = "KDE Diagram")

The option (band = 2 * band[["width"]]) produces a pink confidence band for the persistence diagram, using the confidence band constructed for the corresponding kernel density estimator in the previous section. The features above the band can be interpreted as representing significant homological features, while points in the band are not significantly different from noise. The validity of the bootstrap confidence band for persistence diagrams of KDE, DTM, and kernel distance derives from the Stability Theorem (Chazal, de Silva, Glisse, and Oudot 2012) and is discussed in detail in Fasy et al. (2014) and Chazal et al. (2014a).

The function plot for the class "diagram" provides the options of rotating the diagram (rotated = TRUE) and drawing the barcode in place of the diagram (barcode = TRUE), as well as other standard graphical options. See Figure 4.
Figure 3: The plot on the right shows the persistence diagram of the superlevel sets of the KDE. Black points represent connected components and red triangles represent loops. The features are born at high levels of the density and die at lower levels. The pink 90% confidence band separates significant features from noise.

> par(mfrow = c(1, 2), mai = c(0.8, 0.8, 0.3, 0.1))
> plot(Diag[["diagram"]], rotated = TRUE, band = band[["width"]],
+   main = "Rotated Diagram")
> plot(Diag[["diagram"]], barcode = TRUE, main = "Barcode")

Figure 4: Rotated Persistence Diagram and Barcode.

3.2. Rips Diagrams

The Vietoris-Rips complex R(X, ε) consists of simplices with vertices in X = {x_1, ..., x_n} ⊂ R^d and diameter at most ε. In other words, a simplex σ is included in the complex if each pair of vertices in σ is at most ε apart. The sequence of Rips complexes obtained by gradually increasing the radius ε creates a filtration.

The ripsDiag function computes the persistence diagram of the Rips filtration built on top of a point cloud. The user can choose to compute the Rips persistence diagram using either the C++ library GUDHI or Dionysus. The following code generates 60 points from each of two circles:
> Circle1 <- circleUnif(60)
> Circle2 <- circleUnif(60, r = 2) + 3
> Circles <- rbind(Circle1, Circle2)

We specify the limit of the Rips filtration and the maximum dimension of the homological features we are interested in (0 for components, 1 for loops, 2 for voids, etc.):

> maxscale <- 5      # limit of the filtration
> maxdimension <- 1  # components and loops

and we generate the persistence diagram:

> Diag <- ripsDiag(X = Circles, maxdimension, maxscale,
+   library = "GUDHI", printProgress = FALSE)

Alternatively, using the option (dist = "arbitrary") in ripsDiag(), the input X can be an n × n matrix of distances. This option is useful when the user wants to consider a Rips filtration constructed using an arbitrary distance and is currently only available for the option (library = "Dionysus").

Finally we plot the data and the diagram:

> par(mfrow = c(1, 2), mai = c(0.8, 0.8, 0.3, 0.3))
> plot(Circles, pch = 16, xlab = "", ylab = "")
> plot(Diag[["diagram"]])

Figure 5: Rips persistence diagram. Black points represent connected components and red triangles represent loops.

3.3. Alpha Complex Persistence Diagram

For a finite set of points X ⊂ R^d, the Alpha complex Alpha(X, s) is a simplicial subcomplex of the Delaunay complex of X consisting of simplices of circumradius less than or equal to √s.
For each u ∈ X, let V_u be its Voronoi cell, i.e., V_u = {x ∈ R^d : d(x, u) ≤ d(x, v) for all v ∈ X}, and let B_u(r) be the closed ball with center u and radius r. Let R_u(r) be the intersection of each ball of radius r with the corresponding Voronoi cell, i.e., R_u(r) = B_u(r) ∩ V_u. Then Alpha(X, s) is defined as

Alpha(X, s) = { σ ⊂ X : ∩_{u ∈ σ} R_u(√s) ≠ ∅ }.

See (Edelsbrunner and Harer 2010, Section 3.4) and (Rouvreau 2015). The sequence of Alpha complexes obtained by gradually increasing the parameter s creates an Alpha complex filtration.

The alphaComplexDiag function computes the persistence diagram of the Alpha complex filtration built on top of a point cloud, using the C++ library GUDHI. Currently the function alphaComplexDiag only works on 64-bit R. We first generate 30 points from a circle:

> X <- circleUnif(n = 30)

and the following code computes the persistence diagram of the Alpha complex filtration using the point cloud X, printing its progress (printProgress = TRUE). The function alphaComplexDiag returns an object of the class "diagram".

> DiagAlphaComplex <- alphaComplexDiag(X = X, printProgress = TRUE)
# Generated complex of size: 115
# Persistence timer: Elapsed time [ ] seconds

And we plot the diagram:

> plot(DiagAlphaComplex[["diagram"]])

Figure 6: Persistence diagram of the Alpha complex. Black points represent connected components and red triangles represent loops.

3.4. Persistence Diagram of Alpha Shape

The Alpha shape complex S(X, α) is the polytope with its boundary consisting of α-exposed simplices, where a simplex σ is α-exposed if there is an open ball b of radius α such that
b ∩ X = ∅ and ∂b ∩ X = σ. Suppose R^d is filled with ice cream; then consider scooping out the ice cream with a sphere-shaped spoon of radius α without touching the points of X. S(X, α) is the remaining polytope, with the round surfaces straightened. See (Fischer 2005) and (Edelsbrunner and Mücke 1994). The sequence of Alpha shape complexes obtained by gradually increasing the parameter α creates an Alpha shape complex filtration.

The alphaShapeDiag function computes the persistence diagram of the Alpha shape filtration built on top of a point cloud in 3 dimensions, using the C++ library GUDHI. Currently the point cloud must lie in 3 dimensions. We first generate 500 points from a sphere:

> XX <- sphereUnif(n = 500, d = 2)

and the following code computes the persistence diagram of the Alpha shape filtration using the point cloud XX, without printing its progress (printProgress = FALSE). The function alphaShapeDiag returns an object of the class "diagram".

> DiagAlphaShape <- alphaShapeDiag(X = XX, printProgress = FALSE)

And we plot the diagram:

> plot(DiagAlphaShape[["diagram"]])

Figure 7: Persistence diagram of the Alpha shape. Black points represent connected components, red triangles represent loops, and blue diamonds represent voids.

3.5. Bottleneck and Wasserstein Distances

Standard metrics for measuring the distance between two persistence diagrams are the bottleneck distance and the pth Wasserstein distance (Edelsbrunner and Harer 2010). The TDA package includes the functions bottleneck and wasserstein, which are R wrappers of the functions bottleneck_distance and wasserstein_distance of the C++ library Dionysus. We generate two persistence diagrams of the Rips filtrations built on top of the two (separate) circles of the previous example,
> Diag1 <- ripsDiag(Circle1, maxdimension = 1, maxscale = 5)
> Diag2 <- ripsDiag(Circle2, maxdimension = 1, maxscale = 5)

and we compute the bottleneck distance and the 2nd Wasserstein distance between the two diagrams. In the following code, the option dimension = 1 specifies that the distances between diagrams are computed using only one-dimensional features (loops).

> print(bottleneck(Diag1[["diagram"]], Diag2[["diagram"]],
+   dimension = 1))
[1]
> print(wasserstein(Diag1[["diagram"]], Diag2[["diagram"]], p = 2,
+   dimension = 1))
[1]

3.6. Landscapes and Silhouettes

Persistence landscapes and silhouettes are real-valued functions that further summarize the information contained in a persistence diagram. They have been introduced and studied in Bubenik (2012), Chazal et al. (2014c), and Chazal, Fasy, Lecci, Michel, Rinaldo, and Wasserman (2014b). We briefly introduce the two functions.

Landscape. The persistence landscape is a collection of continuous, piecewise linear functions λ : Z+ × R → R that summarizes a persistence diagram. To define the landscape, consider the set of functions created by tenting each point p = (x, y) = ((b+d)/2, (d−b)/2) representing a birth-death pair (b, d) in the persistence diagram D as follows:

Λ_p(t) = t − x + y = t − b    for t ∈ [x − y, x] = [b, (b+d)/2]
       = x + y − t = d − t    for t ∈ (x, x + y] = ((b+d)/2, d]
       = 0                    otherwise.    (1)

We obtain an arrangement of piecewise linear curves by overlaying the graphs of the functions {Λ_p}_p; see Figure 8 (left). The persistence landscape of D is a summary of this arrangement. Formally, the persistence landscape of D is the collection of functions

λ(k, t) = kmax_p Λ_p(t),   t ∈ [0, T], k ∈ N,    (2)

where kmax is the kth largest value in the set; in particular, 1max is the usual maximum function. See Figure 8 (middle).

Silhouette. Consider a persistence diagram with N off-diagonal points {(b_j, d_j)}_{j=1}^N.
For every 0 < p < ∞ we define the power-weighted silhouette

φ^(p)(t) = Σ_{j=1}^N |d_j − b_j|^p Λ_j(t) / Σ_{j=1}^N |d_j − b_j|^p.

The value p can be thought of as a trade-off parameter between uniformly treating all pairs in the persistence diagram and considering only the most persistent pairs. Specifically, when
p is small, φ^(p)(t) is dominated by the effect of low-persistence features. Conversely, when p is large, φ^(p)(t) is dominated by the most persistent features; see Figure 8 (right).

Figure 8: Left: we use the rotated axes to represent a persistence diagram D. A feature (b, d) ∈ D is represented by the point ((b+d)/2, (d−b)/2) (pink). In words, the x-coordinate is the average parameter value over which the feature exists, and the y-coordinate is the half-life of the feature. Middle: the blue curve is the landscape λ(1, ·). Right: the blue curve is the silhouette φ^(1)(·).

The landscape and silhouette functions can be evaluated over a one-dimensional grid of points tseq using the functions landscape and silhouette. In the following code, we use the persistence diagram from Figure 5 to construct the corresponding landscape and silhouette for one-dimensional features (dimension = 1). The option (KK = 1) specifies that we are interested in the 1st landscape function, and (p = 1) is the power of the weights in the definition of the silhouette function.

> tseq <- seq(0, maxscale, length = 1000)  # domain
> Land <- landscape(Diag[["diagram"]], dimension = 1, KK = 1, tseq)
> Sil <- silhouette(Diag[["diagram"]], p = 1, dimension = 1, tseq)

The functions landscape and silhouette return real-valued vectors, which can be simply plotted with plot(tseq, Land, type = "l"); plot(tseq, Sil, type = "l"). See Figure 9.

Figure 9: Landscape and Silhouette of the one-dimensional features of the diagram of Figure 5.
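The triangle functions Λ_p of Equation (1), the first landscape of Equation (2), and the silhouette defined above can be reproduced in a few lines of base R, which makes a useful sanity check against the output of landscape and silhouette. The helper names below are ours, and the code is an illustration of the definitions, not the package's implementation.

```r
# Triangle function of Equation (1) for a birth-death pair (b, d):
# equals t - b on [b, (b+d)/2], d - t on [(b+d)/2, d], and 0 elsewhere
triangleFun <- function(t, b, d) {
  pmax(pmin(t - b, d - t), 0)
}

# First landscape, lambda(1, t) = max over pairs of the triangle functions
land1 <- function(t, births, deaths) {
  Reduce(pmax, lapply(seq_along(births),
                      function(j) triangleFun(t, births[j], deaths[j])))
}

# Power-weighted silhouette phi^(p)(t) with weights |d_j - b_j|^p
silhouetteManual <- function(t, births, deaths, p = 1) {
  w <- abs(deaths - births)^p
  num <- rep(0, length(t))
  for (j in seq_along(births)) {
    num <- num + w[j] * triangleFun(t, births[j], deaths[j])
  }
  num / sum(w)
}

# With a single feature (b, d) = (0, 2), the silhouette is the triangle itself,
# peaking at t = 1 with height 1:
silhouetteManual(c(0, 1, 2), births = 0, deaths = 2)   # 0 1 0
```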
3.7. Confidence Bands for Landscapes and Silhouettes

Recent results in Chazal et al. (2014c) and Chazal et al. (2014b) show how to construct confidence bands for landscapes and silhouettes, using a bootstrap algorithm (multiplier bootstrap). This strategy is useful in the following scenario. We have a very large dataset with N points. There is a diagram D and landscape λ corresponding to some filtration built on the data. When N is large, computing D is prohibitive. Instead, we draw n subsamples, each of size m. We compute a diagram and a landscape for each subsample, yielding landscapes λ_1, ..., λ_n. (Assuming m is much smaller than N, these subsamples are essentially independent and identically distributed.) Then we compute (1/n) Σ_i λ_i, an estimate of E(λ_i), which can be regarded as an approximation of λ. The function multipBootstrap uses the landscapes λ_1, ..., λ_n to construct a confidence band for E(λ_i). The same strategy is valid for silhouette functions. We illustrate the method with a simple example.
First we sample N points from two circles:

> N <- 400
> XX1 <- circleUnif(N / 2)
> XX2 <- circleUnif(N / 2, r = 2) + 3
> X <- rbind(XX1, XX2)

Then we specify the number of subsamples n, the subsample size m, and we create the objects that will store the n diagrams and landscapes:

> m <- 80   # subsample size
> n <- 10   # we will compute n landscapes using subsamples of size m
> tseq <- seq(0, maxscale, length = 500)  # domain of landscapes
> # here we store the n Rips diagrams
> Diags <- list()
> # here we store the n landscapes
> Lands <- matrix(0, nrow = n, ncol = length(tseq))

For n times, we subsample from the large point cloud, compute n Rips diagrams and the corresponding 1st landscape functions (KK = 1), using one-dimensional features (dimension = 1):

> for (i in seq_len(n)) {
+   subX <- X[sample(seq_len(N), m), ]
+   Diags[[i]] <- ripsDiag(subX, maxdimension = 1, maxscale = 5)
+   Lands[i, ] <- landscape(Diags[[i]][["diagram"]], dimension = 1,
+     KK = 1, tseq)
+ }

Finally we use the n landscapes to construct a 95% confidence band for the mean landscape

> bootLand <- multipBootstrap(Lands, B = 100, alpha = 0.05,
+   parallel = FALSE)
which is plotted by the following code. See Figure 10.

> plot(tseq, bootLand[["mean"]], main = "Mean Landscape with 95% band")
> polygon(c(tseq, rev(tseq)),
+   c(bootLand[["band"]][, 1], rev(bootLand[["band"]][, 2])),
+   col = "pink")
> lines(tseq, bootLand[["mean"]], lwd = 2, col = 2)

Figure 10: 95% confidence band for the mean landscape function.

3.8. Selection of Smoothing Parameters

An unsolved problem in topological inference is how to choose the smoothing parameters, for example h for the KDE and m0 for the DTM. Chazal et al. (2014a) suggest the following method, which we describe here for the kernel density estimator, but which also works for the kernel distance and the distance to measure.

Let l_1(h), l_2(h), ... be the lifetimes of the features of a persistence diagram at scale h. Let q_α(h)/√n be the width of the confidence band for the kernel density estimator at scale h, as described in Section 2.1. We define two quantities that measure the amount of significant information at level h:

the number of significant features, N(h) = #{ i : l_i(h) > 2 q_α(h)/√n };
the total significant persistence, S(h) = Σ_i [ l_i(h) − 2 q_α(h)/√n ]_+.

These measures are small when h is small, since q_α(h) is large. On the other hand, they are small when h is large, since then all the features of the KDE are smoothed out. Thus we have a kind of topological bias-variance tradeoff. We choose h to maximize N(h) or S(h). The method is implemented in the function maxPersistence, as shown in the following toy example. First, we sample 1600 points from two circles (plus some clutter noise) and we specify the limits of the grid over which the KDE is evaluated:
> XX1 <- circleUnif(600)
> XX2 <- circleUnif(1000, r = 1.5)
> noise <- cbind(runif(80, -2, 5), runif(80, -2, 5))
> X <- rbind(XX1, XX2, noise)
> # Grid limits
> Xlim <- c(-2, 5)
> Ylim <- c(-2, 5)
> by <- 0.2

Then we specify a sequence of smoothing parameters among which we will select the optimal one, the number of bootstrap iterations, and the level of the confidence bands to be computed:

> parametersKDE <- seq(0.1, 0.6, by = 0.05)
> B <- 50       # number of bootstrap iterations. Should be large.
> alpha <- 0.1  # level of the confidence bands

The function maxPersistence can be parallelized (parallel = TRUE) and a progress bar can be printed (printProgress = TRUE):

> maxKDE <- maxPersistence(kde, parametersKDE, X,
+   lim = cbind(Xlim, Ylim), by = by, sublevel = FALSE,
+   B = B, alpha = alpha, parallel = TRUE,
+   printProgress = TRUE, bandFUN = "bootstrapBand")
***************************************************

The S3 methods summary and plot are implemented for the class "maxPersistence". We can display the values of the parameters that maximize the two criteria:

> print(summary(maxKDE))
Call:
maxPersistence(FUN = kde, parameters = parametersKDE, X = X,
    lim = cbind(Xlim, Ylim), by = by, sublevel = FALSE, B = B,
    alpha = alpha, bandFUN = "bootstrapBand", parallel = TRUE,
    printProgress = TRUE)

The number of significant features is maximized by
[1]
The total significant persistence is maximized by
[1] 0.25

and produce the summary plot of Figure 11.
> par(mfrow = c(1, 2), mai = c(0.8, 0.8, 0.35, 0.3))
> plot(X, pch = 16, cex = 0.5, main = "Two Circles")
> plot(maxKDE, main = "Max Persistence - KDE")

Figure 11: Max Persistence Method for the selection of smoothing parameters. For each value of the smoothing parameter we display the persistence of the corresponding homological features, along with a (pink) confidence band that separates the statistically significant features from the topological noise.

4. Density Clustering

The last example of this vignette illustrates the use of the function clusterTree, which is an implementation of Algorithm 1 in Kent, Rinaldo, and Verstynen (2013). First, we briefly describe the task of density clustering; we refer the reader to Kent (2013) for a more rigorous and complete description.

Let f be the density of the probability distribution P generating the observed sample X = {x_1, ..., x_n} ⊂ R^d. For a threshold value λ > 0, the corresponding superlevel set of f is L_f(λ) := cl({x ∈ R^d : f(x) > λ}), and its d-dimensional subsets are called high-density regions. The high-density clusters of P are the maximal connected subsets of L_f(λ). By considering all the level sets simultaneously (from λ = 0 to λ = ∞), we can record the evolution and the hierarchy of the high-density clusters of P. This naturally leads to the notion of the cluster density tree of P (see, e.g., Hartigan (1981)), defined as the collection of sets T := {L_f(λ) : λ ≥ 0}, which satisfies the tree property: A, B ∈ T implies that A ⊂ B or B ⊂ A or A ∩ B = ∅. We will refer to this construction as the λ-tree. Alternatively, Kent et al. (2013) introduced the α-tree and κ-tree, which facilitate the interpretation of the tree by precisely encoding the probability content of each tree branch rather than the density level.
Cluster trees are particularly useful for high-dimensional data, whose spatial organization is difficult to represent. We illustrate the strategy with a simple example. First we generate a 2D point cloud from three (not so well) separated clusters (see top left plot of Figure 12):

> X1 <- cbind(rnorm(300, 1, .8), rnorm(300, 5, 0.8))
> X2 <- cbind(rnorm(300, 3.5, .8), rnorm(300, 5, 0.8))
> X3 <- cbind(rnorm(300, 6, 1), rnorm(300, 1, 1))
> XX <- rbind(X1, X2, X3)

Then we use the function clusterTree to compute cluster trees using the k-nearest-neighbor density estimator (k = 100 nearest neighbors) and the Gaussian kernel density estimator, with smoothing parameter h.

> Tree <- clusterTree(XX, k = 100, density = "knn",
+   printProgress = FALSE)
> TreeKDE <- clusterTree(XX, k = 100, h = 0.3, density = "kde",
+   printProgress = FALSE)

Note that, even when kde is used to estimate the density, we have to provide the option (k = 100), so that the algorithm can compute the connected components at each level of the density using a k-nearest-neighbor graph. The "clusterTree" objects Tree and TreeKDE contain information about the λ-tree, α-tree, and κ-tree. The function plot for objects of the class "clusterTree" produces the plots in Figure 12.

> plot(Tree, type = "lambda", main = "lambda Tree (knn)")
> plot(Tree, type = "kappa", main = "kappa Tree (knn)")
> plot(TreeKDE, type = "lambda", main = "lambda Tree (kde)")
> plot(TreeKDE, type = "kappa", main = "kappa Tree (kde)")
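To see what a single level of the λ-tree encodes, the high-density clusters at one threshold λ can be sketched in base R: keep the points whose estimated density exceeds λ, then take the connected components of a neighborhood graph on the survivors. This is only a conceptual illustration. clusterTree follows Algorithm 1 of Kent et al. (2013) and uses a k-nearest-neighbor graph, while the sketch below uses a simple ε-graph, and all names are ours.

```r
# Connected components of the superlevel set {x : density(x) > lambda},
# approximated by an eps-neighborhood graph on the retained sample points.
levelSetComponents <- function(X, densityVals, lambda, eps) {
  keep <- which(densityVals > lambda)        # points in the superlevel set
  comp <- rep(NA_integer_, length(keep))     # component label per kept point
  label <- 0L
  for (i in seq_along(keep)) {
    if (is.na(comp[i])) {                    # start a new component (BFS)
      label <- label + 1L
      queue <- i
      while (length(queue) > 0) {
        cur <- queue[1]; queue <- queue[-1]
        comp[cur] <- label
        dcur <- sqrt(rowSums((X[keep, , drop = FALSE] -
          matrix(X[keep[cur], ], length(keep), ncol(X), byrow = TRUE))^2))
        nbrs <- which(dcur <= eps & is.na(comp))   # unvisited eps-neighbors
        queue <- union(queue, nbrs)
      }
    }
  }
  comp
}

# Two tight groups on a line split into two components at eps = 0.5:
levelSetComponents(matrix(c(0, 0.1, 5, 5.1), ncol = 1),
                   densityVals = rep(1, 4), lambda = 0.5, eps = 0.5)   # 1 1 2 2
```

Repeating this for a decreasing sequence of λ values, and tracking how components appear and merge, is exactly the hierarchy that the λ-tree records.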
[Figure panels: Data (XX[,1] vs XX[,2]); cluster labels (knn); lambda Tree (knn); kappa Tree (knn); lambda Tree (kde); kappa Tree (kde).]

Figure 12: The lambda trees and kappa trees of the k Nearest Neighbor density estimator and the kernel density estimator.

References

Bauer U, Kerber M, Reininghaus J (2012). "PHAT, a software library for persistent homology."

Bubenik P (2012). "Statistical topological data analysis using persistence landscapes." arXiv preprint.

Carlsson G (2009). "Topology and data." Bulletin of the American Mathematical Society, 46(2).

Chazal F, Cohen-Steiner D, Mérigot Q (2011). "Geometric inference for probability measures." Foundations of Computational Mathematics, 11(6).

Chazal F, de Silva V, Glisse M, Oudot S (2012). "The structure and stability of persistence modules." arXiv preprint.

Chazal F, Fasy BT, Lecci F, Michel B, Rinaldo A, Wasserman L (2014a). "Robust Topological Inference: Distance-To-a-Measure and Kernel Distance." Technical Report.

Chazal F, Fasy BT, Lecci F, Michel B, Rinaldo A, Wasserman L (2014b). "Subsampling Methods for Persistent Homology." arXiv preprint.
Chazal F, Fasy BT, Lecci F, Rinaldo A, Wasserman L (2014c). "Stochastic Convergence of Persistence Landscapes and Silhouettes." In Annual Symposium on Computational Geometry. ACM.

Chazal F, Massart P, Michel B (2015). "Rates of convergence for robust geometric inference." CoRR.

Edelsbrunner H, Harer J (2010). Computational Topology: An Introduction. American Mathematical Society.

Edelsbrunner H, Mücke EP (1994). "Three-dimensional Alpha Shapes." ACM Transactions on Graphics, 13(1).

Fasy BT, Lecci F, Rinaldo A, Wasserman L, Balakrishnan S, Singh A (2014). "Confidence Sets For Persistence Diagrams." The Annals of Statistics.

Fischer K (2005). "Introduction to Alpha Shapes."

Hartigan JA (1981). "Consistency of single linkage for high-density clusters." Journal of the American Statistical Association, 76(374).

Kent BP (2013). Level Set Trees for Applied Statistics. Ph.D. thesis, Department of Statistics, Carnegie Mellon University.

Kent BP, Rinaldo A, Verstynen T (2013). "DeBaCl: A Python Package for Interactive DEnsity-BAsed CLustering." arXiv preprint.

Kpotufe S, von Luxburg U (2011). "Pruning nearest neighbor cluster trees." In International Conference on Machine Learning (ICML).

Maria C (2014). "GUDHI, Simplicial Complexes and Persistent Homology Packages." https://project.inria.fr/gudhi/software/.

Morozov D (2007). "Dionysus, a C++ library for computing persistent homology."

Rouvreau V (2015). "Alpha complex." In GUDHI User and Reference Manual. GUDHI Editorial Board.

Affiliation:

Brittany T. Fasy
Computer Science Department
Montana State University
[email protected]
Jisu Kim
Department of Statistics
Carnegie Mellon University

Fabrizio Lecci

Clément Maria
School of Mathematics and Physics
The University of Queensland

Vincent Rouvreau
GUDHI Group
INRIA Saclay

TopStat Group
Carnegie Mellon University
