# Carnegie Mellon University. Extract from Andrew Moore's PhD Thesis: Efficient Memory-based Learning for Robot Control



An introductory tutorial on kd-trees. Andrew W. Moore, Carnegie Mellon University. Extract from Andrew Moore's PhD thesis: Efficient Memory-based Learning for Robot Control. PhD Thesis, Technical Report No. 209, Computer Laboratory, University of Cambridge.

## Chapter 6: Kd-trees for Cheap Learning

This chapter gives a specification of the nearest neighbour algorithm. It also gives both an informal and a formal introduction to the kd-tree data structure. Then there is an explicit, detailed account of how the nearest neighbour search algorithm is implemented efficiently, followed by an empirical investigation into the algorithm's performance. Finally, there is discussion of some other algorithms related to the nearest neighbour search.

### 6.1 Nearest Neighbour Specification

Given two multi-dimensional spaces Domain = ℝ^k_d and Range = ℝ^k_r, let an exemplar be a member of Domain × Range and let an exemplar-set be a finite set of exemplars. Given an exemplar-set E and a target domain vector d, a nearest neighbour of d is any exemplar (d', r') ∈ E such that None-nearer(E, d, d'). Notice that there might be more than one suitable exemplar. This ambiguity captures the requirement that any nearest neighbour is adequate. None-nearer is defined thus:

    None-nearer(E, d, d')  ⇔  ∀ (d'', r'') ∈ E : |d - d'| ≤ |d - d''|        (6.1)

In Equation 6.1 the distance metric is Euclidean, though any other L_p-norm could have been used:

    |d - d'| = √( Σ_{i=1}^{k_d} (d_i - d'_i)² )        (6.2)

where d_i is the ith component of vector d. In the following sections I describe some algorithms to realize this abstract specification, with the additional informal requirement that the computation time should be relatively short.
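As a concrete reading of this specification, the predicate of Equations 6.1 and 6.2 can be written down directly. This is a minimal Python sketch, not part of the original specification; the function names are mine:

```python
import math

def none_nearer(E, d, d0):
    """None-nearer(E, d, d0): no exemplar in E is strictly nearer to d
    than d0 is, under the Euclidean metric of Equation 6.2."""
    return all(math.dist(d, d0) <= math.dist(d, d2) for d2, _ in E)

def nearest_neighbours(E, d):
    """Every exemplar satisfying the specification (there may be several,
    which is exactly the ambiguity the text allows)."""
    return [(d0, r0) for d0, r0 in E if none_nearer(E, d, d0)]
```

Note that `nearest_neighbours` may return more than one exemplar when several are equidistant, matching the remark that any nearest neighbour is adequate.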

Algorithm: Nearest Neighbour by Scanning.

Data structures:
- domain-vector: a vector of k_d floating point numbers.
- range-vector: a vector of k_r floating point numbers.
- exemplar: a pair (domain-vector, range-vector).

Input: exlist, of type list of exemplar; dom, of type domain-vector.
Output: nearest, of type exemplar.
Preconditions: exlist is not empty.
Postconditions: if nearest represents the exemplar (d', r'), exlist represents the exemplar-set E, and dom represents the vector d, then (d', r') ∈ E and None-nearer(E, d, d').

Code:
1. nearest-dist := infinity
2. nearest := undefined
3. for ex := each exemplar in exlist
   3.1 dist := distance between dom and the domain of ex
   3.2 if dist < nearest-dist then
       nearest-dist := dist
       nearest := ex

Table 6.1: Finding the nearest neighbour by scanning a list.

### 6.2 Naive Nearest Neighbour

This operation could be achieved by representing the exemplar-set as a list of exemplars. In Table 6.1, I give the trivial nearest neighbour algorithm, which scans the entire list. This algorithm has time complexity O(N), where N is the size of E. By structuring the exemplar-set more intelligently, it is possible to avoid making a distance computation for every member.

### 6.3 Introduction to kd-trees

A kd-tree is a data structure for storing a finite set of points from a k-dimensional space. It was examined in detail by J. Bentley [Bentley, 1980; Friedman et al., 1977]. Recently, S. Omohundro has recommended it in a survey of possible techniques to increase the speed of neural network learning [Omohundro, 1987].

A kd-tree is a binary tree. The contents of each node are depicted in Table 6.2. Here I provide an informal description of the structure and meaning of the tree, and in the following subsection I give a formal definition of the invariants and semantics.
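Before moving on to the tree itself, the scan of Table 6.1 translates almost line for line into runnable Python (a sketch; exemplars are (domain-vector, range-vector) pairs as in the data structures above):

```python
import math

def nearest_by_scanning(exlist, dom):
    """Table 6.1: scan the whole list, keeping the closest exemplar. O(N)."""
    assert exlist, "precondition: exlist is not empty"
    nearest, nearest_dist = None, math.inf
    for ex in exlist:
        dist = math.dist(dom, ex[0])   # distance between dom and the domain of ex
        if dist < nearest_dist:        # Step 3.2
            nearest_dist, nearest = dist, ex
    return nearest
```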

| Field name | Field type | Description |
|---|---|---|
| dom-elt | domain-vector | a point from the k_d-dimensional space |
| range-elt | range-vector | a point from the k_r-dimensional space |
| split | integer | the splitting dimension |
| left | kd-tree | a kd-tree representing the points to the left of the splitting plane |
| right | kd-tree | a kd-tree representing the points to the right of the splitting plane |

Table 6.2: The fields of a kd-tree node.

The exemplar-set E is represented by the set of nodes in the kd-tree, each node representing one exemplar. The dom-elt field represents the domain-vector of the exemplar and the range-elt field represents the range-vector. The dom-elt component is the index for the node. It splits the space into two subspaces according to the splitting hyperplane of the node. All the points in the "left" subspace are represented by the left subtree, and the points in the "right" subspace by the right subtree. The splitting hyperplane is a plane which passes through dom-elt and which is perpendicular to the direction specified by the split field. Let i be the value of the split field. Then a point is to the left of dom-elt if and only if its ith component is less than the ith component of dom-elt. The complementary definition holds for the right field. If a node has no children, then the splitting hyperplane is not required.

Figure 6.1 demonstrates a kd-tree representation of the four dom-elt points (2,5), (3,8), (6,3) and (8,9). The root node, with dom-elt (2,5), splits the plane at y = 5 into two subspaces. The point (3,8) lies in the upper subspace, that is {(x, y) | y > 5}, and so is in the right subtree. Figure 6.2 shows how the nodes partition the plane.

#### 6.3.1 Formal Specification of a kd-tree

The reader who is content with the informal description above can omit this section. I define a mapping which maps the tree to the exemplar-set it represents:

    exset-rep : kd-tree → exemplar-set        (6.3)

    exset-rep(empty) = {}
    exset-rep(<d, r, -, empty, empty>) = {(d, r)}
    exset-rep(<d, r, split, tree_left, tree_right>) =
        exset-rep(tree_left) ∪ {(d, r)} ∪ exset-rep(tree_right)        (6.4)

Figure 6.1: A 2d-tree of the four elements [2,5], [6,3], [3,8] and [8,9]. The splitting planes are not indicated. The [2,5] node splits along the y = 5 plane and the [3,8] node splits along the x = 3 plane.

Figure 6.2: How the tree of Figure 6.1 splits up the x,y plane.

The invariant is that subtrees only ever contain dom-elts which are on the correct side of all their ancestors' splitting planes:

    Is-legal-kdtree(empty).
    Is-legal-kdtree(<d, r, -, empty, empty>).
    Is-legal-kdtree(<d, r, split, tree_left, tree_right>) ⇔
        ∀ (d', r') ∈ exset-rep(tree_left) : d'_split ≤ d_split  ∧
        ∀ (d', r') ∈ exset-rep(tree_right) : d'_split > d_split  ∧
        Is-legal-kdtree(tree_left)  ∧  Is-legal-kdtree(tree_right)        (6.5)

#### 6.3.2 Constructing a kd-tree

Given an exemplar-set E, a kd-tree can be constructed by the algorithm in Table 6.3. The pivot-choosing procedure of Step 2 inspects the set and chooses a "good" domain vector from this set to use as the tree's root. The discussion of how such a root is chosen is deferred to Section 6.7. Whichever exemplar is chosen as root will not affect the correctness of the kd-tree, though the tree's maximum depth and the shape of the hyperregions will be affected.

### 6.4 Nearest Neighbour Search

In this section, I describe the nearest neighbour algorithm which operates on kd-trees. I begin with an informal description and worked example, and then give the precise algorithm.

A first approximation is initially found at the leaf node which contains the target point. In Figure 6.3 the target point is marked X and the leaf node of the region containing the target is coloured black. As is exemplified in this case, this first approximation is not necessarily the nearest neighbour, but at least we know that any potential nearer neighbour must lie closer, and so must lie within the circle centred on X and passing through the leaf node. We now back up to the parent of the current node. In Figure 6.4 this parent is the black node. We compute whether it is possible for a closer solution than the one found so far to exist in this parent's other child. Here it is not possible, because the circle does not intersect with the (shaded) space occupied by the parent's other child. If no closer neighbour can exist in the other child, the algorithm can immediately move up a further level, else it must recursively explore the other child. In this example, the next parent which is checked will need to be explored, because the area it covers (i.e. everywhere north of the central horizontal line) does intersect with the best circle so far.

Table 6.4 describes my actual implementation of the nearest neighbour algorithm. It is called with four parameters: the kd-tree, the target domain vector, a representation of a hyperrectangle in Domain, and a value indicating the maximum distance from the target which is worth searching. The search will only take place within those portions of the kd-tree which lie both in the hyperrectangle and within the maximum distance to the target.

Algorithm: Constructing a kd-tree.

Input: exset, of type exemplar-set.
Output: kd, of type kdtree.
Pre: none.
Post: exset = exset-rep(kd) ∧ Is-legal-kdtree(kd).

Code:
1. If exset is empty then return the empty kdtree.
2. Call the pivot-choosing procedure, which returns two values:
   ex := a member of exset
   split := the splitting dimension
3. d := domain vector of ex
4. exset' := exset with ex removed
5. r := range vector of ex
6. exset-left := {(d', r') ∈ exset' | d'_split ≤ d_split}
7. exset-right := {(d', r') ∈ exset' | d'_split > d_split}
8. kd-left := recursively construct kd-tree from exset-left
9. kd-right := recursively construct kd-tree from exset-right
10. kd := <d, r, split, kd-left, kd-right>

Proof: by induction on the length of exset and the definitions of exset-rep and Is-legal-kdtree.

Table 6.3: Constructing a kd-tree from a set of exemplars.

Figure 6.3: The black dot is the dot which owns the leaf node containing the target (the cross). Any nearer neighbour must lie inside this circle.
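Table 6.3 leaves the pivot chooser abstract. The sketch below fills it with the simplest possible choice (cycle through the dimensions and take the median), which is an illustrative placeholder rather than the strategy of Section 6.7; the legality check mirrors Equation 6.5 and exset-rep mirrors Equation 6.4:

```python
from collections import namedtuple

Node = namedtuple("Node", "dom_elt range_elt split left right")  # Table 6.2

def build_kdtree(exset, depth=0):
    """Table 6.3 with a simple cycling-median pivot chooser (illustrative only)."""
    if not exset:
        return None                              # Step 1: empty kd-tree
    split = depth % len(exset[0][0])
    ordered = sorted(exset, key=lambda ex: ex[0][split])
    m = len(ordered) // 2
    d, r = ordered[m]                            # Steps 2-5: pivot and its vectors
    rest = ordered[:m] + ordered[m + 1:]
    left = [e for e in rest if e[0][split] <= d[split]]    # Step 6
    right = [e for e in rest if e[0][split] > d[split]]    # Step 7
    return Node(d, r, split, build_kdtree(left, depth + 1),
                build_kdtree(right, depth + 1))  # Steps 8-10

def exset_rep(kd):
    """Equation 6.4: the exemplar-set a tree represents."""
    if kd is None:
        return []
    return exset_rep(kd.left) + [(kd.dom_elt, kd.range_elt)] + exset_rep(kd.right)

def is_legal_kdtree(kd):
    """Equation 6.5: descendants lie on the correct side of ancestors' planes."""
    if kd is None:
        return True
    s, dv = kd.split, kd.dom_elt
    return (all(d[s] <= dv[s] for d, _ in exset_rep(kd.left)) and
            all(d[s] > dv[s] for d, _ in exset_rep(kd.right)) and
            is_legal_kdtree(kd.left) and is_legal_kdtree(kd.right))
```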

Figure 6.4: The black dot is the parent of the closest found so far. In this case the black dot's other child (shaded grey) need not be searched.

The caller of the routine will generally specify the infinite hyperrectangle which covers the whole of Domain, and the infinite maximum distance.

Before discussing its execution, I will explain how the operations on the hyperrectangles can be implemented. A hyperrectangle is represented by two arrays: one of its minimum coordinates, the other of its maximum coordinates. To cut the hyperrectangle, so that one of its edges is moved closer to its centre, the appropriate array component is altered. To check whether a hyperrectangle hr intersects with a hypersphere of radius r centred at point t, we find the point p in hr which is closest to t. Write hr^min_i as the minimum extreme of hr in the ith dimension and hr^max_i as the maximum extreme. Then p_i, the ith component of this closest point, is computed thus:

    p_i = hr^min_i   if t_i ≤ hr^min_i
    p_i = t_i        if hr^min_i < t_i < hr^max_i        (6.6)
    p_i = hr^max_i   if t_i ≥ hr^max_i

The objects intersect only if the distance between p and t is less than or equal to r.

The search is depth first, and uses the heuristic of searching first the child node which contains the target. Step 1 deals with the trivial empty tree case, and Steps 2 and 3 assign two important local variables. Step 4 cuts the current hyperrectangle into the two hyperrectangles covering the space occupied by the child nodes. Steps 5-7 determine which child contains the target. After Step 8, when this initial child is searched, it may be possible to prove that there cannot be any closer point in the hyperrectangle of the further child. In particular, the point at the current node must be out of range. The test is made in Steps 9 and 10. Step 9 restricts the maximum radius in which any possible closer point could lie, and then the test in Step 10 checks whether there is any such space in the hyperrectangle of the further child.

Algorithm: Nearest Neighbour in a kd-tree.

Input: kd, of type kdtree; target, of type domain vector; hr, of type hyperrectangle; max-dist-sqd, of type float.
Output: nearest, of type exemplar; dist-sqd, of type float.
Pre: Is-legal-kdtree(kd).
Post: informally, the postcondition is that nearest is a nearest exemplar to target which also lies both within the hyperrectangle hr and within distance √max-dist-sqd of target. √dist-sqd is the distance of this nearest point. If there is no such point then dist-sqd contains infinity.

Code:
1. if kd is empty then set dist-sqd to infinity and exit.
2. s := split field of kd
3. pivot := dom-elt field of kd
4. Cut hr into two sub-hyperrectangles left-hr and right-hr. The cut plane is through pivot and perpendicular to the s dimension.
5. target-in-left := target_s ≤ pivot_s
6. if target-in-left then
   6.1 nearer-kd := left field of kd and nearer-hr := left-hr
   6.2 further-kd := right field of kd and further-hr := right-hr
7. if not target-in-left then
   7.1 nearer-kd := right field of kd and nearer-hr := right-hr
   7.2 further-kd := left field of kd and further-hr := left-hr
8. Recursively call Nearest Neighbour with parameters (nearer-kd, target, nearer-hr, max-dist-sqd), storing the results in nearest and dist-sqd.
9. max-dist-sqd := minimum of max-dist-sqd and dist-sqd
10. A nearer point could only lie in further-kd if there were some part of further-hr within distance √max-dist-sqd of target. If this is the case then
    10.1 if (pivot - target)² < dist-sqd then
         nearest := (pivot, range-elt field of kd)
         dist-sqd := (pivot - target)²
         max-dist-sqd := dist-sqd
    10.2 Recursively call Nearest Neighbour with parameters (further-kd, target, further-hr, max-dist-sqd), storing the results in temp-nearest and temp-dist-sqd.
    10.3 If temp-dist-sqd < dist-sqd then nearest := temp-nearest and dist-sqd := temp-dist-sqd.

Proof: outlined in the text.

Table 6.4: The nearest neighbour algorithm.

If there is no space in the further child's hyperrectangle within this radius, then no further search is necessary. If there is, then Step 10.1 checks whether the point associated with the current node of the tree is closer than the closest yet. Then, in Step 10.2, the further child is recursively searched. The maximum distance worth examining in this further search is the distance to the closest point yet discovered.

The proof that this will find the nearest neighbour within the constraints is by induction on the size of the kd-tree. If the cutoff were not made in Step 10, then the proof would be straightforward: the point returned is the closest out of (i) the closest point in the nearer child, (ii) the point at the current node and (iii) the closest point in the further child. If the cutoff were made in Step 10, then the point returned is the closest point in the nearer child, and we can show that neither the current point, nor any point in the further child, can possibly be closer.

Many local optimizations are possible which, while not altering the asymptotic performance of the algorithm, will multiply the speed by a constant factor. In particular, it is in practice possible to hold almost all of the search state globally, instead of passing it as recursive parameters.

### 6.5 Theoretical Behaviour

Given a kd-tree with N nodes, how many nodes need to be inspected in order to find the proven nearest neighbour using the algorithm in Section 6.4? It is clear at once that on average, at least O(log N) inspections are necessary, because any nearest neighbour search requires traversal to at least one leaf of the tree. It is also clear that no more than N nodes are searched: the algorithm visits each node at most once. Figure 6.5 graphically shows why we might expect considerably fewer than N nodes to be visited: the shaded areas correspond to areas of the kd-tree which were cut off.

The important values are (i) the worst case number of inspections and (ii) the expected number of inspections. It is actually easy to construct worst case distributions of points which will force nearly all the nodes to be inspected. In Figure 6.6, the tree is two-dimensional, and the points are scattered along the circumference of a circle. If we request the nearest neighbour with the target close to the centre of the circle, it will be necessary for each rectangle, and hence each leaf, to be inspected (this is in order to ensure that there is no point lying inside the circle in any rectangle).

Calculation of the expected number of inspections is difficult, because the analysis depends critically on the expected distribution of the points in the kd-tree, and the expected distribution of the target points presented to the nearest neighbour algorithm. The analysis is performed in [Friedman et al., 1977]. This paper considers the expected number of hyperrectangles corresponding to leaf nodes which will provably need to be searched. Such hyperrectangles intersect the volume enclosed by a hypersphere centred on the query point whose surface passes through the nearest neighbour. For example, in Figure 6.5 the hypersphere is shown (in this case a circle), and the number of intersecting hyperrectangles is two.

Figure 6.5: Generally during a nearest neighbour search only a few leaf nodes need to be inspected.

Figure 6.6: A bad distribution which forces almost all nodes to be inspected.

The paper shows that the expected number of intersecting hyperrectangles is independent of N, the number of exemplars. The asymptotic search time is thus logarithmic, because the time to descend from the root of the tree to the leaves is logarithmic (in a balanced tree), and then an expected constant amount of backtracking is required.

However, this reasoning was based on the assumption that the hyperrectangles in the tree tend to be hypercubic in shape. Empirical evidence in my investigations has shown that this is not generally the case for their tree-building strategy. This is discussed and demonstrated in Section 6.7. A second danger is that the cost, while independent of N, is exponentially dependent on k, the dimensionality of the domain vectors.¹ Thus theoretical analysis provides some insight into the cost, but here, empirical investigation will be used to examine the expense of nearest neighbour search in practice.

### 6.6 Empirical Behaviour

In this section I investigate the empirical behaviour of the nearest neighbour searching algorithm. We expect that the number of nodes inspected in the tree varies according to the following properties of the tree:

- N, the size of the tree.
- k_dom, the dimensionality of the domain vectors in the tree. This value is the k in kd-tree.
- d_distrib, the distribution of the domain vectors. This can be quantified as the "true" dimensionality of the vectors. For example, if the vectors had three components, but all lay on the surface of a sphere, then the underlying dimensionality would be 2. In general, discovery of the underlying dimensionality of a given sample of points is extremely difficult, but for these tests it is a straightforward matter to generate such points. To make a kd-tree with underlying dimensionality d_distrib, I use randomly generated k_dom-dimensional domain vectors which lie on a d_distrib-dimensional hyperelliptical surface.
The random vector generation algorithm is as follows. Generate d_distrib random angles θ_i ∈ [0, 2π), where 0 ≤ i < d_distrib. Then let the jth component of the vector be Π_{i=0}^{d_distrib-1} sin(θ_i + φ_ij). The phase angles φ_ij are defined as φ_ij = π/2 if the jth bit of the binary representation of i is 1, and zero otherwise.

- d_target, the probability distribution from which the search target vector will be selected. I shall assume that this distribution is the same as that which determines the domain vectors. This is indeed what will happen when the kd-tree is used for learning control.

¹ This was pointed out to the author by N. Maclaren.
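The generation scheme above can be sketched as follows. This is my transcription of the construction in the text, not code from the thesis; `math.prod` requires Python 3.8+:

```python
import math
import random

def random_point(k_dom, d_distrib, rng=random):
    """A k_dom-component vector lying on a d_distrib-dimensional
    hyperelliptical surface, per the construction in the text."""
    theta = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(d_distrib)]

    def phase(i, j):
        # phi_ij = pi/2 if the j-th bit of the binary representation of i is 1.
        return math.pi / 2 if (i >> j) & 1 else 0.0

    # j-th component is the product over i of sin(theta_i + phi_ij).
    return tuple(
        math.prod(math.sin(theta[i] + phase(i, j)) for i in range(d_distrib))
        for j in range(k_dom)
    )
```

Since each component is a product of sines, every coordinate lies in [-1, 1], and varying only d_distrib angles constrains the points to a d_distrib-dimensional surface regardless of k_dom.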

Figure 6.7: Number of inspections required during a nearest neighbour search against the size of the kd-tree. In this experiment the tree was four-dimensional and the underlying distribution of the points was three-dimensional.

In the following sections I investigate how performance depends on each of these properties.

#### 6.6.1 Performance against Tree Size

Figures 6.7 and 6.8 graph the number of nodes inspected against the number of nodes in the entire kd-tree. Each value was derived by generating a random kd-tree, and then requesting 500 random nearest neighbour searches on the kd-tree. The average number of inspected nodes was recorded. A node was counted as being inspected if the distance between it and the target was computed. Figure 6.7 was obtained from a 4d-tree with an underlying distribution d_distrib = 3. Figure 6.8 used an 8d-tree with an underlying distribution d_distrib = 8.

It is immediately clear that after a certain point, the expense of a nearest neighbour search has no detectable increase with the size of the tree. This agrees with the proposed model of search cost: logarithmic with a large additive constant term.

#### 6.6.2 Performance against the "k" in kd-tree

Figure 6.9 graphs the number of nodes inspected against k_dom, the number of components in the kd-tree's domain vectors, for a 10,000 node tree. The underlying dimensionality was also k_dom. The number of inspections per search rises very quickly, possibly exponentially, with k_dom. This behaviour, the massive increase in cost with dimension, is familiar in computational geometry.

Figure 6.8: Number of inspections against kd-tree size for an eight-dimensional tree with an eight-dimensional underlying distribution.

Figure 6.9: Number of inspections graphed against tree dimension. In these experiments the points had an underlying distribution with the same dimensionality as the tree.

Figure 6.10: Number of inspections graphed against underlying dimensionality for a fourteen-dimensional tree.

#### 6.6.3 Performance against the Distribution Dimensionality

This experiment confirms that it is d_distrib, the distribution dimension from which the points were selected, rather than k_dom, which critically affects the search performance. The trials for Figure 6.10 used a 10,000 node kd-tree with a domain dimension of 14, for various values of d_distrib. The important observation is that for 14d-trees, the performance does improve greatly if the underlying distribution-dimension is relatively low. Conversely, Figure 6.11 shows that for a fixed (4-d) underlying dimensionality, the search expense does not seem to increase any worse than linearly with k_dom.

#### 6.6.4 When the Target is not Chosen from the kd-tree's Distribution

In this experiment the points were distributed on a three-dimensional elliptical surface in ten-dimensional space. The target vector was, however, chosen at random from a ten-dimensional distribution. The kd-tree contained 10,000 points. The average number of inspections over 50 searches was found to be 8,396. This compares with another experiment in which both points and target were distributed in ten dimensions, where the average number of inspections was only 248. The reason for the appalling performance was exemplified in Figure 6.6: if the target is very far from its nearest neighbour then very many leaf nodes must be checked.

#### 6.6.5 Conclusion

The speed of the search (measured as the number of distance computations required) seems to vary:

Figure 6.11: Number of inspections graphed against tree dimension, given a constant four-dimensional underlying distribution.

- only marginally with tree size: if the tree is sufficiently large with respect to the number of dimensions, the cost is essentially constant;
- very quickly with the dimensionality of the distribution of the datapoints, d_distrib;
- linearly with the number of components in the kd-tree's domain (k_dom), given a fixed distribution dimension (d_distrib).

There is also evidence to suggest that unless the target vector is drawn from the same distribution as the kd-tree points, performance can be greatly worsened. These results support the belief that real time searching for nearest neighbours is practical in a robotic system, where we can expect the underlying dimensionality of the data points to be low, roughly less than 10. This need not mean that the vectors in the input space should have fewer than ten components. For data points obtained from robotic systems it will not be easy to decide what the underlying dimensionality is. However, Chapter 10 will show that the data does tend to lie within a number of low dimensional subspaces.

### 6.7 Further kd-tree Operations

In this section I discuss some other operations on kd-trees which are required for use in the SAB learning system. These include incrementally adding a point to a kd-tree, range searching, and selecting a pivot point.

#### 6.7.1 Range Searching a kd-tree

    range-search : exemplar-set × Domain × ℝ → exemplar-set

The abstract range search operation on an exemplar-set finds all exemplars whose domain vectors are within a given distance of a target point:

    range-search(E, d, r) = {(d', r') ∈ E | (d - d')² < r²}

This is implemented by a modified nearest neighbour search. The modifications are that (i) the initial distance is not reduced as closer points are discovered and (ii) all discovered points within the distance are returned, not just the nearest. The complexity of this operation is shown, in [Preparata and Shamos, 1985], to still be logarithmic in N (the size of E) for a fixed range size.

#### 6.7.2 Choosing a Pivot from an Exemplar Set

The tree building algorithm of Section 6.3 requires that a pivot and a splitting plane be selected from which to build the root of a kd-tree. It is desirable for the tree to be reasonably balanced, and also for the shapes of the hyperregions corresponding to leaf nodes to be fairly equally proportioned. The first criterion is important because a badly unbalanced tree would perhaps have O(N) accessing behaviour instead of O(log N). The second criterion is in order to maximize cutoff opportunities for the nearest neighbour search. This is difficult to formalize, but can be motivated by an illustration. Figure 6.12 shows a perfectly balanced kd-tree in which the leaf regions are very non-square. Figure 6.13 illustrates a kd-tree representing the same set of points, but which promotes squareness at the expense of some balance.

One pivoting strategy which would lead to a perfectly balanced tree, and which is suggested in [Omohundro, 1987], is to pick the splitting dimension as that with maximum variance, and let the pivot be the point with the median split component.
This will, it is hoped, tend to promote square regions, because having split in one dimension, the next level in the tree is unlikely to find that the same dimension has maximum spread, and so will choose a different dimension. For uniform distributions this tends to perform reasonably well, but for badly skewed distributions the hyperregions tend to take long thin shapes. This is exemplified in Figure 6.12, which has been balanced using this standard median pivot choice.

To avoid this bad case, I choose a pivot which splits the exemplar set in the middle of the range of the most spread dimension. As can be seen in Figure 6.13, this tends to favour squarer regions at the expense of a slight imbalance in the kd-tree. This means that large empty areas of space are filled with only a few hyperrectangles which are themselves large. Thus, the number of leaf nodes which need to be inspected in case they contain a nearer neighbour is smaller than for the original case, which had many small thin hyperrectangles.
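The "middle of the range of the most spread dimension" choice can be sketched as follows. This is a hedged reading of the strategy just described, with `hr_min`/`hr_max` bounding the current hyperrectangle as in Section 6.4:

```python
def choose_pivot(exset, hr_min, hr_max):
    """Split along the longest dimension of the current hyperrectangle and
    pick the exemplar whose split-component is closest to its middle."""
    split = max(range(len(hr_min)), key=lambda i: hr_max[i] - hr_min[i])
    middle = (hr_min[split] + hr_max[split]) / 2.0
    ex = min(exset, key=lambda e: abs(e[0][split] - middle))
    return ex, split
```

Note that the chosen exemplar may sit far from the median of the set, which is exactly the balance-for-squareness trade described above.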

Figure 6.12: A 2d-tree balanced using the 'median of the most spread dimension' pivoting strategy.

Figure 6.13: A 2d-tree balanced using the 'closest to the centre of the widest dimension' pivoting strategy.

My pivot choice algorithm is to firstly choose the splitting dimension as the longest dimension of the current hyperrectangle, and then choose the pivot as the point closest to the middle of the hyperrectangle along this dimension. Occasionally, this pivot may even be an extreme point along its dimension, leading to an entirely unbalanced node. This is worth it, because it creates a large empty leaf node. It is possible, but extremely unlikely, that the points could be distributed in such a way as to cause the tree to have one empty child at every level. This would be unacceptable, and so above a certain depth threshold, the pivots are chosen using the standard median technique. Selecting the median as the split and selecting the point closest to the centre of the range are both O(N) operations, and so either way a tree rebalance is O(N log N).

#### 6.7.3 Incrementally Adding a Point to a kd-tree

Firstly, the leaf node which contains the new point is computed. The hyperrectangle corresponding to this leaf is also obtained (see Section 6.4 for the hyperrectangle implementation). When the leaf node is found it may either (i) be empty, in which case it is simply replaced by a new singleton node, or (ii) contain a singleton node. In case (ii) the singleton node must be given a child, and so its previously irrelevant split field must be defined. The split field should be chosen to preserve the squareness of the new sub-hyperrectangles. A simple heuristic is used: the split dimension is chosen as the dimension in which the hyperrectangle is longest. This heuristic is motivated by the same requirement as for tree balancing, that the regions should be as square as possible, even if this means some loss of balance. This splitting choice is just a heuristic, and there is no guarantee that a series of points added in this way will preserve the balance of the kd-tree, nor that the hyperrectangles will be well shaped for nearest neighbour search.
Thus, on occasion (such as when the depth exceeds a small multiple of the best possible depth) the tree is rebuilt. Incremental addition costs O(log N).

#### 6.7.4 Q Nearest Neighbours

This uses a modified version of the nearest neighbour search. Instead of only searching within a sphere whose radius is the closest distance yet found, the search is within a sphere whose radius is the Qth closest yet found. Until Q points have been found, this distance is infinity.

#### 6.7.5 Deleting a Point from a kd-tree

If the point is at a leaf, this is straightforward. Otherwise, it is difficult, because the structure of both trees below this node is pivoted around the point we wish to remove. One solution would be to rebuild the tree below the deleted point, but on occasion this would be very expensive. My solution is to mark the point as deleted with an extra field in the kd-tree node, and to ignore deletion nodes in nearest neighbour and similar searches. When the tree is next rebuilt, all deletion nodes are removed.
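The "radius of the Qth closest yet found" bookkeeping of Section 6.7.4 is conveniently done with a bounded max-heap. The sketch below applies it to a plain scan; in the kd-tree search the same heap simply replaces the single best distance. The names are mine, not the thesis's:

```python
import heapq
import math

def q_nearest(exlist, dom, q):
    """Q nearest neighbours of dom by scanning, returned nearest first.
    The effective search radius is the Q-th closest distance found so far
    (infinity until Q points have been seen)."""
    heap = []  # max-heap via negated distances; holds at most q entries
    for idx, ex in enumerate(exlist):
        d = math.dist(dom, ex[0])
        radius = -heap[0][0] if len(heap) == q else math.inf
        if d < radius:
            item = (-d, idx, ex)  # idx breaks ties without comparing exemplars
            if len(heap) == q:
                heapq.heapreplace(heap, item)
            else:
                heapq.heappush(heap, item)
    return [ex for _, _, ex in sorted(heap, reverse=True)]
```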

## Bibliography

[Bentley, 1980] J. L. Bentley. Multidimensional Divide and Conquer. Communications of the ACM, 23(4), 1980.

[Friedman et al., 1977] J. H. Friedman, J. L. Bentley, and R. A. Finkel. An Algorithm for Finding Best Matches in Logarithmic Expected Time. ACM Transactions on Mathematical Software, 3(3):209-226, September 1977.

[Omohundro, 1987] S. M. Omohundro. Efficient Algorithms with Neural Network Behaviour. Journal of Complex Systems, 1(2):273-347, 1987.

[Preparata and Shamos, 1985] F. P. Preparata and M. Shamos. Computational Geometry. Springer-Verlag, 1985.


### A Non-Linear Schema Theorem for Genetic Algorithms

A Non-Linear Schema Theorem for Genetic Algorithms William A Greene Computer Science Department University of New Orleans New Orleans, LA 70148 bill@csunoedu 504-280-6755 Abstract We generalize Holland

### Classification algorithm in Data mining: An Overview

Classification algorithm in Data mining: An Overview S.Neelamegam #1, Dr.E.Ramaraj *2 #1 M.phil Scholar, Department of Computer Science and Engineering, Alagappa University, Karaikudi. *2 Professor, Department

### Topology-based network security

Topology-based network security Tiit Pikma Supervised by Vitaly Skachek Research Seminar in Cryptography University of Tartu, Spring 2013 1 Introduction In both wired and wireless networks, there is the

### MATHEMATICS CLASS - XII BLUE PRINT - II. (1 Mark) (4 Marks) (6 Marks)

BLUE PRINT - II MATHEMATICS CLASS - XII S.No. Topic VSA SA LA TOTAL ( Mark) (4 Marks) (6 Marks). (a) Relations and Functions 4 () 6 () 0 () (b) Inverse trigonometric Functions. (a) Matrices Determinants

### Binary Search Trees. A Generic Tree. Binary Trees. Nodes in a binary search tree ( B-S-T) are of the form. P parent. Key. Satellite data L R

Binary Search Trees A Generic Tree Nodes in a binary search tree ( B-S-T) are of the form P parent Key A Satellite data L R B C D E F G H I J The B-S-T has a root node which is the only node whose parent

### Outline BST Operations Worst case Average case Balancing AVL Red-black B-trees. Binary Search Trees. Lecturer: Georgy Gimel farb

Binary Search Trees Lecturer: Georgy Gimel farb COMPSCI 220 Algorithms and Data Structures 1 / 27 1 Properties of Binary Search Trees 2 Basic BST operations The worst-case time complexity of BST operations

### can be extended into NC and RNC algorithms, respectively, that solve more

Extending NC and RNC Algorithms Nimrod Megiddo A technique is presented by which NC and RNC algorithms for some problems can be extended into NC and RNC algorithms, respectively, that solve more general

### Efficient visual search of local features. Cordelia Schmid

Efficient visual search of local features Cordelia Schmid Visual search change in viewing angle Matches 22 correct matches Image search system for large datasets Large image dataset (one million images

### Full and Complete Binary Trees

Full and Complete Binary Trees Binary Tree Theorems 1 Here are two important types of binary trees. Note that the definitions, while similar, are logically independent. Definition: a binary tree T is full

### Section 2.1 Rectangular Coordinate Systems

P a g e 1 Section 2.1 Rectangular Coordinate Systems 1. Pythagorean Theorem In a right triangle, the lengths of the sides are related by the equation where a and b are the lengths of the legs and c is

### 28 Closest-Point Problems -------------------------------------------------------------------

28 Closest-Point Problems ------------------------------------------------------------------- Geometric problems involving points on the plane usually involve implicit or explicit treatment of distances

### Approximation Algorithms

Approximation Algorithms or: How I Learned to Stop Worrying and Deal with NP-Completeness Ong Jit Sheng, Jonathan (A0073924B) March, 2012 Overview Key Results (I) General techniques: Greedy algorithms

### Computational Geometry

and range trees Lecture 6: and range trees Lecture 7: and range trees Database queries 1D range trees Databases Databases store records or objects Personnel database: Each employee has a name, id code,

### Faster deterministic integer factorisation

David Harvey (joint work with Edgar Costa, NYU) University of New South Wales 25th October 2011 The obvious mathematical breakthrough would be the development of an easy way to factor large prime numbers

### 4.4 The recursion-tree method

4.4 The recursion-tree method Let us see how a recursion tree would provide a good guess for the recurrence = 3 4 ) Start by nding an upper bound Floors and ceilings usually do not matter when solving

### Balanced search trees

Lecture 8 Balanced search trees 8.1 Overview In this lecture we discuss search trees as a method for storing data in a way that supports fast insert, lookup, and delete operations. (Data structures handling

### Multimedia Databases. Wolf-Tilo Balke Philipp Wille Institut für Informationssysteme Technische Universität Braunschweig http://www.ifis.cs.tu-bs.

Multimedia Databases Wolf-Tilo Balke Philipp Wille Institut für Informationssysteme Technische Universität Braunschweig http://www.ifis.cs.tu-bs.de 14 Previous Lecture 13 Indexes for Multimedia Data 13.1

### ! Solve problem to optimality. ! Solve problem in poly-time. ! Solve arbitrary instances of the problem. !-approximation algorithm.

Approximation Algorithms Chapter Approximation Algorithms Q Suppose I need to solve an NP-hard problem What should I do? A Theory says you're unlikely to find a poly-time algorithm Must sacrifice one of

### princeton univ. F 13 cos 521: Advanced Algorithm Design Lecture 6: Provable Approximation via Linear Programming Lecturer: Sanjeev Arora

princeton univ. F 13 cos 521: Advanced Algorithm Design Lecture 6: Provable Approximation via Linear Programming Lecturer: Sanjeev Arora Scribe: One of the running themes in this course is the notion of

### CSE 5311 Homework 2 Solution

CSE 5311 Homework 2 Solution Problem 6.2-6 Show that the worst-case running time of MAX-HEAPIFY on a heap of size n is Ω(lg n). (Hint: For a heap with n nodes, give node values that cause MAX- HEAPIFY

### NEW MEXICO Grade 6 MATHEMATICS STANDARDS

PROCESS STANDARDS To help New Mexico students achieve the Content Standards enumerated below, teachers are encouraged to base instruction on the following Process Standards: Problem Solving Build new mathematical

### Clustering Via Decision Tree Construction

Clustering Via Decision Tree Construction Bing Liu 1, Yiyuan Xia 2, and Philip S. Yu 3 1 Department of Computer Science, University of Illinois at Chicago, 851 S. Morgan Street, Chicago, IL 60607-7053.

### Redundant Bit Vectors for the Audio Fingerprinting Server. John Platt Jonathan Goldstein Chris Burges

Redundant Bit Vectors for the Audio Fingerprinting Server John Platt Jonathan Goldstein Chris Burges Structure of Talk. Problem Statement 2. Problems with Existing Techniques 3. Bit Vectors 4. Partitioning

### Math 181 Spring 2007 HW 1 Corrected

Math 181 Spring 2007 HW 1 Corrected February 1, 2007 Sec. 1.1 # 2 The graphs of f and g are given (see the graph in the book). (a) State the values of f( 4) and g(3). Find 4 on the x-axis (horizontal axis)

### Introduction to Support Vector Machines. Colin Campbell, Bristol University

Introduction to Support Vector Machines Colin Campbell, Bristol University 1 Outline of talk. Part 1. An Introduction to SVMs 1.1. SVMs for binary classification. 1.2. Soft margins and multi-class classification.

### Module1. x 1000. y 800.

Module1 1 Welcome to the first module of the course. It is indeed an exciting event to share with you the subject that has lot to offer both from theoretical side and practical aspects. To begin with,

Chapter 4 Multi-Stage Interconnection Networks The general concept of the multi-stage interconnection network, together with its routing properties, have been used in the preceding chapter to describe

### Binary Space Partitions for Axis-Parallel Line Segments: Size-Height Tradeoffs

Binary Space Partitions for Axis-Parallel Line Segments: Size-Height Tradeoffs Sunil Arya Department of Computer Science The Hong Kong University of Science and Technology Clear Water Bay, Kowloon, Hong

### Systems of Linear Equations in Two Variables

Chapter 6 Systems of Linear Equations in Two Variables Up to this point, when solving equations, we have always solved one equation for an answer. However, in the previous chapter, we saw that the equation

### NEW YORK STATE TEACHER CERTIFICATION EXAMINATIONS

NEW YORK STATE TEACHER CERTIFICATION EXAMINATIONS TEST DESIGN AND FRAMEWORK September 2014 Authorized for Distribution by the New York State Education Department This test design and framework document

### Clustering. Chapter 7. 7.1 Introduction to Clustering Techniques. 7.1.1 Points, Spaces, and Distances

240 Chapter 7 Clustering Clustering is the process of examining a collection of points, and grouping the points into clusters according to some distance measure. The goal is that points in the same cluster

### Current Standard: Mathematical Concepts and Applications Shape, Space, and Measurement- Primary

Shape, Space, and Measurement- Primary A student shall apply concepts of shape, space, and measurement to solve problems involving two- and three-dimensional shapes by demonstrating an understanding of:

### I/O-Efficient Spatial Data Structures for Range Queries

I/O-Efficient Spatial Data Structures for Range Queries Lars Arge Kasper Green Larsen MADALGO, Department of Computer Science, Aarhus University, Denmark E-mail: large@madalgo.au.dk,larsen@madalgo.au.dk

### Linear Codes. Chapter 3. 3.1 Basics

Chapter 3 Linear Codes In order to define codes that we can encode and decode efficiently, we add more structure to the codespace. We shall be mainly interested in linear codes. A linear code of length

### Efficiency of algorithms. Algorithms. Efficiency of algorithms. Binary search and linear search. Best, worst and average case.

Algorithms Efficiency of algorithms Computational resources: time and space Best, worst and average case performance How to compare algorithms: machine-independent measure of efficiency Growth rate Complexity

### Introduction to Voronoi Diagrams

C.S. 252 Prof. Roberto Tamassia Computational Geometry Sem. II, 1992 1993 Lecture 13 Date: March 22, 1993 Scribe: Scott S. Snibbe 1 Introduction Introduction to Voronoi Diagrams This lecture introduces

### root node level: internal node edge leaf node CS@VT Data Structures & Algorithms 2000-2009 McQuain

inary Trees 1 A binary tree is either empty, or it consists of a node called the root together with two binary trees called the left subtree and the right subtree of the root, which are disjoint from each

### Utah Core Curriculum for Mathematics

Core Curriculum for Mathematics correlated to correlated to 2005 Chapter 1 (pp. 2 57) Variables, Expressions, and Integers Lesson 1.1 (pp. 5 9) Expressions and Variables 2.2.1 Evaluate algebraic expressions

### Algebra I. In this technological age, mathematics is more important than ever. When students

In this technological age, mathematics is more important than ever. When students leave school, they are more and more likely to use mathematics in their work and everyday lives operating computer equipment,

### 4 Basics of Trees. Petr Hliněný, FI MU Brno 1 FI: MA010: Trees and Forests

4 Basics of Trees Trees, actually acyclic connected simple graphs, are among the simplest graph classes. Despite their simplicity, they still have rich structure and many useful application, such as in

### Solving nonlinear equations in one variable

Chapter Solving nonlinear equations in one variable Contents.1 Bisection method............................. 7. Fixed point iteration........................... 8.3 Newton-Raphson method........................

### Binary Search. Search for x in a sorted array A.

Divide and Conquer A general paradigm for algorithm design; inspired by emperors and colonizers. Three-step process: 1. Divide the problem into smaller problems. 2. Conquer by solving these problems. 3.

### Lecture Notes on Spanning Trees

Lecture Notes on Spanning Trees 15-122: Principles of Imperative Computation Frank Pfenning Lecture 26 April 26, 2011 1 Introduction In this lecture we introduce graphs. Graphs provide a uniform model

### Clustering. Danilo Croce Web Mining & Retrieval a.a. 2015/201 16/03/2016

Clustering Danilo Croce Web Mining & Retrieval a.a. 2015/201 16/03/2016 1 Supervised learning vs. unsupervised learning Supervised learning: discover patterns in the data that relate data attributes with

### 1/1 7/4 2/2 12/7 10/30 12/25

Binary Heaps A binary heap is dened to be a binary tree with a key in each node such that: 1. All leaves are on, at most, two adjacent levels. 2. All leaves on the lowest level occur to the left, and all

### Persistent Data Structures

6.854 Advanced Algorithms Lecture 2: September 9, 2005 Scribes: Sommer Gentry, Eddie Kohler Lecturer: David Karger Persistent Data Structures 2.1 Introduction and motivation So far, we ve seen only ephemeral

### Improved Billboard Clouds for Extreme Model Simplification

Improved Billboard Clouds for Extreme Model Simplification I.-T. Huang, K. L. Novins and B. C. Wünsche Graphics Group, Department of Computer Science, University of Auckland, Private Bag 92019, Auckland,

### Manifold Learning Examples PCA, LLE and ISOMAP

Manifold Learning Examples PCA, LLE and ISOMAP Dan Ventura October 14, 28 Abstract We try to give a helpful concrete example that demonstrates how to use PCA, LLE and Isomap, attempts to provide some intuition

### Lecture 4 Online and streaming algorithms for clustering

CSE 291: Geometric algorithms Spring 2013 Lecture 4 Online and streaming algorithms for clustering 4.1 On-line k-clustering To the extent that clustering takes place in the brain, it happens in an on-line

### Efficient Data Structures for Decision Diagrams

Artificial Intelligence Laboratory Efficient Data Structures for Decision Diagrams Master Thesis Nacereddine Ouaret Professor: Supervisors: Boi Faltings Thomas Léauté Radoslaw Szymanek Contents Introduction...

### OPTIMUM PARTITION PARAMETER OF DIVIDE-AND-CONQUER ALGORITHM FOR SOLVING CLOSEST-PAIR PROBLEM

OPTIMUM PARTITION PARAMETER OF DIVIDE-AND-CONQUER ALGORITHM FOR SOLVING CLOSEST-PAIR PROBLEM Mohammad Zaidul Karim 1 and Nargis Akter 2 1 Department of Computer Science and Engineering, Daffodil International

### A. V. Gerbessiotis CS Spring 2014 PS 3 Mar 24, 2014 No points

A. V. Gerbessiotis CS 610-102 Spring 2014 PS 3 Mar 24, 2014 No points Problem 1. Suppose that we insert n keys into a hash table of size m using open addressing and uniform hashing. Let p(n, m) be the

### The divide and conquer strategy has three basic parts. For a given problem of size n,

1 Divide & Conquer One strategy for designing efficient algorithms is the divide and conquer approach, which is also called, more simply, a recursive approach. The analysis of recursive algorithms often

### Mathematics Standards

1 Table of Contents Mathematics Standards Subject Pages Algebra 1-2 2-4 Algebra 3-4 5-6 AP Calculus AB and BC Standards 7 AP Statistics Standards 8 Consumer Math 9 Geometry 1-2 10-11 Honors Differential

### Graphs without proper subgraphs of minimum degree 3 and short cycles

Graphs without proper subgraphs of minimum degree 3 and short cycles Lothar Narins, Alexey Pokrovskiy, Tibor Szabó Department of Mathematics, Freie Universität, Berlin, Germany. August 22, 2014 Abstract

### Binary Heap Algorithms

CS Data Structures and Algorithms Lecture Slides Wednesday, April 5, 2009 Glenn G. Chappell Department of Computer Science University of Alaska Fairbanks CHAPPELLG@member.ams.org 2005 2009 Glenn G. Chappell

### Data Mining Cluster Analysis: Advanced Concepts and Algorithms. ref. Chapter 9. Introduction to Data Mining

Data Mining Cluster Analysis: Advanced Concepts and Algorithms ref. Chapter 9 Introduction to Data Mining by Tan, Steinbach, Kumar 1 Outline Prototype-based Fuzzy c-means Mixture Model Clustering Density-based

### Planar Tree Transformation: Results and Counterexample

Planar Tree Transformation: Results and Counterexample Selim G Akl, Kamrul Islam, and Henk Meijer School of Computing, Queen s University Kingston, Ontario, Canada K7L 3N6 Abstract We consider the problem

### Introduction to Algorithms March 10, 2004 Massachusetts Institute of Technology Professors Erik Demaine and Shafi Goldwasser Quiz 1.

Introduction to Algorithms March 10, 2004 Massachusetts Institute of Technology 6.046J/18.410J Professors Erik Demaine and Shafi Goldwasser Quiz 1 Quiz 1 Do not open this quiz booklet until you are directed

### Copyrighted Material. Chapter 1 DEGREE OF A CURVE

Chapter 1 DEGREE OF A CURVE Road Map The idea of degree is a fundamental concept, which will take us several chapters to explore in depth. We begin by explaining what an algebraic curve is, and offer two

### 2 The Euclidean algorithm

2 The Euclidean algorithm Do you understand the number 5? 6? 7? At some point our level of comfort with individual numbers goes down as the numbers get large For some it may be at 43, for others, 4 In

### 1. What s wrong with the following proofs by induction?

ArsDigita University Month : Discrete Mathematics - Professor Shai Simonson Problem Set 4 Induction and Recurrence Equations Thanks to Jeffrey Radcliffe and Joe Rizzo for many of the solutions. Pasted

### Converting a Number from Decimal to Binary

Converting a Number from Decimal to Binary Convert nonnegative integer in decimal format (base 10) into equivalent binary number (base 2) Rightmost bit of x Remainder of x after division by two Recursive

### KDDCS: A Load-Balanced In-Network Data-Centric Storage Scheme for Sensor Networks

KDDCS: A Load-Balanced In-Network Data-Centric Storage Scheme for Sensor Networks Mohamed Aly Kirk Pruhs Panos K. Chrysanthis Department of Computer Science University of Pittsburgh Pittsburgh, PA 1526,

### Rational inequality. Sunil Kumar Singh. 1 Sign scheme or diagram for rational function

OpenStax-CNX module: m15464 1 Rational inequality Sunil Kumar Singh This work is produced by OpenStax-CNX and licensed under the Creative Commons Attribution License 2.0 Rational inequality is an inequality

### Number Sense and Operations

Number Sense and Operations representing as they: 6.N.1 6.N.2 6.N.3 6.N.4 6.N.5 6.N.6 6.N.7 6.N.8 6.N.9 6.N.10 6.N.11 6.N.12 6.N.13. 6.N.14 6.N.15 Demonstrate an understanding of positive integer exponents

### Going Big in Data Dimensionality:

LUDWIG- MAXIMILIANS- UNIVERSITY MUNICH DEPARTMENT INSTITUTE FOR INFORMATICS DATABASE Going Big in Data Dimensionality: Challenges and Solutions for Mining High Dimensional Data Peer Kröger Lehrstuhl für

### CS420/520 Algorithm Analysis Spring 2010 Lecture 04

CS420/520 Algorithm Analysis Spring 2010 Lecture 04 I. Chapter 4 Recurrences A. Introduction 1. Three steps applied at each level of a recursion a) Divide into a number of subproblems that are smaller

### Arrangements And Duality

Arrangements And Duality 3.1 Introduction 3 Point configurations are tbe most basic structure we study in computational geometry. But what about configurations of more complicated shapes? For example,

### Mathematical Procedures

CHAPTER 6 Mathematical Procedures 168 CHAPTER 6 Mathematical Procedures The multidisciplinary approach to medicine has incorporated a wide variety of mathematical procedures from the fields of physics,

### Machine Learning Final Project Spam Email Filtering

Machine Learning Final Project Spam Email Filtering March 2013 Shahar Yifrah Guy Lev Table of Content 1. OVERVIEW... 3 2. DATASET... 3 2.1 SOURCE... 3 2.2 CREATION OF TRAINING AND TEST SETS... 4 2.3 FEATURE

### Lecture 7: Approximation via Randomized Rounding

Lecture 7: Approximation via Randomized Rounding Often LPs return a fractional solution where the solution x, which is supposed to be in {0, } n, is in [0, ] n instead. There is a generic way of obtaining

### On the Order of Search for Personal Identification with Biometric Images

On the Order of Search for Personal Identification with Biometric Images Kensuke Baba Library, Kyushu University 10-1, Hakozaki 6, Higashi-ku Fukuoka, 812-8581, Japan baba.kensuke.060@m.kyushu-u.ac.jp

### Final Exam, Spring 2007

10-701 Final Exam, Spring 2007 1. Personal info: Name: Andrew account: E-mail address: 2. There should be 16 numbered pages in this exam (including this cover sheet). 3. You can use any material you brought:

### RIT scores between 191 and 200

Measures of Academic Progress for Mathematics RIT scores between 191 and 200 Number Sense and Operations Whole Numbers Solve simple addition word problems Find and extend patterns Demonstrate the associative,

### Medical Information Management & Mining. You Chen Jan,15, 2013 You.chen@vanderbilt.edu

Medical Information Management & Mining You Chen Jan,15, 2013 You.chen@vanderbilt.edu 1 Trees Building Materials Trees cannot be used to build a house directly. How can we transform trees to building materials?

### Local outlier detection in data forensics: data mining approach to flag unusual schools

Local outlier detection in data forensics: data mining approach to flag unusual schools Mayuko Simon Data Recognition Corporation Paper presented at the 2012 Conference on Statistical Detection of Potential

### Visualising the distribution of primes

University of Luxembourg BASI Mathematics Experimental Mathematics Lab Project Visualising the distribution of primes Authors: Jim Barthel Pietro Sgobba Fa Zhu Supervisors: Prof. Gabor Wiese Dr. Panagiotis

### x 2 f 2 e 1 e 4 e 3 e 2 q f 4 x 4 f 3 x 3

Karl-Franzens-Universitat Graz & Technische Universitat Graz SPEZIALFORSCHUNGSBEREICH F 003 OPTIMIERUNG und KONTROLLE Projektbereich DISKRETE OPTIMIERUNG O. Aichholzer F. Aurenhammer R. Hainz New Results

### Many algorithms, particularly divide and conquer algorithms, have time complexities which are naturally

Recurrence Relations Many algorithms, particularly divide and conquer algorithms, have time complexities which are naturally modeled by recurrence relations. A recurrence relation is an equation which

### Fast Matching of Binary Features

Fast Matching of Binary Features Marius Muja and David G. Lowe Laboratory for Computational Intelligence University of British Columbia, Vancouver, Canada {mariusm,lowe}@cs.ubc.ca Abstract There has been

### Factoring Patterns in the Gaussian Plane

Factoring Patterns in the Gaussian Plane Steve Phelps Introduction This paper describes discoveries made at the Park City Mathematics Institute, 00, as well as some proofs. Before the summer I understood

### Distance based clustering

// Distance based clustering Chapter ² ² Clustering Clustering is the art of finding groups in data (Kaufman and Rousseeuw, 99). What is a cluster? Group of objects separated from other clusters Means

### ! Solve problem to optimality. ! Solve problem in poly-time. ! Solve arbitrary instances of the problem. #-approximation algorithm.

Approximation Algorithms 11 Approximation Algorithms Q Suppose I need to solve an NP-hard problem What should I do? A Theory says you're unlikely to find a poly-time algorithm Must sacrifice one of three

### Pythagorean Theorem. Overview. Grade 8 Mathematics, Quarter 3, Unit 3.1. Number of instructional days: 15 (1 day = minutes) Essential questions

Grade 8 Mathematics, Quarter 3, Unit 3.1 Pythagorean Theorem Overview Number of instructional days: 15 (1 day = 45 60 minutes) Content to be learned Prove the Pythagorean Theorem. Given three side lengths,

### Balanced Binary Search Tree

AVL Trees / Slide 1 Balanced Binary Search Tree Worst case height of binary search tree: N-1 Insertion, deletion can be O(N) in the worst case We want a tree with small height Height of a binary tree with