Classification using Intersection Kernel Support Vector Machines is Efficient
To Appear in IEEE Computer Vision and Pattern Recognition 2008, Anchorage

Subhransu Maji, EECS Department, U.C. Berkeley
Alexander C. Berg, Yahoo! Research
Jitendra Malik, EECS Department, U.C. Berkeley

Abstract

Straightforward classification using kernelized SVMs requires evaluating the kernel for a test vector and each of the support vectors. For a class of kernels we show that one can do this much more efficiently. In particular we show that one can build histogram intersection kernel SVMs (IKSVMs) with runtime complexity of the classifier logarithmic in the number of support vectors, as opposed to linear for the standard approach. We further show that by precomputing auxiliary tables we can construct an approximate classifier with constant runtime and space requirements, independent of the number of support vectors, with negligible loss in classification accuracy on various tasks. This approximation also applies to the $1 - \chi^2$ and other kernels of similar form. We also introduce novel features based on multi-level histograms of oriented edge energy and present experiments on various detection datasets. On the INRIA pedestrian dataset an approximate IKSVM classifier based on these features has the current best performance, with a miss rate 13% lower at $10^{-6}$ False Positives Per Window (FPPW) than the linear SVM detector of Dalal & Triggs. On the Daimler Chrysler pedestrian dataset IKSVM gives accuracy comparable to the best results (based on a quadratic SVM) while being about 15× faster. In these experiments our approximate IKSVM is up to 2000× faster than a standard implementation and requires 200× less memory. Finally, we show that a 50× speedup is possible using approximate IKSVM based on spatial pyramid features on the Caltech-101 dataset with negligible loss of accuracy.

1. Introduction

Discriminative classification using Support Vector Machines (SVMs) and variants of boosted decision trees are two of the leading techniques used in vision tasks ranging from detection of objects in images such as faces [18, 27], pedestrians [7, 16] and cars [20], through multicategory object recognition on Caltech-101 [11, 15], to texture discrimination [30]. (This work is funded by ARO MURI W911NF and ONR MURI N.)

Part of the appeal of SVMs is that non-linear decision boundaries can be learnt using the so-called kernel trick. Though SVMs have faster training speed, the runtime complexity of a non-linear SVM classifier is high. Boosted decision trees, on the other hand, have faster classification speed but are significantly slower to train, and the complexity of training can grow exponentially with the number of classes [25]. Thus, linear kernel SVMs have become popular for real-time applications, as they enjoy both fast training and fast classification, with significantly lower memory requirements than non-linear kernels due to the compact representation of the decision function.

Discriminative approaches to recognition problems often depend on comparing distributions of features, e.g. a kernelized SVM where the kernel measures the similarity between histograms describing the features. In order to evaluate the classification function, a test histogram is compared to histograms representing each of the support vectors. This paper presents a technique to greatly speed up that process for histogram comparison functions of a certain form: basically, where the comparison is a linear combination of functions of each coordinate of the histogram.
This more efficient implementation makes a class of kernelized SVMs, used in many of the current most successful object detection and recognition algorithms, efficient enough to apply much more broadly, possibly even in real-time applications. The class of kernels includes the pyramid match and intersection kernels used by Grauman & Darrell [11] and Lazebnik et al. [15], and the χ² kernel used by Bosch et al. [1], Varma & Ray [26], and Chum & Zisserman [5], which together represent some of the best results on the Caltech and PASCAL VOC datasets. In addition, the decision functions learned are generalizations of linear decision functions and offer better classification performance at small additional computational cost. This allows improvements in performance of object detection systems based on linear classifiers, such as those of Dalal & Triggs [7] and Felzenszwalb et al. [9].
These ideas will be illustrated with the histogram intersection kernel; generalizations are introduced in Section 3.3. The histogram intersection kernel [24],

$$k_{HI}(h_a, h_b) = \sum_{i=1}^{n} \min\left(h_a(i), h_b(i)\right),$$

is often used as a measure of similarity between histograms $h_a$ and $h_b$, and because it is positive definite [17] it can be used as a kernel for discriminative classification with SVMs. Recently, intersection kernel SVMs (henceforth referred to as IKSVMs) have been shown to be successful for detection and recognition, e.g. the pyramid match kernel [11] and spatial pyramid matching [15]. Unfortunately, this success typically comes at great computational expense compared to simpler linear SVMs, because non-linear kernels require memory and computation linearly proportional to the number of support vectors. (It has also been shown that the number of support vectors grows linearly with the number of training examples [23], so the complexity can only get worse with more training data.)

Given feature vectors of dimension $n$ and a learned support vector classifier consisting of $m$ support vectors, the time complexity for classification and the space complexity for storing the support vectors of a standard IKSVM classifier are both $O(mn)$. We present an algorithm for IKSVM classification with time complexity $O(n \log m)$ and space complexity $O(mn)$. We then present an approximation scheme whose time and space complexity is $O(n)$, independent of the number of support vectors. The key idea is that for a class of kernels including the intersection kernel, the classifier can be decomposed as a sum of functions, one for each histogram bin, each of which can be computed efficiently. In various experiments with thousands of support vectors we observe speedups and space savings of up to 2000× and 200× respectively, compared to a standard implementation. On Caltech-101, one-vs-all classifiers have on average 175 support vectors and the approximate IKSVM classifier is about 50× faster.

The rest of the paper is structured as follows. Section 2 reviews the state of the art in improving support vector classification speed. Section 3 describes the key insight behind the algorithm and presents various approximations. Section 4 presents our multi-level histogram of oriented gradients feature and experimental results on various datasets. Conclusions are in Section 5.
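Before proceeding, it may help to see what the standard $O(mn)$ evaluation that the following sections accelerate looks like. The sketch below is ours, not the authors' implementation; it assumes a generic NumPy environment, and `alpha_y` holds the products $\alpha_l y_l$ from a trained SVM (see equation 4 in Section 3).

```python
import numpy as np

def iksvm_decision_naive(x, support_vectors, alpha_y, b):
    """Standard IKSVM evaluation: O(m*n) time, all m support vectors stored.

    x               : (n,) test histogram
    support_vectors : (m, n) support vector histograms x_l
    alpha_y         : (m,) products alpha_l * y_l from training
    b               : scalar bias
    """
    # Intersection kernel k(x, x_l) = sum_i min(x(i), x_l(i)), for every support vector.
    k = np.minimum(x[None, :], support_vectors).sum(axis=1)  # shape (m,)
    return float(alpha_y @ k + b)
```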
2. Background

Due to the immense practical importance of SVM-based classifiers, there has been a fair amount of research on reducing their runtime complexity. The classification complexity of an SVM with a non-linear kernel is the number of support vectors times the complexity of evaluating the kernel function (see equation 4); the latter is generally an increasing function of the dimension of the feature vectors. There are two general approaches to speeding up classification: reducing the number of support vectors by constructing sparse representations of the classifier, or reducing the dimension of the features using a coarse-to-fine approach. We discuss each of these briefly.

One class of methods starts with the SVM classifier and constructs sparse representations using a small subset of the support vectors [3, 19]. Instead of a single approximation, one can build a series of approximations with more and more support vectors to obtain a cascade of classifiers, an idea used in [22] to build fast face detectors. Another class of methods builds classifiers with a regularizer in the optimization function that encourages sparseness (e.g. an $L_1$-norm on the α's), or picks support vectors in a greedy manner until a stopping criterion is met [14]. These methods achieve accuracies comparable to the full classifier using only a small fraction of the support vectors on various UCI datasets. However, our technique makes such methods unnecessary for intersection kernels, as we have an exact method with only a logarithmic dependence on the number of support vectors.

The coarse-to-fine approach to speeding up classification is popular in many real-time applications. The idea is to use simpler features and classifiers to reject easy examples quickly in a cascade. This idea has been applied to face detection [12] and recently to pedestrian detection [28], achieving a speedup of 6-8× over a non-cascaded version of the detector. These speedups are orthogonal to the ones we achieve, so our technique combined with a coarse-to-fine approach may lead to even greater speedups.

There is not much literature on quick evaluation of the decision function itself (see [13] for work with related kernels and online learning). The approach closest in spirit to ours is that of Yang et al. [29], who use the Fast Gauss Transform to build efficient classifiers based on the Gaussian kernel. However, their method is approximate and is suitable only when the dimension of the features is small.
3. Algorithm

We begin with a review of support vector machines for classification. Given labeled training data $\{(y_i, x_i)\}_{i=1}^{N}$, with $y_i \in \{-1, +1\}$ and $x_i \in \mathbb{R}^n$, we use the C-SVM formulation [6]. For a kernel on data points, $k(x, z): \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}$, that is the inner product $\Phi(x) \cdot \Phi(z)$ in an unrealized, possibly high dimensional, feature space, the algorithm finds a hyperplane which best separates the data by minimizing

$$\tau(w, \xi) = \frac{1}{2}\|w\|^2 + C \sum_{i=1}^{N} \xi_i \qquad (1)$$

subject to $y_i\left((w \cdot x_i) + b\right) \geq 1 - \xi_i$ and $\xi_i \geq 0$, where $C > 0$ is the tradeoff between regularization and constraint violation. In the dual formulation we maximize

$$W(\alpha) = \sum_{i=1}^{N} \alpha_i - \frac{1}{2} \sum_{i,j} \alpha_i \alpha_j y_i y_j k(x_i, x_j) \qquad (2)$$

subject to

$$0 \leq \alpha_i \leq C \quad \text{and} \quad \sum_i \alpha_i y_i = 0. \qquad (3)$$

The decision function is $\mathrm{sign}(h(x))$, where

$$h(x) = \sum_{l=1}^{m} \alpha_l y_l k(x, x_l) + b. \qquad (4)$$

For clarity, in a slight abuse of notation, the features $x_l$, $l \in \{1, 2, \ldots, m\}$, will be referred to as support vectors. Thus, in general, $m$ kernel computations are needed to classify a point with a kernelized SVM, and all $m$ support vectors must be stored. For linear kernels we can do better: since $k(x, z) = \langle x, z \rangle$, $h$ can be written as $h(x) = \langle w, x \rangle + b$ with $w = \sum_{l=1}^{m} \alpha_l y_l x_l$. As a result, classifying with a linear SVM requires only $O(n)$ operations and $O(n)$ memory.

3.1. Fast Exact Intersection in O(n log m)

We now show that it is possible to speed up classification for IKSVMs. For feature vectors $x, z \in \mathbb{R}_+^n$, the intersection kernel is

$$k(x, z) = \sum_{i=1}^{n} \min\left(x(i), z(i)\right) \qquad (5)$$

and classification is based on evaluating

$$h(x) = \sum_{l=1}^{m} \alpha_l y_l k(x, x_l) + b \qquad (6)$$
$$= \sum_{l=1}^{m} \alpha_l y_l \left( \sum_{i=1}^{n} \min\left(x(i), x_l(i)\right) \right) + b. \qquad (7)$$

The non-linearity of min prevents us from collapsing the sum as we could for linear kernels. This is true in general for any non-linear kernel, including radial basis functions, polynomial kernels, etc. Thus the complexity of evaluating $h(x)$ in the naive way is $O(mn)$. The trick for intersection kernels is that we can exchange the summations in equation 7 to obtain

$$h(x) = \sum_{i=1}^{n} \left( \sum_{l=1}^{m} \alpha_l y_l \min\left(x(i), x_l(i)\right) \right) + b \qquad (8)$$
$$= \sum_{i=1}^{n} h_i(x(i)) + b, \qquad (9)$$

rewriting $h(x)$ as a sum of one-dimensional functions $h_i$, one for each dimension, where

$$h_i(s) = \sum_{l=1}^{m} \alpha_l y_l \min\left(s, x_l(i)\right). \qquad (10)$$

So far we have gained nothing: computing each $h_i(s)$ costs $O(m)$, so the overall complexity of computing $h(x)$ is still $O(mn)$. We now show how to compute each $h_i$ in $O(\log m)$ time.

Consider the function $h_i(s)$ for a fixed value of $i$. Let $\bar{x}_l(i)$ denote the values $x_l(i)$ sorted in increasing order, with correspondingly reordered coefficients $\bar{\alpha}_l$ and labels $\bar{y}_l$. If $s < \bar{x}_1(i)$ then $h_i(s) = s \sum_l \bar{\alpha}_l \bar{y}_l = 0$, since $\sum_l \alpha_l y_l = 0$ by equation 3. Otherwise, let $r$ be the largest integer such that $\bar{x}_r(i) \leq s$. Then we have

$$h_i(s) = \sum_{l=1}^{m} \bar{\alpha}_l \bar{y}_l \min\left(s, \bar{x}_l(i)\right) \qquad (11)$$
$$= \sum_{1 \leq l \leq r} \bar{\alpha}_l \bar{y}_l \bar{x}_l(i) + s \sum_{r < l \leq m} \bar{\alpha}_l \bar{y}_l \qquad (12)$$
$$= A_i(r) + s\,B_i(r), \qquad (13)$$

where we have defined

$$A_i(r) = \sum_{1 \leq l \leq r} \bar{\alpha}_l \bar{y}_l \bar{x}_l(i), \qquad (14)$$
$$B_i(r) = \sum_{r < l \leq m} \bar{\alpha}_l \bar{y}_l. \qquad (15)$$

Equation 13 shows that $h_i$ is piecewise linear. Furthermore, $h_i$ is continuous because

$$h_i(\bar{x}_{r+1}) = A_i(r) + \bar{x}_{r+1} B_i(r) = A_i(r+1) + \bar{x}_{r+1} B_i(r+1).$$

Notice that the functions $A_i$ and $B_i$ are independent of the input data and depend only on the support vectors and $\alpha$. Thus, if we precompute the values $h_i(\bar{x}_r)$, then $h_i(s)$ can be computed by first finding $r$, the position of $s = x(i)$ in the sorted list $\bar{x}(i)$, using binary search, and then linearly interpolating between $h_i(\bar{x}_r)$ and $h_i(\bar{x}_{r+1})$. This requires storing the $\bar{x}_l$ as well as the $h_i(\bar{x}_l)$, or twice the storage of the standard implementation. The runtime complexity of computing $h(x)$ is therefore $O(n \log m)$ as opposed to $O(nm)$, a speedup of $O(m / \log m)$. In our experiments we typically have SVMs with a few thousand support vectors, so the resulting speedup is quite significant.
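The following is a minimal sketch of this construction in NumPy (our code, under the assumption of a trained SVM's support vectors and coefficients; the class name and variable names are ours). It precomputes, per dimension, the sorted breakpoints $\bar{x}_l(i)$ and the exact values $h_i(\bar{x}_r)$ from $A_i$ and $B_i$, and evaluates $h(x)$ with one binary search and one interpolation per dimension.

```python
import numpy as np

class FastIKSVM:
    """Exact IKSVM evaluation in O(n log m) per test vector (Section 3.1)."""

    def __init__(self, support_vectors, alpha_y, b):
        # support_vectors: (m, n); alpha_y: (m,) products alpha_l * y_l; b: bias.
        self.b = b
        # Sort support values independently in each dimension and reorder the
        # corresponding alpha_l * y_l coefficients to match.
        order = np.argsort(support_vectors, axis=0)                      # (m, n)
        self.xbar = np.take_along_axis(support_vectors, order, axis=0)  # sorted x_l(i)
        ay = alpha_y[order]                                              # (m, n)
        # A_i(r) = sum_{l<=r} ay_l * xbar_l(i); B_i(r) = sum_{l>r} ay_l  (eqs. 14, 15)
        A = np.cumsum(ay * self.xbar, axis=0)
        B = ay.sum(axis=0, keepdims=True) - np.cumsum(ay, axis=0)
        # Exact h_i at each breakpoint: h_i(xbar_r) = A_i(r) + xbar_r * B_i(r)  (eq. 13)
        self.h_at_break = A + self.xbar * B                              # (m, n)

    def decision(self, x):
        m, n = self.xbar.shape
        val = self.b
        for i in range(n):
            xb, hb = self.xbar[:, i], self.h_at_break[:, i]
            r = np.searchsorted(xb, x[i], side='right')  # breakpoints <= x[i]
            if r == 0:
                pass              # below all support values: h_i = 0 (sum alpha*y = 0)
            elif r == m:
                val += hb[-1]     # above all support values: h_i is constant
            else:
                # h_i is continuous piecewise linear, so interpolation is exact.
                t = (x[i] - xb[r - 1]) / (xb[r] - xb[r - 1])
                val += (1 - t) * hb[r - 1] + t * hb[r]
        return val
```

The sign of `decision(x)` gives the predicted label; the per-dimension loop is written for clarity and would be vectorized in a production implementation.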
3.2. Fast Approximate Intersection in O(n)

We now show how to approximate the functions $h_i$ in constant time and space, independent of the number of support vectors, which leads to significant savings in memory at classification time. We showed in Section 3.1 that $h_i(s)$ can be represented exactly using $m + 1$ piecewise linear segments. In our experiments we observe that the distributions of the support vector values along each dimension tend to be smooth and concentrated (which is interesting, because the features themselves have heavy-tailed distributions), and that the resulting $h_i$ are also relatively smooth, as seen in Figure 1. This allows approximating each $h_i$ with simpler functions that can be evaluated quickly and require much less storage than the exact method presented above. We consider both piecewise constant and piecewise linear approximations, with uniform spacing between segments in both cases. Uniform spacing allows computing $h_i(s)$ with a single table lookup for the piecewise constant approximation, or two table lookups and a linear interpolation for the piecewise linear approximation. In practice we find that 30-50 linear segments are enough to avoid any loss in classification accuracy, yielding a huge decrease in memory requirements. As an example, in our experiments on the Daimler Chrysler pedestrian dataset we routinely learn classifiers with more than 5000 support vectors; the piecewise linear approximation requires only 30 values per dimension to obtain the same classification accuracy, using less than 1% of the memory.

3.3. Generalizations of Histogram Intersection

The algorithms we propose apply to more than just the histogram intersection kernel. In fact, whenever a function can be decomposed as in equation 9, we can apply the approximation speedup of Section 3.2. One example is the generalized histogram intersection kernel [2],

$$K_{GHI}(x, z) = \sum_{i=1}^{n} \min\left(|x(i)|^{\beta}, |z(i)|^{\beta}\right),$$

which is positive definite for all $\beta > 0$. Chapelle et al. [4] observe that this remapping of the histogram bin values by $x \mapsto x^{\beta}$ improves the performance of linear kernel SVMs to the point of being comparable to RBF kernels on an image classification task over the Corel Stock Photo Collection. In fact, we can use

$$K_{fGHI}(x, z) = \sum_{i=1}^{n} \min\left(f(x(i)), f(z(i))\right) \qquad (16)$$

for an arbitrary non-negative function $f$ as a kernel, evaluate it exactly, and approximate it with our technique. Another example is the $1 - \chi^2$ kernel, which has been used successfully in [1] and can also be approximated.
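Continuing the sketch above (again our code, not the authors' implementation), the tables for the piecewise linear approximation can be built by sampling each exact $h_i$ at uniformly spaced points; evaluation then costs two lookups and an interpolation per dimension, independent of $m$. The same construction covers the generalized kernels of this section, since only the one-dimensional functions $h_i$ change.

```python
import numpy as np

def build_tables(clf, num_bins=30):
    """Sample each exact h_i of a FastIKSVM at num_bins uniform points per dimension."""
    lo = clf.xbar[0, :]      # smallest support value per dimension
    hi = clf.xbar[-1, :]     # largest support value per dimension
    n = clf.xbar.shape[1]
    table = np.empty((num_bins, n))
    for i in range(n):
        grid = np.linspace(lo[i], hi[i], num_bins)
        # The exact h_i is piecewise linear through its breakpoints, so np.interp
        # reproduces it exactly at the grid points.
        table[:, i] = np.interp(grid, clf.xbar[:, i], clf.h_at_break[:, i])
    return lo, hi, table

def decision_approx(x, lo, hi, table, b):
    """Approximate h(x) in O(n): two table lookups + interpolation per dimension."""
    num_bins, n = table.shape
    # Fractional bin index per coordinate; clipping is safe because h_i = 0 below
    # the smallest support value and constant above the largest one.
    t = np.clip((x - lo) / np.maximum(hi - lo, 1e-12), 0.0, 1.0) * (num_bins - 1)
    i0 = np.floor(t).astype(int)
    i1 = np.minimum(i0 + 1, num_bins - 1)
    w = t - i0
    cols = np.arange(n)
    h = (1 - w) * table[i0, cols] + w * table[i1, cols]
    return float(h.sum() + b)
```

The piecewise constant variant would simply round `t` to the nearest bin and use one lookup per dimension instead.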
4. Experiments

Our experiments address several questions: (1) How much does the fast exact method speed up classification? (2) What space savings does the approximate method provide, and at what cost in classification performance? (3) What improvement in classification performance does IKSVM offer over other kernel SVMs? (4) Does the intersection kernel allow simpler features for pedestrian detection? To answer the first three questions we use the INRIA and Daimler Chrysler pedestrian datasets for detection-oriented experiments and Caltech-101 for classification. To answer the fourth question, we introduce features based on histograms of oriented edge energy responses and test the framework on the two pedestrian detection datasets, comparing our features to those used by the state of the art algorithms on each dataset in terms of both simplicity and classification accuracy. The next section describes how our features are computed.

4.1. Multi-Level Oriented Edge Energy Features

Our features are based on a multi-level version of the HOG descriptor [7], used with a histogram intersection kernel SVM in the style of the spatial pyramid match kernel introduced in [15]. They are computed as follows: (1) We compute oriented edge energy responses in 8 directions using the magnitude of the odd elongated oriented filters of [21] at a fine scale (σ = 1), with non-max suppression performed independently in each orientation. (2) The responses are then L1 normalized over all orientations within non-overlapping cells of fixed size $h_n \times w_n$. (3) At each level $l \in \{1, 2, \ldots, L\}$, the image is divided into non-overlapping cells of size $h_l \times w_l$, and a histogram feature is constructed by summing the normalized responses within each cell. (4) The features at level $l$ are weighted by a factor $c_l$ and concatenated to form the feature vector, which is used to train an IKSVM classifier. Oriented edge energy histograms have been used successfully before for gesture recognition [10]. Figure 2 shows the first three stages of the feature pipeline.

Compared to the HOG features used in the linear SVM based detector of Dalal and Triggs [7], our features are much simpler: we have no overlapping cells, no Gaussian weighting of the responses within a cell, and no image normalization, leading to a much lower dimensional feature vector. Our features also do not require the training step needed to learn the Local Receptive Field features [16], which in conjunction with quadratic kernel SVMs give the best results on the Daimler Chrysler pedestrian dataset. Our experiments show that the multi-level histograms of oriented edge energy lead to better classification accuracy on the INRIA dataset, and accuracy as good as the state of the art on the Daimler Chrysler dataset, when used with an IKSVM, and are significantly better than a linear classifier trained on the same features. The next three sections describe the experiments on these datasets in detail.
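A rough sketch of the feature pipeline follows. This is our simplification, not the authors' code: we stand in for the elongated filters of [21] with steerable first derivatives of a Gaussian, omit the non-max suppression step, and the window and cell sizes are just examples.

```python
import numpy as np
from scipy import ndimage

def oriented_energy(img, n_orient=8, sigma=1.0):
    """Step 1 (simplified): oriented edge energy as the magnitude of odd
    (derivative-of-Gaussian) filter responses. A first Gaussian derivative is
    steerable, so all orientations follow from the two axis-aligned responses."""
    gy = ndimage.gaussian_filter(img, sigma, order=(1, 0))
    gx = ndimage.gaussian_filter(img, sigma, order=(0, 1))
    thetas = np.arange(n_orient) * np.pi / n_orient
    return np.abs(np.cos(thetas)[:, None, None] * gx +
                  np.sin(thetas)[:, None, None] * gy)    # (n_orient, H, W)

def l1_normalize_cells(E, cell=16, eps=1e-9):
    """Step 2: L1-normalize energy over all orientations within each
    non-overlapping cell x cell block (modifies E in place)."""
    _, H, W = E.shape
    for y in range(0, H - H % cell, cell):
        for x in range(0, W - W % cell, cell):
            block = E[:, y:y + cell, x:x + cell]
            block /= block.sum() + eps
    return E

def multilevel_features(img, cell_sizes=(64, 32, 16, 8), n_orient=8, norm_cell=16):
    """Steps 3-4: per-level histograms of normalized oriented energy,
    weighted by c_l = 1/4^(L-l) and concatenated."""
    E = l1_normalize_cells(oriented_energy(img, n_orient), cell=norm_cell)
    L = len(cell_sizes)
    n_o, H, W = E.shape
    feats = []
    for l, c in enumerate(cell_sizes, start=1):
        hy, hx = H // c, W // c
        cells = E[:, :hy * c, :hx * c].reshape(n_o, hy, c, hx, c).sum(axis=(2, 4))
        feats.append(cells.ravel() / 4 ** (L - l))
    return np.concatenate(feats)
```

With a 64 × 128 window and these example cell sizes, the four levels contain 2 + 8 + 32 + 128 = 170 cells, i.e. a 170 × 8 = 1360-dimensional vector, matching the setup described in Section 4.2.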
Figure 1. Each column (a-d) shows the distribution of the support vector values along one dimension with a Gaussian fit (top), and the function $h_i(s)$ vs. $s$ with a piecewise linear fit using 20 uniformly spaced points (bottom), for an IKSVM model trained on the INRIA dataset. Unlike the distribution of the training data, which is heavy tailed, the values of the support vectors tend to be clustered.

Figure 2. The three-stage pipeline of the feature computation process. (1) The input grayscale image is convolved with oriented filters (σ = 1) in 8 directions to obtain oriented energy responses. (2) The responses are then L1 normalized over all directions within each non-overlapping block, independently, to obtain normalized responses. (3) Multi-level features are then extracted by constructing histograms of oriented gradients, summing the normalized responses in each cell; the diagram depicts progressively smaller cell sizes.

4.2. INRIA Pedestrian Dataset

The INRIA pedestrian dataset was introduced in [7] as an alternative to existing pedestrian datasets (e.g. the MIT Pedestrian Dataset) and is significantly harder because of its wide variety of articulated poses, variable appearance and clothing, illumination changes, and complex backgrounds. Linear kernel SVMs with histograms of oriented gradients (HOG) features achieve high accuracy and speed on this dataset [7]. In our experiments we use the multi-level oriented edge energy features introduced in Section 4.1 with L = 4 levels, cell sizes of 64 × 64, 32 × 32, 16 × 16 and 8 × 8 at levels 1, 2, 3 and 4 respectively, and a block normalization window of size $w_n \times h_n$. The features at level $l$ are weighted by a factor $c_l = 1/4^{(L-l)}$ to obtain a 1360-dimensional vector, which is used to train an IKSVM classifier.

The performance of the exact IKSVM classifier and the various approximations is shown in Figure 3. First, the performance of the IKSVM classifier using our features is significantly better: we have a miss rate of 0.167 at $10^{-6}$ FPPW and 0.025 at $10^{-4}$ False Positives Per Window (FPPW), improvements of about 13% and 11% respectively over the Dalal and Triggs pedestrian detector. This is achieved without any tuning of hyperparameters such as the number of levels, the level weights, or the choice of cell sizes, which is remarkable. We observe that the piecewise linear approximation with 30 bins is almost as accurate as the exact classifier; the piecewise constant approximation, however, requires roughly three times as many bins to obtain similar accuracy. The detection results as a function of the number of bins for the piecewise linear and piecewise constant approximations are shown in Figure 3.
Figure 3. Performance of our methods on the INRIA pedestrian dataset (top row) and the Daimler Chrysler pedestrian dataset (bottom row). Detection plots for an exact IKSVM and its (a) piecewise linear approximation and (b) piecewise constant approximation: the performance of the exact IKSVM is significantly better than the Dalal and Triggs detector; about 30 bins are required for the piecewise linear approximation to approach the accuracy of the exact classifier, while the piecewise constant approximation needs about 100 bins. Mean detection rate with error bars for the exact IKSVM and its (c) piecewise linear and (d) piecewise constant approximations; the error bars are obtained by training on 2 out of the 3 training sets and testing on the 2 test sets, for a total of 6 results. The results are as good as the best published results on this task. Once again, 30 bins for the piecewise linear and 100 bins for the piecewise constant approximation come close to the exact classifier.

Table 1 compares the classification speeds of various methods. The classification speed of our classifier is about 6× slower than a linear SVM on the same 1360 features. For comparison, the HOG features used to generate the Dalal-Triggs curve, with a cell size of 8 × 8, a stride of 8 × 8, a block size of 16 × 16 and 9 orientation bins, have dimension 3780, about three times as many features as ours; compared to a linear classifier with 3780 features, the approximate IKSVM classifier is only about 2× slower. Overall, the approximate IKSVM classifier is dramatically faster and requires far less memory than the standard implementation of the intersection kernel SVM, with practically the same accuracy. Interestingly, our features are not the optimal features for a linear SVM classifier, which has a miss rate of 0.5 at $10^{-4}$ FPPW. Figure 4 shows a sample of the errors made by our detector on this dataset.
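For readers who want to reproduce this kind of speed comparison, here is a minimal timing harness built on the sketches from Section 3 (ours; the sizes are arbitrary stand-ins, the random coefficients are not a trained model, and absolute timings depend entirely on hardware and implementation language).

```python
import time
import numpy as np

rng = np.random.default_rng(0)
m, n = 3000, 1360                      # sizes in the spirit of the INRIA experiments
sv = rng.random((m, n))
ay = rng.standard_normal(m)
ay -= ay.mean()                        # enforce sum(alpha_l * y_l) = 0, as in eq. 3
b = -0.5

clf = FastIKSVM(sv, ay, b)
lo, hi, table = build_tables(clf, num_bins=30)
x = rng.random(n)

for name, fn in [("naive O(mn)", lambda: iksvm_decision_naive(x, sv, ay, b)),
                 ("binary search O(n log m)", lambda: clf.decision(x)),
                 ("piecewise linear O(n)", lambda: decision_approx(x, lo, hi, table, b))]:
    t0 = time.perf_counter()
    for _ in range(100):
        fn()
    print(f"{name}: {(time.perf_counter() - t0) / 100 * 1e3:.3f} ms per evaluation")
```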
4.3. Daimler Chrysler Pedestrian Dataset

We use the Daimler Chrysler Pedestrian Benchmark dataset created by Munder and Gavrila [16]. The dataset is split into five disjoint sets, three for training and two for testing. Each training set has 5000 positive and negative examples each, while each test set has 4900 positive and negative examples each. We report ROC curves obtained by training on two of the three training sets at a time and testing on each of the test sets, giving six curves from which confidence intervals are plotted. The third training set is meant for cross-validation of the hyperparameters, which we do not use. Because of the small size of the images (18 × 36), we compute the multi-level features with only three levels (L = 3), with cell sizes 18 × 18, 6 × 6 and 3 × 3 at levels 1, 2 and 3 respectively, and a block normalization window of size $w_n \times h_n$. The features at level $l$ are weighted by a factor $c_l = 1/4^{(L-l)}$ to obtain a 656-dimensional vector, which is used to train an IKSVM classifier.

The classification results using the exact method and the approximations are shown in Figure 3. Our results are comparable to the best results for this task [16]. The experiments suggest that relatively simple features, when used in conjunction with IKSVMs, perform as well as other kernel SVMs using the LRF features. Table 1 compares the classification speeds of various methods. The speedups obtained on this task are especially large because of the large number of support vectors in each classifier: the piecewise linear approximation with 30 bins is about 2000× faster and requires 200× less memory, with no loss in classification accuracy, and the piecewise constant approximation requires about 100 bins for similar accuracy and is even faster. We did not optimize the feature computation, which takes about 17ms per image in our Matlab implementation; the classification time (0.2ms) is negligible by comparison. Compared with the 250ms required by the cascaded SVM based classifiers of [16], our combined time for computing the features and classifying is about 15× lower. Figure 4 shows a sample of the errors made by our detector.

4.4. Caltech-101

Our third set of experiments is on Caltech-101 [8]. The aim here is to show that existing methods can be made significantly faster even when the number of support vectors in each classifier is small. We use the framework of [15] with our own implementation of their weak features and achieve an accuracy of 52% (compared to their 54%), with 15 training and 15 test examples per class and one-vs-all classifiers based on IKSVM. The performance of a linear SVM using the same features is about 40%. The IKSVM classifiers have on average 175 support vectors, and a piecewise linear approximation with 60 bins is about 50× faster than a standard implementation, at a cost of 3 additional misclassifications out of 1530 test examples on average (see Table 1).

5. Conclusions

The theoretical contribution of this paper is a technique for exactly evaluating intersection kernel SVMs with runtime logarithmic in the number of support vectors, as opposed to linear for the standard implementation. Furthermore, we have shown that an approximation of the IKSVM classifier can be built with the same classification performance but with runtime constant in the number of support vectors. This puts IKSVM classifiers in the same order of computational cost for evaluation as linear SVMs. Our experimental results show that approximate IKSVM consistently outperforms linear SVMs at a modest increase in evaluation runtime. This is especially relevant for detection tasks, where a major component of training models is evaluating the detector on large training sets.
Finally, we introduced a multi-scale histogram of oriented edge energy feature quite similar to HOG, but with a simpler design and lower dimensionality. This feature together with IKSVM produces classification rates significantly better than the linear SVM based detector of Dalal and Triggs, leading to the current state of the art on pedestrian detection.

References

[1] A. Bosch, A. Zisserman, and X. Munoz. Representing shape with a spatial pyramid kernel. In CIVR, 2007.
[2] S. Boughorbel, J.-P. Tarel, and N. Boujemaa. Generalized histogram intersection kernel for image recognition. In ICIP, Genova, Italy, 2005.
[3] C. J. C. Burges. Simplified support vector decision rules. In ICML, pages 71-77, 1996.
[4] O. Chapelle, P. Haffner, and V. Vapnik. Support vector machines for histogram-based image classification. IEEE Trans. on Neural Networks, 10(5), May 1999.
[5] O. Chum and A. Zisserman. Presented at the Visual Recognition Challenge workshop, 2007.
[6] C. Cortes and V. Vapnik. Support-vector networks. Machine Learning, 20(3):273-297, 1995.
[7] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In CVPR, 2005.
[8] L. Fei-Fei, R. Fergus, and P. Perona. One-shot learning of object categories. IEEE T. PAMI, 28(4):594-611, 2006.
[9] P. Felzenszwalb, D. McAllester, and D. Ramanan. A discriminatively trained, multiscale, deformable part model. In CVPR, 2008.
[10] W. T. Freeman and M. Roth. Orientation histograms for hand gesture recognition. In Intl. Workshop on Automatic Face and Gesture Recognition, 1995.
[11] K. Grauman and T. Darrell. The pyramid match kernel: Discriminative classification with sets of image features. In ICCV, 2005.
[12] B. Heisele, T. Serre, S. Prentice, and T. Poggio. Hierarchical classification and feature reduction for fast face detection with support vector machines. Pattern Recognition, 36(9):2007-2017, September 2003.
[13] M. Herbster. Learning additive models online with fast evaluating kernels. In Fourteenth Annual Conference on Computational Learning Theory, 2001.
[14] S. S. Keerthi, O. Chapelle, and D. DeCoste. Building support vector machines with reduced classifier complexity. JMLR, 7:1493-1515, 2006.
[15] S. Lazebnik, C. Schmid, and J. Ponce. Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories. In CVPR, 2006.
[16] S. Munder and D. M. Gavrila. An experimental study on pedestrian classification. IEEE T. PAMI, 28(11):1863-1868, 2006.
Figure 4. The top two rows show a sample of errors on the INRIA pedestrian dataset, and the bottom two rows on the Daimler Chrysler dataset. For each set, the first row is a sample of the false negatives and the second row a sample of the false positives.

Dataset       #SVs      #features   linear   intersection   binary search   piecewise-const   piecewise-lin
INRIA Ped       -         1360        -           -              -±.3           .34±.1           .43±.1
DC Ped        5474±-       656        -           -              -±.2           .18±.1           .22±.0
Caltech-101    175±-        -         -          -±.12            -             .33±.3           .46±.3

Table 1. Time taken in seconds to classify 10,000 features using various techniques. #SVs is the number of support vectors in the classifier and #features is the feature dimension. The first two timing columns show the speed of a linear kernel SVM with the same features and of a standard implementation of the intersection kernel SVM (IKSVM); the last three columns show the speeds of the techniques we propose. The numbers are averaged over various runs on the INRIA dataset, over categories on the Caltech-101 dataset, and over the six training and testing splits on the Daimler Chrysler dataset. The binary search based IKSVM is exact and has a logarithmic dependence on the number of support vectors, while the piecewise constant and linear approximations theoretically have a runtime independent of the number of support vectors. One can see this from the fact that even with about a 20× increase in the number of support vectors from Caltech-101 to the INRIA pedestrian dataset, the speed of the fast IKSVM using binary search changes by only about 1.5×, while the speeds of the piecewise linear and constant approximations are unchanged; the speed of a naive implementation is about 25× worse. The approximate IKSVM classifiers are on average just 5-6 times slower than a linear kernel SVM, and the exact version using binary search is orders of magnitude faster on the pedestrian detection tasks than a naive implementation of the classifier. All experiments were done on an Intel Quad Core 2.4 GHz machine with 4GB RAM.

[17] F. Odone, A. Barla, and A. Verri. Building kernels from binary strings for image matching. IEEE T. Image Processing, 14(2):169-180, Feb. 2005.
[18] E. Osuna, R. Freund, and F. Girosi. Training support vector machines: an application to face detection. In CVPR, 1997.
[19] E. E. Osuna and F. Girosi. Reducing the run-time complexity in support vector machines. In Advances in Kernel Methods: Support Vector Learning, 1999.
[20] C. Papageorgiou and T. Poggio. A trainable system for object detection. IJCV, 38(1):15-33, 2000.
[21] P. Perona and J. Malik. Detecting and localizing edges composed of steps, peaks and roofs. In ICCV, pages 52-57, Dec. 1990.
[22] S. Romdhani, P. Torr, B. Schölkopf, and A. Blake. Computationally efficient face detection. In ICCV, 2001.
[23] I. Steinwart. Sparseness of support vector machines - some asymptotically sharp bounds. In NIPS, 2003.
[24] M. J. Swain and D. H. Ballard. Color indexing. IJCV, 7(1):11-32, 1991.
[25] A. Torralba, K. Murphy, and W. Freeman. Sharing features: efficient boosting procedures for multiclass object detection. In CVPR, 2004.
[26] M. Varma and D. Ray. Learning the discriminative power-invariance trade-off. In ICCV, October 2007.
[27] P. Viola and M. J. Jones. Robust real-time face detection. IJCV, 57(2):137-154, 2004.
[28] W. Zhang, G. J. Zelinsky, and D. Samaras. Real-time accurate object detection using multiple resolutions. In ICCV, 2007.
[29] C. Yang, R. Duraiswami, N. A. Gumerov, and L. Davis. Improved fast Gauss transform and efficient kernel density estimation. In ICCV, 2003.
[30] J. Zhang, M. Marszalek, S. Lazebnik, and C. Schmid. Local features and kernels for classification of texture and object categories: A comprehensive study. In CVPRW, 2006.