More Generality in Efficient Multiple Kernel Learning

Manik Varma, Microsoft Research India, Second Main Road, Sadashiv Nagar, Bangalore, India
Bodla Rakesh Babu, CVIT, International Institute of Information Technology Hyderabad, Gachibowli, Hyderabad, India

(Appearing in Proceedings of the 26th International Conference on Machine Learning, Montreal, Canada, 2009. Copyright 2009 by the author(s)/owner(s).)

Abstract

Recent advances in Multiple Kernel Learning (MKL) have positioned it as an attractive tool for tackling many supervised learning tasks. The development of efficient gradient descent based optimization schemes has made it possible to tackle large scale problems. Simultaneously, MKL based algorithms have achieved very good results on challenging real world applications. Yet, despite their successes, MKL approaches are limited in that they focus on learning a linear combination of given base kernels.

In this paper, we observe that existing MKL formulations can be extended to learn general kernel combinations subject to general regularization. This can be achieved while retaining all the efficiency of existing large scale optimization algorithms. To highlight the advantages of generalized kernel learning, we tackle feature selection problems on benchmark vision and UCI databases. It is demonstrated that the proposed formulation can lead to better results not only as compared to traditional MKL but also as compared to state-of-the-art wrapper and filter methods for feature selection.

1. Introduction

Support Vector Machines are basic tools in machine learning that are used for tasks such as classification and regression. They find application in diverse areas ranging from vision to bioinformatics to natural language processing. The success of SVMs in these fields often depends on the choice of a good kernel and good features, ones that are typically hand-crafted and fixed in advance. However, hand-tuning kernel parameters can be difficult, as can selecting and combining appropriate sets of features. Multiple Kernel Learning (MKL) seeks to address this issue by learning the kernel from training data. In particular, it focuses on how the kernel can be learnt as a linear combination of given base kernels.

Many MKL formulations have been proposed in the literature. In (Sonnenburg et al., 2006), it was shown that the MKL block $\ell_1$ formulation of (Bach et al., 2004) could be expressed as a Semi-Infinite Linear Program (SILP). Column generation methods and existing SVM solvers could then be used for efficient optimization and to tackle large scale problems involving as many as a million data points. Gradient descent can be more efficient than solving a series of linear programs, and (Rakotomamonjy et al., 2008) demonstrated that training time could be further reduced by nearly an order of magnitude on some standard UCI datasets when the number of kernels was large. Simultaneously, MKL based algorithms have achieved very good results on bioinformatics and computer vision applications. These methods established the viability of MKL as a tool for tackling challenging real world problems.

Nevertheless, MKL approaches are limited in that they focus on learning linear combinations of base kernels, corresponding to the concatenation of individual kernel feature spaces. Far richer representations can be achieved by combining kernels in other fashions. For example, taking products of kernels corresponds to taking a tensor product of their feature spaces. This leads to a much higher dimensional feature representation as compared to feature concatenation.
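To make the tensor-product remark concrete, the standard identity behind it (our addition, not text from the paper) is: for kernels $k_1, k_2$ with feature maps $\phi_1, \phi_2$,

$$k_1(x, y)\, k_2(x, y) = \langle \phi_1(x), \phi_1(y) \rangle\, \langle \phi_2(x), \phi_2(y) \rangle = \langle \phi_1(x) \otimes \phi_2(x),\; \phi_1(y) \otimes \phi_2(y) \rangle,$$

so if $\phi_1$ and $\phi_2$ map into spaces of dimension $p$ and $q$, the product kernel corresponds to a $pq$-dimensional feature space, whereas concatenation gives only $p + q$ dimensions.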
Furthermore, by focusing mainly on feature concatenation, MKL approaches do not consider the fundamental question of what are appropriate feature representations for a given task. Some attempts have been made at addressing this issue. Most recently, (Bach, 2008) developed an innovative way of learning linear combinations of an exponential number of kernels of a certain type.

The method can therefore be leveraged in situations where the desired kernel combination can be approximated as a linear combination of appropriately chosen base kernels.

In this paper, we observe that it is fairly straightforward to extend traditional MKL formulations to handle generic kernel combinations. Furthermore, the gradient descent optimization developed and used in (Bach, 2008; Chapelle et al., 2002; Rakotomamonjy et al., 2008; Varma & Ray, 2007) can still be applied out of the box. It is therefore possible to learn rich feature representations without having to sacrifice any of the advantages of a well developed, large scale optimization toolkit. In addition to the kernel function, it is also possible to generalize the regularization on the kernel parameters. This can be used to incorporate prior knowledge about the kernel parameters if available. However, the price that one has to pay for such generality is that the new MKL formulation is no longer convex. Nevertheless, we feel that the ability to explore appropriate feature representations is probably more important than being able to converge to the global optimum (of an inappropriate representation). This is borne out by our experimental results.

We highlight the advantages of generalized kernel learning by tackling feature selection problems on benchmark computer vision tasks, such as gender identification, as well as on some UCI datasets. We show that, for a fixed number of selected features, standard MKL can lag behind our formulation by as much as 10% in some cases. Stated another way, our formulation is capable of reaching the same classification accuracy as MKL with only a sixth of the features. We also present comparative results with AdaBoost, OWL-QN (Andrew & Gao, 2007), LP-SVM (Fung & Mangasarian, 2002), Sparse SVM (Chan et al., 2007) and BAHSIC (Song et al., 2007).

The rest of the paper is organized as follows. In Section 2 we review the development of MKL from initial work to the current state-of-the-art. We then present our formulation in Section 3. The formulation is quite general and can be used for kernel combination, kernel parameter tuning, non-linear feature selection and dimensionality reduction. Since our empirical focus is on feature selection, we review other feature selection methods in Section 4 and present comparative results in Section 5. We conclude in Section 6.

2. Related Work

Some of the earliest work on MKL was developed in (Crammer et al., 2002; Cristianini et al., 2001). Their focus was on optimizing loss functions such as kernel target alignment rather than the specific classification or regression problem at hand. This was addressed in the influential work of (Lanckriet et al., 2004), which showed how MKL could be formulated appropriately for a given task and optimized as an SDP or QCQP (for non-negative kernel weights). Nevertheless, QCQPs do not scale well to large problems, and one of the first practical MKL algorithms was presented in (Bach et al., 2004). The block $\ell_1$ formulation, in conjunction with M-Y regularization, was developed so that efficient gradient descent could be performed using the SMO algorithm while still generating a sparse solution. As already mentioned in the introduction, (Sonnenburg et al., 2006) retained the block $\ell_1$ regularization but reformulated the problem as a SILP. This made it applicable to large scale problems, and the authors were impressively able to train their algorithm on a splice data set with a million points.
Further efficiency was obtained in (Rakotomamonjy et al., 2008; Varma & Ray, 2007) via gradient descent optimization, and (Bach, 2008) opened up the possibility of training on an exponentially large number of kernels. Other interesting approaches have been proposed in (Argyriou et al., 2005; Ong et al., 2005; Zien & Ong, 2007) and include Hyperkernels and multi-class MKL. Note that these methods essentially learn linear combinations of base kernels subject to $\ell_1$, or sometimes $\ell_2$ (Cristianini et al., 2001; Kloft et al., 2008), regularization of the kernel parameters. Most formulations are convex or can be made so by a change of variables.

3. Generalized MKL

Our objective is to learn a function of the form $f(x) = w^t \phi_d(x) + b$, with the kernel $k_d(x_i, x_j) = \phi_d^t(x_i)\phi_d(x_j)$ representing the dot product in the feature space $\phi$ parameterized by $d$. The function can be used directly for regression, or the sign of the function can be used for classification. The goal in SVM learning is to learn the globally optimal values of $w$ and $b$ from training data $\{(x_i, y_i)\}$. In addition, MKL also estimates the kernel parameters $d$. We extend the MKL formulation of (Varma & Ray, 2007) to

$$\min_{w,b,d} \; \tfrac{1}{2} w^t w + \sum_i l(y_i, f(x_i)) + r(d) \tag{1}$$
$$\text{subject to } d \geq 0 \tag{2}$$

where both the regularizer $r$ and the kernel can be any general differentiable functions of $d$ with continuous derivative, and $l$ could be one of various loss functions, such as $l = C \max(0, 1 - y_i f(x_i))$ for classification or $l = C \max(0, |y_i - f(x_i)| - \epsilon)$ for regression.
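For concreteness, the two loss terms above can be written as a few lines of Python; this is a minimal illustrative sketch (the function names and use of NumPy are our own choices, not code from the paper).

```python
import numpy as np

# Illustrative sketch of the loss terms in the primal (1); not the authors' code.

def hinge_loss(y, f, C=1.0):
    """Classification loss l = C * max(0, 1 - y * f(x)), summed over the sample."""
    return C * np.maximum(0.0, 1.0 - y * f).sum()

def eps_insensitive_loss(y, f, C=1.0, eps=0.1):
    """Regression loss l = C * max(0, |y - f(x)| - eps), summed over the sample."""
    return C * np.maximum(0.0, np.abs(y - f) - eps).sum()
```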

Three things are worth noting about the primal. First, we choose to use a non-convex formulation, as opposed to the convex $\sum_k w_k^t w_k / d_k$ of (Rakotomamonjy et al., 2008), since for general kernel combinations $w_k^t w_k$ need not tend to zero when $d_k$ tends to zero. Second, we place $r(d)$ in the objective and incorporate a scale parameter within it, rather than having it as an equality constraint (typically $\sum_k d_k = 1$ or $\sum_k d_k^2 = 1$). Third, the constraint $d \geq 0$ can often be relaxed to a more general one which simply requires the learnt kernel to be positive definite. Conversely, the constraints can also be strengthened if prior knowledge is available. In either case, if $\nabla_d r$ exists then the gradient descent based optimization is still applicable. However, the projection back onto the feasible set can get more expensive.

In order to leverage existing large scale optimizers, we follow the standard procedure (Chapelle et al., 2002) of reformulating the primal as a nested two step optimization. In the outer loop, the kernel is learnt by optimizing over $d$ while, in the inner loop, the kernel is held fixed and the SVM parameters are learnt. This can be achieved by rewriting the primal as follows

$$\min_d \; T(d) \quad \text{subject to } d \geq 0 \tag{3}$$

where

$$T(d) = \min_{w,b} \; \tfrac{1}{2} w^t w + \sum_i l(y_i, f(x_i)) + r(d).$$

We now need to prove that $\nabla_d T$ exists, and calculate it efficiently, if we are to utilize gradient descent in the outer loop. This can be achieved by moving to the dual formulation of $T$, given by (for classification and regression respectively)

$$W_C(d) = \max_{\alpha} \; 1^t\alpha - \tfrac{1}{2}\alpha^t Y K_d Y \alpha + r(d) \tag{4}$$
$$\text{subject to } 1^t Y \alpha = 0, \quad 0 \leq \alpha \leq C \tag{5}$$

and

$$W_R(d) = \max_{\alpha} \; 1^t Y \alpha - \tfrac{1}{2}\alpha^t K_d \alpha + r(d) - \epsilon 1^t \alpha \tag{6}$$
$$\text{subject to } 1^t \alpha = 0, \quad 0 \leq \alpha \leq C \tag{7}$$

where $K_d$ is the kernel matrix for a given $d$ and $Y$ is a diagonal matrix with the labels on the diagonal. Note that we can write $T = r + P$ and $W = r + D$, with strong duality holding between the SVM primal $P$ and dual $D$. Therefore, $T(d) = W(d)$ for any given value of $d$, and it is sufficient for us to show that $W$ is differentiable and to calculate $\nabla_d W$. Proof of the differentiability of $W_C$ and $W_R$ comes from Danskin's Theorem (Danskin, 1967). Since the feasible set is compact, the gradient can be shown to exist if $k$, $r$, $\nabla_d k$ and $\nabla_d r$ are smoothly varying functions of $d$ and if $\alpha^*$, the value of $\alpha$ that optimizes $W$, is unique. Furthermore, a straightforward extension of Lemma 2 in (Chapelle et al., 2002) can be used to show that $W_C$ and $W_R$ (as well as others obtained from loss functions for novelty detection, ranking, etc.) have derivatives given by

$$\frac{\partial T}{\partial d_k} = \frac{\partial W}{\partial d_k} = \frac{\partial r}{\partial d_k} - \tfrac{1}{2}\,\alpha^{*t} \frac{\partial H}{\partial d_k} \alpha^* \tag{8}$$

where $H = Y K_d Y$ for classification and $H = K_d$ for regression. Thus, in order to take a gradient step, all we need to do is obtain $\alpha^*$. Note that since $W_C$ and $W_R$ are equivalent to their corresponding single kernel SVM duals with kernel matrix $K_d$, $\alpha^*$ can be obtained from any SVM optimization package. The final algorithm is given in Algorithm 1 and we refer to it as Generalized MKL (GMKL).

Algorithm 1 Generalized MKL
1: $n \leftarrow 0$
2: Initialize $d^0$ randomly.
3: repeat
4:   $K \leftarrow k(d^n)$
5:   Use an SVM solver of choice to solve the single kernel problem with kernel $K$ and obtain $\alpha^*$.
6:   $d_k^{n+1} \leftarrow d_k^n - s_n \left( \frac{\partial r}{\partial d_k} - \tfrac{1}{2}\,\alpha^{*t} \frac{\partial H}{\partial d_k} \alpha^* \right)$
7:   Project $d^{n+1}$ onto the feasible set if any constraints are violated.
8:   $n \leftarrow n + 1$
9: until converged

The step size $s_n$ is chosen based on the Armijo rule to guarantee convergence, and the projection step, for the constraints $d \geq 0$, is as simple as $d \leftarrow \max(0, d)$. Note that the algorithm is virtually unchanged from (Varma & Ray, 2007) apart from the more general form of the kernel $k$ and regularizer $r$. If a faster rate of convergence were required, our assumptions could be suitably modified so as to take second order steps rather than perform gradient descent.
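A minimal Python sketch of this outer loop is given below, using scikit-learn's SVC with a precomputed kernel as the inner solver; the helper interface (kernel_fn, kernel_grad_fn, reg_grad_fn), the fixed step size in place of the Armijo rule, and the stopping rule are our own illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.svm import SVC

def gmkl_classify(X, y, kernel_fn, kernel_grad_fn, reg_fn, reg_grad_fn,
                  C=10.0, n_iter=50, step=1.0, tol=1e-4):
    """Illustrative sketch of Algorithm 1 (classification case), not the paper's code.

    kernel_fn(d, X)      -> kernel matrix K_d                (n x n)
    kernel_grad_fn(d, X) -> list of dK_d/dd_k matrices       (one n x n matrix per k)
    reg_fn(d), reg_grad_fn(d) -> r(d) and its gradient vector
    """
    n, m = X.shape
    d = np.random.rand(m)                                   # step 2: random initialization
    for _ in range(n_iter):
        K = kernel_fn(d, X)                                 # step 4
        svm = SVC(C=C, kernel='precomputed').fit(K, y)      # step 5: obtain alpha*
        beta = np.zeros(n)                                  # beta = Y alpha* (signed dual coefs)
        beta[svm.support_] = svm.dual_coef_.ravel()
        grad = np.array([gk - 0.5 * beta @ dK @ beta        # equation (8) with H = Y K_d Y
                         for gk, dK in zip(reg_grad_fn(d), kernel_grad_fn(d, X))])
        d_new = np.maximum(0.0, d - step * grad)            # steps 6-7: gradient step + projection
        if np.linalg.norm(d_new - d) < tol:
            break
        d = d_new
    return d, svm
```

The same skeleton would cover regression by switching to an SVR solver and using $H = K_d$ in the gradient.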
Only very mild restrictions have been placed on the form of the learnt kernel $k$ and regularizer $r$. As regards $k$, the only constraints that have been imposed are that $K_d$ be strictly positive definite for all valid $d$ and that $\nabla_d k$ exist and be continuous. Many kernels can be constructed that satisfy these properties. One can, of course, learn the standard sum of base kernels. More generally, products of base kernels, and other combinations which yield positive definite kernels, can also be learnt now.
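As one concrete instance of a product combination (the form used later in the paper's experiments), the product of per-feature RBF kernels and its derivative can be written in the interface assumed by the sketch above; again an illustration under our own naming, not the authors' code.

```python
import numpy as np

# Illustrative sketch of a product-of-RBF kernel and its gradient; not the paper's code.

def sq_diffs(X):
    """Per-feature squared-difference matrices D_m[i, j] = (x_im - x_jm)^2."""
    return [(X[:, m:m+1] - X[:, m:m+1].T) ** 2 for m in range(X.shape[1])]

def product_rbf_kernel(d, X):
    """K_d(x_i, x_j) = prod_m exp(-d_m (x_im - x_jm)^2)."""
    D = sq_diffs(X)
    return np.exp(-sum(dm * Dm for dm, Dm in zip(d, D)))

def product_rbf_kernel_grad(d, X):
    """dK_d/dd_m = -D_m * K_d (elementwise), one matrix per feature m."""
    D = sq_diffs(X)
    K = np.exp(-sum(dm * Dm for dm, Dm in zip(d, D)))
    return [-Dm * K for Dm in D]
```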

In addition, one can also tune kernel parameters in general kernels such as $k_d(x_i, x_j) = (d_0 + \sum_m d_m x_i^t A_m x_j)^n$ or $k_d(x_i, x_j) = e^{\sum_m d_m x_i^t A_m x_j}$. Combined with a sparsity promoting regularizer on $d$, this can be used for non-linear dimensionality reduction and feature selection for appropriate choices of $A$. Note, however, that such kernels do not lead to convex formulations.

As regards $r$, we only require that its derivative exist and be continuous. Since $d$ can be restricted to the non-negative orthant, various forms of $p$-norm regularizers with $p \geq 1$ fall in this category. In particular, $\ell_1$ regularization with $r(d) = \sigma^t d$, or variations of (Chan et al., 2007), could be used for learning sparse solutions. Alternatively, $\ell_2$ regularization of the form $r(d) = (d - \mu)^t \Sigma^{-1} (d - \mu)$ can be used when only a small number of relevant kernels are present or if prior knowledge in the form of $\mu$ and $\Sigma$ is available (for instance, from transfer learning). Finally, for regression, one could incorporate the term $\log \det(K_d)$ into $r$ so as to obtain a MAP estimate of a probabilistic formulation. The naïve substitution $\alpha^* = K_d^{-1} y$ would then render our formulation near identical to a marginal likelihood approach in Gaussian Processes.
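The two regularizers just mentioned are easy to express in the interface assumed by the earlier sketches; a hedged illustration in which $\sigma$, $\mu$ and $\Sigma$ are user-supplied quantities, not values from the paper.

```python
import numpy as np

# Illustrative sketch of the l1 and l2 regularizers and their gradients; not the paper's code.

def l1_reg(sigma):
    """r(d) = sigma^t d, gradient sigma (d is kept non-negative by the projection step)."""
    sigma = np.asarray(sigma, dtype=float)
    return (lambda d: sigma @ d), (lambda d: sigma)

def l2_reg(mu, Sigma):
    """r(d) = (d - mu)^t Sigma^{-1} (d - mu), gradient 2 Sigma^{-1} (d - mu)."""
    mu = np.asarray(mu, dtype=float)
    Sigma_inv = np.linalg.inv(np.asarray(Sigma, dtype=float))
    r = lambda d: (d - mu) @ Sigma_inv @ (d - mu)
    dr = lambda d: 2.0 * Sigma_inv @ (d - mu)
    return r, dr

# Example use with the GMKL sketch above (hypothetical names):
# r_fn, r_grad_fn = l1_reg(np.ones(X.shape[1]))
# d_opt, clf = gmkl_classify(X, y, product_rbf_kernel, product_rbf_kernel_grad, r_fn, r_grad_fn)
```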
4. Feature Selection Methods

We compare our formulation to traditional MKL as well as to the following feature selection methods.

Boosting: The LPBoost formulation of (Bi et al., 2004) is similar to that of standard MKL, and boosting generalizes standard MKL's decision function. Boosting can therefore be used to learn linear combinations of base kernels. Individual weak classifier SVMs are pre-learnt from each of the given base kernels and combined using AdaBoost. This can be attractive when there are a large number of kernels or when kernels are made available incrementally. While the computational costs are low, the empirical results were found to be poor, as the learnt kernel weights could not influence the pre-learnt weak classifiers. Of course, in traditional boosting the weak classifiers and the weights are learnt together, and we present comparative results to (Baluja & Rowley, 2007), which represents a state-of-the-art boosting method for gender identification.

OWL-QN (Andrew & Gao, 2007): This is a large scale implementation of $\ell_1$ regularized logistic regression. The method learns a function of the form $f(x) = w^t x$ by minimizing $(1/C)\|w\|_1 + \sum_i l(y_i, f(x_i))$, where $l$ is the log loss. Despite being linear, the method can sometimes outperform boosting. Nevertheless, the overall performance is poor as compared to the other linear methods since OWL-QN does not have an explicit bias term. One could simulate a bias by adding a constant feature, but the corresponding weight could be set to zero due to the $\ell_1$ regularization. When this does not happen, OWL-QN's performance is comparable to LP-SVM and Sparse-SVM.

LP-SVM (Fung & Mangasarian, 2002): This is the standard SVM formulation but with the $\ell_2$ regularization on $w$ replaced by $\ell_1$ regularization. We consider the linear formulation, which learns a function of the form $f(x) = w^t x + b$ by minimizing $\|w\|_1 + C\sum_i l(y_i, f(x_i))$, where $l$ is the hinge loss. Seeing that the hinge loss is very similar to the log loss, the formulation appears to be very similar to OWL-QN. However, due to the explicit bias term $b$, which is not included in the $\ell_1$ regularization, LP-SVM can sometimes perform much better than OWL-QN. Somewhat surprisingly, the performance of the linear LP-SVM could even be better than that of non-linear MKL (though not GMKL).

Sparse-SVM (Chan et al., 2007): This method does not use explicit $\ell_1$ regularization to enforce sparsity. Instead, it places a direct cardinality constraint on the hyperplane normal. It learns a function of the form $f(x) = w^t x + b$ by minimizing $\|w\|_2^2 + C\sum_i l(y_i, f(x_i))$ subject to $\|w\|_0 \leq r$, where $l$ is the hinge loss. We take the convex QCQP relaxation (QCQP-SSVM) proposed by the authors. Empirically, we found the performance of Sparse-SVM to be very similar to that of LP-SVM though, being a QCQP, it took much longer to optimize.

BAHSIC (Song et al., 2007): This is a leading filter method which runs a backward selection algorithm, discarding features based on their label dependence as measured by the Hilbert-Schmidt independence criterion. We use an RBF kernel for the data (the same as used by boosting, MKL and GMKL) and a linear kernel for the labels. BAHSIC outputs a ranked list of features from which a subset of the desired size can be selected. An SVM with an RBF kernel is then trained using the selected features. In our experiments, we found backward selection to be computationally very expensive without offering any advantages in terms of classification accuracy. Given identical kernels, BAHSIC performed substantially worse than GMKL.

5. Experiments

In this section we evaluate generalized kernel learning on various feature selection problems. We investigate gender identification from frontal facial images using as few pixels as possible. It is demonstrated that, on the benchmark database of (Moghaddam & Yang, 2002), GMKL can outperform MKL by as much as 10% for a given number of features.

Similarly, on the UCI datasets, there can be as much as a 6% to 10% difference in performance between GMKL and MKL. We also demonstrate that GMKL performs better than the other methods considered.

To generate feature selection results, we can vary the hyper-parameter C in the wrapper methods to select a desired number of features. However, this strategy does not yield good classification results even though globally optimal solutions are obtained: low values of C encouraged greater sparsity but also permitted more classification errors. We obtained much better results by the theoretically suboptimal strategy of fixing C to a large value (chosen via cross-validation so as to minimize classification error), learning the classifier, taking the top ranked components of $w$ (or $d$) and relearning the classifier using only the selected features. This technique was used to generate the results in Tables 1 and 2 (a sketch of this wrapper protocol follows below). For each dataset, the very last row summarizes the number of features selected ($N_s$) by each wrapper method and the resultant classification accuracy. When the number of desired features ($N_d$) is less than $N_s$, the classification accuracy is determined using the $N_d$ top ranked features. Otherwise, when $N_d > N_s$, the table entry is left blank, as the classification accuracy either plateaus or decreases as suboptimal features are added. In such a situation, it is better to choose only $N_s$ features and maintain accuracy.

Table 1: Gender identification results. The final row summarizes the average number of features selected (in brackets) by each wrapper method and the resultant classification accuracy. See text for details. Columns: $N_d$, AdaBoost, B&R 2007, OWL-QN, LP-SVM, S-SVM, BAHSIC, MKL, GMKL. (Numeric entries omitted.)
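The wrapper protocol just described (fix a large C, rank features by the learnt weights, keep the top $N_d$ and retrain) can be sketched as follows, reusing the hypothetical helpers from the earlier sketches; this is our reading of the procedure, not the authors' code.

```python
import numpy as np

# Illustrative sketch of the feature-selection wrapper protocol; not the paper's code.
# Reuses gmkl_classify, l1_reg, product_rbf_kernel and product_rbf_kernel_grad from above.

def select_and_retrain(X, y, n_desired, C=10.0):
    """Fit GMKL with a large C, rank features by the learnt d, keep the top
    n_desired features and refit a classifier on those features alone."""
    r_fn, r_grad_fn = l1_reg(np.ones(X.shape[1]))
    d, _ = gmkl_classify(X, y, product_rbf_kernel, product_rbf_kernel_grad,
                         r_fn, r_grad_fn, C=C)
    top = np.argsort(d)[::-1][:n_desired]        # indices of the top-ranked features
    X_sel = X[:, top]
    d_sel, clf = gmkl_classify(X_sel, y, product_rbf_kernel, product_rbf_kernel_grad,
                               *l1_reg(np.ones(n_desired)), C=C)
    return top, d_sel, clf
```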
5.1. Gender Identification

We tackle the binary gender identification problem on the benchmark database of (Moghaddam & Yang, 2002). The database has images of 1044 male and 711 female faces, giving a total of 1755 images in all. We follow the standard experimental setup and use 1053 images for training and 702 for testing. Results are averaged over 3 random splits of the data. Each image in the database has been pre-processed by (Moghaddam & Yang, 2002) to be aligned and scaled down so that each image has 252 pixels, and we associate an RBF kernel with each pixel based on its grey scale intensity directly. For Generalized MKL, the 252 base kernels are combined by taking their product to get $k_d(x_i, x_j) = \prod_{m=1}^{252} e^{-d_m (x_{im} - x_{jm})^2}$, where $x_{im}$ and $x_{jm}$ represent the intensity of the $m$th pixel in images $i$ and $j$ respectively. For standard MKL, the same 252 base kernels are combined using the sum representation to get $k_d(x_i, x_j) = \sum_{m=1}^{252} d_m e^{-\gamma_m (x_{im} - x_{jm})^2}$. Both methods are subject to $\ell_1$ regularization on $d$ so that only a few kernels are selected. AdaBoost can also be used to combine the 252 weak classifiers derived from the individual base kernels. The method of (Baluja & Rowley, 2007) can be considered a state-of-the-art boosting variant for this problem and operates on pairs of pixels. We use RBF kernels for BAHSIC as well, while the other methods are linear.

Table 1 lists the feature selection results. AdaBoost tended to perform the worst and selected only 12.6 features on average. The poor performance was due to the fact that each of the 252 SVMs was learnt independently. The weak classifier coefficients (i.e. the kernel weights) did not influence the individual SVM parameters. By contrast, there is a very tight coupling between the two in MKL and GMKL, and this ensures better performance. Of course, other forms of boosting do not have this limitation, and the state-of-the-art boosting method of (Baluja & Rowley, 2007) performs better, but it is still significantly inferior to GMKL. The performance of the linear feature selection methods is quite variable. Logistic regression performs poorly as the implicit bias weight got set to zero due to the $\ell_1$ regularization. On the other hand, LP-SVM and Sparse-SVM perform even better than standard MKL (though not better than GMKL).

Table 2: UCI results with datasets having N training points and M features. See text for details. (Per-method entries omitted; the recoverable dataset summaries are listed below.)
Ionosphere:  N = 246, M = 34,  Uniform MKL = 89.9 ± 2.5, Uniform GMKL = 94.6
Parkinsons:  N = 136, M = 22,  Uniform MKL = 87.3 ± 3.9, Uniform GMKL = 91.0
Musk:        N = 333, M = 166, Uniform MKL = 90.2 ± 3.2, Uniform GMKL = 93.8
Sonar:       N = 145, M = 60,  Uniform MKL = 82.9 ± 3.4, Uniform GMKL = 84.6
Wpbc:        N = 135, M = 34,  Uniform MKL = 72.1 ± 5.4, Uniform GMKL = 77.0

BAHSIC, which is a non-linear filter method, follows the same trend, and its performance is significantly worse than GMKL with identical kernels. This would suggest that wrapper methods based on the right feature representation should be preferable to filter methods, which do not directly optimize for classification accuracy.

The comparison between MKL and GMKL is even starker. GMKL achieves a classification accuracy of 93.2% using as few as 20 features and 95.1% using only 30 features. In comparison, standard MKL achieves just 83.8% and 86.3% respectively. This reinforces the observation that choosing the right kernel representation is much more important than converging to the globally optimal solution. Finally, the MKL and GMKL results for fixed uniform weights chosen via cross-validation are 92.6 ± 0.9 and 94.3 ± 0.1 respectively. Note that these results are obtained using all 252 features; GMKL can obtain a similar classification accuracy using as few as 25 features. In summary, there can be a difference of almost 10% between standard MKL and GMKL for a fixed small number of features. GMKL also does significantly better than BAHSIC and achieves nearly a 10x compression factor as compared to taking uniform weights.

5.2. UCI Datasets

We also present comparative results on UCI datasets. We follow the standard experimental methodology (Rakotomamonjy et al., 2008) where 70% of the points are used for training and the remaining 30% for testing. Results are reported over 20 random splits of the data. All datasets are preprocessed to have zero mean and unit variance. An RBF kernel is assigned to each of the M features in a given dataset. The M RBF kernels are then combined linearly for standard MKL and by taking their product for GMKL. The learnt kernels are of the form $k_d(x_i, x_j) = \sum_{m=1}^{M} d_m e^{-\gamma_m (x_{im} - x_{jm})^2}$ and $k_d(x_i, x_j) = \prod_{m=1}^{M} e^{-d_m (x_{im} - x_{jm})^2}$ respectively.

Table 2 lists the feature selection results. The analysis is similar to that of gender identification, except that OWL-QN performs better, as the implicit bias is not always set to zero. For a fixed number of features, GMKL has the best classification results, even as compared to MKL or BAHSIC (though the variance can be high for all the methods). Furthermore, a kernel with fixed uniform weights yields ballpark classification accuracies, though GMKL can achieve the same results using far fewer features.

5.3. Comparison to SimpleMKL and HMKL

The classification performance of standard MKL can be improved by adding extra base kernels which are either more informative or help better approximate a desired kernel function. However, this can lead to a more complex and costlier learning task. We therefore leave aside feature selection for the moment and compare our results to those reported in (Rakotomamonjy et al., 2008). Table 3 lists classification performance on the 5 datasets of (Rakotomamonjy et al., 2008). The number of kernels input to each method is reported in brackets.

Table 3: Comparison with the results in (Rakotomamonjy et al., 2008). GMKL achieves slightly better results but takes far fewer kernels as input.
Database      SimpleMKL           GMKL
Sonar         80.6 ± 5.1 (793)    82.3 ± 4.8 (60)
Wpbc          76.7 ± 1.2 (442)    79.0 ± 3.5 (34)
Ionosphere    91.5 ± 2.5 (442)    93.0 ± 2.1 (34)
Liver         65.9 ± 2.3 (91)     72.7 ± 4.0 (6)
Pima          76.5 ± 2.6 (117)    77.2 ± 2.1 (8)

As can be seen, GMKL can achieve slightly better performance than SimpleMKL while training on far fewer kernels. Of course, one can reduce the number of kernels input to SimpleMKL, but this will result in reduced accuracy.
In the limit that only a single kernel is used per feature, we will get back the results of Table 2, where GMKL does much better than standard MKL. The results for Liver were obtained using $\ell_2$ regularization. This led to a 7% improvement in performance over SimpleMKL, as a sparse representation is not suitable for this database (there are only 6 features). Pima also has very few features, but $\ell_1$ and $\ell_2$ regularization give comparable results.

Finally, we also compare results to Hierarchical MKL (Bach, 2008). We use a quadratic kernel of the form $k_d(x_i, x_j) = (1 + \sum_m d_m x_{im} x_{jm})^2$ as compared to the more powerful $k_d(x_i, x_j) = \prod_m (1 + x_{im} x_{jm})^4$ of (Bach, 2008), which decomposes into a linear combination of $5^M$ kernels. Nevertheless, as shown in Table 4, we achieve slightly better results with less computational cost.

Table 4: Comparison between HKL and GMKL on the Magic, Spambase and Mushroom datasets. Columns: Database, N, M, HKL, GMKL. (Numeric entries omitted.)
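For completeness, the quadratic kernel used in this comparison also fits the interface of the earlier sketches; a hedged illustration with our own helper names, not the authors' code.

```python
import numpy as np

# Illustrative sketch of the quadratic kernel used in the HKL comparison; not the paper's code.

def quadratic_kernel(d, X):
    """k_d(x_i, x_j) = (1 + sum_m d_m x_im x_jm)^2."""
    S = X @ (d[:, None] * X.T)          # S[i, j] = sum_m d_m x_im x_jm
    return (1.0 + S) ** 2

def quadratic_kernel_grad(d, X):
    """dK/dd_m = 2 (1 + sum_m d_m x_im x_jm) * x_im x_jm, one matrix per feature m."""
    S = X @ (d[:, None] * X.T)
    base = 2.0 * (1.0 + S)
    return [base * np.outer(X[:, m], X[:, m]) for m in range(X.shape[1])]
```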

6. Conclusions

In this paper, we have shown how traditional MKL formulations can be easily extended to learn general kernel combinations subject to general regularization on the kernel parameters. While our focus was on binary classification, our approach can be applied to other loss functions and even other formulations such as Local MKL, multi-class and multi-label MKL. Generalized kernel learning can be achieved very efficiently based on gradient descent optimization and existing large scale SVM solvers. As such, it is now possible to learn much richer feature representations as compared to standard MKL without sacrificing any efficiency in terms of speed of optimization.

Our GMKL formulation based on products of kernels was shown to give good results for various feature selection problems, not only as compared to traditional MKL but also as compared to leading wrapper and filter feature selection methods. Of course, taking products of kernels might not always be the right approach to every problem. In such cases, our formulation can be used to learn other appropriate representations, including sums of kernels. Finally, it should be noted that the classification accuracy of GMKL with learnt weights tends to be much the same as that obtained using uniform weights chosen through cross-validation. The advantage in learning would therefore seem to lie in the fact that GMKL can achieve the same classification accuracy using far fewer features.

Acknowledgements

We are grateful to P. Anandan, Andrew Zisserman and our reviewers for providing helpful feedback.

References

Andrew, G., & Gao, J. (2007). Scalable training of L1-regularized log-linear models. ICML.
Argyriou, A., Micchelli, C. A., & Pontil, M. (2005). Learning convex combinations of continuously parameterized basic kernels. COLT.
Bach, F. R. (2008). Exploring large feature spaces with hierarchical multiple kernel learning. NIPS.
Bach, F. R., Lanckriet, G. R. G., & Jordan, M. I. (2004). Multiple kernel learning, conic duality, and the SMO algorithm. ICML.
Baluja, S., & Rowley, H. (2007). Boosting sex identification performance. IJCV, 71.
Bi, J., Zhang, T., & Bennett, K. P. (2004). Column-generation boosting methods for mixture of kernels. Proc. SIGKDD.
Chan, A. B., Vasconcelos, N., & Lanckriet, G. (2007). Direct convex relaxations of sparse SVM. ICML.
Chapelle, O., Vapnik, V., Bousquet, O., & Mukherjee, S. (2002). Choosing multiple parameters for Support Vector Machines. Machine Learning, 46.
Crammer, K., Keshet, J., & Singer, Y. (2002). Kernel design using boosting. NIPS.
Cristianini, N., Shawe-Taylor, J., Elisseeff, A., & Kandola, J. (2001). On kernel-target alignment. NIPS.
Danskin, J. M. (1967). The theory of max-min and its applications to weapons allocation problems.
Fung, G., & Mangasarian, O. L. (2002). A feature selection Newton method for support vector machine classification (Technical Report 02-03). Univ. of Wisconsin.
Kloft, M., Brefeld, U., Laskov, P., & Sonnenburg, S. (2008). Non-sparse Multiple Kernel Learning. NIPS Workshop on Kernel Learning.
Lanckriet, G. R. G., Cristianini, N., Bartlett, P., El Ghaoui, L., & Jordan, M. I. (2004). Learning the kernel matrix with semidefinite programming. JMLR, 5.
Moghaddam, B., & Yang, M. H. (2002). Learning gender with support faces. IEEE PAMI, 24.
Ong, C. S., Smola, A. J., & Williamson, R. C. (2005). Learning the kernel with hyperkernels. JMLR, 6.
Rakotomamonjy, A., Bach, F., Grandvalet, Y., & Canu, S. (2008).
SimpleMKL. JMLR, 9.
Song, L., Smola, A., Gretton, A., Borgwardt, K., & Bedo, J. (2007). Supervised feature selection via dependence estimation. ICML.
Sonnenburg, S., Raetsch, G., Schaefer, C., & Schoelkopf, B. (2006). Large scale multiple kernel learning. JMLR, 7.
Varma, M., & Ray, D. (2007). Learning the discriminative power-invariance trade-off. ICCV.
Zien, A., & Ong, C. S. (2007). Multiclass multiple kernel learning. ICML.


More information

A Potential-based Framework for Online Multi-class Learning with Partial Feedback

A Potential-based Framework for Online Multi-class Learning with Partial Feedback A Potential-based Framework for Online Multi-class Learning with Partial Feedback Shijun Wang Rong Jin Hamed Valizadegan Radiology and Imaging Sciences Computer Science and Engineering Computer Science

More information

Massive Data Classification via Unconstrained Support Vector Machines

Massive Data Classification via Unconstrained Support Vector Machines Massive Data Classification via Unconstrained Support Vector Machines Olvi L. Mangasarian and Michael E. Thompson Computer Sciences Department University of Wisconsin 1210 West Dayton Street Madison, WI

More information

Classification of Bad Accounts in Credit Card Industry

Classification of Bad Accounts in Credit Card Industry Classification of Bad Accounts in Credit Card Industry Chengwei Yuan December 12, 2014 Introduction Risk management is critical for a credit card company to survive in such competing industry. In addition

More information

Machine Learning. CUNY Graduate Center, Spring 2013. Professor Liang Huang. huang@cs.qc.cuny.edu

Machine Learning. CUNY Graduate Center, Spring 2013. Professor Liang Huang. huang@cs.qc.cuny.edu Machine Learning CUNY Graduate Center, Spring 2013 Professor Liang Huang huang@cs.qc.cuny.edu http://acl.cs.qc.edu/~lhuang/teaching/machine-learning Logistics Lectures M 9:30-11:30 am Room 4419 Personnel

More information

Which Is the Best Multiclass SVM Method? An Empirical Study

Which Is the Best Multiclass SVM Method? An Empirical Study Which Is the Best Multiclass SVM Method? An Empirical Study Kai-Bo Duan 1 and S. Sathiya Keerthi 2 1 BioInformatics Research Centre, Nanyang Technological University, Nanyang Avenue, Singapore 639798 askbduan@ntu.edu.sg

More information

Linear Classification. Volker Tresp Summer 2015

Linear Classification. Volker Tresp Summer 2015 Linear Classification Volker Tresp Summer 2015 1 Classification Classification is the central task of pattern recognition Sensors supply information about an object: to which class do the object belong

More information

Early defect identification of semiconductor processes using machine learning

Early defect identification of semiconductor processes using machine learning STANFORD UNIVERISTY MACHINE LEARNING CS229 Early defect identification of semiconductor processes using machine learning Friday, December 16, 2011 Authors: Saul ROSA Anton VLADIMIROV Professor: Dr. Andrew

More information

Linear smoother. ŷ = S y. where s ij = s ij (x) e.g. s ij = diag(l i (x)) To go the other way, you need to diagonalize S

Linear smoother. ŷ = S y. where s ij = s ij (x) e.g. s ij = diag(l i (x)) To go the other way, you need to diagonalize S Linear smoother ŷ = S y where s ij = s ij (x) e.g. s ij = diag(l i (x)) To go the other way, you need to diagonalize S 2 Online Learning: LMS and Perceptrons Partially adapted from slides by Ryan Gabbard

More information

Machine Learning Logistic Regression

Machine Learning Logistic Regression Machine Learning Logistic Regression Jeff Howbert Introduction to Machine Learning Winter 2012 1 Logistic regression Name is somewhat misleading. Really a technique for classification, not regression.

More information

Beating the NCAA Football Point Spread

Beating the NCAA Football Point Spread Beating the NCAA Football Point Spread Brian Liu Mathematical & Computational Sciences Stanford University Patrick Lai Computer Science Department Stanford University December 10, 2010 1 Introduction Over

More information

Logistic Regression for Spam Filtering

Logistic Regression for Spam Filtering Logistic Regression for Spam Filtering Nikhila Arkalgud February 14, 28 Abstract The goal of the spam filtering problem is to identify an email as a spam or not spam. One of the classic techniques used

More information

Detecting Corporate Fraud: An Application of Machine Learning

Detecting Corporate Fraud: An Application of Machine Learning Detecting Corporate Fraud: An Application of Machine Learning Ophir Gottlieb, Curt Salisbury, Howard Shek, Vishal Vaidyanathan December 15, 2006 ABSTRACT This paper explores the application of several

More information

Online learning of multi-class Support Vector Machines

Online learning of multi-class Support Vector Machines IT 12 061 Examensarbete 30 hp November 2012 Online learning of multi-class Support Vector Machines Xuan Tuan Trinh Institutionen för informationsteknologi Department of Information Technology Abstract

More information

Large Linear Classification When Data Cannot Fit In Memory

Large Linear Classification When Data Cannot Fit In Memory Large Linear Classification When Data Cannot Fit In Memory ABSTRACT Hsiang-Fu Yu Dept. of Computer Science National Taiwan University Taipei 106, Taiwan b93107@csie.ntu.edu.tw Kai-Wei Chang Dept. of Computer

More information

Class-specific Sparse Coding for Learning of Object Representations

Class-specific Sparse Coding for Learning of Object Representations Class-specific Sparse Coding for Learning of Object Representations Stephan Hasler, Heiko Wersing, and Edgar Körner Honda Research Institute Europe GmbH Carl-Legien-Str. 30, 63073 Offenbach am Main, Germany

More information

Sparse Online Learning via Truncated Gradient

Sparse Online Learning via Truncated Gradient Sparse Online Learning via Truncated Gradient John Langford Yahoo! Research jl@yahoo-inc.com Lihong Li Department of Computer Science Rutgers University lihong@cs.rutgers.edu Tong Zhang Department of Statistics

More information