Training Robust Support Vector Regression via D. C. Program
Journal of Information & Computational Science 7: 12 (2010)

Kuaini Wang, Ping Zhong, Yaohong Zhao

College of Science, China Agricultural University, Beijing, China

Abstract

The classical support vector machines are sensitive to noise and outliers. In this paper, we propose a truncated quadratic insensitive loss function and develop a robust support vector regression which has a strong ability to suppress the impact of noise and outliers while at the same time keeping sparseness. Since the truncated quadratic insensitive loss function is non-convex and non-differentiable, we construct a smooth loss function, the combination of two Huber loss functions, as its approximation. The resulting optimization problem can be formulated as a difference of convex functions program. We establish a Newton-type algorithm to solve it. Numerical experiments on benchmark datasets show that the proposed algorithm has promising performance.

Keywords: Support Vector Machine; Regression; Loss Function; Robustness; D. C. Program

1 Introduction

Support vector machine (SVM) is a useful tool for machine learning, and it has earned success in various areas ranging from pattern recognition and classification to function estimation and time series prediction [1, 2, 3]. In practice, sampling errors, modeling errors and instrument errors may corrupt the training samples with noise and outliers. The classical SVM yields poor generalization performance in the presence of noise and outliers. There are several kinds of methods to construct robust SVMs. The most commonly used approach builds robust models by assigning weights to the errors of individual training samples [4, 5, 6]. Another approach constructs robust models based on ramp loss functions [7, 8, 9]. In addition, robust models can be constructed via second order cone programming [10, 11, 12]. As is well known, loss functions play an essential role in supervised learning.
One of the most important and popular loss functions is the quadratic loss function, and many SVMs are constructed using it, such as L2-SVM [1] and least squares SVM (LS-SVM) [13]. In this paper, we introduce a non-convex and non-differentiable loss function based on the quadratic insensitive loss function and propose a robust support vector regression (SVR). We smooth the proposed loss function by the combination of two Huber loss functions and formulate the associated non-convex optimization as a difference of convex functions (d.c.) program. The d.c. algorithm (DCA) has been successfully applied to many non-differentiable, non-convex optimization problems, to which it quite often gives global solutions and has proved more robust and efficient than related standard methods, especially in the large-scale setting [14, 15]. We employ the concave-convex procedure [16] and develop a Newton-type algorithm to solve the robust SVR, which explicitly incorporates noise and outlier suppression and sparseness in the training process. Experimental results on benchmark datasets confirm the effectiveness of the proposed algorithm.

The rest of this paper is organized as follows. Section 2 presents SVR in the primal. In Section 3, we propose the non-convex loss function and the robust model. In Section 4, a Newton-type algorithm is developed for solving the robust SVR. Section 5 presents the experimental results on benchmark datasets. Finally, Section 6 gives the conclusions.

(Project supported by the National Nature Science Foundation of China and the Innovation Fund for Graduate Students of China Agricultural University (No. KYCX). Corresponding author's e-mail address: pingsunshine@yahoo.com.cn (Ping Zhong). Copyright 2010 Binary Information Press, December 2010.)

2 Support Vector Regression in the Primal

In this section, we briefly describe L2-SVR in the primal. Considering a regression problem with training samples $\{(x_i, y_i)\}_{i=1}^n$, where $x_i \in \mathbb{R}^d$ is the input sample and $y_i$ is the corresponding target, we can obtain a predictor by solving the following optimization problem:

$$\min_{w,b,\xi,\xi^*} \ \frac{1}{2}\|w\|^2 + C\sum_{i=1}^n (\xi_i^2 + \xi_i^{*2}) \quad (1)$$
$$\text{s.t.} \quad w^\top\varphi(x_i) + b - y_i \le \varepsilon + \xi_i, \quad i = 1,\dots,n \quad (2)$$
$$\qquad\ \ y_i - (w^\top\varphi(x_i) + b) \le \varepsilon + \xi_i^*, \quad i = 1,\dots,n \quad (3)$$

where $\varphi(\cdot)$ is a nonlinear map from the input space to the feature space, and $C$ is the regularization factor which balances the tradeoff between the fitting errors and the model complexity.
Program (1)-(3) can be written as an unconstrained optimization in an associated reproducing kernel Hilbert space $\mathcal{H}$:

$$\min_f \ \frac{1}{2}\|f\|_{\mathcal{H}}^2 + C\sum_{i=1}^n l(f(x_i) - y_i) \quad (4)$$

where $l(z) = (\max(0, |z| - \varepsilon))^2$ with $\varepsilon > 0$ is the quadratic insensitive loss function. For the sake of simplicity, we can drop the bias $b$ without loss of generalization performance of SVR [17]. According to [17], the optimal function for (4) can be expressed as a linear combination of the training samples in the feature space, $f(x) = \sum_{i=1}^n \beta_i k(x, x_i)$, where $k(\cdot,\cdot)$ is a kernel function. Then we have

$$\min_\beta \ L(\beta) = \frac{1}{2}\beta^\top K\beta + C\sum_{i=1}^n l(z_i) \quad (5)$$

where $K$ is the kernel matrix with $K_{ij} = k(x_i, x_j)$, $K_i$ is the $i$th row of $K$, and $z_i = K_i\beta - y_i$. Eq. (5) is the formulation of SVR in the primal.
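As a concrete illustration, the quadratic insensitive loss and the primal objective (5) can be sketched in NumPy as follows (the function names are ours, not from the paper):

```python
import numpy as np

def quad_eps_loss(z, eps):
    """Quadratic epsilon-insensitive loss l(z) = (max(0, |z| - eps))^2."""
    return np.maximum(0.0, np.abs(z) - eps) ** 2

def primal_objective(beta, K, y, C, eps):
    """L(beta) = 0.5 * beta' K beta + C * sum_i l(K_i beta - y_i), Eq. (5)."""
    z = K @ beta - y
    return 0.5 * beta @ K @ beta + C * np.sum(quad_eps_loss(z, eps))
```

Residuals inside the $\varepsilon$-tube contribute nothing, so the loss produces sparse solutions, while residuals outside the tube are penalized quadratically.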
3 Robust Model

Noise and outliers in the training samples tend to cause large residuals. Hence, they exert more influence on the optimal solution of (5), which may make the decision hyperplane of SVR deviate from its proper position and thus deteriorate the generalization performance of SVR. We introduce a non-convex loss function to limit their impact. By setting an upper bound, we get the following loss function:

$$l_\theta(z) = \min\{\theta^2, (\max(0, |z| - \varepsilon))^2\} \quad (6)$$

where $\theta > 0$ is a constant. It is easily seen that $l_\theta(z)$ can control the residuals caused by noise and outliers. However, $l_\theta(z)$ is neither convex nor differentiable, and the resulting optimization problem is difficult to solve. To overcome this dilemma, we first propose a smooth loss function as an approximation of $l_\theta(z)$. To do so, we construct two Huber loss functions $l_1^u(z)$ and $l_2^u(z)$:

$$l_1^u(z) = \begin{cases} 0 & \text{if } |z| \le \varepsilon \\ (|z| - \varepsilon)^2 & \text{if } \varepsilon < |z| \le \varepsilon + \theta \\ \theta[2|z| - (2\varepsilon + \theta)] & \text{if } |z| > \varepsilon + \theta \end{cases} \quad (7)$$

$$l_2^u(z) = \begin{cases} 0 & \text{if } |z| \le \varepsilon + \theta \\ \theta(|z| - \varepsilon - \theta)^2/h & \text{if } \varepsilon + \theta < |z| \le \varepsilon + \theta + h \\ \theta[2|z| - (2\varepsilon + 2\theta + h)] & \text{if } |z| > \varepsilon + \theta + h \end{cases} \quad (8)$$

where $h > 0$ is the Huber parameter. Combining $l_1^u(z)$ and $l_2^u(z)$, we obtain

$$l_{\theta,h}^u(z) = l_1^u(z) - l_2^u(z) = \begin{cases} 0 & \text{if } |z| \le \varepsilon \\ (|z| - \varepsilon)^2 & \text{if } \varepsilon < |z| \le \varepsilon + \theta \\ \theta[2|z| - (2\varepsilon + \theta)] - \theta(|z| - \varepsilon - \theta)^2/h & \text{if } \varepsilon + \theta < |z| \le \varepsilon + \theta + h \\ \theta^2 + \theta h & \text{if } |z| > \varepsilon + \theta + h \end{cases} \quad (9)$$

It is easy to verify that $l_{\theta,h}^u(z)$ is continuous and differentiable. Its shape is shown in Fig. 1. When $h \to 0$, $l_{\theta,h}^u(z)$ approaches $l_\theta(z)$ defined by (6). So $l_{\theta,h}^u(z)$ is a smooth approximation of $l_\theta(z)$. Substituting (9) into (5), we propose the robust model as follows:

$$\min_\beta \ L_{\theta,h}(\beta) = \frac{1}{2}\beta^\top K\beta + C\sum_{i=1}^n l_{\theta,h}^u(z_i) \quad (10)$$

Note that the objective function of (10) is non-convex. Denote $u(\beta) = \frac{1}{2}\beta^\top K\beta + C\sum_{i=1}^n l_1^u(z_i)$ and $v(\beta) = C\sum_{i=1}^n l_2^u(z_i)$. Then optimization problem (10) can be expressed as

$$\min_\beta \ u(\beta) - v(\beta) \quad (11)$$
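A short NumPy sketch of the two Huber pieces and their difference (our own helper names; the constants below are arbitrary test values) makes the construction easy to check numerically:

```python
import numpy as np

def huber1(z, eps, theta):
    """l1(z): Huber loss rising from the eps-tube, Eq. (7)."""
    a = np.abs(z)
    return np.where(a <= eps, 0.0,
           np.where(a <= eps + theta, (a - eps) ** 2,
                    theta * (2 * a - (2 * eps + theta))))

def huber2(z, eps, theta, h):
    """l2(z): Huber loss rising from eps + theta, Eq. (8)."""
    a = np.abs(z)
    return np.where(a <= eps + theta, 0.0,
           np.where(a <= eps + theta + h, theta * (a - eps - theta) ** 2 / h,
                    theta * (2 * a - (2 * eps + 2 * theta + h))))

def smooth_loss(z, eps, theta, h):
    """l_{theta,h}(z) = l1(z) - l2(z): smooth approximation of the
    truncated quadratic insensitive loss, Eq. (9)."""
    return huber1(z, eps, theta) - huber2(z, eps, theta, h)
```

For $|z| > \varepsilon + \theta + h$ the difference is the constant $\theta^2 + \theta h$, which is exactly the bounded plateau that suppresses outliers.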
Fig. 1: Smooth non-convex loss function $l_{\theta,h}^u(z)$; the horizontal axis marks $-\varepsilon-\theta-h$, $-\varepsilon-\theta$, $-\varepsilon$, $0$, $\varepsilon$, $\varepsilon+\theta$ and $\varepsilon+\theta+h$, delimiting the ESV, SV$_2$, SV$_1$ and NSV regions.

(11) is a d.c. program since $u$ and $v$ are convex functions. In the d.c. programming literature, the DCA [14, 15] was proposed for solving a general d.c. program of the form $\min\{u(x) - v(x) : x \in \mathbb{R}^n\}$ with $u$ and $v$ proper lower semi-continuous convex functions, which form a larger class than the class of differentiable functions. DCA iteratively solves two sequences of convex programs, called the primal and dual programs, in succession, such that the solution of the primal initializes the dual and vice versa. Since there are as many DCAs as there are d.c. decompositions, suitable choices of the d.c. decomposition of the objective function and of the initial point are important for computational efficiency. It can be shown that if $v$ is differentiable, then DCA reduces exactly to the concave-convex procedure (CCCP) [16]. The CCCP algorithm is an iterative procedure that solves a sequence of convex programs: $x^{t+1} \in \arg\min_x \{u(x) - x^\top \nabla v(x^t)\}$. The resulting algorithm owns global convergence behavior, i.e., for any initialization, the sequence generated by CCCP converges to a stationary point of the d.c. program. In our program (11), since $v$ is differentiable, we can solve it by CCCP. The optimal solution $\beta$ of (11) can be obtained by iteratively solving the following optimization problem:

$$\beta^{t+1} = \arg\min_\beta \{u(\beta) - \beta^\top \nabla v(\beta^t)\} \quad (12)$$

where $\nabla v(\beta^t)$ is the derivative of $v$ with respect to $\beta$ at the $t$th iteration:

$$\nabla v(\beta^t) = \frac{\partial v(\beta^t)}{\partial \beta} = C\sum_{i=1}^n \frac{\partial l_2^u(z_i^t)}{\partial z_i}\frac{\partial z_i}{\partial \beta} = -C\sum_{i=1}^n \eta_i^t K_i^\top \quad (13)$$

where

$$\eta_i^t = \begin{cases} 0 & \text{if } |z_i^t| \le \varepsilon + \theta \\ \dfrac{2\theta}{h}\big[(\varepsilon + \theta)s_i^t - z_i^t\big] & \text{if } \varepsilon + \theta < |z_i^t| \le \varepsilon + \theta + h \\ -2\theta s_i^t & \text{if } |z_i^t| > \varepsilon + \theta + h \end{cases} \quad (14)$$
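The coefficient $\eta_i^t$ of Eq. (14) can be sketched directly (a NumPy helper of ours; note it is piecewise linear and continuous in $z$):

```python
import numpy as np

def eta(z, eps, theta, h):
    """eta_i^t of Eq. (14): minus the derivative of l2 at z_i^t,
    used to linearize the concave part in each CCCP step."""
    a = np.abs(z)
    s = np.sign(z)
    return np.where(a <= eps + theta, 0.0,
           np.where(a <= eps + theta + h,
                    (2 * theta / h) * ((eps + theta) * s - z),
                    -2 * theta * s))
```

At $|z| = \varepsilon + \theta + h$ the middle branch equals $-2\theta s$, so the two outer branches join continuously.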
with $s_i^t = \operatorname{sign}(z_i^t) = \begin{cases} 1 & \text{if } z_i^t \ge 0 \\ -1 & \text{if } z_i^t < 0 \end{cases}$. In each iteration, we only need to solve the following convex optimization problem:

$$\min_\beta \ L_{\theta,h}(\beta) = u(\beta) + C\sum_{i=1}^n \eta_i^t K_i\beta \quad (15)$$

4 Newton Algorithm for Robust SVR

Since (15) is a convex optimization problem, we can establish a Newton-type algorithm to solve it. First, we divide the training samples into four groups according to $z_i^t = K_i\beta^t - y_i$ at the $t$th iteration:

(1) The samples with $|z_i^t| \le \varepsilon$ are regarded as non-support vectors, lying in the NSV region illustrated in Fig. 1; the number of training samples in this region is denoted by $|\mathrm{NSV}|$.

(2) The samples with $\varepsilon < |z_i^t| \le \varepsilon + \theta + h$ are regarded as support vectors. We further divide them into two subgroups: the samples with $\varepsilon < |z_i^t| \le \varepsilon + \theta$ lie in the SV$_1$ region, and the samples with $\varepsilon + \theta < |z_i^t| \le \varepsilon + \theta + h$ lie in the SV$_2$ region. We denote the numbers of samples in these two subgroups by $|\mathrm{SV}_1|$ and $|\mathrm{SV}_2|$, respectively.

(3) The samples with $|z_i^t| > \varepsilon + \theta + h$ are regarded as error support vectors, which lie in the ESV region shown in Fig. 1; the number of samples in this region is denoted by $|\mathrm{ESV}|$.

For convenience of expression, we arrange the samples in the order SV$_1$, SV$_2$, ESV, NSV. Let $I_1$ and $I_2$ be $n \times n$ diagonal matrices, where $I_1$ has its first $|\mathrm{SV}_1|$ diagonal entries equal to 1 and the others 0, and $I_2$ has its first $|\mathrm{SV}_1|$ entries equal to 0, followed by $|\mathrm{SV}_2|$ entries equal to 1, and 0 for the rest. In order to develop a Newton-type algorithm for (15), we need to calculate the gradient and Hessian of the objective function of (15).
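The four-way split of the samples by $|z_i^t|$ can be sketched as follows (our own helper; it returns index sets rather than the reordered matrices, in line with the implementation note at the end of Section 4):

```python
import numpy as np

def partition_regions(z, eps, theta, h):
    """Split samples by |z_i^t| into the NSV, SV1, SV2 and ESV index sets
    used by the Newton step."""
    a = np.abs(z)
    nsv = np.where(a <= eps)[0]
    sv1 = np.where((a > eps) & (a <= eps + theta))[0]
    sv2 = np.where((a > eps + theta) & (a <= eps + theta + h))[0]
    esv = np.where(a > eps + theta + h)[0]
    return nsv, sv1, sv2, esv
```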
The gradient is

$$\nabla L_{\theta,h}(\beta) = K\beta + 2CK\Big[I_1(K\beta - y - \varepsilon s) + \theta I_2 s - \frac{\theta}{h} I_2\big(z^t - (\varepsilon + \theta)s^t\big)\Big] \quad (16)$$

where $y = [y_1, \dots, y_n]^\top$, $s = [\operatorname{sign}(z_1), \dots, \operatorname{sign}(z_n)]^\top$, $z^t = [z_1^t, \dots, z_n^t]^\top$, and $s^t = [s_1^t, \dots, s_n^t]^\top$, and the Hessian is

$$G = K + 2CKI_1K \quad (17)$$

Then the solution $\beta^{t+1}$ of (15) at the $t$th CCCP iteration can be updated by

$$\beta^{t+1} = \beta^t - G^{-1}\nabla L_{\theta,h}(\beta^t) = 2C(I_n + 2CI_1K)^{-1}\Big[I_1(y + \varepsilon s^t) - \theta I_2 s^t + \frac{\theta}{h} I_2\big(z^t - (\varepsilon + \theta)s^t\big)\Big] \quad (18)$$

where $I_n$ denotes the $n \times n$ identity matrix. In Eq. (18), we need to calculate the inverse of $I_n + 2CI_1K$. Notice that it is a sparse matrix:

$$I_n + 2CI_1K = \begin{bmatrix} I_{|\mathrm{SV}_1|} + 2CK_{\mathrm{SV}_1,\mathrm{SV}_1} & 2CK_{\mathrm{SV}_1,\mathrm{SV}_2} & 2CK_{\mathrm{SV}_1,\mathrm{ESV}} & 2CK_{\mathrm{SV}_1,\mathrm{NSV}} \\ 0 & I_{|\mathrm{SV}_2|} & 0 & 0 \\ 0 & 0 & I_{|\mathrm{ESV}|} & 0 \\ 0 & 0 & 0 & I_{|\mathrm{NSV}|} \end{bmatrix}$$
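Evaluated at the current iterate $\beta^t$ (so that $z = z^t$ and $s = s^t$), the gradient (16) and Hessian (17) can be sketched in NumPy as follows (a helper of ours; the ESV samples contribute nothing at $\beta^t$ because their $l_1^u$ derivative cancels against $\eta_i^t$):

```python
import numpy as np

def gradient_hessian(beta_t, K, y, C, eps, theta, h):
    """Gradient (Eq. (16)) and Hessian (Eq. (17)) of subproblem (15) at
    beta_t; d1 and d2 are the diagonals of I1 and I2."""
    z = K @ beta_t - y
    s = np.sign(z)
    a = np.abs(z)
    d1 = ((a > eps) & (a <= eps + theta)).astype(float)              # SV1
    d2 = ((a > eps + theta) & (a <= eps + theta + h)).astype(float)  # SV2
    inner = d1 * (z - eps * s) + theta * d2 * (s - (z - (eps + theta) * s) / h)
    grad = K @ beta_t + 2 * C * K @ inner
    hess = K + 2 * C * K @ (d1[:, None] * K)   # K + 2C K I1 K
    return grad, hess
```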
Its inverse can be derived as follows:

$$(I_n + 2CI_1K)^{-1} = \begin{bmatrix} A & -2CAK_{\mathrm{SV}_1,\mathrm{SV}_2} & -2CAK_{\mathrm{SV}_1,\mathrm{ESV}} & -2CAK_{\mathrm{SV}_1,\mathrm{NSV}} \\ 0 & I_{|\mathrm{SV}_2|} & 0 & 0 \\ 0 & 0 & I_{|\mathrm{ESV}|} & 0 \\ 0 & 0 & 0 & I_{|\mathrm{NSV}|} \end{bmatrix} \quad (19)$$

where $A = (I_{|\mathrm{SV}_1|} + 2CK_{\mathrm{SV}_1,\mathrm{SV}_1})^{-1}$. Substituting (19) into (18), we get the optimal solution at the $(t+1)$th iteration:

$$\beta^{t+1} = \begin{bmatrix} \beta^{t+1}_{\mathrm{SV}_1} \\ \beta^{t+1}_{\mathrm{SV}_2} \\ 0 \\ 0 \end{bmatrix} = 2C\begin{bmatrix} A\Big\{y_{\mathrm{SV}_1} + \varepsilon s^t_{\mathrm{SV}_1} + 2C\theta K_{\mathrm{SV}_1,\mathrm{SV}_2}\big[s^t_{\mathrm{SV}_2} + \big((\varepsilon + \theta)s^t_{\mathrm{SV}_2} - z^t_{\mathrm{SV}_2}\big)/h\big]\Big\} \\ -\theta\big[s^t_{\mathrm{SV}_2} + \big((\varepsilon + \theta)s^t_{\mathrm{SV}_2} - z^t_{\mathrm{SV}_2}\big)/h\big] \\ 0 \\ 0 \end{bmatrix} \quad (20)$$

Eq. (20) shows that the samples in the ESV region have no influence on the optimal solution, because the corresponding elements of $\beta^{t+1}$ are fixed at 0. Considering that noise and outliers mostly lie in the ESV region, the robust SVR is much less sensitive to them and thus gains better generalization performance. In addition, the robust SVR also keeps sparseness, since the elements of $\beta^{t+1}$ in the NSV region are fixed at 0.

Algorithm NRSVR (Newton-type algorithm for robust SVR)

Given the training samples $S = \{(x_i, y_i)\}_{i=1}^n$, the kernel matrix $K$, a small positive constant $\rho$, and the predefined constants $\varepsilon$, $\theta$, $h$:

1. Initialization: solve $\beta^0$ using a classical SVM toolbox on a small subset of $S$. Let $t = 0$ and divide the training samples into four regions according to $K_i\beta^0 - y_i$;

2. Rearrange the regions in the order SV$_1$, SV$_2$, ESV, NSV, and adjust $K$ and $y$ correspondingly. Calculate the gradient $\nabla L_{\theta,h}(\beta^t)$ and check whether $\|\nabla L_{\theta,h}(\beta^t)\| \le \rho$. If so, stop; else go to the next step;

3. Compute $\beta^{t+1}$ according to Eq. (20);

4. Split the training samples into four regions according to $K_i\beta^{t+1} - y_i$. Set $t = t + 1$ and go to step 2.

Notice that in the above procedure, we need not actually reorder $K$ and $y$ during the computation in step 2. In fact, we only need to record the indices of the samples in the different groups; when they are required, we extract the corresponding rows or columns from the original matrices or vectors.
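The loop above can be sketched end to end. This is a simplified version of our own: it starts from $\beta^0 = 0$ instead of a toolbox solution, solves the Newton system with a dense solve plus a tiny ridge rather than the block inverse of Eqs. (19)-(20), and evaluates the gradient at the current iterate, where the ESV terms vanish:

```python
import numpy as np

def nrsvr(K, y, C, eps, theta, h, rho=1e-6, max_iter=50):
    """Sketch of the NRSVR iteration (one Newton step per CCCP step).
    Caveat from the paper: a start point with every sample in NSV or ESV
    would stall at beta = 0."""
    n = len(y)
    beta = np.zeros(n)
    for _ in range(max_iter):
        z = K @ beta - y
        s = np.sign(z)
        a = np.abs(z)
        d1 = ((a > eps) & (a <= eps + theta)).astype(float)              # SV1
        d2 = ((a > eps + theta) & (a <= eps + theta + h)).astype(float)  # SV2
        inner = d1 * (z - eps * s) + theta * d2 * (s - (z - (eps + theta) * s) / h)
        grad = K @ beta + 2 * C * K @ inner          # Eq. (16) at beta^t
        if np.linalg.norm(grad) <= rho:
            break
        hess = K + 2 * C * K @ (d1[:, None] * K)     # Eq. (17)
        beta = beta - np.linalg.solve(hess + 1e-10 * np.eye(n), grad)
    return beta
```

With $K = I$ the problem decouples per sample, so the result is easy to verify by hand: for $y_i$ in the SV$_1$ range, the minimizer of $\frac{1}{2}\beta_i^2 + C(|y_i| - \varepsilon - |\beta_i|)^2$ is recovered after a single Newton step.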
In practice, we choose the starting point $\beta^0$ such that not all $z_i^0 = K_i\beta^0 - y_i$ satisfy $|z_i^0| \le \varepsilon$ or $|z_i^0| > \varepsilon + \theta + h$, since the case $|z_i^0| \le \varepsilon$ or $|z_i^0| > \varepsilon + \theta + h$ for all $i$ implies $\beta^t = 0$ for all $t$.

The objective function $L_{\theta,h}(\beta)$ of (10) monotonically decreases along the sequence $\{\beta^t\}$ generated by NRSVR. In fact, if $\beta^{t+1}$ is the optimal solution of (15) at the $t$th iteration, then

$$u(\beta^{t+1}) + C\sum_{i=1}^n \eta_i^t K_i\beta^{t+1} \le u(\beta^t) + C\sum_{i=1}^n \eta_i^t K_i\beta^t \quad (21)$$
Since $v(\beta)$ is a convex function, we have

$$v(\beta^{t+1}) - v(\beta^t) \ge \nabla v(\beta^t)^\top(\beta^{t+1} - \beta^t) = C\sum_{i=1}^n \eta_i^t K_i\beta^t - C\sum_{i=1}^n \eta_i^t K_i\beta^{t+1} \quad (22)$$

From (21) and (22), we obtain $L_{\theta,h}(\beta^{t+1}) \le L_{\theta,h}(\beta^t)$. In addition, obviously $L_{\theta,h}(\beta) \ge 0$. Hence, according to the analysis in [16], NRSVR converges.

Next, we discuss the computational complexity of NRSVR. Since the most time-consuming stage is the iterations, we consider only the complexity of one iteration. In step 2, the complexity of computing $\nabla L_{\theta,h}(\beta)$ is $O(n(|\mathrm{SV}_1| + |\mathrm{SV}_2|))$. In step 3, the cost of updating $\beta^t$ is $\max\{O(|\mathrm{SV}_1|^3),\ O(|\mathrm{SV}_1|(|\mathrm{SV}_1| + |\mathrm{SV}_2|))\}$. Hence, the total computational complexity is $O(n(|\mathrm{SV}_1| + |\mathrm{SV}_2|) + |\mathrm{SV}_1|^3)$, which is comparable with those of algorithms based on convex loss functions [1, 13].

5 Numerical Experiments and Analysis

In order to verify the robustness of the proposed algorithm, we compared NRSVR with LS-SVR and L2-SVR on several benchmark datasets. The Gaussian kernel $k(x_i, x_j) = \exp(-\|x_i - x_j\|^2/\sigma^2)$ was used in the experiments. There are five parameters: $C$, $\sigma$, $\varepsilon$, $\theta$, and $h$. LS-SVR needs the first two parameters, L2-SVR needs the first three, and the last two are introduced by NRSVR. We searched for the optimal parameters $(C, \sigma, \varepsilon, \theta, h)$ over the sets $\{2^{-10}, \dots, 2^{10}\} \times \{2^{-10}, \dots, 2^{10}\} \times \{10^{-3}, \dots, 10^{-1}\} \times \{0.001, 0.005, 0.01, 0.05, 0.1, 0.15, 0.2, 0.25, 0.3, 0.35, 0.4, 0.45\} \times \{0.001, 0.005, 0.01, 0.05, 0.1\}$ by fivefold cross validation. We adopted three popular criteria, root mean square error (RMSE), mean absolute error (MAE), and mean relative error (MRE), to evaluate the generalization performance of the three algorithms. All the experiments were carried out on an Intel Pentium IV 3.00 GHz PC with 2 GB of RAM using Matlab 7.0 under Microsoft Windows XP. We tested the three algorithms on a collection of seven benchmark datasets from the UCI¹ and StatLib² repositories. Pyrim, Triazines, AutoMPG, and Boston housing are taken from UCI.
Pollution, Bodyfat, and Concrete are taken from StatLib. In order to test the robustness of the three algorithms, 20% large noise was added to each dataset. For each dataset, some samples were randomly chosen for training, and the remaining samples were employed for testing. The specific numbers are listed in the TrNum and TeNum columns of Table 1, respectively. We used the same training and test sets for the three algorithms on each dataset. The experimental results are summarized in Table 1. It can be seen that NRSVR gains the best performance among the three algorithms on all datasets.

Next, we discuss the influence of the parameters $\theta$ and $h$ introduced in our proposed NRSVR. $h$ is a Huber parameter used to smooth the non-convex loss function, and its value is usually small. In our experience, $h = 10^{-3}$ is appropriate. The parameter $\theta$ is introduced to limit the upper bound of the loss function. In general, it should be neither too large nor too small. If $\theta$ is too large, noise and outliers can easily be treated as support vectors, which will not only reduce the prediction accuracy of NRSVR but also aggravate the testing burden because more support vectors appear in the optimal solution. If $\theta$ is too small, some normal samples are treated as outliers in the training phase and do not take part in determining the decision hyperplane. This results in

¹ Available from URL: http://archive.ics.uci.edu/ml/.
² Available from URL: http://lib.stat.cmu.edu/datasets/.
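The three evaluation criteria can be sketched as follows (our own helper; MRE as written assumes no zero targets):

```python
import numpy as np

def criteria(y_true, y_pred):
    """RMSE, MAE and MRE, the three criteria used in the experiments."""
    err = y_pred - y_true
    rmse = np.sqrt(np.mean(err ** 2))
    mae = np.mean(np.abs(err))
    mre = np.mean(np.abs(err) / np.abs(y_true))  # assumes y_true has no zeros
    return rmse, mae, mre
```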
Table 1: Experimental results on benchmark datasets

Dataset          Algorithm   RMSE   MAE   MRE   TrNum   TeNum
Pollution        LS-SVR
                 L2-SVR
                 NRSVR
Pyrim            LS-SVR
                 L2-SVR
                 NRSVR
Triazines        LS-SVR
                 L2-SVR
                 NRSVR
Bodyfat          LS-SVR
                 L2-SVR
                 NRSVR
AutoMPG          LS-SVR
                 L2-SVR
                 NRSVR
Boston housing   LS-SVR
                 L2-SVR
                 NRSVR
Concrete         LS-SVR
                 L2-SVR
                 NRSVR

poor generalization performance. Therefore, we need to find a suitable value that suppresses the impact of outliers while at the same time keeping good generalization performance. We took the Pollution and Pyrim datasets as examples to illustrate the influence of these two parameters. When one parameter is analyzed, the other parameters are fixed. The effects of $\theta$ and $h$ on the RMSE values for the two datasets are shown in Figs. 2 and 3, respectively. The results validate the above analysis.

6 Conclusion

In this paper, we propose a non-convex and non-differentiable loss function and develop a robust support vector regression which has a strong ability to suppress the impact of noise and outliers and also keeps sparseness. We construct a smooth loss function, the combination of two Huber loss functions, to approximate the non-convex loss function. The resulting optimization problem can be formulated as a d.c. program. We employ the concave-convex procedure and develop a Newton-type algorithm to solve it. Numerical experiments on benchmark datasets show the effectiveness of the proposed algorithm. In this paper, we focus only on constructing the robust model based on the truncated quadratic loss function. Further research is required to discuss the general form of non-convex loss
function in order to establish a general robust model.

Fig. 2: Influence of $\theta$ (left graph) and $h$ (right graph) on the RMSE values for Pollution

Fig. 3: Influence of $\theta$ (left graph) and $h$ (right graph) on the RMSE values for Pyrim

References

[1] N. Cristianini and J. Shawe-Taylor, An Introduction to Support Vector Machines, Cambridge University Press, 2000

[2] B. Schölkopf and A. J. Smola, Learning with Kernels, MIT Press, 2002

[3] V. N. Vapnik, The Nature of Statistical Learning Theory, Springer-Verlag, New York, 1995

[4] J. Suykens, J. De Brabanter, and L. Lukas, Weighted least squares support vector machines: robustness and sparse approximation, Neurocomputing 48 (2002)

[5] C. Lin and S. Wang, Fuzzy support vector machines, IEEE Transactions on Neural Networks 13 (2002)

[6] H. Huang and Y. Liu, Fuzzy support vector machines for pattern recognition and data mining, International Journal of Fuzzy Systems 4 (2002) 3-12

[7] R. Collobert, F. Sinz, J. Weston, and L. Bottou, Trading convexity for scalability, in: Proceedings of the 23rd International Conference on Machine Learning, ACM Press, 2006
[8] L. Xu, K. Crammer, and D. Schuurmans, Robust support vector machine training via convex outlier ablation, in: Proceedings of the 21st National Conference on Artificial Intelligence, 2006

[9] S. Yang and B. Hu, A stagewise least square loss function for classification, in: Proceedings of the 2008 SIAM International Conference on Data Mining, 2008

[10] T. B. Trafalis and R. C. Gilbert, Robust classification and regression using support vector machines, European Journal of Operational Research 173 (2006)

[11] P. Zhong and M. Fukushima, Second order cone programming formulations for robust multi-class classification, Neural Computation 19 (2007)

[12] P. Zhong and L. Wang, Support vector regression with input data uncertainty, International Journal of Innovative Computing, Information and Control 4 (2008)

[13] J. A. K. Suykens and J. Vandewalle, Benchmarking least squares support vector machine classifiers, Machine Learning 54 (2004) 5-32

[14] P. D. Tao and L. T. H. An, D.C. optimization algorithms for solving the trust region subproblem, SIAM Journal on Optimization 8 (1998)

[15] L. T. H. An and P. D. Tao, The DC (difference of convex functions) programming and DCA revisited with DC models of real world nonconvex optimization problems, Annals of Operations Research 133 (2005)

[16] A. L. Yuille and A. Rangarajan, The concave-convex procedure, Neural Computation 15 (2003)

[17] O. Chapelle, Training a support vector machine in the primal, Neural Computation 19 (2007)
More informationM(0) = 1 M(1) = 2 M(h) = M(h 1) + M(h 2) + 1 (h > 1)
Insertion and Deletion in VL Trees Submitted in Partial Fulfillment of te Requirements for Dr. Eric Kaltofen s 66621: nalysis of lgoritms by Robert McCloskey December 14, 1984 1 ackground ccording to Knut
More informationOPTIMAL DISCONTINUOUS GALERKIN METHODS FOR THE ACOUSTIC WAVE EQUATION IN HIGHER DIMENSIONS
OPTIMAL DISCONTINUOUS GALERKIN METHODS FOR THE ACOUSTIC WAVE EQUATION IN HIGHER DIMENSIONS ERIC T. CHUNG AND BJÖRN ENGQUIST Abstract. In tis paper, we developed and analyzed a new class of discontinuous
More informationSAT Math Must-Know Facts & Formulas
SAT Mat Must-Know Facts & Formuas Numbers, Sequences, Factors Integers:..., -3, -2, -1, 0, 1, 2, 3,... Rationas: fractions, tat is, anyting expressabe as a ratio of integers Reas: integers pus rationas
More information2.23 Gambling Rehabilitation Services. Introduction
2.23 Gambling Reabilitation Services Introduction Figure 1 Since 1995 provincial revenues from gambling activities ave increased over 56% from $69.2 million in 1995 to $108 million in 2004. Te majority
More informationMATHEMATICS FOR ENGINEERING DIFFERENTIATION TUTORIAL 1 - BASIC DIFFERENTIATION
MATHEMATICS FOR ENGINEERING DIFFERENTIATION TUTORIAL 1 - BASIC DIFFERENTIATION Tis tutorial is essential pre-requisite material for anyone stuing mecanical engineering. Tis tutorial uses te principle of
More informationTis Problem and Retail Inventory Management
Optimizing Inventory Replenisment of Retail Fasion Products Marsall Fiser Kumar Rajaram Anant Raman Te Warton Scool, University of Pennsylvania, 3620 Locust Walk, 3207 SH-DH, Piladelpia, Pennsylvania 19104-6366
More information1.6. Analyse Optimum Volume and Surface Area. Maximum Volume for a Given Surface Area. Example 1. Solution
1.6 Analyse Optimum Volume and Surface Area Estimation and oter informal metods of optimizing measures suc as surface area and volume often lead to reasonable solutions suc as te design of te tent in tis
More informationStrategic trading and welfare in a dynamic market. Dimitri Vayanos
LSE Researc Online Article (refereed) Strategic trading and welfare in a dynamic market Dimitri Vayanos LSE as developed LSE Researc Online so tat users may access researc output of te Scool. Copyrigt
More informationSimple and efficient online algorithms for real world applications
Simple and efficient online algorithms for real world applications Università degli Studi di Milano Milano, Italy Talk @ Centro de Visión por Computador Something about me PhD in Robotics at LIRA-Lab,
More informationOn Convergence Rate of Concave-Convex Procedure
On Convergence Rate of Concave-Conve Procedure Ian En-Hsu Yen r00922017@csie.ntu.edu.tw Po-Wei Wang b97058@csie.ntu.edu.tw Nanyun Peng Johns Hopkins University Baltimore, MD 21218 npeng1@jhu.edu Shou-De
More informationFactoring Synchronous Grammars By Sorting
Factoring Syncronous Grammars By Sorting Daniel Gildea Computer Science Dept. Uniersity of Rocester Rocester, NY Giorgio Satta Dept. of Information Eng g Uniersity of Padua I- Padua, Italy Hao Zang Computer
More informationAn Introduction to Machine Learning
An Introduction to Machine Learning L5: Novelty Detection and Regression Alexander J. Smola Statistical Machine Learning Program Canberra, ACT 0200 Australia Alex.Smola@nicta.com.au Tata Institute, Pune,
More informationCHARACTERISTICS IN FLIGHT DATA ESTIMATION WITH LOGISTIC REGRESSION AND SUPPORT VECTOR MACHINES
CHARACTERISTICS IN FLIGHT DATA ESTIMATION WITH LOGISTIC REGRESSION AND SUPPORT VECTOR MACHINES Claus Gwiggner, Ecole Polytechnique, LIX, Palaiseau, France Gert Lanckriet, University of Berkeley, EECS,
More informationResearch on Risk Assessment of PFI Projects Based on Grid-fuzzy Borda Number
Researc on Risk Assessent of PFI Projects Based on Grid-fuzzy Borda Nuber LI Hailing 1, SHI Bensan 2 1. Scool of Arcitecture and Civil Engineering, Xiua University, Cina, 610039 2. Scool of Econoics and
More informationA New Cement to Glue Nonconforming Grids with Robin Interface Conditions: The Finite Element Case
A New Cement to Glue Nonconforming Grids wit Robin Interface Conditions: Te Finite Element Case Martin J. Gander, Caroline Japet 2, Yvon Maday 3, and Frédéric Nataf 4 McGill University, Dept. of Matematics
More information100 Austrian Journal of Statistics, Vol. 32 (2003), No. 1&2, 99-129
AUSTRIAN JOURNAL OF STATISTICS Volume 3 003, Number 1&, 99 19 Adaptive Regression on te Real Line in Classes of Smoot Functions L.M. Artiles and B.Y. Levit Eurandom, Eindoven, te Neterlands Queen s University,
More informationPre-trial Settlement with Imperfect Private Monitoring
Pre-trial Settlement wit Imperfect Private Monitoring Mostafa Beskar University of New Hampsire Jee-Hyeong Park y Seoul National University July 2011 Incomplete, Do Not Circulate Abstract We model pretrial
More information- 1 - Handout #22 May 23, 2012 Huffman Encoding and Data Compression. CS106B Spring 2012. Handout by Julie Zelenski with minor edits by Keith Schwarz
CS106B Spring 01 Handout # May 3, 01 Huffman Encoding and Data Compression Handout by Julie Zelenski wit minor edits by Keit Scwarz In te early 1980s, personal computers ad ard disks tat were no larger
More information(Quasi-)Newton methods
(Quasi-)Newton methods 1 Introduction 1.1 Newton method Newton method is a method to find the zeros of a differentiable non-linear function g, x such that g(x) = 0, where g : R n R n. Given a starting
More informationMultigrid computational methods are
M ULTIGRID C OMPUTING Wy Multigrid Metods Are So Efficient Originally introduced as a way to numerically solve elliptic boundary-value problems, multigrid metods, and teir various multiscale descendants,
More informationA Hybrid Forecasting Methodology using Feature Selection and Support Vector Regression
A Hybrid Forecasting Methodology using Feature Selection and Support Vector Regression José Guajardo, Jaime Miranda, and Richard Weber, Department of Industrial Engineering, University of Chile Abstract
More informationMATHEMATICAL ENGINEERING TECHNICAL REPORTS. DC Algorithm for Extended Robust Support Vector Machine
MATHEMATICAL ENGINEERING TECHNICAL REPORTS DC Algorithm for Extended Robust Support Vector Machine Shuhei FUJIWARA, Akiko TAKEDA and Takafumi KANAMORI METR 204 38 December 204 DEPARTMENT OF MATHEMATICAL
More informationA Simple Introduction to Support Vector Machines
A Simple Introduction to Support Vector Machines Martin Law Lecture for CSE 802 Department of Computer Science and Engineering Michigan State University Outline A brief history of SVM Large-margin linear
More informationA strong credit score can help you score a lower rate on a mortgage
NET GAIN Scoring points for your financial future AS SEEN IN USA TODAY S MONEY SECTION, JULY 3, 2007 A strong credit score can elp you score a lower rate on a mortgage By Sandra Block Sales of existing
More informationDerivatives Math 120 Calculus I D Joyce, Fall 2013
Derivatives Mat 20 Calculus I D Joyce, Fall 203 Since we ave a good understanding of its, we can develop derivatives very quickly. Recall tat we defined te derivative f x of a function f at x to be te
More informationNote nine: Linear programming CSE 101. 1 Linear constraints and objective functions. 1.1 Introductory example. Copyright c Sanjoy Dasgupta 1
Copyrigt c Sanjoy Dasgupta Figure. (a) Te feasible region for a linear program wit two variables (see tet for details). (b) Contour lines of te objective function: for different values of (profit). Te
More informationOn Distributed Key Distribution Centers and Unconditionally Secure Proactive Verifiable Secret Sharing Schemes Based on General Access Structure
On Distributed Key Distribution Centers and Unconditionally Secure Proactive Verifiable Secret Saring Scemes Based on General Access Structure (Corrected Version) Ventzislav Nikov 1, Svetla Nikova 2, Bart
More informationMulticlass Classification. 9.520 Class 06, 25 Feb 2008 Ryan Rifkin
Multiclass Classification 9.520 Class 06, 25 Feb 2008 Ryan Rifkin It is a tale Told by an idiot, full of sound and fury, Signifying nothing. Macbeth, Act V, Scene V What Is Multiclass Classification? Each
More informationA Comparative Study of the Pickup Method and its Variations Using a Simulated Hotel Reservation Data
A Comparative Study of the Pickup Method and its Variations Using a Simulated Hotel Reservation Data Athanasius Zakhary, Neamat El Gayar Faculty of Computers and Information Cairo University, Giza, Egypt
More informationMath 113 HW #5 Solutions
Mat 3 HW #5 Solutions. Exercise.5.6. Suppose f is continuous on [, 5] and te only solutions of te equation f(x) = 6 are x = and x =. If f() = 8, explain wy f(3) > 6. Answer: Suppose we ad tat f(3) 6. Ten
More informationGURLS: A Least Squares Library for Supervised Learning
Journal of Machine Learning Research 14 (2013) 3201-3205 Submitted 1/12; Revised 2/13; Published 10/13 GURLS: A Least Squares Library for Supervised Learning Andrea Tacchetti Pavan K. Mallapragada Center
More informationFUZZY CLUSTERING ANALYSIS OF DATA MINING: APPLICATION TO AN ACCIDENT MINING SYSTEM
International Journal of Innovative Computing, Information and Control ICIC International c 0 ISSN 34-48 Volume 8, Number 8, August 0 pp. 4 FUZZY CLUSTERING ANALYSIS OF DATA MINING: APPLICATION TO AN ACCIDENT
More informationA New Quantitative Behavioral Model for Financial Prediction
2011 3rd International Conference on Information and Financial Engineering IPEDR vol.12 (2011) (2011) IACSIT Press, Singapore A New Quantitative Behavioral Model for Financial Prediction Thimmaraya Ramesh
More informationA Multigrid Tutorial part two
A Multigrid Tutorial part two William L. Briggs Department of Matematics University of Colorado at Denver Van Emden Henson Center for Applied Scientific Computing Lawrence Livermore National Laboratory
More information6. Cholesky factorization
6. Cholesky factorization EE103 (Fall 2011-12) triangular matrices forward and backward substitution the Cholesky factorization solving Ax = b with A positive definite inverse of a positive definite matrix
More informationThe modelling of business rules for dashboard reporting using mutual information
8 t World IMACS / MODSIM Congress, Cairns, Australia 3-7 July 2009 ttp://mssanz.org.au/modsim09 Te modelling of business rules for dasboard reporting using mutual information Gregory Calbert Command, Control,
More informationOnline Multi-Task Learning for Policy Gradient Methods
Online Multi-ask Learning for Policy Gradient Metods aitam Bou Ammar AIAMB@SEASUPENNEDU Eric Eaton EEAON@CISUPENNEDU University of Pennsylvania, Computer and Information Science Department, Piladelpia,
More informationDUOL: A Double Updating Approach for Online Learning
: A Double Updating Approach for Online Learning Peilin Zhao School of Comp. Eng. Nanyang Tech. University Singapore 69798 zhao6@ntu.edu.sg Steven C.H. Hoi School of Comp. Eng. Nanyang Tech. University
More informationLogistic Regression. Jia Li. Department of Statistics The Pennsylvania State University. Logistic Regression
Logistic Regression Department of Statistics The Pennsylvania State University Email: jiali@stat.psu.edu Logistic Regression Preserve linear classification boundaries. By the Bayes rule: Ĝ(x) = arg max
More informationSWITCH T F T F SELECT. (b) local schedule of two branches. (a) if-then-else construct A & B MUX. one iteration cycle
768 IEEE RANSACIONS ON COMPUERS, VOL. 46, NO. 7, JULY 997 Compile-ime Sceduling of Dynamic Constructs in Dataæow Program Graps Soonoi Ha, Member, IEEE and Edward A. Lee, Fellow, IEEE Abstract Sceduling
More informationAnalyzing the Effects of Insuring Health Risks:
Analyzing te Effects of Insuring Healt Risks: On te Trade-off between Sort Run Insurance Benefits vs. Long Run Incentive Costs Harold L. Cole University of Pennsylvania and NBER Soojin Kim University of
More informationBACK CALCULATION PROCEDURE FOR THE STIFFNESS MODULUS OF CEMENT TREATED BASE LAYERS USING COMPUTATIONAL INTELLIGENCE BASED MODELS
BACK CALCULATION PROCEDURE FOR THE STIFFNESS MODULUS OF CEMENT TREATED BASE LAYERS USING COMPUTATIONAL INTELLIGENCE BASED MODELS Maryam Miradi m.miradi@tudelft.nl André.A. A. Molenaar * a.a.a.molenaar@tudelft.nl
More informationMULTY BINARY TURBO CODED WOFDM PERFORMANCE IN FLAT RAYLEIGH FADING CHANNELS
Volume 49, Number 3, 28 MULTY BINARY TURBO CODED WOFDM PERFORMANCE IN FLAT RAYLEIGH FADING CHANNELS Marius OLTEAN Maria KOVACI Horia BALTA Andrei CAMPEANU Faculty of, Timisoara, Romania Bd. V. Parvan,
More informationBig Data Analytics: Optimization and Randomization
Big Data Analytics: Optimization and Randomization Tianbao Yang, Qihang Lin, Rong Jin Tutorial@SIGKDD 2015 Sydney, Australia Department of Computer Science, The University of Iowa, IA, USA Department of
More informationBonferroni-Based Size-Correction for Nonstandard Testing Problems
Bonferroni-Based Size-Correction for Nonstandard Testing Problems Adam McCloskey Brown University October 2011; Tis Version: October 2012 Abstract We develop powerful new size-correction procedures for
More informationOne-sided Support Vector Regression for Multiclass Cost-sensitive Classification
One-sided Support Vector Regression for Multiclass Cost-sensitive Classification Han-Hsing Tu r96139@csie.ntu.edu.tw Hsuan-Tien Lin htlin@csie.ntu.edu.tw Department of Computer Science and Information
More informationA Study on SMO-type Decomposition Methods for Support Vector Machines
1 A Study on SMO-type Decomposition Methods for Support Vector Machines Pai-Hsuen Chen, Rong-En Fan, and Chih-Jen Lin Department of Computer Science, National Taiwan University, Taipei 106, Taiwan cjlin@csie.ntu.edu.tw
More informationModeling and Prediction of Network Traffic Based on Hybrid Covariance Function Gaussian Regressive
Journal of Information & Computational Science 12:9 (215) 3637 3646 June 1, 215 Available at http://www.joics.com Modeling and Prediction of Network Traffic Based on Hybrid Covariance Function Gaussian
More informationSeveral Views of Support Vector Machines
Several Views of Support Vector Machines Ryan M. Rifkin Honda Research Institute USA, Inc. Human Intention Understanding Group 2007 Tikhonov Regularization We are considering algorithms of the form min
More informationOnline Passive-Aggressive Algorithms on a Budget
Zhuang Wang Dept. of Computer and Information Sciences Temple University, USA zhuang@temple.edu Slobodan Vucetic Dept. of Computer and Information Sciences Temple University, USA vucetic@temple.edu Abstract
More informationStatistical Models in Data Mining
Statistical Models in Data Mining Sargur N. Srihari University at Buffalo The State University of New York Department of Computer Science and Engineering Department of Biostatistics 1 Srihari Flood of
More informationFeature Selection using Integer and Binary coded Genetic Algorithm to improve the performance of SVM Classifier
Feature Selection using Integer and Binary coded Genetic Algorithm to improve the performance of SVM Classifier D.Nithya a, *, V.Suganya b,1, R.Saranya Irudaya Mary c,1 Abstract - This paper presents,
More informationFor Sale By Owner Program. We can help with our for sale by owner kit that includes:
Dawn Coen Broker/Owner For Sale By Owner Program If you want to sell your ome By Owner wy not:: For Sale Dawn Coen Broker/Owner YOUR NAME YOUR PHONE # Look as professional as possible Be totally prepared
More informationVerification of Security Protocols with Lists: from Length One to Unbounded Length
Verification of Security Protocols wit Lists: from Lengt One to Unbounded Lengt INRIA, Miriam Paiola and Bruno Blancet École Normale Supérieure, CNRS, Paris {paiola,blancet}@di.ens.fr Abstract. We present
More information