Support Vector Machines for Pattern Classification

Shigeo Abe, Graduate School of Science and Technology, Kobe University, Kobe, Japan

My Research History on NN, FS, and SVM. Neural networks (1988-): convergence characteristics of Hopfield networks; synthesis of multilayer neural networks. Fuzzy systems (1992-): trainable fuzzy classifiers; fuzzy classifiers with ellipsoidal regions. Support vector machines (1999-): characteristics of solutions; multiclass problems.

Contents 1. Direct and Indirect Decision Functions 2. Architecture of SVMs 3. Characteristics of L1 and L2 SVMs 4. Multiclass SVMs 5. Training Methods 6. SVM-inspired Methods 6.1 Kernel-based Methods 6.2 Maximum Margin Fuzzy Classifiers 6.3 Maximum Margin Neural Networks

Multilayer Neural Networks. Neural networks are trained to output the target values for the given inputs. (Figure: inputs of Class 1 and Class 2 are fed to the N.N., which outputs 1 for Class 1 and 0 for Class 2.)

Multilayer Neural Networks (continued). Indirect decision functions: the decision boundaries change as the initial weights are changed.

Support Vector Machines. Direct decision functions: the decision boundaries are determined so as to minimize the classification error for both the training data and unknown data. If D(x) > 0, classify x into Class 1; otherwise, into Class 2. The maximum-margin boundary is the optimal hyperplane (OHP), D(x) = 0.

Summary. When the number of training data is small, SVMs outperform conventional classifiers. By maximizing margins, the performance of conventional classifiers can also be improved.

Architecture of SVMs. SVMs are formulated for two-class classification problems: map the input space into the feature space by g(x), and determine the optimal hyperplane (OHP) in the feature space.

Types of SVMs. Hard margin SVMs: the data are linearly separable in the feature space; maximize the generalization ability. Soft margin SVMs: the data are not separable in the feature space; minimize the classification error while maximizing the generalization ability. There are two variants, L1 soft margin SVMs (commonly used) and L2 soft margin SVMs.

Hard Margin SVMs. We determine the OHP so that the generalization region is maximized: maximize the margin δ subject to w^t g(x_i) + b ≥ 1 for Class 1 (y_i = 1) and w^t g(x_i) + b ≤ −1 for Class 2 (y_i = −1). Combining the two constraints: y_i (w^t g(x_i) + b) ≥ 1. The OHP is D(x) = w^t g(x) + b = 0.

Hard Margin SVM (2). The distance from x to the OHP is given by y_i D(x) / ||w||. Thus all the training data must satisfy y_i D(x_i) / ||w|| ≥ δ. Imposing ||w|| δ = 1, the problem becomes: minimize ||w||²/2 subject to y_i D(x_i) ≥ 1 for i = 1, ..., M.

Soft Margin SVMs. If the problem is nonseparable, we introduce slack variables ξ_i and minimize ||w||²/2 + (C/p) Σ_{i=1,...,M} ξ_i^p subject to y_i D(x_i) ≥ 1 − ξ_i, where C is the margin parameter and p = 1 gives the L1 SVM, p = 2 the L2 SVM. (For 0 < ξ_i < 1, x_i is inside the margin but still correctly classified; for ξ_i > 1, x_i is misclassified.)
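
The primal problems above can be written down almost verbatim with a modeling tool. Below is a minimal sketch using numpy and cvxpy (both assumed available, neither mentioned in the slides), for the linear map g(x) = x; `soft_margin_primal` is an illustrative name.

```python
import numpy as np
import cvxpy as cp

def soft_margin_primal(X, y, C=1.0, p=1):
    """Minimize ||w||^2/2 + (C/p) * sum_i xi_i^p  s.t.  y_i (w^t x_i + b) >= 1 - xi_i.
    p = 1 gives the L1 SVM penalty, p = 2 the L2 SVM penalty; g(x) = x (linear kernel)."""
    M, n = X.shape
    w, b = cp.Variable(n), cp.Variable()
    xi = cp.Variable(M, nonneg=True)
    penalty = cp.sum(xi) if p == 1 else cp.sum_squares(xi)
    objective = cp.Minimize(0.5 * cp.sum_squares(w) + (C / p) * penalty)
    constraints = [cp.multiply(y, X @ w + b) >= 1 - xi]
    cp.Problem(objective, constraints).solve()
    return w.value, b.value

# Toy two-class data with labels y_i in {+1, -1}
X = np.array([[2.0, 2.0], [1.5, 2.5], [-1.0, -1.5], [-2.0, -1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w, b = soft_margin_primal(X, y, C=10.0, p=1)
print(X @ w + b)   # decision values D(x_i): positive for Class 1, negative for Class 2
```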

Conversion to Dual Problems. Introducing the Lagrange multipliers α_i and β_i, Q = ||w||²/2 + (C/p) Σ_i ξ_i^p − Σ_i α_i (y_i (w^t g(x_i) + b) − 1 + ξ_i) [ − Σ_i β_i ξ_i ]. The Karush-Kuhn-Tucker (KKT) optimality conditions are ∂Q/∂w = 0, ∂Q/∂b = 0, ∂Q/∂ξ_i = 0, α_i ≥ 0, β_i ≥ 0, together with the KKT complementarity conditions α_i (y_i (w^t g(x_i) + b) − 1 + ξ_i) = 0, [ β_i ξ_i = 0 ]. When p = 2, the terms in [ ] are not necessary.

Dual Problems of SVMs. L1 SVM: maximize Σ_i α_i − (1/2) Σ_{i,j} α_i α_j y_i y_j g(x_i)^t g(x_j) subject to Σ_i y_i α_i = 0 and C ≥ α_i ≥ 0. L2 SVM: maximize Σ_i α_i − (1/2) Σ_{i,j} α_i α_j y_i y_j (g(x_i)^t g(x_j) + δ_ij / C) subject to Σ_i y_i α_i = 0 and α_i ≥ 0, where δ_ij = 1 for i = j and 0 for i ≠ j.
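
A hedged sketch of solving the L1 dual with cvxpy, assuming a precomputed kernel matrix K (the function name and the matrix square-root trick are choices of this sketch, not of the slides):

```python
import numpy as np
import cvxpy as cp

def solve_l1_dual(K, y, C=1.0):
    """Maximize sum_i alpha_i - (1/2) sum_ij alpha_i alpha_j y_i y_j K_ij
    subject to sum_i y_i alpha_i = 0 and 0 <= alpha_i <= C."""
    M = len(y)
    Q = np.outer(y, y) * K                              # matrix of the quadratic term
    # Express alpha^T Q alpha through a matrix square root so the problem is DCP.
    eigval, eigvec = np.linalg.eigh(Q)
    R = eigvec * np.sqrt(np.clip(eigval, 0.0, None))    # Q ~= R @ R.T
    alpha = cp.Variable(M)
    objective = cp.Maximize(cp.sum(alpha) - 0.5 * cp.sum_squares(R.T @ alpha))
    constraints = [y @ alpha == 0, alpha >= 0, alpha <= C]
    cp.Problem(objective, constraints).solve()
    return alpha.value

# L2 SVM dual: replace K by K + np.eye(M) / C and drop the upper bound alpha <= C.
```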

KKT Complementarity Conditions. For L1 SVMs, from α_i (y_i (w^t g(x_i) + b) − 1 + ξ_i) = 0 and β_i ξ_i = (C − α_i) ξ_i = 0, there are three cases for α_i: 1. α_i = 0: then ξ_i = 0, so x_i is correctly classified; 2. 0 < α_i < C: then ξ_i = 0 and w^t g(x_i) + b = y_i; 3. α_i = C: then ξ_i ≥ 0. Training data x_i with α_i > 0 are called support vectors, and those with α_i = C are called bounded support vectors. The resulting decision function is D(x) = Σ_i α_i y_i g(x_i)^t g(x) + b.
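
From a dual solution, the bias b can be recovered from the unbounded support vectors (case 2 above), for which w^t g(x_i) + b = y_i. A small numpy sketch, reusing the illustrative helpers above:

```python
import numpy as np

def decision_function(alpha, y, K, C, K_test, tol=1e-6):
    """D(x) = sum_i alpha_i y_i H(x_i, x) + b.
    K is the training kernel matrix; K_test holds H(x, x_i) for the test points."""
    sv = alpha > tol                         # support vectors (alpha_i > 0)
    unbounded = sv & (alpha < C - tol)       # unbounded support vectors (0 < alpha_i < C)
    wx = K @ (alpha * y)                     # w^t g(x_i) for all training data
    b = np.mean(y[unbounded] - wx[unbounded])
    return K_test @ (alpha * y) + b
```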

Kernel Trick. Since the mapping function g(x) appears only in the form g(x)^t g(x′), we can avoid handling variables in the feature space by introducing the kernel H(x, x′) = g(x)^t g(x′). The following kernels are commonly used: 1. dot product kernels, H(x, x′) = x^t x′; 2. polynomial kernels, H(x, x′) = (x^t x′ + 1)^d; 3. RBF kernels, H(x, x′) = exp(−γ ||x − x′||²).
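
The three kernels listed above operate directly on data matrices; a short numpy sketch (the default gamma and d values are arbitrary):

```python
import numpy as np

def dot_kernel(X1, X2):
    return X1 @ X2.T                                   # H(x, x') = x^t x'

def poly_kernel(X1, X2, d=3):
    return (X1 @ X2.T + 1.0) ** d                      # H(x, x') = (x^t x' + 1)^d

def rbf_kernel(X1, X2, gamma=1.0):
    sq = np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :] - 2.0 * X1 @ X2.T
    return np.exp(-gamma * sq)                         # H(x, x') = exp(-gamma ||x - x'||^2)
```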

Summary. Quadratic programming yields the globally optimal solution (there are no local minima). Robust classification in the presence of outliers is possible by proper selection of C. The classifier adapts to the problem at hand through proper selection of the kernel.

Hessian Matrix. Substituting α_s = −y_s Σ_{i≠s} y_i α_i into the objective function, Q = α^t 1 − (1/2) α^t H α, we derive the Hessian matrix H. L1 SVM: H_L1 = (y_i (g(x_i) − g(x_s)))^t (y_i (g(x_i) − g(x_s))), which is positive semidefinite. L2 SVM: H_L2 = H_L1 + {(y_i y_j + δ_ij)/C}, which is positive definite and therefore results in more stable training.
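
The contrast between the two Hessians can be checked numerically. The sketch below builds H_L1 and H_L2 from a kernel matrix K after eliminating α_s, using H_L1[i, j] = y_i y_j (K_ij − K_is − K_sj + K_ss), which follows from the expression above; the helper name is illustrative.

```python
import numpy as np

def reduced_hessians(K, y, C, s=0):
    """Hessians of the dual quadratic term after eliminating alpha_s via the equality constraint."""
    idx = np.arange(len(y)) != s
    Kr = K[np.ix_(idx, idx)] - K[idx, s][:, None] - K[s, idx][None, :] + K[s, s]
    yy = np.outer(y[idx], y[idx])
    H_L1 = yy * Kr                                     # positive semidefinite
    H_L2 = H_L1 + (yy + np.eye(idx.sum())) / C         # positive definite
    return H_L1, H_L2

# np.linalg.eigvalsh(H_L1).min() can be (numerically) zero,
# while np.linalg.eigvalsh(H_L2).min() is bounded away from zero by 1/C.
```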

Non-unique Solutions. Strictly convex objective functions give unique solutions; the L1 SVM objective is convex but not strictly convex.

Uniqueness of the solutions:
         Primal       Dual
L1 SVM   Non-unique*  Non-unique
L2 SVM   Unique*      Unique*
(*: Burges and Crisp (2000))

Property 1. For the L1 SVM, the vectors that satisfy y_i (w^t g(x_i) + b) = 1 are not always support vectors; we call such vectors boundary vectors.

Irreducible Set of Support Vectors. A set of support vectors is irreducible if the deletion of the boundary vectors and of any support vector results in a change of the optimal hyperplane.

Property 2. For the L1 SVM, let all the support vectors be unbounded; then the Hessian matrix associated with the irreducible set is positive definite. Property 3. For the L1 SVM, if there is only one irreducible set and the support vectors are all unbounded, the solution is unique.

Example with two irreducible sets, {1, 3} and {2, 4}: the dual problem is non-unique, but the primal problem is unique. In general, the number of support vectors of an L2 SVM is larger than that of an L1 SVM.

Computer Simulation. We used white blood cell data with 13 inputs and 2 classes, each class having approximately 400 data for training and for testing. We trained the SVMs using the steepest ascent method (Abe et al., 2002) on a personal computer (Athlon, 1.6 GHz).

Recognition Rates for Polynomial Kernels. (Chart: recognition rates in % of L1 and L2 SVMs on the test and training data for dot, poly2, poly3, poly4, and poly5 kernels.) The difference between L1 and L2 SVMs is small.

Support Vectors for Polynomial Kernels. (Chart: numbers of support vectors of L1 and L2 SVMs for dot, poly2, poly3, poly4, and poly5 kernels.) For poly4 and poly5 the numbers are the same.

Training Time Comparison. (Chart: training time in seconds of L1 and L2 SVMs for dot, poly2, poly3, poly4, and poly5 kernels.) As the polynomial degree increases, the difference becomes smaller.

Summary. The Hessian matrix of an L1 SVM is positive semidefinite, whereas that of an L2 SVM is always positive definite; thus the dual solutions of an L1 SVM are non-unique. In non-critical cases the difference between L1 and L2 SVMs is small.

Multiclass SVMs. One-against-all SVMs: continuous decision functions, fuzzy SVMs, decision-tree-based SVMs. Pairwise SVMs: decision-tree-based SVMs (DDAGs, ADAGs), fuzzy SVMs. ECOC SVMs. All-at-once SVMs.

One-against-all SVMs. Determine the i-th decision function D_i(x) so that class i is separated from the remaining classes, and classify x into the class with D_i(x) > 0. With this rule, unclassifiable regions exist where no single D_i(x) is positive or several are.

Continuous SVMs. Classify x into the class with the maximum D_i(x). This assigns every point, including the formerly unclassifiable regions; for example, a region with 0 > D_2(x) > D_3(x) is assigned to Class 2.
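
A minimal sketch of the continuous one-against-all rule, assuming the n binary decision functions D_i(x) are already trained and passed in as callables:

```python
import numpy as np

def one_against_all_predict(x, decision_functions):
    """decision_functions[i](x) returns D_i(x) for class i.
    The continuous rule picks the class with the maximum D_i(x),
    so no unclassifiable regions remain."""
    values = np.array([D(x) for D in decision_functions])
    return int(np.argmax(values))
```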

Fuzzy SVMs. For each decision function we define a one-dimensional membership function in the direction orthogonal to it; in the figure, m_11(x) decreases from 1 at D_1(x) = 1 to 0 at D_1(x) = 0.

Class i Membership. The class i membership function is m_i(x) = min_{j=1,...,n} m_ij(x). The region that satisfies m_i(x) > 0 corresponds to the classifiable region for class i.
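
A hedged sketch of the fuzzy one-against-all rule, assuming the common piecewise-linear membership (the slides only show its shape): m_ii(x) = min(1, D_i(x)) and m_ij(x) = min(1, −D_j(x)) for j ≠ i.

```python
import numpy as np

def fuzzy_one_against_all_predict(D):
    """D[j] = D_j(x) for the n one-against-all decision functions.
    m_i(x) = min_j m_ij(x); classify x into the class with maximum membership."""
    n = len(D)
    m = np.empty(n)
    for i in range(n):
        m_ij = [min(1.0, D[j]) if j == i else min(1.0, -D[j]) for j in range(n)]
        m[i] = min(m_ij)
    return int(np.argmax(m))
```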

Resulting Class Boundaries of Fuzzy SVMs. The generalization regions are the same as those obtained by continuous SVMs.

Decision-tree-based SVMs. Each node corresponds to a hyperplane. Training step: at each node, determine the OHP. Classification step: starting from the top node, calculate the decision values until a leaf node is reached.

The Problem of the Decision Tree. The unclassifiable regions are resolved, but the region of each class depends on the structure of the decision tree. How do we determine the decision tree?

Determination of the Decision Tree. If classes are poorly separated at an upper node, the overall classification performance becomes worse, so the easily separable classes need to be separated at the upper nodes. Separability measures: Euclidean distances between class centers; classification errors by the Mahalanobis distance. Separation schemes: one class vs. the remaining classes; some classes vs. the remaining classes.

One Class vs. Remaining Classes Using Distances between Class Centers. Separate the farthest class from the remaining classes: 1. calculate the distances between class centers; 2. for each class, find the smallest of these distances; 3. separate the class with the largest value found in step 2; 4. repeat for the remaining classes.

Pairwise SVMs. For all pairs of classes i and j we define n(n−1)/2 decision functions D_ij(x), and classify x into class arg max_i D_i(x), where D_i(x) = Σ_j sign(D_ij(x)). Unclassifiable regions still exist, e.g., where D_1(x) = D_2(x) = D_3(x) = 1.
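
A short sketch of the pairwise voting rule, assuming a callable that returns the trained pairwise decision values (illustrative names):

```python
import numpy as np

def pairwise_vote_predict(x, n_classes, D_ij):
    """D_ij(i, j, x) returns D_ij(x), positive when x looks like class i.
    D_i(x) = sum_j sign(D_ij(x)); classify x into argmax_i D_i(x).
    Ties correspond to the unclassifiable regions mentioned above."""
    votes = np.zeros(n_classes)
    for i in range(n_classes):
        for j in range(i + 1, n_classes):
            s = np.sign(D_ij(i, j, x))
            votes[i] += s
            votes[j] -= s          # D_ji(x) = -D_ij(x)
    return int(np.argmax(votes))
```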

Pairwise Fuzzy SVMs. The generalization ability of the pairwise fuzzy SVM (P-FSVM) is better than that of the pairwise SVM (P-SVM).

Decision-tree-based SVMs (DDAGs). The generalization regions change according to the tree structure. (Figure: a DDAG for three classes whose root node evaluates D_12(x) and whose second-level nodes evaluate D_13(x) and D_32(x), with leaves Class 1, Class 3, and Class 2.)
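
A sketch of DDAG evaluation, again assuming a callable for the pairwise decision values:

```python
def ddag_predict(x, classes, D_ij):
    """Decision Directed Acyclic Graph: at each node, test the first class in the
    list against the last and eliminate the losing class until one class remains."""
    classes = list(classes)
    while len(classes) > 1:
        i, j = classes[0], classes[-1]
        if D_ij(i, j, x) > 0:      # positive value -> not class j
            classes.pop()
        else:                      # otherwise -> not class i
            classes.pop(0)
    return classes[0]

# With the class list [1, 3, 2], the root node evaluates D_12(x), as in the figure.
```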

Decision-tree-based SVMs (ADAGs). For three-class problems, ADAGs and DDAGs are equivalent; in general, the DDAGs include the ADAGs. (Figure: an ADAG organized as a tennis tournament over classes 1, 2, and 3, and the equivalent DDAG.)

ECC Capability. The maximum number of decision functions is 2^(n-1) − 1, where n is the number of classes, and the error-correcting capability is (h − 1)/2 (rounded down), where h is the minimum Hamming distance between code words. For n = 4, h = 4 gives an error-correcting capability of 1.

Code words (one column per decision function):
Class 1:  1  1  1 -1  1  1 -1
Class 2:  1  1 -1  1  1 -1  1
Class 3:  1 -1  1  1 -1  1  1
Class 4: -1  1  1  1 -1 -1 -1
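
A sketch of ECOC decoding with the code table above, assuming the seven binary decision functions output ±1:

```python
import numpy as np

# Code words from the slide: one row per class, one column per decision function.
CODE = np.array([
    [ 1,  1,  1, -1,  1,  1, -1],   # Class 1
    [ 1,  1, -1,  1,  1, -1,  1],   # Class 2
    [ 1, -1,  1,  1, -1,  1,  1],   # Class 3
    [-1,  1,  1,  1, -1, -1, -1],   # Class 4
])

def ecoc_predict(outputs):
    """outputs: the seven +/-1 values produced by the binary classifiers for x.
    Pick the class whose code word is closest in Hamming distance."""
    hamming = np.sum(CODE != np.asarray(outputs), axis=1)
    return int(np.argmin(hamming)) + 1   # classes are numbered from 1 on the slide
```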

Continuous Hamming Distance. The continuous Hamming distance is Σ_i ER_i(x) = Σ_i (1 − m_i(x)), where ER_i is the error function and m_i the membership function; this is equivalent to combining membership functions with the sum operator.

All-at-once SVMs. Decision functions: w_i^t g(x) + b_i > w_j^t g(x) + b_j for j ≠ i, j = 1, ..., n. The original formulation has n × M variables, where n is the number of classes and M the number of training data; Crammer and Singer (2000) proposed an alternative formulation.

Performance Evaluation. Compare the recognition rates on test data for one-against-all and pairwise SVMs. Data sets used: white blood cell data, thyroid data, hiragana data with 50 inputs, and hiragana data with 13 inputs. (The hiragana data are taken from Japanese license plates, e.g., 水戸 57 ひ 59-16.)

Data Sets Used for Evaluation
Data        Inputs  Classes  Train.  Test
Blood Cell  13      12       3097    3100
Thyroid     21      3        3772    3428
H-50        50      39       4610    4610
H-13        13      38       8375    8356

Performance Improvement for One-against-all SVMs. (Chart: recognition rates in % of the SVM, fuzzy SVM, and decision-tree-based SVM on the Blood, Thyroid, and H-50 data.) Performance improves with fuzzy SVMs and decision trees.

Performance Improvement for Pairwise Classification. (Chart: recognition rates in % of the SVM, ADAG MX, ADAG MN, FSVM, and ADAG Av on the Blood, Thyroid, and H-13 data.) FSVMs are comparable with ADAG MX.

Summary. One-against-all SVMs with continuous decision functions are equivalent to one-against-all fuzzy SVMs. The performance of pairwise fuzzy SVMs is comparable to that of the ADAGs with the maximum recognition rates. There is not much difference between one-against-all and pairwise SVMs.

Research Status. Training by quadratic programming is too slow for a large number of training data. Several training methods have been developed: the decomposition technique by Osuna (1997), the Kernel-Adatron by Friess et al. (1998), SMO (Sequential Minimal Optimization) by Platt (1999), and steepest ascent training by Abe et al. (2002).

Decomposition Technique. Decompose the index set into two sets, B and N, and maximize Q(α) = Σ_{i∈B} α_i − (1/2) Σ_{i,j∈B} α_i α_j y_i y_j H(x_i, x_j) − Σ_{i∈B, j∈N} α_i α_j y_i y_j H(x_i, x_j) − (1/2) Σ_{i,j∈N} α_i α_j y_i y_j H(x_i, x_j) + Σ_{i∈N} α_i subject to Σ_{i∈B} y_i α_i = −Σ_{i∈N} y_i α_i and C ≥ α_i ≥ 0 for i ∈ B, fixing α_i for i ∈ N.

Solution Framework. Outer loop: from the variables that violate the KKT conditions, select the working set {α_i | i ∈ B}; the remaining variables form the fixed set {α_i | i ∈ N} (e.g., α_i = 0). Inner loop: solve for the working set using a QP package, e.g., LOQO (1998).

Solutions without Using a QP Package. SMO: |B| = 2, i.e., solve the subproblem for two variables; the subproblem is solvable without matrix calculations. Steepest ascent training: speed up SMO by solving subproblems with more than two variables.

Solution Framework for Steepest Ascent Training. Outer loop: from the variables that violate the KKT conditions, select the working set {α_i | i ∈ B}; the remaining variables form the fixed set {α_i | i ∈ N}. Inner loop: update α_i for i ∈ B′ ⊆ B until all α_i in B are updated.

Solution Procedure. 1. Set the index set B. 2. Select α_s and eliminate the equality constraint by substituting α_s = −y_s Σ_{j≠s} y_j α_j into the objective function. 3. Calculate the corrections by Δα_B′ = −(∂²Q/∂α_B′²)⁻¹ ∂Q/∂α_B′, where B′ = B − {s}. 4. Correct the variables if possible.

Calculation of Corrections. We calculate the corrections by the Cholesky decomposition. For the L2 SVM the Hessian matrix is positive definite, so it is regular; for the L1 SVM it is positive semidefinite, so it may be singular. If a diagonal element D_ii obtained during the decomposition satisfies D_ii < η, we discard the variables α_j for j ≥ i.

Update of Variables. Case 1: the variables are updated as calculated. Case 2: the corrections are reduced so that the constraints are satisfied. Case 3: the variables are not corrected.

Performance Evaluation. Compare the training time of the steepest ascent method, SMO, and the interior-point method. Data sets used: white blood cell data, thyroid data, hiragana data with 50 inputs, and hiragana data with 13 inputs. (The hiragana data are taken from Japanese license plates, e.g., 水戸 57 ひ 59-16.)

Data Sets Used for Evaluation
Data        Inputs  Classes  Train.  Test
Blood Cell  13      12       3097    3100
Thyroid     21      3        3772    3428
H-50        50      39       4610    4610
H-13        13      38       8375    8356

Effect of Working Set Size for Blood Cell Data. (Chart: training time in seconds of L1 and L2 SVMs for working set sizes 2, 4, 13, 20, and 30.) Dot product kernels are used. For larger working set sizes, L2 SVMs are trained faster.

Effect of Working Set Size for Blood Cell Data (continued). (Chart: training time in seconds of L1 and L2 SVMs for working set sizes 2, 20, and 40.) Polynomial kernels with degree 4 are used. There is not much difference between L1 and L2 SVMs.

Training Time Comparison. (Chart: training time in seconds, on a logarithmic scale, of LOQO, SMO, and the steepest ascent method (SAM) on the Thyroid, Blood, H-50, and H-13 data.) LOQO is combined with the decomposition technique. SMO is the slowest; LOQO and SAM are comparable for the hiragana data.

Summary. The steepest ascent method is faster than SMO and, in some cases, comparable with the interior-point method combined with the decomposition technique. For critical cases, L2 SVMs are trained faster than L1 SVMs, but for normal cases the two are almost the same.

SVM-inspired Methods. Kernel-based methods: the kernel perceptron, kernel least squares, the kernel Mahalanobis distance, and kernel principal component analysis. Maximum margin classifiers: fuzzy classifiers and neural networks.

Fuzzy Classifier with Ellipsoidal Regions. Membership function: m_i(x) = exp(−h_i²(x)), with h_i²(x) = d_i²(x)/α_i and d_i²(x) = (x − c_i)^t Q_i⁻¹ (x − c_i), where α_i is a tuning parameter and d_i²(x) is the Mahalanobis distance to the class i center c_i with covariance matrix Q_i.
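
A minimal numpy sketch of this classifier, with the centers c_i and covariance matrices Q_i estimated as on the training slide below and all α_i fixed to 1 (their tuning is described later); the pseudo-inverse is used here only to keep the sketch robust, whereas the slides handle singular Q_i via the symmetric Cholesky factorization.

```python
import numpy as np

class EllipsoidalFuzzyClassifier:
    """m_i(x) = exp(-d_i^2(x) / alpha_i),  d_i^2(x) = (x - c_i)^t Q_i^{-1} (x - c_i)."""

    def fit(self, X, labels, alpha=1.0):
        self.classes = np.unique(labels)
        self.alpha = {c: alpha for c in self.classes}      # tuning parameters alpha_i
        self.centers = {c: X[labels == c].mean(axis=0) for c in self.classes}
        self.cov_inv = {c: np.linalg.pinv(np.cov(X[labels == c], rowvar=False))
                        for c in self.classes}             # Q_i^{-1}
        return self

    def memberships(self, x):
        m = {}
        for c in self.classes:
            d = x - self.centers[c]
            m[c] = np.exp(-(d @ self.cov_inv[c] @ d) / self.alpha[c])
        return m

    def predict(self, x):
        m = self.memberships(x)
        return max(m, key=m.get)    # classify into the class with maximum membership
```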

Training. 1. For each class, calculate the center and the covariance matrix. 2. Tune the membership functions so that misclassification is resolved.

Comparison of Fuzzy Classifiers with Support Vector Machines. Training of the fuzzy classifiers is faster than that of support vector machines, and the performance is comparable for overlapping classes. The performance is inferior when data are scarce, since the covariance matrix Q_i becomes singular.

Improvement of the Generalization Ability When Data Are Scarce: by the symmetric Cholesky factorization, and by maximizing margins.

Symmetric Cholesky Factorization. In factorizing Q_i into two triangular matrices, Q_i = L_i L_i^t, if a diagonal element l_aa satisfies l_aa < η, namely Q_i is (nearly) singular, we set l_aa = η.
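
A sketch of this factorization with the diagonal floor η (a plain textbook Cholesky loop; the handling of non-positive pivots is an assumption of this sketch):

```python
import numpy as np

def cholesky_with_floor(Q, eta=1e-5):
    """Q ~ L @ L.T with every diagonal element of L floored at eta,
    so that nearly singular covariance matrices Q_i remain usable."""
    n = Q.shape[0]
    L = np.zeros_like(Q, dtype=float)
    for i in range(n):
        for j in range(i + 1):
            s = Q[i, j] - L[i, :j] @ L[j, :j]
            if i == j:
                # Set the diagonal element l_aa to eta if it falls below eta.
                L[i, i] = max(np.sqrt(s) if s > 0 else 0.0, eta)
            else:
                L[i, j] = s / L[j, j]
    return L
```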

Concept of Maximizing Margins. If there is no overlap between classes, we set the boundary in the middle between the two classes by tuning α_i.

Upper Bound of α_i. A class i datum remains correctly classified as α_i is increased, but a datum of another class may eventually be misclassified into class i; the upper bound is therefore calculated over all the data not belonging to class i.

Lower Bound of α_i. A datum of class j (≠ i) remains correctly classified as α_i is decreased, but a class i datum may eventually be misclassified; the lower bound is therefore calculated over all the class i data.

Tuning Procedure. 1. Calculate the lower bound L_i and the upper bound U_i of α_i. 2. Set α_i = (L_i + U_i)/2. 3. Iterate the above steps for all the classes.

Data Sets Used for Evaluation
Data   Inputs  Classes  Train.  Test
H-50   50      39       4610    4610
H-105  105     38       8375    8356
H-13   13      38       8375    8356

Recognition Rate of Hiragana-50 Data (Cholesky Factorization). (Chart: test and training recognition rates in % for η = 10⁻⁵ to 10⁻¹.) The recognition rate improved as η was increased.

Recognition Rate of Hiragana-50 Data (Maximizing Margins). (Chart: test and training recognition rates in % for η = 10⁻⁵ to 10⁻¹.) The recognition rate is better for small η.

Performance Comparison. (Chart: recognition rates in % of CF, CF+MM, SVD, and SVM on the H-50, H-105, and H-13 data.) Performance excluding SVD is comparable.

Summary. When the number of training data is small, the generalization ability of the fuzzy classifier with ellipsoidal regions is improved by the Cholesky factorization with singularity avoidance and by tuning the membership functions when there is no overlap between classes. Simulation results show the improvement of the generalization ability.

Maximum Margin Neural Networks. Training of multilayer neural networks (NNs): the back-propagation algorithm (BP) suffers from slow training; support vector machines (SVMs) with sigmoid kernels give high generalization ability by maximizing margins, but restrict the parameter values; for the CARVE algorithm (Young & Downs, 1998), an efficient training method had not yet been developed.

CARVE Algorithm. A constructive method for training NNs: any pattern classification problem can be synthesized in three layers (input layer included). It requires finding hyperplanes that separate data of one class from the others.

CARVE Algorithm (hidden layer). One class is chosen for separation, and a hyperplane is determined so that only data of that class lie on one side of it. The weights between the input layer and the hidden node represent this hyperplane.

CARVE Algorithm (hidden layer, continued). Only data of the class chosen for separation are separated, and the separated data are not used in the subsequent training. When only data of one class remain, the hidden-layer training is finished.

CARVE Algorithm (output layer). In the hidden-layer output space all the data become linearly separable, so they can be separated by the output layer.

Proposed Method. NN training based on the CARVE algorithm, maximizing margins at each layer. With plain SVM training, data of other classes may exist on the positive side of the optimal hyperplane, which makes such hyperplanes inappropriate for the CARVE algorithm. We therefore extend the DirectSVM method for the hidden-layer training and use conventional SVM training for the output layer.

Extension of DirectSVM. The class labels are set so that the class for separation is +1 and the other classes are −1. Determine the initial hyperplane by the DirectSVM method from the nearest data of the two label groups, and then check for violations among the data with label −1.

Extension of DirectSVM (continued). Update the hyperplane so that the misclassified datum with label −1 that violates most is classified correctly. If there are no violating data with label −1, we stop updating the hyperplane.

Training of the Output Layer. Apply conventional SVM training in the hidden-layer output space: the output-layer weights are set by an SVM with dot product kernels, which maximizes the margin.

Data Sets Used for Evaluation
Data        Inputs  Classes  Train.  Test
Blood Cell  13      12       3097    3100
Thyroid     21      3        3772    3428
H-50        50      39       4610    4610
H-13        13      38       8375    8356

Performance Comparison. (Chart: recognition rates in % of BP, FSVM (one-against-all), and MM-NN on the Blood, Thyroid, and H-50 data.) MM-NN is better than BP and comparable to the FSVMs.

Summary. NNs are generated layer by layer by the CARVE algorithm while maximizing margins. The generalization ability is better than that of BP-trained NNs and comparable to that of SVMs.