Combining SVM classifiers for email anti-spam filtering



Ángela Blanco, Manuel Martín-Merino

Universidad Pontificia de Salamanca, C/Compañía 5, 37002 Salamanca, Spain. Emails: ablancogo@upsa.es, mmartinmac@upsa.es

Abstract

Spam, also known as Unsolicited Commercial Email (UCE), is becoming a nightmare for Internet users and providers. Machine learning techniques such as the Support Vector Machine (SVM) have achieved high accuracy in filtering spam messages. However, a certain amount of legitimate email is often classified as spam (false positive errors), and this kind of error is prohibitively expensive. In this paper we address the problem of reducing, in particular, the false positive errors of anti-spam email filters based on the SVM. To this end, an ensemble of SVMs that combines multiple dissimilarities is proposed. The experimental results suggest that the new method outperforms classifiers based on a single dissimilarity as well as a widely used combination strategy such as bagging.

1 Introduction

Unsolicited commercial email, also known as spam, is becoming a serious problem for Internet users and providers [9]. Several researchers have applied machine learning techniques in order to improve the detection of spam messages. Naive Bayes models are the most popular [2], but other authors have applied Support Vector Machines (SVM) [8], boosting and decision trees [5] with remarkable results. The SVM has proved particularly attractive in this application because it is robust against noise and is able to handle a large number of features [21].

Errors in anti-spam email filtering are strongly asymmetric: false positive errors, that is, valid messages that are blocked, are prohibitively expensive. Several authors have proposed new versions of the original SVM algorithm that help to reduce the false positive errors [20, 14]. In particular, it has been suggested that combining non-optimal classifiers can help to reduce the variance of the predictor [20, 4, 3] and consequently the misclassification error. To achieve this goal, different versions of the classifier are usually built by sampling the patterns or the features [4]. However, in our application it is expected that the aggregation of strong classifiers will help to reduce the false positive errors further [18, 11, 7].

In this paper we address the problem of reducing the false positive errors by combining classifiers based on multiple dissimilarities. To this end, a diversity of classifiers is built considering dissimilarities that reflect different features of the data. The dissimilarities are first embedded into a Euclidean space, where an SVM is fitted for each measure. Next, the classifiers are aggregated using a voting strategy [13]. The proposed method has been applied to the Spam UCI machine learning database [19] with remarkable results.

This paper is organized as follows. Section 2 introduces the dissimilarities considered by the ensemble of classifiers. Section 3 presents our method to combine classifiers based on dissimilarities. Section 4 illustrates the performance of the algorithm on the challenging problem of spam filtering. Finally, Section 5 draws conclusions and outlines future research trends.

2 The problem of distances revisited

An important step in the design of a classifier is the choice of a dissimilarity that properly reflects the proximities among the objects. However, choosing a good dissimilarity for the problem at hand is not an easy task. Each measure reflects different features of the dataset, and no dissimilarity outperforms the others over a wide range of problems. In this section, we briefly comment on the main differences among several dissimilarities that can be applied to model the proximities among emails.

Let $\vec{x}$, $\vec{y}$ be the vectorial representations of two emails. The Euclidean distance is defined as:

$$d_{\mathrm{euclid}}(\vec{x},\vec{y}) = \sqrt{\sum_{i=1}^{d} (x_i - y_i)^2}, \qquad (1)$$

where $d$ is the dimensionality of the vectorial representation and $x_i$ is the value of feature $i$ in the email $\vec{x}$. The Euclidean distance evaluates whether the features considered differ significantly between the two messages. This measure is sensitive to the size of the email body.

The cosine dissimilarity reflects the angle between the emails $\vec{x}$ and $\vec{y}$ and is defined as:

$$d_{\mathrm{cosine}}(\vec{x},\vec{y}) = 1 - \frac{\vec{x}^{T}\vec{y}}{\|\vec{x}\|\,\|\vec{y}\|}, \qquad (2)$$

Its value is independent of the message length. It differs significantly from the Euclidean distance when the data is not normalized.
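
These first two measures are straightforward to compute from the feature vectors. The following is a minimal sketch in Python (using NumPy; the function and variable names are ours, not the paper's):

```python
import numpy as np

def euclidean_dissimilarity(x, y):
    """Equation (1): plain Euclidean distance between two feature vectors."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return np.sqrt(np.sum((x - y) ** 2))

def cosine_dissimilarity(x, y):
    """Equation (2): one minus the cosine of the angle between the vectors."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return 1.0 - np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))

# Toy usage: two emails described by three word-frequency features.
x = [1.0, 0.0, 2.0]
y = [2.0, 0.0, 4.0]
print(euclidean_dissimilarity(x, y))  # sensitive to message length
print(cosine_dissimilarity(x, y))     # 0.0: same direction, length ignored
```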

The correlation measure checks whether the features that characterize spam change in the same way in both emails. It is defined as:

$$d_{\mathrm{cor}}(\vec{x},\vec{y}) = 1 - \frac{\sum_{i=1}^{d} (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{d} (x_i - \bar{x})^2}\,\sqrt{\sum_{j=1}^{d} (y_j - \bar{y})^2}}, \qquad (3)$$

Correlation-based measures tend to group together samples whose features are linearly related. The correlation differs significantly from the cosine if the means of the vectors that represent the emails are not zero.

The correlation measure introduced above is distorted by outliers. The Spearman rank coefficient avoids this problem by computing the correlation between the ranks of the features. It is defined as:

$$d_{\mathrm{spearm}}(\vec{x},\vec{y}) = 1 - \frac{\sum_{i=1}^{d} (x'_i - \bar{x}')(y'_i - \bar{y}')}{\sqrt{\sum_{i=1}^{d} (x'_i - \bar{x}')^2}\,\sqrt{\sum_{j=1}^{d} (y'_j - \bar{y}')^2}}, \qquad (4)$$

where $x'_i = \mathrm{rank}(x_i)$ and $y'_j = \mathrm{rank}(y_j)$. Notice that this measure does not take into account the quantitative values of the features.

Another kind of correlation measure that helps to overcome the problem of outliers is the Kendall-τ index, which is related to the Mutual Information probabilistic measure. It is defined as:

$$d_{\mathrm{kend\text{-}\tau}}(\vec{x},\vec{y}) = 1 - \frac{\sum_{i=1}^{d}\sum_{j=1}^{d} C_{x_{ij}}\, C_{y_{ij}}}{d(d-1)}, \qquad (5)$$

where $C_{x_{ij}} = \mathrm{sign}(x_i - x_j)$ and $C_{y_{ij}} = \mathrm{sign}(y_i - y_j)$.

The above discussion suggests that normalization of the data should be avoided, because this preprocessing may partially destroy the disparity among the dissimilarities. Besides, when the emails are encoded in high dimensional and noisy spaces, the dissimilarities mentioned above are affected by the curse of dimensionality [1, 15]. Hence, most of the dissimilarities become almost constant and the differences among them are lost [12, 16]. This problem can be avoided by selecting a small number of features before the dissimilarities are computed.
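
The remaining dissimilarities of this section can be obtained from standard statistical routines. A possible sketch (our own naming; note that SciPy's kendalltau computes the tau-b variant, which matches equation (5) only in the absence of ties):

```python
import numpy as np
from scipy.stats import spearmanr, kendalltau

def correlation_dissimilarity(x, y):
    """Equation (3): one minus Pearson's correlation between the two emails."""
    return 1.0 - np.corrcoef(x, y)[0, 1]

def spearman_dissimilarity(x, y):
    """Equation (4): correlation computed on the ranks of the features."""
    rho, _ = spearmanr(x, y)
    return 1.0 - rho

def kendall_dissimilarity(x, y):
    """Equation (5), up to tie handling: SciPy returns the tau-b variant."""
    tau, _ = kendalltau(x, y)
    return 1.0 - tau

x = np.array([0.1, 3.0, 0.0, 1.2, 0.4])
y = np.array([0.2, 2.5, 0.1, 1.0, 0.3])
print(correlation_dissimilarity(x, y),
      spearman_dissimilarity(x, y),
      kendall_dissimilarity(x, y))
```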

3 Combining classifiers based on dissimilarities

The SVM is a powerful machine learning technique that is able to work with high dimensional and noisy data [21]. However, the original SVM algorithm is not able to work directly from a dissimilarity matrix. To overcome this problem, we follow the approach of [17]. First, each dissimilarity is embedded into a Euclidean space such that the inter-pattern distances approximately reflect the original dissimilarities. Next, the test points are embedded via a linear algebra operation and, finally, the SVM is trained and evaluated.

Let $D \in \mathbb{R}^{n \times n}$ be a dissimilarity matrix made up of the object proximities. A configuration in a low dimensional Euclidean space can be found via a metric multidimensional scaling (MDS) algorithm [6] such that the original dissimilarities are approximately preserved. Let $X = [\vec{x}_1, \ldots, \vec{x}_n]^T$ be the matrix of object coordinates for the training patterns. Define $B = XX^T$ as the matrix of inner products, which is related to the dissimilarity matrix via the following equation:

$$B = -\tfrac{1}{2} J D^{2} J, \qquad (6)$$

where $J = I - \tfrac{1}{n}\vec{1}\vec{1}^{T} \in \mathbb{R}^{n \times n}$ is the centering matrix and $I$ is the identity matrix. If $B$ is positive semi-definite, the object coordinates in the low dimensional space $\mathbb{R}^{k}$ can be found through a singular value decomposition [6, 10]:

$$X_k = V_k \Lambda_k^{1/2}, \qquad (7)$$

where $V_k \in \mathbb{R}^{n \times k}$ is an orthogonal matrix whose columns are the first $k$ eigenvectors of $XX^T$ and $\Lambda_k \in \mathbb{R}^{k \times k}$ is a diagonal matrix with the corresponding eigenvalues. Several of the dissimilarities introduced in Section 2 generate inner product matrices $B$ that are not positive semi-definite. To avoid this problem, we have added a non-zero constant to the non-diagonal elements of the dissimilarity matrix [17].

Once the training patterns have been embedded into a low dimensional space, the test patterns can be added to this space via a linear projection [17]. Next we briefly describe the process.

Let $X \in \mathbb{R}^{n \times k}$ be the object configuration for the training patterns in $\mathbb{R}^{k}$ and $X_n = [\vec{x}_1, \ldots, \vec{x}_s]^T \in \mathbb{R}^{s \times k}$ the matrix of object coordinates sought for the test patterns. Let $D_n \in \mathbb{R}^{s \times n}$ be the matrix of dissimilarities between the $s$ test patterns and the $n$ training patterns that have already been projected. The matrix $B_n \in \mathbb{R}^{s \times n}$ of inner products between the test and training patterns can be found as:

$$B_n = -\tfrac{1}{2}\left(D_n^{2} J - U D^{2} J\right), \qquad (8)$$

where $J \in \mathbb{R}^{n \times n}$ is the centering matrix and $U = \tfrac{1}{n}\vec{1}\vec{1}^{T} \in \mathbb{R}^{s \times n}$. Since the matrix of inner products verifies

$$B_n = X_n X^{T}, \qquad (9)$$

$X_n$ can be found as the least mean-square error solution to (9), that is:

$$X_n = B_n X (X^{T} X)^{-1}. \qquad (10)$$

Given that $X^{T} X = \Lambda_k$ and considering that $X = V_k \Lambda_k^{1/2}$, the coordinates of the test points can be obtained as:

$$X_n = B_n V_k \Lambda_k^{-1/2}, \qquad (11)$$

which can be easily evaluated through simple linear algebraic operations.
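
A compact sketch of this embedding step, assuming the dissimilarity matrices have already been computed (NumPy only; the function names are ours, not the authors'):

```python
import numpy as np

def mds_embed_train(D, k):
    """Classical (metric) MDS, equations (6)-(7).
    D is the n x n training dissimilarity matrix, k the embedding dimension."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n           # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                   # double-centered inner products
    eigval, eigvec = np.linalg.eigh(B)            # ascending eigenvalues
    idx = np.argsort(eigval)[::-1][:k]            # keep the k largest
    L = np.clip(eigval[idx], 0, None)             # guard against small negatives
    V = eigvec[:, idx]
    X = V * np.sqrt(L)                            # X_k = V_k Lambda_k^{1/2}
    return X, V, L

def mds_embed_test(D_test, D_train, V, L):
    """Out-of-sample projection, equations (8) and (11).
    D_test is the s x n matrix of test-to-training dissimilarities."""
    s, n = D_test.shape
    J = np.eye(n) - np.ones((n, n)) / n
    U = np.ones((s, n)) / n
    Bn = -0.5 * (D_test ** 2 @ J - U @ (D_train ** 2) @ J)
    return Bn @ V / np.sqrt(np.where(L > 0, L, 1.0))   # B_n V_k Lambda_k^{-1/2}
```

In practice the same V and L computed on the training dissimilarities are reused to project every batch of test messages.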

Next we introduce the proposed method to combine classifiers based on different dissimilarities. Our method rests on the evidence that different dissimilarities reflect different features of the dataset (see Section 2). Therefore, classifiers based on different measures will misclassify different sets of patterns. Figure 1 shows, for instance, that the bold patterns are assigned to the wrong class by only one classifier, but with a voting strategy they are assigned to the right class.

Figure 1: Aggregation of classifiers using a voting strategy. Bold patterns are misclassified by a single hyperplane but not by the combination.

Hence, our combination algorithm proceeds as follows. First, the dissimilarities introduced in Section 2 are computed. Each dissimilarity is embedded into a Euclidean space; training and test pattern coordinates are obtained using equations (7) and (11), respectively. To increase the diversity of classifiers, once the dissimilarities are embedded, a bootstrap sample of the patterns is drawn. Next, we train an SVM for each dissimilarity and bootstrap sample. It is thus expected that the misclassification errors will change from one classifier to another, so combining the classifiers with a voting strategy will help to reduce the misclassification errors.

A related technique for combining classifiers is bagging [4, 3]. This method generates a diversity of classifiers trained on several bootstrap samples; the classifiers are then aggregated using a voting strategy. Nevertheless, there are three important differences between bagging and the method proposed in this section. First, our method generates the diversity of classifiers by considering different dissimilarities, which induces a stronger diversity among the classifiers. A second advantage of our method is that it is able to work directly with a dissimilarity matrix. Finally, the combination of several dissimilarities avoids the problem of choosing a particular dissimilarity for the application at hand, which is a difficult and time consuming task.
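
The whole procedure can be written in a few lines on top of the embedding sketch above. The following is an illustrative sketch only: it assumes the helpers mds_embed_train and mds_embed_test from the previous listing, 0/1 labels (1 = spam), and parameter values that mirror the linear-kernel settings reported later, not the authors' exact implementation:

```python
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.svm import SVC

def dissimilarity_ensemble_predict(X_train, y_train, X_test, metrics, k=20, m=20, seed=0):
    """Train one SVM per (dissimilarity, bootstrap sample) and majority-vote the test labels.
    `metrics` are names accepted by scipy's cdist, e.g. 'euclidean', 'cosine', 'correlation'."""
    rng = np.random.default_rng(seed)
    votes = []
    for metric in metrics:
        D_train = cdist(X_train, X_train, metric=metric)
        D_test = cdist(X_test, X_train, metric=metric)
        Xe_train, V, L = mds_embed_train(D_train, k)        # embedding sketch above
        Xe_test = mds_embed_test(D_test, D_train, V, L)
        for _ in range(m):
            idx = rng.integers(0, len(Xe_train), len(Xe_train))   # bootstrap sample
            clf = SVC(kernel="linear", C=0.1).fit(Xe_train[idx], y_train[idx])
            votes.append(clf.predict(Xe_test))
    votes = np.array(votes)
    # Majority vote: a message is labelled spam (1) only if most classifiers agree.
    return (votes.mean(axis=0) > 0.5).astype(int)
```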

Notice that the algorithm proposed above can easily be applied to other distance-based classifiers, such as the k-nearest neighbor algorithm.

4 Experimental results

In this section, the proposed ensemble of classifiers is applied to the identification of spam messages. The spam collection considered is available from the UCI Machine Learning database [19]. The corpus is made up of 4601 emails, of which 39.4% are spam and 60.6% legitimate messages. The number of features used to encode the emails is 57; they are described in [19].

The dissimilarities have been computed without normalizing the variables, because this preprocessing may increase the correlation among them. As mentioned in Section 3, the disparity among the dissimilarities helps to improve the performance of the ensemble of classifiers. Once the dissimilarities have been embedded into a Euclidean space, the variables are normalized to unit variance and zero mean. This preprocessing improves the SVM accuracy and the speed of convergence.

Regarding the ensemble of classifiers, an important issue is the dimensionality in which the dissimilarity matrix is embedded. To determine it, a metric multidimensional scaling algorithm is first run. The number of eigenvectors considered is determined by the curve induced by the eigenvalues. For the dataset considered, Figure 2 shows that the first twenty eigenvalues preserve the main structure of the dataset. In any case, the sensitivity to this parameter is not high provided that the number of eigenvalues chosen is large enough: for this dataset, values larger than twenty give good experimental results.

Figure 2: Eigenvalues for the multidimensional scaling algorithm with the cosine dissimilarity.

The combination strategy proposed in this paper has also been applied to the k-nearest neighbor classifier. An important parameter in this algorithm is the number of neighbors, which has been estimated using 20% of the patterns as a validation set.
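
Such a validation-based choice of the number of neighbors could be sketched as follows; the candidate values and function name are ours, introduced only for illustration, and the classifier is applied to the MDS coordinates of the training patterns:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

def select_k_neighbors(X_embedded, y, candidates=(1, 2, 3, 5, 7), seed=0):
    """Pick the number of neighbors on a 20% validation split, as described above."""
    X_tr, X_val, y_tr, y_val = train_test_split(
        X_embedded, y, test_size=0.2, random_state=seed, stratify=y)
    scores = {k: KNeighborsClassifier(n_neighbors=k).fit(X_tr, y_tr).score(X_val, y_val)
              for k in candidates}
    return max(scores, key=scores.get)
```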

The classifiers have been evaluated from two points of view. On the one hand, we have computed the misclassification errors. On the other hand, since in our application false positive errors are very expensive and should be avoided, the false positive rate is a good index of the algorithm performance and is also provided. The errors have been estimated on a subset of 20% of the patterns, drawn at random without replacement from the original dataset.

                    Linear kernel              Polynomial kernel
Method              Error    False positive    Error    False positive
Euclidean           8.1%     4.0%              15%      11%
Cosine              19.1%    15.3%             30.4%    8%
Correlation         18.7%    9.8%              31%      7.8%
Manhattan           12.6%    6.3%              19.2%    7.1%
Kendall-τ           6.5%     3.1%              11.1%    5.4%
Spearman            6.6%     3.1%              11.1%    5.4%
Bagging Euclidean   7.3%     3.0%              14.3%    4%
Combination         6.1%     3%                11.1%    1.8%

Parameters: linear kernel: C=0.1, m=20; polynomial kernel: degree=2, C=5, m=20.

Table 1: Experimental results for the ensemble of SVM classifiers. Classifiers based solely on a single dissimilarity and bagging have been taken as reference.

Table 1 shows the experimental results for the ensemble of classifiers using the SVM. The proposed method has been compared with bagging, introduced in Section 3, and with classifiers based on a single dissimilarity. The parameter m determines the number of bootstrap samples considered by the combination strategies; C is the standard regularization parameter of the C-SVM [21]. From the analysis of Table 1, the following conclusions can be drawn:

- The combination strategy improves significantly on the Euclidean distance, which is the measure usually considered by most SVM algorithms.
- The combination strategy with the polynomial kernel reduces significantly the false positive errors of the best single classifier. The improvement is smaller for the linear kernel. This can be explained because the non-linear kernel allows us to build classifiers with larger variance, and therefore the combination strategy can achieve a larger reduction of the false positive errors.
- The proposed combination strategy outperforms a widely used aggregation method such as bagging. The improvement is particularly important for the polynomial kernel.
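
For completeness, the two figures reported in the tables can be computed as in the sketch below. The normalization of the false positive rate is an assumption on our part (the fraction of legitimate messages that are blocked); the paper does not spell it out:

```python
import numpy as np

def error_and_false_positive_rate(y_true, y_pred, spam_label=1):
    """Misclassification error and false positive rate (legitimate mail flagged as spam)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    error = np.mean(y_true != y_pred)
    legitimate = (y_true != spam_label)
    false_positive = np.mean(y_pred[legitimate] == spam_label)
    return error, false_positive
```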

Table 2 shows the experimental results for the ensemble of k-NN classifiers, where k denotes the number of nearest neighbors considered. As in the previous case, the proposed combination strategy improves particularly the false positive errors of classifiers based on a single distance.

Method          Error    False positive
Euclidean       22.5%    9.3%
Cosine          23.3%    14.0%
Correlation     23.2%    14.0%
Manhattan       23.2%    12.2%
Kendall-τ       21.7%    6%
Spearman        11.2%    6.5%
Bagging         19.1%    11.6%
Combination     11.5%    5.5%

Parameters: k = 2.

Table 2: Experimental results for the ensemble of k-NN classifiers. Classifiers based solely on a single dissimilarity and bagging have been taken as reference.

We also report that bagging is not able to reduce the false positive errors of the Euclidean distance, and that our combination strategy improves significantly on the bagging algorithm. Finally, we observe that the misclassification errors are larger for k-NN than for the SVM. This can be explained because the SVM has a higher generalization ability when the number of features is large.

5 Conclusions and future research trends

In this paper, we have proposed an ensemble of classifiers based on a diversity of dissimilarities. Our approach aims particularly to reduce the false positive errors of classifiers based on a single distance. Besides, the algorithm is able to work directly from a dissimilarity matrix. The algorithm has been applied to the identification of spam messages. The experimental results suggest that the proposed method helps to improve both misclassification errors and false positive errors. We also report that our algorithm outperforms classifiers based on a single dissimilarity and other combination strategies such as bagging. As future research, we will try to apply other combination strategies that assign a different weight to each classifier.

Acknowledgments. This work was partially supported by the Junta de Castilla y León grant PON05B06.

References

[1] C. C. Aggarwal. Re-designing distance functions and distance-based applications for high dimensional applications. In Proc. of the ACM International Conference on Management of Data and Symposium on Principles of Database Systems (SIGMOD-PODS), vol. 1, pp. 13-18, March 2001.

[2] I. Androutsopoulos, J. Koutsias, K. V. Chandrinos, and C. D. Spyropoulos. An experimental comparison of naive Bayesian and keyword-based anti-spam filtering with personal e-mail messages. In 23rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 160-167, Athens, Greece, 2000.

[3] E. Bauer and R. Kohavi. An empirical comparison of voting classification algorithms: bagging, boosting, and variants. Machine Learning, vol. 36, pp. 105-139, 1999.

[4] L. Breiman. Bagging predictors. Machine Learning, vol. 24, pp. 123-140, 1996.

[5] X. Carreras and I. Márquez. Boosting trees for anti-spam email filtering. In RANLP-01, Fourth International Conference on Recent Advances in Natural Language Processing, pp. 58-64, Tzigov Chark, BG, 2001.

[6] T. Cox and M. Cox. Multidimensional Scaling, 2nd ed. New York: Chapman & Hall/CRC Press, 2001.

[7] P. Domingos. MetaCost: a general method for making classifiers cost-sensitive. In ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 155-165, San Diego, USA, 1999.

[8] H. Drucker, D. Wu, and V. N. Vapnik. Support vector machines for spam categorization. IEEE Transactions on Neural Networks, 10(5), pp. 1048-1054, September 1999.

[9] T. Fawcett. "In vivo" spam filtering: a challenge problem for KDD. ACM SIGKDD Explorations, 5(2), pp. 140-148, December 2003.

[10] G. H. Golub and C. F. Van Loan. Matrix Computations, 3rd ed. Baltimore, Maryland, USA: Johns Hopkins University Press, 1996.

[11] S. Hershkop and J. S. Salvatore. Combining email models for false positive reduction. In ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 98-107, Chicago, Illinois, August 2005.

[12] A. Hinneburg, C. C. Aggarwal, and D. A. Keim. What is the nearest neighbor in high dimensional spaces? In Proc. of the International Conference on Database Theory (ICDT), pp. 506-515, Cairo, Egypt: Morgan Kaufmann, September 2000.

[13] J. Kittler, M. Hatef, R. Duin, and J. Matas. On combining classifiers. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 3, pp. 226-239, March 1998.

[14] A. Kolcz and J. Alspector. SVM-based filtering of e-mail spam with content-specific misclassification costs. In Workshop on Text Mining (TextDM 2001), pp. 1-14, San Jose, California, 2001.

[15] M. Martín-Merino and A. Muñoz. A new Sammon algorithm for sparse data visualization. In International Conference on Pattern Recognition (ICPR), vol. 1, pp. 477-481, Cambridge (UK): IEEE Press, August 2004.

[16] M. Martín-Merino and A. Muñoz. Self-organizing map and Sammon mapping for asymmetric proximities. Neurocomputing, vol. 63, pp. 171-192, 2005.

[17] E. Pekalska, P. Paclík, and R. Duin. A generalized kernel approach to dissimilarity-based classification. Journal of Machine Learning Research, vol. 2, pp. 175-211, 2001.

[18] F. Provost and T. Fawcett. Robust classification for imprecise environments. Machine Learning, vol. 42, pp. 203-231, 2001.

[19] UCI Machine Learning Repository. Available from: www.ics.uci.edu/~mlearn/MLRepository.html

[20] G. Valentini and T. Dietterich. Bias-variance analysis of support vector machines for the development of SVM-based ensemble methods. Journal of Machine Learning Research, vol. 5, pp. 725-775, 2004.

[21] V. Vapnik. Statistical Learning Theory. New York: John Wiley & Sons, 1998.