A Survey of Outlier Detection Methodologies.


Victoria J. Hodge and Jim Austin, Dept. of Computer Science, University of York, York, YO10 5DD, UK.

Abstract. Outlier detection has been used for centuries to detect and, where appropriate, remove anomalous observations from data. Outliers arise due to mechanical faults, changes in system behaviour, fraudulent behaviour, human error, instrument error or simply through natural deviations in populations. Their detection can identify system faults and fraud before they escalate with potentially catastrophic consequences. It can identify errors and remove their contaminating effect on the data set and as such purify the data for processing. The original outlier detection methods were arbitrary but now principled and systematic techniques are used, drawn from the full gamut of Computer Science and Statistics. In this paper, we introduce a survey of contemporary techniques for outlier detection. We identify their respective motivations and distinguish their advantages and disadvantages in a comparative review.

Keywords: Outlier, Novelty, Anomaly, Noise, Deviation, Detection, Recognition

1. Introduction

Outlier detection encompasses aspects of a broad spectrum of techniques. Many techniques employed for detecting outliers are fundamentally identical but carry different names chosen by their authors, who describe their various approaches as outlier detection, novelty detection, anomaly detection, noise detection, deviation detection or exception mining. In this paper, we have chosen to call the technique outlier detection, although we also use novelty detection where we feel it appropriate, but we incorporate approaches from all of the categories named above. Additionally, authors have proposed many definitions for an outlier, with seemingly no universally accepted definition.
We will take the definition of Grubbs (Grubbs, 1969), as quoted in Barnett & Lewis (Barnett and Lewis, 1994): An outlying observation, or outlier, is one that appears to deviate markedly from other members of the sample in which it occurs. This work was supported by EPSRC Grant No. GR/R55191/01. © 2004 Kluwer Academic Publishers. Printed in the Netherlands. Hodge+Austin_OutlierDetection_AIRE381.tex; 19/01/2004; 13:18.
A further outlier definition from Barnett & Lewis (Barnett and Lewis, 1994) is: An observation (or subset of observations) which appears to be inconsistent with the remainder of that set of data. In figure 2, there are five outlier points labelled V, W, X, Y and Z which are clearly isolated and inconsistent with the main cluster of points. The data in the figures in this survey paper is adapted from the Wine data set (Blake and Merz, 1998). John (John, 1995) states that an outlier may also be surprising veridical data: a point belonging to class A but actually situated inside class B, so the true (veridical) classification of the point is surprising to the observer. Aggarwal (Aggarwal and Yu, 2001) notes that outliers may be considered as noise points lying outside a set of defined clusters, or alternatively outliers may be defined as the points that lie outside of the set of clusters but are also separated from the noise. These outliers behave differently from the norm. In this paper, we focus on the two definitions quoted from (Barnett and Lewis, 1994) above and do not consider the dual class-membership problem or separating noise and outliers. Outlier detection is a critical task in many safety critical environments as the outlier indicates abnormal running conditions from which significant performance degradation may well result, such as an aircraft engine rotation defect or a flow problem in a pipeline. An outlier can denote an anomalous object in an image such as a land mine. An outlier may pinpoint an intruder inside a system with malicious intentions, so rapid detection is essential. Outlier detection can detect a fault on a factory production line by constantly monitoring specific features of the products and comparing the real-time data with either the features of normal products or those for faults.
It is imperative in tasks such as credit card usage monitoring or mobile phone monitoring to detect a sudden change in the usage pattern which may indicate fraudulent usage such as a stolen card or stolen phone airtime. Outlier detection accomplishes this by analysing and comparing the time series of usage statistics. For application processing, such as loan application processing or social security benefit payments, an outlier detection system can detect any anomalies in the application before approval or payment. Outlier detection can additionally monitor the circumstances of a benefit claimant over time to ensure the payment has not slipped into fraud. Equity or commodity traders can use outlier detection methods to monitor individual shares or markets and detect novel trends which may indicate buying or selling opportunities. A news delivery system
can detect changing news stories and ensure the supplier is first with the breaking news. In a database, outliers may indicate fraudulent cases or they may just denote an error by the entry clerk or a misinterpretation of a missing value code; either way, detection of the anomaly is vital for database consistency and integrity. A more exhaustive list of applications that utilise outlier detection is:
- Fraud detection: detecting fraudulent applications for credit cards or state benefits, or detecting fraudulent usage of credit cards or mobile phones.
- Loan application processing: to detect fraudulent applications or potentially problematical customers.
- Intrusion detection: detecting unauthorised access in computer networks.
- Activity monitoring: detecting mobile phone fraud by monitoring phone activity or suspicious trades in the equity markets.
- Network performance: monitoring the performance of computer networks, for example to detect network bottlenecks.
- Fault diagnosis: monitoring processes to detect faults in motors, generators, pipelines or space instruments on space shuttles, for example.
- Structural defect detection: monitoring manufacturing lines to detect faulty production runs, for example cracked beams.
- Satellite image analysis: identifying novel features or misclassified features.
- Detecting novelties in images: for robot neotaxis or surveillance systems.
- Motion segmentation: detecting image features moving independently of the background.
- Time-series monitoring: monitoring safety critical applications such as drilling or high-speed milling.
- Medical condition monitoring: such as heart-rate monitors.
- Pharmaceutical research: identifying novel molecular structures.
- Detecting novelty in text: to detect the onset of news stories, for topic detection and tracking, or for traders to pinpoint equity, commodities or FX trading stories and outperforming or underperforming commodities.
- Detecting unexpected entries in databases: for data mining, to detect errors, frauds or valid but unexpected entries.
- Detecting mislabelled data in a training data set.
Outliers arise because of human error, instrument error, natural deviations in populations, fraudulent behaviour, changes in behaviour of systems or faults in systems. How the outlier detection system deals with the outlier depends on the application area. If the outlier indicates a typographical error by an entry clerk then the entry clerk can be notified and simply correct the error, so the outlier will be restored to a normal record. An outlier resulting from an instrument reading error can simply be expunged. A survey of human population features may include anomalies such as a handful of very tall people. Here the anomaly is purely natural; although the reading may be worth flagging for verification to ensure no errors, it should be included in the classification once it is verified. A system should use a classification algorithm that is robust to outliers to model data with naturally occurring outlier points. An outlier in a safety critical environment, a fraud detection system, an image analysis system or an intrusion monitoring system must be detected immediately (in real-time) and a suitable alarm sounded to alert the system administrator to the problem. Once the situation has been handled, this anomalous reading may be stored separately for comparison with any new fraud cases but would probably not be stored with the main system data, as these techniques tend to model normality and use this to detect anomalies. There are three fundamental approaches to the problem of outlier detection:
1. Type 1: Determine the outliers with no prior knowledge of the data.
This is essentially a learning approach analogous to unsupervised clustering. The approach processes the data as a static distribution, pinpoints the most remote points, and flags them as potential outliers. Type 1 assumes that errors or faults are separated from the normal data and will thus appear as outliers. In figure 3, points V, W, X, Y and Z are the remote points separated from the main cluster and would be flagged as possible outliers. We note that the main cluster may be subdivided if necessary into more
than one cluster to allow both classification and outlier detection, as with figure 4. The approach is predominantly retrospective and is analogous to a batch-processing system. It requires that all data be available before processing and that the data is static. However, once the system possesses a sufficiently large database with good coverage, then it can compare new items with the existing data. There are two sub-techniques commonly employed, diagnosis and accommodation (Rousseeuw and Leroy, 1996). An outlier diagnostic approach highlights the potential outlying points. Once detected, the system may remove these outlier points from future processing of the data distribution. Many diagnostic approaches iteratively prune the outliers and fit their system model to the remaining data until no more outliers are detected. An alternative methodology is accommodation, which incorporates the outliers into the distribution model generated and employs a robust classification method. These robust approaches can withstand outliers in the data and generally induce a boundary of normality around the majority of the data which thus represents normal behaviour. In contrast, non-robust classifier methods produce representations which are skewed when outliers are left in. Non-robust methods are best suited when there are only a few outliers in the data set (as in figure 3) as they are computationally cheaper than the robust methods, but a robust method must be used if there are a large number of outliers to prevent this distortion. Torr & Murray (Torr and Murray, 1993) use a cheap Least Squares algorithm if there are only a few outliers but switch to a more expensive but robust algorithm for higher frequencies of outliers.
2. Type 2: Model both normality and abnormality. This approach is analogous to supervised classification and requires pre-labelled data, tagged as normal or abnormal. In figure 4, there are three classes of normal data with pre-labelled outliers in isolated areas.
The entire area outside the normal class represents the outlier class. The normal points could be classified as a single class or subdivided into the three distinct classes according to the requirements of the system, to provide a simple normal/abnormal classification or to provide a classifier distinguishing the abnormal class and the three classes of normality. Classifiers are best suited to static data as the classification needs to be rebuilt from first principles if the data distribution shifts, unless the system uses an incremental classifier such as an evolutionary neural network. We describe one such approach later which
uses a Grow When Required (Marsland, 2001) network. A type 2 approach can be used for online classification, where the classifier learns the classification model and then classifies new exemplars as and when required against the learned model. If the new exemplar lies in a region of normality it is classified as normal, otherwise it is flagged as an outlier. Classification algorithms require a good spread of both normal and abnormal data, i.e., the data should cover the entire distribution to allow generalisation by the classifier. New exemplars may then be classified correctly; as classification is limited to a known distribution, a new exemplar derived from a previously unseen region of the distribution may not be classified correctly unless the generalisation capabilities of the underlying classification algorithm are good.
3. Type 3: Model only normality or, in a very few cases, model abnormality (Fawcett and Provost, 1999), (Japkowicz et al., 1995). Authors generally name this technique novelty detection or novelty recognition. It is analogous to a semi-supervised recognition or detection task and can be considered semi-supervised as the normal class is taught but the algorithm learns to recognise abnormality. The approach needs pre-classified data but only learns data marked normal. It is suitable for static or dynamic data as it only learns one class, which provides the model of normality. It can learn the model incrementally as new data arrives, tuning the model to improve the fit as each new exemplar becomes available. It aims to define a boundary of normality. A type 3 system recognises a new exemplar as normal if it lies within the boundary and recognises the new exemplar as novel otherwise. In figure 5, the novelty recogniser has learned the same data as shown in figure 2 but only the normal class is learned and a boundary of normality induced.
If points V, W, X, Y and Z from figure 2 are compared to the novelty recogniser they will be labelled as abnormal as they lie outside the induced boundary. This boundary may be hard, where a point lies wholly within or wholly outside the boundary, or soft, where the boundary is graduated depending on the underlying detection algorithm. A soft bounded algorithm can estimate the degree of outlierness. It requires the full gamut of normality to be available for training to permit generalisation. However, it requires no abnormal data for training, unlike type 2. Abnormal data is often difficult to obtain or expensive in many fault detection domains such as aircraft engine
monitoring. It would be extremely costly to sabotage an aircraft engine just to obtain some abnormal running data. Another problem with type 2 is that it cannot always handle outliers from unexpected regions; for example, in fraud detection a new method of fraud never previously encountered, or a previously unseen fault in a machine, may not be handled correctly by the classifier unless generalisation is very good. In this method, as long as the new fraud lies outside the boundary of normality then the system will correctly detect the fraud. If normality shifts then the normal class modelled by the system may be shifted by relearning the data model or shifting the model if the underlying modelling technique permits, as with evolutionary neural networks. The outlier approaches described in this survey paper generally map data onto vectors. The vectors comprise numeric and symbolic attributes to represent continuous-valued, discrete (ordinal), categorical (unordered numeric), ordered symbolic or unordered symbolic data. The vectors may be monotype or multitype. The statistical and neural network approaches typically require numeric monotype attributes and need to map symbolic data onto suitable numeric values [1], but the machine learning techniques described are able to accommodate multitype vectors and symbolic attributes. The outliers are determined from the closeness of vectors using some suitable distance metric. Different approaches work better for different types of data, for different numbers of vectors, for different numbers of attributes, according to the speed required and according to the accuracy required. The two fundamental considerations when selecting an appropriate methodology for an outlier detection system are:
- selecting an algorithm which can accurately model the data distribution and accurately highlight outlying points for a clustering, classification or recognition type technique.
The algorithm should also be scalable to the data sets to be processed.
- selecting a suitable neighbourhood of interest for an outlier.
The selection of the neighbourhood of interest is non-trivial. Many algorithms define boundaries around normality during processing and autonomously induce a threshold. However, these approaches are often parametric, enforcing a specific distribution model, or require user-specified parameters such as the number of clusters. Other techniques discussed below require user-defined parameters to define the size or density of neighbourhoods for outlier
[1] There are statistical models for symbolic data (such as log-linear models) but these are not generally used for outlier analysis.
thresholding. The choice of neighbourhood, whether user-defined or autonomously induced, needs to be applicable for all density distributions likely to be encountered and can potentially include those with sharp density variations. In the remainder of this paper, we categorise and analyse a broad range of outlier detection methodologies. We pinpoint how each handles outliers and make recommendations for when each methodology is appropriate for clustering, classification and/or recognition. Barnett & Lewis (Barnett and Lewis, 1994) and Rousseeuw & Leroy (Rousseeuw and Leroy, 1996) describe and analyse a broad range of statistical outlier techniques and Marsland (Marsland, 2001) analyses a wide range of neural methods. We have observed that outlier detection methods are derived from three fields of computing: statistics (proximity-based, parametric, non-parametric and semi-parametric), neural networks (supervised and unsupervised) and machine learning. In the next four sections, we describe and analyse techniques from all three fields and a collection of hybrid techniques that utilise algorithms from multiple fields. The approaches described here encompass distance-based, set-based, density-based, depth-based, model-based and graph-based algorithms.

2. Statistical models

Statistical approaches were the earliest algorithms used for outlier detection. Some of the earliest are applicable only for single dimensional data sets. In fact, many of the techniques described in both (Barnett and Lewis, 1994) and (Rousseeuw and Leroy, 1996) are single dimensional or at best univariate. One such single dimensional method is Grubbs' method (Extreme Studentized Deviate) (Grubbs, 1969), which calculates a Z value as the difference between the mean value for the attribute and the query value, divided by the standard deviation for the attribute, where the mean and standard deviation are calculated from all attribute values including the query value.
The Z value for the query is compared with a 1% or 5% significance level. The technique requires no user parameters as all parameters are derived directly from data. However, the technique is susceptible to the number of exemplars in the data set: the higher the number of records, the more statistically representative the sample is likely to be. Statistical models are generally suited to quantitative real-valued data sets or at the very least quantitative ordinal data distributions where the ordinal data can be transformed to suitable numerical values for statistical (numerical) processing. This limits their applicability and
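As a concrete illustration, the Z value can be computed in a few lines of Python. This is a minimal sketch, not the authors' code; the sample data is invented, and a complete Grubbs' test would also compare Z against the critical value derived from the t-distribution at the chosen 1% or 5% significance level.

```python
import math

def grubbs_z(values, query):
    # Z = |query - mean| / std, with the mean and standard deviation
    # computed over all attribute values including the query, as in the text.
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)
    return abs(query - mean) / math.sqrt(var)

# Hypothetical sample: seven readings near 10 and one anomalous reading.
data = [9.8, 10.1, 9.9, 10.0, 10.2, 10.1, 9.9, 15.0]
z_outlier = grubbs_z(data, 15.0)   # large Z: candidate outlier
z_normal = grubbs_z(data, 10.0)    # small Z: consistent with the sample
```

Note that including the query in the mean and standard deviation, as the method specifies, inflates the standard deviation and so masks the outlier slightly; this is why the technique is sensitive to sample size.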
increases the processing time if complex data transformations are necessary before processing. In probably one of the simplest statistical outlier detection techniques described here, Laurikkala et al. (Laurikkala et al., 2000) use informal box plots to pinpoint outliers in both univariate and multivariate data sets. This produces a graphical representation (see figure 1 for an example box plot) and allows a human auditor to visually pinpoint the outlying points. It is analogous to a visual inspection of figure 2. Their approach can handle real-valued, ordinal and categorical (no order) attributes. Box plots plot the lower extreme, lower quartile, median, upper quartile and upper extreme points. For univariate data, this is a simple 5-point plot, as in figure 1. The outliers are the points beyond the lower and upper extreme values of the box plot, such as V, Y and Z in figure 1. Laurikkala et al. suggest a heuristic of 1.5 times the inter-quartile range beyond the upper and lower extremes for outliers, but this would need to vary across different data sets. For multivariate data sets the authors note that there are no unambiguous total orderings but recommend using the reduced sub-ordering based on the generalised distance metric using the Mahalanobis distance measure (see equation 2). The Mahalanobis distance measure includes the inter-attribute dependencies so the system can compare attribute combinations. The authors found the approach most accurate for multivariate data, where a panel of experts agreed with the outliers detected by the system. For univariate data, outliers are more subjective and may be naturally occurring, for example the heights of adult humans, so there was generally more disagreement. Box plots make no assumptions about the data distribution model but are reliant on a human to note the extreme points plotted on the box plot.
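The univariate box plot heuristic can be sketched as follows. This is an illustrative sketch under our own naming, assuming a simple linear-interpolation quantile; the 1.5 multiplier is the heuristic suggested by Laurikkala et al. and, as noted above, may need tuning per data set.

```python
def iqr_outliers(values, k=1.5):
    # Flag points lying more than k * IQR beyond the quartiles.
    s = sorted(values)
    n = len(s)

    def quantile(q):
        # Linear-interpolation quantile over the sorted sample.
        pos = q * (n - 1)
        lo, hi = int(pos), min(int(pos) + 1, n - 1)
        return s[lo] + (pos - lo) * (s[hi] - s[lo])

    q1, q3 = quantile(0.25), quantile(0.75)
    iqr = q3 - q1
    lower, upper = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if v < lower or v > upper]
```

A human auditor would normally read these points off the plot; the function simply automates the same whisker rule.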
Statistical models use different approaches to overcome the problem of increasing dimensionality, which both increases the processing time and distorts the data distribution by spreading the convex hull. Some methods preselect key exemplars to reduce the processing time (Datta and Kibler, 1995), (Skalak, 1994). As the dimensionality increases, the data points are spread through a larger volume and become less dense. This makes the convex hull harder to discern and is known as the Curse of Dimensionality. The most efficient statistical techniques automatically focus on the salient attributes and are able to process the higher number of dimensions in tractable time. However, many techniques such as k-NN, neural networks, Minimum Volume Ellipsoid or Convex Peeling described in this survey are susceptible to the Curse of Dimensionality. These approaches may utilise a preprocessing algorithm to preselect the salient attributes (Aha and Bankert, 1994), (Skalak, 1994), (Skalak and Rissland, 1990). These feature selection techniques essentially remove noise from the data distribution and focus the main cluster of normal data points while isolating the outliers, as with figure 2. Only a few attributes usually contribute to the deviation of an outlier case from a normal case. An alternative technique is to use an algorithm to project the data onto a lower dimensional subspace to compact the convex hull (Aggarwal and Yu, 2001) or use Principal Component Analysis (Faloutsos et al., 1997), (Parra et al., 1996).

Proximity-based Techniques

Proximity-based techniques are simple to implement and make no prior assumptions about the data distribution model. They are suitable for both type 1 and type 2 outlier detection. However, they suffer exponential computational growth as they are founded on the calculation of the distances between all records. The computational complexity is directly proportional to both the dimensionality of the data m and the number of records n. Hence, methods such as k-nearest neighbour (also known as instance-based learning and described next) with O(n^2 m) runtime are not feasible for high dimensionality data sets unless the running time can be improved. There are various flavours of the k-nearest neighbour (k-NN) algorithm for outlier detection but all calculate the nearest neighbours of a record using a suitable distance metric such as Euclidean distance or Mahalanobis distance. Euclidean distance is given by equation 1

\sqrt{\sum_{i=1}^{n} (x_i - y_i)^2}    (1)

and is simply the vector distance, whereas the Mahalanobis distance, given by equation 2

\sqrt{(x - \mu)^T C^{-1} (x - \mu)}    (2)

calculates the distance from a point to the centroid (\mu) defined by correlated attributes given by the covariance matrix (C).
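Both metrics can be sketched directly from equations 1 and 2. This is a minimal illustration; the 2x2 matrix inversion is hard-coded for brevity, and a general implementation would use a linear algebra library and estimate C from the data.

```python
import math

def euclidean(x, y):
    # Equation 1: plain vector distance.
    return math.sqrt(sum((xi - yi) ** 2 for xi, yi in zip(x, y)))

def mahalanobis_2d(x, mu, cov):
    # Equation 2 for the 2-D case: sqrt((x - mu)^T C^-1 (x - mu)).
    # cov is a 2x2 covariance matrix, inverted directly here.
    (a, b), (c, d) = cov
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]
    dx = [x[0] - mu[0], x[1] - mu[1]]
    tmp = [inv[0][0] * dx[0] + inv[0][1] * dx[1],
           inv[1][0] * dx[0] + inv[1][1] * dx[1]]
    return math.sqrt(dx[0] * tmp[0] + dx[1] * tmp[1])
```

With C equal to the identity matrix the two distances coincide; a non-diagonal C rescales the space along the directions of attribute correlation, which is why the Mahalanobis distance needs a pass through the data to estimate those correlations.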
Mahalanobis distance is computationally expensive to calculate for large high dimensional data sets compared to the Euclidean distance as it requires a pass through the entire data set to identify the attribute correlations. Ramaswamy et al. (Ramaswamy et al., 2000) introduce an optimised k-NN to produce a ranked list of potential outliers. A point p is an outlier if no more than n - 1 other points in the data set have a higher D_m
(distance to the m-th neighbour), where m is a user-specified parameter. In figure 3, V is most isolated followed by X, W, Y then Z, so the outlier rank would be V, X, W, Y, Z. This approach is susceptible to the computational growth as the entire distance matrix must be calculated for all points (ALL-kNN), so Ramaswamy et al. include techniques for speeding the k-NN algorithm such as partitioning the data into cells. If any cell and its directly adjacent neighbours contain more than k points, then the points in the cell are deemed to lie in a dense area of the distribution, so the points contained are unlikely to be outliers. If the number of points lying in cells more than a pre-specified distance apart is less than k then all points in the cell are labelled as outliers. Hence, only a small number of cells not previously labelled need to be processed and only a relatively small number of distances need to be calculated for outlier detection. Authors have also improved the running speed of k-NN by creating an efficient index using a computationally efficient indexing structure (Ester and Xu, 1996) with linear running time. Knorr & Ng (Knorr and Ng, 1998) introduce an efficient type 1 k-NN approach. If m of the k nearest neighbours (where m < k) lie within a specific distance threshold d then the exemplar is deemed to lie in a sufficiently dense region of the data distribution to be classified as normal. However, if there are fewer than m neighbours inside the distance threshold then the exemplar is an outlier. A very similar type 1 approach for identifying land mines from satellite ground images (Byers and Raftery, 1998) is to take the m-th neighbour and find the distance D_m. If this distance is less than a threshold d then the exemplar lies in a sufficiently dense region of the data distribution and is classified as normal. However, if the distance is more than the threshold value then the exemplar must lie in a locally sparse area and is an outlier.
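The core of the Ramaswamy et al. ranking can be sketched as follows. This is an illustrative sketch without their cell-based speed-ups; the function names are our own, and thresholding the same D_m score gives a Byers & Raftery style test.

```python
import math

def dist(x, y):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def rank_by_dm(points, m):
    # Score each point by the distance to its m-th nearest neighbour (D_m)
    # and rank in descending order: the most isolated points come first.
    scores = []
    for i, p in enumerate(points):
        ds = sorted(dist(p, q) for j, q in enumerate(points) if j != i)
        scores.append((ds[m - 1], p))     # D_m for point p
    return sorted(scores, reverse=True)
```

Taking the top n entries of the ranked list yields the n most outlying points; note the naive loop computes the full distance matrix, which is exactly the ALL-kNN cost the cell partitioning is designed to avoid.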
This has reduced the number of data-specific parameters from Knorr & Ng's (Knorr and Ng, 1998) approach by one, as we now have d and m but no k value. In figure 3, points V, W, X, Y and Z are relatively distant from their neighbours and will have fewer than m neighbours within d and a high D_m, so both Knorr & Ng and Byers & Raftery classify them as outliers. This is less susceptible to the computational growth than the ALL-kNN approach as only the k nearest neighbours need to be calculated for a new exemplar rather than the entire distance matrix for all points. A type 2 classification k-NN method such as the majority voting approach (Wettschereck, 1994) requires a labelled data set with both normal and abnormal vectors classified. The k nearest neighbours for the new exemplar are calculated and it is classified according to the
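The Knorr & Ng style density test reduces to a neighbour count, sketched below. Again this is an illustrative sketch, not their optimised implementation, and the parameter values in the usage are hypothetical.

```python
import math

def dist(x, y):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def knorr_ng_outlier(point, data, d, m):
    # Normal if at least m neighbours lie within distance threshold d,
    # otherwise the point sits in a sparse region and is an outlier.
    within = sum(1 for q in data if q != point and dist(point, q) <= d)
    return within < m
```

The two user parameters d and m jointly define the neighbourhood of interest discussed in section 1, and as noted there, a single global d can fail on distributions with sharp density variations.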
majority classification of the nearest neighbours (Wettschereck, 1994). An extension incorporates the distance, where the voting power of each nearest neighbour is attenuated according to its distance from the new item (Wettschereck, 1994), with the voting power systematically decreasing as the distance increases. Tang et al. (Tang et al., 2002) introduce a type 1 outlier diagnostic which unifies weighted k-NN with a connectivity-based approach and calculates a weighted distance score rather than a weighted classification. It calculates the average chaining distance (path length) between a point p and its k neighbours. The early distances are assigned higher weights, so if a point lies in a sparse region, as points V, W, X, Y and Z do in figure 4, its nearest neighbours in the path will be relatively distant and the average chaining distance will be high. In contrast, Wettschereck requires a data set with good coverage for both normal and abnormal points. The distribution in figure 4 would cause problems for voted k-NN as there are relatively few examples of outliers and their nearest neighbours, although distant, will in fact be normal points, so they are classified as normal. Tang's underlying principle is to assimilate both density and isolation. A point can lie in a relatively sparse region of a distribution without being an outlier but a point in isolation is an outlier. However, the technique is computationally complex, with a similar expected runtime to a full k-NN matrix calculation as it relies on calculating paths between all points and their k neighbours. Wettschereck's approach is less susceptible as only the k nearest neighbours are calculated relative to the single new item. Another technique for optimising k-NN is reducing the number of features. These feature subset selectors are applicable for most systems described in this survey, not just the k-NN techniques.
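The distance-weighted voting scheme can be sketched as below. This is a minimal sketch; the 1/(1 + d) attenuation is one plausible weighting choice among the several Wettschereck evaluates, and the labels and data are invented.

```python
import math

def dist(x, y):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def weighted_knn(query, labelled, k):
    # Type 2 classification: each of the k nearest neighbours votes for its
    # label, with voting power attenuated as its distance from the query grows.
    neighbours = sorted(labelled, key=lambda pl: dist(query, pl[0]))[:k]
    votes = {}
    for point, label in neighbours:
        votes[label] = votes.get(label, 0.0) + 1.0 / (1.0 + dist(query, point))
    return max(votes, key=votes.get)
```

This also makes the coverage requirement concrete: if the abnormal class is barely represented, a query in a sparse region still collects mostly normal votes and is misclassified, which is the failure mode described for figure 4.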
Aha & Bankert (Aha and Bankert, 1994) describe a forward sequential selection feature selector which iteratively evaluates feature subsets, adding one extra dimension per iteration until the extra dimension entails no improvement in performance. The feature selector demonstrates high accuracy when coupled with an instance-based classifier (nearest-neighbour) but is computationally expensive due to the combinatorial problem of subset selection. Aggarwal & Yu (Aggarwal and Yu, 2001) employ lower dimensional projections of the data set and focus on key attributes. The method assumes that outliers are abnormally sparse in certain lower dimensional projections where the combination of attributes in the projection correlates to the attributes that are deviant. Aggarwal & Yu use an evolutionary search algorithm to determine the projections, which has a faster running time than the conventional approach employed by Aha & Bankert.
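The forward sequential loop can be written generically. This is an illustrative sketch in the spirit of Aha & Bankert, not their implementation; here `score` is a stand-in for whatever subset evaluation the system uses, such as cross-validated nearest-neighbour accuracy.

```python
def forward_select(features, score):
    # Greedily add the feature that most improves `score`; stop when no
    # remaining feature yields any improvement in performance.
    chosen, best = [], score([])
    remaining = list(features)
    while remaining:
        candidate = max(remaining, key=lambda f: score(chosen + [f]))
        new = score(chosen + [candidate])
        if new <= best:
            break
        chosen.append(candidate)
        best = new
        remaining.remove(candidate)
    return chosen
```

Each iteration evaluates every remaining feature, which is the combinatorial cost noted above and the reason Aggarwal & Yu's evolutionary search over projections runs faster.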
Proximity-based methods are also computationally susceptible to the number of instances in the data set as they necessitate the calculation of all vector distances. Datta & Kibler (Datta and Kibler, 1995) use a diagnostic prototype selection to reduce the storage requirements to a few seminal vectors. Skalak (Skalak, 1994) employs both feature selection and prototype selection, where a large data set can be stored as a few lower dimensional prototype vectors. Noise and outliers will not be stored as prototypes so the method is robust. Limiting the number of prototypes prevents overfitting, just as pruning a decision tree can prevent overfitting by limiting the number of leaves (prototypes) stored. However, prototyping must be applied carefully and selectively as it will increase the sparsity of the distribution and the density of the nearest neighbours. A majority voting k-NN technique such as (Wettschereck, 1994) will be less affected, but an approach relying on the distance to the m-th neighbour (Byers and Raftery, 1998) or counting the number of neighbours within specific distances (Knorr and Ng, 1998) will be strongly affected. Prototyping is in many ways similar to the k-means and k-medoids approaches described next, but with prototyping a new instance is compared to the prototypes using conventional k-nearest neighbour, whereas k-means and k-medoids prototypes have a kernel with a locally defined radius and the new instance is compared with the kernel boundaries. A prototype approach is also applicable for reducing the data set for neural networks and decision trees. This is particularly germane for node-based neural networks which require multiple passes through the data set to train, so a reduced data set will entail fewer training steps. Dragon Research (Allan et al., 1998) and Nairac (Nairac et al., 1999) (discussed in section 5) use k-means for novelty detection. Dragon perform online event detection to identify news stories concerning new events.
Each of the k clusters provides a local model of the data. The algorithm represents each of the k clusters by a prototype vector with attribute values equivalent to the mean value across all points in that cluster. In figure 4, if k is 3 then the algorithm would effectively circumscribe each class (1, 2 and 3) with a hypersphere, and these 3 hyperspheres represent and classify normality. K-means requires the user to specify the value of k in advance. Determining the optimum number of clusters is a hard problem and necessitates running the k-means algorithm a number of times with different k values and selecting the best results for the particular data set. However, the k value is usually small compared with the number of records in the
data set. Thus, the computational complexity for classification of new instances and the storage overhead are vastly reduced, as new vectors need only be compared to k prototype vectors and only k prototype vectors need be stored, unlike kNN where all vectors need be stored and compared to each other. K-means initially chooses random cluster prototypes according to a user-defined selection process. The input data is applied iteratively and the algorithm identifies the best matching cluster, then updates the cluster centre to reflect the new exemplar and minimise the sum-of-squares clustering function given by equation 3:

\sum_{j=1}^{K} \sum_{n \in S_j} \| x_n - \mu_j \|^2 \qquad (3)

where \mu_j is the mean of the points x_n in cluster S_j. Dragon use an adapted similarity metric that incorporates the word count from the news story, the distance to the clusters and the effect the insertion has on the prototype vector for the cluster. After training, each cluster has a radius, which is the distance between the prototype and the most distant point lying in the cluster. This radius defines the bounds of normality and is local to each cluster, rather than the global distance settings used in many approaches such as the kNN approaches of (Knorr and Ng, 1998), (Ramaswamy et al., 2000) and (Byers and Raftery, 1998). A new exemplar is compared with the k-cluster model. If the point lies outside all clusters then it is an outlier. A very similar partitional algorithm is the k-medoids algorithm, or PAM (Partitioning Around Medoids), which represents each cluster using an actual point and a radius rather than a prototype (average) point and a radius. Bolton & Hand (Bolton and Hand, 2001) use a k-medoids type approach they call Peer Group Analysis for fraud detection. K-medoids is robust to outliers as it does not use optimisation to solve the vector placement problem, but rather uses actual data points to represent cluster centres.
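The k-means novelty detector just described — a prototype per cluster plus a local radius bounding normality — can be sketched as follows. This is a simplified illustration, not the Dragon or Nairac systems; plain Lloyd iterations and the toy helper names are assumptions:

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Plain Lloyd's k-means: returns cluster prototypes and labels."""
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(np.linalg.norm(X[:, None] - centres[None], axis=-1), axis=1)
        new = []
        for j in range(k):
            members = X[labels == j]
            # keep the old centre if a cluster happens to empty out
            new.append(members.mean(axis=0) if len(members) else centres[j])
        centres = np.array(new)
    return centres, labels

def cluster_radii(X, centres, labels):
    """Radius = distance from the prototype to its most distant member."""
    return np.array([np.linalg.norm(X[labels == j] - centres[j], axis=1).max()
                     if np.any(labels == j) else 0.0
                     for j in range(len(centres))])

def is_outlier(x, centres, radii):
    """A new exemplar is an outlier if it lies outside every hypersphere."""
    return bool(np.all(np.linalg.norm(centres - x, axis=1) > radii))
```

Note the radius is local to each cluster, matching the per-cluster bounds of normality described above, rather than a single global distance threshold.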
K-medoids is less susceptible to local minima than standard k-means during training, where k-means often converges to poor-quality clusters. It is also data-order independent, unlike standard k-means where the order of the input data affects the positioning of the cluster centres, and Bradley (Bradley et al., 1999) shows that k-medoids provides better class separation than k-means and is hence better suited to a novelty recognition task due to the improved separation capabilities. However, k-means outperforms k-medoids computationally and can handle larger data sets more efficiently, as k-medoids can require O(n²) running time
per iteration whereas k-means is O(n). Both approaches can generalise from a relatively small data set. Conversely, the classification accuracy of kNN, Least Squares Regression or Grubbs' method is susceptible to the number of exemplars in the data set, like the kernel-based Parzen windows and the node-based supervised neural networks described later, as they all model and analyse the density of the input distribution. The data mining partitional algorithm CLARANS (Ng and Han, 1994) is an optimised derivative of the k-medoids algorithm and can handle outlier detection, which is achieved as a by-product of the clustering process. It applies a random but bounded heuristic search to find an optimal clustering by only searching a random selection of cluster updates. It requires two user-specified parameters: the value of k and the number of cluster updates to randomly select. Rather than searching the entire data set for the optimal medoid, it tests a pre-specified number of potential medoids and selects the first medoid it tests which improves the cluster quality. However, it still has O(n²k) running time so is only really applicable for small to medium size data sets. Another proximity-based variant is the graph connectivity method. Shekhar et al. (Shekhar et al., 2001) introduce an approach to traffic monitoring which examines the neighbourhoods of points from a topologically connected rather than distance-based perspective. Shekhar detects traffic monitoring stations producing sensor readings which are inconsistent with stations in the immediately connected neighbourhood. A station is an outlier if the difference between its sensor value and the average sensor value of its topological neighbours differs by more than a threshold percentage from the mean difference between all nodes and their topological neighbours.
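Shekhar's neighbourhood test can be sketched roughly as below. This is a one-sided simplification of the published criterion (it flags only stations whose deviation exceeds the mean deviation by the threshold percentage), and the graph representation and names are invented for the example:

```python
import numpy as np

def connectivity_outliers(values, neighbours, pct=100.0):
    """Flag node i if |value_i - mean(neighbour values)| exceeds the mean
    such difference over all nodes by more than pct percent.
    `values`: {node: sensor reading}; `neighbours`: {node: connected nodes}.
    One-sided simplification of Shekhar et al.'s test."""
    diffs = {i: abs(values[i] - np.mean([values[j] for j in neighbours[i]]))
             for i in values}
    mean_diff = np.mean(list(diffs.values()))
    return [i for i, d in diffs.items()
            if (d - mean_diff) > (pct / 100.0) * mean_diff]
```

Only the topologically connected neighbours enter each node's difference, so no global k needs to be specified and the neighbourhood size varies with the connectivity, as noted below.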
This is analogous to calculating the average distance between each point i and its k neighbours, avg d_i^k, and then finding any points whose average distance avg d^k differs by more than a specified percentage. The technique only considers topologically connected neighbours, so there is no prerequisite for specifying k and the number can vary locally depending on the number of connections. However, it is best suited to domains where a connected graph of nodes is available, such as analysing traffic flow networks where the individual monitoring stations represent nodes in the connected network.

Parametric Methods

Many of the methods we have just described do not scale well unless modifications and optimisations are made to the standard algorithm.
Parametric methods allow the model to be evaluated very rapidly for new instances and are suitable for large data sets; the model grows only with model complexity, not data size. However, they limit their applicability by enforcing a pre-selected distribution model to fit the data. If the user knows their data fits such a distribution model then these approaches are highly accurate, but many data sets do not fit one particular model. One such approach is Minimum Volume Ellipsoid estimation (MVE) (Rousseeuw and Leroy, 1996), which fits the smallest permissible ellipsoid volume around the majority of the data distribution model (generally covering 50% of the data points). This represents the densely populated normal region shown in figure 2 (with outliers shown) and figure 5 (with outliers removed). A similar approach, Convex Peeling, peels away the records on the boundaries of the data distribution's convex hull (Rousseeuw and Leroy, 1996) and thus peels away the outliers. In contrast, MVE maintains all points and defines a boundary around the majority of points. In convex peeling, each point is assigned a depth. The outliers will have the lowest depth, placing them on the boundary of the convex hull, and are shed from the distribution model. For example, in figure 2, V, W, X, Y and Z would each be assigned the lowest depth and shed during the first iteration. Peeling repeats the convex hull generation and peeling process on the remaining records until it has removed a pre-specified number of records. The technique is a type 1, unsupervised clustering outlier detector. Unfortunately, it is susceptible to peeling away p+1 points from the data distribution on each iteration and eroding the stock of normal points too rapidly (Rousseeuw and Leroy, 1996).
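The peeling loop can be sketched as below, assuming SciPy's ConvexHull for the hull computation; the bookkeeping is a simplification of (Rousseeuw and Leroy, 1996) for illustration only:

```python
import numpy as np
from scipy.spatial import ConvexHull

def convex_peel(points, n_remove):
    """Iteratively shed the points lying on the convex hull (the
    lowest-depth layer) until at least n_remove points are peeled away."""
    pts = np.asarray(points, dtype=float)
    idx = np.arange(len(pts))            # original indices still in play
    peeled = []
    while len(peeled) < n_remove and len(idx) > pts.shape[1] + 1:
        hull = ConvexHull(pts[idx])
        peeled.extend(idx[hull.vertices].tolist())  # shed the whole layer
        idx = np.delete(idx, hull.vertices)
    return peeled, idx
```

Note that each iteration removes every hull vertex, which illustrates the erosion problem mentioned above: normal boundary points are shed along with the outliers.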
Both MVE and Convex Peeling are robust classifiers that fit boundaries around specific percentages of the data irrespective of the sparseness of the outlying regions; hence outlying data points do not skew the boundary. Both, however, rely on a good spread of the data. Figure 2 has few outliers, so an ellipsoid circumscribing 50% of the data would omit many normal points from the boundary of normality. Both MVE and convex peeling are only applicable for lower dimensional data sets (Barnett and Lewis, 1994) (usually three-dimensional or less for convex peeling) as they suffer the Curse of Dimensionality, where the convex hull is stretched as more dimensions are added and the surface becomes too difficult to discern. Torr and Murray (Torr and Murray, 1993) also peel away outlying
points by iteratively pruning and refitting. They measure the effect of deleting points on the placement of the Least Squares standard regression line for a diagnostic outlier detector. The LS line is placed to minimise equation 4:

\sum_{i=1}^{n} (y_i - \hat{y}_i)^2 \qquad (4)

where \hat{y}_i is the estimated value. Torr & Murray repeatedly delete the single point with maximal influence (the point that causes the greatest deviation in the placement of the regression line), thus allowing the fitted model to stay away from the outliers. They refit the regression to the remaining data until there are no more outliers, i.e., the next point with maximal influence lies below a threshold value. Least Squares regression is not robust as the outliers affect the placement of the regression line, so it is best suited to outlier diagnostics where the outliers are removed from the next iteration. Figure 6 shows a Least Squares regression line (dashed line) fitted to a data distribution with the outliers A and B present and then again after points A and B have been removed. Although there are only two outliers, they have a considerable effect on the line placement. Torr and Murray (Torr and Murray, 1993) extend the technique for image segmentation. They use a computationally cheap non-robust least median of squares (LMS) regression if the number of outliers is small, which minimises equation 5:

\mathrm{median}_{i=1,\dots,n} \, (y_i - \hat{y}_i)^2 \qquad (5)

or a computationally expensive robust random sampling algorithm if the number of outliers is high. LMS is able to accommodate more outliers than LS as it uses the median values. However, random sampling can accommodate larger numbers of outliers, which eventually distort LMS. LMS has also been improved to produce the Least Trimmed Squares approach (Rousseeuw and Leroy, 1996), which has faster convergence and minimises equation 6:

\sum_{i=1}^{h} \left( (y_i - \hat{y}_i)^2 \right)_{i:n} \qquad (6)

where ((y_i - \hat{y}_i)^2)_{1:n} \le \dots \le ((y_i - \hat{y}_i)^2)_{n:n} are the ordered squared residuals.
The summation function accommodates outliers in the distribution by fitting the regression to only the majority of the data, rather than to all of the data as in LS. This region thus depicts normality, and LTS highlights outliers as the points with large deviations from the majority.
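An LTS line fit can be sketched with "concentration" steps — repeatedly refitting ordinary least squares to the h points with the smallest squared residuals. This is an illustrative simplification in the spirit of later FAST-LTS refinements, not the algorithm of (Rousseeuw and Leroy, 1996), and the names are invented:

```python
import numpy as np

def lts_line(x, y, h, iters=20, seed=0):
    """Least Trimmed Squares for a line y = a*x + b: alternately fit
    ordinary least squares and keep the h points with the smallest
    squared residuals, until the retained subset stops changing."""
    rng = np.random.default_rng(seed)
    subset = rng.choice(len(x), h, replace=False)
    for _ in range(iters):
        a, b = np.polyfit(x[subset], y[subset], 1)   # LS fit on the subset
        resid2 = (y - (a * x + b)) ** 2
        new_subset = np.argsort(resid2)[:h]          # h best-fitting points
        if set(new_subset) == set(subset):
            break
        subset = new_subset
    return a, b
```

Because only the h smallest squared residuals enter the fit, points with large deviations from the majority never pull the line towards themselves, which is the robustness property described above.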
MVE and Convex Peeling aim to compact the convex hull and circumscribe the data with a decision boundary, but are only applicable for low dimensional data. Principal Component Analysis (PCA) (Faloutsos et al., 1997; Parra et al., 1996), in contrast, is suitable for higher dimensional data. It identifies correlated attributes in the data distribution and projects the data onto this lower dimensional subspace. PCA is an unsupervised classifier but is linear and cannot match the complex non-linear class boundaries identified by the Support Vector Machine (see section 2.4) or the neural methods described in section 3. PCA assumes that the subspaces determined by the principal components are compact, and this limits its applicability, particularly for sparse distributions. However, it is an ideal pre-processor to select a subset of attributes for methods which suffer the Curse of Dimensionality, such as the Multi-Layer Perceptron in section 3, the proximity-based techniques, or the symplectic transformations described next. PCA identifies the principal component of greatest variance, as each component has an associated eigenvalue whose magnitude corresponds to the variance of the points from the component vector. PCA retains the k principal components with greatest variance and discards all others to preserve maximum information and retain minimal redundancy. Faloutsos et al. (Faloutsos et al., 1997) recommend retaining sufficient components so that the sum of the eigenvalues of all retained components is at least 85% of the sum of all eigenvalues. They use the principal components to predict attribute values in records by finding the intersection between the given values for the record (i.e., excluding the omitted attribute) and the principal components. If the actual value for an attribute and the predicted value differ then the record is flagged as an outlier. Parra et al.
(Parra et al., 1996) have developed a type 3 motor fault detection system which applies PCA to the data and then applies a symplectic transformation to the first few principal components. The symplectic transformation may be used with non-linear data distributions. It maps the input data onto a Gaussian distribution, conserving the volume and separating the training data (normal data) from the outliers. This double transformation preserves information while removing redundancy, generates a density estimation of the data set, and thus allows a circular contour of the density to act as the decision boundary. Baker et al. (Baker et al., 1999) employ one of the hierarchical approaches detailed in this survey. The other hierarchical approaches are the decision trees and cluster trees detailed in the machine learning
section. Baker uses a parametric model-based approach for novelty detection in a news story monitor. A hierarchy allows the domain knowledge to be represented at various levels of abstraction, so points can be compared for novelty at a fine-grained or less specific level. The hierarchical statistical algorithm induces a topic hierarchy from the word distributions of news stories using Expectation Maximisation (EM) to estimate the parameter settings, followed by Deterministic Annealing (DA). DA constructs the hierarchy via maximum likelihood and information theory, using a divisive clustering approach to split nodes into sub-nodes, starting from a single cluster, and build the hierarchy top-down. DA stochastically determines the node to split. The system detects novelty when new nodes are added to the hierarchy that represent documents which do not belong to any of the existing event clusters, so new events are effectively described by their position in the hierarchy. When EM is used in conjunction with DA it avoids some of the initialisation dependence of EM, but at the cost of computational efficiency. DA can avoid the local minima to which EM is susceptible, but it may produce suboptimal results.

Non-Parametric Methods

Many statistical methods described in this section have data-specific parameters, ranging from the k values of kNN and k-means, to distance thresholds for the proximity-based approaches, to complex model parameters. Other techniques, such as those based around convex hulls and regression and the PCA approaches, assume the data follows a specific model. These all require a priori data knowledge. Such information is often not available or is expensive to compute. Many data sets simply do not follow one specific distribution model and are often randomly distributed.
Hence, these approaches may be applicable for an outlier detector where all data is accumulated beforehand and may be pre-processed to determine parameter settings, or for data where the distribution model is known. Non-parametric approaches, in contrast, are more flexible and autonomous. Dasgupta & Forrest (Dasgupta and Forrest, 1996) introduce a non-parametric approach for novelty detection in machinery operation. The authors recognise novelty, which contrasts with the other type 3 approaches we describe, such as k-means (Nairac et al., 1999) or the ART neural approach (Caudell and Newman, 1993) in section 3, which recognise or classify the normal data space. The machinery operation produces a time-series of real-valued machinery measurements which Dasgupta & Forrest map onto binary vectors using quantisation (binning). The
binary vector (string) effectively represents an encoding of the last n real-values from the time series. As the machinery is constantly monitored, new strings (binary vector windows) are generated to represent the current operating characteristics. Dasgupta & Forrest use a set of detectors where all detectors fail to match any strings defining normality (two strings match if they are identical in a fixed number r of contiguous positions). If any detector matches a new string (a new time window of operating characteristics) then a novelty has been detected. The value of r affects the performance of the algorithm and must be selected carefully by empirical testing, which inevitably slows processing. Each individual recogniser represents a subsection of the input distribution against which the input is compared. Dasgupta & Forrest's approach would effectively model figure 5 by failing to match any point within the normal boundary but matching any point from outside.

Semi-Parametric Methods

Semi-parametric methods apply local kernel models rather than a single global distribution model. They aim to combine the speed and complexity-growth advantages of parametric methods with the model flexibility of non-parametric methods. Kernel-based methods estimate the density distribution of the input space and identify outliers as lying in regions of low density. Tarassenko & Roberts (Roberts and Tarassenko, 1995) and Bishop (Bishop, 1994) use Gaussian Mixture Models to learn a model of normal data by incrementally learning new exemplars. The GMM is represented by equation 7:

p(t \mid x) = \sum_{j=1}^{M} \alpha_j(x) \, \phi_j(t \mid x) \qquad (7)

where M is the number of kernels (\phi), \alpha_j(x) are the mixing coefficients, x is the input vector and t the target vector. Tarassenko & Roberts classify EEG signatures to detect abnormal signals which represent medical conditions such as epilepsy. In both approaches, each mixture represents a kernel whose width is autonomously determined by the spread of the data.
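A kernel-based detector of this general shape can be sketched with a one-dimensional Gaussian mixture fitted by EM, flagging points of low estimated density. This is a generic illustration, not the Tarassenko & Roberts or Bishop systems; the fixed component count and the threshold choice are assumptions:

```python
import numpy as np

def fit_gmm_1d(x, M=2, iters=200, seed=0):
    """EM for a 1-D Gaussian mixture: returns weights, means, widths."""
    rng = np.random.default_rng(seed)
    w = np.full(M, 1.0 / M)
    mu = rng.choice(x, M, replace=False)
    sd = np.full(M, x.std())
    for _ in range(iters):
        # E-step: responsibility of each kernel for each point
        dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixing coefficients, means and kernel widths
        nk = r.sum(axis=0)
        w, mu = nk / len(x), (r * x[:, None]).sum(axis=0) / nk
        sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-9
    return w, mu, sd

def log_density(xnew, w, mu, sd):
    """Log of the mixture density; outliers lie where this is low."""
    d = w * np.exp(-0.5 * ((np.asarray(xnew)[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
    return np.log(d.sum(axis=1))
```

A simple novelty threshold is the minimum log-density of the training data itself; points scoring below it lie in regions of lower density than anything seen during training.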
In Bishop's approach the number of mixture models is determined using cross-validation. Tarassenko & Roberts' technique adds new mixture models incrementally: if the distance to the mixture that best represents the new exemplar is above a threshold, then the algorithm adds a new mixture. This distance threshold is determined autonomously during system training. Once training is complete, the final distance threshold represents the novelty threshold against which new items are compared. A Gaussian probability density function is defined
White Rose Consortium eprints Repository http://eprints.whiterose.ac.uk/ This is an author produced version of a paper published in Artificial Intelligence
More informationDistance based clustering
// Distance based clustering Chapter ² ² Clustering Clustering is the art of finding groups in data (Kaufman and Rousseeuw, 99). What is a cluster? Group of objects separated from other clusters Means
More informationData Mining Part 5. Prediction
Data Mining Part 5. Prediction 5.1 Spring 2010 Instructor: Dr. Masoud Yaghini Outline Classification vs. Numeric Prediction Prediction Process Data Preparation Comparing Prediction Methods References Classification
More informationUnsupervised Outlier Detection in Time Series Data
Unsupervised Outlier Detection in Time Series Data Zakia Ferdousi and Akira Maeda Graduate School of Science and Engineering, Ritsumeikan University Department of Media Technology, College of Information
More informationDoptimal plans in observational studies
Doptimal plans in observational studies Constanze Pumplün Stefan Rüping Katharina Morik Claus Weihs October 11, 2005 Abstract This paper investigates the use of Design of Experiments in observational
More informationClustering and Data Mining in R
Clustering and Data Mining in R Workshop Supplement Thomas Girke December 10, 2011 Introduction Data Preprocessing Data Transformations Distance Methods Cluster Linkage Hierarchical Clustering Approaches
More informationCheng Soon Ong & Christfried Webers. Canberra February June 2016
c Cheng Soon Ong & Christfried Webers Research Group and College of Engineering and Computer Science Canberra February June (Many figures from C. M. Bishop, "Pattern Recognition and ") 1of 31 c Part I
More informationClassifying Large Data Sets Using SVMs with Hierarchical Clusters. Presented by :Limou Wang
Classifying Large Data Sets Using SVMs with Hierarchical Clusters Presented by :Limou Wang Overview SVM Overview Motivation Hierarchical microclustering algorithm ClusteringBased SVM (CBSVM) Experimental
More informationINTRODUCTION TO NEURAL NETWORKS
INTRODUCTION TO NEURAL NETWORKS Pictures are taken from http://www.cs.cmu.edu/~tom/mlbookchapterslides.html http://research.microsoft.com/~cmbishop/prml/index.htm By Nobel Khandaker Neural Networks An
More informationStep 5: Conduct Analysis. The CCA Algorithm
Model Parameterization: Step 5: Conduct Analysis P Dropped species with fewer than 5 occurrences P Logtransformed species abundances P Rownormalized species log abundances (chord distance) P Selected
More informationCluster Analysis: Basic Concepts and Algorithms
8 Cluster Analysis: Basic Concepts and Algorithms Cluster analysis divides data into groups (clusters) that are meaningful, useful, or both. If meaningful groups are the goal, then the clusters should
More informationMachine Learning using MapReduce
Machine Learning using MapReduce What is Machine Learning Machine learning is a subfield of artificial intelligence concerned with techniques that allow computers to improve their outputs based on previous
More informationInternational Journal of Computer Science Trends and Technology (IJCST) Volume 2 Issue 3, MayJun 2014
RESEARCH ARTICLE OPEN ACCESS A Survey of Data Mining: Concepts with Applications and its Future Scope Dr. Zubair Khan 1, Ashish Kumar 2, Sunny Kumar 3 M.Tech Research Scholar 2. Department of Computer
More informationApplication of Event Based Decision Tree and Ensemble of Data Driven Methods for Maintenance Action Recommendation
Application of Event Based Decision Tree and Ensemble of Data Driven Methods for Maintenance Action Recommendation James K. Kimotho, Christoph SondermannWoelke, Tobias Meyer, and Walter Sextro Department
More informationData mining and statistical models in marketing campaigns of BT Retail
Data mining and statistical models in marketing campaigns of BT Retail Francesco Vivarelli and Martyn Johnson Database Exploitation, Segmentation and Targeting group BT Retail Pp501 Holborn centre 120
More informationData Mining  Evaluation of Classifiers
Data Mining  Evaluation of Classifiers Lecturer: JERZY STEFANOWSKI Institute of Computing Sciences Poznan University of Technology Poznan, Poland Lecture 4 SE Master Course 2008/2009 revised for 2010
More informationData Mining: Overview. What is Data Mining?
Data Mining: Overview What is Data Mining? Recently * coined term for confluence of ideas from statistics and computer science (machine learning and database methods) applied to large databases in science,
More informationA Survey on Preprocessing and Postprocessing Techniques in Data Mining
, pp. 99128 http://dx.doi.org/10.14257/ijdta.2014.7.4.09 A Survey on Preprocessing and Postprocessing Techniques in Data Mining Divya Tomar and Sonali Agarwal Indian Institute of Information Technology,
More informationUnsupervised Learning and Data Mining. Unsupervised Learning and Data Mining. Clustering. Supervised Learning. Supervised Learning
Unsupervised Learning and Data Mining Unsupervised Learning and Data Mining Clustering Decision trees Artificial neural nets Knearest neighbor Support vectors Linear regression Logistic regression...
More informationPredictive Dynamix Inc
Predictive Modeling Technology Predictive modeling is concerned with analyzing patterns and trends in historical and operational data in order to transform data into actionable decisions. This is accomplished
More informationFlat Clustering KMeans Algorithm
Flat Clustering KMeans Algorithm 1. Purpose. Clustering algorithms group a set of documents into subsets or clusters. The cluster algorithms goal is to create clusters that are coherent internally, but
More informationCredit Card Fraud Detection Using Self Organised Map
International Journal of Information & Computation Technology. ISSN 09742239 Volume 4, Number 13 (2014), pp. 13431348 International Research Publications House http://www. irphouse.com Credit Card Fraud
More informationAn Overview of Knowledge Discovery Database and Data mining Techniques
An Overview of Knowledge Discovery Database and Data mining Techniques Priyadharsini.C 1, Dr. Antony Selvadoss Thanamani 2 M.Phil, Department of Computer Science, NGM College, Pollachi, Coimbatore, Tamilnadu,
More informationData Mining Cluster Analysis: Advanced Concepts and Algorithms. Lecture Notes for Chapter 9. Introduction to Data Mining
Data Mining Cluster Analysis: Advanced Concepts and Algorithms Lecture Notes for Chapter 9 Introduction to Data Mining by Tan, Steinbach, Kumar Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004
More informationData Mining for Knowledge Management. Classification
1 Data Mining for Knowledge Management Classification Themis Palpanas University of Trento http://disi.unitn.eu/~themis Data Mining for Knowledge Management 1 Thanks for slides to: Jiawei Han Eamonn Keogh
More informationArtificial Neural Networks and Support Vector Machines. CS 486/686: Introduction to Artificial Intelligence
Artificial Neural Networks and Support Vector Machines CS 486/686: Introduction to Artificial Intelligence 1 Outline What is a Neural Network?  Perceptron learners  Multilayer networks What is a Support
More informationAn Introduction to Data Mining. Big Data World. Related Fields and Disciplines. What is Data Mining? 2/12/2015
An Introduction to Data Mining for Wind Power Management Spring 2015 Big Data World Every minute: Google receives over 4 million search queries Facebook users share almost 2.5 million pieces of content
More informationUsing Data Mining for Mobile Communication Clustering and Characterization
Using Data Mining for Mobile Communication Clustering and Characterization A. Bascacov *, C. Cernazanu ** and M. Marcu ** * Lasting Software, Timisoara, Romania ** Politehnica University of Timisoara/Computer
More informationIntroduction to Support Vector Machines. Colin Campbell, Bristol University
Introduction to Support Vector Machines Colin Campbell, Bristol University 1 Outline of talk. Part 1. An Introduction to SVMs 1.1. SVMs for binary classification. 1.2. Soft margins and multiclass classification.
More informationData Mining and Knowledge Discovery in Databases (KDD) State of the Art. Prof. Dr. T. Nouri Computer Science Department FHNW Switzerland
Data Mining and Knowledge Discovery in Databases (KDD) State of the Art Prof. Dr. T. Nouri Computer Science Department FHNW Switzerland 1 Conference overview 1. Overview of KDD and data mining 2. Data
More informationSupervised Learning (Big Data Analytics)
Supervised Learning (Big Data Analytics) Vibhav Gogate Department of Computer Science The University of Texas at Dallas Practical advice Goal of Big Data Analytics Uncover patterns in Data. Can be used
More informationIEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 20, NO. 7, JULY 2009 1181
IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 20, NO. 7, JULY 2009 1181 The Global Kernel kmeans Algorithm for Clustering in Feature Space Grigorios F. Tzortzis and Aristidis C. Likas, Senior Member, IEEE
More informationA semisupervised Spam mail detector
A semisupervised Spam mail detector Bernhard Pfahringer Department of Computer Science, University of Waikato, Hamilton, New Zealand Abstract. This document describes a novel semisupervised approach
More informationARTIFICIAL INTELLIGENCE (CSCU9YE) LECTURE 6: MACHINE LEARNING 2: UNSUPERVISED LEARNING (CLUSTERING)
ARTIFICIAL INTELLIGENCE (CSCU9YE) LECTURE 6: MACHINE LEARNING 2: UNSUPERVISED LEARNING (CLUSTERING) Gabriela Ochoa http://www.cs.stir.ac.uk/~goc/ OUTLINE Preliminaries Classification and Clustering Applications
More informationSTATISTICA. Clustering Techniques. Case Study: Defining Clusters of Shopping Center Patrons. and
Clustering Techniques and STATISTICA Case Study: Defining Clusters of Shopping Center Patrons STATISTICA Solutions for Business Intelligence, Data Mining, Quality Control, and Webbased Analytics Table
More informationOutlier Ensembles. Charu C. Aggarwal IBM T J Watson Research Center Yorktown, NY 10598. Keynote, Outlier Detection and Description Workshop, 2013
Charu C. Aggarwal IBM T J Watson Research Center Yorktown, NY 10598 Outlier Ensembles Keynote, Outlier Detection and Description Workshop, 2013 Based on the ACM SIGKDD Explorations Position Paper: Outlier
More informationData Mining Analytics for Business Intelligence and Decision Support
Data Mining Analytics for Business Intelligence and Decision Support Chid Apte, T.J. Watson Research Center, IBM Research Division Knowledge Discovery and Data Mining (KDD) techniques are used for analyzing
More informationStructural Health Monitoring Tools (SHMTools)
Structural Health Monitoring Tools (SHMTools) Getting Started LANL/UCSD Engineering Institute LACC14046 c Copyright 2014, Los Alamos National Security, LLC All rights reserved. May 30, 2014 Contents
More informationModelling, Extraction and Description of Intrinsic Cues of High Resolution Satellite Images: Independent Component Analysis based approaches
Modelling, Extraction and Description of Intrinsic Cues of High Resolution Satellite Images: Independent Component Analysis based approaches PhD Thesis by Payam Birjandi Director: Prof. Mihai Datcu Problematic
More informationSureSense Software Suite Overview
SureSense Software Overview Eliminate Failures, Increase Reliability and Safety, Reduce Costs and Predict Remaining Useful Life for Critical Assets Using SureSense and Health Monitoring Software What SureSense
More informationThe Scientific Data Mining Process
Chapter 4 The Scientific Data Mining Process When I use a word, Humpty Dumpty said, in rather a scornful tone, it means just what I choose it to mean neither more nor less. Lewis Carroll [87, p. 214] In
More informationHT2015: SC4 Statistical Data Mining and Machine Learning
HT2015: SC4 Statistical Data Mining and Machine Learning Dino Sejdinovic Department of Statistics Oxford http://www.stats.ox.ac.uk/~sejdinov/sdmml.html Bayesian Nonparametrics Parametric vs Nonparametric
More informationNeural Networks. CAP5610 Machine Learning Instructor: GuoJun Qi
Neural Networks CAP5610 Machine Learning Instructor: GuoJun Qi Recap: linear classifier Logistic regression Maximizing the posterior distribution of class Y conditional on the input vector X Support vector
More informationData Mining Cluster Analysis: Advanced Concepts and Algorithms. Lecture Notes for Chapter 9. Introduction to Data Mining
Data Mining Cluster Analysis: Advanced Concepts and Algorithms Lecture Notes for Chapter 9 Introduction to Data Mining by Tan, Steinbach, Kumar Tan,Steinbach, Kumar Introduction to Data Mining 4/18/2004
More informationData Mining: An Introduction
Data Mining: An Introduction Michael J. A. Berry and Gordon A. Linoff. Data Mining Techniques for Marketing, Sales and Customer Support, 2nd Edition, 2004 Data mining What promotions should be targeted
More informationDistances, Clustering, and Classification. Heatmaps
Distances, Clustering, and Classification Heatmaps 1 Distance Clustering organizes things that are close into groups What does it mean for two genes to be close? What does it mean for two samples to be
More informationClassifiers & Classification
Classifiers & Classification Forsyth & Ponce Computer Vision A Modern Approach chapter 22 Pattern Classification Duda, Hart and Stork School of Computer Science & Statistics Trinity College Dublin Dublin
More informationFig. 1 A typical Knowledge Discovery process [2]
Volume 4, Issue 7, July 2014 ISSN: 2277 128X International Journal of Advanced Research in Computer Science and Software Engineering Research Paper Available online at: www.ijarcsse.com A Review on Clustering
More information