Outlier Detection Techniques
1 LUDWIG-MAXIMILIANS-UNIVERSITÄT MÜNCHEN, INSTITUTE FOR INFORMATICS, DATABASE SYSTEMS GROUP. The 2010 SIAM International Conference on Data Mining. Outlier Detection Techniques. Hans-Peter Kriegel, Peer Kröger, Arthur Zimek, Ludwig-Maximilians-Universität München, Munich, Germany. Tutorial Notes: SIAM SDM 2010, Columbus, Ohio
2 General Issues 1. Please feel free to ask questions at any time during the presentation. 2. Aim of the tutorial: to get the big picture, NOT in terms of a long list of methods and algorithms, BUT in terms of the basic algorithmic approaches. Sample algorithms for these basic approaches will be sketched. The selection of the presented algorithms is somewhat arbitrary, so please don't mind if your favorite algorithm is missing; anyway, you should be able to classify any other algorithm not covered here by means of which of the basic approaches it implements. 3. The revised version of the tutorial notes will soon be available on our websites.
3 Introduction What is an outlier? Definition of Hawkins [Hawkins 1980]: "An outlier is an observation which deviates so much from the other observations as to arouse suspicions that it was generated by a different mechanism." Statistics-based intuition: Normal data objects follow a generating mechanism, e.g. some given statistical process; abnormal objects deviate from this generating mechanism.
4 Introduction Example: Hadlum vs. Hadlum (1949) [Barnett 1978]. The birth of a child to Mrs. Hadlum happened 349 days after Mr. Hadlum left for military service. The average human gestation period is 280 days (40 weeks). Statistically, 349 days is an outlier.
5 Introduction Example: Hadlum vs. Hadlum (1949) [Barnett 1978]. Blue: statistical basis (13,634 observations of gestation periods). Green: assumed underlying Gaussian process; very low probability that the birth of Mrs. Hadlum's child was generated by this process. Red: assumption of Mr. Hadlum (another Gaussian process is responsible for the observed birth); under this assumption the observed gestation period has average duration and highest possible probability.
6 Introduction Sample applications of outlier detection: Fraud detection: The purchasing behavior of a credit card owner usually changes when the card is stolen; abnormal buying patterns can characterize credit card abuse. Medicine: Unusual symptoms or test results may indicate potential health problems of a patient; whether a particular test result is abnormal may depend on other characteristics of the patient (e.g. gender, age, etc.). Public health: The occurrence of a particular disease, e.g. tetanus, scattered across various hospitals of a city may indicate problems with the corresponding vaccination program in that city; whether an occurrence is abnormal depends on different aspects like frequency, spatial correlation, etc.
7 Introduction Sample applications of outlier detection (cont.): Sports statistics: In many sports, various parameters are recorded for players in order to evaluate the players' performances. Outstanding (in a positive as well as a negative sense) players may be identified as having abnormal parameter values; sometimes, players show abnormal values only on a subset or a special combination of the recorded parameters. Detecting measurement errors: Data derived from sensors (e.g. in a given scientific experiment) may contain measurement errors; abnormal values could provide an indication of a measurement error. Removing such errors can be important in other data mining and data analysis tasks. One person's noise could be another person's signal.
8 Introduction Discussion of the basic intuition based on Hawkins: Data is usually multivariate, i.e., multi-dimensional => the basic model is univariate, i.e., 1-dimensional. There is usually more than one generating mechanism/statistical process underlying the data => the basic model assumes only one normal generating mechanism. Anomalies may represent a different class (generating mechanism) of objects, so there may be a large class of similar objects that are the outliers => the basic model assumes that outliers are rare observations. Consequence: a lot of models and approaches have evolved in the past years in order to overcome these assumptions, and it is not easy to keep track of this evolution. New models often involve new, though usually hidden, assumptions and restrictions.
9 Introduction General application scenarios: Supervised scenario: In some applications, training data with normal and abnormal data objects are provided; there may be multiple normal and/or abnormal classes; often, the classification problem is highly unbalanced. Semi-supervised scenario: In some applications, only training data for the normal class(es) (or only for the abnormal class(es)) are provided. Unsupervised scenario: In most applications there are no training data available. In this tutorial, we focus on the unsupervised scenario.
10 Introduction Are outliers just a side product of some clustering algorithms? Many clustering algorithms do not assign all points to clusters but account for noise objects, so one could look for outliers by applying one of those algorithms and retrieving the noise set. Problem: Clustering algorithms are optimized to find clusters rather than outliers. The accuracy of outlier detection depends on how well the clustering algorithm captures the structure of the clusters. A set of many abnormal data objects that are similar to each other would be recognized as a cluster rather than as noise/outliers.
11 Introduction We will focus on three different classification approaches: Global versus local outlier detection: considers the set of reference objects relative to which each point's outlierness is judged. Labeling versus scoring outliers: considers the output of an algorithm. Modeling properties: considers the concepts based on which outlierness is modeled. NOTE: we focus on models and methods for Euclidean data, but many of those can also be used for other data types (because they only require a distance measure).
12 Introduction Global versus local approaches: Considers the resolution of the reference set w.r.t. which the outlierness of a particular data object is determined. Global approaches: The reference set contains all other data objects. Basic assumption: there is only one normal mechanism. Basic problem: other outliers are also in the reference set and may falsify the results. Local approaches: The reference set contains a (small) subset of the data objects. No assumption on the number of normal mechanisms. Basic problem: how to choose a proper reference set. NOTE: Some approaches are somewhat in between: the resolution of the reference set is varied, e.g. from only a single object (local) to the entire database (global), automatically or by a user-defined input parameter.
13 Introduction Labeling versus scoring: Considers the output of an outlier detection algorithm. Labeling approaches: Binary output; data objects are labeled either as normal or as outlier. Scoring approaches: Continuous output; for each object an outlier score is computed (e.g. the probability of being an outlier), and data objects can be sorted according to their scores. Notes: Many scoring approaches focus on determining the top-n outliers (the parameter n is usually given by the user). Scoring approaches can usually also produce binary output if necessary (e.g. by defining a suitable threshold on the scoring values).
14 Introduction Approaches classified by the properties of the underlying modeling approach: Model-based approaches: Rationale: Apply a model to represent the normal data points; outliers are points that do not fit that model. Sample approaches: probabilistic tests based on statistical models; depth-based approaches; deviation-based approaches; some subspace outlier detection approaches.
15 Introduction Proximity-based approaches: Rationale: Examine the spatial proximity of each object in the data space; if the proximity of an object considerably deviates from the proximity of other objects, it is considered an outlier. Sample approaches: distance-based approaches; density-based approaches; some subspace outlier detection approaches. Angle-based approaches: Rationale: Examine the spectrum of pairwise angles between a given point and all other points; outliers are points that have a spectrum featuring high fluctuation.
16 Outline 1. Introduction 2. Statistical Tests 3. Depth-based Approaches 4. Deviation-based Approaches 5. Distance-based Approaches 6. Density-based Approaches 7. High-dimensional Approaches 8. Summary (2-4: statistical model; 5-6: model based on spatial proximity; 7: adaptation of different models to a special problem)
17 Statistical Tests General idea: Given a certain kind of statistical distribution (e.g., Gaussian), compute the parameters assuming all data points have been generated by such a statistical distribution (e.g., mean and standard deviation). Outliers are points that have a low probability of being generated by the overall distribution (e.g., deviate more than 3 times the standard deviation from the mean). Basic assumption: Normal data objects follow a (known) distribution and occur in a high-probability region of this model; outliers deviate strongly from this distribution.
18 Statistical Tests A huge number of different tests are available, differing in: the type of data distribution (e.g. Gaussian); the number of variables, i.e., dimensions of the data objects (univariate/multivariate); the number of distributions (mixture models); parametric versus non-parametric (e.g. histogram-based). Example on the following slides: Gaussian distribution, multivariate, 1 model, parametric.
19 Statistical Tests Probability density function of a multivariate normal distribution: N(x) = (1 / sqrt((2π)^d · det(Σ))) · exp(−(1/2) · (x − μ)^T Σ^{−1} (x − μ)). μ is the mean value of all points (usually the data is normalized such that μ = 0); Σ is the covariance matrix. MDist(x, μ) = sqrt((x − μ)^T Σ^{−1} (x − μ)) is the Mahalanobis distance of point x to μ. The squared Mahalanobis distance follows a χ²-distribution with d degrees of freedom (d = data dimensionality). All points x with MDist(x, μ)² > χ²(0.975) [≈ 3·σ in the univariate case] are reported as outliers.
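The Gaussian test above can be sketched in a few lines of NumPy. This is a minimal illustration, not code from the tutorial: the synthetic data, the planted outlier, and the hardcoded quantile 7.378 (the χ²(0.975) value for d = 2 degrees of freedom) are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))        # inliers drawn from a 2-D Gaussian
X = np.vstack([X, [[8.0, 8.0]]])     # one planted outlier

mu = X.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
diff = X - mu
# squared Mahalanobis distance of every point to the mean
md2 = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)

# chi-squared 0.975 quantile for d = 2 degrees of freedom
threshold = 7.378
outliers = np.where(md2 > threshold)[0]
```

Note that mean and covariance are estimated from the complete data set, including the outlier itself: exactly the robustness problem discussed two slides further on.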
20 Statistical Tests Visualization (2D) [Tan et al. 2006]
21 Statistical Tests Problems: Curse of dimensionality: the larger the degree of freedom, the more similar the MDist values for all points. (Figure: x-axis: observed MDist values; y-axis: frequency of observation.)
22 Statistical Tests Problems (cont.): Robustness: Mean and standard deviation are very sensitive to outliers. These values are computed for the complete data set (including potential outliers), so the MDist is used to determine outliers although the MDist values themselves are influenced by these outliers. => The Minimum Covariance Determinant estimator [Rousseeuw and Leroy 1987] minimizes the influence of outliers on the Mahalanobis distance. Discussion: The data distribution is fixed. Low flexibility (no mixture model). Global method. Outputs a label but can also output a score.
24 Depth-based Approaches General idea: Search for outliers at the border of the data space, but independent of statistical distributions. Organize the data objects in convex hull layers; outliers are objects on the outer layers. Basic assumption: Outliers are located at the border of the data space; normal objects are in the center of the data space. Picture taken from [Johnson et al. 1998]
25 Depth-based Approaches Model [Tukey 1977]: Points on the convex hull of the full data space have depth = 1. Points on the convex hull of the data set after removing all points with depth = 1 have depth = 2, and so on. Points having a depth ≤ k are reported as outliers. Picture taken from [Preparata and Shamos 1988]
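Convex hull peeling in 2D can be sketched as follows. This is an illustrative toy implementation, assuming Andrew's monotone chain for the hull computation and an arbitrary choice of k = 1; none of it is from the tutorial.

```python
import numpy as np

def hull_vertices(points):
    """Indices of the 2-D convex hull vertices (Andrew's monotone chain)."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    idx = sorted(range(len(points)), key=lambda i: (points[i][0], points[i][1]))
    def half(order):
        h = []
        for i in order:
            while len(h) >= 2 and cross(points[h[-2]], points[h[-1]], points[i]) <= 0:
                h.pop()
            h.append(i)
        return h[:-1]
    return half(idx) + half(idx[::-1])

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 2))

depth = np.zeros(len(X), dtype=int)
remaining = list(range(len(X)))
d = 0
while len(remaining) >= 3:            # peel one convex hull layer per pass
    d += 1
    layer = [remaining[i] for i in hull_vertices(X[remaining])]
    for i in layer:
        depth[i] = d
    remaining = [i for i in remaining if i not in layer]
for i in remaining:                   # fewer than 3 points left: next depth
    depth[i] = d + 1

k = 1
outliers = np.where(depth <= k)[0]    # depth <= k  =>  reported as outlier
```

Repeated hull computation is what limits this family to 2D/3D in practice, as the discussion on the next slide notes.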
26 Depth-based Approaches Sample algorithms: ISODEPTH [Ruts and Rousseeuw 1996], FDC [Johnson et al. 1998]. Discussion: Similar idea to classical statistical approaches (k = 1 distributions) but independent from the chosen kind of distribution. Convex hull computation is usually only efficient in 2D / 3D spaces. Originally outputs a label but can easily be extended for scoring (take the depth as scoring value). Uses a global reference set for outlier detection.
28 Deviation-based Approaches General idea: Given a set of data points (local group or global set), outliers are points that do not fit the general characteristics of that set, i.e., the variance of the set is minimized when removing the outliers. Basic assumption: Outliers are the outermost points of the data set.
29 Deviation-based Approaches Model [Arning et al. 1996]: Given a smoothing factor SF(I) that computes for each I ⊆ DB how much the variance of DB is decreased when I is removed from DB. With equal decrease in variance, a smaller exception set is better. The outliers are the elements of the exception set E ⊆ DB for which the following holds: SF(E) ≥ SF(I) for all I ⊆ DB. Discussion: Similar idea to classical statistical approaches (k = 1 distributions) but independent from the chosen kind of distribution. The naïve solution is in O(2^n) for n data objects, so heuristics like random sampling or best-first search are applied. Applicable to any data type (depends on the definition of SF). Originally designed as a global method. Outputs a labeling.
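The exponential search can be illustrated with a brute-force sketch over small candidate sets. Note the simplification: the smoothing factor here is just the plain decrease in total variance, whereas the original SF additionally weights by the cardinality of the removed set; the data and the size cap are assumptions for the example.

```python
import numpy as np
from itertools import combinations

def exception_set(X, max_size=2):
    """Brute-force deviation-based outlier sketch: among all candidate
    subsets I up to max_size, return the one whose removal decreases the
    total variance of the data set the most (simplified smoothing factor)."""
    base = X.var(axis=0).sum()
    best, best_sf = (), 0.0
    for size in range(1, max_size + 1):
        for subset in combinations(range(len(X)), size):
            rest = np.delete(X, subset, axis=0)
            sf = base - rest.var(axis=0).sum()   # decrease in variance
            if sf > best_sf:
                best, best_sf = subset, sf
    return set(best)

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(size=(30, 2)), [[10.0, 10.0]]])
outliers = exception_set(X, max_size=2)
```

Even with the size cap, the loop visits every subset, which is why the original paper resorts to random sampling and best-first search.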
31 Distance-based Approaches General Idea: Judge a point based on the distance(s) to its neighbors. Several variants have been proposed. Basic Assumption: Normal data objects have a dense neighborhood; outliers are far apart from their neighbors, i.e., have a less dense neighborhood.
32 Distance-based Approaches DB(ε,π)-Outliers Basic model [Knorr and Ng 1997]: Given a radius ε and a percentage π, a point p is considered an outlier if at most π percent of all other points have a distance to p less than ε: OutlierSet(ε, π) = { p | Card({q ∈ DB | dist(p, q) < ε}) / Card(DB) ≤ π }. (Figure: range queries with radius ε around two sample points p1 and p2.)
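The definition translates directly into a naive O(n²) sketch; the synthetic data and the parameter values are illustrative assumptions, not from the tutorial.

```python
import numpy as np

def db_outliers(X, eps, pi):
    """DB(eps, pi)-outlier sketch: p is an outlier if at most a fraction pi
    of the database lies within distance eps of p."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)          # do not count the point itself
    frac = (D < eps).sum(axis=1) / len(X)
    return np.where(frac <= pi)[0]

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(size=(100, 2)), [[10.0, 10.0]]])
outliers = db_outliers(X, eps=1.0, pi=0.02)
```

The index-, nested-loop- and grid-based algorithms on the next slide all compute exactly this set while avoiding the full distance matrix.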
33 Distance-based Approaches Algorithms: Index-based [Knorr and Ng 1998]: Compute the distance range join using a spatial index structure; exclude a point from further consideration if its ε-neighborhood contains more than Card(DB) · π points. Nested-loop based [Knorr and Ng 1998]: Divide the buffer into two parts; use the second part to scan/compare all points with the points from the first part. Grid-based [Knorr and Ng 1998]: Build a grid such that any two points from the same grid cell have a distance of at most ε to each other; points need only be compared with points from neighboring cells.
34 Distance-based Approaches Deriving intensional knowledge [Knorr and Ng 1999]: Relies on the DB(ε,π)-outlier model. Find the minimal subset(s) of attributes that explain the outlierness of a point, i.e., in which the point is still an outlier. (Figure: identified outliers and the derived intensional knowledge, sketched.)
35 Distance-based Approaches Outlier scoring based on kNN distances. General models: Take the kNN distance of a point as its outlier score [Ramaswamy et al. 2000]. Aggregate the distances of a point to its 1NN, 2NN, ..., kNN as an outlier score [Angiulli and Pizzuti 2002]. Algorithms: General approaches: Nested-loop: naïve approach: for each object, compute the kNNs with a sequential scan; enhancement: use index structures for kNN queries. Partition-based: partition the data into micro clusters; aggregate information for each partition (e.g. minimum bounding rectangles); allows pruning of micro clusters that cannot qualify when searching for the kNNs of a particular point.
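Both kNN-distance scores can be computed in one pass over a distance matrix; the following naive sketch (illustrative data, arbitrary k) shows them side by side.

```python
import numpy as np

def knn_scores(X, k):
    """Two kNN-based outlier scores per point: the k-distance (Ramaswamy et
    al. model) and the aggregated 1NN..kNN distances (Angiulli and Pizzuti
    model)."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)          # a point is not its own neighbour
    knn_d = np.sort(D, axis=1)[:, :k]    # distances to the 1NN .. kNN
    return knn_d[:, -1], knn_d.sum(axis=1)

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(size=(100, 2)), [[6.0, 6.0]]])
kdist, agg = knn_scores(X, k=5)
```

Sorting either score descending yields the top-n outliers targeted by the sample algorithms on the next slides.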
36 Distance-based Approaches Sample Algorithms (computing top-n outliers): Nested-loop [Ramaswamy et al. 2000]: Simple NL algorithm with index support for kNN queries; partition-based algorithm (based on a clustering algorithm that has linear time complexity); algorithm for the simple kNN-distance model. Linearization [Angiulli and Pizzuti 2002]: Linearization of a multi-dimensional data set using space-filling curves; the 1D representation is partitioned into micro clusters; algorithm for the average kNN-distance model. ORCA [Bay and Schwabacher 2003]: NL algorithm with randomization and simple pruning. Pruning: if a point has a score lower than the top-n outlier so far (cut-off), remove this point from further consideration => non-outliers are pruned => works well on randomized data (can be done in linear time); worst case: naïve NL algorithm. Algorithm for both kNN-distance models and the DB(ε,π)-outlier model.
37 Distance-based Approaches Sample Algorithms (cont.): RBRP [Ghoting et al. 2006]: Idea: try to increase the cut-off as quickly as possible => increase the pruning power. Compute approximate kNNs for each point to get a better cut-off. For approximate kNN search, the data points are partitioned into micro clusters and kNNs are only searched within each micro cluster. Algorithm for both kNN-distance models. Further approaches: also apply partitioning-based algorithms using micro clusters [McCallum et al. 2000], [Tao et al. 2006]; approximate solution based on reference points [Pei et al. 2006]. Discussion: Output can be a scoring (kNN-distance models) or a labeling (kNN-distance models and the DB(ε,π)-outlier model). Approaches are local (the resolution can be adjusted by the user via ε or k).
38 Distance-based Approaches Variant: Outlier Detection using In-degree Number [Hautamaki et al. 2004]. Idea: Construct the kNN graph for a data set: vertices are data points; if q ∈ kNN(p), then there is a directed edge from p to q. A vertex that has an indegree less than or equal to T (a user-defined threshold) is an outlier. Discussion: The indegree of a vertex in the kNN graph equals the number of reverse kNNs (RkNN) of the corresponding point; the RkNNs of a point p are those data objects having p among their kNNs. Intuition of the model: outliers are points that are among the kNNs of at most T other points, i.e., have at most T RkNNs. Outputs an outlier label. Is a local approach (depending on the user-defined parameter k).
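The kNN graph and its in-degrees can be sketched with one argsort and one bincount; the data and the parameters k and T are illustrative assumptions.

```python
import numpy as np

def indegree_outliers(X, k, T):
    """kNN-graph in-degree sketch: label p an outlier if it is among the
    kNNs of at most T other points, i.e. has at most T reverse kNNs."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)
    knn_idx = np.argsort(D, axis=1)[:, :k]        # edge p -> q for q in kNN(p)
    indegree = np.bincount(knn_idx.ravel(), minlength=len(X))
    return np.where(indegree <= T)[0], indegree

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(size=(100, 2)), [[10.0, 10.0]]])
outliers, indeg = indegree_outliers(X, k=5, T=1)
```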
39 Distance-based Approaches Resolution-based outlier factor (ROF) [Fan et al. 2006]. Model: Depending on the resolution of the applied distance thresholds, points are outliers or within a cluster. With the maximal resolution Rmax (minimal distance threshold) all points are outliers; with the minimal resolution Rmin (maximal distance threshold) all points are within a cluster. Change the resolution from Rmax to Rmin so that at each step at least one point changes from being an outlier to being a member of a cluster. A cluster is defined similarly as in DBSCAN [Ester et al. 1996] as a transitive closure of r-neighborhoods (where r is the current resolution). ROF value: ROF(p) = Σ over the resolutions r between Rmin and Rmax of (clustersize_{r−1}(p) − 1) / clustersize_r(p). Discussion: Outputs a score (the ROF value). The resolution is varied automatically from local to global.
41 Density-based Approaches General idea: Compare the density around a point with the density around its local neighbors; the relative density of a point compared to its neighbors is computed as an outlier score. Approaches also differ in how they estimate density. Basic assumption: The density around a normal data object is similar to the density around its neighbors; the density around an outlier is considerably different from the density around its neighbors.
42 Density-based Approaches Local Outlier Factor (LOF) [Breunig et al. 1999], [Breunig et al. 2000]. Motivation: Distance-based outlier detection models have problems with different densities: how to compare the neighborhoods of points from areas of different densities? Example (two clusters C1 and C2 of different density, points q ∈ C1, and outliers o1, o2): In the DB(ε,π)-outlier model, the parameters ε and π cannot be chosen so that o2 is an outlier but none of the points in cluster C1 (e.g. q) is an outlier. With outlier scores based on the kNN-distance, the kNN-distances of objects in C1 (e.g. q) are larger than the kNN-distance of o2. Solution: consider the relative density.
43 Density-based Approaches Model: Reachability distance (introduces a smoothing factor): reach-dist_k(p, o) = max{ k-distance(o), dist(p, o) }. Local reachability density (lrd) of point p, the inverse of the average reach-dists of the kNNs of p: lrd_k(p) = 1 / ( Σ_{o ∈ kNN(p)} reach-dist_k(p, o) / Card(kNN(p)) ). Local outlier factor (LOF) of point p, the average ratio of the lrds of the neighbors of p and the lrd of p: LOF_k(p) = ( Σ_{o ∈ kNN(p)} lrd_k(o) / lrd_k(p) ) / Card(kNN(p)).
44 Density-based Approaches Properties: LOF ≈ 1: point is in a cluster (region with homogeneous density around the point and its neighbors). LOF >> 1: point is an outlier. (Figure: data set and LOFs with MinPts = 40.) Discussion: The choice of k (MinPts in the original paper) specifies the reference set. Originally implements a local approach (the resolution depends on the user's choice of k). Outputs a scoring (assigns an LOF value to each point).
45 Density-based Approaches Variants of LOF: Mining top-n local outliers [Jin et al. 2001]. Idea: Usually, a user is only interested in the top-n outliers; do not compute the LOF for all data objects => save runtime. Method: Compress the data points into micro clusters using the CFs of BIRCH [Zhang et al. 1996]. Derive upper and lower bounds of the reachability distances, lrd values, and LOF values for points within a micro cluster. Compute upper and lower bounds of the LOF values for micro clusters and sort the results w.r.t. ascending lower bound. Prune micro clusters that cannot accommodate points among the top-n outliers (n highest LOF values). Iteratively refine the remaining micro clusters and prune points accordingly.
46 Density-based Approaches Variants of LOF (cont.): Connectivity-based outlier factor (COF) [Tang et al. 2002]. Motivation: In regions of low density, it may be hard to detect outliers; choosing a low value for k is often not appropriate. Solution: Treat low density and isolation differently. (Figure: example data set with LOF and COF results.)
47 Density-based Approaches Influenced Outlierness (INFLO) [Jin et al. 2006]. Motivation: If clusters of different densities are not clearly separated, LOF will have problems: point p will get a higher LOF than points q or r, which is counter-intuitive. Idea: Take the symmetric neighborhood relationship into account. The influence space kIS(p) of a point p includes its kNNs (kNN(p)) and its reverse kNNs (RkNN(p)): kIS(p) = kNN(p) ∪ RkNN(p); in the depicted example, kIS(p) = {q1, q2, q4}.
48 Density-based Approaches Model: Density is simply measured by the inverse of the kNN distance, i.e., den(p) = 1 / k-distance(p). Influenced outlierness of a point p: INFLO_k(p) = ( Σ_{o ∈ kIS(p)} den(o) / Card(kIS(p)) ) / den(p). INFLO takes the ratio of the average density of the objects in the neighborhood of a point p (i.e., in kNN(p) ∪ RkNN(p)) to p's density. Proposed algorithms for mining top-n outliers: index-based; two-way approach; micro-cluster-based approach.
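A naive sketch of the model (illustrative data and k, quadratic in the number of points):

```python
import numpy as np

def inflo(X, k):
    """INFLO sketch: den(p) = 1 / k-distance(p); score = average density of
    the influence space kIS(p) = kNN(p) union RkNN(p), divided by den(p)."""
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)
    order = np.argsort(D, axis=1)
    knn = [set(order[i, :k]) for i in range(n)]
    den = 1.0 / D[np.arange(n), order[:, k - 1]]      # inverse k-distance
    scores = np.empty(n)
    for p in range(n):
        kis = knn[p] | {q for q in range(n) if p in knn[q]}   # add RkNN(p)
        scores[p] = np.mean([den[q] for q in kis]) / den[p]
    return scores

rng = np.random.default_rng(7)
X = np.vstack([rng.normal(size=(100, 2)), [[10.0, 10.0]]])
scores = inflo(X, k=10)
```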
49 Density-based Approaches Properties: Similar to LOF: INFLO ≈ 1: point is in a cluster; INFLO >> 1: point is an outlier. Discussion: Outputs an outlier score. Originally proposed as a local approach (the resolution of the reference set kIS can be adjusted by the user by setting parameter k).
50 Density-based Approaches Local outlier correlation integral (LOCI) [Papadimitriou et al. 2003]. The idea is similar to LOF and its variants. Differences to LOF: Take the ε-neighborhood instead of the kNNs as reference set. Test multiple resolutions (here called "granularities") of the reference set to get rid of any input parameter. Model: ε-neighborhood of a point p: N(p, ε) = {q | dist(p, q) ≤ ε}. Local density of an object p: the number of objects in N(p, ε). Average density of the neighborhood: den(p, ε, α) = Σ_{q ∈ N(p,ε)} Card(N(q, α·ε)) / Card(N(p, ε)). Multi-granularity deviation factor (MDEF): MDEF(p, ε, α) = 1 − Card(N(p, α·ε)) / den(p, ε, α).
51 Density-based Approaches Intuition. (Figure: the sampling neighborhood N(p, ε) and the counting neighborhoods N(p_i, α·ε) that enter den(p, ε, α) and MDEF(p, ε, α).) σMDEF(p, ε, α) is the normalized standard deviation of the densities of all points from N(p, ε). Properties: MDEF = 0 for points within a cluster; MDEF > 0 for outliers; MDEF > 3·σMDEF => outlier.
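For a single point and a single radius, MDEF and σMDEF can be sketched directly from the definitions; the data set and the choices of ε and α below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def mdef(X, p, eps, alpha=0.5):
    """MDEF sketch for one point p at sampling radius eps."""
    D = np.linalg.norm(X - X[p], axis=1)
    N_eps = np.where(D <= eps)[0]                 # sampling neighbourhood N(p, eps)
    counts = np.array([(np.linalg.norm(X - X[q], axis=1) <= alpha * eps).sum()
                       for q in N_eps])           # Card(N(q, alpha*eps))
    den = counts.mean()                           # average density over N(p, eps)
    score = 1.0 - counts[N_eps == p][0] / den     # MDEF(p, eps, alpha)
    sigma = counts.std() / den                    # normalised std of the densities
    return score, sigma

rng = np.random.default_rng(8)
X = np.vstack([rng.normal(scale=0.3, size=(100, 2)), [[5.0, 5.0]]])
s_out, sig_out = mdef(X, 100, eps=8.0)            # planted outlier
s_in, sig_in = mdef(X, 0, eps=8.0)                # cluster point
```

The full method repeats this over all relevant ε values per point, which motivates the approximate aLOCI algorithm two slides on.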
52 Density-based Approaches Features: Parameters ε and α are automatically determined; in fact, all possible values for ε are tested. The LOCI plot displays, for a given point p, the following values w.r.t. ε: Card(N(p, α·ε)), and den(p, ε, α) with a border of ± 3·σden(p, ε, α).
53 Density-based Approaches Algorithms: The exact solution is rather expensive (compute the MDEF values for all possible ε values). aLOCI: fast, approximate solution. Discretize the data space using a grid with side length 2·α·ε, and approximate range queries through grid cells: the ε-neighborhood of a point p is approximated by ζ(p, ε), the set of cells completely covered by the ε-sphere around p. Then Σ_{q ∈ N(p,ε)} Card(N(q, α·ε)) ≈ Σ_{c_j ∈ ζ(p,ε)} c_j², where c_j is the object count of the corresponding cell. Since different ε values are needed, different grids are constructed with varying resolution; these grids can be managed efficiently using a quad-tree.
54 Density-based Approaches Discussion: Exponential runtime w.r.t. the data dimensionality. Output: Label: if the MDEF of a point is > 3·σMDEF, then this point is marked as an outlier. LOCI plot: shows at which resolution a point is an outlier (if any), plus additional information such as the diameter of clusters, distances to clusters, etc. All interesting resolutions, i.e., possible values for ε (from local to global), are tested.
56 High-dimensional Approaches Challenges: Curse of dimensionality: The relative contrast between distances decreases with increasing dimensionality; data is very sparse, almost all points are outliers; the concept of a neighborhood becomes meaningless. Solutions: Use more robust distance functions and find full-dimensional outliers. Find outliers in projections (subspaces) of the original feature space.
57 High-dimensional Approaches ABOD: angle-based outlier degree [Kriegel et al. 2008]. Rationale: Angles are more stable than distances in high-dimensional spaces (cf. e.g. the popularity of cosine-based similarity measures for text data). An object o is an outlier if most other objects are located in similar directions; an object o is no outlier if many other objects are located in varying directions. (Figure: an outlier sees the remaining data under similar directions, an inlier under varying directions.)
58 High-dimensional Approaches Basic assumption: Outliers are at the border of the data distribution; normal points are in the center of the data distribution. Model: Consider for a given point p the angle between the vectors px and py for any two points x, y from the database. Consider the spectrum of all these angles: the broadness of this spectrum is a score for the outlierness of a point. (Figure: broad angle spectrum for an inner point, narrow spectrum for an outlier.)
59 High-dimensional Approaches Model (cont.): Measure the variance of the angle spectrum, weighted by the corresponding distances (for lower-dimensional data sets where angles are less reliable): ABOD(p) = VAR_{x,y ∈ DB} ( ⟨xp, yp⟩ / (‖xp‖² · ‖yp‖²) ). Properties: Small ABOD => outlier; high ABOD => no outlier.
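The naive O(n³) computation follows the formula directly; the small 2-D example below is purely illustrative (in 2-D the distance weighting matters more than the angles, but the score behaves the same way).

```python
import numpy as np

def abod(X):
    """Naive O(n^3) ABOD sketch: variance of the distance-weighted cosine
    spectrum <xp, yp> / (|xp|^2 * |yp|^2); a small value flags an outlier."""
    n = len(X)
    scores = np.empty(n)
    for p in range(n):
        vals = []
        for x in range(n):
            for y in range(x + 1, n):
                if p in (x, y):
                    continue
                a, b = X[x] - X[p], X[y] - X[p]
                vals.append(a @ b / ((a @ a) * (b @ b)))
        scores[p] = np.var(vals)
    return scores

rng = np.random.default_rng(9)
X = np.vstack([rng.normal(size=(40, 2)), [[6.0, 6.0]]])
scores = abod(X)
```

Note the inverted reading compared to the other scores in this tutorial: here the minimum, not the maximum, marks the outlier.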
60 High-dimensional Approaches Algorithms: The naïve algorithm is in O(n³). Approximate algorithm based on random sampling for mining top-n outliers: Do not consider all pairs of other points x, y in the database to compute the angles; compute ABOD based on samples => a lower bound of the real ABOD. Filter out points that have a high lower bound. Refine (compute the exact ABOD value) only for a small number of points. Discussion: Global approach to outlier detection. Outputs an outlier score.
61 High-dimensional Approaches Grid-based subspace outlier detection [Aggarwal and Yu 2000]. Model: Partition the data space by an equi-depth grid (Φ = number of cells in each dimension). Sparsity coefficient S(C) for a k-dimensional grid cell C: S(C) = (count(C) − n·(1/Φ)^k) / sqrt(n·(1/Φ)^k·(1 − (1/Φ)^k)), where count(C) is the number of data objects in C. S(C) < 0 => count(C) is lower than expected. Outliers are those objects that are located in lower-dimensional cells with a negative sparsity coefficient. (Figure: equi-depth grid with Φ = 3.)
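The sparsity coefficient is a one-liner once the cell counts are known; the numbers in the example are illustrative.

```python
import numpy as np

def sparsity_coefficient(count, n, k, phi):
    """Sparsity coefficient of a k-dimensional cell of an equi-depth grid
    with phi cells per dimension; (1/phi)**k is the expected fraction of
    the n points falling into the cell under independence."""
    f_k = (1.0 / phi) ** k
    return (count - n * f_k) / np.sqrt(n * f_k * (1.0 - f_k))

# e.g. a 2-D cell of a grid with phi = 3 over n = 900 points is expected
# to hold 900 / 9 = 100 points; fewer than that gives a negative S(C)
s_empty = sparsity_coefficient(0, 900, 2, 3)
```

It is simply the standardized deviation of the observed cell count from its expectation, so strongly negative values mark sparse projections.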
62 High-dimensional Approaches Algorithm: Find the m grid cells (projections) with the lowest sparsity coefficients. The brute-force algorithm is in O(Φ^d); an evolutionary algorithm is used instead (input: m and the dimensionality of the cells). Discussion: The results need not be the points from the optimal cells. Very coarse model (all objects that are in a cell with fewer points than expected). Quality depends on grid resolution and grid position. Outputs a labeling. Implements a global approach (key criterion: the globally expected number of points within a cell).
63 High-dimensional Approaches SOD: subspace outlier degree [Kriegel et al. 2009]. Motivation: Outliers may be visible only in subspaces of the original data. Model: Compute the subspace in which the kNNs of a point p minimize the variance. Compute the hyperplane H(kNN(p)) that is orthogonal to that subspace. Take the distance of p to the hyperplane, dist(H(kNN(p)), p), as the measure for its outlierness. (Figure: attributes A1, A2, A3; the kNNs of p span a hyperplane H(kNN(p)) and p deviates from it.)
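A simplified sketch of the idea: take the kNNs as reference set, call an attribute "relevant" if the neighbors vary clearly less than average in it (the threshold alpha below is a hypothetical choice, not from the paper), and measure p's deviation from the neighbors' mean in those attributes.

```python
import numpy as np

def sod(X, p, k, alpha=0.8):
    """SOD-style sketch: distance of p to its neighbours' mean, restricted
    to the low-variance attributes, normalised by the subspace dimension."""
    D = np.linalg.norm(X - X[p], axis=1)
    nn = np.argsort(D)[1:k + 1]                  # kNN reference set, excluding p
    mu, var = X[nn].mean(axis=0), X[nn].var(axis=0)
    rel = var < alpha * var.mean()               # attributes with low variance
    if not rel.any():
        return 0.0
    return float(np.linalg.norm(X[p, rel] - mu[rel]) / rel.sum())

rng = np.random.default_rng(10)
cluster = np.column_stack([rng.uniform(-1, 1, 100),
                           rng.uniform(-1, 1, 100),
                           rng.normal(scale=0.01, size=100)])  # flat in dim 3
X = np.vstack([cluster, [[0.0, 0.0, 3.0]]])      # outlier only in dimension 3
```

The planted point is inconspicuous in the first two attributes and only sticks out in the flat third one, which is exactly the case SOD targets.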
64 High-dimensional Approaches Discussion: Assumes that the kNNs of outliers have a lower-dimensional projection with small variance. The resolution is local (can be adjusted by the user via the parameter k). The output is a scoring (the SOD value).
66 Summary: Different models are based on different assumptions to model outliers. Different models provide different types of output (labeling/scoring). Different models consider outliers at different resolutions (global/local). Thus, different models will produce different results. A thorough and comprehensive comparison between the different models and approaches is still missing.
67 Summary Outlook: Experimental evaluation of the different approaches to understand and compare their differences and common properties. A first step towards a unification of the diverse approaches: providing density-based outlier scores as probability values [Kriegel et al. 2009a], judging the deviation of the outlier score from its expected value. Visualization [Achtert et al. 2010]. New models. Performance issues. Complex data types. High-dimensional data.
69 List of References
70 Literature
Achtert, E., Kriegel, H.-P., Reichert, L., Schubert, E., Wojdanowski, R., and Zimek, A. 2010. Visual evaluation of outlier detection models. In Proc. Int. Conf. on Database Systems for Advanced Applications (DASFAA), Tsukuba, Japan.
Aggarwal, C.C. and Yu, P.S. 2000. Outlier detection for high dimensional data. In Proc. ACM SIGMOD Int. Conf. on Management of Data (SIGMOD), Dallas, TX.
Angiulli, F. and Pizzuti, C. 2002. Fast outlier detection in high dimensional spaces. In Proc. European Conf. on Principles of Knowledge Discovery and Data Mining, Helsinki, Finland.
Arning, A., Agrawal, R., and Raghavan, P. 1996. A linear method for deviation detection in large databases. In Proc. Int. Conf. on Knowledge Discovery and Data Mining (KDD), Portland, OR.
Barnett, V. 1978. The study of outliers: purpose and model. Applied Statistics, 27(3).
Bay, S.D. and Schwabacher, M. 2003. Mining distance-based outliers in near linear time with randomization and a simple pruning rule. In Proc. Int. Conf. on Knowledge Discovery and Data Mining (KDD), Washington, DC.
Breunig, M.M., Kriegel, H.-P., Ng, R.T., and Sander, J. 1999. OPTICS-OF: identifying local outliers. In Proc. European Conf. on Principles of Data Mining and Knowledge Discovery (PKDD), Prague, Czech Republic.
Breunig, M.M., Kriegel, H.-P., Ng, R.T., and Sander, J. 2000. LOF: identifying density-based local outliers. In Proc. ACM SIGMOD Int. Conf. on Management of Data (SIGMOD), Dallas, TX.
71 Literature Ester, M., Kriegel, H.-P., Sander, J., and Xu, X. A density-based algorithm for discovering clusters in large spatial databases with noise. In Proc. Int. Conf. on Knowledge Discovery and Data Mining (KDD), Portland, OR. Fan, H., Zaïane, O., Foss, A., and Wu, J. A nonparametric outlier detection for efficiently discovering top-n outliers from engineering data. In Proc. Pacific-Asia Conf. on Knowledge Discovery and Data Mining (PAKDD), Singapore. Ghoting, A., Parthasarathy, S., and Otey, M. Fast mining of distance-based outliers in high dimensional spaces. In Proc. SIAM Int. Conf. on Data Mining (SDM), Bethesda, MD. Hautamaki, V., Karkkainen, I., and Franti, P. Outlier detection using k-nearest neighbour graph. In Proc. IEEE Int. Conf. on Pattern Recognition (ICPR), Cambridge, UK. Hawkins, D. 1980. Identification of Outliers. Chapman and Hall. Jin, W., Tung, A., and Han, J. Mining top-n local outliers in large databases. In Proc. ACM SIGKDD Int. Conf. on Knowledge Discovery and Data Mining (SIGKDD), San Francisco, CA. Jin, W., Tung, A., Han, J., and Wang, W. Ranking outliers using symmetric neighborhood relationship. In Proc. Pacific-Asia Conf. on Knowledge Discovery and Data Mining (PAKDD), Singapore. Johnson, T., Kwok, I., and Ng, R.T. Fast computation of 2-dimensional depth contours. In Proc. Int. Conf. on Knowledge Discovery and Data Mining (KDD), New York, NY. Knorr, E.M. and Ng, R.T. A unified approach for mining outliers. In Proc. Conf. of the Centre for Advanced Studies on Collaborative Research (CASCON), Toronto, Canada. 71
72 Literature Knorr, E.M. and Ng, R.T. Algorithms for mining distance-based outliers in large datasets. In Proc. Int. Conf. on Very Large Data Bases (VLDB), New York, NY. Knorr, E.M. and Ng, R.T. Finding intensional knowledge of distance-based outliers. In Proc. Int. Conf. on Very Large Data Bases (VLDB), Edinburgh, Scotland. Kriegel, H.-P., Kröger, P., Schubert, E., and Zimek, A. Outlier detection in axis-parallel subspaces of high dimensional data. In Proc. Pacific-Asia Conf. on Knowledge Discovery and Data Mining (PAKDD), Bangkok, Thailand. Kriegel, H.-P., Kröger, P., Schubert, E., and Zimek, A. 2009a. LoOP: local outlier probabilities. In Proc. ACM Conf. on Information and Knowledge Management (CIKM), Hong Kong, China. Kriegel, H.-P., Schubert, M., and Zimek, A. Angle-based outlier detection. In Proc. ACM SIGKDD Int. Conf. on Knowledge Discovery and Data Mining (SIGKDD), Las Vegas, NV. McCallum, A., Nigam, K., and Ungar, L.H. Efficient clustering of high-dimensional data sets with application to reference matching. In Proc. ACM SIGKDD Int. Conf. on Knowledge Discovery and Data Mining (SIGKDD), Boston, MA. Papadimitriou, S., Kitagawa, H., Gibbons, P., and Faloutsos, C. LOCI: fast outlier detection using the local correlation integral. In Proc. IEEE Int. Conf. on Data Engineering (ICDE), Hong Kong, China. Pei, Y., Zaïane, O., and Gao, Y. An efficient reference-based approach to outlier detection in large datasets. In Proc. IEEE Int. Conf. on Data Mining (ICDM), Hong Kong, China. Preparata, F. and Shamos, M. Computational Geometry: An Introduction. Springer-Verlag. 72
73 Literature Ramaswamy, S., Rastogi, R., and Shim, K. Efficient algorithms for mining outliers from large data sets. In Proc. ACM SIGMOD Int. Conf. on Management of Data (SIGMOD), Dallas, TX. Rousseeuw, P.J. and Leroy, A.M. Robust Regression and Outlier Detection. John Wiley. Ruts, I. and Rousseeuw, P.J. Computing depth contours of bivariate point clouds. Computational Statistics and Data Analysis, 23. Tan, P.-N., Steinbach, M., and Kumar, V. Introduction to Data Mining. Addison-Wesley. Tang, J., Chen, Z., Fu, A.W.-C., and Cheung, D.W. Enhancing effectiveness of outlier detections for low density patterns. In Proc. Pacific-Asia Conf. on Knowledge Discovery and Data Mining (PAKDD), Taipei, Taiwan. Tao, Y., Xiao, X., and Zhou, S. Mining distance-based outliers from large databases in any metric space. In Proc. ACM SIGKDD Int. Conf. on Knowledge Discovery and Data Mining (SIGKDD), New York, NY. Tukey, J. Exploratory Data Analysis. Addison-Wesley. Zhang, T., Ramakrishnan, R., and Livny, M. BIRCH: an efficient data clustering method for very large databases. In Proc. ACM SIGMOD Int. Conf. on Management of Data (SIGMOD), Montreal, Canada. 73