Multidimensional data analysis
1 Multidimensional data analysis Ella Bingham Dept of Computer Science, University of Helsinki June 2008 The Finnish Graduate School in Astronomy and Space Physics Summer School 2008: Multi-dimensional Data Analysis and Data Mining Ella Bingham
2 Practicalities My lectures and exercises are all of Monday, and Tuesday until lunch at noon. Please ask questions at any time. Contact: during the lectures and breaks (in Finnish and English). I will leave on Tuesday at noon! Available later via email and in person in Kumpula, Helsinki. Choice of material: hopefully related to the types of data you work with. Ella Bingham 1
3 Acknowledgements to my colleagues Saara Hyvönen, Juho Rousu, Pirjo Moen etc. for putting their lecture material in the public domain. Acknowledgements to Noora Partamies and Mikko Syrjäsuo for providing some data and codes. Ella Bingham 2
4 Contents Part I: Some general concepts Part II: Methods Ella Bingham 3
5 Part I: Some general concepts These concepts and issues are related to all kinds of data mining, irrespective of the method chosen. Ella Bingham 4
6 What is data mining Data mining, statistical data analysis, multidimensional data analysis, etc. will be used as synonyms. Goals: (1) Summarize, describe and explore the data (2) Infer the nature of the process which produced the data. Often (1) is done to accomplish (2). Extraction of interesting (= non-trivial, unknown, potentially useful) information or patterns from data in large databases. Ella Bingham 5
7 Tell me something interesting about this data. Describe this data. Data mining mainly means we take the data as it is: we do not assume a model that the data would follow (an important difference from your daily research!). We explore the properties of the data, and try to draw inferences about the underlying phenomenon. We try to construct a model of what we see in the data. This is called learning. Here model is a very general concept; details will depend on the type of data and the method chosen to model the data. Ella Bingham 6
8 Some literature Han & Kamber: Data Mining: Concepts and Techniques, Morgan Kaufmann, hanj/bk2/ Tan & Steinbach & Kumar: Introduction to Data Mining, Addison-Wesley, kumar/dmbook/index.php An electronic mini version of the book H. Lohninger: Teach/Me Data Analysis, Springer-Verlag, 1999 is available online. Wikipedia. Ella Bingham 7
9 Variables and observations An issue to decide at startup: what are your variables (columns, dimensions, features, attributes) and what are your observations (rows, items, measurements, replications,... ) The distinction is not always easy, but must be done! Examples: in image data of handwritten digits, each pixel is a variable, and each image is one observation In contrast, in spectral image data, you only have one image but several measurements of it: each spectral channel is a variable, and each pixel is an observation Ella Bingham 8
10 in time series, each time point is one observation, and whatever you measure are the variables. Notation to be used: j = 1, ..., D are the variables and i = 1, ..., N are the observations. x_ij is the jth variable of the ith observation. Ella Bingham 9
11 Data preprocessing Partly based on expert knowledge, but some guidelines exist too. In data mining, we aim at preprocessing the data so that the choice of the modeling method is independent of the original form of the data (whether images, text, ...). Outlier removal: outliers might be measurement errors, and might make subsequent preprocessing error-prone. Try to identify them by plotting the data with respect to different variables, or plotting the histograms of each variable separately. Ella Bingham 10
12 Handling of missing values: use caution. Remove the whole observation if one of its variables is missing? Replace by the average value? Replace by NaN? (Often works in Matlab.) Do not replace by 0 or -1 etc.! Preprocessing needed if the data are not in vector/matrix form. Needed if the variables are not comparable to each other (e.g. one variable is measured as kilometres, another as millimetres, another as colours). Ella Bingham 11
13 Needed to make it meaningful to measure the distances between data points. Euclidean distance: d(x, z) = sqrt( Σ_{j=1}^{D} (x_j − z_j)^2 ). Manhattan distance: d(x, z) = Σ_{j=1}^{D} |x_j − z_j|. Hamming distance: d(x, z) = Σ_{j=1}^{D} 1(x_j ≠ z_j). Here x_j are the variables in observation x. When measuring distances, each variable is equally important. If two variables measure a more or less similar phenomenon, then this phenomenon becomes twice as important as the other variables. If you do not want this to happen, you must either remove such duplicate variables or make your variables uncorrelated (we will return to this). Ella Bingham 12
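A minimal Matlab sketch of the three distances above for two example observations; the vectors x and z below are made up for illustration:

x = [1 0 3 2];                         % two example observations, row vectors of length D
z = [2 0 1 2];
d_euclidean = sqrt(sum((x - z).^2));   % Euclidean distance
d_manhattan = sum(abs(x - z));         % Manhattan distance
d_hamming   = sum(x ~= z);             % Hamming distance: number of differing variables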
14 Preprocessing needed if the data is not numerical and we want to use a method for numerical variables. Example: Transforming a nominal (categorical) variable into numerical. Assume the variable Fruit takes values in the set { apple, pear, orange }. Replace the variable Fruit with 3 new variables: Apple, Pear, Orange. If the value of Fruit in observation i was pear, then set the values of the new variables in observation i as Apple=0, Pear=1, Orange=0. Preprocessing needed if the variance within one variable j is much larger than the variance within another variable j'. Here a small difference in j' should be as important as a large difference in j, but our data mining method does not understand this. Ella Bingham 13
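A small Matlab sketch of the Fruit transformation above, assuming the categorical variable is stored as a cell array of strings (the names fruit, categories and onehot are only for illustration):

fruit = {'apple'; 'pear'; 'orange'; 'pear'};      % the categorical variable, one value per observation
categories = {'apple', 'pear', 'orange'};         % possible values -> new variables Apple, Pear, Orange
onehot = zeros(numel(fruit), numel(categories));
for i = 1:numel(fruit)
    onehot(i, :) = strcmp(fruit{i}, categories);  % 1 in the column of the observed value, 0 elsewhere
end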
15 Normalization solves the problems of different scales (kilometres vs millimetres) and variances: First, center the variable j by subtracting the mean µ_j over all observations; this makes the variable zero mean. Then divide by the standard deviation σ_j of variable j. The new, normalized variable is thus x_j^new = (x_j − µ_j)/σ_j. Replace all observations x_ij by x_ij^new. Also other possibilities: logarithmic transformation x_j^new = log x_j if the values of x_j vary very much, and it is hard to see the behaviour at small values. Also gaussian and sigmoid transformations. Ella Bingham 14
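A minimal Matlab sketch of this normalization, assuming the data are already in an N x D matrix X:

[N, D] = size(X);
mu    = mean(X, 1);                                      % 1 x D row vector of column means
sigma = std(X, 0, 1);                                    % 1 x D row vector of column standard deviations
Xnew  = (X - repmat(mu, N, 1)) ./ repmat(sigma, N, 1);   % zero mean, unit variance in each column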
16 Instead of preprocessing heterogeneous data, we can construct a suitable distance function for each variable, but we might prefer automatic methods! Ella Bingham 15
17 Bootstrapping Resampling: Assume your data set had N observations. Create a new data set having N observations by sampling from the original data with replacement N times. (Some observations will appear several times, some not at all.) Create M such new data sets; typically M = 1000 (or more). Measure the phenomenon of interest in each of these new data sets. Does it vary a lot? If yes, then the phenomenon was not real or consistent. Confidence intervals for the phenomenon of interest can be obtained by bootstrapping (software exists). Ella Bingham 16
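A rough Matlab sketch of the resampling, assuming an N x D data matrix X; here the "phenomenon of interest" is simply the mean of the first variable, an assumption made only for illustration:

N = size(X, 1);
M = 1000;                               % number of bootstrap data sets
stat = zeros(M, 1);
for m = 1:M
    idx = randi(N, N, 1);               % sample N observations with replacement
    stat(m) = mean(X(idx, 1));          % the statistic measured on the resampled data
end
stat = sort(stat);
ci = [stat(round(0.025*M)), stat(round(0.975*M))];   % rough 95 % bootstrap confidence interval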
18 Characterizing the uncertainty How do you know when you don't know? Most data mining methods always give you an output, even when there is not much evidence behind the output. It would be good to know how reliable the output is. Methods: Bootstrap confidence intervals of the phenomenon Permutation tests Randomization Ella Bingham 17
19 Permutation tests You have observed an interesting phenomenon, using a nice method. Could it occur just by chance in your data, using your method? Example: You notice a strong correlation between variables x_j and x_j'. Reorder the observations randomly (that is, permute) and measure the correlation between x_j (in the original ordering) and x_j'^perm (in the permuted ordering). Is the correlation still as strong? Ella Bingham 18
20 Repeat the permutation 1000 times. In how many cases did you get as strong a correlation as (or stronger than) in the original setting? Several cases: the phenomenon was just by chance. Very few cases: the phenomenon is real. You can even get a P value: P = percentage of repetitions in which the phenomenon was as strong as or stronger than in the original setting. Applicable also when your data is not Normally distributed (and many other methods are inappropriate). Ella Bingham 19
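A small Matlab sketch of this permutation test for the correlation of two variables xj and xk (column vectors; the variable names are only for illustration):

rho0 = corrcoef(xj, xk);  rho0 = rho0(1, 2);     % observed correlation
M = 1000;
count = 0;
for m = 1:M
    p = randperm(length(xk));
    c = corrcoef(xj, xk(p));                     % correlation after permuting one variable
    if abs(c(1, 2)) >= abs(rho0)
        count = count + 1;
    end
end
P = count / M;                                   % permutation P value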
21 Overfitting A problem to be concerned about. Your model is too flexible, it will learn unnecessary details in your data, and is not applicable to other data. Will occur if you use too complicated a model, especially in high dimensional spaces where the observations are far away from each other, and it is practically impossible to have dense observations. Ella Bingham 20
22 Curse of dimensionality Dimensionality = number of variables. Suppose you have observations that follow a uniform distribution, in 1 dimension (left) and 2 dimensions (right). (Figure: scatter plots of N=100000 uniformly distributed observations in 1D and in 2D.) Ella Bingham 21
23 You don't know the distribution but try to infer it from the observations. You proceed by dividing the data into 100 bins in each dimension, and calculate the number of observations in each bin. If the number is roughly the same in each bin, you conclude that the distribution is uniform. Assume you have N observations that fall into these bins randomly. In a perfect uniform distribution you should have exactly N/100 (1-dimensional) or N/(100 · 100) = N/10000 (2-dimensional) observations in each bin. Ella Bingham 22
24 In practice there is some variation, and the average squared distance from the uniform distribution depends on N. (Figure: average squared distance from the uniform distribution as a function of the number of observations N, for 1-dimensional and 2-dimensional data.) Ella Bingham 23
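A small Matlab simulation in the spirit of the figure above (not the exact quantity plotted on the slide): the relative deviation of the bin counts from the uniform expectation is roughly 100 times larger in 2D than in 1D for the same N.

N = 100000;
x1 = rand(N, 1);                                     % uniform data in 1 dimension
c1 = accumarray(floor(x1*100) + 1, 1, [100 1]);      % counts in 100 bins, expected N/100 each
dev1 = mean(((c1 - N/100) / (N/100)).^2);            % relative squared deviation from uniform

x2 = rand(N, 2);                                     % uniform data in 2 dimensions
b  = floor(x2*100) + 1;                              % bin index along each dimension
c2 = accumarray(sub2ind([100 100], b(:,1), b(:,2)), 1, [100*100 1]);   % counts in 100*100 bins
dev2 = mean(((c2 - N/10000) / (N/10000)).^2);        % roughly 100 times dev1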
25 If you want to reach the same accuracy in 2D as in 1D, and you had N observations in 1D, you need 100 · N observations in 2D! In 3D you would need 100 · 100 · N = 10000 · N observations: an exponential increase! Very high dimensional spaces have many counterintuitive properties: Observations are far away from each other. To obtain a densely populated set of observations (important when learning), a large number N is needed. Some other motivation: Small dimensions are easier for a human being to understand Ella Bingham 24
26 and storing the data is cheaper and finding outlier observations is easier and noise removal is easier. We will see how to reduce the dimensionality. A high number of observations can also be a problem: take random samples. Ella Bingham 25
27 Global or local? The models can be divided into global and local global: the model characterizes all observations local: the model describes a small subset of observations Ella Bingham 26
28 Supervised or unsupervised? Data mining can be divided into supervised and unsupervised learning supervised: we have some training data (training observations) in which we know what the model should give as an output. Example of training data: we have measured a few variables and observed that Aurora borealis always appear when the variables take values in a certain range, and never appear when the variables take some other values. We construct a model that fits these observations; the output of the model is whether Aurora borealis appear or not. Ella Bingham 27
29 After that, we take a separate test data set and check if our model outputs a correct value at those observations. Methods of supervised learning: classification, regression. Unsupervised: no such training data or output exists. Reasons for this: it is very expensive or difficult to get values of the output, or hard to define what the output would be. But also in unsupervised learning, we can split our data randomly into training and test data: construct the model using the training data, and measure how well our model fits the test data Ella Bingham 28
30 Methods of unsupervised learning: clustering, latent variable models. With some unsupervised methods, it is difficult to determine when the model fits the test data, if our aim is only to explore the data Ella Bingham 29
31 Cross validation We would like to use as many observations as possible in training, to get the best possible model. But then the test data set becomes very small, and we cannot draw conclusions from it. Solution: cross validation. Divide the observations into S subsets of equal size. Use S − 1 of them as training data to construct the model, and 1 as test data to measure the error Ella Bingham 30
32 Repeat S times, using each subset in turn as test data The final error is the average of these S errors. In practice S = 5 or S = 10 are used Ella Bingham 31
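A skeleton of S-fold cross validation in Matlab; trainModel and predictModel are hypothetical placeholders for whatever supervised method is being evaluated, and X, y are the data and the known outputs:

S = 5;
N = size(X, 1);
p = randperm(N);                          % shuffle the observations
foldsize = floor(N / S);
err = zeros(S, 1);
for s = 1:S
    testidx  = p((s-1)*foldsize + 1 : s*foldsize);
    trainidx = setdiff(p, testidx);
    model  = trainModel(X(trainidx, :), y(trainidx));  % fit the model on S-1 subsets (hypothetical function)
    yhat   = predictModel(model, X(testidx, :));       % predict on the held-out subset (hypothetical function)
    err(s) = mean((yhat - y(testidx)).^2);             % e.g. mean squared error
end
cv_error = mean(err);                     % the final error is the average of the S errors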
33 Part II: Methods We will look at some modeling methods in detail, and others only superficially. But you can go and find more information yourself later. Ella Bingham 32
34 Dimensionality reduction These methods can either be used as preprocessing, prior to some other data mining methods; or as such, to describe the data. Two approaches: Feature selection: discard some variables (=features) completely. Latent variables: Form a small set of new variables that describe the main characteristics of the data. Let's look at latent variables first. Ella Bingham 33
35 Example: Handwritten digits We want to classify handwritten digits into classes 0-9. Each digit is presented as a 100x100 gray scale image: one observation in a 10000-dimensional space. Ella Bingham 34
36 But the contents of the image can be described much more simply than using 10000 variables. The contents are: which digit is shown, and how it is deformed (e.g. translated, rotated or scaled). The data can be presented with a much smaller number of variables: latent variables Ella Bingham 35
37 Latent variables A data matrix of size N × D is presented using K latent variables as a product of two matrices: X_{N×D} = Y_{N×K} W_{K×D}, where the columns of Y are the latent variables and the rows of W are called basis vectors. A lot of different methods, outputting different latent variables. Here we have a linear latent variable representation; could also be nonlinear Ella Bingham 36
38 Singular value decomposition (SVD) Every matrix X of size N × D can be presented as X_{N×D} = U_{N×M} S_{M×M} (V_{D×M})^T, where M = min(N, D). The columns of matrix U are orthonormal (that is, perpendicular and of unit length), and they are called the left singular vectors. The columns of V are also orthonormal, and called the right singular vectors. Ella Bingham 37
39 Matrix S is a diagonal matrix: S = diag(σ_1, σ_2, ..., σ_M), with σ_1 ≥ σ_2 ≥ ... ≥ σ_M ≥ 0. The values σ are called the singular values. The ith singular value shows the amount of variation along the ith dimension. Large value ⇒ important dimension. The ith singular value corresponds to the ith left singular vector and to the ith right singular vector. The singular value decomposition can also be written as X v_j = σ_j u_j, X^T u_j = σ_j v_j. Ella Bingham 38
40 In practice we use some ready made software: Matlab, GNU Scientific Library, ALGLIB, JAMA, LAPACK, SVDLIBC (Note that SVD is unique up to sign flips, and depending on your software you may get different results: some columns of U will have a minus sign, and similarly the corresponding columns in V.) Ella Bingham 39
41 Matlab example. The slide showed a small numerical matrix X and the command [U,S,V] = svd(X, 'econ'), together with the resulting factors U, S and V (the numerical values are omitted in this transcription). Ella Bingham 40
42 Approximating a matrix Let U_K = (u_1 u_2 ... u_K), V_K = (v_1 v_2 ... v_K) and S_K = diag(σ_1, σ_2, ..., σ_K), and let X_K = U_K S_K V_K^T. Now min_{rank(B) ≤ K} ||X − B||_2 = ||X − X_K||_2 = σ_{K+1}. That is: the best (in terms of the Euclidean norm, and some other norms) rank K approximation of a matrix is given by the singular value decomposition, by taking the K first singular vectors and singular values. Ella Bingham 41
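A short Matlab sketch of the rank-K approximation, assuming an N x D data matrix X (K = 10 is an arbitrary example rank; see Choosing K below):

[U, S, V] = svd(X, 'econ');
K = 10;                                      % chosen rank
XK = U(:, 1:K) * S(1:K, 1:K) * V(:, 1:K)';   % best rank-K approximation of X
err2norm = norm(X - XK);                     % equals the (K+1)th singular value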
43 Intuitively: in linear regression we find a one-dimensional line that best fits the two-dimensional points. In SVD we find a K-dimensional space that best fits a D-dimensional space, where K < D. (What is the rank of a matrix? It is the number of linearly independent columns. And the columns are linearly independent if they cannot be presented as sums of other columns, possibly scaled. Linearly dependent columns do not bring any new information to the data. Also, the rank of a matrix is the number of nonzero singular values.) Ella Bingham 42
44 Then what? The low rank representation can be used in: noise removal; compressing the data; finding the sensible number of dimensions in a matrix; as a means of preprocessing. Ella Bingham 43
45 Example: noise removal Let X = X_K + N, where X_K is a matrix having a (low) rank K, and the noise N is negligible such that ||N|| is small. In such a case the singular values reveal the amount of interesting content in the data. (Figure: the singular values plotted against their index.) Ella Bingham 44
46 (Figure: the logarithms of the singular values plotted against their index.) In practice: plug your data into the SVD, and the output might reveal how much information there is, and how much the data could be compressed, by representing it using the SVD. Ella Bingham 45
47 Example: handwritten digits In a data set containing handwritten 3s, represent each 3 as a vector of pixel values, put the vectors into the rows of a matrix, and do the SVD. Left singular vectors u_j: (shown as images on the slide.) Ella Bingham 46
48 The first singular vector is typically some kind of an average, and the others bring new aspects of the data Ella Bingham 47
49 Computational complexity of SVD The time required for computing grows as M^2 (M = min(nr of rows, nr of columns)) and K^3 (K = number of singular values). Efficient implementations exist if data is sparse. If the number of observations N is large, we can compute the SVD for a subset of the observations only, and if the rest of the observations must also be presented in the SVD basis, multiply them by V. But most importantly, use as small K as possible, but not too small Ella Bingham 48
50 Choosing K In practice, a few dozen is often fine. More advanced methods: Monitor the decay of the singular values and stop when there is a large gap between two values. Compute the Frobenius norm of X_K versus X: let Y = X − X_K, then F = sum(diag(Y^T Y)) and compare this to the norm of the original X; if it is small, then X_K is good enough and K is large enough. Compare the sum of all singular values to the sum of the K largest singular values, and choose the K for which e.g. 85 % is obtained. (But this requires computing all of them.) Ella Bingham 49
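Two of these criteria as a Matlab sketch, reusing S and XK from the approximation example above:

s = diag(S);                                  % singular values, largest first
K = find(cumsum(s) / sum(s) >= 0.85, 1);      % smallest K covering e.g. 85 % of the sum
Y = X - XK;
frob2 = sum(diag(Y' * Y));                    % squared Frobenius norm of the residual
rel   = frob2 / sum(diag(X' * X));            % compare to the norm of the original X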
51 Principal component analysis (PCA) The goal is to find a small set of artificial variables such that as much as possible of the variance of the data is retained. The first principal component is a vector in whose direction the variation of the data is the largest The next principal component is orthogonal to the first one, and it is in the direction of the second largest variation PCA is most suitable when the observations are Normally distributed or at least continuous. The output is hard to interpret if data are discrete. Ella Bingham 50
52 (Figure: a scatter plot of observations with the 1st and 2nd principal component directions drawn through the data.) Ella Bingham 51
53 Computing the PCA Make the data zero mean (Important!) by subtracting from each column (variable) the mean of the column. Denote the new data by X. Compute the SVD: X = U S V^T. The principal components are the right singular vectors in V. The variance along the ith principal component is given by the square of the ith singular value σ_i^2. The new coordinates of the observations in the PCA basis are US. The new observations US are uncorrelated and behave nicely: remember we talked about measuring the distances in the 1st lecture Ella Bingham 52
54 The new observations are of smaller dimensionality and hopefully less noisy. They can be used in clustering, classification etc. instead of the original observations. Ella Bingham 53
55 PCA and eigenvalue decomposition You might have heard that PCA is given by the eigenvalue decomposition of the covariance matrix of the (zero-mean) data Correct. But the eigenvalue decomposition is numerically less stable than the singular value decomposition, so we prefer the latter. There are nice connections between the eigenvectors and singular vectors Ella Bingham 54
56 Matlab code

% Data matrix X, columns: variables, rows: data points
% Matlab function for computing the first k principal components of X.
function [pc,score] = pca(x,k);
[rows,cols] = size(x);
Xmeans = repmat(mean(x,1),rows,1);  % matrix whose rows are the column means
X = x - Xmeans;                     % centering the data
[U,S,V] = svds(X,k);                % k is the number of pc:s desired
pc = V;
score = U*S;
% now x = score*pc' + Xmeans;

Ella Bingham 55
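A possible use of the function above (assuming a data matrix X is available; this calls the pca function defined on the slide, not a toolbox function):

[pc, score] = pca(X, 2);                 % first two principal components
plot(score(:, 1), score(:, 2), '.');     % observations in the new 2-dimensional coordinates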
57 Literature: Tutorial site by Todd Will: Wikipedia: Singular value decomposition Ella Bingham 56
58 Other uses of SVD Google s PageRank: ranking the pages to be displayed at a given query Spectral ordering: ranking the observations according to their similarity with respect to which variables are active ( nonzero ) Ella Bingham 57
59 Independent component analysis (ICA) A statistical technique that represents multidimensional data as a linear combination of non-Gaussian variables ("independent components"). The variables are statistically as independent as possible. Similar to but stronger than PCA: suitable for non-normally distributed data. ICA has many applications in data analysis, source separation, and feature extraction. An efficient algorithm, FastICA, implemented in Matlab, C++, R etc. Ella Bingham 58
60 ICA components of natural images are edges etc. Ella Bingham 59
61 Other latent variable methods Nonnegative matrix factorization Probabilistic PCA for continuous data (does not assume that data is Normally distributed) Mixtures of Bernoulli, Probabilistic latent semantic analysis, Multinomial PCA, Latent Dirichlet Allocation, Logistic PCA etc. for discrete-valued data Ella Bingham 60
62 Random projection Not a latent variable method actually, but a dimensionality reduction method. Suitable for high dimensional data where the variables are comparable to each other and equally important: e.g. pixels of an image; words of a document. Take a random matrix R_{D×K} whose entries are Normally distributed. Multiply the original data matrix by it: X^RP_{N×K} = X_{N×D} R_{D×K}. The new data is of smaller dimensionality Ella Bingham 61
63 Random projection preserves the similarities of data points: on average, points that are close to each other in the original data X are still close to each other in X^RP. Very useful if we must measure the similarities of high dimensional data points (images, text documents etc.). Successful in noise reduction in image data, too. Computational complexity: cheaper than PCA, and comparable to discrete cosine transform (DCT). Preserves similarities almost as well as PCA and better than DCT. Ella Bingham 62
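A minimal Matlab sketch of random projection (K = 50 is an arbitrary choice for illustration):

[N, D] = size(X);
K = 50;
R = randn(D, K) / sqrt(K);               % random matrix with Normally distributed entries
Xrp = X * R;                             % new N x K data; pairwise distances are roughly preserved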
64 Ella Bingham and Heikki Mannila: Random projection in dimensionality reduction: applications to image and text data, Proc. KDD-2001. Ella Bingham 63
65 Feature selection Select the variables according to the final goal of your modeling task (regression, classification, clustering...). Suitable if you have a modest number of variables, and you want to find a smaller subset of them. Forward stepwise selection: For each of your variables separately, construct a model. Measure the error of each model (regression error, classification error, clustering error...) Ella Bingham 64
66 Take the variable that gave the smallest error. Then construct a 2-variable model using the chosen variable and all other variables separately. Again measure the error of each 2-variable model and choose the 2nd variable which gave the smallest error Continue: at each step, choose the variable which (together with the chosen variables) gives the smallest error Stop when adding new variables does not significantly decrease the error any more, compared to the previous step. Ella Bingham 65
67 Backward stepwise selection works the other way round: Start by constructing a model using all variables. Then remove one variable and measure the error; repeat for each variable in turn. In which case was the error smallest? Continue with this subset of variables. Again try removing each of the remaining variables in turn, and find the subset of variables giving the smallest error. Stop when removing further variables increases the error significantly, compared to the previous step. Ella Bingham 66
68 This is unsupervised learning For time series data Segmentation Partition the time points into segments such that observations inside a segment are similar and different segments are dissimilar. Easy for one-dimensional data. Applicable to multi-dimensional data too. A theoretically optimal method is dynamic programming, but it is computationally slow. Ella Bingham 67
69 Can also combine with PCA: dimensionality reduction first, then segmentation of the PCA representation of the data. Ella Bingham, Aristides Gionis, Niina Haiminen, Heli Hiisilä, Heikki Mannila, Evimaria Terzi: Segmentation and dimensionality reduction, SIAM Conference on Data Mining. Ella Bingham 68
70 This is unsupervised learning Clustering Task: Divide the observations into groups (clusters) so that similar observations go together Must choose: Which method to use? How to measure the goodness of clustering? Number of clusters K Ella Bingham 69
71 Three types of clustering methods Partition the data directly into K clusters (example: K-means) Hierarchical methods: iteratively merge the observations into groups, and merge similar groups together, until you have K groups Probabilistic mixture models (Will be omitted) Ella Bingham 70
72 Measuring the goodness of clustering Choose a representative for each cluster C_k as the average member: For numerical data, r_k = (1/|C_k|) Σ_{x ∈ C_k} x. For categorical data, r_k = the most common element in the cluster. We will call r_k the mean of the cluster. Observations inside each cluster should be similar: Minimize the within cluster (wc) variation. Measure the distance between each observation and the cluster mean: wc_total = Σ_{k=1}^{K} wc(C_k) = Σ_{k=1}^{K} Σ_{x ∈ C_k} d(x, r_k). Different clusters should be dissimilar: Maximize the between cluster variation. Ella Bingham 71
73 K-means clustering Start from a random clustering, and iteratively change the clustering so that the goodness of clustering (=similarity of observations in a cluster) increases at each step. Continue until the clustering does not change. Ella Bingham 72
74 Pseudo code:

Algorithm K-means(data, K)
  Initialize the cluster means r_1, ..., r_K by picking K observations
  k(x) = argmin_{k=1,...,K} d(x, r_k)   is the cluster of x
  C_k = {x : k(x) = k} for all k = 1, ..., K
  while any C_k was changed
    r_k = (1/|C_k|) Σ_{x ∈ C_k} x   is the mean of cluster k; repeat for all k
    k(x) = argmin_{k=1,...,K} d(x, r_k)   is the cluster of x
    C_k = {x : k(x) = k} for all k = 1, ..., K
  end while

Ella Bingham 73
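A minimal Matlab implementation of the pseudocode above, for an N x D matrix X and squared Euclidean distance. This is only a sketch: it does not handle clusters that become empty, and Matlab's Statistics Toolbox also provides a ready-made kmeans function.

K = 3;
N = size(X, 1);
p = randperm(N);
r = X(p(1:K), :);                         % initialize the cluster means by picking K observations
labels = zeros(N, 1);
changed = true;
while changed
    dist = zeros(N, K);
    for k = 1:K
        dist(:, k) = sum((X - repmat(r(k, :), N, 1)).^2, 2);   % squared distance to mean k
    end
    [~, newlabels] = min(dist, [], 2);    % assign each observation to the nearest cluster mean
    changed = any(newlabels ~= labels);
    labels = newlabels;
    for k = 1:K
        r(k, :) = mean(X(labels == k, :), 1);                  % recompute the cluster means
    end
end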
75 Properties of K-means Often works well in practice. Assumes the clusters are of nearly similar size, and roughly ball-shaped. Time needed to run K-means is linearly dependent on K, N, l and D, where l is the number of while iterations (typically smallish). So this is reasonably quick. Problem: might find local minima: clusterings that are not as good as they could be Ella Bingham 74
76 Solution: repeat with several different initializations, and choose the best clustering (in terms of within cluster variation, or other). Choosing K: no clear answer. Some attempts: Stability of clustering: a small change in the data will only result in a small change in the clustering, if K is correct. Try different K = 1, 2, ... and compute the within cluster variation (wc) at each K. The wc decreases steeply until the correct K is found, and then only decreases a little. In hierarchical clustering the output shows the behavior at different K Ella Bingham 75
77 Hierarchical clustering Visualized as a dendrogram, a tree Choice of K not required as an input Agglomerative (bottom-up): Start by placing each object in its own cluster, and iteratively merge the clusters that are close to each other. Divisive (top-down): Start by placing all objects in the same cluster, and iteratively divide the cluster into two smaller clusters. Ella Bingham 76
78 In both approaches, the change in the within cluster error is seen at each step. The choice of K can be based on this. Ella Bingham 77
79 Other clustering methods Self-organizing map for continuous data Spectral clustering for discrete or categorical data (Closely related to spectral ordering, mentioned along with SVD) Ella Bingham 78
80 This is supervised learning k-nearest-neighbors classification In your training data, you are given observations and their class labels. For a new observation, find the k nearest neighbors among the training observations using a suitable distance function. Classify the new observation by the majority vote of those k labeled observations Ella Bingham 79
81 Pseudo code:

Algorithm KNN(trainX, trainY, testX, k)
  for each observation x in testX
    compute the Euclidean distance between x and all observations in trainX
    find the k observations having the smallest distance to x
    find the most frequent class label in these k observations
    output that class label for observation x
  end for

Ella Bingham 80
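A small Matlab sketch of the same algorithm, using squared Euclidean distances and a majority vote; trainX, trainY, testX are as in the pseudocode, the class labels are assumed numeric, and k = 5 is just an example:

k = 5;
M = size(testX, 1);
testY = zeros(M, 1);
for i = 1:M
    d = sum((trainX - repmat(testX(i, :), size(trainX, 1), 1)).^2, 2);   % distances to all training observations
    [~, order] = sort(d);
    neighbors = trainY(order(1:k));       % labels of the k nearest training observations
    testY(i) = mode(neighbors);           % most frequent class label among them
end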
82 Using SVD as preprocessing prior to classification: One might guess that representing a new handwritten 3 using the singular vectors from data containing several 3s is easier than when using the singular vectors from data containing several 5s. Easier meaning that the representation error is smaller. Application to classification: find the SVD basis of each of the numbers 1, 2, .... Classify a new observation into the set whose SVD basis is most suitable for representing the observation. Ella Bingham 81
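A sketch of this idea in Matlab; traindata{c} is assumed to hold the training observations of class c as rows, x is a new observation (row vector), and K is the number of basis vectors per class. All of these names are assumptions made for illustration.

numclasses = numel(traindata);
residual = zeros(1, numclasses);
for c = 1:numclasses
    [U, S, V] = svd(traindata{c}, 'econ');
    B = V(:, 1:K);                        % SVD basis (right singular vectors) of class c
    residual(c) = norm(x - (x * B) * B'); % error of representing x in this basis
end
[~, predicted] = min(residual);           % classify into the class with the smallest error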
83 Support vector machine Supervised learning Classification when the classes are not linearly separable State-of-the-art method of classification today Matlab: svmtrain Ella Bingham 82
84 Decision trees Supervised learning: prediction and classification Suitable especially when the data contain both numerical and categorical variables Then it is difficult to construct a distance function for regression or k-nearest-neighbors etc. A decision tree can help Ella Bingham 83
85 Linear regression Supervised learning, continuous valued data. Simple if there is one unknown variable to be predicted. Multiple unknown variables: matrix Y contains them as columns. The weights solve X^T X w = X^T Y. This is a set of linear equations, solved by Gaussian elimination, or Matlab's \ Ella Bingham 84
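The corresponding Matlab one-liners (a sketch; X is assumed to be the N x D matrix of known variables and Y the variables to be predicted):

w = (X' * X) \ (X' * Y);     % solve the normal equations X'X w = X'Y
w = X \ Y;                   % equivalent least squares solution via Matlab's backslash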
86 This is unsupervised learning For categorical data Contingency tables Interaction analysis, association analysis,... Are two phenomena dependent? Count the numbers of occurrence and non-occurrence of both of them, and apply a Chi squared test or binomial test What is the application area here? Ella Bingham 85
87 Association rules Applicable to both supervised and unsupervised learning. Applicable to data in which the numerical values are not interesting, but only the presence or absence of a variable in an observation. Example: market basket data. Observations are the customers, and variables are the products. We are not interested in how many pieces of each product the customer is buying. Applicable when the data is quite sparse (each customer only buys a small number of different products, compared to the total number of products available). Do you have such data? Ella Bingham 86
88 Gather the frequent itemsets in the data: sets of products that are typically bought together. Construct rules: "if a customer is buying bread and diapers, he will also buy milk." Observing a subset of variables implies that another subset of variables will be observed too. Ella Bingham 87