Data Mining: Practical Machine Learning Tools and Techniques
Credibility: Evaluating what's been learned
Slides for Chapter 5 of Data Mining by I. H. Witten, E. Frank and M. A. Hall

- Issues: training, testing, tuning
- Predicting performance: confidence limits
- Holdout, cross-validation, bootstrap
- Comparing schemes: the t-test
- Predicting probabilities: loss functions
- Cost-sensitive measures
- Evaluating numeric prediction
- The Minimum Description Length principle

Evaluation: the key to success
- How predictive is the model we learned?
- Error on the training data is not a good indicator of performance on future data; otherwise 1-NN would be the optimum classifier!
- Simple solution that can be used if lots of (labeled) data is available: split the data into a training and a test set
- However, (labeled) data is usually limited, so more sophisticated techniques need to be used

Issues in evaluation
- Statistical reliability of estimated differences in performance (significance tests)
- Choice of performance measure:
  - Number of correct classifications
  - Accuracy of probability estimates
  - Error in numeric predictions
- Costs assigned to different types of errors: many practical applications involve costs
Training and testing I
- Natural performance measure for classification problems: error rate
  - Success: instance's class is predicted correctly
  - Error: instance's class is predicted incorrectly
  - Error rate: proportion of errors made over the whole set of instances
- Resubstitution error: error rate obtained from the training data
- Resubstitution error is (hopelessly) optimistic!

Training and testing II
- Test set: independent instances that have played no part in the formation of the classifier
  - Assumption: both training data and test data are representative samples of the underlying problem
- Test and training data may differ in nature
  - Example: classifiers built using customer data from two different towns A and B
  - To estimate the performance of a classifier from town A in a completely new town, test it on data from B

Note on parameter tuning
- It is important that the test data is not used in any way to create the classifier
- Some learning schemes operate in two stages:
  - Stage 1: build the basic structure
  - Stage 2: optimize parameter settings
- The test data can't be used for parameter tuning!
- Proper procedure uses three sets: training data, validation data, and test data (a minimal split sketch follows below)
  - Validation data is used to optimize parameters

Making the most of the data
- Once evaluation is complete, all the data can be used to build the final classifier
- Generally, the larger the training data the better the classifier (but returns diminish)
- The larger the test data, the more accurate the error estimate
- Holdout procedure: method of splitting the original data into a training and a test set
  - Dilemma: ideally both the training set and the test set should be large!
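To make the three-set procedure concrete, here is a minimal Python sketch (illustrative only: the function name three_way_split and the split fractions are arbitrary choices, not something prescribed by the slides) that sets aside a test portion before any tuning takes place:

```python
import random

def three_way_split(instances, train_frac=0.6, val_frac=0.2, seed=42):
    """Split a dataset into training, validation, and test portions.

    The test portion is set aside and must not be touched until the very
    end; the validation portion is what parameter tuning may look at.
    """
    data = list(instances)
    random.Random(seed).shuffle(data)       # random subsampling
    n = len(data)
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    train = data[:n_train]
    val = data[n_train:n_train + n_val]
    test = data[n_train + n_val:]           # untouched until the final evaluation
    return train, val, test

# Toy usage: 100 labelled instances (integers standing in for real instances)
train, val, test = three_way_split(range(100))
print(len(train), len(val), len(test))      # 60 20 20
```

Parameter tuning would use only the training and validation portions; the test portion is consulted once, at the very end.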
Predicting performance
- Assume the estimated error rate is 25%. How close is this to the true error rate?
  - Depends on the amount of test data
- Prediction is just like tossing a (biased!) coin
  - "Head" is a success, "tail" is an error
- In statistics, a succession of independent events like this is called a Bernoulli process
  - Statistical theory provides us with confidence intervals for the true underlying proportion

Confidence intervals
- We can say: p lies within a certain specified interval with a certain specified confidence
- Example: S = 750 successes in N = 1000 trials
  - Estimated success rate: 75%
  - How close is this to the true success rate p?
  - Answer: with 80% confidence, p lies in [73.2%, 76.7%]
- Another example: S = 75 and N = 100
  - Estimated success rate: 75%
  - With 80% confidence, p lies in [69.1%, 80.1%]

Mean and variance
- Mean and variance for a Bernoulli trial: p, p(1 − p)
- Expected success rate f = S/N
- Mean and variance for f: p, p(1 − p)/N
- For large enough N, f follows a Normal distribution
- The c% confidence interval [−z ≤ X ≤ z] for a random variable with 0 mean is given by: Pr[−z ≤ X ≤ z] = c
- With a symmetric distribution: Pr[−z ≤ X ≤ z] = 1 − 2 × Pr[X ≥ z]

Confidence limits
- Confidence limits for the normal distribution with 0 mean and a variance of 1:

  Pr[X ≥ z]    z
  0.1%         3.09
  0.5%         2.58
  1%           2.33
  5%           1.65
  10%          1.28
  20%          0.84
  40%          0.25

- Thus: Pr[−1.65 ≤ X ≤ 1.65] = 90%
- To use this we have to reduce our random variable f to have 0 mean and unit variance
Transforming f
- Transformed value for f: (f − p) / sqrt(p(1 − p)/N)
  (i.e. subtract the mean and divide by the standard deviation)
- Resulting equation: Pr[ −z ≤ (f − p) / sqrt(p(1 − p)/N) ≤ z ] = c
- Solving for p:
  p = ( f + z²/(2N) ± z · sqrt(f/N − f²/N + z²/(4N²)) ) / ( 1 + z²/N )

Examples (a small computation below reproduces these intervals)
- f = 75%, N = 1000, c = 80% (so that z = 1.28): p in [0.732, 0.767]
- f = 75%, N = 100, c = 80% (so that z = 1.28): p in [0.691, 0.801]
- Note that the normal distribution assumption is only valid for large N (i.e. N > 100)
- f = 75%, N = 10, c = 80% (so that z = 1.28): p in [0.549, 0.881] (should be taken with a grain of salt)

Holdout estimation
- What to do if the amount of data is limited?
- The holdout method reserves a certain amount for testing and uses the remainder for training
  - Usually: one third for testing, the rest for training
- Problem: the samples might not be representative
  - Example: a class might be missing in the test data
- Advanced version uses stratification
  - Ensures that each class is represented with approximately equal proportions in both subsets

Repeated holdout method
- The holdout estimate can be made more reliable by repeating the process with different subsamples
  - In each iteration, a certain proportion is randomly selected for training (possibly with stratification)
  - The error rates on the different iterations are averaged to yield an overall error rate
- This is called the repeated holdout method
- Still not optimum: the different test sets overlap
  - Can we prevent overlapping?
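The interval formula above is easy to evaluate directly. A minimal Python sketch (the helper name wilson_interval is ours, not from the slides) reproduces the three examples:

```python
import math

def wilson_interval(f, n, z=1.28):
    """Confidence interval for the true success rate p, given an observed
    success rate f on n test instances (z = 1.28 gives ~80% confidence).

    Implements p = (f + z^2/2N +- z*sqrt(f/N - f^2/N + z^2/4N^2)) / (1 + z^2/N).
    """
    center = f + z * z / (2 * n)
    spread = z * math.sqrt(f / n - f * f / n + z * z / (4 * n * n))
    denom = 1 + z * z / n
    return (center - spread) / denom, (center + spread) / denom

# The slide examples (f = 75% observed success rate):
print(wilson_interval(0.75, 1000))   # ~ (0.732, 0.767)
print(wilson_interval(0.75, 100))    # ~ (0.691, 0.801)
print(wilson_interval(0.75, 10))     # ~ (0.549, 0.881), grain of salt for small N
```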
Cross-validation
- Cross-validation avoids overlapping test sets
  - First step: split the data into k subsets of equal size
  - Second step: use each subset in turn for testing, the remainder for learning (training)
- Called k-fold cross-validation
- Often the subsets are stratified before the cross-validation is performed
- The error estimates are averaged to yield an overall error estimate
- Illustration for k = 3 (T = test fold, L = learning folds):

  Round   S1   S2   S3
  1       T    L    L
  2       L    T    L
  3       L    L    T

More on cross-validation (a fold-assignment sketch follows below)
- Standard method for evaluation: stratified ten-fold cross-validation
- Why ten? Extensive experiments have shown that this is the best choice to get an accurate estimate
  - There is also some theoretical evidence for this
- Stratification reduces the estimate's variance
- Even better: repeated stratified cross-validation
  - E.g. ten-fold cross-validation is repeated ten times and the results are averaged (reduces the variance)

Leave-One-Out cross-validation
- Leave-One-Out: a particular form of cross-validation:
  - Set the number of folds to the number of training instances
  - I.e., for n training instances, build the classifier n times
- Makes best use of the data
- Involves no random subsampling
- Very computationally expensive (exception: NN)

Leave-One-Out CV and stratification
- Disadvantage of Leave-One-Out CV: stratification is not possible
  - It guarantees a non-stratified sample because there is only one instance in the test set!
- Extreme example: random dataset split equally into two classes
  - Best inducer predicts the majority class
  - 50% accuracy on fresh data
  - Leave-One-Out CV estimate is 100% error!
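A stratified fold assignment can be sketched as follows (plain Python; the function name stratified_folds is illustrative, and toolkits such as WEKA provide this directly):

```python
import random
from collections import defaultdict

def stratified_folds(labels, k=10, seed=1):
    """Assign instance indices to k folds so that each class is represented
    in roughly equal proportions in every fold (stratification)."""
    by_class = defaultdict(list)
    for i, y in enumerate(labels):
        by_class[y].append(i)
    rng = random.Random(seed)
    folds = [[] for _ in range(k)]
    for idxs in by_class.values():
        rng.shuffle(idxs)
        for pos, i in enumerate(idxs):      # deal each class out round-robin
            folds[pos % k].append(i)
    return folds

# Each fold serves once as the test set; the other k-1 folds form the training set.
labels = ["yes"] * 70 + ["no"] * 30
for test_fold in stratified_folds(labels, k=10):
    train_idx = [i for i in range(len(labels)) if i not in set(test_fold)]
    # ... build the classifier on train_idx, evaluate on test_fold,
    #     then average the k error estimates
```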
The bootstrap
- CV uses sampling without replacement
  - The same instance, once selected, cannot be selected again for a particular training/test set
- The bootstrap uses sampling with replacement to form the training set
  - Sample a dataset of n instances n times with replacement to form a new dataset of n instances
  - Use this data as the training set
  - Use the instances from the original dataset that don't occur in the new training set for testing

The 0.632 bootstrap
- Also called the 0.632 bootstrap
- A particular instance has a probability of 1 − 1/n of not being picked
- Thus its probability of ending up in the test data is: (1 − 1/n)^n ≈ e^(−1) ≈ 0.368
- This means the training data will contain approximately 63.2% of the instances

Estimating error with the bootstrap (see the sketch below)
- The error estimate on the test data will be very pessimistic
  - Trained on just ~63% of the instances
- Therefore, combine it with the resubstitution error:
  err = 0.632 × e_test_instances + 0.368 × e_training_instances
- The resubstitution error gets less weight than the error on the test data
- Repeat the process several times with different replacement samples; average the results

More on the bootstrap
- Probably the best way of estimating performance for very small datasets
- However, it has some problems
  - Consider the random dataset from above
  - A perfect memorizer will achieve 0% resubstitution error and ~50% error on the test data
  - Bootstrap estimate for this classifier: err = 0.632 × 50% + 0.368 × 0% = 31.6%
  - True expected error: 50%
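A sketch of the 0.632 bootstrap procedure, assuming a caller-supplied train_and_error function as a stand-in for whatever learning scheme is being evaluated:

```python
import random

def bootstrap_632(n_instances, train_and_error, repeats=10, seed=7):
    """0.632 bootstrap estimate of the error rate.

    `train_and_error(train_idx, test_idx)` is a placeholder assumed to build
    a classifier on the training indices and return
    (error_on_test, resubstitution_error)."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(repeats):
        # sample n indices *with replacement* to form the training set
        train_idx = [rng.randrange(n_instances) for _ in range(n_instances)]
        # instances never picked (~36.8% of them) form the test set
        test_idx = [i for i in range(n_instances) if i not in set(train_idx)]
        e_test, e_train = train_and_error(train_idx, test_idx)
        # combine: test error weighted 0.632, resubstitution error 0.368
        estimates.append(0.632 * e_test + 0.368 * e_train)
    return sum(estimates) / len(estimates)

# The slide's memorizer example: 0% resubstitution error, ~50% test error
print(bootstrap_632(1000, lambda tr, te: (0.5, 0.0)))   # ~0.316, vs. true 50%
```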
Comparing data mining schemes
- Frequent question: which of two learning schemes performs better?
- Note: this is domain dependent!
- Obvious way: compare 10-fold CV estimates
- Generally sufficient in applications (we don't lose if the chosen method is not truly better)
- However, what about machine learning research?
  - Need to show convincingly that a particular method works better

Comparing schemes II
- Want to show that scheme A is better than scheme B in a particular domain
  - For a given amount of training data
  - On average, across all possible training sets
- Let's assume we have an infinite amount of data from the domain:
  - Sample infinitely many datasets of a specified size
  - Obtain a cross-validation estimate on each dataset for each scheme
  - Check whether the mean accuracy for scheme A is better than the mean accuracy for scheme B

Paired t-test
- In practice we have limited data and a limited number of estimates for computing the mean
- Student's t-test tells us whether the means of two samples are significantly different
- In our case the samples are cross-validation estimates for different datasets from the domain
- Use a paired t-test because the individual samples are paired
  - The same CV is applied twice
- x_1, x_2, ..., x_k and y_1, y_2, ..., y_k are the 2k samples for the k different datasets
- m_x and m_y are the means
- William Gosset: born 1876 in Canterbury; died 1937 in Beaconsfield, England. Obtained a post as a chemist in the Guinness brewery in Dublin. Invented the t-test to handle small samples for quality control in brewing. Wrote under the name "Student".

Distribution of the means
- With enough samples, the mean of a set of independent samples is normally distributed
- Estimated variances of the means are σ_x²/k and σ_y²/k
- If μ_x and μ_y are the true means, then
  (m_x − μ_x) / sqrt(σ_x²/k)  and  (m_y − μ_y) / sqrt(σ_y²/k)
  are approximately normally distributed with mean 0 and variance 1
Student's distribution
- With small samples (k < 100) the mean follows Student's distribution with k − 1 degrees of freedom
- Assuming we have 10 estimates: 9 degrees of freedom
- Confidence limits (9 degrees of freedom vs. the normal distribution):

  9 degrees of freedom          Normal distribution
  Pr[X ≥ z]     z               Pr[X ≥ z]     z
  0.1%          4.30            0.1%          3.09
  0.5%          3.25            0.5%          2.58
  1%            2.82            1%            2.33
  5%            1.83            5%            1.65
  10%           1.38            10%           1.28
  20%           0.88            20%           0.84

Distribution of the differences
- Let m_d = m_x − m_y
- The difference of the means (m_d) also has a Student's distribution with k − 1 degrees of freedom
- Let σ_d² be the variance of the difference
- The standardized version of m_d is called the t-statistic:
  t = m_d / sqrt(σ_d²/k)
- We use t to perform the t-test

Performing the test (a worked sketch follows below)
- Fix a significance level α
  - If a difference is significant at the α% level, there is a (100 − α)% chance that the true means differ
- Divide the significance level by two because the test is two-tailed
  - I.e. the true difference can be +ve or −ve
- Look up the value for z that corresponds to α/2
- If t ≥ z or t ≤ −z then the difference is significant
  - I.e. the null hypothesis (that the difference is zero) can be rejected

Unpaired observations
- If the CV estimates are from different datasets, they are no longer paired
  (or maybe we have k estimates for one scheme, and j estimates for the other one)
- Then we have to use an unpaired t-test with min(k, j) − 1 degrees of freedom
- The estimate of the variance of the difference of the means becomes:
  σ_x²/k + σ_y²/j
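The paired t-test boils down to a few lines of arithmetic. A minimal sketch (the accuracy lists are made-up illustrative values, not from the slides):

```python
import math

def paired_t_statistic(xs, ys):
    """t statistic for the paired t-test on k matched cross-validation
    estimates xs and ys (one pair per dataset): t = m_d / sqrt(sigma_d^2 / k)."""
    k = len(xs)
    diffs = [x - y for x, y in zip(xs, ys)]
    m_d = sum(diffs) / k
    var_d = sum((d - m_d) ** 2 for d in diffs) / (k - 1)   # sample variance of the differences
    return m_d / math.sqrt(var_d / k)

# 10 paired CV accuracy estimates -> 9 degrees of freedom.
# For a two-tailed test at the 5% level, compare |t| against ~2.26 (t-table, 9 d.f.).
a = [0.81, 0.79, 0.84, 0.80, 0.83, 0.78, 0.82, 0.81, 0.80, 0.79]
b = [0.78, 0.77, 0.80, 0.79, 0.80, 0.76, 0.79, 0.78, 0.77, 0.78]
print(paired_t_statistic(a, b))
```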
Dependent estimates
- We assumed that we have enough data to create several datasets of the desired size
- Need to re-use data if that's not the case
  - E.g. running cross-validations with different randomizations on the same data
- Samples become dependent, so insignificant differences can become significant
- A heuristic test is the corrected resampled t-test:
  - Assume we use the repeated hold-out method k times, with n_1 instances for training and n_2 for testing
  - The new test statistic is:
    t = m_d / sqrt( (1/k + n_2/n_1) · σ_d² )

Predicting probabilities
- Performance measure so far: success rate
- Also called the 0-1 loss function:
  loss_i = 0 if the prediction is correct, 1 if the prediction is incorrect
- Most classifiers produce class probabilities
- Depending on the application, we might want to check the accuracy of the probability estimates
- 0-1 loss is not the right thing to use in those cases

Quadratic loss function (both losses are computed in the sketch below)
- p_1, ..., p_k are probability estimates for an instance
- c is the index of the instance's actual class
- a_1, ..., a_k = 0, except for a_c, which is 1
- Quadratic loss is: Σ_j (p_j − a_j)² = Σ_{j≠c} p_j² + (1 − p_c)²
- Want to minimize E[ Σ_j (p_j − a_j)² ]
- Can show that this is minimized when p_j = p_j*, the true probabilities

Informational loss function
- The informational loss function is −log₂(p_c), where c is the index of the instance's actual class
  - Number of bits required to communicate the actual class
- Let p_1*, ..., p_k* be the true class probabilities
- Then the expected value of the loss function is:
  −p_1* log₂ p_1 − ... − p_k* log₂ p_k
- Justification: minimized when p_j = p_j*
- Difficulty: the zero-frequency problem
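Both loss functions are straightforward to evaluate from a vector of class probability estimates. A small sketch (the function names are ours, for illustration):

```python
import math

def quadratic_loss(probs, actual):
    """Sum_j (p_j - a_j)^2, where a_j is 1 for the actual class and 0 otherwise."""
    return sum((p - (1.0 if j == actual else 0.0)) ** 2 for j, p in enumerate(probs))

def informational_loss(probs, actual):
    """-log2 p_c for the actual class c (infinite if it was given probability 0)."""
    p_c = probs[actual]
    return float("inf") if p_c == 0 else -math.log2(p_c)

# Three-class instance whose actual class is index 0:
probs = [0.7, 0.2, 0.1]
print(quadratic_loss(probs, 0))       # 0.3^2 + 0.2^2 + 0.1^2 = 0.14
print(informational_loss(probs, 0))   # -log2(0.7) ~ 0.515 bits
```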
Discussion
- Which loss function to choose?
  - Both encourage honesty
- The quadratic loss function takes into account all class probability estimates for an instance
- Informational loss focuses only on the probability estimate for the actual class
- Quadratic loss is bounded by 1 + Σ_j p_j²: it can never exceed 2
- Informational loss can be infinite
- Informational loss is related to the MDL principle [later]

Counting the cost
- In practice, different types of classification errors often incur different costs
- Examples:
  - Terrorist profiling ("not a terrorist" is correct 99.99% of the time)
  - Loan decisions
  - Oil slick detection
  - Fault diagnosis
  - Promotional mailing

Counting the cost
- The confusion matrix:

                       Predicted class
                       Yes               No
  Actual class  Yes    True positive     False negative
                No     False positive    True negative

- There are many other types of cost!
  - E.g.: the cost of collecting training data

Aside: the kappa statistic (see the sketch below)
- Two confusion matrices for a 3-class problem: actual predictor (left) vs. random predictor (right) (matrices shown on the slide)
- Number of successes: sum of entries in the diagonal (D)
- Kappa statistic:
  (D_observed − D_random) / (D_perfect − D_random)
- Measures relative improvement over a random predictor
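The kappa computation from a confusion matrix, as a sketch (the example matrix is hypothetical, chosen only to exercise the formula):

```python
def kappa(confusion):
    """Kappa statistic from a square confusion matrix (rows = actual class,
    columns = predicted class): (D_observed - D_random) / (D_perfect - D_random)."""
    n = sum(sum(row) for row in confusion)
    d_observed = sum(confusion[i][i] for i in range(len(confusion)))
    d_perfect = n
    # expected successes of a random predictor with the same marginal totals
    d_random = sum(sum(confusion[i]) *                    # actual-class total
                   sum(row[i] for row in confusion)       # predicted-class total
                   for i in range(len(confusion))) / n
    return (d_observed - d_random) / (d_perfect - d_random)

# Hypothetical 3-class confusion matrix for an actual (non-random) predictor:
m = [[88, 10,  2],
     [14, 40,  6],
     [18, 10, 12]]
print(kappa(m))   # ~0.49: roughly halfway between random and perfect
```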
Classification with costs
- Two cost matrices (shown on the slide)
- Success rate is replaced by average cost per prediction
  - Cost is given by the appropriate entry in the cost matrix

Cost-sensitive classification
- Can take costs into account when making predictions
  - Basic idea: only predict the high-cost class when very confident about the prediction
- Given: predicted class probabilities
  - Normally we just predict the most likely class
  - Here, we should make the prediction that minimizes the expected cost
- Expected cost: dot product of the vector of class probabilities and the appropriate column in the cost matrix
- Choose the column (class) that minimizes the expected cost

Cost-sensitive classification: example (the sketch below reproduces this threshold)
- Cost matrix (rows: actual class, columns: predicted class):

                 predicted Spam   predicted Ham
  actual Spam          0                1
  actual Ham          10                0

- Let p_s = Pr(Spam)
- Regard the expected costs for each class (dot product of the vector of class probabilities and the appropriate column in the cost matrix):
  - EC(Spam) = p_s · 0 + (1 − p_s) · 10
  - EC(Ham) = p_s · 1 + (1 − p_s) · 0
- Decide for Spam iff EC(Spam) < EC(Ham): p_s > 10/11

Cost-sensitive learning
- So far we haven't taken costs into account at training time
- Most learning schemes do not perform cost-sensitive learning
  - They generate the same classifier no matter what costs are assigned to the different classes
  - Example: standard decision tree learner
- Simple methods for cost-sensitive learning:
  - Resampling of instances according to costs
  - Weighting of instances according to costs
- Some schemes can take costs into account by varying a parameter, e.g. naïve Bayes
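A minimal sketch of minimum-expected-cost prediction, reproducing the spam/ham threshold above (the function name is illustrative):

```python
def min_expected_cost_class(class_probs, cost_matrix):
    """Pick the prediction that minimizes expected cost.

    cost_matrix[actual][predicted] is the cost of predicting `predicted` when
    the true class is `actual`; the expected cost of a prediction is the dot
    product of the class-probability vector with that column."""
    n_classes = len(class_probs)
    expected = [sum(class_probs[a] * cost_matrix[a][p] for a in range(n_classes))
                for p in range(n_classes)]
    return min(range(n_classes), key=lambda p: expected[p]), expected

# Slide example: classes [spam, ham], cost 1 for a missed spam, 10 for a false alarm.
costs = [[0, 1],    # actual spam: predicting spam costs 0, predicting ham costs 1
         [10, 0]]   # actual ham:  predicting spam costs 10, predicting ham costs 0
for p_spam in (0.90, 0.95):
    pred, ec = min_expected_cost_class([p_spam, 1 - p_spam], costs)
    print(p_spam, ["spam", "ham"][pred], ec)   # spam only once p_spam > 10/11 ~ 0.909
```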
Lift charts
- In practice, costs are rarely known
- Decisions are usually made by comparing possible scenarios
- Example: promotional mailout to 1,000,000 households
  - Mail to all; 0.1% respond (1000)
  - A data mining tool identifies a subset of the 100,000 most promising; 0.4% of these respond (400)
    (40% of the responses for 10% of the cost may pay off)
  - Identify a subset of the 400,000 most promising; 0.2% respond (800)
- A lift chart allows a visual comparison

Generating a lift chart (see the sketch below)
- Sort instances according to the predicted probability of being positive
  (the slide shows a table of predicted probabilities, in decreasing order, with actual classes Yes, Yes, No, Yes, ...)
- x axis is sample size
- y axis is number of true positives

A hypothetical lift chart
- (figure) 40% of the responses for 10% of the cost; 80% of the responses for 40% of the cost

ROC curves
- ROC curves are similar to lift charts
  - Stands for "receiver operating characteristic"
  - Used in signal detection to show the tradeoff between hit rate and false alarm rate over a noisy channel
- Differences to the lift chart:
  - y axis shows the percentage of true positives in the sample rather than the absolute number
  - x axis shows the percentage of false positives in the sample rather than the sample size
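Generating the points of a lift chart is just sorting and accumulating. A sketch (the scored instances are made-up toy values):

```python
def lift_chart_points(scored_instances):
    """Points (subset size, true positives) for a lift chart.

    `scored_instances` is a list of (predicted_probability, is_positive) pairs;
    instances are sorted by decreasing predicted probability and the cumulative
    number of true positives is recorded as the subset grows."""
    ranked = sorted(scored_instances, key=lambda s: s[0], reverse=True)
    points, tp = [(0, 0)], 0
    for size, (_, is_pos) in enumerate(ranked, start=1):
        tp += int(is_pos)
        points.append((size, tp))
    return points

# Toy example: four scored instances, sorted by predicted probability
print(lift_chart_points([(0.95, True), (0.93, True), (0.93, False), (0.88, True)]))
```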
A sample ROC curve
- (figure) Jagged curve: one set of test data; smooth curve: use cross-validation

Cross-validation and ROC curves (a sketch of the sorting step follows below)
- Simple method of getting an ROC curve using cross-validation:
  - Collect probabilities for instances in the test folds
  - Sort instances according to probabilities
- This method is implemented in WEKA
- However, this is just one possibility
  - Another possibility is to generate an ROC curve for each fold and average them

ROC curves for two schemes
- (figure) For a small, focused sample, use method A; for a larger one, use method B; in between, choose between A and B with appropriate probabilities

The convex hull
- Given two learning schemes we can achieve any point on the convex hull!
- TP and FP rates for scheme 1: t_1 and f_1
- TP and FP rates for scheme 2: t_2 and f_2
- If scheme 1 is used to predict 100·q % of the cases and scheme 2 for the rest, then:
  - TP rate for the combined scheme: q·t_1 + (1 − q)·t_2
  - FP rate for the combined scheme: q·f_1 + (1 − q)·f_2
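The corresponding computation for an ROC curve sweeps a threshold down the sorted probabilities. Again a sketch with toy data:

```python
def roc_points(scored_instances):
    """(FP rate, TP rate) points obtained by sorting instances by predicted
    probability of being positive and sweeping the threshold downwards."""
    ranked = sorted(scored_instances, key=lambda s: s[0], reverse=True)
    n_pos = sum(1 for _, y in ranked if y)
    n_neg = len(ranked) - n_pos
    tp = fp = 0
    points = [(0.0, 0.0)]
    for _, is_pos in ranked:
        if is_pos:
            tp += 1
        else:
            fp += 1
        points.append((fp / n_neg, tp / n_pos))
    return points

scores = [(0.9, True), (0.8, True), (0.7, False), (0.6, True), (0.4, False), (0.3, False)]
print(roc_points(scores))
```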
More measures... (see the sketch below)
- Percentage of retrieved documents that are relevant: precision = TP / (TP + FP)
- Percentage of relevant documents that are returned: recall = TP / (TP + FN)
- Precision/recall curves have a hyperbolic shape
- Summary measures: average precision at 20%, 50% and 80% recall (three-point average recall)
- F-measure = (2 × recall × precision) / (recall + precision)
- sensitivity × specificity = (TP / (TP + FN)) × (TN / (FP + TN))
- Area under the ROC curve (AUC): probability that a randomly chosen positive instance is ranked above a randomly chosen negative one

Summary of some measures

  Measure                  Domain                  Plot                    Explanation
  Lift chart               Marketing               TP vs. subset size      TP; subset size = (TP+FP)/(TP+FP+TN+FN)
  ROC curve                Communications          TP rate vs. FP rate     TP rate = TP/(TP+FN); FP rate = FP/(FP+TN)
  Recall-precision curve   Information retrieval   Recall vs. precision    Recall = TP/(TP+FN); precision = TP/(TP+FP)

Cost curves
- Cost curves plot expected costs directly
- Example for the case with uniform costs (i.e. error): (figure on the slide)

Cost curves: example with costs
- Probability cost function:
  p_c[+] = p[+]·C[+|−] / ( p[+]·C[+|−] + p[−]·C[−|+] )
- Normalized expected cost = fn × p_c[+] + fp × (1 − p_c[+])
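The information-retrieval measures follow directly from the confusion-matrix counts. A small sketch:

```python
def precision_recall_f(tp, fp, fn):
    """Precision, recall and F-measure from true/false positive and false negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_measure = 2 * recall * precision / (recall + precision)
    return precision, recall, f_measure

# 40 relevant documents retrieved, 10 irrelevant retrieved, 20 relevant missed:
print(precision_recall_f(tp=40, fp=10, fn=20))   # (0.8, ~0.667, ~0.727)
```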
Evaluating numeric prediction
- Same strategies: independent test set, cross-validation, significance tests, etc.
- Difference: error measures
- Actual target values: a_1, a_2, ..., a_n
- Predicted target values: p_1, p_2, ..., p_n
- Most popular measure: mean squared error
  ( (p_1 − a_1)² + ... + (p_n − a_n)² ) / n
  - Easy to manipulate mathematically

Other measures (all are computed in the sketch below)
- The root mean squared error:
  sqrt( ( (p_1 − a_1)² + ... + (p_n − a_n)² ) / n )
- The mean absolute error is less sensitive to outliers than the mean squared error:
  ( |p_1 − a_1| + ... + |p_n − a_n| ) / n
- Sometimes relative error values are more appropriate (e.g. 10% for an error of 50 when predicting 500)

Improvement on the mean
- How much does the scheme improve on simply predicting the average?
- The relative squared error is:
  ( (p_1 − a_1)² + ... + (p_n − a_n)² ) / ( (ā − a_1)² + ... + (ā − a_n)² )
- The relative absolute error is:
  ( |p_1 − a_1| + ... + |p_n − a_n| ) / ( |ā − a_1| + ... + |ā − a_n| )

Correlation coefficient
- Measures the statistical correlation between the predicted values and the actual values:
  S_PA / sqrt(S_P · S_A)
  where
  S_PA = Σ_i (p_i − p̄)(a_i − ā) / (n − 1)
  S_P  = Σ_i (p_i − p̄)² / (n − 1)
  S_A  = Σ_i (a_i − ā)² / (n − 1)
- Scale independent, between −1 and +1
- Good performance leads to large values!
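All of these error measures take only a few lines to compute. A sketch (note: for simplicity the baseline ā below is the mean of the actual test values; strictly, the mean of the training data would be used for the relative errors):

```python
import math

def numeric_error_measures(predicted, actual):
    """Mean squared, root mean squared, mean absolute, relative squared and
    relative absolute errors, plus the correlation coefficient."""
    n = len(actual)
    a_bar = sum(actual) / n
    p_bar = sum(predicted) / n
    mse = sum((p - a) ** 2 for p, a in zip(predicted, actual)) / n
    mae = sum(abs(p - a) for p, a in zip(predicted, actual)) / n
    rse = (mse * n) / sum((a - a_bar) ** 2 for a in actual)
    rae = (mae * n) / sum(abs(a - a_bar) for a in actual)
    s_pa = sum((p - p_bar) * (a - a_bar) for p, a in zip(predicted, actual)) / (n - 1)
    s_p = sum((p - p_bar) ** 2 for p in predicted) / (n - 1)
    s_a = sum((a - a_bar) ** 2 for a in actual) / (n - 1)
    corr = s_pa / math.sqrt(s_p * s_a)
    return {"MSE": mse, "RMSE": math.sqrt(mse), "MAE": mae,
            "relative squared error": rse, "relative absolute error": rae,
            "correlation": corr}

print(numeric_error_measures([510, 240, 110], [500, 250, 100]))
```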
Which measure?
- Best to look at all of them
- Often it doesn't matter
- Example (entries not preserved in the transcription are marked n/a):

  Measure                        A        B        C        D
  Root mean-squared error        67.8     n/a      n/a      n/a
  Mean absolute error            41.3     n/a      n/a      n/a
  Root relative squared error    42.2%    n/a      n/a      n/a
  Relative absolute error        43.1%    40.1%    34.8%    30.4%
  Correlation coefficient        0.88     0.88     0.89     0.91

- D best, C second-best, A and B arguable

The MDL principle
- MDL stands for Minimum Description Length
- The description length is defined as:
  space required to describe a theory + space required to describe the theory's mistakes
- In our case the theory is the classifier and the mistakes are the errors on the training data
- Aim: we seek a classifier with minimal DL
- The MDL principle is a model selection criterion

Model selection criteria
- Model selection criteria attempt to find a good compromise between:
  - The complexity of a model
  - Its prediction accuracy on the training data
- Reasoning: a good model is a simple model that achieves high accuracy on the given data
- Also known as Occam's Razor: the best theory is the smallest one that describes all the facts
- William of Ockham, born in the village of Ockham in Surrey (England) about 1285, was the most influential philosopher of the 14th century and a controversial theologian.

Elegance vs. errors
- Theory 1: a very simple, elegant theory that explains the data almost perfectly
- Theory 2: a significantly more complex theory that reproduces the data without mistakes
- Theory 1 is probably preferable
- Classical example: Kepler's three laws on planetary motion
  - Less accurate than Copernicus's latest refinement of the Ptolemaic theory of epicycles
MDL and compression (see the sketch below)
- The MDL principle relates to data compression:
  - The best theory is the one that compresses the data the most
  - I.e. to compress a dataset we generate a model and then store the model and its mistakes
- We need to compute (a) the size of the model, and (b) the space needed to encode the errors
- (b) is easy: use the informational loss function
- (a) needs a method to encode the model

MDL and Bayes's theorem
- L[T] = length of the theory
- L[E|T] = training set encoded with respect to the theory
- Description length = L[T] + L[E|T]
- Bayes's theorem gives the a posteriori probability of a theory given the data:
  Pr[T|E] = Pr[E|T] · Pr[T] / Pr[E]
- Equivalent to:
  −log Pr[T|E] = −log Pr[E|T] − log Pr[T] + log Pr[E]
  where log Pr[E] is a constant

MDL and MAP
- MAP stands for maximum a posteriori probability
- Finding the MAP theory corresponds to finding the MDL theory
- Difficult bit in applying the MAP principle: determining the prior probability Pr[T] of the theory
- Corresponds to the difficult part in applying the MDL principle: the coding scheme for the theory
- I.e. if we know a priori that a particular theory is more likely, we need fewer bits to encode it

Discussion of the MDL principle
- Advantage: makes full use of the training data when selecting a model
- Disadvantage 1: appropriate coding schemes/prior probabilities for theories are crucial
- Disadvantage 2: no guarantee that the MDL theory is the one which minimizes the expected error
- Note: Occam's Razor is an axiom!
- Epicurus's principle of multiple explanations: keep all theories that are consistent with the data
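Item (b), encoding the errors, can be illustrated directly with the informational loss. The sketch below is a loose illustration (not the book's coding scheme): it counts the bits L[E|T] needed to transmit the training classes given two candidate models' probability estimates; the total description length would add L[T], the bits needed to encode each model itself.

```python
import math

def data_given_theory_bits(class_probs, actual_classes):
    """L[E|T]: bits needed to encode the training classes given the model's
    probability estimates (the informational loss summed over instances)."""
    return sum(-math.log2(probs[c]) for probs, c in zip(class_probs, actual_classes))

# Two candidate "theories" for 4 two-class training instances with classes [0, 0, 1, 0]:
confident = [[0.9, 0.1]] * 4
uniform   = [[0.5, 0.5]] * 4
print(data_given_theory_bits(confident, [0, 0, 1, 0]))  # ~3.78 bits
print(data_given_theory_bits(uniform,   [0, 0, 1, 0]))  # 4.00 bits
```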
MDL and clustering
- Description length of the theory: bits needed to encode the clusters
  - E.g. the cluster centers
- Description length of the data given the theory: encode cluster membership and position relative to the cluster
  - E.g. distance to the cluster center
- Works if the coding scheme uses less code space for small numbers than for large ones
- With nominal attributes, must communicate the probability distributions for each cluster