Björn Þór Jónsson (bjrr@itu.dk)

Data Mining Individual Assignment report

This report outlines the implementation and results of the Data Mining methods of preprocessing, supervised learning, frequent pattern mining and clustering, applied to data from questionnaire results submitted by students in a 2014 Data Mining class. The implementation is split into Java packages, one for each Data Mining method, and the package names accompany each section name below, for easy reference. Comments may be sparse, but descriptive method and variable names should make up for that; that's a coding style I've come to appreciate, as metadata in code comments can do more harm than good when it is not maintained as the code changes and becomes outdated. I hope the implementation proves readable. Plots of generated data are made with simple R scripts that can be found in the plots directory within the project root.

Preprocessing (code namespace: is.bthj.itu.datamining.preprocessing)

The attributes chosen from the data are: age, programming skill, years at university, preferred operating system, favorite programming languages, whether there should be more mountains in Denmark, whether one is fed up with the winter, and favorite color. Cleaning the data consists of normalization, in the form of inferring consistent values from ones that are considered the same, and of clamping numerical values to a defined range. After that process, tuples that still have unknown values are removed.
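The clamping and synonym-normalization steps could be sketched as below. This is a minimal illustration, not the project's actual code; the class name `CleaningSketch` and the method names are hypothetical, and the synonym table shown is only an example of the kind of mapping the enumerations hold.

```java
import java.util.Map;

// Hypothetical sketch of the two cleaning steps described above:
// clamping a numeric value into an accepted range, and mapping a
// free-text answer to a canonical value via a synonym table.
public class CleaningSketch {

    // Returns the value clamped to [min, max]; out-of-range values may
    // instead be rejected entirely, as is done for age.
    static int clamp(int value, int min, int max) {
        return Math.max(min, Math.min(max, value));
    }

    // Maps an answer to its canonical spelling, or null if unknown
    // (tuples with remaining unknown values are removed afterwards).
    static String canonical(String answer, Map<String, String> synonyms) {
        return synonyms.get(answer.trim().toLowerCase());
    }

    public static void main(String[] args) {
        // Example synonym table in the spirit of OSSynonyms.
        Map<String, String> osSynonyms = Map.of(
                "osx", "Mac OS X",
                "mac", "Mac OS X",
                "win", "Windows");
        System.out.println(clamp(12, 1, 10));              // skill 12 -> 10
        System.out.println(canonical("OSX ", osSynonyms)); // -> Mac OS X
    }
}
```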
Specifically: age values are only accepted if they are between 18 and 120, inclusive; programming skill is clamped to the range 1–10; years-at-university values are accepted as they are if they prove to be a known numerical value; preferred operating system answers are set to consistent values inferred from a list of alternative spellings, as can be seen in OSSynonyms; values from the list of favorite programming languages are similarly set to consistent ones inferred from lists of synonyms in the enumeration ProgrammingLanguages; the boolean attributes about mountains and winter in Denmark are set to either Yes or No by comparing with many different synonyms for those words, in the enumeration BooleanSynonyms; favorite color is set to the closest match found in the list of color names in BasicColorNames. Cleaning the data in this way and writing it to disk can be done by running the main method of CSVFileReader in the .preprocessing package; the results can be seen in the file
cleaned dataset.csv in the project's root. In the rest of the project, the cleaning method QuestionairePreProcessor.getCleanedQuestionaires is called directly in code instead of reading from this file, for ease over efficiency.

Supervised learning: classification (is.bthj.itu.datamining.classification)

For classification with supervised learning, the kNN method was chosen, with the target attribute: "Do you think there should be more mountains in Denmark?" Different combinations of the other attributes, both numerical and nominal, were tried for computing the distance between tuples (by commenting out different parts of ClassificationKNN.distanceBetweenTwoTuples; that could indeed have been done in a more elegant way). The implementation can be tested by running the main method in the ClassificationKNN class. Plots of classification accuracy for a few of the different combinations can be seen below, where the Favorite color attribute alone proves to be best for classifying the tuples: k = 11 gives 89% accuracy.

[Plots: classification accuracy with the distance metric by: color attribute; age attribute; age, programming skill and operating system; all attributes; years at university]
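The core of the kNN approach, finding the k nearest training tuples and taking a majority vote on their class labels, could be sketched as follows. This is a simplified illustration, not the project's ClassificationKNN class; the class name `KNNSketch` is hypothetical, and for brevity the distance is the absolute difference on a single numeric attribute rather than a combination of attributes.

```java
import java.util.*;

// Minimal kNN sketch: classify a query value by majority vote among
// the k nearest training tuples, with distance on one numeric attribute.
public class KNNSketch {

    record Tuple(double value, String label) {}

    static String classify(List<Tuple> training, double query, int k) {
        // Sort training tuples by distance to the query.
        List<Tuple> sorted = new ArrayList<>(training);
        sorted.sort(Comparator.comparingDouble(t -> Math.abs(t.value() - query)));
        // Count class labels among the k nearest neighbours.
        Map<String, Integer> votes = new HashMap<>();
        for (Tuple t : sorted.subList(0, Math.min(k, sorted.size()))) {
            votes.merge(t.label(), 1, Integer::sum);
        }
        // Return the majority label.
        return Collections.max(votes.entrySet(),
                Map.Entry.comparingByValue()).getKey();
    }

    public static void main(String[] args) {
        List<Tuple> training = List.of(
                new Tuple(20, "Yes"), new Tuple(22, "Yes"),
                new Tuple(35, "No"), new Tuple(40, "No"),
                new Tuple(21, "Yes"));
        System.out.println(classify(training, 23, 3)); // -> Yes
    }
}
```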
Frequent pattern / association mining (is.bthj.itu.datamining.association)

For finding frequent patterns with a given support, and association rules with a given minimum confidence, the Apriori algorithm was implemented and targeted at the Favorite programming languages attribute. The implementation can be tested by running the main method in the Apriori class. To test and validate the implementation, data was used from Example 6.3 and Table 6.1 in the textbook, Data Mining: Concepts and Techniques, 3rd edition (see method Apriori.getTextBookTransactionalData). That proved to be a good idea, as it uncovered errors in the implementation when the results were compared with those in Example 6.3. One error was in the frequent itemset search, where support for candidate sets was found by only comparing the first elements of the candidate set with the first elements of each set in the data, in other words depending on the same order of occurrence of the compared elements, instead of searching specifically for the existence of each element of the candidate set anywhere in each data record set (see method Apriori.countSupport).
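The corrected support counting amounts to a set-containment test per transaction, sketched below. This is an illustration of the fix rather than the actual Apriori.countSupport method; the class name `SupportSketch` and the example data are hypothetical.

```java
import java.util.List;
import java.util.Set;

// Sketch of the corrected support counting: a candidate itemset is
// supported by a transaction iff every element of the candidate occurs
// somewhere in the transaction, regardless of order.
public class SupportSketch {

    static int countSupport(Set<String> candidate, List<Set<String>> transactions) {
        int support = 0;
        for (Set<String> transaction : transactions) {
            if (transaction.containsAll(candidate)) {
                support++;
            }
        }
        return support;
    }

    public static void main(String[] args) {
        List<Set<String>> data = List.of(
                Set.of("Java", "CSharp", "C"),
                Set.of("CSharp", "Java"),   // same items, different order
                Set.of("Java", "Python"));
        System.out.println(countSupport(Set.of("CSharp", "Java"), data)); // -> 2
    }
}
```

Comparing only element-by-element in order would have missed the second transaction here, which is exactly the kind of undercounting the textbook example exposed.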
Another uncovered error was in the generation of association rules, where the confidence calculation was flawed: confidence(A => B) was computed as support_count(B) / support_count(A) instead of support_count(A U B) / support_count(A) (see method Apriori.printAssociationRules).

Output from the implementation, by running the main method in the Apriori class with support set to 2 and minimum confidence set to 70%, is the following:

***Frequent itemsets with minimum support: 2
[C, CSharp, Java]
[CPlusPlus, CSharp, Java]
[CSharp, FSharp, Java]
[CSharp, FSharp, Scala]
[CSharp, Java, JavaScript]
[CSharp, Java, PHP]
[CSharp, Java, Python]
[CSharp, JavaScript, Python]

***Association rules with minimum confidence = 70%
C,CSharp => Java, confidence = 2/2 = 100%
C,Java => CSharp, confidence = 2/2 = 100%
CPlusPlus,CSharp => Java, confidence = 2/2 = 100%
FSharp,Scala => CSharp, confidence = 2/2 = 100%
Java,JavaScript => CSharp, confidence = 3/4 = 75%
CSharp,PHP => Java, confidence = 7/8 = 88%
Java,PHP => CSharp, confidence = 7/8 = 88%
PHP => CSharp,Java, confidence = 7/10 = 70%

From this we can, for example, say that preference for Java and JavaScript implies CSharp preference, with 75% confidence.
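The corrected confidence formula can be stated in one line of code. This is an illustration, not the actual Apriori.printAssociationRules method; the class name `ConfidenceSketch` is hypothetical.

```java
// Sketch of the corrected confidence formula for a rule A => B:
// confidence = support_count(A union B) / support_count(A),
// not support_count(B) / support_count(A).
public class ConfidenceSketch {

    static double confidence(int supportCountAUnionB, int supportCountA) {
        return (double) supportCountAUnionB / supportCountA;
    }

    public static void main(String[] args) {
        // The rule Java,JavaScript => CSharp from the output above: 3/4 = 75%.
        System.out.println(confidence(3, 4)); // prints 0.75
    }
}
```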
Clustering (is.bthj.itu.datamining.clustering)

To cluster the tuples into k partitions, the k-means technique was implemented. Only one dimension of the data was used, to partition by age, but more dimensions could easily be added by expanding the method KMeans.getTupleValue. The implementation can be tested by running the main method in the KMeans class. To measure the quality of the clusters formed in this dimension for different values of k, the sum of square errors for each partition count k was computed; and as initial cluster centroids are chosen at random, an average of errors from 10 computations for each k was taken:

Average of 10 sums of square errors for partition size k = 2: 22.065603
Average of 10 sums of square errors for partition size k = 3: 11.402188
Average of 10 sums of square errors for partition size k = 4: 13.638395
Average of 10 sums of square errors for partition size k = 5: 5.0326624
Average of 10 sums of square errors for partition size k = 6: 1.6928288
Average of 10 sums of square errors for partition size k = 7: 2.0382862
Average of 10 sums of square errors for partition size k = 8: 6.4158945
Average of 10 sums of square errors for partition size k = 9: 1.112445
Average of 10 sums of square errors for partition size k = 10: 0.49283415

[Plots: average sum of square errors for k = 2...10 and k = 2...30]

From this it can be seen that k = 6 gives a comparatively low local minimum of error with a reasonably low number of partitions, so k = 6 seems to be a good choice when clustering the tuples by values of the age attribute. Though clustering is unsupervised, and so has no predefined classes, it could be interesting to look at how well this clustering method performs as a classifier, for example by measuring how dominantly similar single nominal values are within each cluster (like Favorite color) as a measure of goodness; but I'll let the sum of square errors suffice as a measure for now.
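A one-dimensional k-means with the sum-of-square-errors measure could be sketched as below. This is an illustrative sketch, not the project's KMeans class; the class name `KMeansSketch`, the fixed iteration count, and the sample ages are hypothetical, and initial centroids are drawn at random from the data points, as described above.

```java
import java.util.Random;

// Minimal one-dimensional k-means sketch (Lloyd's algorithm) with a
// sum-of-square-errors quality measure over the resulting centroids.
public class KMeansSketch {

    // Sum over all points of the squared distance to the nearest centroid.
    static double sumOfSquareErrors(double[] data, double[] centroids) {
        double sse = 0;
        for (double x : data) {
            double best = Double.MAX_VALUE;
            for (double c : centroids) {
                best = Math.min(best, (x - c) * (x - c));
            }
            sse += best;
        }
        return sse;
    }

    static double[] cluster(double[] data, int k, Random rnd) {
        // Initial centroids: random data points.
        double[] centroids = new double[k];
        for (int i = 0; i < k; i++) {
            centroids[i] = data[rnd.nextInt(data.length)];
        }
        // Alternate assignment and centroid update for a fixed budget.
        for (int iter = 0; iter < 100; iter++) {
            double[] sums = new double[k];
            int[] counts = new int[k];
            for (double x : data) {
                int nearest = 0;
                for (int c = 1; c < k; c++) {
                    if (Math.abs(x - centroids[c]) < Math.abs(x - centroids[nearest])) {
                        nearest = c;
                    }
                }
                sums[nearest] += x;
                counts[nearest]++;
            }
            for (int c = 0; c < k; c++) {
                if (counts[c] > 0) {
                    centroids[c] = sums[c] / counts[c];
                }
            }
        }
        return centroids;
    }

    public static void main(String[] args) {
        double[] ages = {20, 21, 22, 23, 35, 36, 37, 50, 51};
        double[] centroids = cluster(ages, 3, new Random(42));
        System.out.println(sumOfSquareErrors(ages, centroids));
    }
}
```

Averaging the error over several runs, as done above, compensates for the random initialisation occasionally landing in a poor local minimum.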
Conclusion

It has been interesting to get acquainted with these Data Mining methods, and I can foresee using them in my future game development.

IT University of Copenhagen, spring 2014
Björn Þór Jónsson (bjrr@itu.dk)