Fine Particulate Matter Concentration Level Prediction by using Tree-based Ensemble Classification Algorithms

Yin Zhao, School of Mathematical Sciences, Universiti Sains Malaysia (USM), Penang, Malaysia
Yahya Abu Hasan, School of Mathematical Sciences, Universiti Sains Malaysia (USM), Penang, Malaysia

Abstract — Pollutant forecasting is an important problem in the environmental sciences. Data mining is an approach to discovering knowledge from large data sets. This paper applies data mining methods to forecasting the concentration level of PM2.5, an important air pollutant. Several tree-based classification algorithms are available in data mining, such as CART, C4.5, Random Forest (RF) and C5.0. RF and C5.0 are popular ensemble methods: RF builds on CART with Bagging, and C5.0 builds on C4.5 with Boosting. This paper builds PM2.5 concentration level predictive models based on RF and C5.0 using R packages. The data set covers the 2000-2011 period in a new town of Hong Kong. The PM2.5 concentration is divided into two levels, with the cut-off point at 25 μg/m³ (24-hour mean). According to 100 repetitions of 10-fold cross validation, the best testing accuracy is obtained by the RF model, at around 0.845-0.854.

Keywords: Random Forest; C5.0; PM2.5 prediction; data mining.

I. INTRODUCTION

Air pollution has been a major problem for some time. Various organic and inorganic pollutants from all aspects of human activities are added daily to the air. One of the most important pollutants is particulate matter. Particulate matter (PM) can be defined as a mixture of fine particles and droplets in the air, characterized by their sizes; PM2.5 refers to particulate matter whose size is 2.5 micrometers or smaller. Due to its effect on health, it is crucial to prevent the pollution from getting worse in the long run. According to WHO's report, the mortality in cities with high levels of pollution exceeds that observed in relatively cleaner cities by 15-20% [1]. Short-term forecasting of air quality is therefore needed so that necessary preventive action can be taken during episodes of air pollution. WHO's Air Quality Guideline (AQG) [2] states that the 24-hour mean PM2.5 concentration should be below 25 μg/m³, although Hong Kong's proposed Air Quality Objectives (AQOs) [3] currently set 75 μg/m³. Because the target data come from a new town in Hong Kong with a large residential population, a stricter air pollution standard is appropriate for such an area. As a result, we use 25 μg/m³, based on the 24-hour mean, as our cut-off point.

The amount of particulate matter at a particular time depends on many environmental factors, especially meteorological and time series factors. Predictive models for PM2.5 range from the simple to the complex: CART, C4.5, Artificial Neural Networks and Support Vector Machines, among others. In this paper, we build models for predicting the next day's PM2.5 concentration level using two popular tree-based ensemble classification algorithms, Random Forest (RF) [4-5] and C5.0 [6-7]. CART and C4.5 are simple decision tree models, because there is only one decision tree in each model, while RF and C5.0 are ensemble methods based on CART and C4.5 respectively, each combining many basic decision trees in one model. Some of the differences between these two algorithms are shown in Table I.
TABLE I. BRIEF DIFFERENCE BETWEEN RF & C5.0

Algorithm     | Number of Trees | Methods             | Basic Classifier
Random Forest | Multiple        | Bagging and Voting  | CART
C5.0          | Multiple        | Boosting and Voting | C4.5

R [8] is an open source programming language and software environment for statistical computing and graphics. It is widely used for data analysis and statistical computing projects. In this paper, we use several R packages as our analysis tools, namely the randomForest package [9] and the C50 package [10]. Moreover, we also use some packages for plotting figures, such as the reshape2 package [11] and the ggplot2 package [12].

The structure of the paper is as follows: Section 2 reviews some basic concepts of tree-based classification methods, Sections 3 and 4 describe the data and the experiments, and the conclusion is given in Section 5.

II. METHODOLOGY

A. Random Forest (RF)

RF is an effective prediction tool in data mining, which is based on CART. It employs the Bagging [13] method to produce a randomly sampled set of training data for each of the trees.
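As a quick, hedged illustration of how the two R packages cited above are typically invoked (the built-in iris data set is used purely as a stand-in; the paper's own data and parameter choices are described in Sections III and IV):

# Minimal sketch of fitting the two ensemble models with the packages cited above.
library(randomForest)   # bagged CART-style trees with random feature selection
library(C50)            # boosted C4.5-style trees

set.seed(1)
idx   <- sample(nrow(iris), 100)   # simple train/test split of the stand-in data
train <- iris[idx, ]
test  <- iris[-idx, ]

rf_fit  <- randomForest(Species ~ ., data = train, ntree = 500)   # Bagging + voting
c50_fit <- C5.0(Species ~ ., data = train, trials = 10)           # Boosting + voting

mean(predict(rf_fit,  test) == test$Species)   # testing accuracy, RF
mean(predict(c50_fit, test) == test$Species)   # testing accuracy, C5.0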

CART uses the Gini index, an impurity-based criterion that measures the divergence among the probability distributions of the target attribute's values.

Definition 1 (Gini Index): Given a training set S whose target attribute takes on k different values, the Gini index of S is defined as

Gini(S) = 1 - \sum_{i=1}^{k} p_i^2,

where p_i is the probability of S belonging to class i.

Definition 2 (Gini Gain): The Gini gain is the evaluation criterion for selecting an attribute A, defined as

GiniGain(A, S) = Gini(S) - Gini(A, S) = Gini(S) - \sum_{v \in A} \frac{|S_v|}{|S|} Gini(S_v),

where S_v is the partition of S induced by value v of attribute A.

The CART algorithm can deal with features that are nominal variables as well as continuous ranges. Pruning a tree is the action of replacing a whole sub-tree by a leaf. CART uses a pruning technique called minimal cost-complexity pruning, which assumes that the bias in the re-substitution error of a tree increases linearly with the number of leaves. Formally, given a tree T and a real number α > 0, called the complexity parameter, the cost-complexity risk of T with respect to α is

R_α(T) = R(T) + α |\tilde{T}|,

where |\tilde{T}| is the number of terminal nodes (i.e. leaves) and R(T) is the re-substitution risk estimate of T.

Bagging, which stands for bootstrap aggregating, is an ensemble classification method. It repeatedly samples from a data set with replacement according to a uniform probability distribution. Each observation has a probability 1 - (1 - 1/N)^N of being selected, where N is the number of observations in the training set. If N is sufficiently large, this probability converges to 1 - 1/e ≈ 0.632; that is, a bootstrap sample contains approximately 63% of the original training data, while the remaining data form a natural testing set, known in RF as the OOB (Out of Bag [14]) sample. Since every observation has an equal probability of being selected (i.e. 1/N), bagging does not focus on any particular instance of the training data. Therefore, it is less sensitive to model overfitting when applied to noisy data.

RF constructs a series of tree-based learners. At each tree node, a random sample of m features is drawn, and only those m features are considered for splitting. Typically m = √p (the default in the R randomForest package), where p is the number of features. The essential difference between Bagging and RF is that the latter not only samples observations randomly but also selects features randomly. RF does not prune the trees during the growing procedure.

B. C5.0

C5.0 is an advanced decision tree algorithm developed from C4.5. It includes all functionalities of C4.5 and applies some new technologies, the most important of which is Boosting (i.e. AdaBoost [15]), a technique for generating and combining multiple classifiers to improve predictive accuracy.

C4.5 uses the information gain ratio, an impurity-based criterion that employs the entropy measure as an impurity measure.

Definition 3 (Information Entropy): Given a training set S whose target attribute takes on k different values, the entropy of S is defined as

Entropy(S) = -\sum_{i=1}^{k} p_i \log_2 p_i,

where p_i is the probability of S belonging to class i.

Definition 4 (Information Gain): The information gain of an attribute A, relative to the collection of examples S, is defined as

InfoGain(A, S) = Entropy(S) - Entropy(A, S) = Entropy(S) - \sum_{v \in A} \frac{|S_v|}{|S|} Entropy(S_v),

where S_v is the partition of S induced by value v of attribute A.
Definition 5 (Gain Ratio): The gain ratio normalizes the information gain as follows:

GainRatio(A, S) = \frac{InfoGain(A, S)}{SplitEntropy(A, S)} = \frac{InfoGain(A, S)}{-\sum_{v \in A} \frac{|S_v|}{|S|} \log_2 \frac{|S_v|}{|S|}}.

Similar to CART, C4.5 can deal with both nominal and continuous variables. Error Based Pruning (EBP) is the pruning method implemented in the C4.5 algorithm. The idea behind EBP is to minimize the estimated number of errors that a tree would make on unseen data; the estimated number of errors of a tree is computed as the sum of the estimated number of errors of all its leaves.

AdaBoost stands for adaptive boosting; it increases the weights of incorrectly classified examples and decreases the weights of those classified correctly. C5.0 is also more efficient than C4.5 in its handling of unordered rule sets: when a case is classified, all applicable rules are found and voted. This improves both the interpretability of rule sets and their predictive accuracy.
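As a toy illustration of the split criteria defined above (these helper functions are written for this example only and come from neither the paper nor any package; the wind-direction attribute below is made up):

# Toy functions implementing Definitions 1-5 for a nominal attribute x and class labels y.
gini    <- function(y) { p <- prop.table(table(y)); 1 - sum(p^2) }
entropy <- function(y) { p <- prop.table(table(y)); p <- p[p > 0]; -sum(p * log2(p)) }

weighted_impurity <- function(x, y, impurity)
  sum(sapply(split(y, x), function(s) length(s) / length(y) * impurity(s)))

gini_gain  <- function(x, y) gini(y)    - weighted_impurity(x, y, gini)
info_gain  <- function(x, y) entropy(y) - weighted_impurity(x, y, entropy)
gain_ratio <- function(x, y) {
  w <- prop.table(table(x))                  # |S_v| / |S| for each value of A
  info_gain(x, y) / (-sum(w * log2(w)))      # normalize by the split entropy
}

# Example: does a (made-up) wind-direction attribute separate High/Low days?
y <- factor(c("High", "High", "Low", "Low", "Low", "High"))
x <- factor(c("N",    "N",    "S",   "S",   "N",   "S"))
c(GiniGain = gini_gain(x, y), InfoGain = info_gain(x, y), GainRatio = gain_ratio(x, y))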

III. DATA PREPARATION

All of the data for the 2000-2011 period were obtained from the Hong Kong Environmental Protection Department (HKEPD) and Hong Kong Met-online. The air monitoring station is the Tung Chung Air Monitoring Station (Latitude 22°17'19"N, Longitude 113°56'35"E), which is in a new town of Hong Kong, and the meteorological monitoring station is the Hong Kong International Airport Weather Station (Latitude 22°18'34"N, Longitude 113°55'19"E), the station nearest to Tung Chung.

Fig. 1. PM2.5 concentration levels (days per year classed High or Low) in 2000-2011.

As mentioned in Section 1, accurately predicting high PM2.5 concentrations is of most value from a public health standpoint; thus the response variable has two classes: Low, indicating that the daily mean PM2.5 concentration is below 25 μg/m³, and High, representing values above it. Figure 1 shows the number of days at each level in 2000-2011. Among these 12 years, air quality was best in 2010 (i.e. it has the most Low days) and worst in 2004. In summary, the percentages of Low and High days over the 12 years in this area are around 40% and 59% respectively (with around 1% missing values). Thus, a predictive model with an accuracy below 60% would be roughly equivalent to random guessing and would be considered a failure; the purpose of using data mining methods is to raise the accuracy above at least 60%.

We converted all hourly air quality data to daily mean values, while the meteorological data are the original daily values. All air quality and meteorological variables are numeric. We cannot ignore the effects of seasonal changes and human activities, so we added two time variables, namely the month (Figure 2) and the day of the week (Figure 3). Figure 2 clearly shows that the PM2.5 concentration reaches a low level from May to August, which is the rainy season in Hong Kong, while pollution is serious from October to the following January, especially in December and January. Note that rainfall itself may not be an important factor in the experiment, since the response variable is the next day's PM2.5 and the rainy season brings a variety of other meteorological factors with it. Figure 3 reflects, to some extent, the pattern of human activities: air pollution varies only slightly over the week, the concentration levels are similar from Tuesday to Thursday, and the lowest level appears on Sunday. This is consistent with Tung Chung being a residential area of Hong Kong, where the air is less influenced by factories or other pollution sources (i.e. different from business or industrial areas). After deleting all rows with missing values, there are 4326 observations with 14 predictor variables (Table II) and 1 response variable, the next day's PM2.5 concentration level.

Fig. 2. Monthly median PM2.5 concentration in 2000-2011.
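A hypothetical sketch of the data preparation just described follows; the file and column names are placeholders rather than the authors' actual code. Here 'hourly' is assumed to hold the hourly PM2.5 readings (columns datetime, pm25) and 'met' one row per date with the daily meteorological variables of Table II, keyed by DATE.

# Hourly PM2.5 -> daily means, add time variables, build the next-day response.
hourly$DATE <- as.Date(hourly$datetime)
daily <- aggregate(pm25 ~ DATE, data = hourly, FUN = mean)   # hourly -> daily mean
names(daily)[2] <- "PM2.5"

d <- merge(met, daily, by = "DATE")                # one row per day, sorted by DATE
d$MONTH <- factor(format(d$DATE, "%m"))            # seasonal effect
d$WEEK  <- factor(format(d$DATE, "%u"))            # day of week (1 = Monday)

# Response: the NEXT day's concentration level, cut at 25 μg/m3 (24-hour mean)
next_day <- c(d$PM2.5[-1], NA)
d$LEVEL  <- factor(ifelse(next_day < 25, "Low", "High"))

d <- na.omit(d)     # the paper reports 4326 complete observations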

Fig. 3. Day-of-week median PM2.5 concentration in 2000-2011.

TABLE II. VARIABLE LIST

Notation | Description               | Class
MP       | Mean Pressure             | Numeric
AT1      | Max Air Temperature       | Numeric
AT2      | Mean Air Temperature      | Numeric
AT3      | Min Air Temperature       | Numeric
MDPT     | Mean Dew Point            | Numeric
RH1      | Max Relative Humidity     | Numeric
RH2      | Mean Relative Humidity    | Numeric
RH3      | Min Relative Humidity     | Numeric
TR       | Total Rainfall            | Numeric
PWD      | Prevailing Wind Direction | Numeric
MWS      | Mean Wind Speed           | Numeric
PM2.5    | PM2.5 concentration       | Numeric
MONTH    | Month                     | Nominal
WEEK     | Day of week               | Nominal

IV. EXPERIMENTS

The experiments comprise three parts: the first and second test RF and C5.0 respectively, training each model and selecting its proper parameters, and the third compares the two by 100 repetitions of 10-fold cross validation (10-fold CV) in order to determine which one is more accurate as well as more stable.

A. Random Forest (RF)

We use the R package randomForest to train and test the RF model. There are two important parameters in an RF model: the number of splitting features (mtry) and the number of trees (ntree). We try to select values that give the best testing accuracy by 10-fold CV. First, we vary ntree from 1 to 500 and mtry from 2 to 5. The result is shown in Figure 4. Setting the number of splitting features to 2 or 3 is somewhat better than the other two values, as the accuracies for both are tightly clustered. The best accuracy for each number of splitting features is shown in Table III. According to this result we choose mtry = 3 and ntree = 98 in the following experiments. Note that the default values in the R package are mtry = √p (as mentioned in Section 2) and ntree = 500; generally speaking, one can use the default mtry and select ntree by 10-fold CV. Alternatively, one can use the function tuneRF to select parameters for an RF model; details can be found in the help file of the randomForest package.

Figure 5 shows the importance of the variables in the RF model: the most important predictor is the previous day's PM2.5, followed by MDPT, MP and MONTH. This ranking is based on the mean decrease in Gini gain for each predictor. Why is the variable WEEK not important in the RF model? A reasonable explanation is that WEEK varies only slightly from day to day; moreover, all the weekday medians of PM2.5 concentration are higher than 25 μg/m³ (see Figure 3), which is the boundary between the response classes.

Fig. 4. Violin plot of testing accuracy for different values of mtry (mtry = 2 to 5).

TABLE III. RESULT OF RF

mtry | ntree | Testing Accuracy
2    | 95    | 0.852
3    | 98    | 0.853
4    | 246   | 0.851
5    | 349   | 0.851
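A rough sketch of this parameter search follows; it is not the authors' exact code. The paper selects mtry and ntree by 10-fold CV, whereas this sketch uses the OOB error as a lighter-weight stand-in, together with the tuneRF helper mentioned above; 'd' is the hypothetical prepared data frame from the earlier sketch.

# Parameter exploration for RF using OOB error and tuneRF (illustrative only).
library(randomForest)
set.seed(1)

d2 <- d[, setdiff(names(d), "DATE")]           # Table II predictors plus LEVEL

# Search over mtry, starting from the default sqrt(p) and doubling/halving
# while the OOB error keeps improving.
tuneRF(d2[, names(d2) != "LEVEL"], d2$LEVEL,
       ntreeTry = 500, stepFactor = 2, improve = 0.01)

# The OOB error stored in a fitted forest shows how accuracy evolves with the
# number of trees, which can guide the choice of ntree.
fit     <- randomForest(LEVEL ~ ., data = d2, ntree = 500, mtry = 3)
oob_acc <- 1 - fit$err.rate[, "OOB"]
which.max(oob_acc)                             # tree count with highest OOB accuracy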

B. C5.0

We use the C50 package to build the C5.0 model in this paper. Similarly to RF, we first try to obtain the best number of trees. Note that the maximum number of boosting trees in the C50 package is 100, far fewer than for RF (ntree = 500). Some of the results are shown in Table IV. The highest accuracy is reached at the 32nd tree, with a testing accuracy of 0.852.

Figure 6 shows the trends of accuracy as the number of trees changes in RF and C5.0. The training accuracy of both models increases steadily and eventually settles at a stable level. The testing accuracy fluctuates more at the beginning; in particular, C5.0 is considerably higher than RF when there are only a few trees. Both then settle at a moderate level, with RF higher than C5.0 in this later phase.

Figure 7 shows the variable importance of the C5.0 model. According to this result, the most important predictor is again the previous day's PM2.5, while MONTH and PWD are also important variables, which differs from the result of RF. C5.0 measures importance as the percentage of splits associated with each predictor. We consider this result more reliable than that of the RF algorithm, because the Gini gain favours variables having more values and thus offering more splits [16], whereas C5.0 uses the gain ratio, which avoids this problem. WEEK is still not important in this model.

TABLE IV. RESULT OF C5.0

Trees | Training | Testing
1     | 0.868    | 0.843
2     | 0.868    | 0.843
3     | 0.877    | 0.842
31    | 0.943    | 0.847
32    | 0.945    | 0.852
33    | 0.945    | 0.849

Fig. 5. Variable importance of the RF model (mean decrease in Gini).

Fig. 6. Trends of training and testing accuracy by changing the number of trees in RF and C5.0.
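A hedged sketch of how the importance rankings behind Figures 5 and 7 can be read off in R follows; 'd2' is the hypothetical prepared data frame from the earlier sketches, and the parameters mirror those chosen above.

# Variable importance from both ensembles (illustrative, not the authors' code).
library(randomForest)
library(C50)
set.seed(1)

rf_fit  <- randomForest(LEVEL ~ ., data = d2, mtry = 3, ntree = 98)
c50_fit <- C5.0(LEVEL ~ ., data = d2, trials = 32)

# RF: mean decrease in Gini for each predictor (the criterion behind Figure 5)
imp <- importance(rf_fit, type = 2)
imp[order(imp[, 1], decreasing = TRUE), , drop = FALSE]

# C5.0: percentage of splits associated with each predictor (behind Figure 7)
C5imp(c50_fit, metric = "splits")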

Fig. 7. Variable importance of the C5.0 model (percentage of splits per predictor).

Fig. 8. Violin plot of testing accuracy over 100 repetitions of 10-fold CV for RF and C5.0.

C. Comparison

We compare the two algorithms using 100 repetitions of 10-fold CV, with the results shown in Table V. RF obtains the best result, with accuracy around 0.845-0.854, while C5.0 achieves a moderate accuracy, only slightly worse than RF. Another issue is the stability of the two algorithms over the repeated runs. Figure 8 shows the violin plot of the 100 repetitions of 10-fold CV: RF is more stable than C5.0 across the 100 runs, and its accuracy is also better.

TABLE V. COMPARISON BETWEEN RF & C5.0

Algorithm | Maximum | Minimum | Median
RF        | 0.854   | 0.845   | 0.849
C5.0      | 0.852   | 0.841   | 0.846
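A hedged sketch of the repeated cross-validation behind Table V and Figure 8 follows; it is not the authors' exact code. 'd2' is the hypothetical prepared data frame from the earlier sketches, and the model parameters mirror those selected above (mtry = 3, ntree = 98, trials = 32).

# 100 repetitions of 10-fold CV for RF and C5.0 (illustrative only).
library(randomForest)
library(C50)
set.seed(1)

cv_accuracy <- function(data, fit_fun, k = 10) {
  folds <- sample(rep(1:k, length.out = nrow(data)))      # random fold labels
  acc <- sapply(1:k, function(i) {
    fit <- fit_fun(data[folds != i, ])
    mean(predict(fit, data[folds == i, ]) == data$LEVEL[folds == i])
  })
  mean(acc)                                               # mean accuracy over the k folds
}

rf_fun  <- function(tr) randomForest(LEVEL ~ ., data = tr, mtry = 3, ntree = 98)
c50_fun <- function(tr) C5.0(LEVEL ~ ., data = tr, trials = 32)

results <- data.frame(
  RF   = replicate(100, cv_accuracy(d2, rf_fun)),
  C5.0 = replicate(100, cv_accuracy(d2, c50_fun)),
  check.names = FALSE
)
summary(results)      # maximum / minimum / median per algorithm, as in Table V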

V. CONCLUSION

In this paper, we build PM2.5 concentration level predictive models using two popular data mining algorithms, RF and C5.0. The dataset, from a new town in Hong Kong, includes 4326 rows and 15 columns after deleting all missing values. Based on all the experiments, our conclusions are as follows.

1) The best parameters of each model were selected on testing accuracy using 10-fold CV. For the RF model, the number of trees is 98 and the number of splitting features is 3; for the C5.0 model, the best number of trees is 32. We prefer to use the default value of mtry in the RF model and to select ntree by 10-fold CV.

2) According to 100 repetitions of 10-fold CV, the best result is from RF, at around 0.845-0.854. It not only obtains the highest accuracy but also performs more stably than C5.0.

3) Another difference between them is the ranking of variable importance, where we prefer the result of C5.0 as it is less biased.

4) Our advice for using RF or C5.0 in practice is to select the number of iterations first. 10-fold CV is the selection method used in this paper, and researchers can repeat this process many times; for instance, 10 times 10-fold CV should be better than a single run. In short, the selection process should reduce the random error as far as possible.

ACKNOWLEDGMENT

The authors wish to thank the Hong Kong Environmental Protection Department (HKEPD) and Hong Kong Met-online for allowing us to use all the data in this paper.

REFERENCES

[1] "Air quality and health," http://www.who.int/mediacentre/factsheets/fs313/en/index.html
[2] "WHO Air quality guidelines for particulate matter, ozone, nitrogen dioxide and sulfur dioxide (Global update 2005)," World Health Organization, whqlibdoc.who.int/hq/2006/who_sde_phe_oeh_06.02_eng.pdf
[3] "Proposed New AQOs for Hong Kong," http://www.epd.gov.hk/epd/english/environmentinhk/air/air_quality_objectives/files/proposed_newaqos_eng.pdf
[4] Leo Breiman, "Random Forests," Machine Learning, Vol. 45, Issue 1, pp. 5-32, 2001.
[5] Leo Breiman, "Statistical Modeling: The Two Cultures," Statistical Science, Vol. 16, No. 3, pp. 199-231, 2001.
[6] J. R. Quinlan, C4.5: Programs for Machine Learning, Morgan Kaufmann, Los Altos, 1993.
[7] "C5.0: An Informal Tutorial," www.rulequest.com/see5-unix.html
[8] The R Project for Statistical Computing, http://www.r-project.org/
[9] Andy Liaw, Matthew Wiener, randomForest package, Version 4.6.7, http://cran.r-project.org/web/packages/randomForest/randomForest.pdf
[10] Max Kuhn, Steve Weston, Nathan Coulter, C50 package, Version 0.1.0-14, http://cran.r-project.org/web/packages/C50/C50.pdf
[11] Hadley Wickham, reshape2 package, Version 1.2.2, http://cran.r-project.org/web/packages/reshape2/reshape2.pdf
[12] Hadley Wickham, ggplot2 package, Version 0.9.3.1, http://cran.r-project.org/web/packages/ggplot2/ggplot2.pdf
[13] Leo Breiman, "Bagging Predictors," Machine Learning, Vol. 24, Issue 2, pp. 123-140, 1996.
[14] Leo Breiman, "Out-of-bag estimation," http://www.stat.berkeley.edu/~breiman/OOBestimation.pdf
[15] Yoav Freund, Robert Schapire, "A decision-theoretic generalization of on-line learning and an application to boosting," European Conference on Computational Learning Theory, pp. 23-37, 1995.
[16] Leo Breiman, Jerome Friedman, Richard Olshen, Charles Stone, Classification and Regression Trees, Wadsworth Int. Group, p. 42, 1984.