
Application of Event Based Decision Tree and Ensemble of Data Driven Methods for Maintenance Action Recommendation

James K. Kimotho, Christoph Sondermann-Woelke, Tobias Meyer, and Walter Sextro
Department of Mechatronics and Dynamics, University of Paderborn, 33098 Paderborn, Germany
james.kuria.kimotho@uni-paderborn.de
christoph.sondermann-woelke@uni-paderborn.de
tobias.meyer@uni-paderborn.de
walter.sextro@uni-paderborn.de

ABSTRACT
This study presents the methods employed by a team from the Department of Mechatronics and Dynamics at the University of Paderborn, Germany, for the 2013 PHM data challenge. The focus of the challenge was on maintenance action recommendation for industrial equipment based on remote monitoring and diagnosis. Since an ensemble of data driven methods is considered the state of the art approach in diagnosis and prognosis, the first approach was to evaluate the performance of an ensemble of data driven methods using the parametric data as input and the problems (recommended maintenance actions) as output. Due to the close correlation of the parametric data of different problems, this approach produced a high misclassification rate. Event-based decision trees were then constructed to identify the problems associated with particular events. To distinguish between problems associated with events that appeared in multiple problems, a support vector machine (SVM) with parameters optimally tuned using particle swarm optimization (PSO) was employed. Parametric data was used as the input to the SVM algorithm, and majority voting was employed to determine the final decision for cases with multiple events; one SVM model was constructed for each such event. This approach improved the overall score from 21 to 48. The method was further enhanced by employing an ensemble of three data driven methods, that is, SVM, random forests (RF) and bagged trees (BT), to build the event based models. With this approach, a score of 51 was obtained. The results demonstrate that the proposed event based method can be effective in maintenance action recommendation based on event codes and parametric data acquired remotely from industrial equipment.

James K. Kimotho et al. This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 United States License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

1. INTRODUCTION
The focus of the 2013 Prognostics and Health Management data challenge was on maintenance action recommendation in industrial remote monitoring and diagnostics. The challenge was to identify faults with confirmed maintenance actions masked within a huge amount of data with unconfirmed problems, given event codes and the associated snapshots of operational data acquired whenever a trigger condition is met on board. One of the challenges in using such data is that most industrial machines are designed to operate at varying conditions, and a change in the operating conditions triggers a change in the sensory measurements, which leads to false alarms. This poses a huge challenge in isolating abnormal behavior from false alarms, since the abnormal behavior tends to be masked by the huge number of false alarm instances recorded. In addition, most industrial equipment consists of an integration of complex systems which require different maintenance actions, further complicating the diagnostic process.
A maintenance action recommender that is able to distinguish between faults with confirmed maintenance actions and false alarms is therefore necessary. Remote monitoring and diagnosis is currently gaining momentum in condition based maintenance due to advancements in the information technology and telecommunication industries (Xue & Yan, 2007). In this approach, condition monitoring data is acquired remotely, and whenever an anomaly is detected, the data is recorded and transmitted to a central monitoring and diagnosis center (Xue, Yan, Roddy, & Varma, 2006). Here, further analysis of the data is conducted to isolate and diagnose the faults, and based on the outcome of the diagnosis, a maintenance scheme is recommended. This has led to a reduction in maintenance costs, since an engineer is not required on-site to perform the troubleshooting. A number of studies on remote monitoring and diagnosis have been published. Xue et al. (2006) combined a non-parametric statistical test and decision fusion using a generalized regression neural network to predict the condition of a locomotive engine. The condition was classified as normal or abnormal.

Xue and Yan (2007) presented a model-based anomaly detection strategy for locomotive subsystems based on parametric data. The method involved the use of residuals between measured and model outputs to define a health indicator based on the normal condition. Statistical testing, a Gaussian mixture model and support vector machines were then used to evaluate the health index of the test data. Similarly, this study focused on the ability to detect normal and abnormal behavior from operational data. It is evident that the use of operational data obtained remotely poses a huge challenge in fault identification and classification, and there is a need to develop algorithms that are capable of exploiting the operational data not only to detect abnormalities in industrial equipment, but also to classify faults and recommend maintenance actions. The 2013 PHM challenge was based on the need for recommenders with this capability. The following sections describe the data used in the challenge and the data preprocessing.

1.1. Data Description
In order to develop an effective maintenance recommender, it was important to understand the structure of the data. Due to proprietary reasons, there was very little information about the sensory measurements. The data, which was obtained from industrial equipment, was presented as comma separated values (csv) files and consisted of the following:

1. Train - Case to Problem: a list of cases with confirmed problems, where the problem represents a maintenance action to correct the identified anomaly. This list consisted of 14 cases with a total of 13 problems.

2. Train - Nuisance Cases: a list of cases whose symptoms do not represent a confirmed problem. These are cases created by automated systems and presented to an engineer, who established that the symptom was not sufficient to notify the customer of the problem. These cases constituted the bulk of the training data.

3. Training cases to events and parameters: a list of all the cases in (1) and (2) with the events that triggered the recording of the cases and a snapshot of the operating conditions or parameters. A total of 30 parameters was acquired at every data logging session. A collection of events and the corresponding operational data at the point when the anomaly detection module is triggered is referred to as a case. The event code indicates the system or subsystem that the measurements came from and the reason why the code was generated. Some cases contain multiple data instances while others contain a single data instance.

4. Test cases to events and parameters: a list of cases with the corresponding events and parameters for evaluating the recommender. The recommender should propose a maintenance action for a confirmed problem and output none for unconfirmed problems.

Table 1 shows a summary of the training and testing data.

Table 1. Summary of training and testing data.

Data set                  Number of cases   Number of instances   Number of unique events
Train - Case to problem   14                27,20                 38
Train - Nuisance cases    1029              1,29,243              1239
Test cases                938               1,893,882             124

Due to the large sizes of the training and testing data files, each of the files was broken down into smaller files and loaded into the MATLAB environment, from where the data was converted into *.mat files. All processing was handled within the MATLAB environment.
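The chunked loading step can be reproduced in any environment; since the study itself used MATLAB, the following Python sketch only illustrates the idea. The file name and chunk size are assumptions, not the actual challenge values:

    import pandas as pd

    CSV_PATH = "train_cases_events_parameters.csv"  # hypothetical file name
    CHUNK_SIZE = 5000                               # assumed instances per partial file

    # Stream the large csv in fixed-size chunks and persist each chunk in a
    # binary format (the pandas analogue of the paper's *.mat conversion).
    for i, chunk in enumerate(pd.read_csv(CSV_PATH, chunksize=CHUNK_SIZE)):
        chunk.to_pickle(f"train_part_{i:04d}.pkl")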
1.2. Data Preprocessing
The first step in preparing the data was to separate the events and parameters associated with the train case to problem data from those associated with the nuisance cases. Figure 1 shows the distribution of the problems within the train-case to problem data. As seen from Figure 1, problem P284 is the most prevalent, while problem P7940 has the fewest recorded cases.

Figure 1. Distribution of problems in the train-case to problem training data.

The data was unstructured in that it contained both numerical values and, in some instances, characters ("null"). The string "null" was interpreted as zero (0). The data also contained some data instances with missing parameters. It was not possible to remove these instances, since some cases had all their data instances with missing parameters; therefore, these cases were treated separately. There were also some parameters with constant values, for instance parameter P0. These parameters were removed from the data, leaving a reduced subset of the original 30 parameters.
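As a minimal sketch of this preprocessing, assuming the parametric columns are labeled with a leading "P" (the actual column names were not disclosed):

    import pandas as pd

    def preprocess(df):
        # Interpret the literal string "null" as zero, as described above.
        df = df.replace("null", 0)
        # Coerce parameter columns to numeric; unparseable entries become NaN.
        param_cols = [c for c in df.columns if c.startswith("P")]
        df[param_cols] = df[param_cols].apply(pd.to_numeric, errors="coerce")
        # Drop parameters that are constant over the whole training set.
        constant = [c for c in param_cols if df[c].nunique(dropna=False) <= 1]
        return df.drop(columns=constant)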

1.3. Data Evaluation Challenges
Since the data was acquired whenever a specific condition was met on board, rather than at a fixed sampling rate, feature extraction and preprocessing of the data were quite difficult. In addition, due to proprietary reasons, the nature of the recorded parameters was not revealed. This made it difficult to define thresholds for diagnostic purposes. Another challenge identified was the small number of samples or data instances in some cases, with some having only one recorded event. In such cases the data was masked by the nuisance data, which made it very difficult to identify the actual problems. The small percentage of confirmed problems, 14 cases against 1029 nuisance cases, would require batch training, since under normal circumstances the nuisance data would suppress the confirmed problems.

The following sections describe the methodologies employed by our team. Since an ensemble of machine learning (ML) algorithms is considered the state of the art approach in both diagnosis and prognosis of machinery failures (Q. Yang, Liu, Zhang, & Wu, 2012), the first attempt was to use the parametric data together with an ensemble of state of the art machine learning algorithms to classify the problems and also to identify nuisance data. However, due to the close correlation between the parametric data of different problems, it was discovered that the use of parametric data together with an ensemble of ML algorithms was not sufficient. The method was therefore extended to incorporate the event codes to improve classification performance.

2. PARAMETRIC BASED ENSEMBLE OF DATA DRIVEN METHODS
The first attempt was to employ an ensemble of data driven methods with the given parameters as the input and the confirmed problems as the target for training. Once the algorithms were trained, parameters from the test data were used as the input to the trained models, whose output was the predicted problems. The following six data driven methods were employed (a compact sketch of corresponding library implementations follows the list):

1. k-nearest Neighbors (knn): In this method, the distance between each test datum and the training data is calculated, and the test datum is assigned the same class as the majority of the k closest data points (Bobrowski & Topczewska, 2004). Various distance functions can be employed, but the Mahalanobis distance function (Bobrowski & Topczewska, 2004) was found to perform best on the given data. The distance between data points x and y can be calculated by

   d(x, y) = sqrt((x - y)^T S^(-1) (x - y)),   (1)

   where S is the covariance matrix of the data x.

2. Artificial Neural Networks (ANN): ANN maps input data to output through one or more layers of neurons, where each neuron is connected to the outputs of all the neurons of the preceding layer. Each neuron computes the weighted sum of its inputs passed through an activation function (Rajakarunakaran, Venkumar, Devaraj, & Rao, 2008). Training an ANN consists of adapting the weights until the training error reaches a set minimum. A feedforward neural network consisting of three hidden layers with [110 110 80] neurons in the three layers respectively was employed. Scaled conjugate gradient (SCG) was employed as the training algorithm due to its ability to converge faster and accommodate large data.

3. Classification and Regression Trees (CART): CART predicts the response to data through binary decision trees. A decision tree contains leaf nodes that represent the class name and decision nodes that specify a test to be carried out on a single attribute value, with one branch and sub-tree for each possible outcome of the test (Sutton, 2008). To predict a response, the decisions in the tree are followed from the root node to a leaf node. Pruning is normally carried out with the goal of identifying the tree with the lowest error rate on previously unobserved data instances. Figure 2 shows a section of the classification tree with the decision nodes, where xnn represents the parameter number. The decision rules were derived from the parametric data.

   Figure 2. A section of a classification tree with decision nodes and leaves.

4. Bagged trees (BT): Bagging involves training a number of classification trees where, for each tree, a data set Si is randomly sampled from the training data with replacement (Sutton, 2008). During testing, each classifier returns its class prediction and the class with the most votes is assigned to the data instance. In this study, 100 trees were found to yield the best results during training.

5. Random forests (RF): Random forests is derived from CART and involves iteratively training a number of classification trees, with each tree trained on a data set that is randomly selected with replacement from the original data set (B.-S. Yang, Di, & Han, 2008). At each decision node, the algorithm determines the best split based on a set of features (variables) randomly selected from the original feature space. The final output of the algorithm is based on majority voting over all the trees. Figure 3 shows the construction of a random forest, where S is the original data set, Si is a randomly sampled data set, Ci is a classification tree trained with data set Si, and N is the total number of trees. A combination of 500 trees with 50 iterations was found to yield the best results with the given data.

   Figure 3. Construction of a random forest.

6. Support vector machines (SVM): SVM is a maximum margin classifier for binary data. SVM seeks to find a hyperplane that separates the data into two classes within the feature space with the largest margin possible. For non-linear SVM, a kernel function may be employed to transform the input data into a higher dimensional feature space where the classification is carried out (Hsu & Lin, 2002). Multi-class classification is achieved by constructing and combining several binary classifiers. The pairwise method, in which n(n - 1)/2 binary SVMs are constructed, was employed since it is more suitable for practical applications (Hsu & Lin, 2002). A three-fold cross-validation technique was employed during training.
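The six learners map directly onto standard implementations. The following Python (scikit-learn) sketch is illustrative only, since the original study was implemented in MATLAB; any hyperparameters not stated in the text are assumptions:

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.neural_network import MLPClassifier
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.ensemble import BaggingClassifier, RandomForestClassifier
    from sklearn.svm import SVC

    def build_models(X_train):
        # Inverse covariance of the training data for the Mahalanobis metric (Eq. 1).
        cov_inv = np.linalg.pinv(np.cov(X_train, rowvar=False))
        return {
            "knn": KNeighborsClassifier(algorithm="brute", metric="mahalanobis",
                                        metric_params={"VI": cov_inv}),
            # The paper trained the ANN with scaled conjugate gradient, which
            # scikit-learn does not offer; "adam" is a stand-in here.
            "ann": MLPClassifier(hidden_layer_sizes=(110, 110, 80), solver="adam"),
            "cart": DecisionTreeClassifier(),
            "bt": BaggingClassifier(n_estimators=100),  # 100 trees, as in the text
            "rf": RandomForestClassifier(n_estimators=500),
            "svm": SVC(),  # one-vs-one (pairwise) multi-class by default
        }

Each model is then fitted on the parametric training data and the predictions are combined by weighted majority voting, as described in Section 2.1.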

2.1. Training
In order to evaluate the performance of the selected algorithms, training data consisting of the 14 cases with confirmed problems and a random sample of 40 nuisance cases was used. This translated to approximately 40,000 data instances. The data was randomly permuted and split into two: 75% of the data was used for training and 25% for testing. The process was repeated with 39 other sampling instances of the nuisance data to build 40 models. The average classification accuracy of each model was computed. An ensemble of the six algorithms was then built based on weighted majority voting, where the results from algorithms with higher accuracy were given more weight. Table 2 shows the average training accuracy of the selected algorithms.

Table 2. Classification accuracy based on training data.

Algorithm   Average accuracy
knn         89%
ANN         88%
CART        83%
BT          93%
RF          94%
SVM         90%
Ensemble    95%

A look at the classification errors revealed that the majority of the errors occurred for cases with single events. In particular, cases with event E390 recorded the most errors. Event E390 appears in the majority of the cases, both in the cases with confirmed problems and in the nuisance cases.

2.2. Testing
To evaluate the performance of the algorithms on the test data, the data was supplied one case at a time to the 40 models described in the previous section, and majority voting was employed to select the most likely problem for each algorithm. An ensemble of the algorithms based on weighted voting was then built. The performance of the method was evaluated using Equation 2:

Score = N_O - N_IO - N_NO,   (2)

where N_O is the number of outputs, N_IO is the number of incorrect outputs and N_NO is the number of nuisance outputs. If the method provided an output for a case with an unconfirmed problem, this was counted as a nuisance output. The number of outputs was based on a sample of 348 cases with an equal number of nuisance cases and cases with confirmed problems. A score of 21, with N_O = 303, N_IO = 133 and N_NO = 149, was obtained with this method.
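The score of Equation 2 is simple to compute from a recommender's emitted outputs; a small helper, with a hypothetical input structure, might look as follows:

    def challenge_score(outputs):
        """outputs: list of (predicted_problem, confirmed_problem_or_None) pairs,
        one per case for which the recommender produced an output."""
        n_o = len(outputs)                                      # N_O
        n_no = sum(1 for _, truth in outputs if truth is None)  # N_NO
        n_io = sum(1 for pred, truth in outputs
                   if truth is not None and pred != truth)      # N_IO
        return n_o - n_io - n_no                                # Equation (2)

    # Figures reported for the parametric ensemble: 303 - 133 - 149 = 21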
From the results obtained, it was clear that using the parameters exclusively to predict the problem would not yield good results. The next attempt was to incorporate the event codes in the classification process.

3. EVENT BASED DECISION TREE AND SUPPORT VECTOR MACHINES

Since the event code indicates the system or subsystem that the measurements came from and the reason why the code was generated, a method combining an event based decision tree and SVM was developed. The input to the SVM was the parametric data corresponding to the events. This section describes the construction of this method.

3.1. Cases with Single Events
It was observed that some cases with confirmed problems consisted of single events. A decision tree to identify these events and problems was developed. Some of the single events appeared in multiple problems; in such cases, the parametric data was used to derive the rules to differentiate between the different problems. One such event was E390. Figure 4 shows a section of the decision tree constructed for this event, where L is the number of unique events per case. As seen in Figure 4, the decision tree was extended to include other single events appearing in the training data. Parametric data was used to derive the decisions at the nodes for single events appearing in more than one case. Due to time constraints, the decision tree was not trained to identify nuisance data.

Figure 4. Construction of a decision tree for event E390.

For events appearing in multiple cases, the SVM method with parameters optimally tuned using particle swarm optimization was used to train models corresponding to each event. These models were trained to identify the possible problem given the event code and the corresponding parameters. For events not appearing in the training data, the parametric data was tested with each of the models and the problem with the highest number of votes was selected.

3.2. Training
To train the method, the training data was again split into two: 75% of the data for training and 25% for testing. A 3-fold cross validation (CV) technique was employed during the tuning of the parameters. The PSO algorithm was employed to optimally tune the parameters of the SVM algorithm (Kimotho, Sondermann-Woelke, Meyer, & Sextro, 2013). Figure 5 shows the work flow of the SVM-based part of the method. One SVM model was trained for each event that is triggered by multiple faults. The prediction accuracy based on the training data was 90%.

Figure 5. Work flow of the SVM-based classifier with optimally tuned parameters.

3.3. Testing
Testing was carried out by considering one case at a time. Based on the event code, the matching prediction model was retrieved and used to test the parametric data of the corresponding event. Since most cases had multiple events, the classification decision was arrived at based on majority voting. With this method, a score of 48 was attained, with N_O = 332, N_IO = 122 and N_NO = 162.
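A minimal sketch of the per-event tuning loop of Section 3.2 is given below: a plain PSO over log-scaled SVM hyperparameters, scored by 3-fold cross-validation accuracy. The search bounds, swarm size and acceleration coefficients are assumptions, and the authors actually used a hybrid differential evolution PSO (Kimotho et al., 2013):

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    def pso_tune_svm(X, y, n_particles=10, n_iter=20, seed=0):
        rng = np.random.default_rng(seed)
        lo, hi = np.array([-2.0, -4.0]), np.array([4.0, 1.0])  # log10(C), log10(gamma)
        pos = rng.uniform(lo, hi, size=(n_particles, 2))
        vel = np.zeros_like(pos)
        pbest, pbest_fit = pos.copy(), np.full(n_particles, -np.inf)
        gbest, gbest_fit = pos[0].copy(), -np.inf

        def fitness(p):
            clf = SVC(C=10 ** p[0], gamma=10 ** p[1])
            return cross_val_score(clf, X, y, cv=3).mean()  # 3-fold CV, as in the text

        for _ in range(n_iter):
            for i in range(n_particles):
                f = fitness(pos[i])
                if f > pbest_fit[i]:
                    pbest_fit[i], pbest[i] = f, pos[i].copy()
                if f > gbest_fit:
                    gbest_fit, gbest = f, pos[i].copy()
            r1, r2 = rng.random((2, n_particles, 1))
            # Textbook velocity update: inertia + cognitive + social terms.
            vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
            pos = np.clip(pos + vel, lo, hi)

        return SVC(C=10 ** gbest[0], gamma=10 ** gbest[1]).fit(X, y)

One such tuned model would be trained per multi-fault event and stored keyed by its event code.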
For events not appearing in the training data, the parametric data was tested with each of the models and the problem with the highest number of vote was selected. 4.1. Training To train the method, the training data was again split into two, 7% of the data for training and 2% for testing. The prediction accuracy based on the training data was 92%.

Figure 6. Distribution of classification results of the test data based on the three methods presented.

4.2. Testing
As in the previous method, testing was carried out by considering one case at a time. Based on the event codes, the matching prediction model for each method was retrieved and used to classify the test data, with the parametric data as the input. The predictions from all the algorithms were combined and the classification decision was made based on majority vote. With this approach, a score of 51, with N_O = 331, N_IO = 121 and N_NO = 159, was obtained. This was a slight improvement compared to using the event-based decision tree and SVM. Figure 6 shows the distribution of the predictions from the three methods presented, where method 1 is the ensemble of data driven methods with only parametric data as input, method 2 is the event based method with SVM, and method 3 is the event based method with an ensemble of data driven methods.

5. CONCLUSION
Methodologies for recommending maintenance actions based on event codes and machinery parametric data obtained remotely have been presented. The large amount of data with unconfirmed problems (nuisance data) compared to the confirmed problems introduced a high rate of misclassification, especially when using only the parametric data. However, incorporating the event codes in classifying the problems was found to yield better results. This led to our team being ranked third in the 2013 PHM data challenge. The method could be further improved to reduce the number of incorrect and nuisance outputs.

ACKNOWLEDGMENT
The authors wish to acknowledge the support of the German Academic Exchange Service (DAAD) and the Ministry of Higher Education, Science and Technology (MoHEST), Kenya. The authors would also like to thank Jun.-Prof. Artus Krohn-Grimberghe for the fruitful discussions on big data analysis.

REFERENCES
Bobrowski, L., & Topczewska, M. (2004). Improving the k-NN classification with the Euclidean distance through linear data transformations. In Proceedings of the 4th Industrial Conference on Data Mining, ICDM.
Hsu, C. W., & Lin, C. J. (2002). A comparison of methods for multiclass support vector machines. IEEE Transactions on Neural Networks, 13, 415-425.
Kimotho, J., Sondermann-Woelke, C., Meyer, T., & Sextro, W. (2013). Machinery prognostic method based on multi-class support vector machines and hybrid differential evolution particle swarm optimization. Chemical Engineering Transactions, 33, 619-624.
Rajakarunakaran, S., Venkumar, P., Devaraj, D., & Rao, K. S. P. (2008). Artificial neural network approach for fault detection in rotary system. Applied Soft Computing, 8, 740-748.
Sutton, C. D. (2008). Classification and regression trees, bagging, and boosting. Handbook of Statistics, 24, 303-329.
Xue, F., & Yan, W. (2007). Parametric model-based anomaly detection for locomotive subsystems. In Proceedings of the International Joint Conference on Neural Networks.
Xue, F., Yan, W., Roddy, N., & Varma, A. (2006). Operational data based anomaly detection for locomotive diagnostics. In Proceedings of the International Conference on Machine Learning.
Yang, B.-S., Di, X., & Han, T. (2008). Random forests classifier for machine fault diagnosis. Journal of Mechanical Science and Technology, 22, 1716-1725.
Yang, Q., Liu, C., Zhang, D., & Wu, D. (2012). A new ensemble fault diagnosis method based on k-means algorithm. International Journal of Intelligent Engineering & Systems, 5, 9-16.