DATA PREPARATION FOR DATA MINING BASED ON NEURAL NETWORK: A CASE STUDY ON GERMAN CREDIT CLASSIFICATION DATASET



Maria Ulfah Siregar
Teknik Informatika, Fakultas Sains & Teknologi, UIN Sunan Kalijaga Yogyakarta (Email: ulfahtc96@yahoo.com)

Abstract

This paper gives a detailed data description and preparation of the German Credit Classification dataset before it is used for further processes in data mining or data warehousing. Data preparation is the longest and most difficult part of the data mining process. In general, readily available data is usually dirty, and sometimes no quality data is available at all. Five parts of data description and preparation are covered in this paper. The first part gives the name of the dataset, the number of examples, and the types of attributes. In the second part, some examples from the good and the bad class are given in the form of tables. Then, a preliminary process is carried out to detect missing values in each attribute. Next, the results of statistical analysis of the data are displayed as charts or category tables for each attribute. The last part is preprocessing, which comprises data cleaning, integration, and transformation. Based on the results obtained, three out of twenty attributes are deleted: Attribute 10, Attribute 18, and Attribute 20. The final data is therefore smaller than the original. Moreover, the data is distributed more normally and in suitable patterns, which is hoped to be helpful for further processes.

Keywords: Data preparation, data, missing value, data cleaning, data integration, data transformation, statistical analysis.

A. Introduction

Data preparation is an important issue for pattern recognition, information retrieval, machine learning, web intelligence, data warehousing, data mining, and other areas of research concerned with data.
It has been understood that data preparation practically takes approximately 80% of the total data engineering effort (Zhang et al., 2003), so data preparation is a crucial part of data-based research. Data preparation comprises techniques concerned with analyzing raw data so as to yield quality data (Zhang et al., 2003). In this paper, data is prepared for further usage in data mining based on a neural network. Data is explored in data mining to find the patterns in the data and, finally, to predict something (Lukawiecki, 2007). Data mining is the search for relationships and global patterns that exist in large databases but are 'hidden' among the vast amount of data (Sukumar). In other words, data mining can be used to extract knowledge from data (Negnevitsky, 2002). In the real world, data tend to be incomplete, noisy, and inconsistent; in other words, data is dirty (Han & Kamber). Incomplete data means there is a lack of attribute values, a lack

Kaunia, Vol. IV, No., Oktober 2008: 33-46

of certain attributes of interest, or only aggregate data. Meanwhile, noisy data means data containing errors or outliers, and inconsistent data means data containing discrepancies in codes or names. In addition, sometimes there is no quality data at all, and this will result in no quality mining results. We may say that data must be in a 'clean' form before it is processed. Data preparation is the longest and most difficult part of the data mining process (Borysowich, 2007). It starts with data selection, maximization of data quality, selection of data mining approaches, selection of data mining methods, and data transformation. However, this paper will only cover data cleaning, data integration, data transformation, data reduction, and data discretization. Data cleaning handles data that are incomplete, noisy, and inconsistent: it fills in missing values, smooths noisy data, identifies outliers, and corrects data inconsistencies. Meanwhile, data integration combines data from multiple sources to form a coherent data store, such as a data warehouse. Metadata, correlation analysis, data conflict detection, and resolution of semantic heterogeneity contribute towards smooth data integration. Data reduction obtains a representation that is reduced in volume but produces the same or similar analytical results. Data discretization is part of data reduction but is of particular importance, especially for numerical data. In this paper, data mining will derive some criteria for classifying customers into a good or a bad class. In the future, this classification can be used in processing the acceptance or rejection of loan requests from customers. For a 'good' customer, the bank will almost certainly process and accept his/her loan requests because of his/her lower credit risk. Otherwise, the bank may need extra data or further assessment.
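The missing-value detection that data cleaning begins with (spotting values coded as blanks or as special codes that cannot possibly occur) can be sketched in a few lines. The following Python fragment is purely illustrative; the column names and the set of missing-value codes are assumptions, not part of the dataset's actual encoding:

```python
# Illustrative sketch of the missing-value check described above.
# The attribute names and the codes treated as "missing" are assumptions.

def count_missing(rows, missing_codes=("", "?", "NA")):
    """Count, per attribute, values that look like missing-value codes."""
    counts = {}
    for row in rows:
        for attr, value in row.items():
            if str(value).strip() in missing_codes:
                counts[attr] = counts.get(attr, 0) + 1
    return counts

# Two hypothetical records, one clean and one with missing codes.
sample = [
    {"duration": 6, "purpose": "radio-tv"},
    {"duration": "?", "purpose": ""},
]
print(count_missing(sample))  # {'duration': 1, 'purpose': 1}
```

An empty result from such a check is what justifies skipping the fill-in step, as is the case for this dataset.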
From this data, we are looking for 'clues' that lead us to the decision of whether a customer should be classified into the good or the bad class. Customers who are classified into the good class have certain data that place them in this class. We can consider relationships such as having a good history in previous credits, having property that secures the loan, having a stable account, having a good job, and so on, in deciding into which of the two classes a customer falls. Credit risk management is an important and crucial part of financial and regulatory institutions, due to the measurement and attenuation of unexpected losses for the credits in a portfolio (Mihail et al.). The recent notorious bankruptcies in the US and other countries prove its importance. By using credit risk management, institutions will be better informed in their decisions on whether or not to sanction a loan request. The assessment of a loan proposal requires a lot of expertise, experience, and a systematic approach (Kulkarni & Sreekantha, 2008). The following are some of the benefits of credit risk measurement (Kulkarni & Sreekantha, 2008):
1. It facilitates informed credit decisions consistent with the bank's risk appetite.
2. It provides the ability to price products on the basis of risk.
3. It facilitates dynamic provisioning and minimizes the impact of losses.

4. Lending decisions can be taken with minimum time; thus, lending business volume can increase substantially.

B. Method

Data preparation is initiated with the following data descriptions:

1. Name of Dataset
The German Credit Classification dataset, obtained from the UCI (University of California, Irvine) Machine Learning Repository, was used in this study. We chose this dataset because its number of examples is sufficient and its values for each attribute are complete.

2. Brief Description
The number of examples in the dataset is 1000. The dataset is classified into two classes: a good and a bad class. The good class has 700 examples whereas the bad one has 300. The dataset has 20 attributes, labeled A1 to A20. Seven of the attributes are of continuous (numerical) type, while the other 13 are categorical. The description of each attribute is given as follows:
o Attribute 1: status of existing checking account. A11: ... < 0 DM; A12: 0 ≤ ... < 200 DM; A13: ... ≥ 200 DM / salary assignments for at least 1 year; A14: no checking account.
o Attribute 2: duration of loan in months.
o Attribute 3: credit history. A30: no credits taken / all credits paid back duly; A31: all credits at this bank paid back duly; A32: existing credits paid back duly till now; A33: delay in paying off in the past; A34: critical account / other credits existing (not at this bank).
o Attribute 4: purpose. A40: car (new); A41: car (used); A42: furniture/equipment; A43: radio/television; A44: domestic appliances; A45: repairs; A46: education; A47: vacation; A48: retraining; A49: business; A410: others.
o Attribute 5: credit amount.
o Attribute 6: savings account/bonds. A61: ... < 100 DM; A62: 100 ≤ ... < 500 DM; A63: 500 ≤ ... < 1000 DM; A64: ... ≥ 1000 DM; A65: unknown/no savings account.
o Attribute 7: present employment since. A71: unemployed; A72: ... < 1 year; A73: 1 ≤ ... < 4 years; A74: 4 ≤ ... < 7 years; A75: ... ≥ 7 years.
o Attribute 8: installment rate in percentage of disposable income.
o Attribute 9: personal status and sex. A91: male, divorced/separated; A92: female, divorced/separated; A93: male, single; A94: male, married/widowed; A95: female, single.
o Attribute 10: other debtors/guarantors. A101: none; A102: co-applicant; A103: guarantor.
o Attribute 11: present residence since.
o Attribute 12: property. A121: real estate; A122: if not A121, building society savings agreement/life insurance; A123: if not A121/A122, car or other, not in attribute 6; A124: unknown/no property.
o Attribute 13: age in years.
o Attribute 14: other installment plans. A141: bank; A142: stores; A143: none.
o Attribute 15: housing. A151: rent; A152: own; A153: for free.
o Attribute 16: number of existing credits at this bank.
o Attribute 17: job. A171: unemployed/unskilled, non-resident; A172: unskilled, resident; A173: skilled employee/official; A174: management/self-employed/highly qualified employee/officer.
o Attribute 18: number of people being liable to provide maintenance for.
o Attribute 19: telephone. A191: none; A192: yes, registered under the customer's name.
o Attribute 20: foreign worker. A201: yes; A202: no.

3. Source of Data Before Preprocessing
In order to give a deeper understanding of the dataset, five examples from the upper lines of each class are depicted in the separate tables below.

Table 1: Examples from good class
A1: no-account, no-account, less-dm, 0DM, no-account
A2: 8
A3: duly-till-now, duly-till-now, duly-till-now, duly-till-now, duly-till-now
A4: radio-tv, radio-tv, furniture, education, radio-tv
A5: 766, 6, 35, 963
A6: unknown, unknown, unknown, unknown, less100dm
A7: seven-years, one-year, four-years, four-years, seven-years
A9: married-male, female-divorced, female-divorced, female-divorced, single-male
A12: building-society, real-estate, building-society, building-society, car
A13: 9, 8, 3, 3
A14: bank

Table 2: (Continued)
A15: rent, rent, own, rent, rent
A16: 1, 1, 1, 1
A17: resident, management
A19: no, no, no, no, no
Target: good, good, good, good, good

Table 3: Examples from bad class
A1: less-dm, less-dm, 0DM, 0DM, 0DM
A2: 9, 8
A3: duly-till-now, duly-till-now, duly-till-now, duly-till-now, duly-till-now
A4: repairs, used-car, radio-tv, used-car, furniture
A5: 639, 56, 366, 97, 335
A6: less100dm, less100dm, less100dm, less100dm, less100dm
A7: four-years, four-years, one-year, seven-years, over-seven
A8: 3
A9: single-male, female-divorced, female-divorced, single-male, single-male

Table 4: (Continued)
A12: car, car, building-society, building-society
A13: 3, 3, 39, 39
A14: stores
A15: own, rent, rent, free, rent
A16: 3
A17: management, management
A19: no, no
Target: bad, bad, bad, bad, bad

4. List of Attributes With Their Types (Categorical, Nominal, or Continuous)
In this dataset, there are only two types of attributes, categorical and continuous.

Table 5: Attributes and their types
1. Status (qualitative, categorical)
2. Duration (numerical, continuous)
3. Credit-history (qualitative, categorical)
4. Purpose (qualitative, categorical)
5. Credit (numerical, continuous)
6. Savings-account (qualitative, categorical)
7. Employment (qualitative, categorical)
8. Installment-rate (numerical, continuous)
9. Personal-status (qualitative, categorical)
10. Debtors (qualitative, categorical)
11. Residence-time (numerical, continuous)
12. Property (qualitative, categorical)
13. Age (numerical, continuous)
14. Installments (qualitative, categorical)
15. Housing (qualitative, categorical); accepted values: rent, own, free
16. Existing-credits (numerical, continuous)
17. Job (qualitative, categorical)
18. Liable-people (numerical, continuous)
19. Telephone (qualitative, categorical)
20. Foreign (qualitative, categorical)

After examining the whole dataset, we found that there are no missing values in any attribute or object. The detection and filling in of missing values is one step of data cleaning. A missing value here refers to data that may be coded as a blank or assigned a special value that cannot possibly occur; such errors may occur if the data was not obtained from observation or if it was lost. The next step in this study is statistical analysis of the data. For categorical attributes, the analysis can easily be carried out based on frequency. After obtaining frequency tables for each categorical attribute, we can plot the frequencies with graphical tools; in this paper, we use Excel charts and choose bar charts to represent them. Here, we describe just one attribute, A1, in detail; the other attributes will be summarized in table form.

[Figure 1: Bar chart of attribute status (A1) before preprocessing]

Although the data distribution is not normal, we did not find any outliers. Outliers are data from the same attribute that are very different from the rest of the data being observed.
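The frequency analysis used above for a categorical attribute such as A1 can be sketched as follows. This Python fragment is illustrative only (the paper itself used Excel bar charts), and the category labels are stand-ins rather than rows from the real dataset:

```python
from collections import Counter

# Illustrative values for an attribute like A1 (status of checking
# account); the labels and counts are assumptions, not real data.
values = ["no-account", "less-0DM", "no-account", "0-200DM", "no-account"]

# Build the frequency table and render a crude text bar chart.
freq = Counter(values)
for category, count in freq.most_common():
    print(f"{category:12s} {'#' * count}  ({count})")
```

The same frequency table is what the bar chart in Figure 1 visualizes; a strongly unbalanced table is also the first place an outlier category would show up.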
Outliers can be detected by examining a frequency table, a chart, or a cluster analysis; if cluster analysis is used, the outliers will lie outside the clusters. To obtain a more normal distribution for the data, one alternative is to combine the less-DM and over-DM categories. The rest of the categorical attributes are listed in the following table:

Table 6: Attributes and suggestions for them
A3: Categories all-paid-duly and bank-paid-duly can be combined into one category.
A4: The vacation category had better be ignored and deleted. Categories retraining, domestic-app, and repairs can be integrated into one category, others.
A6: Categories less-1000DM and over-1000DM can be combined into a new category, greater-than-or-equal-500DM.
A7: Categories one-year and four-years can be combined into a new category, less-than-four-years. Likewise, categories seven-years and over-seven can be combined into a new category, greater-than-or-equal-four-years.
A9: Integrate its categories into just two categories, one for male and one for female.
A10: Deleted.
A12: No modification.
A14: The bank and stores categories can be combined into one category.
A15: rent and free can be combined into one category.
A17: unemployed/unskilled non-resident, unskilled resident, and management/self-employed/highly qualified employee/officer are combined into one category.
A19: No modification.
A20: Deleted.

For continuous data types, it is difficult to do a direct analysis by frequency, especially if the data varies strongly. To solve this problem, it is better to put the data into interval categories, as summarized in the following table:

Table 7: Analysis of continuous attributes
A2: Data '72' might be an outlier. Its value has a very large difference from the others, and its frequency is only one, which is small. This object might come from another population, one with large durations.
A5: Examining the credit attribute, data '18424' has a range far from the previous data, while the other values differ smoothly from each other. The data distribution is quite good; moreover, no outliers are found.
A8: Same as A5.
A13: Same as A8.
A16: Although the distribution is not normal, no outliers were found.
A18: The distribution of the data of this attribute is not normal.
We decided that deleting this attribute will not interfere with the other attributes. Univariate examination is sometimes not sufficient to identify all the errors existing in the data. Therefore, attributes should be cross-related in order to determine whether there is any relationship between attributes that is commonly known not to be true. For categorical attributes, we can use cross-tabulation to do this. Here, we perform cross-tabulation on the "job" and "employment" attributes, since the relationship between them is generally known.

Table 8: Cross-tabulation between the job and employment attributes
Job \ Employment: unemployed, one-year, four-years, seven-years, over-seven
Rows: unemployed-non-resident, un-resident, employee, management; counts: 6, 33, 5, 7, 6, 8, 3, 8, 3, 9, 6, 5

From Table 8, it can be seen that the five unemployed-non-residents who have held their job for one year, the one unemployed-non-resident who has held his/her job for four years, the one un-resident employee who is unemployed, the employees who are unemployed, and the 33

employees with management jobs who are unemployed, are known not to be true relationships. These errors would be difficult to find if we relied only on univariate examination. For continuous attributes, a scatter graph can be used to analyze the correlation between attributes (Kannan). There are two steps in determining the existence and degree of linear association between two attributes (Afifi & Azen, 1979). First, we plot the points (x1, y1), (x2, y2), ..., (xn, yn) of the two attributes in the x-y plane; the resulting graph is called a scattergram (scatter graph). After that, the simple coefficient of correlation can be calculated using the following equation:

r = Σ(xi - x̄)(yi - ȳ) / √[ Σ(xi - x̄)² · Σ(yi - ȳ)² ]

The correlation coefficient value r lies in the interval -1 ≤ r ≤ 1. The closer the correlation coefficient value is to -1 or 1, the stronger the relationship between the attributes. A positive value of r means that when the value of one attribute increases, the value of the other attribute also increases, which shows that they have a positive correlation. Meanwhile, a negative value of r means that when the value of one attribute increases, the value of the other decreases. The extreme cases occur when r = ±1, which indicates a perfect linear association between the two attributes: given the value of one attribute, the value of the other can be determined exactly. Figure 2 below shows that duration and credit are related positively and linearly. This is also reflected in their correlation value, 0.6, which is fairly close to 1. There are up to 15 pairs of correlations among the continuous attributes that can be drawn from the dataset. However, only one correlation is displayed graphically in this paper, while the other correlations are summarized in a correlation matrix, depicted in Table 9.

[Figure 2: scatter plot of credit amount against duration (in months); legend: Good, Bad, Linear (Good), Linear (Bad)]
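The simple coefficient of correlation defined above can be computed directly from its formula. The Python fragment below is a sketch; the duration and credit values are illustrative pairs, not actual rows of the dataset:

```python
import math

def pearson_r(xs, ys):
    """Simple coefficient of correlation between two attributes,
    following r = sum((x - x_bar)(y - y_bar)) / (sd_x * sd_y)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Illustrative duration (months) / credit amount (DM) pairs.
duration = [6, 12, 24, 36, 48]
credit = [1100, 2300, 4000, 6100, 7900]
print(round(pearson_r(duration, credit), 3))
```

A value near 1, as in this sketch, corresponds to the strongly positive, nearly linear relation the scatter graph shows for duration against credit.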

Figure 2: Scatter graph of attributes A2 and A5

Table 9: Correlation matrix for attributes A2 to A16
       A5      A8      A11      A13      A16
A2     .698    .779    .367     -.36     -.8
A5             -.7357  .89633   .3767    .7955
A8                     .9337    .586568  .6687
A11                             .6698    .89653
A13                                      .95358

After the statistical analysis of the data, the next step is data preprocessing, which begins with data cleaning. If there are missing values, we can either fill in the missing values manually or delete any object with missing values in more than one attribute. The value filled in can be obtained from a global constant, the attribute mean, or the most probable value. However, since the analysis indicates that the dataset does not contain any missing values, this process was not carried out. If the data varies largely, we can smooth the noise in the data using a binning technique. To do this, we first sort the data and then distribute it into a number of bins; after that, the data in each bin is smoothed either by bin means, bin medians, or bin boundaries. For categorical attributes, some of the categories can be combined to obtain a more normal distribution. In addition, a few of the attributes considered meaningless are deleted. The results are depicted in Table 10.

Table 10: Categorical attributes after preprocessing
- less-0DM; greater-than-or-equal-0DM
- paid-duly; delay; critical
- exist; not exist
- exist, no need further preprocessing
- education; business; others
- less-100DM; less-500DM; greater-than-or-equal-500DM; unknown
- unemployed; less-four-years; greater-than-or-equal-four-years
- male; female
- real-estate; building-society
- exist; exist
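The binning procedure just described (sort the data, distribute it into bins, then smooth by bin means) can be sketched as follows. This Python fragment illustrates the equal-depth variant with made-up duration values, not the actual 1000-row column:

```python
# Sketch of equal-depth (frequency) partitioning followed by smoothing
# by bin means. The duration values are illustrative assumptions.

def equal_depth_bins(values, n_bins):
    """Split sorted values into n_bins bins of (roughly) equal size."""
    ordered = sorted(values)
    size, rem = divmod(len(ordered), n_bins)
    bins, start = [], 0
    for i in range(n_bins):
        end = start + size + (1 if i < rem else 0)
        bins.append(ordered[start:end])
        start = end
    return bins

def smooth_by_means(bins):
    """Replace every value in a bin by that bin's mean."""
    return [[sum(b) / len(b)] * len(b) for b in bins]

durations = [6, 9, 12, 12, 18, 24, 24, 30, 36, 48, 60, 72]
bins = equal_depth_bins(durations, 3)
print(smooth_by_means(bins))
```

Smoothing by bin medians or bin boundaries differs only in the value each bin's members are replaced with; equal-depth partitioning is preferred here because, as noted above, the data is skewed.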

For continuous attributes, we will describe them one by one separately. The duration attribute is partitioned into 3 bins using the equal-depth (frequency) partitioning method. Three intervals are chosen because they give almost the same number of examples. The equal-depth partitioning method is used here since the data is skewed. After the data was partitioned into 3 bins, it was smoothed by bin means, which can be seen as follows:

Table 11: Duration attribute (A2)
Interval | Number of examples | Bin mean
-3 | 359 | 9.86
-5 | 3 | .
-7 |  | 39.5

The credit attribute is partitioned into 7 bins to accommodate the same number of examples in each bin. After being partitioned and smoothed by bin means, we obtain the following:

Table 12: Credit attribute (A5)
Interval | Number of examples | Bin mean
250-1000 DM | 6 | 73.53
>1000 & <1500 DM | 9 | 8.395
>1500 & <2000 DM | 6 | 76.57
>2000 & <3000 DM | 88 | 6.697
>3000 & <4000 DM | 3 | 39.8
>4000 & <7000 DM | 5 | 538.965
>7000 DM |  | 985.3

The installment-rate attribute is partitioned into 4 bins using the equal-width (distance) partitioning method. By substituting N = 4, B (the maximum value) = 4, and A (the minimum value) = 1 into the width-interval formula, we obtain:

W = (B - A) / N = (4 - 1) / 4 = 0.75

Since the data in each interval is already uniform, we do not need to perform data smoothing.

Table 13: Installment-rate attribute (A8)
Interval | Number of examples
1-1.75 | 136
1.76-2.5 | 231
2.51-3.25 | 157
3.26-4 | 476

The residence-time attribute is also partitioned into 4 bins using the equal-width (distance) partitioning method. The same as before, we substitute N = 4, B (the maximum value) = 4, and A (the minimum value) = 1 into the width-interval formula to obtain:

W = (B - A) / N = (4 - 1) / 4 = 0.75

For this attribute, we do not need to perform data smoothing, since the data in each interval is already uniform.

Table 14: Residence-time attribute (A11)
Interval | Number of examples
1-1.75 | 130
1.76-2.5 | 308
2.51-3.25 | 149
3.26-4 | 413

As before, the age attribute is also partitioned into 3 bins. The reasons are not only the similar numbers of member examples in each interval but also the ages themselves. After being partitioned into 3 bins and smoothed by bin boundaries, the following table is obtained:

Table 15: Age attribute (A13)
Interval | Number of members | Data before smoothing
19-30 | 35 | (19, 20, 21, ..., 30)
31-40 |  | (31, 32, 33, ..., 40)
41-75 | 7 | (41, 42, 43, ..., 75)
Data after smoothing: 9 = 9; 3 = 6; 3 = 77; = 38; 5-6; 75 = 8

The existing-credits attribute is partitioned into 2 bins using the equal-depth (frequency) partitioning method. Then, we perform data smoothing using bin boundaries.

Table 16: Existing-credits attribute (A16)
Interval | Number of examples | Data
< 2 | 633 | 1 = 633
2-4 | 367 | 2 = 333; 4 = 34

After preprocessing is done, the resulting dataset differs from the original one, as summarized in the following tables:

Table 17: Examples from good class after preprocessing
A1: 0.5
A2: 39.5, ., ., 9.86, 9.86
A3: 0.333, 0.333, 0.333, 0.333, 0.333
A4: 0.5, 0.5, 0.333, 0.667, 0.5
A5: 985.3, 8.395, 538.965, 8.395, 76.57
A7: 0.5, 0.5, 0.5

Table 18: (Continued)
A12: 0.333, 0.333, 0.333, 0.667
A14: 9, 9, 9
Target: good
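The equal-width (distance) partitioning used for the installment-rate and residence-time attributes, with W = (B - A) / N, can be sketched as follows. The rate values below are illustrative, not the dataset's actual column:

```python
# Sketch of equal-width (distance) partitioning with W = (B - A) / N.
# Illustrated with installment-rate-like values in [1, 4] and N = 4.

def equal_width_bins(values, n_bins):
    """Assign each value to one of n_bins intervals of equal width."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins              # here: (4 - 1) / 4 = 0.75
    assignments = {}
    for v in values:
        # Clamp so that the maximum value falls into the last bin.
        idx = min(int((v - lo) / width), n_bins - 1)
        assignments.setdefault(idx, []).append(v)
    return width, assignments

rates = [1, 2, 2, 3, 4, 4, 4, 1, 3, 2]
width, bins = equal_width_bins(rates, 4)
print(width)   # 0.75
print(bins)
```

Since every value inside an interval is already identical (the rates are the integers 1 to 4), no further smoothing is needed, which matches the remark made for A8 and A11 above.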

Table 19: Examples from bad class after preprocessing
A1: 0.5, 0.5
A2: 9.86, ., 9.86, 39.5, .
A3: 0.33, 0.33, 0.33, 0.33, 0.33
A4: 0.67, 0.5, 0.67, 0.333
A5: 73.53, 985.3, 8.395, 985.3, 39.8
A7: 0.5, 0.5, 0.5
A8: 3

Table 20: (Continued)
A12: 0.667, 0.667, 0.333, 0.333
A13: 9, 9, 9, 3, 3
Target: bad

If the data originates from multiple sources, data integration is needed to combine such data into a coherent data store. Since the dataset used in this study comes from a single source, the UCI Machine Learning Repository, it does not require integration. The next step is data transformation. In this step, the data is transformed or consolidated into forms appropriate for the mining process. During transformation, we apply min-max normalization to the examples so that their values can propagate into the mining process. Min-max normalization is given by the following formula:

v' = ((v - min) / (max - min)) × (new_max - new_min) + new_min

After transformation, the value of each attribute lies between 0 and 1. This is common for neural networks, although another range of values, between -1 and 1, is also used.

C. Results

Finally, we come to the results, as shown in the following tables. The results are given as five examples from the upper lines of the dataset for the good and the bad class.

Table 21: Examples from good class
A1: 0.5
A2: 0.3683, 0.3683
A3: 0.333, 0.333, 0.333, 0.333, 0.333
A4: 0.5, 0.5, 0.333, 0.667, 0.5
A5: 0.6988, 0.599, 0.6988, 0.5
A6: 1
A7: 0.5, 0.5, 0.5
A8: 0.333333

Table 22: (Continued)
A9: 0.333333, 0.333333
A12: 0.333, 0.333, 0.333, 0.667
A13: 0.39857, 0.375
Target: good

Table 23: Examples from bad class after transformation
A1: 0.5, 0.5
A2: 0.3683, 0.3683
A3: 0.33, 0.33, 0.33, 0.33, 0.33
A4: 0.67, 0.5, 0.67, 0.333
A5: 0.6988, 0.339
A7: 0.5, 0.5, 0.5
A8: 0.666667

Table 24: (Continued)
A9: 0.333333, 0.333333
A12: 0.667, 0.667, 0.333, 0.333
A13: 0.86, 0.86
Target: bad

D. Conclusion

This paper has explained basic data preparation, which can be used to identify the good data and the bad. The dataset used in this study was the German Credit Classification dataset. As summarized in the previous sections, some outliers were found. These outliers can be treated either by deleting the objects in which they appear in only one attribute, or by deleting the attributes. Therefore, as can be seen from the results, three attributes have been deleted; we assume that these attributes contribute less significantly to loan sanctioning. They are other debtors/guarantors, the number of people being liable to provide maintenance for, and foreign worker. Data preprocessing also showed that there are relationships or correlations between attributes, as exemplified by the correlation between the duration of the loan and the credit amount; this can be deduced from their correlation coefficient, which is close to 1. As the result tables capturing a few of the data after preprocessing show, the values of the attributes lie between 0 and 1, which are very small numbers. We can say that the data is smoother, which suggests that it has suitable patterns for a data mining process based on a neural network. Although some customers have a bad credit history or risky credit experience, categorized as critical or delay on the credit-history attribute, it is found that it is still possible for them to obtain credit, as they can be included in the good class. Finally, the dataset used in this paper may help in decision-making about customers' loan requests.
Therefore, the experts involved in credits and loans at the bank should be highly qualified, so that they are able to understand the data well.
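The min-max normalization step that produced the [0, 1] values summarized above can be sketched in a few lines. The following Python fragment is illustrative; the duration values are assumptions, not the dataset's actual column:

```python
# Sketch of min-max normalization:
# v' = (v - min) / (max - min) * (new_max - new_min) + new_min
# The duration values below are illustrative assumptions.

def min_max(values, new_min=0.0, new_max=1.0):
    """Rescale values linearly into [new_min, new_max]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) * (new_max - new_min) + new_min
            for v in values]

durations = [4, 12, 24, 72]
print(min_max(durations))  # every value now lies between 0 and 1
```

Passing new_min=-1.0 and new_max=1.0 instead gives the alternative [-1, 1] range mentioned for neural network inputs.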

REFERENCES

Afifi, A.A., & Azen, S.P. 1979. Statistical Analysis: A Computer Oriented Approach, Second Edition. Academic Press.

Borysowich, Craig. 2007. Preparing Data for Data Mining. Accessed from http://it.toolbox.com/blogs/enterprise-solutions/preparing-data-for-data-mining-17755, 16th of February 2009.

Han, Jiawei & Kamber, Micheline. Data Mining: Concepts and Techniques. Accessed from http://www.cs.sfu.ca/~han/bk/adbminer.ppt, 16th of February 2009.

Hofmann, H. 2007. UCI Machine Learning Repository [http://www.ics.uci.edu/~mlearn/MLRepository.html]. Irvine, CA: University of California, School of Information and Computer Science.

Kannan, Kavitha. Data Mining Report on Iris and Australian Credit Card Dataset. Academic paper. School of Computer Science and Information Technology, Universiti Putra Malaysia.

Kulkarni, R.V. & Sreekantha, D.K. 2008. Knowledgebase System Design for Credit Risk Evaluation Using Neuro-Fuzzy Logic. Proc. of NCKM.

Lukawiecki, Rafal. 2007. Introduction to Data Mining. Accessed from http://download.microsoft.com/documents/uk/technet/postevent/--8/_Introduction_to_Data_Mining.pptx, 16th of February 2009.

Mihail, N., Cetina, I., and Orzan, G. Credit Risk Evaluation. Theoretical and Applied Economics.

Negnevitsky, Michael. 2002. Artificial Intelligence: A Guide to Intelligent Systems, First Edition. Addison-Wesley.

Sukumar, Rajagopal. Data Mining. Accessed from http://sirius-software.com/sug99/datmin.ppt, 16th of February 2009.

Zhang, S., Zhang, C.Q., and Yang, Q. 2003. Data Preparation for Data Mining. Applied Artificial Intelligence, 17:375-381.