PLAANN as a Classification Tool for Customer Intelligence in Banking


EUNITE World Competition in the Domain of Intelligent Technologies. The Research Report

Ireneusz Czarnowski and Piotr Jedrzejowicz
Department of Information Systems, Gdynia Maritime University, Morska 83, Gdynia, Poland

Abstract. The paper reports on the use of PLAANN as a classification tool for customer intelligence in banking. PLAANN is a software tool consisting of several ANN-based classifiers which are trained using the population learning algorithm (PLA). The paper briefly reviews the PLA, explains its application to training ANN, describes the functions and operation modes of PLAANN and, finally, reports on experiments with the set of patterns provided under the EUNITE World Competition.

1 Introduction

One of the major application categories of artificial neural networks (ANN) is classification. The idea is to recognize given patterns and assign them to a typically much smaller number of pattern groups; the assigned group is the output of the network in this type of application. This is accomplished by training the neural network using sample patterns and their correct answers. When the neural network is properly trained, it can, hopefully, give correct answers not only for the sample patterns but also for new, similar patterns. The main advantages of the approach include the ability to tolerate imprecision and uncertainty while still achieving tractability, robustness, and low cost in practical applications. Since training a neural network for a practical application is often very time consuming, extensive research is being carried out to accelerate this process. Another problem with ANN training methods is the danger of being caught in a local optimum. Hence, researchers look not merely for algorithms that train neural networks quickly, but for quick algorithms that are not likely, or less likely, to get trapped in a local optimum.
One of the possible approaches to training ANN is to use population-based methods, which are known to be useful in solving a variety of difficult computational problems [5, 6, 7]. Unfortunately, population-based approaches quite often require substantial computational resources, rendering them impracticable. A possible solution is

to combine the robustness of population-based methods with the efficiency and speed of heuristic algorithms and neighbourhood search techniques. This idea led to a variant of the population-based approach, the population learning algorithm (PLA), proposed in [4]. The possibility of applying population learning algorithms to train ANN has been investigated in earlier papers by the authors [1, 2, 3]. Several versions of the PLA have been designed, implemented and applied to solving a variety of benchmark problems. Initial results were promising, showing good or very good performance of the PLA as a tool for ANN training [2]. The paper focuses on describing the features and uses of the proposed PLAANN (Population Learning Algorithm and Artificial Neural Network) classification tool, designed for solving classification problems in banking intelligence. The following sections give a brief description of the PLA, provide information on the intelligent technology used to implement the tool, describe data handling procedures, discuss experimental results and suggest how to use the PLAANN classifier as an adaptive learning system. Conclusions include some comments on the features of the technologies used.

2 Population Learning Algorithm

The population learning algorithm is a population-based method inspired by analogies to social education processes, in which a diminishing number of individuals enter more and more advanced learning stages. In PLA an individual represents a coded solution of the considered problem. Initially, a number of individuals, known as the initial population, is randomly generated. Increasing the initial population size can be considered a means of diversification, helping to escape from local optima. Once the initial population has been generated, individuals enter the first learning stage. It involves applying some, possibly basic and elementary, improvement schemes or conducting simple learning sessions.
These can be based on some local search procedures. The improved individuals are then evaluated, and the better ones pass to the subsequent stages. A strategy of selecting the better or more promising individuals must be defined and duly applied. At the following stages the whole cycle is repeated. Individuals are subject to improvement and learning, either individually or through information exchange, and the selected ones are again promoted to a higher stage, with the remaining ones dropped out from the process. At the final stage the remaining individuals are reviewed and the best one represents a solution to the problem at hand. The learning process at early stages can be run in parallel. Individuals are then grouped into classes with possibly different curricula, that is, different improvement schemes. At a certain level the best individuals from all groups join together to form higher-level groups, where the improvement and learning processes are still carried out in parallel. At some stage the selected individuals are brought together to complete their education. At different stages of the process, different improvement schemes and learning procedures are applied. These gradually become more and more sophisticated and, possibly, time consuming, as there are fewer and fewer individuals to be taught. Finally,
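The staged learn-and-select scheme described above can be sketched in Python on a toy maximization problem (all names, the greedy-jitter learning procedure and the above-average selection rule are illustrative assumptions, not the authors' implementation):

```python
import random

def pla(fitness, new_individual, learn_stages, select_stages, pop_size=40):
    """Sketch of a population learning loop: a population passes through
    successive stages; each stage improves individuals and promotes the
    more promising ones."""
    population = [new_individual() for _ in range(pop_size)]
    for learn, select in zip(learn_stages, select_stages):
        population = [learn(ind) for ind in population]  # learning/improvement
        population = select(population)                  # promote better individuals
    return max(population, key=fitness)                  # best of the final population

# Toy problem: maximize f(x) = -(x - 3)^2 over the reals.
f = lambda x: -(x - 3.0) ** 2
# Greedy jitter: accept a random perturbation only if fitness improves.
jitter = lambda step: (lambda x: max((x, x + random.uniform(-step, step)), key=f))
# Selection: drop individuals with below-average fitness.
keep_above_average = lambda pop: [x for x in pop
                                  if f(x) >= sum(map(f, pop)) / len(pop)] or pop

random.seed(0)
best = pla(f, lambda: random.uniform(-10.0, 10.0),
           learn_stages=[jitter(1.0), jitter(0.1), jitter(0.01)],
           select_stages=[keep_above_average] * 3)
```

Successive stages here use progressively finer improvement steps, mirroring the increasingly sophisticated procedures described above.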

after having passed all the prescribed stages, the final population is analysed with a view to selecting the fittest individual.

START
Set the number of learning stages N and the initial population size m
Define learning/improvement procedures LEARN_i(P), i = 1...N, operating on a population of individuals P
Define selection procedures SELECT_i(P), i = 1...N, operating on a population of individuals P
Generate the initial population; set P := initial population
For i := 1 to N do: P := SELECT_i(LEARN_i(P))
Consider the best individual from P as a solution
END

Fig. 1. General idea of the population learning algorithm

3 The PLAANN Classifier

The proposed PLAANN classifier is a software tool based on a set of artificial neural networks trained by a dedicated population learning algorithm. The main functions of PLAANN include: pre-processing input data sets (both training and test), training artificial neural networks, and classification of patterns.

3.1 Pre-processing Input Data Sets

To assure the efficiency of the classifier it has been decided to apply the Input Data Transformation Algorithm, whose role is to decrease the size of the training data set by

first partitioning it into clusters of similar patterns and afterwards considering only a representation of similar patterns. The number of such representatives, known as the inspiration level, has to be set by the user at the fine-tuning phase. The choice of its value requires finding a compromise between the time needed for training the ANN, the quality of the classification results and the size of the available input data set. Pseudo-code of the Input Data Transformation Algorithm is shown in Fig. 2.

Procedure Input_data_transformation
{N - number of patterns; n - number of attributes; X - input data set, a matrix of n columns and N rows; x_ij - data set element (i = 1...N, j = 1...n); T_tr - training data set; T_ts - testing data set}
Begin
  Transform X, normalizing each x_ij into the interval <0,1> and then rounding it to 0 or 1;
  For j := 1 to n do
    Calculate Sum_j = SUM_{i=1...N} x_ij;
  End for
  For i := 1 to N do
    Calculate I_i = SUM_{j=1...n} x_ij * Sum_j;
  End for
  Map elements of X into t subsets, each containing data elements with identical values of I_i, where t is the number of different values of I_i (i = 1...N);
  Construct the training set T_tr taking k patterns from each thus created subset, where k <= inspiration_level;
  Construct the testing set T_ts = X \ T_tr;
End

Fig. 2. Pseudo-code of the Input Data Transformation Algorithm

3.2 ANN Training Algorithm

To train the artificial neural networks, an implementation of the population learning algorithm originally proposed in [4] is used. In order to increase the efficiency of the approach it has been decided to use a parallel computing environment based on PVM (Parallel Virtual Machine). This allows running parallel learning processes, or groups of such processes, and thus speeds up ANN training.
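The pseudo-code of Fig. 2 can be rendered as the following Python sketch (function and variable names are our own; it returns training/testing index sets rather than the data itself):

```python
import random
from collections import defaultdict

def input_data_transformation(X, inspiration_level, seed=0):
    """Binarize the data, weight each row by the column sums, group rows
    with an identical weighted sum I_i, and draw at most
    `inspiration_level` representatives per group for the training set;
    all remaining rows form the testing set."""
    lo = [min(col) for col in zip(*X)]
    hi = [max(col) for col in zip(*X)]
    # Normalize each x_ij into <0,1> and round to 0 or 1.
    B = [[0 if hi[j] == lo[j] else round((x - lo[j]) / (hi[j] - lo[j]))
          for j, x in enumerate(row)] for row in X]
    col_sum = [sum(col) for col in zip(*B)]           # Sum_j
    groups = defaultdict(list)
    for idx, row in enumerate(B):
        I = sum(x * s for x, s in zip(row, col_sum))  # I_i
        groups[I].append(idx)
    rng = random.Random(seed)
    train = [i for members in groups.values()
             for i in rng.sample(members, min(inspiration_level, len(members)))]
    test = [i for i in range(len(X)) if i not in set(train)]
    return train, test

train, test = input_data_transformation([[0, 0], [0, 0], [1, 1], [1, 0]], 1)
```

In this tiny example the first two rows fall into one group, so with inspiration level 1 only one of them enters the training set; the other becomes the single test pattern.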

A neural network learns patterns by adjusting its weights. The learning process can be considered a search for weights of the connections between neurons such that the network outputs the correct target pattern for each input pattern. Within the proposed tool, the search for the required weights is carried out as a population learning scheme, in accordance with the principles of the population learning algorithm. Since the discussed approach assumes multiple classifiers working in parallel, each group of processes is used to train a single classifier (see Section 3.3). Processes within such a group deal with independent populations of weight vectors (here called individuals), using similar learning and improvement procedures. Processes within a group exchange information by forwarding their best individuals to the other processes. The search processes within such a group are performed by so-called slave workers, while information exchange and process coordination is handled by the master worker (one for each group of processes). The following features characterize the proposed parallel implementation of the PLA:
- The master worker defines the number of slave workers and the size of the initial population for each of them.
- Each slave worker uses identical learning/improvement procedures.
- The master worker activates parallel processing.
- After completing each stage, workers inform the master about the best solution found so far.
- The master worker compares the received values and sends the best solution out to all the workers, replacing their current worst individual.
- The master worker can stop the computations if the desired quality level of the objective function has been achieved. This level is defined at the beginning of the computations by setting the desired value of the mean squared error on a given set of training patterns.
- Slave workers can also stop their computations if the above condition has been met.
- Computation is carried out for a predefined number of stages.
Generally, the PLA code run by each slave worker is based on the following assumptions:
- An individual is a vector of real numbers from a predefined interval, each representing the value of the weight of the respective link between neurons in the considered ANN.
- The initial population of individuals is generated randomly from the predefined interval.
- There are five learning/improvement procedures: standard mutation, local search, non-uniform mutation, gradient mutation and gradient adjustment.
- There is a common selection criterion for all stages: at each stage, individuals with fitness below the current average are rejected.
The improvement procedures require some additional comments. The first procedure, standard mutation, modifies an individual by generating new values for two randomly selected elements within the individual. If the fitness function value has improved, the modification is accepted. The second learning/improvement procedure (local search) involves a mutual exchange of values between two randomly selected elements within an individual. If the fitness function value of the individual after such an exchange has improved, the modification is accepted.
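The first two accept-if-improved procedures can be sketched as follows (illustrative Python; `fitness` stands for a score to be maximized, e.g. the negated error on the training patterns):

```python
import random

def standard_mutation(ind, fitness, low=-1.0, high=1.0, rng=random):
    """Procedure 1: redraw two randomly selected elements of the weight
    vector; keep the change only if the fitness improves."""
    i, j = rng.sample(range(len(ind)), 2)
    cand = list(ind)
    cand[i] = rng.uniform(low, high)
    cand[j] = rng.uniform(low, high)
    return cand if fitness(cand) > fitness(ind) else ind

def element_exchange(ind, fitness, rng=random):
    """Procedure 2: swap the values of two randomly selected elements;
    accept only on improvement."""
    i, j = rng.sample(range(len(ind)), 2)
    cand = list(ind)
    cand[i], cand[j] = cand[j], cand[i]
    return cand if fitness(cand) > fitness(ind) else ind

# Toy fitness rewarding larger values at later positions.
f = lambda w: sum(i * x for i, x in enumerate(w))
```

Both operators are monotone by construction: the returned individual is never worse than the input one.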

The third learning/improvement procedure, non-uniform mutation, involves modifying an individual by repeatedly adjusting the value of a randomly selected element (in this case a real number) until the fitness function value has improved, or until a number of consecutive improvements have been attempted unsuccessfully. This number has to be set at the fine-tuning phase. The value of the adjustment is calculated as Delta(t, y) = y(1 - r^(1 - t/T)), where r is a uniformly distributed real number from (0, 1], T is equal to the length of the vector representing an individual, and t is the current number of the adjustment. The fourth improvement procedure, gradient mutation, changes two randomly selected elements within an individual by incrementing or decrementing their values. The direction of change (increment/decrement) is random, with identical probabilities equal to 0.5. The value of the change is proportional to the gradient of the individual. If the fitness function value of the individual has improved, the modification is accepted. The number of iterations for procedures 1, 2 and 4 has to be set at the fine-tuning phase. The fifth learning/improvement procedure, gradient adjustment, adjusts the value of each element of the individual by a value Delta proportional to its gradient. Delta is calculated as Delta = alpha * xi, where alpha is the factor determining the size of the step in the direction of xi and is known as the momentum. alpha takes values from (0, 1]; in the proposed algorithm its value iterates starting from 1 with a predefined step. xi is a vector determining the direction of the search and is equal to the gradient of the individual.

3.3 Pattern Classification

In general, the proposed classification tool is composed of K independent classifiers, one for each of the considered pattern classes. After all K classifiers have been trained, each one becomes an expert in recognizing patterns belonging to a single class.
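The resulting one-expert-per-class decision scheme can be sketched as follows (a minimal Python illustration; the trained networks are abstracted as callables returning a score, and ties are resolved in favour of the later class, which for K = 2 matches the strict C_1 > C_2 rule):

```python
def classify(pattern, experts):
    """Feed the pattern to each per-class expert and return the 1-based
    index of the class whose expert produced the highest output."""
    outputs = [expert(pattern) for expert in experts]
    # On a tie, prefer the larger class index: class one wins only if
    # its output is strictly greater.
    return max(range(len(experts)), key=lambda k: (outputs[k], k)) + 1

# Two stand-in "experts" for the active / non-active customer classes.
active = lambda p: 0.8
non_active = lambda p: 0.3
```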
In the case of the banking customer classification problem there are two classes (K = 2) and two independent classifiers, trained to recognize active and non-active customers, respectively. Each classifier is an artificial neural network of the MLP structure with 3 layers: input, hidden and output. The number of neurons in the input layer is equal to the number of attributes of the input pattern. The hidden layer has 15 neurons and the output layer consists of a single neuron. The range of weights is [-1, 1] and the sigmoid activation (transfer) function has its gain set to 1.0. The proposed tool decides how to classify an input pattern in the following three steps. The input pattern is read by each of the two classifiers (each specializing in a different class). Each of the classifiers produces an output value C_i, i = 1, 2, which is a real number from the (0, 1] interval.

If C_1 > C_2, the input pattern is classified as belonging to class one; otherwise it is considered as belonging to class two.

4 Using PLAANN

PLAANN has been coded in C++ and is ready to be run on Unix and Linux platforms. The parallel version has been implemented to be executed under the PVM environment; a sequential, non-parallel version of the tool is also available. There are three modes of running PLAANN:
1. Training classifiers and producing classification results using two data sets, one with training and another with testing patterns.
2. Training classifiers and producing classification results using a single input data set, which is automatically partitioned into training and testing data sets. The training data set is generated by the Input Data Transformation Algorithm applied to the original input data set. The testing data set includes all the remaining instances from the original input set which have not been included in the training set.
3. Classifying test patterns using already trained classifiers.
All input data sets need to be stored in a shared directory of the PVM environment. The directory can be exported via NFS to assure access to the data by all concurrent groups of processes. When running PLAANN the user has to specify the mode and provide the names of the respective data files (training, test or input). The second mode allows the user to experiment with different inspiration levels, with a view to finding the best compromise between the efficiency of the computations and the quality of the classification process. In each mode the user is expected to set the parameters controlling the classification processes. This requires modifying the configuration file, an example content of which is shown in Fig. 3.
Number of hidden units in the middle layer       15
Value of the sigmoid gain                        1.0
Range of weights                                 [-1, 1]
Number of improvement procedures (max 5)         5
Size of the initial population (max 1000)        150
Number of iterations per improvement procedure   80
Number of slave workers (max 32)                 5
Inspiration level                                10

Fig. 3. Example content of the configuration file

As the final result, PLAANN generates a classified test file containing the classification results. The file has the same structure as the training file. Besides, the tool automatically generates an additional file named info_results, which summarizes the classification results. An example content of the info_results file is shown in Fig. 4.

Running mode                                        2
Execution time [s]                                  805
Measure of accuracy on test data                    80
Number of patterns (input data set)                 -
Number of classes                                   2
Number of patterns (training data set)              364
Number of patterns in class 1 (training data set)   180
Number of patterns in class 2 (training data set)   184
Number of patterns (test data set)                  -

Fig. 4. Example content of the info_results file

With the three operating modes available, PLAANN can be used within an adaptive-learning loop. Initially, depending on the user's requirements, PLAANN should be used in mode 1 or 2. Mode 1 would be used when there is a need to classify an existing set of patterns and, at the same time, the user has at hand a set of reliable training data. More often, however, the user would have a set of patterns with classification results and would like a classifier able to classify future patterns of a similar kind. In such a case mode 2 should be used to train the tool and to estimate the efficiency and quality of the classification. If both are satisfactory, the user should run the system in mode 3 to classify incoming patterns. After a while, an adaptive loop can be closed: the initial data set used at the previous stage is merged with the new patterns obtained within a certain time period to produce a new input data set for mode 2. The frequency of such mergers and repeated training (involving perhaps also a scheme for deleting the oldest patterns) depends on the needs of the user and the structure of the patterns.

5 Experiment Results

PLAANN has been used to produce a set of classification results for the Customer Intelligence in the Bank problem, as provided by the organizers of the EUNITE world competition in the domain of intelligent technologies. The results have been stored in the file client_test.txt (attached to this paper), consisting of the testing patterns to which information about the class has been added by PLAANN.
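For reference, the "measure of accuracy on test data" reported in info_results corresponds to the usual percentage of correctly classified patterns (a sketch; PLAANN's own computation is not shown in the paper):

```python
def accuracy(predicted, actual):
    """Percentage of test patterns whose predicted class matches the
    actual class."""
    correct = sum(p == a for p, a in zip(predicted, actual))
    return 100.0 * correct / len(actual)
```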
To train the tool, the set of patterns provided in the file client_train.txt has been used. The computational experiment carried out to evaluate the quality of the proposed PLAANN implementation has been based on using the tool in mode 2, thus enabling a satisfactory inspiration level to be chosen during the fine-tuning phase. The client_train.txt file contains patterns, each with 36 attributes and a class value provided. In the experiment aimed at evaluating PLAANN performance on the Customer Intelligence in the Bank problem, the tool has been run with inspiration levels equal to 5, 10, 15 and 50. The training data sets produced by the Input Data Transformation Algorithm contained 219, 272, 346 and 732 patterns, respectively. This has clearly resulted in a substantial reduction of the training set size as compared

with the original set of patterns. The reduced training set still preserves the basic features of the analysed data. This can be seen intuitively by comparing the initial distribution of values for attributes 35 and 36 (Fig. 5) with the distribution generated by the Input Data Transformation Algorithm (Fig. 6).

Fig. 5. The initial distribution of values for attributes 35 and 36 (dimension #35 vs. dimension #36; active vs. non-active customers)

Fig. 6. The reduced distribution of values for attributes 35 and 36, with 272 patterns and inspiration level = 10

In the computational experiment PLAANN has been run 20 times for each inspiration level (5, 10, 15 and 50). Each time, the patterns representing the respective cluster of similar patterns have been randomly drawn from the available similar patterns; their number has been determined by the inspiration level. Characteristics of the classifications thus obtained, averaged over 20 runs, are shown in Table 1.

Table 1. Performance of PLAANN in the classification experiment

Insp.     Accuracy on training data    Accuracy on testing data     Training
level     mean     max      min        mean     max      min        time [s]
5         100%     100%     100%       78%      80.3%    76.6%      -
10        -        100%     99%        79.7%    81.2%    74%        -
15        -        100%     98.4%      80.1%    81.9%    78.6%      -
50        -        100%     97.5%      80.8%    81.7%    78.8%      786
Averaged  100%     -        -          -        -        -          341

The overall performance of PLAANN seems quite satisfactory. It can be expected that the tool, applied to the customer intelligence in banking problem, would assure at least 80% correct classification decisions. It is also clear that increasing the inspiration level leads to better performance in terms of classifier quality, at the cost of higher computation time. The experiment has been carried out on a Sun Challenge R4400 workstation with 12 processors. The number of slave workers used by the master varied from 5 to 15 in different runs and has been chosen randomly.

6 Summary

The paper is a report on using PLAANN as a classification tool for the customer intelligence in banking problem. PLAANN is a software tool consisting of several ANN-based classifiers which are trained using the population learning algorithm (PLA). The main inference engine within the proposed approach is the population learning algorithm, a population-based method inspired by analogies to social education processes in which a diminishing number of individuals enter more and more advanced learning stages. In PLA an individual represents a coded solution of the considered problem. Initially, a number of individuals, known as the initial population, is randomly generated. Increasing the initial population size can be considered a means of diversification, helping to escape from local optima. Once the initial population has been generated, individuals enter the first learning stage.
It involves applying some, possibly basic and elementary, improvement schemes or conducting simple learning sessions. These can be based on some local search procedures. The improved individuals are then evaluated, and the better ones pass to the subsequent stages. To solve the classification problem posed by the organizers of the EUNITE World Competition in the domain of Intelligent Technologies, a software tool called PLAANN has been proposed and implemented. PLAANN has been coded in C++ and is ready

to be run on Unix and Linux platforms. The parallel version has been implemented to be executed under the PVM environment; a sequential, non-parallel version of the tool is also available. There are three modes of running PLAANN: training classifiers and producing classification results using two data sets, one with training and another with testing patterns; training classifiers and producing classification results using a single input data set, which is automatically partitioned into training and testing data sets; and classifying test patterns using already trained classifiers. With the three operating modes available, PLAANN can be used within an adaptive-learning loop, allowing classifiers to be retrained as often as needed to assure adaptability to a changing environment. The computational experiments carried out have shown that PLAANN can be a useful tool for the customer intelligence in banking problem. Its estimated performance level is above 80% correct classifications.

References

1. Czarnowski, I., Jedrzejowicz, P., Ratajczak, E.: Population Learning Algorithm - Example Implementations and Experiments. Proceedings of the Fourth Metaheuristics International Conference, Porto (2001)
2. Czarnowski, I., Jedrzejowicz, P.: Population Learning Metaheuristic for Neural Network Training. Proceedings of the Sixth International Conference on Neural Networks and Soft Computing (ICNNSC), Zakopane (2002)
3. Czarnowski, I., Jedrzejowicz, P.: Application of the Parallel Population Learning Algorithm to Training Feed-forward ANN. Proceedings of the Euro-International Symposium on Computational Intelligence (E-ISCI), Kosice (2002)
4. Jedrzejowicz, P.: Social Learning Algorithm as a Tool for Solving Some Difficult Scheduling Problems. Foundations of Computing and Decision Sciences 24 (1999)
5. Goldberg, D.E.: Genetic Algorithms in Search, Optimization & Machine Learning. Addison-Wesley, Boston (1989)
6. Michalewicz, Z.: Genetic Algorithms + Data Structures = Evolution Programs. Springer-Verlag, Berlin (1992)
7. Mitchell, M.: An Introduction to Genetic Algorithms. MIT Press, Boston (1996)


Neural Network Add-in Neural Network Add-in Version 1.5 Software User s Guide Contents Overview... 2 Getting Started... 2 Working with Datasets... 2 Open a Dataset... 3 Save a Dataset... 3 Data Pre-processing... 3 Lagging...

More information

Impact of Feature Selection on the Performance of Wireless Intrusion Detection Systems

Impact of Feature Selection on the Performance of Wireless Intrusion Detection Systems 2009 International Conference on Computer Engineering and Applications IPCSIT vol.2 (2011) (2011) IACSIT Press, Singapore Impact of Feature Selection on the Performance of ireless Intrusion Detection Systems

More information

6.2.8 Neural networks for data mining

6.2.8 Neural networks for data mining 6.2.8 Neural networks for data mining Walter Kosters 1 In many application areas neural networks are known to be valuable tools. This also holds for data mining. In this chapter we discuss the use of neural

More information

CS Master Level Courses and Areas COURSE DESCRIPTIONS. CSCI 521 Real-Time Systems. CSCI 522 High Performance Computing

CS Master Level Courses and Areas COURSE DESCRIPTIONS. CSCI 521 Real-Time Systems. CSCI 522 High Performance Computing CS Master Level Courses and Areas The graduate courses offered may change over time, in response to new developments in computer science and the interests of faculty and students; the list of graduate

More information

Iranian J Env Health Sci Eng, 2004, Vol.1, No.2, pp.51-57. Application of Intelligent System for Water Treatment Plant Operation.

Iranian J Env Health Sci Eng, 2004, Vol.1, No.2, pp.51-57. Application of Intelligent System for Water Treatment Plant Operation. Iranian J Env Health Sci Eng, 2004, Vol.1, No.2, pp.51-57 Application of Intelligent System for Water Treatment Plant Operation *A Mirsepassi Dept. of Environmental Health Engineering, School of Public

More information

International Journal of Scientific & Engineering Research, Volume 4, Issue 5, May ISSN

International Journal of Scientific & Engineering Research, Volume 4, Issue 5, May ISSN International Journal of Scientific & Engineering Research, Volume 4, Issue 5, May-213 737 Letter Recognition Data Using Neural Network Hussein Salim Qasim Abstract The letters dataset from the UCI repository

More information

A K-means-like Algorithm for K-medoids Clustering and Its Performance

A K-means-like Algorithm for K-medoids Clustering and Its Performance A K-means-like Algorithm for K-medoids Clustering and Its Performance Hae-Sang Park*, Jong-Seok Lee and Chi-Hyuck Jun Department of Industrial and Management Engineering, POSTECH San 31 Hyoja-dong, Pohang

More information

Neural Networks. Neural network is a network or circuit of neurons. Neurons can be. Biological neurons Artificial neurons

Neural Networks. Neural network is a network or circuit of neurons. Neurons can be. Biological neurons Artificial neurons Neural Networks Neural network is a network or circuit of neurons Neurons can be Biological neurons Artificial neurons Biological neurons Building block of the brain Human brain contains over 10 billion

More information

Software Project Management with GAs

Software Project Management with GAs Software Project Management with GAs Enrique Alba, J. Francisco Chicano University of Málaga, Grupo GISUM, Departamento de Lenguajes y Ciencias de la Computación, E.T.S Ingeniería Informática, Campus de

More information

PMR5406 Redes Neurais e Lógica Fuzzy Aula 3 Multilayer Percetrons

PMR5406 Redes Neurais e Lógica Fuzzy Aula 3 Multilayer Percetrons PMR5406 Redes Neurais e Aula 3 Multilayer Percetrons Baseado em: Neural Networks, Simon Haykin, Prentice-Hall, 2 nd edition Slides do curso por Elena Marchiori, Vrie Unviersity Multilayer Perceptrons Architecture

More information

Social Media Mining. Data Mining Essentials

Social Media Mining. Data Mining Essentials Introduction Data production rate has been increased dramatically (Big Data) and we are able store much more data than before E.g., purchase data, social media data, mobile phone data Businesses and customers

More information

NTC Project: S01-PH10 (formerly I01-P10) 1 Forecasting Women s Apparel Sales Using Mathematical Modeling

NTC Project: S01-PH10 (formerly I01-P10) 1 Forecasting Women s Apparel Sales Using Mathematical Modeling 1 Forecasting Women s Apparel Sales Using Mathematical Modeling Celia Frank* 1, Balaji Vemulapalli 1, Les M. Sztandera 2, Amar Raheja 3 1 School of Textiles and Materials Technology 2 Computer Information

More information

Neural Networks and Support Vector Machines

Neural Networks and Support Vector Machines INF5390 - Kunstig intelligens Neural Networks and Support Vector Machines Roar Fjellheim INF5390-13 Neural Networks and SVM 1 Outline Neural networks Perceptrons Neural networks Support vector machines

More information

Comparison of K-means and Backpropagation Data Mining Algorithms

Comparison of K-means and Backpropagation Data Mining Algorithms Comparison of K-means and Backpropagation Data Mining Algorithms Nitu Mathuriya, Dr. Ashish Bansal Abstract Data mining has got more and more mature as a field of basic research in computer science and

More information

ELLIOTT WAVES RECOGNITION VIA NEURAL NETWORKS

ELLIOTT WAVES RECOGNITION VIA NEURAL NETWORKS ELLIOTT WAVES RECOGNITION VIA NEURAL NETWORKS Martin Kotyrba Eva Volna David Brazina Robert Jarusek Department of Informatics and Computers University of Ostrava Z70103, Ostrava, Czech Republic martin.kotyrba@osu.cz

More information

A Hybrid Tabu Search Method for Assembly Line Balancing

A Hybrid Tabu Search Method for Assembly Line Balancing Proceedings of the 7th WSEAS International Conference on Simulation, Modelling and Optimization, Beijing, China, September 15-17, 2007 443 A Hybrid Tabu Search Method for Assembly Line Balancing SUPAPORN

More information

Guido Sciavicco. 11 Novembre 2015

Guido Sciavicco. 11 Novembre 2015 classical and new techniques Università degli Studi di Ferrara 11 Novembre 2015 in collaboration with dr. Enrico Marzano, CIO Gap srl Active Contact System Project 1/27 Contents What is? Embedded Wrapper

More information

Optimizing CPU Scheduling Problem using Genetic Algorithms

Optimizing CPU Scheduling Problem using Genetic Algorithms Optimizing CPU Scheduling Problem using Genetic Algorithms Anu Taneja Amit Kumar Computer Science Department Hindu College of Engineering, Sonepat (MDU) anutaneja16@gmail.com amitkumar.cs08@pec.edu.in

More information

D-optimal plans in observational studies

D-optimal plans in observational studies D-optimal plans in observational studies Constanze Pumplün Stefan Rüping Katharina Morik Claus Weihs October 11, 2005 Abstract This paper investigates the use of Design of Experiments in observational

More information

Learning. Artificial Intelligence. Learning. Types of Learning. Inductive Learning Method. Inductive Learning. Learning.

Learning. Artificial Intelligence. Learning. Types of Learning. Inductive Learning Method. Inductive Learning. Learning. Learning Learning is essential for unknown environments, i.e., when designer lacks omniscience Artificial Intelligence Learning Chapter 8 Learning is useful as a system construction method, i.e., expose

More information

The Scientific Data Mining Process

The Scientific Data Mining Process Chapter 4 The Scientific Data Mining Process When I use a word, Humpty Dumpty said, in rather a scornful tone, it means just what I choose it to mean neither more nor less. Lewis Carroll [87, p. 214] In

More information

Comparison of Optimization Techniques in Large Scale Transportation Problems

Comparison of Optimization Techniques in Large Scale Transportation Problems Journal of Undergraduate Research at Minnesota State University, Mankato Volume 4 Article 10 2004 Comparison of Optimization Techniques in Large Scale Transportation Problems Tapojit Kumar Minnesota State

More information

An Enhanced Clustering Algorithm to Analyze Spatial Data

An Enhanced Clustering Algorithm to Analyze Spatial Data International Journal of Engineering and Technical Research (IJETR) ISSN: 2321-0869, Volume-2, Issue-7, July 2014 An Enhanced Clustering Algorithm to Analyze Spatial Data Dr. Mahesh Kumar, Mr. Sachin Yadav

More information

ISSN: 2319-5967 ISO 9001:2008 Certified International Journal of Engineering Science and Innovative Technology (IJESIT) Volume 2, Issue 3, May 2013

ISSN: 2319-5967 ISO 9001:2008 Certified International Journal of Engineering Science and Innovative Technology (IJESIT) Volume 2, Issue 3, May 2013 Transistor Level Fault Finding in VLSI Circuits using Genetic Algorithm Lalit A. Patel, Sarman K. Hadia CSPIT, CHARUSAT, Changa., CSPIT, CHARUSAT, Changa Abstract This paper presents, genetic based algorithm

More information

Analysis of Multilayer Neural Networks with Direct and Cross-Forward Connection

Analysis of Multilayer Neural Networks with Direct and Cross-Forward Connection Analysis of Multilayer Neural Networks with Direct and Cross-Forward Connection Stanis law P laczek and Bijaya Adhikari Vistula University, Warsaw, Poland stanislaw.placzek@wp.pl,bijaya.adhikari1991@gmail.com

More information

Performance Evaluation of Artificial Neural. Networks for Spatial Data Analysis

Performance Evaluation of Artificial Neural. Networks for Spatial Data Analysis Contemporary Engineering Sciences, Vol. 4, 2011, no. 4, 149-163 Performance Evaluation of Artificial Neural Networks for Spatial Data Analysis Akram A. Moustafa Department of Computer Science Al al-bayt

More information

Estimation of the COCOMO Model Parameters Using Genetic Algorithms for NASA Software Projects

Estimation of the COCOMO Model Parameters Using Genetic Algorithms for NASA Software Projects Journal of Computer Science 2 (2): 118-123, 2006 ISSN 1549-3636 2006 Science Publications Estimation of the COCOMO Model Parameters Using Genetic Algorithms for NASA Software Projects Alaa F. Sheta Computers

More information

Introduction to Machine Learning Using Python. Vikram Kamath

Introduction to Machine Learning Using Python. Vikram Kamath Introduction to Machine Learning Using Python Vikram Kamath Contents: 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. Introduction/Definition Where and Why ML is used Types of Learning Supervised Learning Linear Regression

More information

Building MLP networks by construction

Building MLP networks by construction University of Wollongong Research Online Faculty of Informatics - Papers (Archive) Faculty of Engineering and Information Sciences 2000 Building MLP networks by construction Ah Chung Tsoi University of

More information

A Stock Pattern Recognition Algorithm Based on Neural Networks

A Stock Pattern Recognition Algorithm Based on Neural Networks A Stock Pattern Recognition Algorithm Based on Neural Networks Xinyu Guo guoxinyu@icst.pku.edu.cn Xun Liang liangxun@icst.pku.edu.cn Xiang Li lixiang@icst.pku.edu.cn Abstract pattern respectively. Recent

More information

Learning in Abstract Memory Schemes for Dynamic Optimization

Learning in Abstract Memory Schemes for Dynamic Optimization Fourth International Conference on Natural Computation Learning in Abstract Memory Schemes for Dynamic Optimization Hendrik Richter HTWK Leipzig, Fachbereich Elektrotechnik und Informationstechnik, Institut

More information

Using Data Mining for Mobile Communication Clustering and Characterization

Using Data Mining for Mobile Communication Clustering and Characterization Using Data Mining for Mobile Communication Clustering and Characterization A. Bascacov *, C. Cernazanu ** and M. Marcu ** * Lasting Software, Timisoara, Romania ** Politehnica University of Timisoara/Computer

More information

Chapter 7. Hierarchical cluster analysis. Contents 7-1

Chapter 7. Hierarchical cluster analysis. Contents 7-1 7-1 Chapter 7 Hierarchical cluster analysis In Part 2 (Chapters 4 to 6) we defined several different ways of measuring distance (or dissimilarity as the case may be) between the rows or between the columns

More information

An Overview of Knowledge Discovery Database and Data mining Techniques

An Overview of Knowledge Discovery Database and Data mining Techniques An Overview of Knowledge Discovery Database and Data mining Techniques Priyadharsini.C 1, Dr. Antony Selvadoss Thanamani 2 M.Phil, Department of Computer Science, NGM College, Pollachi, Coimbatore, Tamilnadu,

More information

Multiagent Reputation Management to Achieve Robust Software Using Redundancy

Multiagent Reputation Management to Achieve Robust Software Using Redundancy Multiagent Reputation Management to Achieve Robust Software Using Redundancy Rajesh Turlapati and Michael N. Huhns Center for Information Technology, University of South Carolina Columbia, SC 29208 {turlapat,huhns}@engr.sc.edu

More information

New Ensemble Combination Scheme

New Ensemble Combination Scheme New Ensemble Combination Scheme Namhyoung Kim, Youngdoo Son, and Jaewook Lee, Member, IEEE Abstract Recently many statistical learning techniques are successfully developed and used in several areas However,

More information

Self-Learning Genetic Algorithm for a Timetabling Problem with Fuzzy Constraints

Self-Learning Genetic Algorithm for a Timetabling Problem with Fuzzy Constraints Self-Learning Genetic Algorithm for a Timetabling Problem with Fuzzy Constraints Radomír Perzina, Jaroslav Ramík perzina(ramik)@opf.slu.cz Centre of excellence IT4Innovations Division of the University

More information

A Performance Study of Load Balancing Strategies for Approximate String Matching on an MPI Heterogeneous System Environment

A Performance Study of Load Balancing Strategies for Approximate String Matching on an MPI Heterogeneous System Environment A Performance Study of Load Balancing Strategies for Approximate String Matching on an MPI Heterogeneous System Environment Panagiotis D. Michailidis and Konstantinos G. Margaritis Parallel and Distributed

More information

Categorical Data Visualization and Clustering Using Subjective Factors

Categorical Data Visualization and Clustering Using Subjective Factors Categorical Data Visualization and Clustering Using Subjective Factors Chia-Hui Chang and Zhi-Kai Ding Department of Computer Science and Information Engineering, National Central University, Chung-Li,

More information

Comparison of machine learning methods for intelligent tutoring systems

Comparison of machine learning methods for intelligent tutoring systems Comparison of machine learning methods for intelligent tutoring systems Wilhelmiina Hämäläinen 1 and Mikko Vinni 1 Department of Computer Science, University of Joensuu, P.O. Box 111, FI-80101 Joensuu

More information

Chapter 2 The Research on Fault Diagnosis of Building Electrical System Based on RBF Neural Network

Chapter 2 The Research on Fault Diagnosis of Building Electrical System Based on RBF Neural Network Chapter 2 The Research on Fault Diagnosis of Building Electrical System Based on RBF Neural Network Qian Wu, Yahui Wang, Long Zhang and Li Shen Abstract Building electrical system fault diagnosis is the

More information

Clustering Genetic Algorithm

Clustering Genetic Algorithm Clustering Genetic Algorithm Petra Kudová Department of Theoretical Computer Science Institute of Computer Science Academy of Sciences of the Czech Republic ETID 2007 Outline Introduction Clustering Genetic

More information

Comparison of Major Domination Schemes for Diploid Binary Genetic Algorithms in Dynamic Environments

Comparison of Major Domination Schemes for Diploid Binary Genetic Algorithms in Dynamic Environments Comparison of Maor Domination Schemes for Diploid Binary Genetic Algorithms in Dynamic Environments A. Sima UYAR and A. Emre HARMANCI Istanbul Technical University Computer Engineering Department Maslak

More information

OPTIMIZATION OF VENTILATION SYSTEMS IN OFFICE ENVIRONMENT, PART II: RESULTS AND DISCUSSIONS

OPTIMIZATION OF VENTILATION SYSTEMS IN OFFICE ENVIRONMENT, PART II: RESULTS AND DISCUSSIONS OPTIMIZATION OF VENTILATION SYSTEMS IN OFFICE ENVIRONMENT, PART II: RESULTS AND DISCUSSIONS Liang Zhou, and Fariborz Haghighat Department of Building, Civil and Environmental Engineering Concordia University,

More information

Data quality in Accounting Information Systems

Data quality in Accounting Information Systems Data quality in Accounting Information Systems Comparing Several Data Mining Techniques Erjon Zoto Department of Statistics and Applied Informatics Faculty of Economy, University of Tirana Tirana, Albania

More information

An Augmented Normalization Mechanism for Capacity Planning & Modelling Elegant Approach with Artificial Intelligence

An Augmented Normalization Mechanism for Capacity Planning & Modelling Elegant Approach with Artificial Intelligence An Augmented Normalization Mechanism for Capacity Planning & Modelling Elegant Approach with Artificial Intelligence 13th Annual International Software Testing Conference 2013 Bangalore, 4 th -5 th December

More information

Research on Clustering Analysis of Big Data Yuan Yuanming 1, 2, a, Wu Chanle 1, 2

Research on Clustering Analysis of Big Data Yuan Yuanming 1, 2, a, Wu Chanle 1, 2 Advanced Engineering Forum Vols. 6-7 (2012) pp 82-87 Online: 2012-09-26 (2012) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/aef.6-7.82 Research on Clustering Analysis of Big Data

More information

npsolver A SAT Based Solver for Optimization Problems

npsolver A SAT Based Solver for Optimization Problems npsolver A SAT Based Solver for Optimization Problems Norbert Manthey and Peter Steinke Knowledge Representation and Reasoning Group Technische Universität Dresden, 01062 Dresden, Germany peter@janeway.inf.tu-dresden.de

More information

A FUZZY LOGIC APPROACH FOR SALES FORECASTING

A FUZZY LOGIC APPROACH FOR SALES FORECASTING A FUZZY LOGIC APPROACH FOR SALES FORECASTING ABSTRACT Sales forecasting proved to be very important in marketing where managers need to learn from historical data. Many methods have become available for

More information

Price Prediction of Share Market using Artificial Neural Network (ANN)

Price Prediction of Share Market using Artificial Neural Network (ANN) Prediction of Share Market using Artificial Neural Network (ANN) Zabir Haider Khan Department of CSE, SUST, Sylhet, Bangladesh Tasnim Sharmin Alin Department of CSE, SUST, Sylhet, Bangladesh Md. Akter

More information

Neural Networks. CAP5610 Machine Learning Instructor: Guo-Jun Qi

Neural Networks. CAP5610 Machine Learning Instructor: Guo-Jun Qi Neural Networks CAP5610 Machine Learning Instructor: Guo-Jun Qi Recap: linear classifier Logistic regression Maximizing the posterior distribution of class Y conditional on the input vector X Support vector

More information

Novelty Detection in image recognition using IRF Neural Networks properties

Novelty Detection in image recognition using IRF Neural Networks properties Novelty Detection in image recognition using IRF Neural Networks properties Philippe Smagghe, Jean-Luc Buessler, Jean-Philippe Urban Université de Haute-Alsace MIPS 4, rue des Frères Lumière, 68093 Mulhouse,

More information

Introduction to Machine Learning and Data Mining. Prof. Dr. Igor Trajkovski trajkovski@nyus.edu.mk

Introduction to Machine Learning and Data Mining. Prof. Dr. Igor Trajkovski trajkovski@nyus.edu.mk Introduction to Machine Learning and Data Mining Prof. Dr. Igor Trakovski trakovski@nyus.edu.mk Neural Networks 2 Neural Networks Analogy to biological neural systems, the most robust learning systems

More information

DECISION TREE INDUCTION FOR FINANCIAL FRAUD DETECTION USING ENSEMBLE LEARNING TECHNIQUES

DECISION TREE INDUCTION FOR FINANCIAL FRAUD DETECTION USING ENSEMBLE LEARNING TECHNIQUES DECISION TREE INDUCTION FOR FINANCIAL FRAUD DETECTION USING ENSEMBLE LEARNING TECHNIQUES Vijayalakshmi Mahanra Rao 1, Yashwant Prasad Singh 2 Multimedia University, Cyberjaya, MALAYSIA 1 lakshmi.mahanra@gmail.com

More information

BOOSTING - A METHOD FOR IMPROVING THE ACCURACY OF PREDICTIVE MODEL

BOOSTING - A METHOD FOR IMPROVING THE ACCURACY OF PREDICTIVE MODEL The Fifth International Conference on e-learning (elearning-2014), 22-23 September 2014, Belgrade, Serbia BOOSTING - A METHOD FOR IMPROVING THE ACCURACY OF PREDICTIVE MODEL SNJEŽANA MILINKOVIĆ University

More information

Tetris: Experiments with the LP Approach to Approximate DP

Tetris: Experiments with the LP Approach to Approximate DP Tetris: Experiments with the LP Approach to Approximate DP Vivek F. Farias Electrical Engineering Stanford University Stanford, CA 94403 vivekf@stanford.edu Benjamin Van Roy Management Science and Engineering

More information

Neural Network Predictor for Fraud Detection: A Study Case for the Federal Patrimony Department

Neural Network Predictor for Fraud Detection: A Study Case for the Federal Patrimony Department DOI: 10.5769/C2012010 or http://dx.doi.org/10.5769/c2012010 Neural Network Predictor for Fraud Detection: A Study Case for the Federal Patrimony Department Antonio Manuel Rubio Serrano (1,2), João Paulo

More information

Optimum Design of Worm Gears with Multiple Computer Aided Techniques

Optimum Design of Worm Gears with Multiple Computer Aided Techniques Copyright c 2008 ICCES ICCES, vol.6, no.4, pp.221-227 Optimum Design of Worm Gears with Multiple Computer Aided Techniques Daizhong Su 1 and Wenjie Peng 2 Summary Finite element analysis (FEA) has proved

More information

Lecture 8 Artificial neural networks: Unsupervised learning

Lecture 8 Artificial neural networks: Unsupervised learning Lecture 8 Artificial neural networks: Unsupervised learning Introduction Hebbian learning Generalised Hebbian learning algorithm Competitive learning Self-organising computational map: Kohonen network

More information

Data Mining Techniques Chapter 7: Artificial Neural Networks

Data Mining Techniques Chapter 7: Artificial Neural Networks Data Mining Techniques Chapter 7: Artificial Neural Networks Artificial Neural Networks.................................................. 2 Neural network example...................................................

More information

Stabilization by Conceptual Duplication in Adaptive Resonance Theory

Stabilization by Conceptual Duplication in Adaptive Resonance Theory Stabilization by Conceptual Duplication in Adaptive Resonance Theory Louis Massey Royal Military College of Canada Department of Mathematics and Computer Science PO Box 17000 Station Forces Kingston, Ontario,

More information

Open Access Research on Application of Neural Network in Computer Network Security Evaluation. Shujuan Jin *

Open Access Research on Application of Neural Network in Computer Network Security Evaluation. Shujuan Jin * Send Orders for Reprints to reprints@benthamscience.ae 766 The Open Electrical & Electronic Engineering Journal, 2014, 8, 766-771 Open Access Research on Application of Neural Network in Computer Network

More information

ANN Based Fault Classifier and Fault Locator for Double Circuit Transmission Line

ANN Based Fault Classifier and Fault Locator for Double Circuit Transmission Line International Journal of Computer Sciences and Engineering Open Access Research Paper Volume-4, Special Issue-2, April 2016 E-ISSN: 2347-2693 ANN Based Fault Classifier and Fault Locator for Double Circuit

More information

Hybrid Evolution of Heterogeneous Neural Networks

Hybrid Evolution of Heterogeneous Neural Networks Hybrid Evolution of Heterogeneous Neural Networks 01001110 01100101 01110101 01110010 01101111 01101110 01101111 01110110 01100001 00100000 01110011 01101011 01110101 01110000 01101001 01101110 01100001

More information

A Service Revenue-oriented Task Scheduling Model of Cloud Computing

A Service Revenue-oriented Task Scheduling Model of Cloud Computing Journal of Information & Computational Science 10:10 (2013) 3153 3161 July 1, 2013 Available at http://www.joics.com A Service Revenue-oriented Task Scheduling Model of Cloud Computing Jianguang Deng a,b,,

More information

Forecasting Demand in the Clothing Industry. Eduardo Miguel Rodrigues 1, Manuel Carlos Figueiredo 2 2

Forecasting Demand in the Clothing Industry. Eduardo Miguel Rodrigues 1, Manuel Carlos Figueiredo 2 2 XI Congreso Galego de Estatística e Investigación de Operacións A Coruña, 24-25-26 de outubro de 2013 Forecasting Demand in the Clothing Industry Eduardo Miguel Rodrigues 1, Manuel Carlos Figueiredo 2

More information

HYBRID GENETIC ALGORITHM PARAMETER EFFECTS FOR OPTIMIZATION OF CONSTRUCTION RESOURCE ALLOCATION PROBLEM. Jin-Lee KIM 1, M. ASCE

HYBRID GENETIC ALGORITHM PARAMETER EFFECTS FOR OPTIMIZATION OF CONSTRUCTION RESOURCE ALLOCATION PROBLEM. Jin-Lee KIM 1, M. ASCE 1560 HYBRID GENETIC ALGORITHM PARAMETER EFFECTS FOR OPTIMIZATION OF CONSTRUCTION RESOURCE ALLOCATION PROBLEM Jin-Lee KIM 1, M. ASCE 1 Assistant Professor, Department of Civil Engineering and Construction

More information

Neural network software tool development: exploring programming language options

Neural network software tool development: exploring programming language options INEB- PSI Technical Report 2006-1 Neural network software tool development: exploring programming language options Alexandra Oliveira aao@fe.up.pt Supervisor: Professor Joaquim Marques de Sá June 2006

More information

Efficient mapping of the training of Convolutional Neural Networks to a CUDA-based cluster

Efficient mapping of the training of Convolutional Neural Networks to a CUDA-based cluster Efficient mapping of the training of Convolutional Neural Networks to a CUDA-based cluster Jonatan Ward Sergey Andreev Francisco Heredia Bogdan Lazar Zlatka Manevska Eindhoven University of Technology,

More information

Support Vector Machines with Clustering for Training with Very Large Datasets

Support Vector Machines with Clustering for Training with Very Large Datasets Support Vector Machines with Clustering for Training with Very Large Datasets Theodoros Evgeniou Technology Management INSEAD Bd de Constance, Fontainebleau 77300, France theodoros.evgeniou@insead.fr Massimiliano

More information

K-Means Clustering Tutorial

K-Means Clustering Tutorial K-Means Clustering Tutorial By Kardi Teknomo,PhD Preferable reference for this tutorial is Teknomo, Kardi. K-Means Clustering Tutorials. http:\\people.revoledu.com\kardi\ tutorial\kmean\ Last Update: July

More information

Research of Digital Character Recognition Technology Based on BP Algorithm

Research of Digital Character Recognition Technology Based on BP Algorithm Research of Digital Character Recognition Technology Based on BP Algorithm Xianmin Wei Computer and Communication Engineering School of Weifang University Weifang, China wfxyweixm@126.com Abstract. This

More information

Management of Software Projects with GAs

Management of Software Projects with GAs MIC05: The Sixth Metaheuristics International Conference 1152-1 Management of Software Projects with GAs Enrique Alba J. Francisco Chicano Departamento de Lenguajes y Ciencias de la Computación, Universidad

More information

Solving Timetable Scheduling Problem by Using Genetic Algorithms

Solving Timetable Scheduling Problem by Using Genetic Algorithms Solving Timetable Scheduling Problem by Using Genetic Algorithms Branimir Sigl, Marin Golub, Vedran Mornar Faculty of Electrical Engineering and Computing, University of Zagreb Unska 3, 1 Zagreb, Croatia

More information

Forecasting of Economic Quantities using Fuzzy Autoregressive Model and Fuzzy Neural Network

Forecasting of Economic Quantities using Fuzzy Autoregressive Model and Fuzzy Neural Network Forecasting of Economic Quantities using Fuzzy Autoregressive Model and Fuzzy Neural Network Dušan Marček 1 Abstract Most models for the time series of stock prices have centered on autoregressive (AR)

More information

ARTIFICIAL INTELLIGENCE METHODS IN EARLY MANUFACTURING TIME ESTIMATION

ARTIFICIAL INTELLIGENCE METHODS IN EARLY MANUFACTURING TIME ESTIMATION 1 ARTIFICIAL INTELLIGENCE METHODS IN EARLY MANUFACTURING TIME ESTIMATION B. Mikó PhD, Z-Form Tool Manufacturing and Application Ltd H-1082. Budapest, Asztalos S. u 4. Tel: (1) 477 1016, e-mail: miko@manuf.bme.hu

More information