SELECTING NEURAL NETWORK ARCHITECTURE FOR INVESTMENT PROFITABILITY PREDICTIONS
UDC:
Original scientific paper

Tonimir Kišasondi, Alen Lovrenčić
University of Zagreb, Faculty of Organization and Informatics, Varaždin, Croatia
tonimir.kisasondi, [email protected]

Abstract: In this paper we present a modified neural network architecture and an algorithm that enable neural networks to learn vectors in accordance with user-designed sequences or graph structures. This allows us to use the modified network to identify, generate or complete specified patterns that are learned in the training phase. The algorithm is based on the idea that neural networks in the human neocortex represent a distributed memory of sequences that are stored in invariant hierarchical form with associative access. The algorithm was tested on our custom-built simulator, which supports our ADT neural network with standard backpropagation and our custom-built training algorithms, and it proved useful and successful in modelling graphs.

Keywords: Neural networks, MLP training algorithms, neural network ADT, neural network graph modelling.

1. INTRODUCTION

In this paper we present a modified neural network architecture and an algorithm that enable neural networks to learn vectors in accordance with user-designed sequences. This allows us to use the modified network to identify, generate or complete specified patterns that are learned in the training phase. The algorithm is based on the idea that neural networks in the human neocortex represent a distributed memory of sequences that are stored in invariant hierarchical form with associative access (see [4]). The algorithm was tested on our custom-built simulator, which supports our ADT neural network with standard backpropagation and our custom-built training algorithms.

2. PREVIOUS RESEARCH IN THIS FIELD

Similar works that employ some sort of temporal processing are Elman and Jordan networks (see [5], page 58), which employ feedback loops from the hidden layer to the input layer, and the time-delayed neural network (TDNN) (see [5], page 479). Both present methods that modify the original algorithm's inner functioning and cannot represent dispersed time patterns, while this modification
can be utilized on the original algorithm without changing the algorithm's internal functioning. Also, our modifications to the original algorithm can be extended to a number of architectures and training algorithms, because they represent a concept of transferring information in the neural network. Our algorithm can also model graphs with vertices and edges, so it has broad use.

3. MODIFICATIONS TO THE FEEDFORWARD NEURAL NETWORK ARCHITECTURE

Each neural network has the ability to transform an input vector into an output vector by means of the weights and biases that the network contains. Those weights and biases represent the network's information-processing memory. We define that transformation ability as one that can organize output vectors into a set that contains certain vectors (edges) and nodes. The output vector organization can be represented as a mathematical graph. Nodes of the graph only represent a certain state that a neural net can be in, in which only the vectors from that state are available. A vector is defined as a set of data that represents the input or output of the neural net. Input vectors are marked as [x_1, x_2, x_3, ..., x_n] and output vectors are marked as [y_1, y_2, y_3, ..., y_n]. A node is defined as a point in the graph in which all vectors that are assigned to that node have a unique representative part that can be used in the feedback loop of the neural net. Some vectors point to the same node in the graph, while some vectors can be modelled so that they shift the current node the neural net uses. Nodes are marked as Nn, where the lowercase n represents the numerical designation of the node.

We also define a specific architecture and a principle of neural net training which is based on modifications to the feedforward algorithm that enable the usage of feedback loops, where the user uses only the relevant part of the neurons that represent input neurons, which can be feedback neurons or stand-alone input neurons like neuron #3 as represented in figure 1. The feedback loop connects output layer neurons with their input counterparts, with or without usage of special trigger neurons (designated T) that are used for additional modelling or preparation of the combined input vector. By that, we add information into the input and output vectors of our neural net in the training phase, so we can organize vectors into sequential, parallel or combined patterns. In accordance with that, the network has a temporal effect: the current output of the network is affected by previous vectors that were computed. Every vector has the ability to transpose the network into another node, in which for the same inputs the net gives different outputs that are defined by our network training, because part of the information about the output is transmitted to the input in the next feedforward phase. Our algorithm was tested on a three-layered neural network with backpropagation training with usage of linear momentum and jitter (see [2], [7] and [8]).
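For reference, the momentum-augmented weight update used in this kind of training is the standard textbook rule, not something specific to our modification; with assumed symbol names (learning rate η, momentum factor α, local gradient δ_j and neuron output o_i) it reads:

Δw_ij(t) = η · δ_j · o_i + α · Δw_ij(t-1)

Jitter additionally adds a small amount of random noise to the training inputs, which helps the network generalize instead of memorizing exact input values.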
Figure 1. Modified neural network architecture (input layer with feedback inputs I-1, I-2 and stand-alone input neuron 3; output layer with feedback outputs O-1, O-2; trigger neurons T on the feedback loop)

To employ the organization of neural net memory into sequences, each input and corresponding output vector used in training or running the network must have a specific number of neurons devoted to usage in the feedback loop. Those neurons can be stand-alone neurons that are only used for the feedback loop, like neurons 1 and 2 in figure 1, or, if we have a set of unique inputs with unique outputs, we can use the input neurons normally and also use them in the feedback loop. As an example of organization of input vectors from figure 1, we can derive expression (1), where we see that if we want a transition from node 1 to node 2, then y_1, y_2, the outputs of the neural net in feedforward phase t, must be transferred to x_1, x_2, the input neurons at the beginning of the next feedforward phase t+1:

N1: [y_1, y_2]_t → N2: [x_1, x_2]_(t+1)    (1)

Following that, we can define any number of neurons that will be used for the feedback loop, where each output vector defines which input vectors can be selected in the next feedforward phase. If we use a greater number of neurons for the feedback loop, we can create more nodes. We can represent the memory organization of vectors and nodes with a memory organization graph. For example, we can create the graph for the network from figure 1 with 2 nodes and 5 vectors.

Figure 2. Example memory organization graph
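Since such a graph is easier to follow in code, the same organization can also be written as an edge list; a minimal sketch with an assumed struct layout, where the node and vector numbering matches the description that follows:

    struct edge { int from; int to; int vec; };

    /* Figure 2 as an edge list: (from-node, to-node, vector). N1 is the
       starting node; V5 stays inside N1, V1 moves to N2, V2 and V3 stay
       inside N2, and V4 moves back to N1. */
    struct edge fig2_graph[] = {
        { 1, 1, 5 },
        { 1, 2, 1 },
        { 2, 2, 2 },
        { 2, 2, 3 },
        { 2, 1, 4 }
    };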
From figure 2 we can see that the user-generated graph of memory organization has 2 nodes, N1 and N2. Node 1 has a vector V5 that is used only in that node, and a transition vector V1. N1 is the starting node and is marked with a double-outlined circle. Node 2 has two vectors that are accessible only from that node, V2 and V3, and has a transition vector V4. As previously noted, we can generate as many nodes or vectors as we want, as long as there are enough weights and biases to represent them, since the information about the transformation of the input is contained in the weights and biases of the network. To evaluate the memory capacity of any multilayered MLP network, we created expression (2), which gives us the exact number of weights and biases in the whole network:

w + b = Σ_(i=1)^(λ-1) (NPL_i · NPL_(i+1)) + Σ_(j=2)^(λ) NPL_j    (2)

In expression (2) the Greek letter lambda (λ) represents the number of layers of the network, and NPL_i stands for "neurons per layer", the number of neurons in network layer i. The first summation counts the weights, while the second counts the biases.

When modelling the feedback loop data, we also need to note that the neurons of the feedback loop will most easily reproduce input values that are the asymptotical values of their activation functions. Additional improvements to the feedback loop can be made by using a layer of trigger neurons on the feedback loop that apply certain rules based on the value of the input they receive. The most effective solutions are based on binary or bipolar triggers that convert the input to the desired output based on a certain tolerance ratio, like 5% or 10% deviation from the desired value. Using these neurons we can avoid certain situations that put our network into an unidentified node or an unstable state. It is advised to use such neurons if we use a larger number of nodes. Also, any discrete value with enough distance from another identifier can be used for the feedback loop. The activation function of such a neuron can be easily represented by the source code:

    double binary_trigger(double input)
    {
        if (input > 0.5)
            return 1;   /* snap to the upper asymptote */
        else
            return 0;   /* snap to the lower asymptote */
    }
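A bipolar variant with an explicit tolerance band can be sketched the same way; the 10% band below is an assumed value, and anything outside both bands is passed through unchanged, so an unidentified state can still be detected:

    /* Sketch of a bipolar trigger with an assumed 10% tolerance band.
       Values near the asymptotes -1 and 1 snap to them; other values are
       passed through, signalling a possibly unidentified node. */
    double bipolar_trigger(double input)
    {
        if (input > 0.9)
            return 1;
        if (input < -0.9)
            return -1;
        return input;
    }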
4. CREATION OF GRAPHED VECTORS FOR BACKPROPAGATION LEARNING

Each vector that is to be used in this architecture must be composed of two parts, the data part and the feedback part. Each output vector must have a unique identifier in its feedback part for the node that follows from that vector. As an example, we can take the memory organization graph from figure 2. For the neural network from figure 1, neurons 1 and 2 are used in the feedback loop and create the feedback part, while neuron 3 creates the data part of the vector. Output vectors have 4 neurons: 1 and 2, which create the feedback part, and 3 and 4, which create the data part. Node 1 will use the designation (0, 0), so the input vector for node 1 looks like (0, 0, Γ). Node 2 will use the designation (0, 1) and its input vector is (0, 1, Γ), where Γ represents any input value. From that, given some input or output data, we can create a set of training vectors that looks like:

    Edge       Input vector     Output vector
    N1: V5     (0, 0, 0.3)      (0, 0, 0.4, 0.8)
    N1: V1     (0, 0, 0.6)      (0, 1, …, …)
    N2: V2     (0, 1, 0.3)      (0, 1, 0.7, 0.3)
    N2: V3     (0, 1, 0.9)      (0, 1, 0.…, 0.6)
    N2: V4     (0, 1, 0.6)      (0, 0, 0.9, 0.9)

If we want to use another approach, we can use graph structures, which today present a method for visualizing and understanding a wide variety of problems. The most common and standard way to mathematically describe graphs is with adjacency matrices. The problem is how to transfer the graph matrix into a set of input/output vector combinations. As a proof of concept, the following example for a graph with 4 nodes and 10 vectors is given:

Table 1. Graph adjacency matrix

          N1      N2        N3      N4
    N1    V1      V10       V5
    N2            V2, V3            V6
    N3    V7
    N4    V8                V9      V4
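The adjacency matrix in table 1 maps directly onto an array; a sketch, using the matrix as reconstructed above, where vector numbers are stored as integers and 0 marks a missing edge (the parallel edges V2 and V3 from N2 to itself need one extra slot, handled here with a second matrix):

    /* adj[i][j] holds the vector leading from node i+1 to node j+1; 0 = no edge. */
    int adj[4][4] = {
        { 1, 10,  5,  0 },   /* N1: V1 loop, V10 to N2, V5 to N3 */
        { 0,  2,  0,  6 },   /* N2: V2 loop, V6 to N4            */
        { 7,  0,  0,  0 },   /* N3: V7 to N1                     */
        { 8,  0,  9,  4 }    /* N4: V8 to N1, V9 to N3, V4 loop  */
    };

    /* Second parallel edge on N2 -> N2 (V3); all other cells are empty. */
    int adj_parallel[4][4] = {
        { 0, 0, 0, 0 },
        { 0, 3, 0, 0 },
        { 0, 0, 0, 0 },
        { 0, 0, 0, 0 }
    };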
Figure 3. Graph for the matrix in table 1

From table 1 we created the graph in figure 3. Conversion into input and output pairs is done with some assumptions. Each of the node designators Nn, where n is the numerical designation of the node, will be treated as a generated binary set, as we said in chapter 3 of this paper; in this case, for example, this would be (N1: 00, N2: 01, N3: 10, N4: 11). Each of the vectors is shown as Vn, where each vector states an input/output combination pair for the neural network. The conversion is simple, because each row of the adjacency matrix specifies the start node, and each column designates an end node connection. An example conversion is done in table 2.

Table 2. Input/output vector combinations created from table 1.

    #     Input vector    Output vector
    1     N1, V1          N1, V1
    2     N1, V10         N2, V10
    3     N1, V5          N3, V5
    4     N2, V2          N2, V2
    5     N2, V3          N2, V3
    6     N2, V6          N4, V6
    7     N3, V7          N1, V7
    8     N4, V8          N1, V8
    9     N4, V9          N3, V9
    10    N4, V4          N4, V4

From this table we can see that the conversion from matrix form to vector-set form is simple to do manually and can also be easily automated with a simple program, as sketched below.
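A minimal sketch of such a conversion, assuming the adjacency matrix shown earlier; the printf output stands in for emitting real training vectors, and the binary node designators would be substituted for the node numbers in a full implementation:

    #include <stdio.h>

    /* Walk the adjacency matrix and emit one input/output pair per edge:
       the input pair is (start node, vector) and the output pair is
       (end node, vector), exactly as in table 2. */
    void matrix_to_pairs(int adj[4][4])
    {
        int i, j;
        for (i = 0; i < 4; i++)
            for (j = 0; j < 4; j++)
                if (adj[i][j] != 0)
                    printf("input: N%d,V%d   output: N%d,V%d\n",
                           i + 1, adj[i][j], j + 1, adj[i][j]);
    }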
If we want to use the algorithm on our created vector set, we need to transfer the output feedback-loop data to the input feedback-loop data, transfer the input data part, and then initiate the feedforward sequence. This can be described with the source code:

    void Feedback_Forward()
    {
        // The feedback loops are defined here; we show the example for 2 loops,
        // while we can add any number of loops.
        input_vector[0] = network_layer[1].neuron[0].output;
        input_vector[1] = network_layer[1].neuron[1].output;

        // Clean up the outputs of the hidden layer neurons
        for (a = 0; a < architecture[1]; a++)
            network_layer[0].neuron[a].output = 0;
        // Clean up the outputs of the output layer neurons
        for (a = 0; a < architecture[2]; a++)
            network_layer[1].neuron[a].output = 0;

        // Hidden layer feedforward
        for (a = 0; a < architecture[1]; a++)
        {
            network_layer[0].neuron[a].output += network_layer[0].neuron[a].bias;
            for (b = 0; b < architecture[0]; b++)
                network_layer[0].neuron[a].output +=
                    network_layer[0].neuron[a].weight[b] * input_vector[b];
            network_layer[0].neuron[a].output =
                sig(network_layer[0].neuron[a].output, prim_slope_type);
        }

        // Output layer feedforward
        for (a = 0; a < architecture[2]; a++)
        {
            for (b = 0; b < architecture[1]; b++)
                network_layer[1].neuron[a].output +=
                    network_layer[1].neuron[a].weight[b] *
                    network_layer[0].neuron[b].output;
            network_layer[1].neuron[a].output += network_layer[1].neuron[a].bias;
            network_layer[1].neuron[a].output =
                sig(network_layer[1].neuron[a].output, sec_slope_type);
        }
    } // End of function Feedback_Forward

As we can see from the code, the original algorithm is intact; all changes and modifications can be created as an additional phase that is executed prior to the feedforward phase. We also call the function sig(x, y) in our code, which returns the value of the sigmoid function of that neuron in a specific layer.

After training, our network is left in the last node that the training sequence reached through the Feedback_Forward function. We must set the initial node of the network with the function starting_node, which puts the starting node into the input vector's feedback part. We can modify the function to accommodate a larger set by simply adding more elements for the feedback loop and more hidden neurons. Here the starting node's designation in the feedback part of the vector is (0, 0):

    void starting_node()
    {
        // Set the feedback part of the vector to (0, 0, ...)
        network_layer[1].neuron[0].output = 0;
        network_layer[1].neuron[1].output = 0;
    }
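The helper sig(x, y) itself is not listed here; a common form of it, under the assumption that the slope-type argument selects the steepness of the logistic curve (the concrete slope values are placeholders, not taken from this paper):

    #include <math.h>

    /* Sketch of the sigmoid helper: slope_type selects an assumed steepness. */
    double sig(double x, int slope_type)
    {
        double slope = (slope_type == 0) ? 1.0 : 0.5;  /* assumed values */
        return 1.0 / (1.0 + exp(-slope * x));
    }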
5. VERIFICATION OF ALGORITHM AND ARCHITECTURE / SIMULATIONS AND RESULTS

We verified our algorithm on several sets in our NN-SIM simulator for validation and as a proof of concept. As an example we will take a memory organization graph composed of 10 nodes with 30 vectors. The graph's organization and dataset are shown in figure 4.

Figure 4. Dataset and graph for proof of concept (each node lists its feedback and data parts)

The network's training architectures are shown in table 3; each training architecture was tested with a variety of learning rates, momentum factors and jitter factors. Validation neurons were also used, and their usefulness was tested.

Table 3. Architecture test data.

    Input Feedback    Input Data    Hidden neurons    Output Feedback    Output Data
    4 neurons         2 neurons     10                4 neurons          2 neurons
    4 neurons         2 neurons     13                4 neurons          2 neurons
    4 neurons         2 neurons     14                4 neurons          2 neurons
    4 neurons         2 neurons     20                4 neurons          2 neurons
    4 neurons         2 neurons     30                4 neurons          2 neurons
    4 neurons         2 neurons     60                4 neurons          2 neurons
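Expression (2) can be checked mechanically for each architecture in table 3; a small helper (a sketch, with npl as an assumed array of layer sizes, input layer first):

    /* Count weights and biases per expression (2): weights between each
       pair of adjacent layers, plus one bias per non-input neuron. */
    int capacity(int npl[], int layers)
    {
        int i, total = 0;
        for (i = 0; i < layers - 1; i++)
            total += npl[i] * npl[i + 1];
        for (i = 1; i < layers; i++)
            total += npl[i];
        return total;
    }

For the 14-hidden-neuron row, with layer sizes {6, 14, 6}, this gives 6·14 + 14·6 = 168 weights and 14 + 6 = 20 biases, 188 parameters in total.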
5.1. SIMULATION RESULTS

Each simulation was executed in runs of 1,000 iterations, with MSE measurements after each 1,000th iteration. The architecture and algorithm proved complete and gave correct outputs on all sufficient sets. We will show the MSE measurements only for the sets that are the most interesting. The minimal architecture is 14 hidden neurons, which can be seen from the unique output set vectors. The line on each graph symbolizes the MSE.

Figure 5. MSE outputs over algorithm iterations; left: 14 hidden neurons; right: 30 hidden neurons

As we can see, the smallest deviation is achieved with 30 hidden neurons (right side of figure 5). This is logical, because each hidden neuron is allocated to one vector the network has to learn. The network with 14 hidden neurons is a minimal solution, because 13 unique vectors are represented in the entire network, while some vectors are doubled but are not represented in the same state. The network with 30 hidden neurons also converges to a global minimum faster.

If we do not use any trigger neurons on the feedback loop, we noticed that under sequential execution of some vectors, like vector 3 in node 1 and any other local-loop vectors on the graph, adding a slight error or a small jitter impulse, like a maximum of 10% noise, to the input makes the network deviate slightly from its learned output; after some iterations it returns to its original, learned state. This is shown in figure 6 as a deviation that continues for 3 iterations of the algorithm.
Figure 6. Output deviation and recovery after a single addition of noise (deviation plotted over algorithm iterations)

5.2. GENERATION OR CONTINUANCE OF SEQUENCES WITH THE NETWORK

Our neural network can be used to generate or continue sequences. In that mode, the feedback loop works as a trigger with which the whole network continues to the next node through the automatic change in the input vector's feedback part. For example, the data part from figure 3 has only 3 unique inputs: (0.3, 0.9), (0.7, 0.…) and (0.4, 0.5). If we set the data part to any of those vectors and sequentially execute our algorithm, the network will traverse the graph's nodes based on the same-value data inputs, as sketched below.
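The traversal described above amounts to fixing the data part and repeatedly running the feedforward phase; a sketch, where set_data_part is a hypothetical helper that writes the two data inputs into input_vector:

    /* Sketch: generate or continue a sequence. starting_node and
       Feedback_Forward are the functions listed earlier; set_data_part
       is a hypothetical helper for the two data inputs. */
    void generate_sequence(double d1, double d2, int steps)
    {
        int i;
        starting_node();            /* enter the starting node        */
        for (i = 0; i < steps; i++)
        {
            set_data_part(d1, d2);  /* same data inputs every step    */
            Feedback_Forward();     /* feedback part advances the node */
        }
    }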
5.3. ALGORITHM BENCHMARKING AND CONCLUSION OF TEST RESULTS

As we tested and implemented our algorithm with mixed datasets, one of which is shown above, we found that the algorithm gives us the ability to model ADT graphs, and that it does not slow down or disrupt classic backpropagation or any of its modified variants. Standard benchmarking datasets cannot be used, because the algorithm appends additional neurons for its feedback loop (as can be seen from our research) and actually only changes the way that the vectors are presented and transferred to the neural network; since there are no networks that use this principle, this is the reason why we used our own datasets. The only difference is that our network needs additional hidden neurons to store the processing information because of the added data, plus some modifications to the architecture and algorithms. We can conclude that this method does not disrupt the original backpropagation and gives us the additional possibilities stated in this paper.

6. CONCLUSION AND FUTURE RESEARCH

The feedback loop architecture can be used in areas that require memory organization abilities with all the features and benefits of neural networks. The benefits of organizing the output patterns into graphs are valued in fields that require sequential or mixed pattern recognition, pattern completion and dynamic control. Such systems require some sort of output generation based on previous inputs. We will try to implement this network and some other data evaluation and extraction principles in an adaptive expert system that will use a neural network as part of our hybrid main data evaluation module and will function as an expert system shell.

The problem with adding the feedback part of the vector in large sets can easily be suppressed by temporally grouping the training input-output vector pairs into sets for pre-processing. First, the pairs are grouped so that all output vectors in a group point to a certain node; each of these sets gets a unique identifier, which becomes the output vector's feedback part. Next, the vector pairs are grouped into sets in which the same input vectors are representatives of their nodes, and they get their unique identifiers. This can easily be implemented in a script for pre-processing of training data sets.

Future research on this network will be based on an automated learning method that will employ an ADT that can represent the memory organization graph in any form that is a compact and effective way of representing the training. Because of the simplicity of this algorithm and the modifications to the architecture, we can modify any program or implementation to utilise the feedback loop architecture. Future work will also include attempted modifications of other algorithms (see [1] and [3]) that are not strictly backpropagation based.

REFERENCES

[1] Arbib, M. A.: The Handbook of Brain Theory and Neural Networks, MIT Press, 2002.
[2] Fausett, L.: Fundamentals of Neural Networks: Architectures, Algorithms and Applications, Prentice Hall, 1994. Chapters 5 and 6.
[3] Grossberg, S.; Carpenter, G.: Adaptive Resonance Theory, in: Arbib, M. A.: The Handbook of Brain Theory and Neural Networks, MIT Press, 2002.
[4] Hawkins, J.: On Intelligence, Times Books, 2004.
[5] Principe, J. C.; Euliano, N. R.; Lefebvre, W. C.: Neural and Adaptive Systems: Fundamentals Through Simulations, J. Wiley & Sons, 2000. Chapter 10, Temporal processing with neural networks, and Chapter 11, Training and using recurrent networks.
[6] Rumelhart, D. E.; Hinton, G. E.; Williams, R. J.: Learning Internal Representations by Error Propagation.
[7] Rumelhart, D. E.; McClelland, J. L.: Parallel Distributed Processing, Volume 1, MIT Press, Cambridge, MA, 1986.
[8] Werbos, P.: Backpropagation: Basics and New Developments, in: Arbib, M. A.: The Handbook of Brain Theory and Neural Networks, MIT Press, 2002.

Received: 9 December 2005
Accepted: 9 June 2006