Fundamentals of Neural Networks
Fundamentals of Neural Networks : AI Course Lectures 37-38, notes, slides
RC Chakraborty, [email protected], June 01

Topics : Introduction; biological neuron model; artificial neuron model; notations; functions. Model of artificial neuron : McCulloch-Pitts neuron equation; basic elements of an artificial neuron; activation functions (threshold, piecewise linear, sigmoidal). Neural network architectures : single-layer feed-forward network, multi-layer feed-forward network, recurrent networks. Learning methods in neural networks : classification of learning algorithms; supervised, unsupervised, and reinforced learning; Hebbian learning; gradient descent learning; competitive learning; stochastic learning. Single-layer NN systems : single-layer perceptron, learning algorithm for training, linearly separable tasks, the XOR problem; ADAptive LINear Element (ADALINE) architecture and training mechanism. Applications of neural networks : clustering, classification, pattern recognition, function approximation, prediction systems.
Fundamentals of Neural Networks
Topics (Lectures 37 and 38, 2 hours)

1. Introduction : Why neural networks?, research history, biological neuron model, artificial neuron model, notations, functions.
2. Model of Artificial Neuron : McCulloch-Pitts neuron equation, artificial neuron basic elements, activation functions (threshold, piecewise linear, sigmoidal).
3. Neural Network Architectures : Single-layer feed-forward network, multi-layer feed-forward network, recurrent networks.
4. Learning Methods in Neural Networks : Classification of learning algorithms, supervised learning, unsupervised learning, reinforced learning, Hebbian learning, gradient descent learning, competitive learning, stochastic learning.
5. Single-Layer NN Systems : Single-layer perceptron (learning algorithm for training, linearly separable tasks, the XOR problem); ADAptive LINear Element (ADALINE) (architecture, training mechanism).
6. Applications of Neural Networks : Clustering, classification / pattern recognition, function approximation, prediction systems.
7. References.
Neural Networks

What is a Neural Net?

A neural net is an artificial representation of the human brain that tries to simulate its learning process. An artificial neural network (ANN) is often called a "neural network" or simply a neural net (NN).

Traditionally, the term neural network refers to a network of biological neurons in the nervous system that process and transmit information. An artificial neural network is an interconnected group of artificial neurons that uses a mathematical or computational model for information processing, based on a connectionist approach to computation. Artificial neural networks are made of interconnected artificial neurons which may share some properties of biological neural networks.

An artificial neural network is a network of simple processing elements (neurons) which can exhibit complex global behavior determined by the connections between the processing elements and the element parameters. It is an adaptive system that changes its structure based on external or internal information that flows through the network.
1. Introduction

Neural computers mimic certain processing capabilities of the human brain.
- Neural computing is an information processing paradigm, inspired by biological systems, composed of a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems.
- Artificial Neural Networks (ANNs), like people, learn by example.
- An ANN is configured for a specific application, such as pattern recognition or data classification, through a learning process.
- Learning in biological systems involves adjustments to the synaptic connections that exist between the neurons. This is true of ANNs as well.
1.1 Why Neural Networks?

Conventional computers are good at fast arithmetic and at doing exactly what the programmer asks them to do. They are not so good at interacting with noisy data or data from the environment, at massive parallelism, at fault tolerance, or at adapting to circumstances.

Neural network systems help where we cannot formulate an algorithmic solution, or where we can obtain many examples of the behavior we require.

Neural networks follow a different computing paradigm. Von Neumann machines are based on the processing/memory abstraction of human information processing; neural networks are based on the parallel architecture of biological brains.

Neural networks are a form of multiprocessor computer system, with
- simple processing elements,
- a high degree of interconnection,
- simple scalar messages, and
- adaptive interaction between elements.
1.2 Research History

The history is relevant because for nearly two decades the future of neural networks remained uncertain.

McCulloch and Pitts (1943) are generally recognized as the designers of the first neural network. They combined many simple processing units, whose combination could lead to an overall increase in computational power. They suggested many ideas, for example: a neuron has a threshold level, and once that level is reached the neuron fires. This is still the fundamental way in which ANNs operate. The McCulloch-Pitts network had a fixed set of weights.

Hebb (1949) developed the first learning rule: if two neurons are active at the same time, then the strength of the connection between them should be increased.

In the 1950s and 60s, many researchers (Block, Minsky, Papert, and Rosenblatt) worked on the perceptron. The neural network model could be proved to converge to the correct weights that solve the problem. The weight adjustment (learning algorithm) used in the perceptron was found to be more powerful than the learning rules used by Hebb. The perceptron caused great excitement; it was thought it would produce programs that could think.

Minsky and Papert (1969) showed that the perceptron could not learn functions which are not linearly separable. Neural network research declined throughout the 1970s and until the mid-80s, because the perceptron could not learn certain important functions.

Neural networks regained importance in the mid-1980s, when the researchers Parker and LeCun discovered a learning algorithm for multi-layer networks, called back-propagation, that could solve problems that were not linearly separable.
1.3 Biological Neuron Model

The human brain consists of a very large number, more than a billion, of neural cells that process information. Each cell works like a simple processor. It is the massive interaction between all cells and their parallel processing that makes the brain's abilities possible.

Dendrites are branching fibers that extend from the cell body or soma.

Soma, or cell body, of a neuron contains the nucleus and other structures, and supports chemical processing and production of neurotransmitters.

Axon is a singular fiber that carries information away from the soma to the synaptic sites of other neurons (dendrites and somas), muscles, or glands.

Axon hillock is the site of summation for incoming information. At any moment, the collective influence of all neurons that conduct impulses to a given neuron will determine whether or not an action potential will be initiated at the axon hillock and propagated along the axon. (Fig. : Structure of a Neuron)

Myelin sheath consists of fat-containing cells that insulate the axon from electrical activity. This insulation acts to increase the rate of transmission of signals. A gap exists between each myelin sheath cell along the axon. Since fat inhibits the propagation of electricity, the signals jump from one gap to the next.

Nodes of Ranvier are the gaps (about 1 µm) between myelin sheath cells along the axon. Since fat serves as a good insulator, the myelin sheaths speed the rate of transmission of an electrical impulse along the axon.

Synapse is the point of connection between two neurons, or between a neuron and a muscle or a gland. Electrochemical communication between neurons takes place at these junctions.

Terminal buttons of a neuron are the small knobs at the end of an axon that release chemicals called neurotransmitters.
Information Flow in a Neural Cell

The input/output and the propagation of information in a neural cell in the human brain are as follows:
- Dendrites receive activation from other neurons.
- Soma processes the incoming activations and converts them into output activations.
- Axons act as transmission lines to send activation to other neurons.
- Synapses, the junctions, allow signal transmission between the axons and dendrites. The process of transmission is by diffusion of chemicals called neurotransmitters.

McCulloch and Pitts introduced a simplified model of these real neurons.
1.4 Artificial Neuron Model

An artificial neuron is a mathematical function conceived as a simple model of a real (biological) neuron.

The McCulloch-Pitts Neuron

This is a simplified model of real neurons, known as a Threshold Logic Unit.

    Input 1 -->
    Input 2 -->   Σ  --> Output
      ...
    Input n -->

- A set of input connections brings in activations from other neurons.
- A processing unit sums the inputs, and then applies a non-linear activation function (i.e. a squashing / transfer / threshold function).
- An output line transmits the result to other neurons.

In other words:
- The input to a neuron arrives in the form of signals.
- The signals build up in the cell.
- Finally the cell discharges (the cell fires) through the output.
- The cell can then start building up signals again.
1.5 Notations

Recap : scalars, vectors, matrices and functions.

Scalar : The numbers x_i can be added up to give a scalar number:

    s = x_1 + x_2 + x_3 + ... + x_n = Σ_{i=1}^{n} x_i

Vectors : An ordered set of related numbers. Row vectors (1 x n):

    X = (x_1, x_2, x_3, ..., x_n),   Y = (y_1, y_2, y_3, ..., y_n)

Add : Two vectors of the same length are added to give another vector:

    Z = X + Y = (x_1 + y_1, x_2 + y_2, ..., x_n + y_n)

Multiply : Two vectors of the same length are multiplied to give a scalar:

    p = X . Y = x_1 y_1 + x_2 y_2 + ... + x_n y_n = Σ_{i=1}^{n} x_i y_i
Matrices : An m x n matrix has m rows and n columns:

    W = [ w_11  w_12 ... w_1n
          w_21  w_22 ... w_2n
          ...
          w_m1  w_m2 ... w_mn ]

Add or Subtract : Matrices of the same size are added or subtracted component by component. A + B = C, where c_ij = a_ij + b_ij :

    [ a11 a12 ]   [ b11 b12 ]   [ a11+b11  a12+b12 ]
    [ a21 a22 ] + [ b21 b22 ] = [ a21+b21  a22+b22 ]

Multiply : Matrix A (m x n) multiplied by matrix B (n x p) gives matrix C (m x p), with elements

    c_ij = Σ_{k=1}^{n} a_ik b_kj

    [ a11 a12 ]   [ b11 b12 ]   [ c11 c12 ]
    [ a21 a22 ] x [ b21 b22 ] = [ c21 c22 ]

    c11 = (a11 x b11) + (a12 x b21)
    c12 = (a11 x b12) + (a12 x b22)
    c21 = (a21 x b11) + (a22 x b21)
    c22 = (a21 x b12) + (a22 x b22)
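The vector and matrix operations above can be checked with a short sketch in plain Python (no libraries; the function names `dot` and `mat_mul` are ours, not from the notes):

```python
def dot(x, y):
    """Inner product of two equal-length vectors: sum of x_i * y_i."""
    return sum(xi * yi for xi, yi in zip(x, y))

def mat_mul(A, B):
    """Multiply an (m x n) matrix A by an (n x p) matrix B, giving (m x p)."""
    p = len(B[0])
    # c_ij = sum over k of a_ik * b_kj
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(p)] for i in range(len(A))]

A = [[1, 2],
     [3, 4]]
B = [[5, 6],
     [7, 8]]
print(dot([1, 2, 3], [4, 5, 6]))   # 1*4 + 2*5 + 3*6 = 32
print(mat_mul(A, B))               # [[19, 22], [43, 50]]
```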
1.6 Functions

A function y = f(x) describes a relationship, an input-output mapping, from x to y.

Threshold or sign function : sgn(x), defined as

    sgn(x) = 1  if x ≥ 0
             0  if x < 0

Sigmoid function : a smoothed (differentiable) form of the threshold function, defined as

    sigmoid(x) = 1 / (1 + e^(-x))
2. Model of Artificial Neuron

A very simplified model of real neurons is known as a Threshold Logic Unit (TLU). The model is said to have:
- a set of synapses (connections) that brings in activations from other neurons;
- a processing unit that sums the inputs, and then applies a non-linear activation function (i.e. a squashing / transfer / threshold function);
- an output line that transmits the result to other neurons.

2.1 McCulloch-Pitts (M-P) Neuron Equation

The McCulloch-Pitts neuron is a simplified model of the real biological neuron.

    Input 1 -->
    Input 2 -->   Σ  --> Output
      ...
    Input n -->

Simplified Model of a Real Neuron (Threshold Logic Unit)

The equation for the output of a McCulloch-Pitts neuron as a function of its 1 to n inputs is written as

    Output = sgn ( Σ_{i=1}^{n} Input_i - Φ )

where Φ is the neuron's activation threshold:

    If Σ_{i=1}^{n} Input_i ≥ Φ  then Output = 1
    If Σ_{i=1}^{n} Input_i < Φ  then Output = 0

The features missing from this McCulloch-Pitts neuron model are:
- non-binary input and output,
- non-linear summation,
- smooth thresholding,
- stochastic behavior, and
- temporal information processing.
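The M-P equation can be coded directly as a sketch (plain Python; the threshold values used for the AND and OR gates below are standard illustrative choices, not from the notes):

```python
def mcculloch_pitts(inputs, threshold):
    """Fire (output 1) iff the sum of the binary inputs reaches the threshold."""
    return 1 if sum(inputs) >= threshold else 0

# With threshold Φ = 2, a two-input M-P neuron computes logical AND;
# with threshold Φ = 1, it computes logical OR.
for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2,
              mcculloch_pitts([x1, x2], 2),   # AND column
              mcculloch_pitts([x1, x2], 1))   # OR column
```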
2.2 Artificial Neuron - Basic Elements

A neuron consists of three basic components : weights, a threshold, and a single activation function.

    x1 --w1-->
    x2 --w2-->   Σ  --> f --> y
    ...
    xn --wn-->        Φ (threshold)

Fig. : Basic elements of an artificial linear neuron

Weighting factors w : The values w1, w2, ..., wn are weights that determine the strength of the input vector X = [x1, x2, ..., xn]^T. Each input is multiplied by the associated weight of the neuron connection. A positive weight excites and a negative weight inhibits the node output. The weighted sum is

    I = X^T . W = x1 w1 + x2 w2 + ... + xn wn = Σ_{i=1}^{n} xi wi

Threshold Φ : The node's internal threshold Φ is the magnitude offset. It affects the activation of the node output y as

    Y = f(I) = f ( Σ_{i=1}^{n} xi wi - Φ )

Activation function : To generate the final output Y, the sum is passed through a non-linear filter f, called the activation function (or transfer function, or squash function), which releases the output Y.
Threshold for a Neuron

In practice, neurons generally do not fire (produce an output) unless their total input goes above a threshold value. The total input for each neuron is the sum of the weighted inputs to the neuron, minus its threshold value. This is then passed through the sigmoid function. The equation for the transition in a neuron is

    a = 1 / (1 + exp(-x)),   where   x = Σ_i a_i w_i - Q

where
    a    is the activation of the neuron,
    a_i  is the activation of neuron i,
    w_i  is the weight, and
    Q    is the threshold subtracted.

Activation Function

An activation function f performs a mathematical operation on the signal output. The most common activation functions are:
- linear function,
- piecewise linear function,
- threshold function,
- sigmoidal (S-shaped) function,
- hyperbolic tangent function.

The activation function is chosen depending upon the type of problem to be solved by the network.
2.3 Activation Functions f - Types

Over the years, researchers have tried several functions to convert the input into an output. The most commonly used functions are described below.
- The horizontal axis (I/P) shows the sum of the inputs.
- The vertical axis (O/P) shows the value the function produces, i.e. the output.
- The functions f are designed to produce values between 0 and 1 (or between -1 and 1 for the bipolar forms).

Threshold Function

A threshold (hard-limiter) activation function is either of binary type or of bipolar type, as shown below.

Binary threshold : the output is 1 if the weighted sum of the inputs is positive, and 0 if it is negative:

    Y = f(I) = 1 if I ≥ 0
               0 if I < 0

Bipolar threshold : the output is 1 if the weighted sum of the inputs is positive, and -1 if it is negative:

    Y = f(I) = 1 if I ≥ 0
              -1 if I < 0

A neuron with a hard-limiter activation function is called a McCulloch-Pitts model.
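A minimal sketch of the two hard-limiter variants (plain Python; the function names are ours):

```python
def binary_threshold(i):
    """Binary hard-limiter: 1 if the weighted-sum input I is non-negative, else 0."""
    return 1 if i >= 0 else 0

def bipolar_threshold(i):
    """Bipolar hard-limiter: 1 if the weighted-sum input I is non-negative, else -1."""
    return 1 if i >= 0 else -1

print(binary_threshold(-0.5), binary_threshold(0.5))    # 0 1
print(bipolar_threshold(-0.5), bipolar_threshold(0.5))  # -1 1
```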
Piecewise Linear Function

This activation function is also called a saturating linear function, and can have either a binary or a bipolar range for the saturation limits of the output. The mathematical model for a symmetric saturation function is described below.

This is a sloping function that produces:
- -1 for a negative weighted sum of inputs,
- +1 for a positive weighted sum of inputs, and
- an output proportional to the input for weighted sums between the saturation limits -1 and +1:

    Y = f(I) =  1   if I ≥ 1
                I   if -1 < I < 1
               -1   if I ≤ -1
Sigmoidal Function (S-shaped function)

The nonlinear curved S-shaped function is called the sigmoid function. This is the most common type of activation function used to construct neural networks. It is mathematically well behaved: a differentiable and strictly increasing function.

A sigmoidal transfer function can be written in the form

    Y = f(I) = 1 / (1 + e^(-α I)),   0 ≤ f(I) ≤ 1

This produces a value close to 0 for large negative input values and close to 1 for large positive values, with a smooth transition between the two. α is the slope parameter, also called the shape parameter; the symbol λ is also used to represent this parameter.

The sigmoidal function is achieved using an exponential equation. By varying α (e.g. α = 0.5, 1.0, 2.0), different shapes of the function can be obtained, which adjusts how abruptly the function changes between the two asymptotic values.
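The effect of the slope parameter α can be seen in a small sketch (plain Python; the sample points are illustrative):

```python
import math

def sigmoid(i, alpha=1.0):
    """Sigmoidal activation: 1 / (1 + e^(-alpha * I))."""
    return 1.0 / (1.0 + math.exp(-alpha * i))

# Larger alpha makes the transition between the asymptotes 0 and 1 more abrupt;
# at I = 0 the output is always 0.5.
for alpha in (0.5, 1.0, 2.0):
    print(alpha,
          round(sigmoid(-2, alpha), 3),
          round(sigmoid(0, alpha), 3),
          round(sigmoid(2, alpha), 3))
```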
Example :

The neuron shown consists of four inputs with the weights:

    x1 = 1  --(+1)-->
    x2 = 2  --(+1)-->   Σ  --> f --> y
    x3 = 5  --(-1)-->
    x4 = 8  --(+2)-->        threshold Φ = 0

Fig. : Neuron structure of the example

The output I of the network, prior to the activation function stage, is

    I = X^T . W = (1 x 1) + (2 x 1) + (5 x -1) + (8 x 2) = 14

With a binary (threshold) activation function, the output of the neuron is y = 1, since I = 14 ≥ Φ = 0.
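The same computation as a sketch in plain Python (the function name is ours):

```python
def neuron_output(x, w, phi=0.0):
    """Weighted sum of the inputs, then a binary threshold at phi."""
    i = sum(xi * wi for xi, wi in zip(x, w))
    return i, (1 if i >= phi else 0)

i, y = neuron_output([1, 2, 5, 8], [1, 1, -1, 2], phi=0)
print(i, y)  # 14 1
```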
3. Neural Network Architectures

An Artificial Neural Network (ANN) is a data processing system consisting of a large number of simple, highly interconnected processing elements (artificial neurons) in a network structure that can be represented using a directed graph G, an ordered 2-tuple (V, E), consisting of a set V of vertices and a set E of edges.
- The vertices may represent neurons (input/output), and
- the edges may represent the synaptic links, labeled by the weights attached.

Example : A directed graph with

    vertices V = { v1, v2, v3, v4, v5 }
    edges    E = { e1, e2, e3, e4, e5 }

(Fig. : Directed graph)
3.1 Single-Layer Feed-forward Network

The single-layer feed-forward network consists of a single layer of weights, where the inputs are directly connected to the outputs via a series of weights. The synaptic links carrying the weights connect every input to every output, but not the other way; this is why it is considered a network of the feed-forward type. The sum of the products of the weights and the inputs is calculated in each neuron node, and if the value is above some threshold (typically 0) the neuron fires and takes the activated value (typically 1); otherwise it takes the deactivated value (typically -1).

    inputs x_i  --- weights w_ij --->  outputs y_j
    (n input nodes, fully connected by weights w_11 ... w_nm to m output neurons)

Fig. : Single-layer feed-forward network
3.2 Multi-Layer Feed-forward Network

As the name suggests, this network consists of multiple layers. The architecture of this class of network, besides having the input and the output layers, also has one or more intermediary layers called hidden layers. The computational units of the hidden layer are known as hidden neurons.

    input layer (l neurons x_i) -- weights v_ij --> hidden layer (m neurons y_j) -- weights w_jk --> output layer (n neurons z_k)

Fig. : Multilayer feed-forward network in (l - m - n) configuration.

- The hidden layer does intermediate computation before directing the input to the output layer.
- The input layer neurons are linked to the hidden layer neurons; the weights on these links are referred to as input-hidden layer weights.
- The hidden layer neurons are linked to the output layer neurons; the weights on these links are referred to as hidden-output layer weights.
- A multi-layer feed-forward network with l input neurons, m1 neurons in the first hidden layer, m2 neurons in the second hidden layer, and n output neurons in the output layer is written as (l - m1 - m2 - n).
- The figure above illustrates a multilayer feed-forward network with a configuration (l - m - n).
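A forward pass through a small (l - m - n) = (2 - 2 - 1) network can be sketched as follows (plain Python; the weight values here are arbitrary illustrative choices, and sigmoid activations are assumed in both layers):

```python
import math

def sigmoid(i):
    return 1.0 / (1.0 + math.exp(-i))

def layer(x, W):
    """One feed-forward layer: W[j] holds the incoming weights of neuron j."""
    return [sigmoid(sum(xi * wi for xi, wi in zip(x, w))) for w in W]

V = [[0.5, -0.4], [0.9, 1.0]]   # input-hidden weights (2 hidden neurons)
W = [[-1.2, 1.1]]               # hidden-output weights (1 output neuron)

hidden = layer([1.0, 0.0], V)   # activations of the hidden layer
output = layer(hidden, W)       # activation of the output layer
print(hidden, output)
```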
3.3 Recurrent Networks

Recurrent networks differ from the feed-forward architecture: a recurrent network has at least one feedback loop.

Example : A multilayer network as above, but with feedback links from the output layer back to earlier layers (Fig. : Recurrent neural network). There can also be neurons with self-feedback links, i.e. the output of a neuron is fed back into itself as input.
4. Learning Methods in Neural Networks

The learning methods in neural networks are classified into three basic types:
- supervised learning,
- unsupervised learning, and
- reinforced learning.

These three types are classified based on:
- the presence or absence of a teacher, and
- the information provided for the system to learn.

They are further categorized, based on the rules used, as:
- Hebbian,
- gradient descent,
- competitive, and
- stochastic learning.
Classification of Learning Algorithms

The figure below indicates a hierarchical representation of the algorithms mentioned above; these algorithms are explained in the subsequent sections.

    Neural Network Learning Algorithms
      - Supervised Learning (error based)
          - Stochastic
          - Error correction (gradient descent)
              - Least Mean Square
              - Back Propagation
      - Reinforced Learning (output based)
      - Unsupervised Learning
          - Hebbian
          - Competitive

Fig. : Classification of learning algorithms
Supervised Learning
- A teacher is present during the learning process and presents the expected output.
- Every input pattern is used to train the network.
- The learning process is based on comparison between the network's computed output and the correct expected output, generating an "error".
- The "error" generated is used to change the network parameters, resulting in improved performance.

Unsupervised Learning
- No teacher is present.
- The expected or desired output is not presented to the network.
- The system learns on its own by discovering and adapting to the structural features in the input patterns.

Reinforced Learning
- A teacher is present, but does not present the expected or desired output; it only indicates whether the computed output is correct or incorrect.
- The information provided helps the network in its learning process.
- A reward is given for a correct answer and a penalty for a wrong answer.

Note : Supervised and unsupervised learning are the most popular forms of learning compared to reinforced learning.
Hebbian Learning

Hebb proposed a rule based on correlative weight adjustment. In this rule, the input-output pattern pairs (Xi, Yi) are associated by the weight matrix W, known as the correlation matrix, computed as

    W = Σ_{i=1}^{n} Xi Yi^T

where Yi^T is the transpose of the associated output vector Yi. There are many variations of this rule proposed by other researchers (Kosko, Anderson, Lippman).
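The correlation matrix can be sketched as a sum of outer products (plain Python; the column vectors Xi and Yi are represented as flat lists, and the pattern pairs are illustrative):

```python
def outer(x, y):
    """Outer product x * y^T of two column vectors given as lists."""
    return [[xi * yj for yj in y] for xi in x]

def hebbian_weights(pairs):
    """Correlation matrix W = sum over the pattern pairs (Xi, Yi) of Xi * Yi^T."""
    rows, cols = len(pairs[0][0]), len(pairs[0][1])
    W = [[0.0] * cols for _ in range(rows)]
    for X, Y in pairs:
        P = outer(X, Y)
        W = [[W[i][j] + P[i][j] for j in range(cols)] for i in range(rows)]
    return W

pairs = [([1, -1], [1]), ([-1, 1], [-1])]
print(hebbian_weights(pairs))  # [[2.0], [-2.0]]
```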
Gradient Descent Learning

This is based on the minimization of an error E defined in terms of the weights and the activation function of the network.
- Here, the activation function of the network is required to be differentiable, because the weight updates depend on the gradient of the error E.
- If ΔWij is the weight update of the link connecting the i-th and the j-th neuron of two neighboring layers, then ΔWij is defined as

    ΔWij = -η ( ∂E / ∂Wij )

where η is the learning rate parameter and ∂E / ∂Wij is the error gradient with reference to the weight Wij.

Note : The Widrow-Hoff delta rule and the back-propagation learning rule are examples of gradient descent learning.
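A minimal sketch of this update rule on a single weight, using the squared error E = (d - w*x)^2 for one training pair (the error function, learning rate, and data are illustrative, not from the notes):

```python
def train_single_weight(x, d, w=0.0, eta=0.1, steps=50):
    """Gradient descent on E = (d - w*x)^2 for one input/target pair.

    dE/dw = -2 * (d - w*x) * x, so the update is w <- w - eta * dE/dw.
    """
    for _ in range(steps):
        grad = -2.0 * (d - w * x) * x
        w -= eta * grad
    return w

# With input x = 2 and target d = 6, the weight should approach d / x = 3.
w = train_single_weight(x=2.0, d=6.0)
print(round(w, 4))  # 3.0
```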
Competitive Learning
- In this method, those neurons which respond strongly to the input stimuli have their weights updated.
- When an input pattern is presented, all neurons in the layer compete, and the winning neuron undergoes the weight adjustment.
- This strategy is called "winner-takes-all".

Stochastic Learning
- In this method, the weights are adjusted in a probabilistic fashion.
- Example : simulated annealing, which is the learning mechanism employed by Boltzmann and Cauchy machines.
5. Single-Layer NN Systems

Here a simple perceptron model and an ADALINE network model are presented.

5.1 Single-Layer Perceptron

Definition : An arrangement of one input layer of neurons feeding forward to one output layer of neurons is known as a single-layer perceptron.

    inputs x_i  --- weights w_ij --->  outputs y_j

Fig. : Simple perceptron model

    y_j = f(net_j) = 1 if net_j ≥ 0        where net_j = Σ_{i=1}^{n} x_i w_ij
                     0 if net_j < 0
Learning Algorithm for Training the Perceptron

The training of the perceptron is a supervised learning algorithm, where the weights are adjusted to minimize the error whenever the computed output does not match the desired output.

- If the output is correct, then no adjustment of the weights is done:

    W_ij^(K+1) = W_ij^K

- If the output is 1 but should have been 0, then the weights are decreased on the active input links:

    W_ij^(K+1) = W_ij^K - α x_i

- If the output is 0 but should have been 1, then the weights are increased on the active input links:

    W_ij^(K+1) = W_ij^K + α x_i

where W_ij^(K+1) is the new adjusted weight, W_ij^K is the old weight, x_i is the input, and α is the learning rate parameter. A small α leads to slow learning and a large α leads to fast learning.
Perceptron and Linearly Separable Tasks

The perceptron cannot handle tasks which are not linearly separable.
- Definition : Sets of points in 2-D space are linearly separable if the sets can be separated by a straight line.
- Generalizing, sets of points in n-dimensional space are linearly separable if there is a hyperplane of (n-1) dimensions that separates the sets.

Example : (a) linearly separable patterns, where a straight line separates the sets S1 and S2; (b) patterns that are not linearly separable.

Note : The perceptron cannot find weights for classification problems that are not linearly separable.
XOR Problem : Exclusive OR operation

    Input x1   Input x2   Output
       0          0         0     (even parity)
       0          1         1     (odd parity)
       1          0         1     (odd parity)
       1          1         0     (even parity)

XOR truth table. Even parity means an even number of 1 bits in the input; odd parity means an odd number of 1 bits in the input.

Plotting the output of XOR in the (x1, x2) plane, the points (0,0) and (1,1) belong to one class and (0,1) and (1,0) to the other.
- There is no way to draw a single straight line so that the circles are on one side of the line and the dots on the other side.
- The perceptron is unable to find a line separating the even parity input patterns from the odd parity input patterns.
Perceptron Learning Algorithm

The algorithm is illustrated step by step.

Step 1 : Create a perceptron with (n+1) input neurons x0, x1, ..., xn, where x0 = 1 is the bias input. Let O be the output neuron.

Step 2 : Initialize the weights W = (w0, w1, ..., wn) to random values.

Step 3 : Iterate through the input patterns Xj of the training set using the current weights, i.e. compute the weighted sum of the inputs

    net_j = Σ_{i=0}^{n} x_i w_i

for each input pattern j.

Step 4 : Compute the output y_j using the step function

    y_j = f(net_j) = 1 if net_j ≥ 0
                     0 if net_j < 0

Step 5 : Compare the computed output y_j with the target output for each input pattern j. If all the input patterns have been classified correctly, then output (read) the weights and exit.

Step 6 : Otherwise, update the weights as given below:
- If the computed output y_j is 1 but should have been 0, then w_i = w_i - α x_i, for i = 0, 1, 2, ..., n.
- If the computed output y_j is 0 but should have been 1, then w_i = w_i + α x_i, for i = 0, 1, 2, ..., n.

where α is the learning parameter and is constant.

Step 7 : Go to Step 3.

END
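The steps above can be sketched as a complete training loop (plain Python; training on the linearly separable AND function, with α, the initial weights, and the epoch limit chosen for illustration):

```python
def predict(w, x):
    """Step 4: threshold the weighted sum (x includes the bias input x0 = 1)."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0 else 0

def train_perceptron(patterns, targets, alpha=0.1, max_epochs=100):
    """Steps 1-7: adjust the weights until every pattern is classified correctly."""
    n = len(patterns[0])
    w = [0.0] * (n + 1)                       # w[0] is the bias weight
    for _ in range(max_epochs):
        all_correct = True
        for x, d in zip(patterns, targets):
            xb = [1] + list(x)                # prepend the bias input x0 = 1
            y = predict(w, xb)
            if y != d:                        # Step 6: update on a misclassification
                all_correct = False
                w = [wi + alpha * (d - y) * xi for wi, xi in zip(w, xb)]
        if all_correct:                       # Step 5: exit when all are correct
            break
    return w

# AND is linearly separable, so training converges.
w = train_perceptron([(0, 0), (0, 1), (1, 0), (1, 1)], [0, 0, 0, 1])
print([predict(w, [1] + list(x)) for x in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 0, 0, 1]
```

Note that `(d - y)` is +1 or -1, so the single update line covers both branches of Step 6. Running the same loop on the XOR targets [0, 1, 1, 0] never satisfies the exit test in Step 5, which is the non-separability failure described above.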
5.2 ADAptive LINear Element (ADALINE)

An ADALINE consists of a single neuron of the McCulloch-Pitts type, whose weights are determined by the normalized least mean square (LMS) training law. The LMS learning rule is also referred to as the delta rule. It is a well-established supervised training method that has been used over a wide range of diverse applications.

Architecture of a simple ADALINE :

    x1 --w1-->
    x2 --w2-->   Σ  --> output
    ...                   |
    xn --wn-->            v
                error <-- Σ <-- desired output

The basic structure of an ADALINE is similar to a neuron with a linear activation function and a feedback loop. During the training phase of the ADALINE, the input vector as well as the desired output are presented to the network. [The complete training mechanism is explained in the next section.]
ADALINE Training Mechanism

(Refer to the figure in the previous section : architecture of a simple ADALINE.)

- The basic structure of an ADALINE is similar to a linear neuron with an extra feedback loop.
- During the training phase of the ADALINE, the input vector X = [x1, x2, ..., xn]^T as well as the desired output are presented to the network.
- The weights are adaptively adjusted based on the delta rule.
- After the ADALINE is trained, an input vector presented to the network with fixed weights will result in a scalar output. Thus, the network performs a mapping from n dimensions to a scalar value.
- The activation function is not used during the training phase.
- Once the weights are properly adjusted, the response of the trained unit can be tested by applying various inputs which are not in the training set. If the network produces consistent responses to a high degree with the test inputs, it is said that the network can generalize. Training and generalization are two important attributes of this network.

Usage of ADALINE : In practice, an ADALINE is used to
- make binary decisions, with the output sent through a binary threshold;
- realize logic gates such as AND, NOT and OR;
- realize only those logic functions that are linearly separable.
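A sketch of LMS (delta rule) training for a single ADALINE unit (plain Python; the error on the linear, pre-threshold output drives the update w <- w + α (d - w.x) x, and the learning rate, epoch count, and bipolar OR data are illustrative choices):

```python
def lms_train(patterns, targets, alpha=0.05, epochs=200):
    """Delta-rule training: adjust the weights on the linear (pre-threshold) output."""
    n = len(patterns[0])
    w = [0.0] * (n + 1)                               # w[0] is the bias weight
    for _ in range(epochs):
        for x, d in zip(patterns, targets):
            xb = [1.0] + list(x)
            out = sum(wi * xi for wi, xi in zip(w, xb))   # linear output, no threshold
            err = d - out
            w = [wi + alpha * err * xi for wi, xi in zip(w, xb)]
    return w

def adaline_decide(w, x):
    """After training, pass the linear output through a bipolar threshold."""
    out = sum(wi * xi for wi, xi in zip(w, [1.0] + list(x)))
    return 1 if out >= 0 else -1

# OR gate with bipolar inputs and targets (linearly separable).
pats = [(-1, -1), (-1, 1), (1, -1), (1, 1)]
targs = [-1, 1, 1, 1]
w = lms_train(pats, targs)
print([adaline_decide(w, x) for x in pats])  # [-1, 1, 1, 1]
```

This mirrors the point made in the notes: the threshold is applied only after training, while the weight adjustments themselves use the raw linear output.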
6. Applications of Neural Networks

Neural network applications can be grouped into the following categories.

Clustering : A clustering algorithm explores the similarity between patterns and places similar patterns in a cluster. Best-known applications include data compression and data mining.

Classification / pattern recognition : The task of pattern recognition is to assign an input pattern (such as a handwritten symbol) to one of many classes. This category includes algorithmic implementations such as associative memory.

Function approximation : The task of function approximation is to find an estimate of an unknown function subject to noise. Various engineering and scientific disciplines require function approximation.

Prediction systems : The task is to forecast some future values of time-sequenced data. Prediction has a significant impact on decision support systems. Prediction differs from function approximation by considering the time factor: the system may be dynamic and may produce different results for the same input data depending on the system state (time).
7. References : Textbooks

1. "Neural Networks: A Comprehensive Foundation", by Simon S. Haykin, (1999), Prentice Hall, Chapters 1-15.
2. "Elements of Artificial Neural Networks", by Kishan Mehrotra, Chilukuri K. Mohan and Sanjay Ranka, (1996), MIT Press, Chapters 1-7.
3. "Fundamentals of Neural Networks: Architecture, Algorithms and Applications", by Laurene V. Fausett, (1993), Prentice Hall, Chapters 1-7.
4. "Neural Network Design", by Martin T. Hagan, Howard B. Demuth and Mark Hudson Beale, (1996), PWS Publishing Company, Chapters 1-19.
5. "An Introduction to Neural Networks", by James A. Anderson, (1997), MIT Press, Chapters 1-17.
6. "AI: A New Synthesis", by Nils J. Nilsson, (1998), Morgan Kaufmann Inc., Chapter 3.
7. Related documents from open sources, mainly the internet. An exhaustive list is being prepared for inclusion at a later date.