Package AMORE. February 19, 2015




Encoding: UTF-8
Version: 0.2-15
Date: 2014-04-10
Title: A MORE Flexible Neural Network Package
Author: Manuel Castejon Limas, Joaquin B. Ordieres Mere, Ana Gonzalez Marcos, Francisco Javier Martinez de Pison Ascacibar, Alpha V. Pernia Espinoza, Fernando Alba Elias, Jose Maria Perez Ramos
Maintainer: Manuel Castejón Limas <manuel.castejon@gmail.com>
Description: This package was born to release the TAO robust neural network algorithm to the R users. It has grown, and I think it can be of interest for users wanting to implement their own training algorithms as well as for those whose needs lie only in the ``user space''.
License: GPL (>= 2)
URL: http://rwiki.sciviews.org/doku.php?id=packages:cran:amore
LazyLoad: yes
Repository: CRAN
Date/Publication: 2014-04-14 14:53:30
NeedsCompilation: yes

R topics documented:
ADAPTgd.MLPnet, ADAPTgdwm.MLPnet, BATCHgd.MLPnet, BATCHgdwm.MLPnet, error.LMS, error.TAO, graphviz.MLPnet, init.MLPneuron, newff, random.init.MLPnet, random.init.MLPneuron, select.activation.function, sim.MLPnet, train, training.report

ADAPTgd.MLPnet          Adaptive gradient descent training

Description
Adaptive gradient descent training method.

Usage
ADAPTgd.MLPnet(net, P, T, n.epochs, n.threads=0L)

Arguments
net         Neural network to train.
P           Input data set.
T           Target output data set.
n.epochs    Number of epochs to train.
n.threads   Unused, but required to match the BATCH* function template.

Value
This function returns a neural network object modified according to the input and target data set.

See Also
newff, train, ADAPTgdwm.MLPnet
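These low-level training functions are normally driven by train rather than called directly. A minimal sketch of a direct call, assuming a toy network built with newff as in its Examples section:

library(AMORE)
P <- matrix(runif(100, -1, 1), ncol=1)   # toy inputs
T <- P^2                                 # toy targets
net <- newff(n.neurons=c(1,3,1), learning.rate.global=1e-2,
             momentum.global=0.5, error.criterium="LMS", Stao=NA,
             hidden.layer="tansig", output.layer="purelin",
             method="ADAPTgd")
# 100 epochs of adaptive gradient descent; n.threads is ignored here.
net <- ADAPTgd.MLPnet(net, P, T, n.epochs=100, n.threads=0L)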

ADAPTgdwm.MLPnet        Adaptive gradient descent with momentum training

Description
Adaptive gradient descent with momentum training method.

Usage
ADAPTgdwm.MLPnet(net, P, T, n.epochs, n.threads=0L)

Arguments
net         Neural network to train.
P           Input data set.
T           Target output data set.
n.epochs    Number of epochs to train.
n.threads   Unused, but required to match the BATCH* function template.

Value
This function returns a neural network object modified according to the input and target data set.

See Also
newff, train, ADAPTgd.MLPnet

BATCHgd.MLPnet          Batch gradient descent training

Description
Modifies the neural network weights and biases according to the training set.

Usage
BATCHgd.MLPnet(net, P, T, n.epochs, n.threads=0L)

Arguments
net         Neural network to train.
P           Input data set.
T           Target output data set.
n.epochs    Number of epochs to train.
n.threads   Number of threads to spawn. If < 1, it spawns NumberProcessors-1 threads. If OpenMP is not available, this argument is ignored.

Value
This function returns a neural network object modified according to the chosen data.

See Also
newff, train, BATCHgdwm.MLPnet
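A minimal sketch of a direct batch-training call; train remains the usual entry point, and n.threads only has an effect when the package was compiled with OpenMP support. The momentum variant BATCHgdwm.MLPnet is used the same way:

library(AMORE)
P <- matrix(runif(100, -1, 1), ncol=1)
T <- P^2
net <- newff(n.neurons=c(1,3,1), learning.rate.global=1e-2,
             momentum.global=0.5, error.criterium="LMS", Stao=NA,
             hidden.layer="tansig", output.layer="purelin",
             method="BATCHgd")
# With n.threads=0L, NumberProcessors-1 threads are spawned under OpenMP.
net <- BATCHgd.MLPnet(net, P, T, n.epochs=100, n.threads=0L)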

BATCHgdwm.MLPnet        Batch gradient descent with momentum training

Description
Modifies the neural network weights and biases according to the training set.

Usage
BATCHgdwm.MLPnet(net, P, T, n.epochs, n.threads=0L)

Arguments
net         Neural network to train.
P           Input data set.
T           Target output data set.
n.epochs    Number of epochs to train.
n.threads   Number of threads to spawn. If < 1, it spawns NumberProcessors-1 threads. If OpenMP is not available, this argument is ignored.

Value
This function returns a neural network object modified according to the chosen data.

See Also
newff, train, BATCHgd.MLPnet

error.LMS               Neural network training error criteria

Description
The error functions calculate the goodness of fit of a neural network according to a certain criterion:

LMS: Least Mean Squares error.
LMLS: Least Mean Log Squares minimization.
TAO: TAO error minimization.

The deltaE functions calculate the influence functions of their error criteria.

Usage
error.LMS(arguments)
error.LMLS(arguments)
error.TAO(arguments)
deltaE.LMS(arguments)
deltaE.LMLS(arguments)
deltaE.TAO(arguments)

Arguments
arguments   List of arguments to pass to the functions. The first element is the prediction of the neuron. The second element is the corresponding component of the target vector. The third element is the whole net; this allows the TAO criterion to know the value of the S parameter and will eventually (in a coming minor update) allow the user to apply regularization criteria.

Value
These functions return the values of the error and influence-function criteria.

References
Pernía Espinoza, A.V., Ordieres Meré, J.B., Martínez de Pisón, F.J., González Marcos, A. TAO-robust backpropagation learning algorithm. Neural Networks, Vol. 18, Issue 2, pp. 191-204, 2005.

See Also
train
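In user space these criteria are selected by name rather than called directly. A sketch on a toy problem with a few gross outliers, where a robust criterion such as LMLS or TAO is intended to down-weight large residuals relative to plain LMS:

library(AMORE)
P <- matrix(runif(200, -1, 1), ncol=1)
T <- P^2
T[1:5] <- 10   # a few gross outliers in the targets
net <- newff(n.neurons=c(1,3,1), learning.rate.global=1e-2,
             momentum.global=0.5, error.criterium="LMLS", Stao=NA,
             hidden.layer="tansig", output.layer="purelin",
             method="ADAPTgdwm")
result <- train(net, P, T, error.criterium="LMLS", report=TRUE,
                show.step=100, n.shows=5)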

error.TAO               TAO robust error criterion auxiliary functions

Description
Auxiliary functions. Not meant to be called by the user, but by the error.TAO and deltaE.TAO functions.

Usage
hfun(v, k)
phifun(v, k)
dphifun(v, k)

Arguments
v           Input value.
k           Threshold limit.

Value
These functions return a numeric array with dimension equal to the dimension of v.

References
Pernía Espinoza, A.V., Ordieres Meré, J.B., Martínez de Pisón, F.J., González Marcos, A. TAO-robust backpropagation learning algorithm. Neural Networks, Vol. 18, Issue 2, pp. 191-204, 2005.

See Also
train

graphviz.MLPnet         Neural network graphical representation

Description
Creates a dot file, suitable to be processed with graphviz, containing a graphical representation of the network topology and some numerical information about the network parameters.

Usage
graphviz.MLPnet(net, filename, digits)

Arguments
net         Neural network.
filename    Name of the dot file to be written.
digits      Number of digits used to round the parameters.

Value
This function writes a file suitable to be postprocessed with the graphviz package, from which multiple formats (ps, pdf, ...) can be obtained.

References
http://www.graphviz.org
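A minimal sketch that writes the dot file for a small network; the rendering step at the end assumes Graphviz is installed on the system:

library(AMORE)
net <- newff(n.neurons=c(1,3,2,1), learning.rate.global=1e-2,
             momentum.global=0.5, error.criterium="LMS", Stao=NA,
             hidden.layer="tansig", output.layer="purelin",
             method="ADAPTgdwm")
graphviz.MLPnet(net, filename="net.dot", digits=3)
# Then, from a shell:  dot -Tpdf net.dot -o net.pdf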

init.MLPneuron          Neuron constructor

Description
Creates a neuron according to the structure established by the AMORE package standard.

Usage
init.MLPneuron(id, type, activation.function, output.links, output.aims, input.links, weights, bias, method, method.dep.variables)

Arguments
id          Numerical index of the neuron (so that it can be referred to in a network operation).
type        Either "hidden" or "output", according to the layer the neuron belongs to.
activation.function
            The name of the characteristic function of the neuron. It can be "purelin", "tansig", "sigmoid" or even "custom", in case the user wants to configure their own activation function by defining f0 and f1 accordingly.
output.links
            The ids of the neurons that accept the output value of this neuron as an input.
output.aims The location of the output of this neuron in the input set of the addressed neuron. It answers the question: is this output the first, the second, the third, ..., input of the addressed neuron? Similarly for an output neuron: is this output the first, the second, the third, ..., element of the output vector?
input.links The ids of the neurons whose outputs work as inputs for this neuron. Positive values mean that the outputs of other neurons are taken as inputs. Negative values represent the coordinates of the input vector to be taken as inputs.
weights     The multiplying factors of the input values.
bias        The bias summed to the weighted sum of the inputs.
method      Preferred training method. Currently it can be:
            "ADAPTgd": Adaptive gradient descent.
            "ADAPTgdwm": Adaptive gradient descent with momentum.
            "BATCHgd": Batch gradient descent.
            "BATCHgdwm": Batch gradient descent with momentum.
method.dep.variables
            Variables used by the training methods, as listed below.

The method-dependent variables are:

ADAPTgd method:
  delta: used in the backpropagation method.
  learning.rate: learning rate parameter. Notice that a different rate can be used for each neuron.

ADAPTgdwm method:
  delta: used in the backpropagation method.
  learning.rate: learning rate parameter. Notice that a different rate can be used for each neuron.
  momentum: momentum constant used in the backpropagation with momentum learning criterion.
  former.weight.change: last increment in the weight parameters; used by the momentum training technique.
  former.bias.change: last increment in the bias parameter; used by the momentum training technique.

BATCHgd method:
  delta: used in the backpropagation method.
  learning.rate: learning rate parameter. Notice that a different rate can be used for each neuron.
  sum.delta.x: accumulator of the changes to apply to the weight parameters in batch training.
  sum.delta.bias: accumulator of the changes to apply to the bias parameters in batch training.

BATCHgdwm method:
  delta: used in the backpropagation method.
  learning.rate: learning rate parameter. Notice that a different rate can be used for each neuron.
  sum.delta.x: accumulator of the changes to apply to the weight parameters in batch training.
  sum.delta.bias: accumulator of the changes to apply to the bias parameters in batch training.
  momentum: momentum constant used in the backpropagation with momentum learning criterion.
  former.weight.change: last increment in the weight parameters; used by the momentum training technique.
  former.bias.change: last increment in the bias parameter; used by the momentum training technique.

Value
init.MLPneuron returns a single neuron. Mainly used to create a neural network object.

See Also
newff, random.init.MLPnet, random.init.MLPneuron, select.activation.function
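init.MLPneuron is normally called for you by newff. A quick way to see the neuron structure it builds is to inspect a freshly created network; the neurons component name used below is an assumption about the MLPnet layout, so treat this only as an illustrative sketch:

library(AMORE)
net <- newff(n.neurons=c(1,3,1), learning.rate.global=1e-2,
             momentum.global=0.5, error.criterium="LMS", Stao=NA,
             hidden.layer="tansig", output.layer="purelin",
             method="ADAPTgd")
str(net, max.level=1)    # overall network object layout
str(net$neurons[[1]])    # one neuron as built by init.MLPneuron (assumed slot name)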

newff                   Create a Multilayer Feedforward Neural Network

Description
Creates a feedforward artificial neural network according to the structure established by the AMORE package standard.

Usage
newff(n.neurons, learning.rate.global, momentum.global, error.criterium, Stao, hidden.layer, output.layer, method)

Arguments
n.neurons   Numeric vector containing the number of neurons of each layer. The first element of the vector is the number of input neurons, the last is the number of output neurons, and the rest are the numbers of neurons of the different hidden layers.
learning.rate.global
            Learning rate at which every neuron is trained.
momentum.global
            Momentum for every neuron. Needed by several training methods.
error.criterium
            Criterion used to measure the proximity of the neural network prediction to its target. Currently we can choose amongst:
            "LMS": Least Mean Squares.
            "LMLS": Least Mean Logarithm Squared (Liano 1996).
            "TAO": TAO error (Pernia, 2004).
Stao        Stao parameter for the TAO error criterion. Unused by the rest of the criteria.
hidden.layer
            Activation function of the hidden layer neurons. Available functions are:
            "purelin"
            "tansig"
            "sigmoid"
            "hardlim"
            "custom": the user must manually define the f0 and f1 elements of the neurons.

output.layer
            Activation function of the output layer neurons, chosen from the same list shown above.
method      Preferred training method. Currently it can be:
            "ADAPTgd": Adaptive gradient descent.
            "ADAPTgdwm": Adaptive gradient descent with momentum.
            "BATCHgd": Batch gradient descent.
            "BATCHgdwm": Batch gradient descent with momentum.

Value
newff returns a multilayer feedforward neural network object.

Author(s)
Joaquin Ordieres Meré. Ana González Marcos. Alpha V. Pernía Espinoza. Eliseo P. Vergara Gonzalez. Francisco Javier Martinez de Pisón. Fernando Alba Elías.

References
Pernía Espinoza, A.V., Ordieres Meré, J.B., Martínez de Pisón, F.J., González Marcos, A. TAO-robust backpropagation learning algorithm. Neural Networks, Vol. 18, Issue 2, pp. 191-204, 2005.

See Also
init.MLPneuron, random.init.MLPnet, random.init.MLPneuron, select.activation.function

Examples
# Example 1
library(AMORE)
# P is the input vector
P <- matrix(sample(seq(-1,1,length=1000), 1000, replace=FALSE), ncol=1)
# The network will try to approximate the target P^2
target <- P^2
# We create a feedforward network with two hidden layers.
# The first hidden layer has three neurons and the second has two neurons.
# The hidden layers have tansig activation functions and the output layer is purelin.
net <- newff(n.neurons=c(1,3,2,1), learning.rate.global=1e-2,
             momentum.global=0.5, error.criterium="LMS", Stao=NA,
             hidden.layer="tansig", output.layer="purelin",
             method="ADAPTgdwm")
result <- train(net, P, target, error.criterium="LMS", report=TRUE,
                show.step=100, n.shows=5)
y <- sim(result$net, P)
plot(P, y, col="blue", pch="+")
points(P, target, col="red", pch="x")

random.init.MLPnet      Initialize the network with random weights and biases

Description
Provides random values for the network weights and biases to start with. Basically, it applies the random.init.MLPneuron function to every neuron in the network.

Usage
random.init.MLPnet(net)

Arguments
net         The neural network object.

Value
random.init.MLPnet returns the input network with weights and biases changed randomly.

See Also
random.init.MLPneuron, init.MLPneuron, newff
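A short sketch re-randomizing an existing network, for example to restart training from a fresh random starting point:

library(AMORE)
net <- newff(n.neurons=c(1,3,1), learning.rate.global=1e-2,
             momentum.global=0.5, error.criterium="LMS", Stao=NA,
             hidden.layer="tansig", output.layer="purelin",
             method="ADAPTgd")
net <- random.init.MLPnet(net)   # fresh random weights and biases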

random.init.MLPneuron   Initialize the neuron with random weights and bias

Description
Provides random values for the neuron weights and bias to start with. It is usually called by the random.init.MLPnet function during the construction of the neural object by the newff function.

Usage
random.init.MLPneuron(net.number.weights, neuron)

Arguments
net.number.weights
            Number of bias and weight parameters of the neural network the neuron belongs to.
neuron      The neuron object.

Details
The values are assigned according to the suggestions of Haykin.

Value
random.init.MLPneuron returns the input neuron with bias and weights changed randomly.

See Also
random.init.MLPnet, init.MLPneuron, newff

select.activation.function
                        Provides the R code of the selected activation function

Description
Provides the R code of the selected activation function and of its derivative, to be assigned to the f0 and f1 elements of a neuron. It is usually called during the construction of the neural object by the newff function.

Usage
select.activation.function(activation.function)

Arguments
activation.function
            Activation function name. Currently the user may choose amongst "purelin", "tansig", "sigmoid", "hardlim" and "custom". If "custom" is chosen, the user must manually assign the neuron f0 and f1 functions.

Value
select.activation.function returns a list with two elements. The first, f0, is the R code selected to serve as the neuron activation function. The second, f1, is the R code of the activation function derivative.

See Also
init.MLPneuron, newff
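A small sketch retrieving the tansig pair and inspecting it; whether f0 and f1 are plain functions or unevaluated R code is left open by the documentation, so they are only printed here:

library(AMORE)
act <- select.activation.function("tansig")
act$f0   # R code of the activation function
act$f1   # R code of its derivative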

sim.MLPnet              Performs the simulation of a neural network from an input data set

Description
This function calculates the output values of the neural network for a given input data set. Several versions exist, according to different degrees of C code conversion; the sim.MLPnet function is the latest and quickest.

Usage
sim(net, P, ...)
# sim.MLPnet(net, P, ...)

Arguments
net         Neural network to simulate.
P           Input data set values.
...         Currently, only the parameters above are accepted.

Value
This function returns a matrix containing the output values of the neural network for the given data set.

Author(s)
Joaquin Ordieres Meré <j.ordieres@upm.es>
Francisco Javier Martinez de Pisón <fjmartin@unirioja.es>

See Also
newff, train
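A sketch simulating a trained network on new inputs, continuing the pattern of the newff example:

library(AMORE)
P <- matrix(runif(200, -1, 1), ncol=1)
target <- P^2
net <- newff(n.neurons=c(1,3,2,1), learning.rate.global=1e-2,
             momentum.global=0.5, error.criterium="LMS", Stao=NA,
             hidden.layer="tansig", output.layer="purelin",
             method="ADAPTgdwm")
result <- train(net, P, target, error.criterium="LMS", report=FALSE,
                show.step=100, n.shows=5)
Pnew <- matrix(seq(-1, 1, length=50), ncol=1)
ynew <- sim(result$net, Pnew)   # matrix of outputs for the new inputs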

train                   Neural network training function

Description
For a given data set (training set), this function modifies the neural network weights and biases to approximate the relationships amongst the variables present in the training set. This may serve several purposes, e.g. fitting non-linear functions.

Usage
train(net, P, T, Pval=NULL, Tval=NULL, error.criterium="LMS", report=TRUE, n.shows, show.step, Stao=NA, prob=NULL, n.threads=0L)

Arguments
net         Neural network to train.
P           Training set input values.
T           Training set output values.
Pval        Validation set input values, for optional early stopping.
Tval        Validation set output values, for optional early stopping.
error.criterium
            Criterion used to measure the goodness of fit: "LMS", "LMLS" or "TAO".
Stao        Initial value of the S parameter used by the TAO algorithm.
report      Logical value indicating whether the training function should keep quiet or provide graphical/written information during the training process.
n.shows     Number of times to report (if report is TRUE). The total number of training epochs is n.shows times show.step.
show.step   Number of epochs to train non-stop until the training function is allowed to report.
prob        Vector with the probabilities of each sample, so as to apply resampling training.
n.threads   Number of threads to spawn for the BATCH* training methods. If < 1, it spawns NumberProcessors-1 threads. If OpenMP is not available, this argument is ignored.

Value
This function returns a list with two elements: the trained neural network object, with weights and biases adjusted by the chosen training method, and a matrix with the errors obtained during the training. If a validation set is provided, the early stopping technique is applied.

Author(s)
Joaquin Ordieres Meré <j.ordieres@upm.es>

References
Pernía Espinoza, A.V., Ordieres Meré, J.B., Martínez de Pisón, F.J., González Marcos, A. TAO-robust backpropagation learning algorithm. Neural Networks, Vol. 18, Issue 2, pp. 191-204, 2005.

See Also
newff
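A sketch using the optional validation set for early stopping; the hold-out split below is arbitrary:

library(AMORE)
P <- matrix(runif(300, -1, 1), ncol=1)
T <- P^2
idx <- sample(nrow(P), 50)   # hold out 50 samples for validation
net <- newff(n.neurons=c(1,3,1), learning.rate.global=1e-2,
             momentum.global=0.5, error.criterium="LMS", Stao=NA,
             hidden.layer="tansig", output.layer="purelin",
             method="ADAPTgdwm")
result <- train(net, P[-idx, , drop=FALSE], T[-idx, , drop=FALSE],
                Pval=P[idx, , drop=FALSE], Tval=T[idx, , drop=FALSE],
                error.criterium="LMS", report=TRUE,
                show.step=100, n.shows=5)
# result$net holds the trained network; the second list element
# records the errors observed during training.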

training.report         Neural network training report generator function

Description
Function in charge of reporting the behavior of the network during training. Users should modify this function according to their needs.

Usage
training.report(net, P, T, idx.show, error.criterium)

Arguments
net         Neural network being trained.
P           Training set input values.
T           Training set output values.
idx.show    Current show index.
error.criterium
            Criterion used to measure the goodness of fit.

Value
This function does not return any value. It is useful only for printing and plotting.

See Also
train
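training.report is invoked by train at every reporting step, but it can also be called directly; a minimal sketch, assuming the function is exported as its usage above suggests:

library(AMORE)
P <- matrix(runif(100, -1, 1), ncol=1)
T <- P^2
net <- newff(n.neurons=c(1,3,1), learning.rate.global=1e-2,
             momentum.global=0.5, error.criterium="LMS", Stao=NA,
             hidden.layer="tansig", output.layer="purelin",
             method="ADAPTgd")
training.report(net, P, T, idx.show=1, error.criterium="LMS")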


Index

Topic: neural
ADAPTgd.MLPnet, ADAPTgdwm.MLPnet, BATCHgd.MLPnet, BATCHgdwm.MLPnet, error.LMS, error.TAO, graphviz.MLPnet, init.MLPneuron, newff, random.init.MLPnet, random.init.MLPneuron, select.activation.function, sim.MLPnet, train, training.report

ADAPTgd.MLPnet
ADAPTgdwm.MLPnet
BATCHgd.MLPnet
BATCHgdwm.MLPnet
deltaE.LMLS (error.LMS)
deltaE.LMS (error.LMS)
deltaE.TAO (error.LMS)
dphifun (error.TAO)
error.LMLS (error.LMS)
error.LMS
error.TAO
graphviz.MLPnet
hfun (error.TAO)
init.MLPneuron
newff
phifun (error.TAO)
random.init.MLPnet
random.init.MLPneuron
select.activation.function
sim (sim.MLPnet)
sim.MLPnet
train
training.report