Introduction to Machine Learning and Data Mining. Prof. Dr. Igor Trajkovski trajkovski@nyus.edu.mk

Neural Networks

Neural Networks
- Analogy to biological neural systems, the most robust learning systems we know.
- Attempt to understand natural biological systems through computational modeling.
- Massive parallelism allows for computational efficiency.
- Help understand the distributed nature of neural representations (rather than localist representations) that allows robustness and graceful degradation.
- Intelligent behavior as an emergent property of a large number of simple units, rather than from explicitly encoded symbolic rules and algorithms.

Neural Speed Constraints
- Neurons have a switching time on the order of a few milliseconds, compared to nanoseconds for current computing hardware.
- However, neural systems can perform complex cognitive tasks (vision, speech understanding) in tenths of a second.
- There is only time for performing about 100 serial steps in this time frame, far fewer than current personal computers execute; the brain must be exploiting massive parallelism.
- The human brain has about 10^11 neurons with an average of 10^4 connections each.

Neural Network Learning
- Learning approach based on modeling adaptation in biological neural systems.
- Perceptron: initial algorithm for learning simple neural networks (single layer), developed in the 1950s.
- Backpropagation: more complex algorithm for learning multi-layer neural networks, developed in the 1980s.

Real Neurons
Cell structures: cell body, dendrites, axon, synaptic terminals.

Neural Communication
- Electrical potential across the cell membrane exhibits spikes called action potentials.
- A spike originates in the cell body, travels down the axon, and causes synaptic terminals to release neurotransmitters.
- The chemical diffuses across the synapse to dendrites of other neurons.
- Neurotransmitters can be excitatory or inhibitory.
- If the net input of neurotransmitters to a neuron from other neurons is excitatory and exceeds some threshold, it fires an action potential.

Real Neural Learning
- Synapses change size and strength with experience.
- Hebbian learning: when two connected neurons are firing at the same time, the strength of the synapse between them increases. "Neurons that fire together, wire together."

Artificial Neuron Model
- Model the network as a graph with cells as nodes and synaptic connections as weighted edges from node $i$ to node $j$, with weight $w_{ji}$.
- Model net input to cell $j$ as: $net_j = \sum_i w_{ji} o_i$
- Cell output is a step function of the net input: $o_j = 0$ if $net_j < T_j$, and $o_j = 1$ if $net_j \geq T_j$, where $T_j$ is the threshold for unit $j$.
(Figure: unit 1 receiving weighted inputs $w_{12}, w_{13}, \dots, w_{16}$ from units 2-6, and the step-function plot of $o_j$ against $net_j$.)
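As an illustration (not from the lecture), a minimal Python sketch of this threshold unit; the function name `unit_output` is my own:

```python
import numpy as np

def unit_output(weights, inputs, threshold):
    """Linear threshold unit: fire (1) iff net input reaches the threshold."""
    net = np.dot(weights, inputs)   # net_j = sum_i w_ji * o_i
    return 1 if net >= threshold else 0

# Example: two inputs with equal weights and threshold 1.0
print(unit_output(np.array([0.6, 0.6]), np.array([1, 1]), 1.0))  # 1
print(unit_output(np.array([0.6, 0.6]), np.array([1, 0]), 1.0))  # 0
```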

Neural Computation
- McCulloch and Pitts (1943) showed how such model neurons could compute logical functions and be used to construct finite-state machines.
- Can be used to simulate logic gates:
  - AND: let all $w_{ji}$ be $T_j/n$, where $n$ is the number of inputs.
  - OR: let all $w_{ji}$ be $T_j$.
  - NOT: let the threshold be 0, single input with a negative weight.
- Can build arbitrary logic circuits, sequential machines, and computers with such gates.
- Given negated inputs, a two-layer network can compute any boolean function using a two-level AND-OR network.
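Using the `unit_output` sketch above, the slide's gate constructions can be checked directly (the threshold value 1.0 is an arbitrary choice of mine):

```python
T = 1.0  # arbitrary positive threshold

def AND(x, y):  # all weights T/n with n = 2 inputs
    return unit_output(np.array([T/2, T/2]), np.array([x, y]), T)

def OR(x, y):   # all weights equal to T
    return unit_output(np.array([T, T]), np.array([x, y]), T)

def NOT(x):     # threshold 0, single negative weight
    return unit_output(np.array([-1.0]), np.array([x]), 0.0)

for x in (0, 1):
    for y in (0, 1):
        print(x, y, AND(x, y), OR(x, y), NOT(x))
```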

Perceptron Training
- Assume supervised training examples giving the desired output for a unit given a set of known input activations.
- Learn synaptic weights so that the unit produces the correct output for each example.
- The perceptron uses an iterative update algorithm to learn a correct set of weights.

Perceptron Learning Rule
- Update weights by: $w_{ji} = w_{ji} + \eta (t_j - o_j) o_i$, where $\eta$ is the learning rate and $t_j$ is the teacher-specified output for unit $j$.
- Equivalent to the rules:
  - If output is correct, do nothing.
  - If output is high, lower weights on active inputs.
  - If output is low, increase weights on active inputs.
- Also adjust the threshold to compensate: $T_j = T_j - \eta (t_j - o_j)$

Perceptron Learning Algorithm
Iteratively update weights until convergence.
  Initialize weights to random values
  Until outputs of all training examples are correct:
    For each training pair, E, do:
      Compute current output o_j for E given its inputs
      Compare current output to target value, t_j, for E
      Update synaptic weights and threshold using the learning rule
Each execution of the outer loop is typically called an epoch.
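A runnable sketch of this loop in Python, using the update rule and threshold adjustment from the previous slide (the function and variable names are mine, not the lecture's):

```python
import numpy as np

def train_perceptron(examples, targets, eta=0.1, max_epochs=100, seed=0):
    """Train a single linear threshold unit; returns (weights, threshold)."""
    rng = np.random.default_rng(seed)
    w = rng.uniform(-0.5, 0.5, examples.shape[1])  # random initial weights
    T = rng.uniform(-0.5, 0.5)                     # random initial threshold
    for epoch in range(max_epochs):
        all_correct = True
        for x, t in zip(examples, targets):
            o = 1 if np.dot(w, x) >= T else 0      # current output
            if o != t:
                all_correct = False
                w += eta * (t - o) * x             # w_ji += eta (t_j - o_j) o_i
                T -= eta * (t - o)                 # T_j  -= eta (t_j - o_j)
        if all_correct:                            # converged: all outputs correct
            break
    return w, T

# Learn the (linearly separable) AND function
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, T = train_perceptron(X, y)
print([1 if np.dot(w, x) >= T else 0 for x in X])  # expected: [0, 0, 0, 1]
```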

Perceptron as a Linear Separator
Since the perceptron uses a linear threshold function, it is searching for a linear separator that discriminates the classes. For a unit with two inputs $o_2$ and $o_3$, it classifies an example as positive when
$$w_{12} o_2 + w_{13} o_3 > T_1$$
i.e. (assuming $w_{13} > 0$) when
$$o_3 > -\frac{w_{12}}{w_{13}} o_2 + \frac{T_1}{w_{13}}$$
which is a line in the $(o_2, o_3)$ plane, or a hyperplane in $n$-dimensional space.

Concept Perceptron Cannot Learn
Cannot learn exclusive-or, or the parity function in general. (Figure: the four XOR points in the $(o_2, o_3)$ plane, with $(0,1)$ and $(1,0)$ positive and $(0,0)$ and $(1,1)$ negative; no single line separates the positives from the negatives.)
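Running the earlier `train_perceptron` sketch on XOR illustrates this: no epoch ever gets all four examples right, so the loop only stops at `max_epochs` (a hypothetical check of mine, reusing the sketch above):

```python
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y_xor = np.array([0, 1, 1, 0])
w, T = train_perceptron(X, y_xor, max_epochs=1000)
preds = [1 if np.dot(w, x) >= T else 0 for x in X]
print(preds, "!=", list(y_xor))  # at least one example is always misclassified
```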

Perceptron Limits
- The system obviously cannot learn concepts it cannot represent.
- Minsky and Papert (1969) wrote a book analyzing the perceptron and demonstrating many functions it could not learn.
- These results discouraged further research on neural nets, and symbolic AI became the dominant paradigm.

Perceptron Convergence and Cycling Theorems
- Perceptron convergence theorem: if the data is linearly separable, and therefore a set of weights exists that is consistent with the data, then the perceptron algorithm will eventually converge to a consistent set of weights.
- Perceptron cycling theorem: if the data is not linearly separable, the perceptron algorithm will eventually repeat a set of weights and threshold at the end of some epoch and therefore enter an infinite loop.
- By checking for repeated weights and threshold, one can guarantee termination with either a positive or negative result.

Perceptron as Hill Climbing
- The hypothesis space being searched is a set of weights and a threshold.
- The objective is to minimize classification error on the training set.
- The perceptron effectively does hill-climbing (gradient descent) in this space, changing the weights a small amount at each point to decrease training set error.
- For a single model neuron, the space is well behaved, with a single minimum. (Figure: training error as a function of the weights, a bowl with one minimum.)

Perceptron Performance
- Linear threshold functions are restrictive (high bias) but still reasonably expressive; more general than:
  - Pure conjunctive
  - Pure disjunctive
  - M-of-N (at least M of a specified set of N features must be present)
- In practice, converges fairly quickly for linearly separable data.
- Can effectively use even incompletely converged results when only a few outliers are misclassified.
- Experimentally, the perceptron does quite well on many benchmark data sets.

Multi-Layer Networks
- Multi-layer networks can represent arbitrary functions, but an effective learning algorithm for such networks was thought to be difficult.
- A typical multi-layer network consists of an input, hidden and output layer, each fully connected to the next, with activation feeding forward. (Figure: input, hidden, and output layers with activation flowing upward.)
- The weights determine the function computed.
- Given an arbitrary number of hidden units, any boolean function can be computed with a single hidden layer.

Hill-Climbing in Multi-Layer Nets
- Since "greed is good," perhaps hill-climbing can be used to learn multi-layer networks in practice, although its theoretical limits are clear.
- However, to do gradient descent, we need the output of a unit to be a differentiable function of its input and weights.
- The standard linear threshold function is not differentiable at the threshold. (Figure: step function of $o_j$ against $net_j$, discontinuous at $T_j$.)

Differentiable Output Function
- Need a non-linear output function to move beyond linear functions: a multi-layer linear network is still linear.
- The standard solution is to use the non-linear, differentiable sigmoidal logistic function:
$$o_j = \frac{1}{1 + e^{-(net_j - T_j)}}$$
(Figure: sigmoid curve rising from 0 to 1, centered at $T_j$.)
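A small sketch of the logistic output and its derivative (used in the gradient on the next slide); the names are mine:

```python
import numpy as np

def sigmoid(net, T=0.0):
    """Logistic output: o = 1 / (1 + exp(-(net - T)))."""
    return 1.0 / (1.0 + np.exp(-(net - T)))

def sigmoid_deriv(o):
    """Derivative with respect to net input, in terms of the output: o * (1 - o)."""
    return o * (1.0 - o)

o = sigmoid(0.5)
print(o, sigmoid_deriv(o))
```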

Gradient Descent
- Define the objective to minimize error:
$$E(W) = \frac{1}{2} \sum_{d \in D} \sum_{k \in K} (t_{kd} - o_{kd})^2$$
where $D$ is the set of training examples, $K$ is the set of output units, and $t_{kd}$ and $o_{kd}$ are, respectively, the teacher and current output for unit $k$ on example $d$.
- The derivative of a sigmoid unit with respect to its net input is:
$$\frac{\partial o_j}{\partial net_j} = o_j (1 - o_j)$$
- The learning rule to change weights to minimize error is:
$$\Delta w_{ji} = -\eta \frac{\partial E}{\partial w_{ji}}$$
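For an output unit $j$, chaining these pieces yields the delta rule quoted on the next slide; the derivation (standard, not spelled out in the lecture) is:
$$\frac{\partial E}{\partial w_{ji}} = \frac{\partial E}{\partial o_j} \frac{\partial o_j}{\partial net_j} \frac{\partial net_j}{\partial w_{ji}} = -(t_j - o_j) \cdot o_j (1 - o_j) \cdot o_i$$
so that
$$\Delta w_{ji} = -\eta \frac{\partial E}{\partial w_{ji}} = \eta \, o_j (1 - o_j)(t_j - o_j) \, o_i = \eta \, \delta_j \, o_i$$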

Backpropagation Learning Rule
Each weight is changed by:
$$\Delta w_{ji} = \eta \, \delta_j \, o_i$$
$$\delta_j = o_j (1 - o_j)(t_j - o_j) \quad \text{if } j \text{ is an output unit}$$
$$\delta_j = o_j (1 - o_j) \sum_k \delta_k w_{kj} \quad \text{if } j \text{ is a hidden unit}$$
where $\eta$ is a constant called the learning rate, $t_j$ is the correct teacher output for unit $j$, and $\delta_j$ is the error measure for unit $j$.

Error Backpropagation
First calculate the error of the output units and use this to change the top layer of weights. For an output unit with current output $o_j = 0.2$ and correct output $t_j = 1.0$:
$$\delta_j = o_j (1 - o_j)(t_j - o_j) = 0.2(1 - 0.2)(1 - 0.2) = 0.128$$
Update weights into $j$: $\Delta w_{ji} = \eta \, \delta_j \, o_i$. (Figure: output, hidden, and input layers, with the update applied at the output layer.)

Error Backpropagation
Next calculate the error for the hidden units based on the errors of the output units they feed into:
$$\delta_j = o_j (1 - o_j) \sum_k \delta_k w_{kj}$$

Error Backpropagation
Finally update the bottom layer of weights based on the errors calculated for the hidden units:
$$\delta_j = o_j (1 - o_j) \sum_k \delta_k w_{kj}$$
Update weights into $j$: $\Delta w_{ji} = \eta \, \delta_j \, o_i$

Backpropagation Training Algorithm
Create the 3-layer network with H hidden units, with full connectivity between layers. Set weights to small random real values.
Until all training examples produce the correct value (within ε), or mean squared error ceases to decrease, or other termination criteria are met:
  Begin epoch
  For each training example, d, do:
    Calculate network output for d's input values
    Compute error between current output and correct output for d
    Update weights by backpropagating error and using the learning rule
  End epoch
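A compact, runnable sketch of this algorithm for one hidden layer, using the delta rules above; bias weights play the role of the (negated) thresholds, and all names plus the XOR demo are my own choices, not the lecture's:

```python
import numpy as np

def train_backprop(X, t, H=2, eta=0.5, epochs=10000, seed=0):
    """Train a 3-layer (input-hidden-output) sigmoid network by backpropagation."""
    rng = np.random.default_rng(seed)
    n_in, n_out = X.shape[1], t.shape[1]
    # Small random weights; the last row of each matrix is a bias (acts as -T_j)
    W1 = rng.uniform(-0.5, 0.5, (n_in + 1, H))
    W2 = rng.uniform(-0.5, 0.5, (H + 1, n_out))
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        for x, target in zip(X, t):
            # Forward pass
            xb = np.append(x, 1.0)                       # input plus bias term
            h = sig(xb @ W1)                             # hidden activations
            hb = np.append(h, 1.0)
            o = sig(hb @ W2)                             # output activations
            # Backward pass: delta rules from the slides
            delta_o = o * (1 - o) * (target - o)         # output units
            delta_h = h * (1 - h) * (W2[:-1] @ delta_o)  # hidden units
            # Weight updates: delta w_ji = eta * delta_j * o_i
            W2 += eta * np.outer(hb, delta_o)
            W1 += eta * np.outer(xb, delta_h)
    return W1, W2

# XOR: not learnable by a perceptron, learnable with one hidden layer
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([[0], [1], [1], [0]], dtype=float)
W1, W2 = train_backprop(X, t)
sig = lambda z: 1.0 / (1.0 + np.exp(-z))
for x in X:
    h = sig(np.append(x, 1.0) @ W1)
    print(x, sig(np.append(h, 1.0) @ W2).round(2))
# If outputs are not near 0, 1, 1, 0, retry with another seed:
# backpropagation can get stuck in local minima (see the next slide).
```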

Comments on Training Algorithm
- Not guaranteed to converge to zero training error; may converge to local optima or oscillate indefinitely.
- However, in practice, it does converge to low error for many large networks on real data.
- Many epochs (thousands) may be required; hours or days of training for large networks.
- To avoid local-minima problems, run several trials starting with different random weights:
  - Take the results of the trial with the lowest training set error, or
  - Build a committee of results from multiple trials (possibly weighting votes by training set accuracy).

Representational Power
- Boolean functions: any boolean function can be represented by a two-layer network with sufficient hidden units.
- Continuous functions: any bounded continuous function can be approximated with arbitrarily small error by a two-layer network. Sigmoid functions can act as a set of basis functions for composing more complex functions, like sine waves in Fourier analysis.
- Arbitrary functions: any function can be approximated to arbitrary accuracy by a three-layer network.

Sample Learned XOR Network
(Figure: a trained 2-2-1 network with learned weights 5.24, 3.6, 6.96, 3.58, 5.74, 3.11, 7.38, 2.03, 5.57 on its connections and thresholds.)
Hidden unit A represents: ¬(X ∧ Y)
Hidden unit B represents: ¬(X ∨ Y)
Output O represents: A ∧ ¬B = ¬(X ∧ Y) ∧ (X ∨ Y) = X ⊕ Y
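This logical decomposition can be checked with the gate sketches from the Neural Computation slide (an illustration of mine, not part of the lecture):

```python
for x in (0, 1):
    for y in (0, 1):
        A = NOT(AND(x, y))       # hidden unit A: not(X and Y)
        B = NOT(OR(x, y))        # hidden unit B: not(X or Y)
        O = AND(A, NOT(B))       # output: A and not B
        print(x, y, O)           # equals X xor Y
```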

Hidden Unit Representations
- Trained hidden units can be seen as newly constructed features that make the target concept linearly separable in the transformed space.
- On many real domains, hidden units can be interpreted as representing meaningful features such as vowel detectors or edge detectors.
- However, the hidden layer can also become a distributed representation of the input in which each individual unit is not easily interpretable as a meaningful feature.

Over-Training Prevention
- Running too many epochs can result in over-fitting. (Figure: error versus number of training epochs; training error keeps falling while error on test data eventually rises.)
- Keep a hold-out validation set and test accuracy on it after every epoch. Stop training when additional epochs actually increase validation error.
- To avoid losing training data to validation:
  - Use internal 10-fold CV on the training set to compute the average number of epochs that maximizes generalization accuracy.
  - Train the final network on the complete training set for this many epochs.
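A sketch of the epoch-level early-stopping loop described here; `train_one_epoch` and `val_error` are assumed callables (e.g., wrapping the backpropagation sketch above), not part of the lecture's code:

```python
def early_stopping(train_one_epoch, val_error, max_epochs=1000):
    """Train until the hold-out validation error starts to rise.

    train_one_epoch() runs one backpropagation pass and returns the current
    weights; val_error(weights) measures error on the hold-out validation set.
    """
    best_err, best_weights = float("inf"), None
    for epoch in range(max_epochs):
        weights = train_one_epoch()
        err = val_error(weights)
        if err < best_err:
            best_err, best_weights = err, weights
        else:
            break            # additional epochs increase validation error
    return best_weights
```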

Determining the Best Number of Hidden Units
- Too few hidden units prevent the network from adequately fitting the data.
- Too many hidden units can result in over-fitting. (Figure: error versus number of hidden units; training error keeps falling while error on test data eventually rises.)
- Use internal cross-validation to empirically determine an optimal number of hidden units.
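One way this internal cross-validation could look, reusing the `train_backprop` sketch from the training-algorithm slide (names and candidate values are my assumptions; it also assumes the training set has at least `folds` examples):

```python
import numpy as np

def net_error(W1, W2, X, t):
    """Mean squared error of the 3-layer sigmoid net from the earlier sketch."""
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    errs = []
    for x, target in zip(X, t):
        h = sig(np.append(x, 1.0) @ W1)
        o = sig(np.append(h, 1.0) @ W2)
        errs.append(np.sum((target - o) ** 2))
    return np.mean(errs)

def pick_hidden_units(X, t, candidates=(1, 2, 4, 8), folds=10):
    """Choose H by internal k-fold cross-validation on the training set."""
    split = np.array_split(np.arange(len(X)), folds)
    best_H, best_err = None, float("inf")
    for H in candidates:
        fold_errs = []
        for k in range(folds):
            test = split[k]
            train = np.concatenate(split[:k] + split[k+1:])
            W1, W2 = train_backprop(X[train], t[train], H=H)  # sketch above
            fold_errs.append(net_error(W1, W2, X[test], t[test]))
        if np.mean(fold_errs) < best_err:
            best_H, best_err = H, np.mean(fold_errs)
    return best_H
```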

Successful Applications
- Text to speech (NetTalk)
- Fraud detection
- Financial applications
- Chemical plant control (Pavilion Technologies)
- Automated vehicles
- Game playing (Neurogammon)
- Handwriting recognition

Issues in Neural Nets
- More efficient training methods:
  - Quickprop
  - Conjugate gradient (exploits the 2nd derivative)
- Learning the proper network architecture:
  - Grow the network until it is able to fit the data
  - Shrink a large network until it is unable to fit the data (Optimal Brain Damage)
- Recurrent networks that use feedback and can learn finite state machines with "backpropagation through time."