Neural Networks - Kohonen Self-Organizing Maps

Mohamed Krini
Christian-Albrechts-Universität zu Kiel
Faculty of Engineering
Institute of Electrical and Information Engineering
Digital Signal Processing and System Theory

Contents of the Lecture (Entire Semester):
- Introduction
- Pre-Processing and Feature Extraction
- Threshold Logic Units - Single Perceptrons
- Multilayer Perceptrons
- Training Multilayer Perceptrons
- Radial Basis Function Networks
- Learning Vector Quantization
- Kohonen Self-Organizing Maps
- Hopfield and Recurrent Networks

Contents of this Part - Kohonen Self-Organizing Maps:
- Introduction
- Definitions and Properties
- Neighborhood of Output Neurons
- Learning Rule
- Topology Functions
- Examples
- Applications
- Literature

Introduction

Structure of a Self-Organizing Map:
[Figure: input neurons connected to a grid of output neurons; labels: input neurons, connections, output neurons and their neighborhood relationships.]
The map consists of the output neurons and their neighborhood relationships; these neighborhood relationships are called the topology. Each input neuron is connected to all output neurons.

Definitions and Properties (1/2)

Definitions:
- A self-organizing map (SOM) or Kohonen feature map is a two-layer network without hidden neurons.
- All input neurons are connected to all output neurons.
- The distance between the input vector and the weight vectors is used as the input function, similar to RBFNs: $\mathrm{net}_k = \lVert \mathbf{x} - \mathbf{w}_k \rVert$.
- The activation function is a radial function, i.e. a monotonically decreasing function of this distance.
- The identity is used as the output function for each output neuron.
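A minimal sketch of these definitions in NumPy (the function names and the Gaussian choice of radial function are our own illustration, not notation from the lecture):

```python
import numpy as np

def net_input(x, weights):
    """Input function: distance between the input vector x and every
    weight vector. weights has shape (n_output_neurons, n_inputs)."""
    return np.linalg.norm(weights - x, axis=1)

def radial_activation(distances, sigma=1.0):
    """Activation function: a radial, monotonically decreasing function
    of the distance (here a Gaussian)."""
    return np.exp(-distances**2 / (2.0 * sigma**2))

# Output function: the identity, i.e. the output equals the activation.
```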

Definitions and Properties (2/2)

Properties:
- The neurons are interconnected by neighborhood relationships. These relationships are called the topology, which is described by a distance function on the grid.
- The training of a SOM is strongly influenced by its topology.
- A SOM always activates the neuron with the least distance to an input pattern (the winner-takes-all principle is often used).
- In the absence of neighborhood relationships, the SOM operates as a vector quantizer of the input space.
- Self-organizing maps are networks trained without a teacher (unsupervised learning).
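Winner-takes-all selection then reduces to finding the reference vector closest to the pattern; a sketch (our own, in NumPy):

```python
import numpy as np

def winner(x, weights):
    """Return the index of the winner neuron, i.e. the output neuron
    whose reference vector has the least distance to the pattern x."""
    return int(np.argmin(np.linalg.norm(weights - x, axis=1)))
```

Without a neighborhood, repeatedly updating only this winner is exactly vector quantization of the input space.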

Neighborhood of Output Neurons

[Figure: two example topologies of a self-organizing map - a one-dimensional (chain) topology and a two-dimensional (grid) topology. Black lines indicate neighbors of a neuron; gray lines indicate the regions assigned to the neurons.]
Topologies with more dimensions would also be possible. They are often not employed because they are difficult to visualize.
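To make the grid distance concrete, here is one way to store neuron positions for a two-dimensional topology and measure distances on the grid (a sketch under our own naming):

```python
import numpy as np

def grid_positions(rows, cols):
    """(row, col) coordinates of each output neuron on a 2-D grid;
    shape (rows * cols, 2)."""
    return np.array([(r, c) for r in range(rows) for c in range(cols)],
                    dtype=float)

def grid_distance(positions, k, l):
    """Distance between neurons k and l measured on the grid,
    not in the input space."""
    return float(np.linalg.norm(positions[k] - positions[l]))
```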

Learning Rule (1/2)

Training Procedure (a code sketch of this loop follows the list):
1. The network starts with random neuron centers (reference vectors).
2. An input vector $\mathbf{x}$ is selected from the input space.
3. The distance $\lVert \mathbf{x} - \mathbf{w}_k \rVert$ is determined for every neuron, and the winner neuron $k^*$ with the maximum activation (i.e. the minimum distance) is selected.
4. The reference vectors are updated using a neighborhood function $h$ and the update rule $\mathbf{w}_k(t+1) = \mathbf{w}_k(t) + \eta(t)\, h\big(k, k^*, \rho(t)\big)\, \big(\mathbf{x} - \mathbf{w}_k(t)\big)$, where $\rho(t)$ denotes the neighborhood radius.
5. The training is continued with step 2 as long as the maximum number of iterations has not been reached.
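A compact sketch of this loop (our own illustration: Gaussian topology function, exponentially decaying learning rate and radius; all parameter values are placeholders, not taken from the lecture):

```python
import numpy as np

def train_som(data, rows=10, cols=10, iterations=10000,
              eta0=0.5, rho0=3.0, seed=0):
    """Steps 1-5 of the training procedure for a 2-D map."""
    rng = np.random.default_rng(seed)
    positions = np.array([(r, c) for r in range(rows) for c in range(cols)],
                         dtype=float)
    # Step 1: random initialization of the reference vectors.
    weights = rng.random((rows * cols, data.shape[1]))
    for t in range(iterations):
        # Step 2: select an input vector from the input space.
        x = data[rng.integers(len(data))]
        # Step 3: winner = minimum distance (maximum activation).
        k_star = np.argmin(np.linalg.norm(weights - x, axis=1))
        # Time-varying learning rate eta(t) and radius rho(t).
        eta = eta0 * (0.01 / eta0) ** (t / iterations)
        rho = rho0 * (0.5 / rho0) ** (t / iterations)
        # Gaussian topology function, evaluated on the grid.
        g = np.linalg.norm(positions - positions[k_star], axis=1)
        h = np.exp(-g**2 / (2.0 * rho**2))
        # Step 4: move all reference vectors toward x, weighted by h.
        weights += eta * h[:, None] * (x - weights)
    # Step 5: stop after the maximum number of iterations.
    return weights, positions
```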

Learning Rule (2/2)

Training Parameters:
- The function $h(k, k^*, \rho(t))$, also called the topology function, represents the neighborhood relationships between the neurons. Note that this function is defined on the grid and not on the input space.
- The topology function can be any unimodal function that reaches its maximum at the winner neuron $k^*$.
- Time-varying learning rate $\eta(t)$.
- Time-varying neighborhood radius $\rho(t)$.
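The transcript does not reproduce the exact schedules; a common choice, given here purely as an assumption, is exponential decay between a start and an end value:

```python
def exponential_schedule(start, end, t, t_max):
    """Decay a parameter exponentially from start (t = 0) to end (t = t_max)."""
    return start * (end / start) ** (t / t_max)

# Possible use: eta = exponential_schedule(0.5, 0.01, t, 10000)
#               rho = exponential_schedule(3.0, 0.5, t, 10000)
```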

Topology Functions (1/2)

Common Topology Function:
A common topology function is the Gaussian function
$h(k, k^*, \rho(t)) = \exp\!\big(-\lVert \mathbf{g}_k - \mathbf{g}_{k^*} \rVert^2 / (2\rho^2(t))\big)$,
where $\mathbf{g}_k$ represents the position of neuron $k$ on the grid, not in the input space, and $\mathbf{g}_{k^*}$ denotes the position of the winner neuron. Other functions, such as the cone function or the Mexican hat function, can also be used.

Topology Functions (2/2)

Different Functions:
[Figure: four common topology functions - Gaussian, triangular, Mexican hat, and rectangle.]
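One common parameterization of these four functions, as a sketch (the slide's exact definitions are not reproduced in the transcript); here g is the grid distance to the winner and rho the neighborhood radius:

```python
import numpy as np

def gaussian(g, rho):
    """Smooth decay; never exactly zero."""
    return np.exp(-g**2 / (2.0 * rho**2))

def triangular(g, rho):
    """Cone-shaped linear decay; zero beyond rho."""
    return np.maximum(0.0, 1.0 - np.abs(g) / rho)

def mexican_hat(g, rho):
    """Excitatory near the winner, inhibitory further away."""
    s = g**2 / rho**2
    return (1.0 - s) * np.exp(-s / 2.0)

def rectangle(g, rho):
    """All neurons within the radius are updated equally."""
    return (np.abs(g) <= rho).astype(float)
```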

Examples - Two-Dimensional Topology (1/3)

Unfolding of a Two-Dimensional Self-Organizing Map:
- 10x10 output neurons were used for the two-dimensional topology.
- Training patterns randomly distributed within a fixed interval are utilized.
- Random initialization of the reference vectors.
- Nearest-neighbor neurons are connected by a straight line.
- A Gaussian topology function is used, together with a time-varying learning rate and neighborhood radius.
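With the train_som sketch from the learning-rule slide, this experiment could be reproduced roughly as follows (the interval [0, 1]^2 and all parameter values are our assumptions):

```python
import numpy as np

# Training patterns: uniformly distributed 2-D points (assumed interval).
rng = np.random.default_rng(1)
data = rng.random((5000, 2))

# 10x10 map, Gaussian topology function, decaying eta(t) and rho(t).
weights, positions = train_som(data, rows=10, cols=10, iterations=10000)

# Plotting the reference vectors and connecting nearest-neighbor neurons
# with straight lines yields the "unfolding" snapshots of the next slide.
```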

Examples - Two-Dimensional Topology (2/3)

Unfolding of a Two-Dimensional Self-Organizing Map (continued):
[Figure: snapshots of the map at the initial state and after 20, 100, 500, 1000, and 100000 iterations.]

Examples - Two-Dimensional Topology (3/3)

Activation of the Output Neurons:
[Figure: activation of the output neurons for a given input pattern, using a Gaussian activation function.]

Examples - One-Dimensional Topology

Unfolding of a One-Dimensional Self-Organizing Map:
[Figure: snapshots of the map at the initial state and after 20, 500, 1000, 10000, and 50000 iterations.]

Examples - Failure of the Training

Unfolding of a Two-Dimensional Map with Inappropriate Initialization:
[Figure: maps at iterations 1000, 10000, 50000, and 100000 that have not unfolded correctly.]
If the initialization is not chosen appropriately, e.g. if the learning rate or the neighborhood radius is chosen too low, the training of a self-organizing network may fail: the maps do not unfold correctly.

Examples - Dimension Reduction

2-D Self-Organizing Map in a 3-D Input Space:
[Figure: initial state and state after 90000 iterations for both data sets.]
- Self-organizing maps were trained with random points of a paraboloid of revolution (upper graphs) and of a cubic function (lower graphs); three input neurons are used.
- A map with 10x10 output neurons is utilized; reference vectors of adjacent output neurons are connected by a straight line.
- The original space and the image space have different dimensions: self-organizing maps can be used for dimensionality reduction.

Applications - Speech Recognition

Phoneme Map in Finnish:
[Figure: part of a hexagonal feature map; some nodes and their phoneme labels are shown, e.g. a, ah, ae, h, r, l, o, g, m.]
- Phonemes are extracted from the input signal using the FFT, a logarithm, averaging, and normalization.
- During learning, each neuron begins to respond strongly to a specific phoneme; after training, a specific neuron is most active for each phoneme.
- The map works very well for Finnish, a phonetic language.

Applications - Dimension Reduction (1/3)

Animal Names and Attributes:

          dove hen duck goose owl hawk eagle fox wolf dog cat tiger lion horse zebra cow
small      1    1   1    1    1   1    0    0   0    0   1   0     0    0     0    0
medium     0    0   0    0    0   0    1    1   1    1   0   0     0    0     0    0
big        0    0   0    0    0   0    0    0   0    0   0   1     1    1     1    1
2 legs     1    1   1    1    1   1    1    0   0    0   0   0     0    0     0    0
4 legs     0    0   0    0    0   0    0    1   1    1   1   1     1    1     1    1
hair       0    0   0    0    0   0    0    1   1    1   1   1     1    1     1    1
hooves     0    0   0    0    0   0    0    0   0    0   0   0     0    1     1    1
mane       0    0   0    1    0   0    0    0   0    1   0   0     1    1     1    0
feathers   1    1   1    1    1   1    1    0   0    0   0   0     0    0     0    0
hunt       0    0   0    0    1   1    1    1   0    1   1   1     1    0     0    0
run        0    0   0    0    0   0    0    0   1    1   0   1     1    1     1    0
fly        1    0   0    1    1   1    1    0   0    0   0   0     0    0     0    0
swim       0    0   1    1    0   0    0    0   0    0   0   0     0    0     0    0

Each column is a description of an animal, based on the presence (=1) or absence (=0) of the 13 different attributes given on the left. Each animal was encoded by a 29-dimensional data vector: a 13-dimensional vector for the attributes and a 16-dimensional vector for the animal name (a vector in which only the element corresponding to the animal has a non-zero value).
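A sketch of this 29-dimensional encoding (our own illustration; any scaling of the name part relative to the attribute part is an assumption):

```python
import numpy as np

animals = ["dove", "hen", "duck", "goose", "owl", "hawk", "eagle", "fox",
           "wolf", "dog", "cat", "tiger", "lion", "horse", "zebra", "cow"]

def encode(index, attributes):
    """29-dimensional data vector: a 16-dimensional one-hot vector for
    the animal name plus the 13 attribute values."""
    name_part = np.zeros(len(animals))
    name_part[index] = 1.0
    return np.concatenate([name_part, np.asarray(attributes, dtype=float)])

# Example: the dove (small, 2 legs, feathers, flies).
dove = encode(animals.index("dove"), [1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0])
```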

Applications - Dimension Reduction (2/3)

Trained Network:
[Figure: 10x10 map with cells labeled duck, horse, cow, zebra, tiger, goose, wolf, hawk, owl, lion, dove, dog, eagle, hen, fox, and cat; dots mark neurons with weaker responses.]
- A Kohonen network of 10 x 10 x 29 was utilized (a 10x10 grid of output neurons with 29 weights each).
- The network was trained for 2000 epochs.
- The map shows the cells giving the strongest response when only the animal name is presented as input.

Applications - Dimension Reduction (3/3)

Contextual Map:

duck   duck   horse  horse  zebra  zebra  cow    cow    cow    cow
duck   duck   horse  zebra  zebra  zebra  cow    cow    tiger  tiger
goose  goose  goose  zebra  zebra  zebra  wolf   wolf   tiger  tiger
goose  goose  hawk   hawk   hawk   wolf   wolf   wolf   tiger  tiger
goose  owl    hawk   hawk   hawk   wolf   wolf   wolf   lion   lion
dove   owl    owl    hawk   hawk   dog    dog    dog    lion   lion
dove   dove   owl    owl    owl    dog    dog    dog    dog    lion
dove   dove   eagle  eagle  eagle  dog    dog    dog    dog    cat
hen    hen    eagle  eagle  eagle  fox    fox    fox    cat    cat
hen    hen    eagle  eagle  eagle  fox    fox    fox    cat    cat

This map shows the result of a simulated "electrode penetration mapping": each cell has been labeled with the animal name that is its best stimulus, i.e., that elicits the strongest response for that cell. The result is a contextual map in which similar animals occupy adjacent regions.

Literature

Further details can be found in:
- S. Haykin: Neural Networks and Learning Machines (Chapter 9), Prentice-Hall, 3rd edition, 2009.
- C. Borgelt, F. Klawonn, R. Kruse, D. Nauck: Neuro-Fuzzy-Systeme (Chapter 6), Vieweg Verlag, Wiesbaden, 2003 (in German).
- R. Rojas: Neural Networks - A Systematic Introduction (Chapter 15), Springer, Berlin, Germany, 1996.
- H. Ritter, T. Kohonen: Self-Organizing Semantic Maps, Biological Cybernetics, vol. 61, pp. 241-254, Springer, 1989.
- D. Kriesel: A Brief Introduction to Neural Networks (Chapter 10), http://www.dkriesel.com/en/science/neural_networks, 2005.