From extreme learning machines to reservoir computing: random projections and dynamics for motion learning
1 From extreme learning machines to reservoir computing: random projections and dynamics for motion learning. Jochen Steil, 2011, Tutorial at CBIC
2 Bielefeld University: the Research Institute for Cognition and Robotics (CoR-Lab) and the Excellence Cluster Cognitive Interaction Technology (CITEC) are two central scientific institutes and part of one of 4 strategic profile areas of the university. 5 of 13 departments cooperate (Biology, Physics, Linguistics, Psychology & Sports, Technology, Chemistry, Economics, History/Philosophy/Law, Mathematics, Education, Public Health, Sociology), with 2 special research units and 3 interdisciplinary graduate programs, industrial partners (Honda, Miele, Bertelsmann, OWL MASCHINENBAU, ...), > 350 researchers, ~10 million EUR funding/year, and EU projects: italk, ROBOTDOC, HUMAVIPS, MONARCA, EMiCab, AMARSi, Echord.
3 Research overview: dynamics & learning, stability theory of recurrent networks (DAAD fellowship Russia), reservoir dynamic networks, neural dynamic movement primitives, visual online learning, neural perceptual grouping, robot learning architecture, speech recognition, active vision + exploration, interaction, robot arm control, robot grasping, gesture recognition, object referencing, object recognition and pose, shared attention, hand tracking. References: Steil, Dissertation, 1999; Wersing, Steil, Ritter, Neural Computation, 2001; Steil, Neurocomputing, 2002; Steil, Götting, Wersing, Körner, Neurocomputing, 2002; Steil, Roethling, Haschke, Ritter, Robotics & Autonomous Systems, 2004; Steil, Neurocomputing, 2006; Weng, Wersing, Steil, Ritter, Trans. Neural Networks, 2006; Steil, Neural Networks, 2007; Denecke, Wersing, Steil, Körner, Neurocomputing, 2007; Reinhart, Steil, Humanoids 2009; Rolf, Steil, ICDL 2009; Lemme, Steil, ESANN, 2010; Neumann, Rolf, Steil, SAB 2010; Rolf, Steil, Trans. Autonomous Mental Development, 2010; Reinhart, Steil, Differential Equations & Dynamical Systems, 2010; Denecke et al., ESANN, 2010; Wischnewski et al., Cognitive Computation, 2010.
4 The multi-layer perceptron (MLP)
5 The standard neural learning approach for the multi-layer perceptron (MLP): minimize a (quadratic) error function
6 Compute the gradient ∇_W E by backpropagation: the output error is propagated backwards into local errors, and the weights of each layer are adapted accordingly
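For reference, the quantities on these two slides written out (a minimal sketch; the target notation q̂_i and the learning rate η are assumptions, the rest follows the slides):

```latex
E(W) \;=\; \frac{1}{2}\sum_{i=1}^{N}\big\lVert \hat{q}_i - q(u_i; W) \big\rVert^2,
\qquad
\Delta W \;=\; -\,\eta\,\nabla_W E ,
```

where backpropagation computes \nabla_W E layer by layer, turning the output error into local errors for the hidden layers.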
7 Backpropagation thus adapts the weights to build a task-specific hidden representation
8 Challenge: how to obtain a task-specific hidden representation? how to propagate errors? Novel approach: keep learning and the hidden state separate: a high-dimensional random projection provides the hidden representation, and only a linear output regression is learned
9 Architecture: random projection into a high-dimensional hidden layer, followed by a linear output regression
10
11 Outline Extreme Learning Machine (ELM) & Intrinsic Plasticity learn inverse kinematics ELM + Recurrence = Echo State, Reservoir Computing (RC) Associative Neural Learning (for kinematics) inverse kinematics + trajectory generation Programming Dynamics (shape attractor dynamics) learn and select multiple inverse kinematics solutions Learning velocity fields (movement primitives) Learning sequences of movement primitives
12
13 1990s: random projections for data processing: dimension reduction, separation properties are preserved, only linear projections used/analyzed; a very powerful tool, see e.g. Baraniuk & Wakin (2009). Now: random projections for dimension expansion (as with kernels), with non-linear transformations, yielding a universal feature set
14 ELM - Extreme Learning Machine (Huang 2006): a high-dimensional feedforward neural network with fixed random input weights and hidden neurons; learning happens only in the output layer, by regression (+ weight decay regularization)
15 Training: collect the hidden states H = (h_1, ..., h_{N_tr})^T for all N_tr training inputs (state harvesting), collect the targets Q̂ = (q̂_1, ..., q̂_{N_tr})^T, and do linear regression W_out = (H^T H + εI)^{-1} H^T Q̂. Regularization through ε -> model selection! (not critical/considered in Huang 2006 and other literature, because the data sets were very large)
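A minimal sketch of this training procedure in Python/NumPy (the layer sizes, the tanh nonlinearity, and the toy regression data are assumptions; the readout formula is the ridge regression from the slide):

```python
import numpy as np

rng = np.random.default_rng(0)

# random, fixed input weights and biases (never trained)
n_in, n_hidden, n_out = 3, 200, 2
W_in = rng.normal(scale=1.0, size=(n_hidden, n_in))
b = rng.uniform(-1.0, 1.0, size=n_hidden)

def harvest_states(U):
    """State harvesting: hidden activations for all training inputs."""
    return np.tanh(U @ W_in.T + b)          # shape (N, n_hidden)

# toy data: N training pairs (u, q_target)
U = rng.uniform(-1, 1, size=(500, n_in))
Q = np.stack([np.sin(U).sum(axis=1), np.cos(U).prod(axis=1)], axis=1)

# ridge regression readout: W_out = (H^T H + eps*I)^-1 H^T Q
H = harvest_states(U)
eps = 1e-3                                   # regularizer -> model selection
W_out = np.linalg.solve(H.T @ H + eps * np.eye(n_hidden), H.T @ Q)

Q_pred = harvest_states(U) @ W_out
print("training MSE:", np.mean((Q_pred - Q) ** 2))
```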
16 Some other issues: input scaling, overfitting, initializing biases and weights in the correct range; model selection is important
17 Idea: input-specific tuning of the features by adaptation of the nonlinear function. Optimizing Extreme Learning Machines via Ridge Regression and Batch Intrinsic Plasticity, Neumann, K., and J.J. Steil, Neurocomputing, to appear.
18 Intrinsic Plasticity: optimize the parameters of each single neuron! Use the parametrized Fermi function h(x, a, b) = 1 / (1 + exp(-a x - b)) and adjust the parameters a, b online; a batch algorithm is also available: Neumann & Steil, ICANN 2011. (IP Learning)
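A rough sketch of the batch variant of this idea (the exponential target distribution, sample sizes, and the exact fitting procedure here are assumptions; the cited papers give the actual algorithm): for each neuron, collect its synaptic inputs, draw desired outputs from a target distribution, and fit the slope a and bias b of the Fermi function by a small linear regression in the inverted nonlinearity.

```python
import numpy as np

rng = np.random.default_rng(1)

def fermi(x, a, b):
    # parametrized Fermi function h(x, a, b) = 1 / (1 + exp(-a*x - b))
    return 1.0 / (1.0 + np.exp(-a * x - b))

def batch_ip_neuron(s, mu=0.2, rng=rng):
    """Fit (a, b) of one neuron so that its outputs roughly follow an
    exponential target distribution with mean mu (assumed choice)."""
    s = np.sort(s)                                # synaptic inputs of this neuron
    t = np.sort(rng.exponential(mu, size=len(s))) # matched target quantiles
    t = np.clip(t, 1e-4, 1 - 1e-4)                # keep targets in (0, 1)
    y = np.log(t / (1.0 - t))                     # inverted Fermi: y = a*s + b
    A = np.stack([s, np.ones_like(s)], axis=1)
    a, b = np.linalg.lstsq(A, y, rcond=None)[0]
    return a, b

# usage: tune one neuron on its pre-activation samples
s = rng.normal(size=1000)                         # assumed input statistics
a, b = batch_ip_neuron(s)
print(a, b, fermi(s, a, b).mean())
```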
19 VIDEO Neumann, Emmerich, Steil, in preparation
20 Intrinsic Plasticity: input/task-specific scaling; the set of features becomes input specific and less diverse; IP acts as an input regularizer and reduces the dependence on #nodes and initialization parameters! (similar in spirit to the Gaussian kernel width in an SVM) Neumann, Emmerich, Steil, in preparation
21 ELM + Intrinsic Plasticity learn inverse kinematics (static, simple, here well defined) input position, output joint angles Neumann, K., and J.J. Steil, Neurocomputing, to appear. ELM + IP
22 ELM + Intrinsic Plasticity standard UCI tasks we (almost) always use IP! Neumann, K., and J.J. Steil, Neurocomputing, to appear
23 IP regularization works through reduced feature complexity -- too much IP produces degenerate features VIDEO
24 Outline Extreme Learning Machine (ELM) & Intrinsic Plasticity learn inverse kinematics ELM + Recurrence = Echo State, Reservoir Computing (RC) Associative Neural Learning (for kinematics) kinematics + trajectory generation Programming Dynamics (shape attractor dynamics) learn and select multiple inverse kinematics solutions Learning velocity fields (movement primitives) Learning sequences of movement primitives
25 ELM + Recurrence
26 ELM & Recurrence? In the ELM, processing is staged from left to right and the network is only used for feature generation; with recurrence, the network converges to an attractor. An attractor-based ELM = an Echo State network for static mappings. (ELM vs Echo State)
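A minimal sketch of the recurrent variant (an echo state network) in the same NumPy style; the spectral-radius scaling, sizes, and washout length are assumptions, and the readout is again the ridge regression from slide 15:

```python
import numpy as np

rng = np.random.default_rng(2)
n_in, n_res = 1, 100

W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))        # scale spectral radius < 1

def run_reservoir(u_seq):
    """Drive the reservoir with an input sequence and harvest its states."""
    x = np.zeros(n_res)
    states = []
    for u in u_seq:
        x = np.tanh(W @ x + W_in @ np.atleast_1d(u))
        states.append(x.copy())
    return np.array(states)

# toy task: one-step-ahead prediction of a sine wave
u_seq = np.sin(0.2 * np.arange(600))
X = run_reservoir(u_seq[:-1])[50:]               # drop a short washout
Y = u_seq[1:][50:]
eps = 1e-4
W_out = np.linalg.solve(X.T @ X + eps * np.eye(n_res), X.T @ Y)
print("train MSE:", np.mean((X @ W_out - Y) ** 2))
```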
27 What does recurrence add? Non-linear mixtures of features; it counteracts the regularization effect of IP. (ELM + Recurrence)
28 What does recurrence add? Non-linear mixtures of features; it counteracts the regularization effect of IP. VIDEO (ELM + Recurrence)
29 What does recurrence add? VIDEO (ELM + Recurrence)
30 What does recurrence add? It counteracts the regularization effect of IP; the combination is useful and reduces sensitivity to the model-selection parameters (hidden neurons, epsilon, input scaling). Test errors on the Abalone task from the UCI repository: ELM vs. IP + ESN. (ELM + Recurrence)
31 Digression: ELM + Intrinsic Plasticity + Recurrence = Reservoir Learning with IP (Steil@HRI, 2006). IP still works despite recurrence, is empirically very useful, and is a strong regularizer w.r.t. #nodes, connectivity pattern, sparseness, ... It achieves both lifetime and spatial sparseness! [Figures: percentage of active time steps per neuron output for Mackey-Glass and random input, without IP and after 20 epochs of IP.] Steil, 2007, Neural Networks. (Reservoir, IP, sparseness)
32 VIDEO Steil, 2007, Neural Networks Reservoir, IP, sparseness
33 Application: tool manipulation for ASIMO. Rolf & Steil, LAB-RS, 2010, best paper award
34 Application: manipulating a stick Neumann, Rolf & Steil, SAB 2010 TUM/DLR,
35 Positions of the hand center points while holding a stick. Neumann, Rolf & Steil, SAB 2010; TUM/DLR,
36 Outline Extreme Learning Machine (ELM) & Intrinsic Plasticity learn inverse kinematics ELM + Recurrence = Echo State, Reservoir Computing (RC) Associative Neural Learning (for kinematics) kinematics + trajectory generation Programming Dynamics (shape attractor dynamics) learn and select multiple inverse kinematics solutions Learning velocity fields (movement primitives) Learning sequences of movement primitives
37 Associative Neural Reservoir Learning
38 Associative Neural Recurrent Learning (ANRL): from staged processing to dynamics; finally add feedback. ELM based. (Associative Neural Reservoir Learning)
39 Associative Neural Recurrent Learning (ANRL): reservoir based. (Associative Neural Reservoir Learning)
40 Echo State Network vs Recurrent Dynamics. Echo State (staged): layered update (single pass)! It takes one time step from u to q: u -> h -> q; update u(k) -> h(k+1) and readout q(k+1) = W h(k+1). ANRL (recurrent): all nodes form one RNN! Synchronous update; it takes two time steps from u to q: u(k) -> h(k+1) -> q(k+2); it always needs prediction!
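Written out, the two update schedules contrasted here read roughly as follows (a sketch; the weight names W^inp, W, W^fb, W^out are assumptions carried over from the earlier slides):

```latex
\begin{aligned}
\text{Echo State (staged):}\quad & h(k+1) = f\big(W^{\mathrm{inp}} u(k) + W\, h(k)\big),
  & q(k+1) &= W^{\mathrm{out}} h(k+1)\\
\text{ANRL (recurrent):}\quad & h(k+1) = f\big(W^{\mathrm{inp}} u(k) + W\, h(k) + W^{\mathrm{fb}} q(k)\big),
  & q(k+2) &= W^{\mathrm{out}} h(k+1)
\end{aligned}
```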
41 Associative Neural Recurrent Learning (ANRL). Idea: data pairs (u, q) from some relation (function). "Recurrent neural associative learning of forward and inverse kinematics for movement generation of the redundant PA-10 robot", Reinhart & Steil, LAB-RS, 2008, best paper award
42 Associative Neural Reservoir Learning, basic ideas: (u, q) become inputs and outputs; store the data (u, q) in attractors; efficient learning is possible (regression/online as before). [Figure: reservoir h with inputs u(k), q(k) and predictions û(k), q̂(k).]
43 Associative Neural Reservoir Learning, basic ideas: generalization by associative attractor completion; choose between multiple solutions through feedback; trajectory generation by means of transients. [Figure: reservoir h(x, q̂) with clamped input x_u and the estimate q̂ fed back through a delay z^{-1}.]
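A rough sketch of the associative completion idea in the same NumPy style (the weight names, feedback scaling, and convergence test are assumptions; the actual networks and their training are described in the cited papers): the known part u is clamped, and the unknown part q is fed back from the readout through a one-step delay until the network settles on an attractor.

```python
import numpy as np

def associative_completion(u, h0, W, W_u, W_q, W_out_q, n_iter=200, tol=1e-6):
    """Clamp the input u, feed the estimate q_hat back through a delay,
    and iterate the reservoir until the output settles (attractor)."""
    h = h0.copy()
    q_hat = W_out_q @ h
    for _ in range(n_iter):
        h = np.tanh(W @ h + W_u @ u + W_q @ q_hat)   # recurrent update
        q_new = W_out_q @ h                          # readout of the missing part
        if np.linalg.norm(q_new - q_hat) < tol:      # settled on an attractor
            break
        q_hat = q_new
    return q_hat, h

# tiny usage with random (untrained) weights just to exercise the loop
rng = np.random.default_rng(3)
n_h, n_u, n_q = 50, 3, 4
q_hat, _ = associative_completion(
    u=rng.normal(size=n_u), h0=np.zeros(n_h),
    W=0.1 * rng.normal(size=(n_h, n_h)),
    W_u=rng.normal(size=(n_h, n_u)), W_q=rng.normal(size=(n_h, n_q)),
    W_out_q=rng.normal(size=(n_q, n_h)))
```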
44 Our application: kinematics. Forward kinematics FK: u = FK(q), uniquely defined (many-to-one); u (hand position), q (arm angles)
45 Kinematics: inverse kinematics IK: q = IK(u), not uniquely defined (one-to-many); q (arm angles), u (position); e.g. elbow up vs. elbow down; selection of a solution = redundancy resolution
46 Example: redundancy resolution: elbow down elbow up
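As a concrete (non-neural) illustration of this one-to-many structure, a planar 2-link arm with link lengths l1, l2 (values assumed here) has a closed-form forward kinematics and exactly two inverse solutions, elbow-down and elbow-up:

```python
import numpy as np

l1, l2 = 1.0, 0.8              # assumed link lengths

def fk(q):
    """Forward kinematics u = FK(q): unique hand position for joint angles q."""
    q1, q2 = q
    return np.array([l1 * np.cos(q1) + l2 * np.cos(q1 + q2),
                     l1 * np.sin(q1) + l2 * np.sin(q1 + q2)])

def ik(u):
    """Inverse kinematics q = IK(u): two solutions (elbow down / elbow up)."""
    x, y = u
    c2 = (x**2 + y**2 - l1**2 - l2**2) / (2 * l1 * l2)
    c2 = np.clip(c2, -1.0, 1.0)
    solutions = []
    for q2 in (np.arccos(c2), -np.arccos(c2)):
        q1 = np.arctan2(y, x) - np.arctan2(l2 * np.sin(q2), l1 + l2 * np.cos(q2))
        solutions.append(np.array([q1, q2]))
    return solutions            # redundancy resolution = picking one of these

for q in ik(np.array([1.2, 0.6])):
    print(q, "->", fk(q))       # both map back to (almost) the same position
```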
47 The association is based on data (not functions): G = {(u^i, q^i)}, i = 1...N; read out the forward kinematics, read out the inverse kinematics
48 Inverse kinematics (VIDEO: example-arm-manifolds.mpeg). Rolf, Steil, Gienger, IEEE TAMD, 2010: Goal Babbling permits direct learning of inverse kinematics
49 Association by storing pairs (u, q) via a SOM: u = fingertip position, q sampled on a 10x10x10 grid. Walter & Ritter,
50 - (u,q) association by concatenation - train (P)SOM network - read out both ways and get FK and IK! Walter & Ritter,
51 Barhen, Gulati, Zak, Intelligent robots and computer vision, 1989
52 ANRL for kinematics (iCub arm): include a forward model; simultaneous learning of both models. [Figures: inverse kinematics (training data vs. network response, joint angles [rad] over time step k) and forward kinematics / sensory prediction (u_1, u_2 [m]).]
53 Online learning: train from trajectories; the state in the reservoir is useful for temporal integration. Limit case: attractor with u* = FK(q*), q* = IK(u*)
54 Task: move hands. Components: trajectory representation, inverse kinematics, robot control. Our approach: learn forward and inverse kinematics in an associative reservoir network. Generalization: a) set a new target - compute the joint angles
55 Evaluation & Generalization (iCub 7-DOF arm). [Figures: joint angles [rad] and error E [m] over time step k, projection to y-z (along a spiral), regularization by learning over the number of movements, probability that E < value.] Excellent generalization & graceful degradation; a continuous shift of the attractor is possible. Reinhart & Steil, IEEE Conf. Humanoids, 2009
56 Interactive learning of redundancy resolution by ANRL 2010, (Lemme, Rüther, Nordmann, Wrede, Steil, Weirich, Johannfunke)
57 VIDEO
58 Task: move hands. Components: trajectory representation, inverse kinematics, robot control. Our approach: learn forward and inverse kinematics in an associative reservoir network; trajectory generation. Generalization: a) set a new target - compute the joint angles; b) movement generation by iteration toward the target attractor
59 cope with feedback: internal simulation, sensory prediction Reinhart & Steil, IEEE Conf. Humanoids, 2009
60 Movement generation by attractor dynamics controller setting autonomous operation by sensory prediction Reinhart & Steil, IEEE Conf. Humanoids, 2009
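A rough sketch of how such a reaching movement could be generated once the associative network is trained (the weight names and loop structure are assumptions in the same style as the earlier sketches): the target position is clamped, and the transient of the joint-angle readout while the network relaxes toward its attractor is the generated movement.

```python
import numpy as np

def generate_movement(u_target, h0, W, W_u, W_q, W_out_q, n_steps=100):
    """Clamp the target position u_target and record the transient of the
    joint-angle readout while the network relaxes toward its attractor;
    the transient itself is the generated reaching movement."""
    h = h0.copy()
    q_hat = W_out_q @ h
    trajectory = []
    for _ in range(n_steps):
        h = np.tanh(W @ h + W_u @ u_target + W_q @ q_hat)
        q_hat = W_out_q @ h
        trajectory.append(q_hat.copy())    # joint command sent to the robot
    return np.array(trajectory)
```

In the controller setting, the fed-back q_hat could be replaced by the measured joint angles, and in autonomous operation by the network's own sensory prediction, as described on the slides above; how exactly these signals are mixed is a design choice of the cited work, not of this sketch.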
61 Movement generation by attractor dynamics (iCub arm). [Figures: generated movements from start to targets in u_1/u_2/u_3 [m]; home-to-target generalization and start-to-home generalization.] Reinhart & Steil, IEEE Conf. Humanoids, 2009
62 Robustness by attractor dynamics. [Figure: generated movements from start to target under perturbation, in u_1/u_2 [m].] Reinhart & Steil, IEEE Conf. Humanoids, 2009
63 Analysis: speed profiles w.r.t. t. [Figure: velocity [m/time step] over time step k for t = 0.02, 0.04, 0.08.]
64 [Figure: maximal velocity [m/iteration] vs. target distance [m] for t = 0.02, 0.04, 0.06, 0.08.]
65 Outline Extreme Learning Machine (ELM) & Intrinsic Plasticity learn inverse kinematics ELM + Recurrence = Echo State, Reservoir Computing (RC) Associative Neural Learning (for kinematics) kinematics + trajectory generation Programming Dynamics (shape attractor dynamics) learn and select multiple inverse kinematics solutions Learning velocity fields (movement primitives) Learning sequences of movement primitives
66 Output feedback & programming dynamics. Problem: how to shape the feedback loop? More generally: how to imprint arbitrary dynamics? how to generalize and stabilize? [Figure: reservoir h(x, q̂) with clamped input x_u and the estimate q̂ fed back through a delay z^{-1}.]
67 Programming Dynamics: Example 2-D network + 1-D input to parametrize attractor Reinhart & Steil, ICANN 2011
68 Programming dynamics: directly shape the network transients to the attractor. ELM + feedback or a fully recurrent reservoir; batch learning (improved by reservoir regularization, see the references below). Approach: generate a sequence of desired states, obtain the weights through linear regression, and use smart sampling of the state trajectories (sketched below). Reinhart & Steil, ICANN, 2011; Reinhart & Steil, Differential Equations and Dynamical Systems, 2010; Reinhart & Steil, Humanoids 2011; Reinhart, PhD thesis, 2011
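A rough sketch of the "generate desired states, then regress" idea (the state parametrization and the use of arctanh to linearize the regression are assumptions of this sketch; the cited papers define the actual construction and its regularization):

```python
import numpy as np

def program_dynamics(X_des, U, eps=1e-3):
    """Given a desired state sequence X_des (rows x*(k)) and inputs U (rows u(k)),
    solve for recurrent/input weights so that
    tanh(W_rec x*(k) + W_in u(k)) ~= x*(k+1) via one linear regression."""
    Z = np.hstack([X_des[:-1], U[:-1]])                  # regressors [x*(k), u(k)]
    T = np.arctanh(np.clip(X_des[1:], -0.999, 0.999))    # linearized targets
    W = np.linalg.solve(Z.T @ Z + eps * np.eye(Z.shape[1]), Z.T @ T).T
    n_x = X_des.shape[1]
    return W[:, :n_x], W[:, n_x:]                        # W_rec, W_in

# usage: imprint a slow decay toward an attractor point (toy desired states)
k = np.arange(200)[:, None]
X_des = 0.8 * np.exp(-0.02 * k) * np.array([[1.0, -0.5]])
U = np.ones((200, 1))                                    # constant context input
W_rec, W_in = program_dynamics(X_des, U)
```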
69 Programming multiple redundancy resolutions: generate trajectories toward attractor points (x_1, q_1), (x_2, q_2), (x_3, q_3), ... [Figure: trajectories q(t) converging to different joint-space solutions for x_1 = x_2 and x_3 = x_4.] Reinhart & Steil, Humanoids 2011
70 Example: iCub right arm positioning. Network setup: input: 4 DOF joint angles and 3 wrist coordinates; training data: a systematic sample (including redundancies!) of iCub arm/hand positions. [Figure: network with input weights W^inp_x, W^inp_q, reservoir h(x, q), and readouts W^out_x, W^out_q producing x̂ and q̂.]
71 Example: iCub right arm positioning: associative completion through dynamics. [Figure: clamped wrist input x_u drives the reservoir h(x_u, q̂); the estimate q̂ is fed back through a delay z^{-1}.]
72 Example: iCub right arm positioning: in the control loop! [Figure: the readout q̂ drives the robot with time constant τ, and the resulting state feeds back into the reservoir h(x, q).]
73 Example: iCub right arm positioning: dynamical selection of solutions. [Figure: joint angles q_1..q_4 [deg] over time step k; a perturbation switches between redundant solutions.]
74 Example: iCub right arm positioning: mixed constraints are also possible!
75 Outline Extreme Learning Machine (ELM) & Intrinsic Plasticity learn inverse kinematics ELM + Recurrence = Echo State, Reservoir Computing (RC) Associative Neural Learning (for kinematics) kinematics + trajectory generation Programming Dynamics (shape attractor dynamics) learn and select multiple inverse kinematics solutions Learning velocity fields (movement primitives) Learning sequences of movement primitives
76 Learning velocity fields (movement primitives). [Figure: network with velocity and confidence outputs.] (Movement Primitives)
77 Learning velocity fields (movement primitives). [Figures: simulation results with target, demonstrations, and reproductions (ẋ, ẏ [m/s] over x, y [m]); training data: human movements.] (Movement Primitives)
78 Learning velocity fields (movement primitives). [Figures: confidence map and reproduction over training data and default data in the x-y plane [dm].] (Movement Primitives)
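A minimal sketch of learning such a velocity field with an ELM-style readout and reproducing a movement by integrating it (the demonstration data, network sizes, and Euler integration step are assumptions; the confidence output shown on the slides is omitted here):

```python
import numpy as np

rng = np.random.default_rng(4)
n_hidden = 150
W_in = rng.normal(size=(n_hidden, 2))
b = rng.uniform(-1, 1, size=n_hidden)

def feat(X):
    return np.tanh(X @ W_in.T + b)            # random ELM features of position

# toy demonstration: positions X and velocities V of a curved reach toward (1, 1)
t = np.linspace(0, 1, 300)[:, None]
X = np.hstack([t, t**2])                      # assumed demonstrated path
V = np.hstack([np.ones_like(t), 2 * t])       # its analytic velocities

# ELM readout: velocity field v(x) = h(x) W_out, ridge regression as before
H = feat(X)
W_out = np.linalg.solve(H.T @ H + 1e-3 * np.eye(n_hidden), H.T @ V)

# reproduce a movement primitive by Euler integration of the learned field
x, dt, path = np.array([0.0, 0.0]), 0.01, []
for _ in range(120):
    x = x + dt * (feat(x[None]) @ W_out)[0]
    path.append(x.copy())
print("end point:", path[-1])                 # should head toward (1, 1)
```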
79 Generalization with confidence Movement Primitives
80 Learning velocity fields (movement primitives) VIDEO Movement Primitives
81 Outlook: neural dynamic motion primitives; integration of du/dt as in standard motion generation via DMPs; allows learning trajectory shapes (AMARSi D4.1 update)
82 Outline Extreme Learning Machine (ELM) & Intrinsic Plasticity learn inverse kinematics ELM + Recurrence = Echo State, Reservoir Computing (RC) Associative Neural Learning (for kinematics) kinematics + trajectory generation Programming Dynamics (shape attractor dynamics) learn and select multiple inverse kinematics solutions Learning velocity fields (movement primitives) Learning sequences of movement primitives
83 Sequencing of movement primitives: train the sequencer outputs (relative start coordinates, type of movement) for a constant value and the velocity output for a zero value; switch the feedback off while running. Lemme, work in progress, unpublished. (Movement Primitives)
84 Sequencing of movement primitives Movement Primitives
85 Sequencing of movement primitives Movement Primitives
86 Summary: Extreme Learning Machine (ELM) & Intrinsic Plasticity learn inverse kinematics ELM + Recurrence = Echo State, Reservoir Computing (RC) Associative Neural Learning (for kinematics) kinematics + trajectory generation Programming Dynamics (shape attractor dynamics) learn and select multiple inverse kinematics solutions Learning velocity fields (movement primitives) Learning sequences of movement primitives Movement Primitives
87 Current work: associate visual feedback audio-visuo-motor patterns combine trajectory generation and inverse kinematics provide training data in autonomous exploration use platforms you can not model! VIDEO
88 R. F. Reinhart, Associative Learning, Programming Dynamics, icub, PhD Student M. Rolf, Motion Learning on ASIMO, Goal Babbling, PhD Student K. Neumann, Bimanual Motion Learning on ASIMO, Intrinsic Plasticity A. Lemme, Sequencing of Motion Primitives FlexIRob: A. Nordmann A. Lemme S. Rüther A. Weirich M. Johannfunke Dr. S. Wrede S. Krüger M. Götting Cognitive Systems Engineering: System Integration, ASIMO, icub, Kuka support
89 Publications Regularization and stability in reservoir networks with output feedback. R. F. Reinhart and J.J. Steil, Neurocomputing, conditionally accepted Optimizing Extreme Learning Machines via Ridge Regression and Batch Intrinsic Plasticity Neumann, K., and J.J. Steil, Neurocomputing, conditionally accepted Batch intrinsic plasticity for extreme learning machines. K. Neumann and J.J. Steil. ICANN, pages , State prediction: A constructive method to program recurrent neural networks. R. F. Reinhart and J.J. Steil. ICANN, pages , Neural learning and dynamical selection of redundant solutions for inverse kinematic control. R. F. Reinhart and J.J. Steil, IEEE Humanoids, 2011 A constrained regularization approach for input-driven recurrent neural networks. R. F. Reinhart and J.J. Steil. Differential Equations and Dynamical Systems, 19:27 46, Reservoir regularization stabilizes learning of Echo State Networks with output feedback. R. Felix Reinhart and J.J. Steil, ESANN, pp , 2011 Teaching and Learning Redundancy Resolution for Autonomous Generation of Flexible Robot Movements. S. Wrede, M. Johannfunke, A. Lemme, A. Nordmann, S. Rüther, A. Weirich, J.J. Steil, Workshop Computational Intelligence, GMA-FA 5.14, Dortmund, 2010 Learning Flexible Full Body Kinematics for Humanoid Tool Use. M. Rolf, J.J. Steil and M. Gienger, Int. Symp. Learning and Adaptive Behavior in Robotic Systems, 2010 Learning Inverse Kinematics for Pose-Constraint Bi-Manual Movements. K. Neumann, M. Rolf, J.J. Steil and M. Gienger, Int. Conf. Simulation of Adaptive Behavior, 2010 Recurrence enhances the spatial encoding of static inputs in reservoir networks. C. Emmerich, F. R. Reinhart, and J. J. Steil. ICANN, pp , Attractor-based computation with reservoirs for online learning of inverse kinematics. R. F. Reinhart, and J.J. Steil, Proc. ESANN, pp , 2009
90 Efficient exploration and learning of whole body kinematics. M. Rolf, J.J. Steil, and M. Gienger, Proc. Int. Conf. Developmental Learning, 2009 Reaching movement generation with a recurrent neural network based on learning inverse kinematics. R. F. Reinhart and J.J. Steil. IEEE Conf. Humanoid Robotics, pages , Recurrent neural associative learning of forward and inverse kinematics for movement generation of the redundant PA-10 robot. R. F. Reinhart and J.J. Steil. Learning Adaptive Behavior in Robotic Systems, pp , Improving reservoirs using intrinsic plasticity. Schrauwen B., Wardermann M., Verstraeten D., Steil J.J., Stroobandt D., Neurocomputing, pp , 2008 Online reservoir adaptation by intrinsic plasticity for backpropagation-decorrelation and echo state learning. J.J. Steil. Neural Networks, 20(3): , Online stability of backpropagation-decorrelation recurrent learning. J.J. Steil, Neurocomputing, vol. 69(7-9), pp , 2006 (some, few) related publications: A. Lemme, R. F. Reinhart, and J. J. Steil. Efficient online learning of a non-negative sparse autoencoder. ESANN, pages 1-6, Butko and Triesch. Exploring the Role of Intrinsic Plasticity for the Learning of Sensory Representations. Neurocomputing, 70(7-9): , Lukoševičius and Jaeger, Reservoir computing approaches to recurrent neural network training, Jaeger, The "echo state" approach to analyzing and training recurrent neural networks, Baraniuk, R., Wakin, M., Random Projections of Smooth Manifolds, Foundations of Computational Mathematics, vol. 9, no. 1, 51-77, 2009 G.-B. Huang, Q.-Y. Zhu, and C.-K. Siew, Extreme learning machine: Theory and applications, Neurocomputing, vol. 70, no. 1-3, pp , G.-B. Huang, L. Chen, and C.-K. Siew, Universal approximation using incremental constructive feedforward networks with random hidden nodes, IEEE Transactions on Neural Networks, vol. 17, no. 4, pp. 879-892, 2006.
91 Thank you for your attention! more information: