Taking Inverse Graphics Seriously
1 CSC2535: 2013 Advanced Machine Learning Taking Inverse Graphics Seriously Geoffrey Hinton Department of Computer Science University of Toronto
2 The representation used by the neural nets that work best for recognition (Yann LeCun) Convolutional neural nets use multiple layers of feature detectors that have local receptive fields and shared weights. The feature extraction layers are interleaved with sub-sampling layers that throw away information about precise position in order to achieve some translation invariance.
3 Why convolutional neural networks are doomed Convolutional nets are doomed for two reasons: Sub-sampling loses the precise spatial relationships between higher-level parts such as a nose and a mouth. The precise spatial relationships are needed for identity recognition. (Overlapping the sub-sampling pools helps a bit.) They cannot extrapolate their understanding of geometric relationships to radically new viewpoints.
4 Equivariance vs Invariance Sub-sampling tries to make the neural activities invariant to small changes in viewpoint. This is a silly goal, motivated by the fact that the final label needs to be viewpoint-invariant. It's better to aim for equivariance: changes in viewpoint lead to corresponding changes in neural activities. In the perceptual system, it's the weights that code viewpoint-invariant knowledge, not the neural activities.
5 What is the right representation of images? Computer vision is inverse computer graphics, so the higher levels of a vision system should look like the representations used in graphics. Graphics programs use hierarchical models in which spatial structure is modeled by matrices that represent the transformation from a coordinate frame embedded in the whole to a coordinate frame embedded in each part. These matrices are totally viewpoint-invariant. This representation makes it easy to compute the relationship between a part and the retina from the relationship between a whole and the retina. It's just a matrix multiply!
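As a concrete illustration of that matrix multiply, here is a minimal numpy sketch (the pose() helper, the 2-D homogeneous matrices, and the particular numbers are invented for the example, not taken from the slides):

    import numpy as np

    def pose(tx, ty, theta, s=1.0):
        """3x3 homogeneous matrix: scale by s, rotate by theta, translate by (tx, ty)."""
        c, s_ = np.cos(theta) * s, np.sin(theta) * s
        return np.array([[c, -s_, tx],
                         [s_,  c,  ty],
                         [0.0, 0.0, 1.0]])

    # Viewpoint-invariant knowledge stored in the "weights":
    # the fixed relationship between the whole (face) and one part (mouth).
    T_face_to_mouth = pose(tx=0.0, ty=-0.3, theta=0.0, s=0.4)

    # Viewpoint-dependent pose of the whole relative to the retina/camera.
    T_retina_to_face = pose(tx=2.0, ty=1.0, theta=np.pi / 6)

    # The pose of the part relative to the retina is just one matrix multiply.
    T_retina_to_mouth = T_retina_to_face @ T_face_to_mouth
    print(T_retina_to_mouth)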
6 Some psychological evidence that our visual systems impose coordinate frames in order to represent shapes (after Irvin Rock)
7 The cube task You can all imagine a wire-frame cube. Now imagine the cube standing on one corner so that the body-diagonal from that corner, through the center of the cube, to the opposite corner is vertical. Where are the other corners of the cube? You cannot do this task because I forced you to use a coordinate system that is not your usual way of understanding a cube.
8 Mental rotation: More evidence for coordinate frames We perform mental rotation to decide if the tilted R has the correct handedness, not to recognize that it is an R. But why do we need to do mental rotation to decide handedness?
9 What mental rotation achieves It is very difficult to compute the handedness of a coordinate transformation by looking at the individual elements of the matrix. The handedness is the sign of the determinant of the matrix relating the object to the viewer. This is a high-order parity problem. To avoid this computation, we rotate the object to upright, preserving handedness, and then look to see which way it faces when it is in its familiar orientation. If we had individual neurons that represented a whole pose, we would not have a problem with identifying handedness, because these neurons would have a handedness.
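A small numpy illustration of the determinant point (my own 2-D example, not from the slides): a reflection flips the sign of the determinant while any rotation keeps it positive, yet no individual matrix element reveals this.

    import numpy as np

    def handedness(T):
        """+1 for a proper (handedness-preserving) pose, -1 for a mirrored one."""
        return np.sign(np.linalg.det(T[:2, :2]))   # linear part of a 2-D pose

    theta = np.deg2rad(137.0)
    rot = np.array([[np.cos(theta), -np.sin(theta), 0.3],
                    [np.sin(theta),  np.cos(theta), 1.2],
                    [0.0,            0.0,           1.0]])
    mirror = rot @ np.diag([-1.0, 1.0, 1.0])       # the same pose, reflected left-right

    print(handedness(rot))     # +1.0: a rotated R is still an R
    print(handedness(mirror))  # -1.0: a mirror-image R, whatever its orientation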
10 Two layers in a hierarchy of parts A higher level visual entity is present if several lower level visual entities can agree on their predictions for its pose. [Figure: the poses p_i and p_h of the mouth and nose (i.e. their relationship to the camera, T_i and T_h) each predict the pose p_j of the face, T_j, through the part-whole matrices T_ij and T_hj.]
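A hedged numpy sketch of the agreement test (the pose values and the threshold are invented; this only illustrates the idea that the votes coincide when the parts really form a face):

    import numpy as np

    def pose(tx, ty, theta):
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s, tx], [s, c, ty], [0.0, 0.0, 1.0]])

    # Fixed part-to-whole matrices: the viewpoint-invariant "weights".
    T_ij = pose(0.0,  0.3, 0.0)   # mouth -> face (hypothetical values)
    T_hj = pose(0.0, -0.2, 0.0)   # nose  -> face

    # Generate a consistent image: pick a face pose, derive the part poses from it.
    T_j_true = pose(1.0, 0.7, 0.3)                 # pose of the face on the retina
    T_i = T_j_true @ np.linalg.inv(T_ij)           # pose the mouth must then have
    T_h = T_j_true @ np.linalg.inv(T_hj)           # pose the nose must then have

    # Each part votes for the pose of the face: T_j should equal T_i T_ij = T_h T_hj.
    vote_from_mouth = T_i @ T_ij
    vote_from_nose  = T_h @ T_hj
    disagreement = np.linalg.norm(vote_from_mouth - vote_from_nose)
    print(disagreement < 0.1)   # True: the parts agree, so the face is present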
11 A crucial property of the pose vectors They allow spatial transformations to be modeled by linear operations. This makes it easy to learn a hierarchy of visual entities. It makes it easy to generalize across viewpoints. The invariant geometric properties of a shape are in the weights, not in the activities. The activities are equivariant.
12 Extrapolating shape recognition to very different sizes, orientations and positions Current wisdom: Learn different models for different viewpoints. This requires a lot of training data. A much more ambitious approach: The manifold of images of the same rigid shape is highly non-linear in the space of pixel intensities. Transform to a space in which the manifold is globally linear. This allows massive extrapolation.
13 A toy example of what we could do if we could reliably extract the poses of parts of objects Consider images composed of five ellipses. The shape is determined entirely by the spatial relations between the ellipses because all parts have the same shape. Can we extract sets of spatial relationships from data that contains several different shapes? Can we generalize far beyond the range of variation in the training examples? Can we learn without being told which ellipse corresponds to which part of a shape?
14 Examples of two shapes (Yichuan Tang) [Figure: examples of training data for the two shapes, and generated data after training. It can extrapolate!]
15 Learning one shape using factor analysis The pose of the whole can predict the poses of all the parts. This generative model is easy to fit using the EM algorithm for factor analysis. [Figure: the pose T_j of the face generates the poses T_i of the mouth and T_h of the nose; the matrices T_ij and T_hj play the role of the factor loadings.]
16 Learning the right assignments of the ellipses to the parts of a shape If we do not know how to assign the ellipses to the parts of a known shape, we can try many different assignments and pick the one that gives the best reconstruction. Then we assume that is the correct assignment and use it for learning. This quickly leads to models that have strong opinions about the correct assignment. We could eliminate most of the search by using a feed-forward neural net to predict the assignments.
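A hedged sketch of learning one shape, using scikit-learn's FactorAnalysis as a stand-in for the EM fit (the synthetic data generation is my own: the latent factors play the role of the pose of the whole, and the loadings play the role of the T_ij matrices):

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(0)

    # Each training case: a random pose of the whole (tx, ty, angle) ...
    whole_pose = rng.normal(size=(5000, 3))
    # ... linearly generates the (x, y) poses of five ellipses, plus a little noise.
    true_loadings = rng.normal(size=(3, 10))   # plays the role of the fixed T_ij's
    part_poses = whole_pose @ true_loadings + 0.05 * rng.normal(size=(5000, 10))

    # EM for factor analysis recovers a 3-factor generative model of the part poses.
    fa = FactorAnalysis(n_components=3).fit(part_poses)
    print(fa.components_.shape)   # (3, 10): learned loadings
    print(fa.score(part_poses))   # average log-likelihood of the training data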
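A small sketch of the assignment search (numpy; the shape model and the reconstruction-error measure are placeholders I invented): try every permutation of ellipses to parts and keep the one the current model reconstructs best.

    import numpy as np
    from itertools import permutations

    def reconstruction_error(poses_in_part_order, model):
        """Placeholder error: project the flattened poses onto the shape model's
        factor subspace and measure what is left over."""
        v = poses_in_part_order.ravel() - model["mean"]
        return float(np.sum((v - v @ model["proj"]) ** 2))

    def best_assignment(ellipse_poses, model):
        """Search over all ways of assigning the ellipses to the parts of the shape."""
        return min(permutations(range(len(ellipse_poses))),
                   key=lambda p: reconstruction_error(ellipse_poses[list(p)], model))

    # Toy usage with an invented 2-part model and 2 ellipses, 3 pose numbers each.
    model = {"mean": np.zeros(6), "proj": np.eye(6) * 0.9}
    ellipses = np.random.randn(2, 3)
    print(best_assignment(ellipses, model))   # e.g. (0, 1) or (1, 0)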
17 Fitting a mixture of shapes We can use a mixture of factor analysers (MFA) to fit images that contain many different shapes so long as each image only contains one example of one shape.
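scikit-learn has no mixture of factor analysers, so here is a crude hard-assignment stand-in I wrote for illustration (full MFA uses soft responsibilities in EM; function and variable names are my own):

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    def fit_mfa_hard(X, n_shapes=2, n_factors=3, n_iters=10, seed=0):
        """Alternate between hard-assigning each case to the analyser that scores it
        best and refitting each analyser on its cases."""
        rng = np.random.default_rng(seed)
        labels = rng.integers(n_shapes, size=len(X))
        models = [FactorAnalysis(n_components=n_factors) for _ in range(n_shapes)]
        for _ in range(n_iters):
            for k, m in enumerate(models):
                if np.sum(labels == k) > n_factors:   # skip an analyser that lost its cases
                    m.fit(X[labels == k])
            scores = np.stack([m.score_samples(X) for m in models], axis=1)
            labels = scores.argmax(axis=1)
        return models, labels

    # Hypothetical usage on the concatenated capsule outputs of many images:
    # models, labels = fit_mfa_hard(capsule_outputs, n_shapes=2)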
18 Recognizing deformed shapes with MFA
19 A big problem How do we get from pixels to the first level parts that output explicit pose parameters? We do not have any labeled data. This de-rendering stage has to be very non-linear.
20 A picture of three capsules [Figure: three capsules, each receiving the input image and outputting an intensity i (the intensity of the capsule's visual entity) and position coordinates x, y.]
21 Extracting pose information by using a domain specific decoder (Navdeep Jaitly & Tijmen Tieleman) The idea is to define a simple decoder that produces an image by adding together contributions from each capsule. Each capsule learns a fixed template that gets intensity-scaled and translated differently for reconstructing each image. The encoder must learn to extract the appropriate intensity and translation for each capsule from the input image.
22 [Figure: the decoder adds together intensity-scaled and translated contributions from each capsule, one learned template per capsule scaled by i and shifted by x, y, to form the output image. The encoder is trained using backpropagation.]
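A minimal numpy sketch of that decoder (my own reading of the slide: integer-pixel shifts via np.roll for simplicity, whereas the real model uses continuous translations so everything stays differentiable; image sizes and values are invented):

    import numpy as np

    def decode(templates, intensities, shifts):
        """Translate each capsule's learned template by its (dx, dy), scale it by its
        intensity, and sum the contributions into one output image."""
        out = np.zeros_like(templates[0])
        for tmpl, i, (dx, dy) in zip(templates, intensities, shifts):
            out += i * np.roll(tmpl, shift=(dy, dx), axis=(0, 1))
        return out

    # Three 28x28 capsule templates, each placed with its own intensity and shift.
    templates = [np.random.rand(28, 28) for _ in range(3)]
    image = decode(templates, intensities=[1.0, 0.4, 0.0], shifts=[(3, -2), (0, 5), (0, 0)])
    print(image.shape)   # (28, 28)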
23 The templates learned by the autoencoder Each template is multiplied by a case-specific intensity and translated by a case-specific Δx, Δy. Then it is added to the output image.
24 [Figure: for each example, the input image, its reconstruction, the reconstruction error, and the individual contributions of capsules 1-4.]
25 Modeling the capsule outputs with a factor analyser [Figure: 10 factors generate the concatenated capsule outputs i1 x1 y1, i2 x2 y2, i3 x3 y3, i4 x4 y4, ...] We learn a mixture of 10 factor analysers on unlabeled data. What do the means look like?
26 Means discovered by a mixture of factor analysers [Figure: mean - 2σ and mean + 2σ of each factor, for a mixture fit directly on pixels versus one fit on the outputs of the capsules.]
27 [No text on this slide.]
28 Another simple way to learn the lowest-level parts Use pairs of images that are related by a known coordinate transformation, e.g. a small translation of the image. We often have non-visual access to image transformations, e.g. when we make an eye-movement. Cats learn to see much more easily if they control the image transformations (Held & Hein).
29 Learning the lowest-level capsules We are given a pair of images related by a known translation. Step 1: Compute the capsule outputs for the first image. Each capsule uses its own set of recognition hidden units to extract the x and y coordinates of the visual entity it represents (and also the probability of existence). Step 2: Apply the transformation to the outputs of each capsule: just add Δx to each x output and Δy to each y output. Step 3: Predict the transformed image from the transformed outputs of the capsules. Each capsule uses its own set of generative hidden units to compute its contribution to the prediction.
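A hedged PyTorch sketch of those three steps (layer sizes, nonlinearities, and the single-hidden-layer design are my assumptions; only translation capsules, as on the slide):

    import torch
    import torch.nn as nn

    class TranslationCapsule(nn.Module):
        def __init__(self, image_dim=784, n_hidden=20):
            super().__init__()
            self.recognise = nn.Sequential(nn.Linear(image_dim, n_hidden), nn.Sigmoid())
            self.xy = nn.Linear(n_hidden, 2)                              # x, y of the entity
            self.prob = nn.Sequential(nn.Linear(n_hidden, 1), nn.Sigmoid())  # presence p
            self.generate = nn.Sequential(nn.Linear(2, n_hidden), nn.Sigmoid(),
                                          nn.Linear(n_hidden, image_dim))

        def forward(self, image, dxdy):
            h = self.recognise(image)                 # step 1: recognition hidden units
            xy, p = self.xy(h), self.prob(h)
            contribution = self.generate(xy + dxdy)   # steps 2-3: shift, then generate
            return p * contribution                   # gated by probability of presence

    class TransformingAutoencoder(nn.Module):
        def __init__(self, n_capsules=30, image_dim=784):
            super().__init__()
            self.capsules = nn.ModuleList(TranslationCapsule(image_dim)
                                          for _ in range(n_capsules))

        def forward(self, image, dxdy):
            # The predicted transformed image is the sum of the capsules' contributions.
            return sum(c(image, dxdy) for c in self.capsules)

    # One training step would minimise the error to the actually shifted image:
    # loss = ((model(image, dxdy) - shifted_image) ** 2).mean()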
30 [Figure: each capsule's recognition units extract p, x, y from the input image; the known Δx and Δy are added to x and y; the generation units, gated by p (the probability that the capsule's visual entity is present), produce the actual output, which is trained to match the target output.]
31 Why it has to work When the net is trained with back-propagation, the only way it can get the transformations right is by using x and y in a way that is consistent with the way we are using Δx and Δy. This allows us to force the capsules to extract the coordinates of visual entities without having to decide what the entities are or where they are.
32 [Figure: the same architecture with the gating made explicit: each capsule extracts p (the probability that the feature is present) and a pose x from the input image, and its gated contribution is added into the output image.]
33 Relationship to a Kalman filter A linear dynamical system can predict the next observation vector, but only when there is a linear relationship between the underlying dynamics and the observations. The extended Kalman filter assumes linearity about the current operating point. It's a fudge. Transforming autoencoders use non-linear recognition units to map the observation space to the space in which the dynamics is linear. Then they use non-linear generation units to map the prediction back to observation space. This is a much better approach than the extended Kalman filter, especially when the dynamics is known.
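A schematic numpy sketch of the contrast (the encoder/decoder here are made-up stand-ins for the recognition and generation units): the prediction is made with an exactly linear step in the learned space, rather than with a local linearisation in observation space.

    import numpy as np

    def predict_next_observation(x_t, A, encode, decode):
        """Map the observation into the space where the known dynamics A is exactly
        linear, step forward there, and map back. (An extended Kalman filter would
        instead linearise around the current operating point.)"""
        z_t = encode(x_t)          # non-linear recognition units
        z_next = A @ z_t           # the dynamics really is linear in this space
        return decode(z_next)      # non-linear generation units

    # Toy usage with hand-picked functions; in the real model these are learned.
    A = np.array([[1.0, 0.1], [0.0, 1.0]])               # known linear dynamics
    encode = lambda x: np.arctanh(np.clip(x, -0.99, 0.99))
    decode = np.tanh
    print(predict_next_observation(np.array([0.2, -0.1]), A, encode, decode))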
34 Relationship to Slow Feature Analysis Slow feature analysis uses a very primitive model of the dynamics. It assumes that the underlying state does not change much. This is a much weaker way of extracting features than using precise knowledge of the dynamics. Also, slow feature analysis only produces dumb features that do not output explicit instantiation parameters.
35 Dealing with the three dimensional world Use stereo images to allow the encoder to extract 3-D poses. Using capsules, 3-D would not be much harder than 2-D if we started with 3-D pixels. The loss of the depth coordinate is a separate problem from the complexity of 3-D geometry. At least capsules stand a chance of dealing with the 3-D geometry properly. Convolutional nets in 3-D are a nightmare.
36 Dealing with 3-D viewpoint (Alex Krizhevsky) [Figure: input stereo pair, output stereo pair, and the correct output.]
37 It also works on test data
38 the end The work on transforming autoencoders is in a paper called "Transforming Autoencoders" on my webpage: The work on the dropout technique that is used to make a large number of capsules find different overlapping templates is described in a paper on arXiv: