Object Recognition and Template Matching



Template Matching

A template is a small image (a sub-image). The goal is to find occurrences of this template in a larger image; that is, you want to find the places in the image that match the template.

Example

(Figure: a template, an image, and the resulting matches.)

Basic Approach

For each image coordinate (i, j):
    for each template coordinate (s, t), compute a pixel-wise metric between the image and the template, and accumulate the sum
    record the accumulated similarity at (i, j)

A match is based on the closest similarity measurement over all (i, j).
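The loop above maps almost directly onto code. Below is a minimal NumPy sketch (the lecture's own demo is in Matlab; this is only an illustration, not the course code). The names match_template and metric are illustrative, the image and template are assumed to be 2-D grayscale arrays, and positions where the template does not fully fit are skipped, as noted later for correlation.

    import numpy as np

    def match_template(image, template, metric):
        """Slide the template over the image and record a similarity score
        at every (i, j) position where the template fully fits."""
        H, W = image.shape
        h, w = template.shape
        response = np.zeros((H - h + 1, W - w + 1))
        for i in range(H - h + 1):
            for j in range(W - w + 1):
                patch = image[i:i + h, j:j + w]
                response[i, j] = metric(patch, template)
        return response

The specific similarity metrics introduced in the following slides can be plugged in as the metric argument.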

Similarity Criteria: Correlation

The correlation response between two images f and t is defined as

    c = \sum_{x,y} f(x, y)\, t(x, y)

This is often called cross-correlation.

Template Matching Using Correlation

Assume a template t with half-width W and half-height H (i.e. of size (2W+1) x (2H+1)). The correlation response at each (x, y) is

    c(x, y) = \sum_{k=-W}^{W} \sum_{l=-H}^{H} f(x+k,\, y+l)\; t(k,\, l)

Pick the c(x, y) with the maximum response. (It is typical to ignore the boundaries where the template won't fit.)
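With the scan sketch above, plain cross-correlation is just a sum of products over the overlapping window. This continues the earlier sketch under the same assumptions; the names remain illustrative.

    def correlation(patch, template):
        # c = sum over (x, y) of f(x, y) * t(x, y) for the current window
        return float(np.sum(patch * template))

    # response = match_template(image, template, correlation)
    # best_xy = np.unravel_index(np.argmax(response), response.shape)  # maximum response wins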

Template Matching Response Space

(Figure: the response space c(x, y) computed using correlation; Matlab example.)

Problems with Correlation

If the image intensity varies with position, correlation can fail. For example, the correlation between the template and an exactly matched region can be less than the correlation between the template and a bright spot. The range of c(x, y) also depends on the size of the feature, and correlation is not invariant to changes in image intensity, such as lighting conditions.

Normalized Correlation

We can normalize away the effects of changing intensity and of the template size. We call this normalized correlation:

    c = \frac{\sum_{x,y} [f(x, y) - \bar{f}]\,[t(x, y) - \bar{t}]}
             {\Big( \sum_{x,y} [f(x, y) - \bar{f}]^2 \; \sum_{x,y} [t(x, y) - \bar{t}]^2 \Big)^{1/2}}

Make sure you handle dividing by 0.

Finding Matches

Normalized correlation returns values with a maximum of 1, so accepted matches can be specified with a threshold; for example, c(x, y) > 0.9 is considered a match. Note that you generally need to perform some type of non-maximum suppression: run a filter of a given window size, find the maximum in that window, and set the other values to 0.
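A sketch of the normalized correlation metric and of the window-based non-maximum suppression described above, continuing the earlier sketch. The divide-by-zero guard and the 0.9 threshold follow the slide; the use of scipy.ndimage.maximum_filter is my own choice for the window maximum, not something from the lecture.

    def normalized_correlation(patch, template):
        fp = patch - patch.mean()
        tp = template - template.mean()
        denom = np.sqrt(np.sum(fp ** 2) * np.sum(tp ** 2))
        if denom == 0.0:                   # flat patch or flat template: avoid dividing by 0
            return 0.0
        return float(np.sum(fp * tp) / denom)

    def non_max_suppression(response, size=5):
        """Keep only values that are the maximum of their (size x size) window."""
        from scipy.ndimage import maximum_filter   # assumes SciPy is available
        keep = response == maximum_filter(response, size=size)
        return np.where(keep, response, 0.0)

    # response = match_template(image, template, normalized_correlation)
    # matches = non_max_suppression(response) > 0.9   # accepted matches after suppression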

Other Metrics

Normalized correlation is robust and is one of the most commonly used template matching criteria when accuracy is important, but it is computationally expensive. For speed, we often use other similarity metrics.

SSD: Sum of the Squared Differences

    c(x, y) = \sum_{k=-W}^{W} \sum_{l=-H}^{H} [f(x+k,\, y+l) - t(k,\, l)]^2

Note that in this case you look for the minimum response!

SAD: Sum of the Absolute Differences

    c(x, y) = \sum_{k=-W}^{W} \sum_{l=-H}^{H} |f(x+k,\, y+l) - t(k,\, l)|

Again, look for the minimum response. This operation can be performed efficiently with integer math.

Example Response Space

(Figure: the response space c(x, y) computed using SAD; a match is the minimum response.)
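Sketches of both difference-based metrics (the SSD above and SAD), continuing the earlier sketch; with these the best match is the minimum of the response rather than the maximum.

    def ssd(patch, template):
        # Sum of Squared Differences: the best match is the MINIMUM response
        d = patch.astype(np.float64) - template.astype(np.float64)
        return float(np.sum(d * d))

    def sad(patch, template):
        # Sum of Absolute Differences: also a minimum response; on integer images
        # this can be computed entirely with integer arithmetic, as the slide notes
        d = patch.astype(np.float64) - template.astype(np.float64)
        return float(np.sum(np.abs(d)))

    # response = match_template(image, template, sad)
    # best_xy = np.unravel_index(np.argmin(response), response.shape)  # minimum response wins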

Template Matching Limitations

Templates are not scale or rotation invariant, so slight size or orientation variations can cause problems. We often use several templates to represent one object: different sizes and rotations of the same template. Note that template matching is a computationally expensive operation, especially if you search the entire image or use several templates; however, it can be easily parallelized.

Template Matching Uses

Template matching is a basic tool for area-based stereo, for object tracking in video, and for simple OCR, and it is a basic foundation for simple object recognition.

Object Recognition

We will discuss a simple form of object recognition: appearance-based recognition. Assume we have images of several known objects; we call this our training set. Given a new image, we want to recognize (or classify) it based on our existing set of images.

Example: the Columbia University Image Library (COIL-20), http://www.cs.columbia.edu/cave/research/softlib/coil-20.html

Object Recognition: Typical Problem

You have a training set of images of N objects. You are given a new image F, which shows one of these N objects, possibly at a slightly different view than the images in your training set. Can you determine which object F is?

Let's Start With Face Recognition

Given a database of faces (our objects) and a new image, can you tell who this is? (ftp://ftp.uk.research.att.com:pub/data/att_faces.tar.z)

About the Training Set

The training set generally has several images of the same object at slightly different views. The more views, the more robust the training set; however, more views also make the training set larger.

Brute Force Approach to Face Recognition

This is a template matching problem in which the new face image is the template. Compare the new face image against the database of images using normalized correlation, SSD, or SAD. For example, let the I_i be the existing faces and F be the new face. For each I_i, compute c_i = \sum |I_i - F| (SAD), and hypothesize that the I_i with the minimum c_i is the person.

Example

Database of 40 people, 5 images per person. We randomly choose 4 faces per person to compose our database, giving a set of 160 images, with 1 image per person left out of the database. Find this left-out face using the brute-force approach. (The class example uses images of size 56 x 46 pixels; this is very small and only used for demonstration. Typical image sizes would be 256 x 256 or higher.)

Implementation

Let each training image I_i be written as a vector, and form a matrix X from these vectors:

    X = [ I_1  I_2  ...  I_i  ...  I_{n-1}  I_n ]

X has dimensions (W*H of an image) x (number of images).

Implementation

Let F (the new face) also be written as a vector, and compute the distance of F to each I_i:

    for i = 1 to n:  s_i = |F - I_i|

The closest I_i (minimum s_i) is hypothesized to be the match. In the class example, the X matrix is 2576 x 160 elements, and comparing F with all I_i by brute force takes roughly 423,555 integer operations using SAD.

Example

(Figure: a new face, the training set, the SAD difference images computed in vector form, and the SAD results: 50765, 108711, 98158, 108924, 99820.)
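A minimal sketch of the brute-force comparison, assuming the training images are already stacked as the columns of X and the new face F is flattened the same way; the function name and return convention are illustrative, not from the lecture.

    import numpy as np

    def brute_force_match(X, F):
        """Brute-force face matching: SAD between the new face vector F and
        every column (vectorized training face) of X; the smallest SAD wins."""
        # X: (num_pixels, num_images); F: (num_pixels,)
        scores = np.sum(np.abs(X.astype(np.float64) - F.astype(np.float64)[:, None]), axis=0)
        return int(np.argmin(scores)), scores

    # In the class example X would be 2576 x 160 (56*46-pixel images, 160 of them).
    # idx, scores = brute_force_match(X, F)   # hypothesis: training image idx is the person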

Brute Force

The brute-force approach is computationally expensive, requires a huge amount of memory, and is not very practical. We need a more compact representation.

We have a collection of faces. Face images are highly correlated; they share many of the same spatial characteristics (face shape, nose, eyes). We should be able to describe these images using a more compact representation.

Compact Representation

Images of faces are not randomly distributed, so we can apply Principal Component Analysis (PCA). PCA finds the best vectors that account for the distribution of face images within the entire image space. Each image can be described as a linear combination of these principal components, and the powerful feature is that we can approximate this space with only a few of them.

Seminal paper: "Face Recognition Using Eigenfaces", 1991, Matthew A. Turk and Alex P. Pentland (MIT).

Eigenface Decomposition: Idea

Find the mean face of the training set and subtract the mean from all images, so that each image encodes its variation from the mean. Compute a covariance matrix of all the images; imagine that this encodes the spread of the variation of each pixel across the entire image set. Compute the principal components of the covariance matrix (the eigenvectors of the covariance), and parameterize each face in terms of these principal components.

Eigenface Decomposition

Start from the matrix of vectorized training images:

    X = [ I_1  I_2  ...  I_i  ...  I_{n-1}  I_n ]

Compute the mean image Ī, and compose X̂ from the mean-subtracted images Î_i = I_i - Ī:

    X̂ = [ Î_1  Î_2  ...  Î_i  ...  Î_{n-1}  Î_n ]

Compute the covariance matrix C = X̂ X̂^T (note this is a huge matrix, of size (size of image) x (size of image)). Perform an eigen-decomposition of C. This gives back a set of eigenvectors u_i, the principal components of C:

    U = [ u_1  u_2  ...  u_i  ...  u_{m-1}  u_m ]
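A sketch of the decomposition as described on the slide: mean-subtract, form the covariance, eigen-decompose, and keep the leading eigenvectors. For 56 x 46 images, C is 2576 x 2576 and this direct approach is tractable; for larger images one would normally avoid forming C explicitly (for example via an SVD of X̂), which the lecture does not go into. Names are illustrative.

    import numpy as np

    def eigenfaces(X, k):
        """Mean face plus the top-k eigenfaces of a (num_pixels x num_images) training matrix."""
        mean_face = X.mean(axis=1)              # I_bar
        X_hat = X - mean_face[:, None]          # columns are I_i - I_bar
        C = X_hat @ X_hat.T                     # covariance matrix, (num_pixels x num_pixels)
        eigvals, eigvecs = np.linalg.eigh(C)    # symmetric eigen-decomposition, ascending order
        U_k = eigvecs[:, ::-1][:, :k]           # keep the k eigenvectors with the largest eigenvalues
        return mean_face, U_k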

The Eigenfaces

These eigenvectors form what Pentland called eigenfaces. (Figure: the first 5 eigenfaces from our training set.)

Parameterize Faces as Eigenfaces

All faces in our training set can be described as a linear combination of the eigenfaces. The trick is that we can approximate each face using only a few of the eigenvectors:

    P_i = U_k^T (I_i - Ī)

P_i has only size k, where k << size of the image (here k = 20).
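Parameterizing a face is then a single matrix-vector product; this continues the sketch above, with the same illustrative names.

    def parameterize(face, mean_face, U_k):
        """k-dimensional eigen-space coefficients P_i = U_k^T (I_i - I_bar)."""
        return U_k.T @ (face - mean_face)

    # P_i = parameterize(X[:, i], mean_face, U_k)   # a vector of length k (k = 20 in the example)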

Eigenface Representation

(Figure: the original image and its eigenface reconstructions for K = 2, 5, 15, 20, 30, 60.)

Comparing with Eigenfaces

We build a new representation of our training set. For each I_i in our training set of N images, compute P_i = U_k^T (I_i - Ī) and collect the results into a new matrix

    P_param = [ P_1  P_2  P_3  ...  P_i  ...  P_{n-1}  P_n ]

which has only k rows.

Recognition Using Eigenfaces

Find a match using the parameterization coefficients in P_param. Given a new face F, parameterize it in eigen-space:

    P_f = U_k^T (F - Ī)

Find the closest P_i using SAD, min_i |P_i - P_f|; the image corresponding to that P_i is hypothesized to be our match.

Eigenfaces Performance

Pixel space (class example): the X matrix is 2576 x 160 elements, and the brute-force approach takes roughly 423,555 integer operations using SAD.

Eigen-space (class example): assume we have already calculated U and P_param, where P_param is 20 x 160 elements. The search takes 51,520 multiplies to convert the new image to eigen-space, plus roughly 3,200 integer operations to find a match with SAD!
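A sketch of the matching step, reusing parameterize from above; P_param is the k x n matrix of training coefficients described on the previous slide, and all names remain illustrative.

    def recognize(F, mean_face, U_k, P_param):
        """Project the new face into eigen-space and pick the closest training face by SAD."""
        P_f = U_k.T @ (F - mean_face)                             # k coefficients for the new face
        scores = np.sum(np.abs(P_param - P_f[:, None]), axis=0)   # SAD against every column of P_param
        return int(np.argmin(scores))

    # P_param = np.column_stack([parameterize(X[:, i], mean_face, U_k)
    #                            for i in range(X.shape[1])])     # k x n coefficient matrix
    # idx = recognize(F, mean_face, U_k, P_param)                 # hypothesised match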

Eigenspace Representation

It requires significant pre-processing of the space, but it greatly reduces the amount of memory needed and greatly reduces the matching time. It is a widely accepted approach.

Extension to Generalized Object Recognition

Build several eigenspaces from several training sets (one eigenspace for each set). Parameterize the new image into each of these spaces, find the closest match in each space, and then find the closest space; a sketch follows below.
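A rough sketch of that extension, assuming one (mean face, eigenvectors, coefficient matrix, labels) bundle per eigenspace; comparing raw SAD scores across different eigenspaces is a simplification of "find the closest space" and would likely need some normalization in practice. This continues the earlier sketches.

    def recognize_across_spaces(F, spaces):
        """spaces: one (mean_face, U_k, P_param, labels) tuple per training set / eigenspace.
        Parameterize F into every space and return the label of the overall closest match."""
        best_score, best_label = None, None
        for mean_face, U_k, P_param, labels in spaces:
            P_f = U_k.T @ (F - mean_face)                             # project into this space
            scores = np.sum(np.abs(P_param - P_f[:, None]), axis=0)   # SAD within this space
            i = int(np.argmin(scores))
            if best_score is None or scores[i] < best_score:
                best_score, best_label = scores[i], labels[i]
        return best_label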

Pose Recognition

Used in industrial imaging and automation. Take a training set of images of an object at different positions and build an eigenspace from the training set. Given a new image, find its closest match in the space; this match gives its pose.

Drawbacks of Eigenspaces

They are computationally expensive, and the images have to be registered (same size, roughly the same background). The choice of k affects the accuracy of recognition. The representation is static: if you want to add a new object (person), you have to rebuild the eigenspace. It also starts to break down when there are too many objects, because you begin to get a random distribution.

Template Matching Summary

Similarity criteria: correlation, normalized correlation, SSD, and SAD.

Object Recognition Summary

Appearance-based recognition: PCA (Principal Component Analysis), the eigen-space representation, and eigenfaces.

Active Research Areas

Not much for template matching itself; for object recognition, selected feature-based methods and eigen-decomposition.

Active Research Areas

Computing eigenspaces: optimal eigenspaces and incremental eigenspaces. Face recognition: the training set is important; faking training images with view morphing; compressed-domain integration. Eigenspace research, in both mathematics and computer vision, is a very active area.