Computational Foundations of Cognitive Science




Computational Foundations of Cognitive Science
Lecture 15: Convolutions and Kernels

Frank Keller
School of Informatics, University of Edinburgh
keller@inf.ed.ac.uk

February 23, 2010

1 Convolutions of Discrete Functions

Convolutions of Discrete Functions

Definition: Convolution
If f and g are discrete functions, then f * g is the convolution of f and g and is defined as:

(f * g)(x) = \sum_{u=-\infty}^{+\infty} f(u) g(x - u)

Intuitively, the convolution of two functions represents the amount of overlap between the two functions. The function g is the input, f the kernel of the convolution. Convolutions are often used for filtering, both in the temporal or frequency domain (one-dimensional) and in the spatial domain (two-dimensional).
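As a concrete illustration (a sketch of my own, not part of the original slides), the definition translates directly into a few lines of Python; the function name `convolve` and the convention that values outside the lists count as 0 are my own choices:

```python
def convolve(f, g):
    """Discrete convolution: (f * g)(x) = sum over u of f(u) * g(x - u).

    f and g are finite lists; values outside their index ranges are
    treated as 0, so the result has len(f) + len(g) - 1 entries.
    """
    n = len(f) + len(g) - 1
    c = [0] * n
    for x in range(n):
        for u in range(len(f)):
            if 0 <= x - u < len(g):
                c[x] += f[u] * g[x - u]
    return c

print(convolve([1, 2], [1, 1, 1]))  # [1, 3, 3, 2]
```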

Convolutions of Discrete Functions

Theorem: Properties of Convolution
If f, g, and h are functions and a is a constant, then:

f * g = g * f  (commutativity)
f * (g * h) = (f * g) * h  (associativity)
f * (g + h) = (f * g) + (f * h)  (distributivity)
a(f * g) = (af) * g = f * (ag)  (associativity with scalar multiplication)

Note that it doesn't matter whether g or f is the kernel, due to commutativity.
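These properties are easy to check numerically. The sketch below (my own, not from the slides) implements the definition with zero padding and verifies each property on one small example:

```python
def convolve(f, g):
    # (f * g)(x) = sum over u of f(u) g(x - u); indices outside the
    # lists contribute 0, so the result has len(f) + len(g) - 1 entries
    n = len(f) + len(g) - 1
    return [sum(f[u] * g[x - u]
                for u in range(len(f)) if 0 <= x - u < len(g))
            for x in range(n)]

f, g, h = [1, 0, -1], [2, 1], [1, 3]

assert convolve(f, g) == convolve(g, f)                            # commutativity
assert convolve(f, convolve(g, h)) == convolve(convolve(f, g), h)  # associativity

# distributivity (g and h must have the same length for g + h to make sense)
gh = [x + y for x, y in zip(g, h)]
assert convolve(f, gh) == [x + y for x, y in zip(convolve(f, g), convolve(f, h))]

# associativity with scalar multiplication
a = 5
assert [a * v for v in convolve(f, g)] == convolve([a * v for v in f], g)
print("all properties hold on this example")
```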

If a function f ranges over a finite set of values a_1, a_2, ..., a_n, then it can be represented as the vector [ a_1 a_2 ... a_n ].

Definition: If the functions f and g are represented as vectors a = [ a_1 a_2 ... a_m ] and b = [ b_1 b_2 ... b_n ], then f * g is the vector c = [ c_1 c_2 ... c_{m+n-1} ] with components:

c_x = \sum_u a_u b_{x-u+1}

where u ranges over all legal subscripts for a_u and b_{x-u+1}, specifically u = max(1, x - n + 1), ..., min(x, m).

If we assume that the two vectors a and b have the same dimensionality n, then the convolution c is:

c_1 = a_1 b_1
c_2 = a_1 b_2 + a_2 b_1
c_3 = a_1 b_3 + a_2 b_2 + a_3 b_1
...
c_n = a_1 b_n + a_2 b_{n-1} + ... + a_n b_1
...
c_{2n-1} = a_n b_n

Note that the sum for each component only includes those products for which the subscripts are valid.

Example
Assume a = [ 1 0 -1 ]^T and b = [ 0 1 2 ]^T. Then:

a * b = [ a_1 b_1                          [ 1·0                    [  0
          a_1 b_2 + a_2 b_1                  1·1 + 0·0                 1
          a_1 b_3 + a_2 b_2 + a_3 b_1   =    1·2 + 0·1 + (-1)·0   =    2
          a_2 b_3 + a_3 b_2                  0·2 + (-1)·1              -1
          a_3 b_3 ]                          (-1)·2 ]                  -2 ]
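The result can be double-checked mechanically; this little script (mine, not from the slides) just applies the definition with 0-based indices:

```python
def convolve(a, b):
    # (a * b)_x = sum of a_u * b_(x-u) over all valid 0-based indices
    n = len(a) + len(b) - 1
    return [sum(a[u] * b[x - u]
                for u in range(len(a)) if 0 <= x - u < len(b))
            for x in range(n)]

print(convolve([1, 0, -1], [0, 1, 2]))  # [0, 1, 2, -1, -2]
```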

What happens if the kernel is smaller than the input vector (this is actually the typical case)? Assume a = [ 1 2 1 ] and b = [ 2 1 1 1 1 1 1 ]^T. Compute a * b. What is the purpose of the kernel a?

We can extend convolution to functions of two variables f(x, y) and g(x, y).

Definition: Convolution for Functions of Two Variables
If f and g are discrete functions of two variables, then f * g is the convolution of f and g and is defined as:

(f * g)(x, y) = \sum_{u=-\infty}^{+\infty} \sum_{v=-\infty}^{+\infty} f(u, v) g(x - u, y - v)

We can regard functions of two variables as matrices with A_{xy} = f(x, y), and obtain a matrix definition of convolution.

Definition: If the functions f and g are represented as the n × m matrix A and the k × l matrix B, then f * g is the (n + k - 1) × (m + l - 1) matrix C with:

c_{xy} = \sum_u \sum_v a_{uv} b_{x-u+1, y-v+1}

where u and v range over all legal subscripts for a_{uv} and b_{x-u+1, y-v+1}.

Note: the treatment of subscripts can vary from implementation to implementation, and affects the size of C (this is parameterizable in Matlab; see the documentation of the conv2 function).
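A direct (if inefficient) transcription of the matrix definition into Python might look as follows; this is my own sketch of the behaviour of Matlab's conv2(A, B, 'full'), with the 1-based subscript shift b_{x-u+1, y-v+1} absorbed by Python's 0-based indexing:

```python
def convolve2d(A, B):
    """Full 2D convolution: C[x][y] = sum over u, v of A[u][v] * B[x-u][y-v].

    A is n x m, B is k x l; the result is (n + k - 1) x (m + l - 1).
    Out-of-range indices of B contribute 0.
    """
    n, m = len(A), len(A[0])
    k, l = len(B), len(B[0])
    C = [[0] * (m + l - 1) for _ in range(n + k - 1)]
    for x in range(n + k - 1):
        for y in range(m + l - 1):
            for u in range(n):
                for v in range(m):
                    if 0 <= x - u < k and 0 <= y - v < l:
                        C[x][y] += A[u][v] * B[x - u][y - v]
    return C

print(convolve2d([[1, 0], [0, 1]], [[1, 2], [3, 4]]))
# [[1, 2, 0], [3, 5, 2], [0, 3, 4]]
```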

Example
Let

A = [ a_11 a_12 a_13        B = [ b_11 b_12 b_13 b_14 b_15
      a_21 a_22 a_23              b_21 b_22 b_23 b_24 b_25
      a_31 a_32 a_33 ]            b_31 b_32 b_33 b_34 b_35
                                  b_41 b_42 b_43 b_44 b_45 ]

Then for C = A * B, the entry

c_33 = a_11 b_33 + a_12 b_32 + a_13 b_31 + a_21 b_23 + a_22 b_22 + a_23 b_21 + a_31 b_13 + a_32 b_12 + a_33 b_11

Here, B could represent an image, and A could represent a kernel performing an image operation, for instance.

Example: Image Processing
Convolving an image with a kernel (typically a 3 × 3 matrix) is a powerful tool for image processing. The kernel

K = [ 1/9 1/9 1/9
      1/9 1/9 1/9
      1/9 1/9 1/9 ]

implements a mean filter, which smooths an image by replacing each pixel value with the mean of its neighbors.

[Figure: an image B and the smoothed image K * B]
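A mean filter can be written without any libraries. The sketch below is my own simplification (border handling varies between implementations; here border pixels are simply left unchanged):

```python
def mean_filter(image):
    """Replace each interior pixel with the mean of its 3x3 neighbourhood,
    i.e. convolution with the kernel of 1/9 entries; border pixels are
    left unchanged here for simplicity."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for x in range(1, h - 1):
        for y in range(1, w - 1):
            out[x][y] = sum(image[x + dx][y + dy]
                            for dx in (-1, 0, 1) for dy in (-1, 0, 1)) / 9
    return out

# a single bright pixel is replaced by its neighbourhood's mean
img = [[0, 0, 0],
       [0, 9, 0],
       [0, 0, 0]]
print(mean_filter(img))  # centre becomes (8 * 0 + 9) / 9 = 1.0
```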

Example: Image Processing
The kernel

K = [ 1 0 -1
      2 0 -2
      1 0 -1 ]

implements the Sobel edge detector. It detects gradients in the pixel values (sudden changes in brightness), which correspond to edges. The example is for vertical edges.

[Figure: an image B and the edge map K * B]
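The following sketch (mine, interior pixels only) applies a 3 × 3 kernel by the convolution formula and shows the Sobel kernel responding strongly at a vertical brightness step and not at all in flat regions:

```python
def apply3x3(image, K):
    """Convolve an image with a 3x3 kernel K at interior pixels:
    out[x][y] = sum over u, v of K[u][v] * image[x - (u-1)][y - (v-1)]."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for x in range(1, h - 1):
        for y in range(1, w - 1):
            out[x][y] = sum(K[u][v] * image[x - (u - 1)][y - (v - 1)]
                            for u in range(3) for v in range(3))
    return out

sobel = [[1, 0, -1],
         [2, 0, -2],
         [1, 0, -1]]

# dark left half, bright right half: a vertical edge down the middle
img = [[0, 0, 0, 10, 10, 10] for _ in range(4)]
print(apply3x3(img, sobel))
# interior rows come out as [0, 0, 40, 40, 0, 0]:
# a large response next to the edge, zero in the flat regions
```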

We can also define convolution for continuous functions. In this case, we replace the sums by integrals in the definition.

Definition: Convolution
If f and g are continuous functions, then f * g is the convolution of f and g and is defined as:

(f * g)(x) = \int_{-\infty}^{+\infty} f(u) g(x - u) \, du

Convolutions of continuous functions are widely used in signal processing for filtering continuous signals, e.g., speech.

Example
Assume the following step functions:

f(x) = { 1/2 if -1 <= x <= 1          g(x) = { 3 if 0 <= x <= 4
       { 0   otherwise                       { 0 otherwise

If we integrate g(x), we get:

G(x) = { 0   if x <= 0
       { 3x  if 0 <= x <= 4
       { 12  if x > 4

Then the convolution f * g is:

(f * g)(x) = \int_{-\infty}^{+\infty} f(u) g(x - u) du = 1/2 \int_{x-1}^{x+1} g(u) du = 1/2 (G(x + 1) - G(x - 1))

           = { (3/2)(x + 1)       if -1 <= x < 1
             { 3                  if 1 <= x <= 3
             { -(3/2)(x - 1) + 6  if 3 < x <= 5
             { 0                  otherwise
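The piecewise result can be sanity-checked numerically. The script below (my own, using a simple Riemann-sum approximation of the integral) reproduces one value from each non-zero piece:

```python
def f(x):
    return 0.5 if -1 <= x <= 1 else 0.0

def g(x):
    return 3.0 if 0 <= x <= 4 else 0.0

def conv(x, du=0.001):
    """Approximate (f * g)(x) = integral of f(u) g(x - u) du by a
    Riemann sum; f vanishes outside [-1, 1], so integrating over
    that interval suffices."""
    steps = int(2 / du)
    return sum(f(-1 + i * du) * g(x - (-1 + i * du))
               for i in range(steps)) * du

print(round(conv(0), 2))  # 1.5 = (3/2)(0 + 1), the rising piece
print(round(conv(2), 2))  # 3.0, the flat piece on [1, 3]
print(round(conv(4), 2))  # 1.5 = -(3/2)(4 - 1) + 6, the falling piece
```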

Function g(x) (blue) and convolution (f * g)(x) (green): [plot omitted]

Assume we have a function I(x) that represents the intensity of a signal over time. This can be a very spiky function. We can make this function less spiky by convolving it with a Gaussian kernel. This is a kernel given by the Gaussian function:

G(x) = \frac{1}{\sqrt{2\pi}} e^{-\frac{1}{2} x^2}

The convolution G * I is a smoothed version of the original intensity function. We will learn more about the Gaussian function (aka normal distribution) in the second half of this course.
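As an illustration (my own sketch, not from the lecture), Gaussian smoothing of a 1D signal: the kernel is the Gaussian sampled at integer offsets and normalised so its weights sum to 1, and the border handling (renormalising over available neighbours) is one choice among several:

```python
import math

def gaussian_kernel(radius=3, sigma=1.0):
    """Sample G(x) = (1 / (sigma * sqrt(2 pi))) * exp(-x^2 / (2 sigma^2))
    at integer offsets and normalise the weights to sum to 1."""
    k = [math.exp(-0.5 * (x / sigma) ** 2) for x in range(-radius, radius + 1)]
    total = sum(k)
    return [v / total for v in k]

def smooth(signal, kernel):
    """Convolve signal with kernel, keeping the signal's length; near the
    borders only the available neighbours are used, renormalised.
    (For a symmetric kernel, flipping it makes no difference.)"""
    r = len(kernel) // 2
    out = []
    for x in range(len(signal)):
        num = den = 0.0
        for i, w in enumerate(kernel):
            u = x + i - r
            if 0 <= u < len(signal):
                num += w * signal[u]
                den += w
        out.append(num / den)
    return out

spiky = [0, 0, 10, 0, 0, 0, 10, 0, 0]
smoothed = smooth(spiky, gaussian_kernel())
print([round(v, 2) for v in smoothed])  # the spikes are spread out and lowered
```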

Gaussian function G(x): [plot omitted]

Original intensity function I(x): [plot omitted]

Smoothed function obtained by convolution with a Gaussian kernel: [plot omitted]

Summary

- The convolution (f * g)(x) = \sum_u f(u) g(x - u) represents the overlap between a discrete function g and a kernel f;
- convolutions in one dimension can be represented as vectors, convolutions in two dimensions as matrices;
- in image processing, two-dimensional convolution can be used to filter an image or for edge detection;
- for continuous functions, convolution is defined as (f * g)(x) = \int f(u) g(x - u) du; this can be used in signal processing, e.g., to smooth a signal.