# Introduction to Computer Vision. Week 11, Fall 2010 Instructor: Prof. Ko Nishino


2 The Projective Plane
Why do we need homogeneous coordinates?
- represent points at infinity, homographies, perspective projection, multi-view relationships
What is the geometric intuition?
- a point in the image is a ray in projective space
- each point (x, y) on the image plane is represented by a ray (sx, sy, s) through the origin (0, 0, 0)
- all points on the ray are equivalent: (x, y, 1) ≅ (sx, sy, s)
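The ray equivalence can be checked numerically. A minimal NumPy sketch (the helper names `to_homogeneous` and `from_homogeneous` are illustrative, not from the slides):

```python
import numpy as np

def to_homogeneous(p):
    """Append w = 1 to a 2D image point."""
    return np.append(p, 1.0)

def from_homogeneous(p):
    """Divide by the last coordinate to recover the 2D point."""
    return p[:-1] / p[-1]

# (x, y, 1) and any scaling (sx, sy, s) represent the same image point
p = to_homogeneous([2.0, 3.0])
q = 5.0 * p                      # another point on the same ray
assert np.allclose(from_homogeneous(p), from_homogeneous(q))
```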

3 Projective Lines
What does a line in the image correspond to in projective space?
- A line is a plane of rays through the origin: all rays (x, y, z) satisfying ax + by + cz = 0
- In vector notation: 0 = [a b c] [x y z]^T = l^T p
- A line is therefore also represented as a homogeneous 3-vector l

4 Point and Line Duality
- A line l is a homogeneous 3-vector
- It is perpendicular to every point (ray) p on the line: l · p = 0
What is the line l spanned by rays p1 and p2?
- l is perpendicular to p1 and p2, so l = p1 × p2
- l is the plane normal
What is the intersection of two lines l1 and l2?
- p is perpendicular to l1 and l2, so p = l1 × l2
Points and lines are dual in projective space
- given any formula, we can switch the meanings of points and lines to get another formula
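The duality formulas l = p1 × p2 and p = l1 × l2 map directly onto the cross product. A small NumPy sketch (the example points and lines are invented for illustration):

```python
import numpy as np

# Line through two points (rays) p1, p2: l = p1 x p2
p1 = np.array([0.0, 0.0, 1.0])   # image point (0, 0)
p2 = np.array([1.0, 1.0, 1.0])   # image point (1, 1)
l = np.cross(p1, p2)             # the line y = x

# l is perpendicular to every point on the line
assert np.isclose(l @ p1, 0) and np.isclose(l @ p2, 0)

# Intersection of two lines: p = l1 x l2
l2 = np.cross(np.array([0.0, 1.0, 1.0]),
              np.array([1.0, 0.0, 1.0]))  # the line x + y = 1
p = np.cross(l, l2)
p = p / p[-1]                    # normalize: intersection at (0.5, 0.5)
assert np.allclose(p[:2], [0.5, 0.5])
```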

5 Ideal Points and Lines
- Ideal point ("point at infinity"): p ≅ (x, y, 0), a ray parallel to the image plane; its image coordinates are infinitely large
- Ideal line: l ≅ (0, 0, 1), the plane of rays parallel to the image plane

6 Homographies of Points and Lines
Computed by 3x3 matrix multiplication
- To transform a point: p' = Hp
- To transform a line: since l p = 0 must imply l' p' = 0, and 0 = l p = l H^-1 H p = (l H^-1) p', we get l' = l H^-1
- lines are transformed by postmultiplication of H^-1
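The two transformation rules can be verified on a toy homography: incidence l · p = 0 is preserved. A NumPy sketch (the translation homography is an arbitrary example):

```python
import numpy as np

H = np.array([[1.0, 0.0, 2.0],   # a simple homography: translation by (2, 3)
              [0.0, 1.0, 3.0],
              [0.0, 0.0, 1.0]])

p = np.array([1.0, 1.0, 1.0])                # a point
l = np.cross(p, np.array([2.0, 2.0, 1.0]))   # a line through p

p_new = H @ p                    # points transform by premultiplying H
l_new = l @ np.linalg.inv(H)     # lines transform by postmultiplying H^-1

# incidence is preserved: the transformed point lies on the transformed line
assert np.isclose(l_new @ p_new, 0)
```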

7 3D Projective Geometry
These concepts generalize naturally to 3D
- Homogeneous coordinates: projective 3D points have four coordinates, P = (X, Y, Z, W)
- Duality: a plane N is also represented by a 4-vector; points and planes are dual in 3D: N · P = 0
- Projective transformations: represented by 4x4 matrices T: P' = TP, N' = N T^-1

8 Vanishing Points
- A vanishing point is the projection of a point at infinity onto the image plane

9 Vanishing Points (2D)
(figure: a line on the ground plane projects through the camera center to a vanishing point on the image plane)

10 Vanishing Points
Properties
- Any two parallel lines on the ground plane have the same vanishing point v
- The ray from the camera center C through v is parallel to the lines
- An image may have more than one vanishing point

11 Vanishing points Image by Q-T. Luong (a vision researcher & photographer)

12 Vanishing Lines
Multiple Vanishing Points
- Any set of parallel lines on the ground plane defines a vanishing point
- The union of all of these vanishing points is the horizon line (also called the vanishing line)
- Note that different planes define different vanishing lines

14 Vanishing Lines Image by Q-T. Luong (a vision researcher & photographer)

15 Computing Vanishing Points
- A line in 3D through a point P0 with direction D can be written P(t) = P0 + tD
- As t → ∞, P(t) approaches the point at infinity P∞ = (D, 0); the vanishing point v is the projection of P∞
- v therefore depends only on the line direction D, not on P0
- Parallel lines P0 + tD and P1 + tD intersect at P∞, so they share the same vanishing point
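The claim that the vanishing point depends only on the line direction can be checked with a toy projection matrix. A NumPy sketch, assuming the simplest camera Π = [I | 0] with unit focal length (an assumption for illustration):

```python
import numpy as np

Pi = np.hstack([np.eye(3), np.zeros((3, 1))])  # projection matrix [I | 0]

D = np.array([1.0, 0.0, 1.0])        # 3D direction of a family of parallel lines
P_inf = np.append(D, 0.0)            # the point at infinity (D, 0)
v = Pi @ P_inf                       # its projection is the vanishing point
v = v / v[-1]

# Any parallel line P0 + t D converges to v, regardless of P0
P0 = np.array([5.0, -2.0, 3.0, 1.0])
t = 1e8
far = Pi @ (P0 + t * np.append(D, 0.0))
assert np.allclose(far / far[-1], v, atol=1e-6)
```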

16 Computing Vanishing Lines
- The horizon l is the intersection of the horizontal plane through the camera center C with the image plane
- We can compute l from two sets of parallel lines on the ground plane (their two vanishing points span l)
- All points at the same height as C project to l; points higher than C project above l
- This provides a way of comparing the heights of objects in the scene

18 Fun with Vanishing Points

19 Recognition Slides courtesy of Professor Steven Seitz

20 Recognition: The Margaret Thatcher Illusion, by Peter Thompson

21 Recognition: The George Bush Illusion, by Tania Lombrozo

22 Recognition Problems
- What is it? Object detection
- Who is it? Recognizing identity
- What are they doing? Activity recognition
All of these are classification problems
- Choose one class from a list of possible candidates

23 Face Detection

24 One Simple Method: Skin Detection
- Skin pixels have a distinctive range of colors, corresponding to a region (or regions) in RGB color space (for visualization, only the R and G components are shown on the slide)
Skin classifier
- A pixel X = (R, G, B) is skin if it lies in the skin region
- But how do we find this region?

25 Skin Detection
Learn the skin region from examples
- Manually label pixels in one or more training images as skin or not skin
- Plot the training data in RGB space (skin pixels shown in orange, non-skin pixels in blue)
- Some skin pixels may fall outside the region, and some non-skin pixels inside it
Skin classifier
- Given X = (R, G, B): how do we determine whether it is skin or not?

26 Skin Classification Techniques
Given X = (R, G, B): how do we determine whether it is skin or not?
- Nearest neighbor: find the labeled pixel closest to X and choose its label
- Data modeling: fit a model (curve, surface, or volume) to each class
- Probabilistic data modeling: fit a probability model to each class

27 Probability
Basic probability
- X is a random variable
- P(X) is the probability that X achieves a certain value
- P(X) is called a PDF (probability distribution/density function); X may be discrete or continuous
- a 2D PDF is a surface, a 3D PDF is a volume
Conditional probability
- P(X | Y) is the probability of X given that we already know Y

28 Probabilistic Skin Classification
Now we can model uncertainty
- Each pixel has a probability of being skin or not skin
Skin classifier
- Given X = (R, G, B): choose the interpretation of highest probability
- set X to be a skin pixel if and only if P(skin | X) > P(~skin | X)
- Where do we get P(skin | X) and P(~skin | X)?

29 Learning Conditional PDFs
We can calculate P(R | skin) from a set of training images
- It is simply a histogram over the pixels in the training images: each bin Ri contains the proportion of skin pixels with color Ri
But this isn't quite what we want
- Why not? To determine whether a pixel is skin, we want P(skin | R), not P(R | skin)
- How can we get it?

30 Bayes Rule
In terms of our problem: P(skin | R) = P(R | skin) P(skin) / P(R)
- P(R | skin) is what we measure (the likelihood)
- P(skin) is domain knowledge (the prior)
- P(skin | R) is what we want (the posterior)
- P(R) is a normalization term
What could we use for the prior P(skin)?
- Domain knowledge: P(skin) may be larger if we know the image contains a person; for a portrait, P(skin) may be higher for pixels in the center
- Or learn the prior from the training set: P(skin) may be the proportion of skin pixels in the training set
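The whole pipeline (histogram likelihoods, a prior from the class proportions, and the Bayes decision) fits in a few lines. A NumPy sketch on synthetic red-channel data; the Gaussian training distributions and bin count are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy training data: red-channel values in [0, 255], labeled skin / not skin
skin_R = rng.normal(180, 20, 2000).clip(0, 255)
nonskin_R = rng.normal(90, 40, 6000).clip(0, 255)

bins = np.linspace(0, 256, 33)
# Likelihoods P(R | skin), P(R | ~skin): normalized histograms
p_R_skin, _ = np.histogram(skin_R, bins=bins, density=True)
p_R_not, _ = np.histogram(nonskin_R, bins=bins, density=True)
# Prior P(skin): proportion of skin pixels in the training set
p_skin = len(skin_R) / (len(skin_R) + len(nonskin_R))

def classify(R):
    """Decision rule: skin iff P(R|skin) P(skin) > P(R|~skin) P(~skin)."""
    i = min(np.searchsorted(bins, R, side="right") - 1, len(p_R_skin) - 1)
    return p_R_skin[i] * p_skin > p_R_not[i] * (1 - p_skin)

assert classify(185) and not classify(60)
```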

31 Bayesian Estimation
Bayesian estimation minimizes the probability of misclassification
- Goal is to choose the label (skin or ~skin) that maximizes the posterior: Maximum A Posteriori (MAP) estimation
Suppose the prior is uniform: P(skin) = P(~skin) = 0.5
- In this case, maximizing the posterior is equivalent to maximizing the likelihood: Maximum Likelihood (ML) estimation
- We then label X as skin if and only if P(R | skin) > P(R | ~skin)

32 Skin Detection Results

33 General Classification
This same procedure applies in more general circumstances
- More than two classes
- More than one dimension
Example: face detection
- Here, X is an image region; its dimension = # of pixels
- Each face can be thought of as a point in a high-dimensional space
H. Schneiderman and T. Kanade, "A Statistical Method for 3D Object Detection Applied to Faces and Cars", IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2000)

34 Linear Subspaces
Classification can be expensive
- Must either search (e.g., nearest neighbors) or store large PDFs
Suppose the data points are arranged roughly along a line, as on the slide
- Idea: fit a line; the classifier measures distance to the line
Convert x into (v1, v2) coordinates
- The v1 coordinate measures position along the line; use it to specify which orange point it is
- The v2 coordinate measures distance to the line (near 0 for orange points); use it for classification

35 Dimensionality Reduction
- We can represent the orange points with only their v1 coordinates, since the v2 coordinates are all essentially 0
- This makes it much cheaper to store and compare points
- This is a bigger deal for higher-dimensional problems

36 Linear Subspaces
Consider the variation along a unit direction v among all of the orange points: var(v) = Σi ((xi - x̄) · v)², which can be written v^T A v with A = Σi (xi - x̄)(xi - x̄)^T
- What unit vector v minimizes var? What unit vector v maximizes it?
- Solution: v1, the eigenvector of A with the largest eigenvalue, maximizes var; v2, the eigenvector with the smallest eigenvalue, minimizes it

37 Principal Component Analysis
Suppose each data point is N-dimensional; the same procedure applies
- The eigenvectors of A define a new coordinate system
- The eigenvector with the largest eigenvalue captures the most variation among the training vectors x
- The eigenvector with the smallest eigenvalue has the least variation
We can compress the data by using only the top few eigenvectors
- This corresponds to choosing a linear subspace: representing the points on a line, plane, or hyperplane
- These eigenvectors are known as the principal components
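PCA as described, eigenvectors of the scatter matrix A, can be sketched in NumPy. The toy 3D data set below is invented; in the face setting the rows would be image vectors:

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy data: 200 points lying roughly along one direction in R^3
X = rng.normal(size=(200, 1)) @ np.array([[3.0, 2.0, 1.0]]) \
    + 0.05 * rng.normal(size=(200, 3))

Xc = X - X.mean(axis=0)               # center the data
A = Xc.T @ Xc                         # scatter matrix (covariance up to scale)
eigvals, eigvecs = np.linalg.eigh(A)  # eigenvalues in ascending order

# First principal component = eigenvector with the largest eigenvalue
v1 = eigvecs[:, -1]
expected = np.array([3.0, 2.0, 1.0]) / np.linalg.norm([3.0, 2.0, 1.0])
assert abs(abs(v1 @ expected) - 1) < 1e-2   # v1 aligns with the data direction
```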

38 The Space of Faces
- An image is a point in a high-dimensional space: an N x M image is a point in R^(NM)
- We can define vectors in this space just as we did in the 2D case

39 Dimensionality Reduction
- The set of faces is a subspace of the set of images; suppose it is K-dimensional
- We can find the best subspace using PCA
- This is like fitting a hyperplane to the set of faces, spanned by vectors v1, v2, ..., vK; any face can be approximated as a combination of these

40 Eigenfaces
PCA extracts the eigenvectors of A
- This gives a set of vectors v1, v2, v3, ...
- Each of these vectors is a direction in face space: what do these look like?

41 Projecting onto the Eigenfaces
- The eigenfaces v1, ..., vK span the space of faces
- A face x is converted to eigenface coordinates by projecting onto each eigenface: x → (a1, ..., aK), where ai = vi · (x - x̄) and x̄ is the mean face

42 Recognition with Eigenfaces
Algorithm
1. Process the image database (a set of images with labels)
   - Run PCA to compute the eigenfaces
   - Calculate the K coefficients for each image
2. Given a new image x to be recognized, calculate its K coefficients
3. Detect whether x is a face
4. If it is a face, who is it?
   - Find the closest labeled face in the database: nearest neighbor in K-dimensional space
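The steps above can be sketched end to end. A NumPy toy version, with random vectors standing in for an 8x8 face database; the names `coeffs` and `recognize` are illustrative, not from the slides (the face/non-face detection step is omitted):

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy "face database": 10 labeled 8x8 images, flattened to 64-vectors
faces = rng.normal(size=(10, 64))
labels = [f"person_{i}" for i in range(10)]

# Step 1: PCA on the database -> top-K eigenfaces
mean = faces.mean(axis=0)
U, S, Vt = np.linalg.svd(faces - mean, full_matrices=False)
K = 5
eigenfaces = Vt[:K]                          # K x 64 principal directions

def coeffs(x):
    """Step 2: project an image onto the eigenfaces."""
    return eigenfaces @ (x - mean)

# Step 4: recognize by nearest neighbor in K-dimensional coefficient space
db = np.array([coeffs(f) for f in faces])
def recognize(x):
    d = np.linalg.norm(db - coeffs(x), axis=1)
    return labels[int(np.argmin(d))]

# A slightly noisy copy of face 3 should still match person_3
assert recognize(faces[3] + 0.01 * rng.normal(size=64)) == "person_3"
```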

43 Choosing the Dimension K
How many eigenfaces should we use? Look at the decay of the eigenvalues λi for i = 1, ..., NM
- Each eigenvalue tells you the amount of variance in the direction of its eigenface
- Ignore eigenfaces with low variance: choose K where the eigenvalues drop off
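One common way to operationalize "look at the decay of the eigenvalues" is to keep the smallest K that retains a fixed fraction of the total variance. A sketch with made-up eigenvalues; the 95% threshold is a conventional choice, not from the slides:

```python
import numpy as np

# Suppose PCA gave these eigenvalues (variances), sorted in descending order
eigvals = np.array([50.0, 30.0, 10.0, 5.0, 3.0, 1.0, 0.5, 0.3, 0.2])

# Pick the smallest K that keeps 95% of the total variance
ratio = np.cumsum(eigvals) / eigvals.sum()
K = int(np.searchsorted(ratio, 0.95) + 1)   # here the first 4 eigenvalues suffice
```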

44 Object Recognition
This is just the tip of the iceberg
- We have talked about using pixel color as a feature
- Many other features can be used: edges, motion (e.g., optical flow), object size, SIFT, ...
Classical object recognition techniques recover 3D information as well
- Given an image and a database of 3D models, determine which model(s) appear in that image
- They often recover the 3D pose of the object as well

45 Recap

46 Image Formation/Sensing
- Pin-hole model
- Thin-lens law
- Aperture and depth of field
- Radiometric/geometric distortion
- Human eyes
- Sensors and quantum efficiency
- Sensing color

47 Camera Models and Projective Geometry
- Homogeneous coordinates and their geometric intuition
- Projection models: perspective, orthographic, weak-perspective, and affine
- Camera calibration: camera parameters, direct linear method
- Homography
- Points and lines in projective space: projective operations (line intersection, line containing two points), ideal points and lines (at infinity)
- Vanishing points and lines and how to compute them

48 Image Filtering
- Digital vs. continuous images
- Linear shift-invariant systems
- Convolution: sifting property, cascade systems
- Fourier transform and convolution
- Gaussian smoothing
- Sampling theorem, Nyquist law, aliasing
- Smoothing: Gaussian filtering, median filtering
- Image scaling and resampling
- Correlation

49 Edge Detection
- Origin of edges, edge types
- Theory of edge detection: squared gradient, Laplacian, discrete approximations
- Accounting for noise: derivative theorem of convolution, Laplacian of Gaussian
- Canny edge detector
- Difference of Gaussians

50 Motion
- Optical flow problem definition: motion field and optical flow
- Aperture problem and how it arises
- Assumptions: brightness constancy, small motion, smoothness
- Derivation of the optical flow constraint equation
- Lucas-Kanade equation: derivation, conditions for solvability, meanings of eigenvalues and eigenvectors
- Iterative refinement: Newton's method
- Pyramid-based flow estimation

51 Mosaicing
- Wide-angle imaging
- Cylindrical image reprojection
- Accounting for radial distortion
- Brightness constancy, small motion, smoothness
- Drifting

52 Lightness
- Color constancy
- Lightness recovery
- Frequency domain interpretation
- Solving the Poisson equation
- Lightness from multiple images

54 Photometric Stereo
- Gradient space
- Reflectance map
- Diffuse photometric stereo: derivation, equations, solving for albedo and normals, depths from normals
- Handling shadows
- Computing light source directions from a shiny ball
- Limitations

56 Stereo
- Visual cues for 3D inference
- Disparity and depth
- Vergence
- Epipolar geometry: epipolar plane/line, fundamental matrix
- Stereo image rectification
- Stereo matching: window-based matching, effect of window size, sources of error
- Active stereo (basic idea): structured light, laser scanning

57 Structure-from-Motion
- Factorization method: problem formulation, rank constraint, SVD, resolving the ambiguity

58 Recognition
- Classifiers
- Probabilistic classification: decision boundaries, learning PDFs from training images
- Bayes rule, maximum likelihood, MAP
- Principal component analysis
- Eigenfaces algorithm

59 Fin!
- Don't bomb the final exam
- Closed book, COMPREHENSIVE!
- Here, same time next week


### STA 4273H: Statistical Machine Learning

STA 4273H: Statistical Machine Learning Russ Salakhutdinov Department of Statistics! rsalakhu@utstat.toronto.edu! http://www.cs.toronto.edu/~rsalakhu/ Lecture 6 Three Approaches to Classification Construct

### Geometric Transformations and Image Warping: Mosaicing

Geometric Transformations and Image Warping: Mosaicing CS 6640 Ross Whitaker, Guido Gerig SCI Institute, School of Computing University of Utah (with slides from: Jinxiang Chai, TAMU) faculty.cs.tamu.edu/jchai/cpsc641_spring10/lectures/lecture8.ppt

### Lecture 9: Continuous

CSC2515 Fall 2007 Introduction to Machine Learning Lecture 9: Continuous Latent Variable Models 1 Example: continuous underlying variables What are the intrinsic latent dimensions in these two datasets?

### Assessment. Presenter: Yupu Zhang, Guoliang Jin, Tuo Wang Computer Vision 2008 Fall

Automatic Photo Quality Assessment Presenter: Yupu Zhang, Guoliang Jin, Tuo Wang Computer Vision 2008 Fall Estimating i the photorealism of images: Distinguishing i i paintings from photographs h Florin

### Efficient Attendance Management: A Face Recognition Approach

Efficient Attendance Management: A Face Recognition Approach Badal J. Deshmukh, Sudhir M. Kharad Abstract Taking student attendance in a classroom has always been a tedious task faultfinders. It is completely

### Bayesian Classification

CS 650: Computer Vision Bryan S. Morse BYU Computer Science Statistical Basis Training: Class-Conditional Probabilities Suppose that we measure features for a large training set taken from class ω i. Each

### CSE 252B: Computer Vision II

CSE 252B: Computer Vision II Lecturer: Serge Belongie Scribes: Jia Mao, Andrew Rabinovich LECTURE 9 Affine and Euclidean Reconstruction 9.1. Stratified reconstruction Recall that in 3D reconstruction from

### Lecture 12: Cameras and Geometry. CAP 5415 Fall 2010

Lecture 12: Cameras and Geometry CAP 5415 Fall 2010 The midterm What does the response of a derivative filter tell me about whether there is an edge or not? Things aren't working Did you look at the filters?

### Computer Graphics. Course SS 2007 Antialiasing. computer graphics & visualization

Computer Graphics Course SS 2007 Antialiasing How to avoid spatial aliasing caused by an undersampling of the signal, i.e. the sampling frequency is not high enough to cover all details Supersampling -

2010 Digital Image Computing: Techniques and Applications Colour Adjustment and Specular Removal for Non-Uniform Shape from Shading Xiaozheng Zhang, Yongsheng Gao, Terry Caelli Biosecurity Group, Queensland

### Advanced Computer Graphics. Materials and Lights. Matthias Teschner. Computer Science Department University of Freiburg

Advanced Computer Graphics Materials and Lights Matthias Teschner Computer Science Department University of Freiburg Motivation materials are characterized by surface reflection properties empirical reflectance

### Lecture 9: Introduction to Pattern Analysis

Lecture 9: Introduction to Pattern Analysis g Features, patterns and classifiers g Components of a PR system g An example g Probability definitions g Bayes Theorem g Gaussian densities Features, patterns

### Linear Threshold Units

Linear Threshold Units w x hx (... w n x n w We assume that each feature x j and each weight w j is a real number (we will relax this later) We will study three different algorithms for learning linear

### Lecture 3: Linear methods for classification

Lecture 3: Linear methods for classification Rafael A. Irizarry and Hector Corrada Bravo February, 2010 Today we describe four specific algorithms useful for classification problems: linear regression,

### Color. Chapter Spectral content of ambient illumination, which is the color content of the light shining on surfaces

Chapter 10 Color As discussed elsewhere in this book, light has intensity and images have gray value; but light consists of a spectrum of wavelengths and images can include samples from multiple wavelengths

### Geometric Camera Parameters

Geometric Camera Parameters What assumptions have we made so far? -All equations we have derived for far are written in the camera reference frames. -These equations are valid only when: () all distances

### RE INVENT THE CAMERA: 3D COMPUTATIONAL PHOTOGRAPHY FOR YOUR MOBILE PHONE OR TABLET

RE INVENT THE CAMERA: 3D COMPUTATIONAL PHOTOGRAPHY FOR YOUR MOBILE PHONE OR TABLET REINVENT THE CAMERA: 3D COMPUTATIONAL PHOTOGRAPHY FOR YOUR MOBILE PHONE OR TABLET The first electronic camera (left),

### Bayesian Image Super-Resolution

Bayesian Image Super-Resolution Michael E. Tipping and Christopher M. Bishop Microsoft Research, Cambridge, U.K..................................................................... Published as: Bayesian

### Data Clustering. Dec 2nd, 2013 Kyrylo Bessonov

Data Clustering Dec 2nd, 2013 Kyrylo Bessonov Talk outline Introduction to clustering Types of clustering Supervised Unsupervised Similarity measures Main clustering algorithms k-means Hierarchical Main

### Subspace Analysis and Optimization for AAM Based Face Alignment

Subspace Analysis and Optimization for AAM Based Face Alignment Ming Zhao Chun Chen College of Computer Science Zhejiang University Hangzhou, 310027, P.R.China zhaoming1999@zju.edu.cn Stan Z. Li Microsoft

### Colorado School of Mines Computer Vision Professor William Hoff

Professor William Hoff Dept of Electrical Engineering &Computer Science http://inside.mines.edu/~whoff/ 1 Introduction to 2 What is? A process that produces from images of the external world a description

### Radiometric alignment and vignetting calibration. Pablo d'angelo University of Bielefeld

Radiometric alignment and vignetting calibration University of Bielefeld Overview Motivation Image formation Vignetting and exposure estimation Results Summary Motivation Determination of vignetting and

### Brightness and geometric transformations

Brightness and geometric transformations Václav Hlaváč Czech Technical University in Prague Center for Machine Perception (bridging groups of the) Czech Institute of Informatics, Robotics and Cybernetics

### Bildverarbeitung und Mustererkennung Image Processing and Pattern Recognition

Bildverarbeitung und Mustererkennung Image Processing and Pattern Recognition 710.080 2VO 710.081 1KU 1 Optical Flow (I) Content Introduction Local approach (Lucas Kanade) Global approaches (Horn-Schunck,

### A Study on SURF Algorithm and Real-Time Tracking Objects Using Optical Flow

, pp.233-237 http://dx.doi.org/10.14257/astl.2014.51.53 A Study on SURF Algorithm and Real-Time Tracking Objects Using Optical Flow Giwoo Kim 1, Hye-Youn Lim 1 and Dae-Seong Kang 1, 1 Department of electronices

### Least-Squares Intersection of Lines

Least-Squares Intersection of Lines Johannes Traa - UIUC 2013 This write-up derives the least-squares solution for the intersection of lines. In the general case, a set of lines will not intersect at a

### Principal components analysis

CS229 Lecture notes Andrew Ng Part XI Principal components analysis In our discussion of factor analysis, we gave a way to model data x R n as approximately lying in some k-dimension subspace, where k