Computer Vision: Filtering
1 Computer Vision: Filtering. Raquel Urtasun (TTI-C), TTI Chicago, Jan 10, 2013.
2 Today's lecture: image formation; image filtering.
3 Readings: Chapters 2 and 3 of Rich Szeliski's book, available online.
4 How is an image created? The image formation process that produced a particular image depends on: lighting conditions, scene geometry, surface properties, camera optics. [Source: R. Szeliski]
5 Image formation
6 What is an image? [Source: A. Efros]
7 From photons to RGB values. Sample the 2D space on a regular grid; quantize each sample, i.e., the photons arriving at each active cell are integrated and then digitized. [Source: D. Hoiem]
8 What is an image? A grid (matrix) of intensity values. Figure: an image shown as its matrix of pixel values. It is common to use one byte per value: 0 = black, 255 = white. [Source: N. Snavely]
9 What is an image? We can think of a (grayscale) image as a function f : R^2 -> R giving the intensity at position (x, y). A digital image is a discrete (sampled, quantized) version of this function. [Source: N. Snavely]
10 Image Transformations. As with any function, we can apply operators to an image, e.g., g(x, y) = f(x, y) + 20 (brightening) or g(x, y) = f(-x, y) (mirroring). We'll talk about a special kind of operator: correlation and convolution (linear filtering). [Source: N. Snavely]
11 Filtering
12-15 Question: Noise reduction. Given a camera and a still scene, how can you reduce noise? Take lots of images and average them! What's the next best thing? [Source: S. Seitz]
16 Image filtering. Modify the pixels in an image based on some function of a local neighborhood of each pixel. [Source: L. Zhang]
17-19 Applications of Filtering. Enhance an image, e.g., denoise, resize. Extract information, e.g., texture, edges. Detect patterns, e.g., template matching.
20-23 Noise reduction. Simplest thing: replace each pixel by the average of its neighbors. This assumes that neighboring pixels are similar and that the noise is independent from pixel to pixel. Moving average in 1D: [1, 1, 1, 1, 1]/5. [Source: S. Marschner]
24 Noise reduction. Non-uniform weights give nearby pixels more influence, e.g., [1, 4, 6, 4, 1]/16. The same assumptions apply: neighboring pixels are similar, and the noise is independent from pixel to pixel. [Source: S. Marschner]
25-30 Moving Average in 2D. [Source: S. Seitz]
31-34 Linear Filtering: Correlation. Involves weighted combinations of pixels in small neighborhoods. The output pixel's value is a weighted sum of input pixel values: g(i, j) = sum_{k,l} f(i + k, j + l) h(k, l). The entries of the weight kernel or mask h(k, l) are often called the filter coefficients. This operator is the correlation operator, written g = f ⊗ h.
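The correlation sum above maps directly onto code. A minimal NumPy sketch (the function name `correlate2d` and the zero-padding choice are illustrative, not from the slides):

```python
import numpy as np

def correlate2d(f, h):
    """Cross-correlation g(i,j) = sum_{k,l} f(i+k, j+l) h(k,l),
    with (k,l) centered on the kernel and zero padding at the borders."""
    kh, kw = h.shape
    ph, pw = kh // 2, kw // 2
    fp = np.pad(f, ((ph, ph), (pw, pw)), mode="constant")
    g = np.zeros_like(f, dtype=float)
    for i in range(f.shape[0]):
        for j in range(f.shape[1]):
            g[i, j] = np.sum(fp[i:i + kh, j:j + kw] * h)
    return g

# A 3x3 box filter replaces each pixel by the average of its 3x3 neighborhood.
f = np.arange(25, dtype=float).reshape(5, 5)
box = np.ones((3, 3)) / 9.0
g = correlate2d(f, box)
```

For the interior pixel (2, 2) of this 5x5 ramp, the output is exactly the mean of the surrounding 3x3 block.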
35 Smoothing by averaging. What if the filter size were 5 x 5 instead of 3 x 3? [Source: K. Grauman]
36 Gaussian filter. What if we want the nearest neighboring pixels to have the most influence on the output? The Gaussian removes high-frequency components from the image (it is a low-pass filter). [Source: S. Seitz]
37 Smoothing with a Gaussian [Source: K. Grauman]
38 Mean vs Gaussian
39 Gaussian filter: Parameters. Size of kernel or mask: the Gaussian function has infinite support, but discrete filters use finite kernels. [Source: K. Grauman]
40 Gaussian filter: Parameters. Variance of the Gaussian: determines the extent of smoothing. [Source: K. Grauman]
41 Gaussian filter: Parameters [Source: K. Grauman]
42 Is this the most general Gaussian? No, the most general form for x ∈ R^d is N(x; μ, Σ) = (1 / ((2π)^{d/2} |Σ|^{1/2})) exp(−(1/2)(x − μ)^T Σ^{−1}(x − μ)). But the simplified (isotropic) version is typically used for filtering.
43-46 Properties of the Smoothing. All values are positive, and they sum to 1. The amount of smoothing is proportional to the mask size. Smoothing removes high-frequency components; it is a low-pass filter. [Source: K. Grauman]
47 Example of Correlation. What is the result of filtering the impulse signal (image) F with the arbitrary kernel H? [Source: K. Grauman]
48-49 Convolution. The convolution operator is g(i, j) = sum_{k,l} f(i − k, j − l) h(k, l) = sum_{k,l} f(k, l) h(i − k, j − l), written g = f ∗ h; h is then called the impulse response function. It is equivalent to flipping the filter in both dimensions (bottom to top, right to left) and applying cross-correlation.
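The flip-then-correlate equivalence can be checked directly. Convolving an impulse with any kernel should reproduce the kernel itself, which is why h is called the impulse response. A minimal sketch with zero padding (names illustrative):

```python
import numpy as np

def conv2d(f, h):
    """2D convolution: flip the kernel in both dimensions, then cross-correlate."""
    h_flipped = h[::-1, ::-1]
    kh, kw = h.shape
    ph, pw = kh // 2, kw // 2
    fp = np.pad(f, ((ph, ph), (pw, pw)))
    g = np.zeros_like(f, dtype=float)
    for i in range(f.shape[0]):
        for j in range(f.shape[1]):
            g[i, j] = np.sum(fp[i:i + kh, j:j + kw] * h_flipped)
    return g

# For a symmetric kernel (Gaussian, box) the flip changes nothing, so convolution
# and correlation coincide; an asymmetric kernel distinguishes the two.
f = np.zeros((5, 5)); f[2, 2] = 1.0                      # impulse image
h = np.array([[1., 2., 3.], [4., 5., 6.], [7., 8., 9.]]) # asymmetric kernel
g = conv2d(f, h)                                         # reproduces h around the impulse
```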
50 Matrix form. Correlation and convolution can both be written as a matrix-vector multiply: if we convert the two-dimensional images f(i, j) and g(i, j) into raster-ordered vectors f and g, then g = Hf, with H a sparse matrix.
51 Correlation vs Convolution. Convolution: g(i, j) = sum_{k,l} f(i − k, j − l) h(k, l), i.e., G = H ∗ F. Correlation: g(i, j) = sum_{k,l} f(i + k, j + l) h(k, l), i.e., G = H ⊗ F. For a Gaussian or box filter, how will the outputs differ? If the input is an impulse signal δ, how will the outputs differ? What are h ∗ δ and h ⊗ δ?
52-55 Example. What's the result? [Source: D. Lowe]
56-57 Example. What's the result? Taking twice the impulse kernel and subtracting a box filter, 2δ − (1/9)·ones(3, 3), gives a sharpening filter: it accentuates differences with the local average. [Source: D. Lowe]
58 Sharpening [Source: D. Lowe]
59 Gaussian Filter. The convolution of a Gaussian with itself is another Gaussian: convolving twice with a Gaussian kernel of width σ is the same as convolving once with a kernel of width σ√2. [Source: K. Grauman]
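The width-composition rule can be verified numerically. Convolving two normalized kernels adds their variances exactly, so a Gaussian of width σ convolved with itself matches a single Gaussian of width σ√2. A sketch with generously chosen truncation radii:

```python
import numpy as np

def gauss1d(sigma, radius):
    """Sampled, normalized 1D Gaussian kernel truncated at the given radius."""
    x = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-x**2 / (2 * sigma**2))
    return g / g.sum()

def kernel_variance(k):
    """Variance of a kernel viewed as a discrete probability distribution."""
    x = np.arange(len(k), dtype=float) - (len(k) - 1) / 2.0
    return float(np.sum(k * x**2))

g1 = gauss1d(2.0, 16)                     # width sigma = 2
g2 = np.convolve(g1, g1)                  # the Gaussian convolved with itself
g_wide = gauss1d(2.0 * np.sqrt(2.0), 32)  # single Gaussian of width sigma * sqrt(2)
```

Variance additivity under convolution is exact; the pointwise match with the wide Gaussian is only approximate because of sampling and truncation, but very close at this σ.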
60 Sharpening revisited. What does blurring take away? Detail = original − smoothed. Let's add it back: sharpened = original + (original − smoothed). [Source: S. Lazebnik]
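The add-the-detail-back recipe is unsharp masking. A toy sketch, using a 3x3 box blur as the smoother to keep the example short (the slides use a Gaussian; edge-replication padding is an assumption of this sketch):

```python
import numpy as np

def box_blur(f):
    """3x3 box blur, replicating the edge pixels at the borders."""
    fp = np.pad(f, 1, mode="edge")
    g = np.zeros_like(f, dtype=float)
    for i in range(f.shape[0]):
        for j in range(f.shape[1]):
            g[i, j] = fp[i:i + 3, j:j + 3].mean()
    return g

def unsharp_mask(f):
    """sharpened = original + (original - smoothed)."""
    detail = f - box_blur(f)
    return f + detail

flat = np.full((6, 6), 100.0)   # constant region: no detail, nothing changes
step = np.tile([0., 0., 0., 100., 100., 100.], (6, 1))
sharp = unsharp_mask(step)      # overshoots on both sides of the step edge
```

On the step image the result overshoots below 0 and above 100 next to the edge, which is exactly the accentuated contrast that makes the image look sharper.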
61 Sharpening: before and after. [Source: N. Snavely]
62 Optical Convolution. Camera shake is a convolution of the sharp image with the blur kernel traced out by the camera's motion. Figure: Fergus et al., SIGGRAPH 2006. Bokeh: blur in out-of-focus regions of an image. [Source: N. Snavely]
63-66 Correlation vs Convolution. Convolution is both commutative and associative. The Fourier transform of two convolved images is the product of their individual Fourier transforms. Both correlation and convolution are linear shift-invariant (LSI) operators: they obey the superposition principle, h ∗ (f_0 + f_1) = h ∗ f_0 + h ∗ f_1, and the shift-invariance principle: if g(i, j) = f(i + k, j + l), then (h ∗ g)(i, j) = (h ∗ f)(i + k, j + l), which means that shifting a signal commutes with applying the operator. This is the same as saying that the effect of the operator is the same everywhere.
67-70 Boundary Effects. Filtering the image in this form darkens the corner pixels: the original image is effectively being padded with 0 values wherever the convolution kernel extends beyond the original image boundaries. A number of alternative padding or extension modes have been developed, e.g., zero, wrap (cyclic), clamp (replicate the edge pixel), and mirror.
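NumPy's `np.pad` exposes the usual extension modes; which one is appropriate depends on the image content:

```python
import numpy as np

f = np.array([[1., 2.],
              [3., 4.]])

zero   = np.pad(f, 1, mode="constant")  # pad with 0: darkens borders after filtering
wrap   = np.pad(f, 1, mode="wrap")      # cyclic / periodic extension
clamp  = np.pad(f, 1, mode="edge")      # replicate the nearest edge pixel
mirror = np.pad(f, 1, mode="reflect")   # mirror the image across its border
```

With zero padding, a box filter placed at a corner averages in the zeros, which is exactly the corner darkening described above.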
71-74 Separable Filters. Performing a 2D convolution requires K^2 operations per pixel, where K is the size (width or height) of the convolution kernel. In many cases, this operation can be sped up by first performing a 1D horizontal convolution followed by a 1D vertical convolution, requiring only 2K operations per pixel. If this is possible, the convolution kernel is called separable, and it is the outer product of two 1D kernels: K = v h^T.
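A numerical check that two 1D passes reproduce the full 2D convolution for an outer-product kernel. The 5x5 binomial kernel used here is symmetric, so the kernel flip is a no-op, and zero padding is used consistently in both versions:

```python
import numpy as np

v = np.array([1., 4., 6., 4., 1.]) / 16.0  # 1D binomial (approx. Gaussian) kernel
K = np.outer(v, v)                          # separable 5x5 kernel: K = v v^T

rng = np.random.default_rng(0)
f = rng.random((16, 16))

# Full 2D convolution with zero padding: K^2 = 25 multiplies per pixel.
fp = np.pad(f, 2)
g_full = np.zeros_like(f)
for i in range(16):
    for j in range(16):
        g_full[i, j] = np.sum(fp[i:i + 5, j:j + 5] * K[::-1, ::-1])

# The same result with two 1D passes: 2K = 10 multiplies per pixel.
rows = np.apply_along_axis(lambda r: np.convolve(r, v, mode="same"), 1, f)
g_sep = np.apply_along_axis(lambda c: np.convolve(c, v, mode="same"), 0, rows)
```

The separable route wins increasingly as the kernel grows: a 15x15 Gaussian costs 225 multiplies per pixel done directly, but only 30 done as two 1D passes.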
75-84 Let's play a game... For each of the filters shown: is it separable? If yes, what's the separable version? What does the filter do?
85-88 How can we tell if a given kernel K is indeed separable? By inspection (this is what we were doing), by looking at its analytic form, or by looking at the singular value decomposition (SVD): if only one singular value is non-zero, then the kernel is separable, since K = U Σ V^T = sum_i σ_i u_i v_i^T, with Σ = diag(σ_i). Then √σ_1 u_1 and √σ_1 v_1^T are the vertical and horizontal kernels.
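The SVD test in code, applied to the Sobel x-kernel, which is separable by construction (the recovered 1D factors may differ from the originals by sign and scale, but their outer product matches K exactly):

```python
import numpy as np

v = np.array([1., 2., 1.])   # vertical smoothing
h = np.array([1., 0., -1.])  # horizontal differencing
K = np.outer(v, h)           # the 3x3 Sobel x-derivative kernel

U, S, Vt = np.linalg.svd(K)
# Only one non-zero singular value => rank 1 => separable.
v1 = np.sqrt(S[0]) * U[:, 0]   # recovered vertical kernel
h1 = np.sqrt(S[0]) * Vt[0, :]  # recovered horizontal kernel
K_rebuilt = np.outer(v1, h1)
```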
89 Application of filtering: Template matching. Filters as templates: filters look like the effects they are intended to find. Use a normalized cross-correlation score to find a given pattern (template) in the image; normalization is needed to control for relative brightness. [Source: K. Grauman]
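A minimal normalized cross-correlation search (loops for clarity; subtracting the means and dividing by the norms is what makes the score invariant to brightness offset and scaling). The image and template here are synthetic:

```python
import numpy as np

def ncc(patch, template):
    """Zero-mean normalized cross-correlation between two same-sized windows."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p**2).sum() * (t**2).sum())
    return (p * t).sum() / denom

def match_template(image, template):
    """Score every placement of the template; the best match scores close to 1."""
    th, tw = template.shape
    H, W = image.shape
    scores = np.zeros((H - th + 1, W - tw + 1))
    for i in range(scores.shape[0]):
        for j in range(scores.shape[1]):
            scores[i, j] = ncc(image[i:i + th, j:j + tw], template)
    return scores

rng = np.random.default_rng(1)
image = rng.random((20, 20))
template = image[5:9, 7:11].copy()           # crop the pattern we will search for
scores = match_template(image, template)
best = np.unravel_index(scores.argmax(), scores.shape)   # location of the match
```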
90 Template matching [Source: K. Grauman]
91 More complex Scenes
92 Let's talk about Edge Detection
93 Filtering: Edge detection. Map the image from a 2D array of pixels to a set of curves, line segments, or contours, which are more compact than pixels. Look for strong gradients, then post-process. Figure: [Shotton et al., PAMI 2007]. [Source: K. Grauman]
94 Origin of edges. Edges are caused by a variety of factors: surface normal discontinuity, depth discontinuity, surface color discontinuity, illumination discontinuity. [Source: N. Snavely]
95 What causes an edge? [Source: K. Grauman]
96 Looking more locally... [Source: K. Grauman]
97 Images as functions. Edges look like steep cliffs. [Source: N. Snavely]
98 Characterizing Edges. An edge is a place of rapid change in the image intensity function. [Source: S. Lazebnik]
99-102 How to Implement Derivatives with Convolution. How can we differentiate a digital image F[x, y]? Option 1: reconstruct a continuous image f, then compute the partial derivative: ∂f(x, y)/∂x = lim_{ε→0} (f(x + ε, y) − f(x, y)) / ε. Option 2: take a discrete derivative (finite difference): ∂f(x, y)/∂x ≈ (f[x + 1, y] − f[x, y]) / 1. What would be the filter to implement this using convolution? [Source: S. Seitz]
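In 1D the answer can be checked with `np.convolve`: correlation with the filter [-1, 1] computes f[x+1] − f[x], and because convolution flips its kernel before sliding, the equivalent convolution filter is the flipped [1, -1]:

```python
import numpy as np

f = np.array([0., 0., 1., 3., 6., 6.])   # a 1D intensity profile

# Forward difference f[x+1] - f[x], i.e. correlation with the filter [-1, 1]:
fwd = f[1:] - f[:-1]

# np.convolve flips its kernel, so the same result needs the filter [1, -1];
# slicing drops the boundary samples of the full convolution.
conv = np.convolve(f, [1., -1.])[1:len(f)]
```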
103 Partial derivatives of an image. Figure: using correlation filters. [Source: K. Grauman]
104 Finite Difference Filters [Source: K. Grauman]
105-108 Image Gradient. The gradient of an image is ∇f = [∂f/∂x, ∂f/∂y]. The gradient points in the direction of most rapid change in intensity. The gradient direction (the orientation of the edge normal) is given by θ = tan⁻¹((∂f/∂y) / (∂f/∂x)). The edge strength is given by the gradient magnitude ‖∇f‖ = √((∂f/∂x)² + (∂f/∂y)²). [Source: S. Seitz]
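Computing the gradient with central differences on a synthetic ramp that brightens to the right: the gradient should have magnitude 1 and direction θ = 0 everywhere in the interior. (`np.arctan2` is preferred over a plain arctan of the ratio because it handles ∂f/∂x = 0 and resolves the quadrant.)

```python
import numpy as np

x = np.arange(8.)
f = np.tile(x, (8, 1))             # intensity ramp: f(x, y) = x

# Central differences for the partial derivatives (interior pixels only).
fx = (f[:, 2:] - f[:, :-2]) / 2.0  # df/dx
fy = (f[2:, :] - f[:-2, :]) / 2.0  # df/dy

fx_i = fx[1:-1, :]                 # crop both to a common 6x6 interior
fy_i = fy[:, 1:-1]

magnitude = np.hypot(fx_i, fy_i)   # edge strength
theta = np.arctan2(fy_i, fx_i)     # gradient direction
```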
109 Image Gradient [Source: S. Lazebnik]
110 Effects of noise. Consider a single row or column of the image; plotting intensity as a function of position gives a signal. Where is the edge? [Source: S. Seitz]
111 Effects of noise. Smooth first, then look for peaks in ∂/∂x (h ∗ f). [Source: S. Seitz]
112 Derivative theorem of convolution. Differentiation property of convolution: ∂/∂x (h ∗ f) = (∂h/∂x) ∗ f. This saves one operation. [Source: S. Seitz]
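For discrete signals the theorem holds exactly, because convolution is associative and commutative: differentiating the smoothed signal equals convolving once with the pre-differentiated kernel. A 1D check using full-length convolutions so both sides line up sample for sample:

```python
import numpy as np

rng = np.random.default_rng(2)
f = rng.random(50)                       # a noisy 1D signal

x = np.arange(-6, 7, dtype=float)
h = np.exp(-x**2 / (2 * 2.0**2))
h /= h.sum()                             # Gaussian smoothing kernel, sigma = 2

d = np.array([1., -1.])                  # discrete derivative filter

# d * (h * f): smooth first, then differentiate (two convolutions) ...
lhs = np.convolve(d, np.convolve(h, f))
# ... equals (d * h) * f: differentiate the small kernel once, then one
# convolution with the signal.
rhs = np.convolve(np.convolve(d, h), f)
```

In practice the derivative-of-Gaussian kernel (d ∗ h) is precomputed once, so every image is filtered with a single pass instead of two.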
113 2D Edge Detection Filters. Gaussian: h_σ(u, v) = (1 / (2πσ²)) exp(−(u² + v²) / (2σ²)). Derivative of Gaussian (x): ∂/∂u h_σ(u, v). [Source: N. Snavely]
114 Derivative of Gaussians [Source: K. Grauman]
115 Laplacian of Gaussians. Edges are found by detecting zero-crossings of the bottom graph. [Source: S. Seitz]
116 2D Edge Filtering with the Laplacian operator: ∇²f = ∂²f/∂x² + ∂²f/∂y². [Source: S. Seitz]
117 Effect of σ on derivatives. The detected structures differ depending on the Gaussian's scale parameter: larger values detect larger-scale edges; smaller values detect finer features. [Source: K. Grauman]
118-120 Derivatives. Derivative filters use opposite signs to get a response in regions of high contrast. They sum to 0, so there is no response in constant regions. They take a high absolute value at points of high contrast. [Source: K. Grauman]
121-125 Band-pass filters. The Sobel and corner filters are band-pass and oriented filters. More sophisticated filters can be obtained by convolving with a Gaussian filter G(x, y, σ) = (1 / (2πσ²)) exp(−(x² + y²) / (2σ²)) and taking the first or second derivatives. These filters are band-pass filters: they attenuate both low and high frequencies. The second derivative of a two-dimensional image is the Laplacian operator ∇²f = ∂²f/∂x² + ∂²f/∂y². Blurring an image with a Gaussian and then taking its Laplacian is equivalent to convolving directly with the Laplacian of Gaussian (LoG) filter.
126 Band-pass filters. A directional or oriented filter can be obtained by smoothing with a Gaussian (or some other filter) and then taking a directional derivative: ∇_û(G ∗ f) = (∇_û G) ∗ f, where ∇_û = û · ∇ and û = (cos θ, sin θ). The Sobel operator is a simple approximation of this.
127 Practical Example [Source: N. Snavely]
128 Finding Edges. Figure: gradient magnitude. [Source: N. Snavely]
129 Finding Edges. Where is the edge? Figure: gradient magnitude. [Source: N. Snavely]
130 Non-Maxima Suppression. Figure: gradient magnitude. Check whether each pixel is a local maximum along the gradient direction; this requires interpolation. [Source: N. Snavely]
131 Finding Edges. Figure: thresholding. [Source: N. Snavely]
132 Finding Edges. Figure: thinning via non-maxima suppression. [Source: N. Snavely]
133 Canny Edge Detector. Matlab: edge(image, 'canny'). 1. Filter the image with a derivative of Gaussian. 2. Find the magnitude and orientation of the gradient. 3. Non-maximum suppression. 4. Linking and thresholding (hysteresis): define two thresholds, low and high; use the high threshold to start edge curves and the low threshold to continue them. [Source: D. Lowe and L. Fei-Fei]
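A toy sketch of step 4 (hysteresis linking) on a small gradient-magnitude map: weak pixels (above the low threshold) survive only if they connect to a strong pixel (above the high threshold). This is an illustrative simplification using 4-connectivity; a real Canny implementation links along the edge direction:

```python
import numpy as np

def hysteresis(strength, low, high):
    """Keep weak edge pixels (> low) only if 4-connected to a strong pixel (> high)."""
    strong = strength > high
    weak = strength > low            # strong pixels are a subset of weak ones
    edges = strong.copy()
    while True:
        grown = edges.copy()
        grown[1:, :]  |= edges[:-1, :]   # grow down
        grown[:-1, :] |= edges[1:, :]    # grow up
        grown[:, 1:]  |= edges[:, :-1]   # grow right
        grown[:, :-1] |= edges[:, 1:]    # grow left
        grown &= weak                    # but never beyond the weak mask
        if np.array_equal(grown, edges):
            return edges
        edges = grown

g = np.array([[0., 0., 0., 0., 0.],
              [0., 9., 4., 4., 0.],
              [0., 0., 0., 4., 0.],
              [0., 4., 0., 0., 0.],
              [0., 0., 0., 0., 0.]])
edges = hysteresis(g, low=2.0, high=5.0)  # isolated weak pixel at (3, 1) is dropped
```

The chain of weak pixels attached to the strong pixel at (1, 1) survives, while the equally weak but disconnected pixel at (3, 1) is suppressed; this is what makes Canny robust to a single global threshold being either too strict or too noisy.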
134 Canny edge detector Still one of the most widely used edge detectors in computer vision J. Canny, A Computational Approach To Edge Detection, IEEE Trans. Pattern Analysis and Machine Intelligence, 8: , Depends on several parameters: σ of the blur and the thresholds Raquel Urtasun (TTI-C) Computer Vision Jan 10, / 82
135 Canny edge detector A large σ detects large-scale edges; a small σ detects fine edges. Figure: the original image and Canny results for two values of σ. [Source: S. Seitz] Raquel Urtasun (TTI-C) Computer Vision Jan 10, / 82
136 Scale Space (Witkin 83) Figure: a signal filtered with Gaussians of increasing width, and the peaks of its first derivative at each scale. Properties of scale space (with Gaussian smoothing): edge position may shift with increasing scale (σ); two edges may merge with increasing scale; an edge may not split into two with increasing scale. [Source: N. Snavely] Raquel Urtasun (TTI-C) Computer Vision Jan 10, / 82
137 Next class... more on filtering and image features Raquel Urtasun (TTI-C) Computer Vision Jan 10, / 82