Forensic Image Processing.
- Cornelius Phillips
- 10 years ago
1 Forensic Image Processing
2 Forensic Image Processing Lesson 1 An introduction to digital images
3 Purpose of the course What is a digital image? What use can images have for investigative applications? What are the most common issues? How can they be solved? What kind of information can I obtain from an image? What precautions must I take? 2
4 Image enhancement and restoration Get more information from the analysis of an image 3
5 Information classification Pills Footprints, tires... Bullets... Classification of visual information according to some criteria 4
6 Biometrics Comparison and recognition of physiological or behavioral features of a certain subject. Fingerprints Hand palms Faces Iris Ear shape... 5
7 Photogrammetry Evaluation of some sizes in the scene by proportion with known lengths 6
8 3d reconstruction Dynamic analysis (event) Static analysis (objects, places, faces...) 7
9 Contents of the course 1) An introduction to digital images 2) Main issues 3) Removing noise 4) Improving details 5) Advanced techniques 6) Video analysis and enhancement 7) 3d reconstruction 8) Biometrics 8
10 Outline What is an image Difference between analog and digital Digital images Color representation Image formats and compression 9
11 What is an image? Bidimensional function of light intensity perceived by the human eye x f(x,y) y 10
12 Analog and digital Analog: continuous variation. Digital: variations by steps. 11
13 What is a digital image? Both coordinates and intensity values are discrete quantities 12
14 Discrete coordinates The minimum indivisible element is called a pixel (picture element) 13
15 Discrete values Every pixel may assume only a finite number of different values 14
16 Implications It is not possible to increase the detail of a digital image by enlarging it. 15
17 Image representation Matrix of numbers
18 Image characteristics Resolution: Columns x Rows of the matrix (ex. 1024x768) Bit depth: Number of different values that every pixel may assume (ex. 8 bits = 256 values) 0 = black (minimum intensity) 255 = white (maximum intensity) 17
19 What about colors? What is color? Color is the way the human visual system measures the visible part of the electromagnetic spectrum 18
20 How do we perceive color? The human eye has two kinds of photoreceptors: Rods: high sensitivity to light, low sensitivity to colors Cones: low sensitivity to light, high sensitivity to colors; three different types, one for each primary color: red, green and blue. 19
21 Color in digital images 3 different matrices corresponding to the three RGB components 20
22 Color spaces A color space is a representation used to specify colors, i.e. to describe quantitatively the human perception of the visible part of the electromagnetic spectrum. RGB is a color space. HSL: hue, saturation, lightness CMYK: used by printers YUV, YIQ, YCbCr: TV signals; they separate brightness (Y) from color CIE XYZ, CIELuv, CIELab: for particular purposes... 21
23 Digital image formats Digital images can be stored in many different formats The format defines what kind of information is used to represent the image The matrix representing the image would need a large amount of memory: for example: 1024 (width) x 768 (height) x 8 (bits per pixel) x 3 (color components) = 2.3 MB! Almost always compression techniques are used, which may be: Lossless Lossy 22
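The memory figure on the slide can be checked with a few lines of arithmetic (a sketch, not from the slides):

```python
# Uncompressed size of a 1024x768 RGB image, 8 bits per component
width, height = 1024, 768
bits_per_pixel = 8
components = 3

total_bits = width * height * bits_per_pixel * components
total_bytes = total_bits // 8            # 2,359,296 bytes
megabytes = total_bytes / 1_000_000      # ~2.36 MB, matching the ~2.3 MB on the slide
```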
24 Types of compression Lossless compression Reduces used memory Allows the original data to be reconstructed exactly Not very high compression factor (about 2 or 3) For example: TIF, PNG, BMP Lossy compression (e.g. JPEG) Very high compression factor Some information of the original image is lost, thus it can't be reconstructed exactly Loss of detail Artifacts 23
25 Lossy compression: example BMP: 188 KB JPEG: 7 KB 24
26 Conclusions Digital images have many different uses in the forensic and investigative fields. Digital images have very different characteristics from analog ones, such as old film photographs. It's important to understand the meaning of light intensity and color. For practical reasons, images are very often compressed, with a consequent loss of quality. 25
27 References Digital image processing: Jain, Fundamentals of Digital Image Processing, Prentice Hall Gonzalez & Woods, Digital Image Processing Forensic image processing: Peter Kovesi home page 26
28 Forensic Image Processing Lesson 2 Main issues of forensic image processing
29 Outline Main problems related to image quality Techniques to solve them and their implications Images coming mainly from CCTV systems Many of the problems are also present in more general cases 2
30 Capture devices VCR multiplexer cameras monitor 3
31 Analog or digital? Very often data is still stored on VHS It must be digitized before processing VCR frame grabber computer 4
32 Digital characteristics Not subject to wear and tear But... How do I connect it? What is the format (very often proprietary)? How do I replace it (service interruption)? 5
33 Steps of the process Different kinds of disturbances are introduced at different stages: information acquisition (camera) analog storage (VCR) digital conversion and storage (DVR or frame grabbers) 6
34 Acquisition Most of the disturbances are usually introduced in this step and are due to: features of the capturing camera features of the captured scene features of the captured subjects 7
35 Blur Wrong focus Limited depth of field 8
36 Motion blur Moving subject Too long an exposure time of the camera shutter 9
37 Noise Too long an exposure time of the camera shutter Quality of the components (sensor) 10
38 Geometric distortions Device optics (in particular wide angle lenses) 11
39 Contrast, colors, brightness Scene features Devices settings Components quality 12
40 Used standards Resolution Frame rate Interlacing 13
41 Analog storage Very noticeable problems Mainly caused by wear of devices (VCRs) and of storage media (VHS tapes) 14
42 Scratches Wear of VHS 15
43 Line shifts Misalignment caused by wrong timing of the VCR heads. 16
44 Electromagnetic interferences Quality of components Shielding 17
45 Digital conversion and storage Practical limits, such as the need for data compression, usually introduce disturbances even at this step. 18
46 Lossy compression Artifacts Loss of detail 19
47 Level compression An infinite number of intensities must be represented by a limited number of values, causing loss of information. 20
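A tiny sketch (not from the slides) of why level compression loses information: two nearby continuous intensities can collapse onto the same digital value once quantized to a finite number of levels.

```python
def quantize(value, levels=256, vmax=1.0):
    """Quantize a continuous intensity in [0, vmax] to one of `levels` steps."""
    step = vmax / (levels - 1)
    return round(value / step) * step

# two distinct continuous intensities become the same 8-bit value
assert quantize(0.5000, levels=256) == quantize(0.5001, levels=256)
```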
48 Image enhancement/restoration Different disturbances / features of the image How to improve quality? image enhancement image restoration 21
49 Image enhancement Process used to improve the visual appeal of an image, enhancing or reducing some features brightness contrast colors crop denoising sharpening... 22
50 Image restoration Describe a known disturbance that corrupts the image by a mathematical model and try to invert the process. deblurring geometric functions filtering... 23
51 Lawful implications Digital data is very susceptible to manipulation: it is easy, it is cheap, and everyone can do it. Which is the original? Which is the processed one? What is the objective difference between enhancement and manipulation of an image? Which kinds of processing are acceptable and which are not? 24
52 The problem Image has been manipulated HOW?? 25
53 Requirements 1. Preserve original image 2. Document all the details of all the steps of the processing 3. Output image must be exactly replicable applying the documented process to the original image OK! 26
54 Conclusions Many different problems on the images different results different causes many different ways to face them Sometimes even from very low quality images we can obtain useful information. We can use many techniques to enhance the image, but they must be handled with care, especially if we want to use the result as evidence. 27
55 References Digital images and lawful implications: Recommendations and guidelines for the use of digital imaging processing in the criminal justice system / Best practices for documenting image enhancement. Scientific Working Group on Imaging Technology. Digital images as evidence. House of Lords, Science and Technology fifth report. pa/ld199798/ldselect/ldsctech/064v/st0501.htm 28
56 Forensic Image Processing Lesson 3 Noise smoothing
57 Outline What is noise in an image? How is it present? How to reduce it? 2
58 What is noise? Random noise: random variation in pixel values, present also in traditional analog photography Other types: long exposure times (low light conditions) periodic noise compression artifacts
59 Random noise Noise in different types of digital cameras 4
60 Noise smoothing techniques Spatial image enhancement Work done on image pixels Frequency image enhancement Work done on the Fourier transform of the image The same theory can also be applied to other processing techniques that we will see in the next sections. Multi-image enhancement Putting together information coming from different images See lesson 6 5
61 Spatial image enhancement The value of the pixel at position (x,y) of the output image is the result of an operation on a certain window of pixels around position (x,y) in the original image. neighborhood around f(x,y) T operator g(x,y) Original image f Processed image g For practical reasons the neighborhood is often a square around the point to evaluate, but it could be any shape. 6
62 Averaging filter Every point of the output image is obtained by averaging the pixels around (x,y) in the original image. neighborhood around f(x,y) T operator g(x,y) This kind of process is called image smoothing. Reduces noise and artifacts...but also detail! 7
63 Example It is difficult to obtain useful results from averaging in practice, because it usually causes a big loss of detail. It's important, though, being at the base of other techniques. Increasing the neighborhood size, its effect becomes stronger. 8
64 Implementation (I) The averaging filter is an example of a spatial filter. Spatial filtering can be implemented in software by a mathematical process called convolution. An averaging filter with a 3x3 neighborhood is equivalent to the convolution of the image with the mask: 1/9 1/9 1/9 1/9 1/9 1/9 1/9 1/9 1/9 The mask is moved across the whole image until all pixels are filtered. 9
65 Implementation (II) Convolution consists in overlapping the mask over a 3x3 part of the image, multiplying the pixels by the mask values and summing the results to get the value that will replace the central pixel. 10
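The overlap-multiply-sum process above can be sketched in a few lines of Python with NumPy (a minimal illustration, not the course's code; border pixels are handled here by replicating the edge, which is one common choice):

```python
import numpy as np

def average_filter(img, k=3):
    """k x k averaging filter: each output pixel is the mean of its neighborhood."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")  # replicate borders
    out = np.zeros(img.shape, dtype=float)
    # shift-and-accumulate: equivalent to convolving with a mask of 1/(k*k)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

# a uniform image is unchanged; an isolated bright pixel is spread out
flat = np.full((5, 5), 10.0)
assert np.allclose(average_filter(flat), flat)
```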
66 Gaussian smoothing If we want to emphasize the contribution of the pixels that are more central in the window, we can use a bell-shaped distribution, called a Gaussian Gaussian mask 5x5 11
67 Results original averaging (5x5) Gaussian (5x5) 12
68 Linear and non linear filtering The filters seen so far, applicable by convolution, are called linear filters, since the output image can be obtained by a simple linear combination of input image values. The process is always the same for every pixel value. There are many other types of filters, where the method used to calculate the output image depends on the values of the pixels in the neighborhood. This kind of processing is called non linear filtering. It is not possible to implement non linear filters by a simple convolution. 13
69 Median filtering Median filtering is one of the simplest non linear filters. The median of a sequence of numbers is the value such that half of the numbers are greater and half are lower. For example, the median of 2, 6, 9 is 6. The median filter replaces the central pixel of the considered neighborhood with the median of its values. Median filtering is very good at removing impulsive noise (often called salt and pepper ) in images, while preserving the details. 14
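A minimal sketch of the median filter (not from the slides; edge-replicating borders again assumed), showing how it removes an isolated "salt" pixel that averaging would only spread around:

```python
import numpy as np

def median_filter(img, k=3):
    """Replace each pixel with the median of its k x k neighborhood."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = np.median(padded[y:y + k, x:x + k])
    return out

# a single impulsive "salt" pixel in a flat image is removed entirely
img = np.full((5, 5), 10)
img[2, 2] = 255
assert median_filter(img)[2, 2] == 10
```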
70 Used windows Sometimes, to better preserve the details, we can use windows different from the square one, for example a + or x shaped window (in a 3x3 window, the + shape uses the central row and column, the x shape the diagonals and the center). 15
71 Results original corrupted median + median 3x3 square 16
72 Other types of noise Sometimes the noise in images is not random. We can distinguish some features helpful to understand the causes of this noise, in order to apply more successful filtering. Some examples are periodic noise, or noise caused by compression, which is often present together with artifacts. Different techniques, even quite complex ones, must be used for each case, sometimes also using frequency filtering techniques. 17
73 Compression noise original (jpeg) filtered 18
74 Fourier transform Pixel matrices are not the only way to represent images. It is possible to represent an image by its Fourier transform: instead of specifying the values of single pixels, it considers the image as a sum of waveforms at different frequencies. The Fourier transform represents an image by a matrix of amplitude and phase values for each frequency of the image. It is difficult to interpret directly, but low frequencies can be thought of as the uniform zones of the image and high frequencies as the details. 19
75 Visualization Generally the logarithm of absolute values of the amplitude can be easily visualized as an image. 20
76 Frequency filtering An algorithm called the Fast Fourier Transform (FFT) is used to efficiently calculate the Fourier transform of an image. The Fourier transform is invertible, so we can use the inverse (IFFT) to obtain the original image from its frequency representation. It is possible to filter the Fourier transform of the image instead of its representation in the spatial domain: original image → FFT → original FFT → filter → filtered FFT → IFFT → filtered image 21
77 Low pass filtering It is possible to use frequency filtering to amplify or reduce some components of the image. Low pass filtering consists in letting low frequencies pass and removing high frequencies. We obtain an image similar to the original, but less sharp. We decrease the noise, but also the detail! 22
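The FFT → filter → IFFT pipeline for an ideal low-pass filter can be sketched like this (an illustration under the stated assumptions, not the course's code; `cutoff`, a radius in frequency bins, is a hypothetical parameter name):

```python
import numpy as np

def low_pass(img, cutoff):
    """Ideal low-pass filter: zero all frequencies farther than `cutoff`
    from the (shifted) center of the spectrum, then invert the FFT."""
    F = np.fft.fftshift(np.fft.fft2(img))        # center the zero frequency
    h, w = img.shape
    y, x = np.ogrid[:h, :w]
    dist = np.hypot(y - h // 2, x - w // 2)      # distance from spectrum center
    F[dist > cutoff] = 0                         # drop the high frequencies
    return np.real(np.fft.ifft2(np.fft.ifftshift(F)))

# a constant image contains only the zero frequency, so it passes unchanged
assert np.allclose(low_pass(np.full((8, 8), 5.0), 1), 5.0)
```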
78 Example 23
79 Periodic noise Sometimes images are corrupted by periodic noise. Natural images have a well recognizable general spectrum shape, like a cross with higher values towards the center. Being periodic, the noise often has a well defined frequency, thus it is clearly recognizable in some particular areas of the spectrum. Filtering those areas out of the spectrum, it is possible to reconstruct the image without the disturbance. We can work in the same way to remove other periodic signals, such as a banknote watermark, which is not noise proper. 24
80 Example Original image Original spectrum (the two white dots represent the disturb) Filtered spectrum (dots have been removed) Filtered image (inverse transform of filtered spectrum) 25
81 Convolution theorem A convolution in the spatial domain corresponds to a multiplication in the frequency domain. Given two images f(x) and g(x), their respective Fourier transforms F(n) and G(n), and calling FT the Fourier transform: FT( f(x)*g(x) ) = F(n)G(n) convolution multiplication or, equivalently, FT( f(x)g(x) ) = F(n)*G(n) If we want to filter an image in the frequency domain, we can obtain the same result in the spatial domain with a convolution. 26
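The theorem can be checked numerically in 1-D (a sketch, not from the slides; note the discrete Fourier transform implies *circular* convolution):

```python
import numpy as np

f = np.array([1.0, 2.0, 3.0, 4.0])
g = np.array([0.0, 1.0, 0.5, 0.0])

# left side: convolution computed via the frequency domain, FT(f*g) = F·G
conv = np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)))

# right side: direct circular convolution in the spatial domain
direct = np.array([sum(f[k] * g[(n - k) % 4] for k in range(4))
                   for n in range(4)])

assert np.allclose(conv, direct)
```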
82 Conclusions Noise is a random signal over the image Different techniques to reduce it: spatial filtering (pixels) frequency filtering (Fourier) These are basic techniques, but there are more advanced ones, for example: adaptive filters frame integration 27
83 Forensic Image Processing Lesson 4 Enhancing details
84 Outline Sharpening: local contrast enhancement to improve some detail in the image. Edge detection: operators for the extraction of borders and details in the image. Interpolation: increasing resolution of an image calculating the value of new pixels by mathematical techniques. 2
85 Sharpening Sharpening an image corresponds to locally increasing the contrast in some parts of the image. It is the inverse process of smoothing, which is used for noise reduction. The basic idea is to identify and amplify the value of pixels which differ most from their neighbors, thus emphasizing edges and fine details. It is like summing to the original image another image representing the edges, the details, or the high frequencies. Problem: how to distinguish details from noise? 3
86 Unsharp mask Increase the detail by summing to the original image another image representing the high frequencies. The high frequencies can be obtained by subtracting a low pass (see part 3) filtered version of the image from the original one. 4
87 Laplacian One of the most common implementations of unsharp masking is to sum to the original image the opposite of its Laplacian (a well known mathematical operator). It may be implemented with different masks, depending on the desired effect and neighborhood size, for example with:
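A minimal sketch of Laplacian sharpening (not the course's code; it uses the common 3x3 mask [[0,1,0],[1,-4,1],[0,1,0]] and edge-replicating borders as assumptions):

```python
import numpy as np

lap_mask = np.array([[0, 1, 0],
                     [1, -4, 1],
                     [0, 1, 0]], dtype=float)

def sharpen(img):
    """Unsharp masking via the Laplacian: out = img - laplacian(img)."""
    pad = np.pad(img.astype(float), 1, mode="edge")
    h, w = img.shape
    lap = np.zeros((h, w))
    for dy in range(3):                     # correlate with the Laplacian mask
        for dx in range(3):
            lap += lap_mask[dy, dx] * pad[dy:dy + h, dx:dx + w]
    return img - lap                        # subtracting the Laplacian boosts edges

# uniform areas are untouched (the Laplacian is zero there)
flat = np.full((4, 4), 7.0)
assert np.allclose(sharpen(flat), flat)
```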
88 Example original sharpened 6
89 High pass filter Frequency filtering (see part 3) can also be applied to obtain an image representing the borders and details of the original. High pass filtering an image consists in letting high frequencies pass and stopping low frequencies. Summing this image with the original, we obtain a more detailed one. We increase the detail, but also the noise! 7
90 Example 8
91 Other filters In the literature many different filters are proposed, even rather complicated ones, to enhance image detail or to extract edges, which generally have better performance than the basic ones presented here. To get good results, filters must be adaptive, modifying their characteristics based on local image values. In this way it is possible to increase the detail without amplifying the noise too much. 9
92 Example original adaptive sharpening unsharp masking 10
93 Edge detection Sometimes it can be useful to extract image edges, not just to improve the detail, but to analyze images by enhancing some features. 11
94 Example 12
95 Sid Wallace 13
96 How does it work To detect edges we need a filter that emphasizes the points where the differences between pixels are most noticeable. A simple filter to detect vertical edges is: When convolved with an image containing a strong edge, this filter gives an output image that emphasizes vertical borders and ignores uniform areas
97 Original image Example Output image filter 15
98 Sobel One of the most common and simple edge detectors is the Sobel operator. It has two masks, one for the vertical edges and one for the horizontal edges The two results are squared and summed, then the result is thresholded, determining the sensitivity of the overall detector. 16
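The two-mask, square, sum, threshold scheme can be sketched as follows (an illustration, not the course's code; it applies the masks by cross-correlation, which for edge magnitude only affects the sign of the intermediate results):

```python
import numpy as np

sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)   # responds to vertical edges
sobel_y = sobel_x.T                             # responds to horizontal edges

def correlate3(img, mask):
    """Apply a 3x3 mask by cross-correlation, replicating the borders."""
    pad = np.pad(img.astype(float), 1, mode="edge")
    h, w = img.shape
    out = np.zeros((h, w))
    for dy in range(3):
        for dx in range(3):
            out += mask[dy, dx] * pad[dy:dy + h, dx:dx + w]
    return out

def sobel_magnitude(img):
    gx = correlate3(img, sobel_x)
    gy = correlate3(img, sobel_y)
    return np.sqrt(gx**2 + gy**2)   # threshold this to pick the edges

# a vertical step edge gives a strong response along the boundary only
img = np.zeros((5, 6)); img[:, 3:] = 100.0
mag = sobel_magnitude(img)
assert mag[2, 2] > 0 and mag[2, 0] == 0
```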
99 Example Vertical edges → square; Horizontal edges → square; sum; threshold 17
100 Other edge detectors Laplacian (seen in unsharp masking) Prewitt Canny... original laplacian 18
101 Low resolution (I) Sometimes we would like to zoom in on some detail of the image 19
102 Low resolution (II) In digital images it is not possible to increase the detail by zooming the image. 20
103 Interpolation Mathematical process used to estimate unknown pixel values from the values of known neighboring pixels. 21
104 Interpolation algorithms Nearest neighbor: copies the value of the closest pixel. Bilinear: weights neighboring pixels so that closer pixels contribute more, with a linear behavior. Bicubic: weights neighboring pixels so that closer pixels contribute more, with a cubic behavior. There are many more complex algorithms, but in real world applications these three are almost always used. In particular, bicubic is the one that gives the best results. 22
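The one-dimensional case of linear (bilinear in 2-D) interpolation can be sketched in a few lines (an illustration, not from the slides):

```python
def interp_linear(samples, t):
    """Linearly interpolate a 1-D signal at fractional position t:
    the two nearest samples contribute in proportion to their closeness."""
    i = int(t)
    if i >= len(samples) - 1:            # clamp at the right edge
        return float(samples[-1])
    frac = t - i
    return samples[i] * (1 - frac) + samples[i + 1] * frac

assert interp_linear([0, 10], 0.5) == 5.0       # halfway between 0 and 10
assert interp_linear([0, 10, 20], 1.25) == 12.5 # a quarter past sample 1
```

Nearest neighbor would instead return `samples[round(t)]`; bicubic uses four neighbors with cubic weights.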
105 Monodimensional interpolation nearest: bilinear: bicubic: 23
106 Bidimensional interpolation It is more complex, but with the traditional algorithms we can simply interpolate monodimensionally, first the rows and then the columns (or vice versa), to obtain the same result. 24
107 Comparison nearest neighbor bilinear bicubic 8x zoom 25
108 Advanced techniques 8x bicubic 8x lowadi 26
109 Conclusions Detail enhancement is complex and strictly correlated with the noise in the images. Different filtering techniques, both in space and frequency, are available. We can use similar techniques for sharpening and edge detection. To increase the resolution we must interpolate images but we must take care that the new details are not true but artificially created by mathematic methods to make the images more appealing. 27
110 Forensic Image Processing Lesson 5 Advanced techniques
111 Outline Intensity transformations Image histogram Homomorphic filtering Deblurring Motion deblurring Color image processing 2
112 Intensity transformations (I) The real world luminance intensity range is much wider than the one that can be captured and visualized in a photograph. Intensity transformations modify values pixel by pixel through a defined curve, with the purpose of increasing or decreasing contrast between different intensity values. 3
113 Intensity transformations (II) neighbourhood around f(x,y) Original image T operator g(x,y) Processed image g A particular spatial transform where the neighborhood corresponds only to the position of the pixel to calculate. 4
114 Increasing brightness 5
115 Decreasing brightness 6
116 Non linear transformations (gamma) 7
117 Gamma correction 8
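A minimal sketch of a gamma (power-law) intensity transformation on 8-bit values (not from the slides; the formula v' = 255·(v/255)^(1/γ) is one common convention):

```python
def gamma_correct(value, gamma):
    """Map an 8-bit intensity through the power-law curve 255*(v/255)^(1/gamma)."""
    return round(255 * (value / 255) ** (1 / gamma))

# gamma > 1 brightens mid-tones while leaving black and white fixed
assert gamma_correct(0, 2.2) == 0
assert gamma_correct(255, 2.2) == 255
assert gamma_correct(64, 2.2) > 64
```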
118 Other transformations I can remove, enhance or attenuate desired intensity ranges. 9
119 Image histogram The luminance distribution of an image is called its histogram. The histogram represents the number of pixels (vertical axis) of each value (horizontal axis). Number of pixels Pixel value 10
120 Histogram equalization Histogram equalization is a process aimed at making image histogram more uniform and thus improving image contrast. 11
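The standard equalization recipe, mapping each gray level through the normalized cumulative histogram, can be sketched as follows (an illustration, not the course's code):

```python
import numpy as np

def equalize(img, levels=256):
    """Histogram equalization: map each level through the normalized CDF."""
    hist = np.bincount(img.ravel(), minlength=levels)  # histogram of the image
    cdf = np.cumsum(hist) / img.size                   # cumulative distribution
    lut = np.round(cdf * (levels - 1)).astype(img.dtype)
    return lut[img]                                    # apply as a lookup table

# a dark, low-contrast image is stretched toward the full intensity range
img = np.array([[10, 10], [20, 20]], dtype=np.uint8)
out = equalize(img)
assert out.max() == 255
```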
121 Illumination and reflectance Images are created by light reflected by objects, which is composed of: the quantity of light incident on the scene (illumination) the quantity of light reflected by the objects in the scene (reflectance) Illumination and reflectance are combined by multiplication: f(x) = i(x) r(x) Reflectance gives information about the color and shape of objects, while illumination variations may cause confusion. 12
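A tiny numeric check (not from the slides; the values are hypothetical) of the key step behind homomorphic filtering: taking the logarithm turns the product f = i·r into the sum log f = log i + log r, so the two terms can then be separated with linear, frequency-domain filters.

```python
import math

i, r = 200.0, 0.35     # hypothetical illumination and reflectance values
f = i * r              # the observed image value, f(x) = i(x) r(x)

# in the log domain the multiplicative model becomes additive
assert math.isclose(math.log(f), math.log(i) + math.log(r))
```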
122 Homomorphic filters Working in the frequency domain with the Fourier transform, it is possible to separate reflectance from illumination to reduce the effect of the latter. This kind of filter is called a homomorphic filter, and is very useful to correct non uniform lighting conditions. Generally: Low frequencies: illumination High frequencies: reflectance 13
123 Example 14
124 Optical blur It is not always possible to get all the details of a scene in focus; some may appear out of focus ( blurred ). 15
125 Point spread function A defocused image can be considered as the result of the convolution of an ideal sharp image with a function called the point spread function (PSF), representing the features of the blur. The PSF represents the matrix of pixels that, in the ideal ( sharp ) case, would correspond to a single pixel of maximum intensity. Optical blur can be approximated by a bidimensional Gaussian PSF. 16
126 Gaussian blur * = convolution 17
127 Deconvolution In practical cases I would like to reconstruct a sharp image from a blurred one. Under some hypotheses I can invert the process (deconvolution) to approximately reconstruct the original image. = deconvolution 18
128 Motion blur (I) If an object moves too fast with respect to the camera, this causes motion blur. It is very frequent in night footage, where low light conditions require a longer exposure time of the camera shutter. 19
129 Motion blur (II) Motion blur can be modeled by the average of several translated copies of the ideal sharp image. Even this observation may be modeled by the PSF. The image can be restored by deconvolution, but better results are obtained with other techniques, such as Wiener filtering. 20
130 Wiener filter Results obtainable by simple deconvolution are very sensitive to the noise in the image. If: u(m,n) is the image without blur (which we don't have in real cases), v(m,n) is the blurred image, u'(m,n) is the image obtained with the restoration process, the purpose of the Wiener filter is to obtain, starting from v(m,n), an image u'(m,n) as similar as possible to u(m,n). This is done by trying to minimize the mean square error (MSE), i.e. the average of the squared differences between every single pixel in u'(m,n) and u(m,n). 21
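A 1-D frequency-domain sketch of Wiener deconvolution (not the course's code; it assumes a circular blur model and approximates the noise-to-signal power ratio with a single constant `k`, a hypothetical parameter):

```python
import numpy as np

def wiener_deconvolve(v, psf, k=0.01):
    """Wiener deconvolution: W = conj(H) / (|H|^2 + k), applied in frequency.
    Unlike naive inverse filtering (1/H), W stays bounded where H is small."""
    n = len(v)
    H = np.fft.fft(psf, n)                    # blur frequency response
    V = np.fft.fft(v)
    W = np.conj(H) / (np.abs(H) ** 2 + k)     # the Wiener filter
    return np.real(np.fft.ifft(W * V))

# blur an impulse with a small PSF, then restore it
u = np.zeros(32); u[10] = 1.0                 # ideal sharp signal
psf = np.array([0.25, 0.5, 0.25])
v = np.real(np.fft.ifft(np.fft.fft(u) * np.fft.fft(psf, 32)))  # blurred
u_hat = wiener_deconvolve(v, psf, k=1e-4)
assert np.argmax(u_hat) == 10                 # the impulse is restored in place
```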
131 Motion deblurring deconvolution = 22
132 PSF Correctly estimating the PSF is not easy. A wrong PSF further corrupts the image. 23
133 Color image processing (I) The vast majority of image processing techniques have been studied for grayscale (black and white) images. Color images are represented as the union of 3 grayscale images (the RGB components), thus the same techniques are usually applied independently to the three components. This is what is usually done, but it is not actually very correct! 24
134 Color image processing (II) From the theoretical point of view it is not acceptable to treat the three components separately, since they are strongly correlated, and so should the processing results be. In practical applications the visual appearance is almost always acceptable, but looking closely at the details it is sometimes possible to notice artifacts. A definitive and formally correct solution has not been found yet, and the problem of color is still completely open, since it is based on human perception, which is difficult to measure objectively. Some kinds of problems are specifically studied for color images. 25
135 Luminance processing An approach that is sometimes used is to process the image in one of the color spaces that separate color and brightness information (see lesson 1). It is possible to process the luminance component as if it were a grayscale image and leave the color components untouched. This can be justified by the fact that the human eye is much more sensitive to variations in intensity than in chromatic value. In this way we also reduce the computational cost, processing only one matrix instead of three! 26
136 Conclusions We saw some image processing techniques that allow obtaining impressive results by means of mathematical models. All the presented problems are still open, and better processing techniques are still being developed. Color image processing issues are often underestimated, but they are of primary importance and difficult to solve. 27
137 Forensic Image Processing Lesson 6 Processing video sequences
138 Outline Video formats Deinterlacing Frame integration Registration Demultiplexing Motion detection 2
139 What is a video? A video is a set of images (frames) that, played in fast sequence, give the viewer the illusion of movement. Our eye does not perceive flicker between the frames thanks to visual persistence: the last projected image remains impressed on the retina for a certain time (a fraction of a second) even after its source has been removed. The other effect that contributes to perceiving an image sequence as continuous motion is the beta effect, a perceptive illusion by which the brain connects different frames through a perception of time and causality. 3
140 Analog and digital As in the case of images, analog and digital video signals are also very different. An analog signal is, for example, that of a television, and it can be stored on an analog device such as a VCR. An example of a digital signal is the one used by PCs or DVD players; for practical reasons it always needs complex compression techniques to be stored. Analog signals may be converted to digital (and vice versa) by proper converters; these are actually used by the vast majority of visualization devices (such as monitors). 4
141 Analog video TV signal transmission uses three principal standards: PAL, SECAM and NTSC. We will mainly refer to the PAL standard, since it is the one used in Europe, but very similar considerations apply to the other formats. 5
142 World standards 6
143 PAL format The PAL signal consists of the transmission of 625 lines at a frequency of 50 fields per second. The actual resolution used for each frame is 576 (height) by 720 (width). The color space used is YUV: Y represents the luminance, U and V the color information. This choice was made in the transition of TV from black and white to color: in this way it is possible to use the same signal: black and white TVs consider only the Y component, ignoring U and V. 7
144 Interlacing In the PAL signal images are played at a 50 Hz frequency. Actually only half that number of full frames, 25, are transmitted. In each frame, even and odd lines belong to two different fields to be displayed. Thanks to visual persistence, we perceive the images as if they were actually projected at 50 Hz. With this technique: the transmission bandwidth is halved...but so is the vertical resolution! 8
145 Deinterlacing In the TV, the missing lines of each field are interpolated. The format where all lines are drawn on the screen is called progressive. The process of converting from interlaced to progressive is called deinterlacing. Odd field Even field 9
146 Interlaced image 10
147 Odd field image Deinterlacing Even field image 11
148 Deinterlacing techniques Basic techniques are similar to those used for image enlargement, but the interpolation is applied only in the vertical direction. A bad deinterlacing can lead to artifacts, particularly on small details and diagonal edges, which can become jagged. More advanced techniques allow obtaining better results, for example using adaptive algorithms which evaluate the shape of edges. 12
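Vertical-only linear interpolation of one field can be sketched like this (an illustration, not the course's code; it assumes the field holds the even lines of the frame and averages the lines above and below each missing one):

```python
import numpy as np

def deinterlace_linear(field, full_height):
    """Rebuild a progressive frame from one field: keep the field's lines
    and fill each missing line with the average of its vertical neighbors."""
    out = np.zeros((full_height, field.shape[1]))
    out[0::2] = field                          # the field's own (even) lines
    for y in range(1, full_height, 2):         # interpolate the missing lines
        below = out[y + 1] if y + 1 < full_height else out[y - 1]
        out[y] = (out[y - 1] + below) / 2
    return out

field = np.array([[0.0, 0.0], [100.0, 100.0]])  # lines 0 and 2 of a 4-line frame
frame = deinterlace_linear(field, 4)
assert frame[1, 0] == 50.0                       # midway between 0 and 100
```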
149 Linear deinterlacing 13
150 Adaptive deinterlacing Linear deinterlacing An example of adaptive deinterlacing 14
151 Multi frame deinterlacing It is possible to perform better deinterlacing by putting together information coming from different frames. If the scene is static, I can copy the missing lines from the previous or the next frame. If there is movement, we can estimate and compensate it to better calculate the missing lines. These are advanced techniques, very often used in modern devices. 15
152 Digital formats Let's suppose we store a PAL video digitally: the storage needed for every second would be: 576 x 720 (resolution) x 24 bits (8 bits per channel) x 50 Hz (frames per second) = about 62 Megabytes per second!!! This means that a CD could store little more than 10 seconds of video. To digitally store video sequences, very advanced compression techniques must be used. 16
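The slide's arithmetic can be reproduced directly (a sketch, not from the slides; the 700 MB CD capacity is an assumption):

```python
# uncompressed PAL-resolution storage, following the slide's figures
bytes_per_frame = 576 * 720 * 3            # 24 bits = 3 bytes per pixel
bytes_per_second = bytes_per_frame * 50    # at 50 frames per second
mb_per_second = bytes_per_second / 1_000_000   # ~62 MB every second

cd_seconds = 700 / mb_per_second           # a 700 MB CD holds ~11 seconds
assert 10 < cd_seconds < 13
```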
153 Video codecs A video codec is a software component (or a hardware device) which can encode (compress for storage) and decode (decompress for visualization) a video stream. Compression techniques have improved dramatically in recent years. Generally codecs are lossy, so it is necessary to find the right balance between compression, loss of quality and computational cost. Video formats differ not only by the codec used, but also by the type of file used for storage, called the container. For example DIVX is a codec, AVI is a container. Codecs are identified by a tag, called FOURCC (four character code), saved inside the stream; formats (partially) by the file extension. 17
154 Most popular codecs H.261, first good compression standard, used for videoconferencing and videotelephony. MPEG-1, used in the VideoCD format (VCD). MPEG-2, used in DVDs and SVCDs. H.263, current standard for videotelephony, videoconferencing and content streaming over the Internet. MPEG-4 (H.264), state-of-the-art of movie compression, extremely widely adopted (DivX, XviD, 3ivx, WMV). Sorenson 3, used by Apple QuickTime, it may be considered the precursor of H.264. RealVideo, very popular some years ago, not widely used anymore. 18
155 Video processing techniques It is possible to consider not just every frame as a separate image, but to process frames taking into account also the temporal information. Putting together information coming from different frames, it is possible to obtain results much better than from single images. 19
156 Frame integration If we have a sequence of images of the same scene, disturbed by zero-mean random noise, we can average the corresponding pixels in the different frames. With an infinite number of frames to average, the noise would tend to zero, allowing us to reconstruct the clean image. We will never have an infinite set of images... but even with a small number the results are noticeable! 20
157 Example Average of 10 images with random Gaussian noise 21
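Frame integration can be sketched on synthetic data (a random "scene" plus Gaussian noise stands in for real footage); averaging N frames reduces the noise standard deviation by roughly sqrt(N):

```python
import numpy as np

rng = np.random.default_rng(0)
clean = rng.uniform(0, 255, size=(64, 64))       # a synthetic "true" scene

# Ten frames of the same static scene, each with zero-mean Gaussian noise.
frames = [clean + rng.normal(0, 25, size=clean.shape) for _ in range(10)]

average = np.mean(frames, axis=0)

noise_single = np.std(frames[0] - clean)         # noise in one frame
noise_avg = np.std(average - clean)              # noise after integration
print(noise_single, noise_avg)                   # second value roughly 3x smaller
```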
158 Image registration (I) To apply frame integration it is necessary that the images represent exactly the same scene. This hypothesis is not always verified: if the camera is not fixed or there is some moving subject, the scene actually changes in every frame. The process used to align two images (or some details of them) is called registration. In general, registration is the process of aligning two or more images representing the same scene, acquired at different moments, with different sensors or from different points of view. It is a necessary step before most image processing applications that combine different frames together. 22
159 Image registration (II) The simplest type of image registration is the alignment of images differing by a simple translation, for example to stabilize a shaking scene. 10 frames sequence Average without registration Average with registration 23
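One standard way to estimate a pure translation between two frames is phase correlation; the sketch below (pure numpy, integer shifts only, circular boundary assumed) recovers the shift needed to align a "shaken" frame with a reference:

```python
import numpy as np

def estimate_translation(ref, shifted):
    """Estimate the (row, col) shift to apply to `shifted` so that it
    aligns with `ref`, using phase correlation."""
    F1 = np.fft.fft2(ref)
    F2 = np.fft.fft2(shifted)
    cross_power = F1 * np.conj(F2)
    cross_power /= np.abs(cross_power) + 1e-12   # keep only phase
    corr = np.fft.ifft2(cross_power).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks past the midpoint correspond to negative shifts.
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)

rng = np.random.default_rng(1)
ref = rng.uniform(size=(128, 128))
shifted = np.roll(ref, shift=(5, -3), axis=(0, 1))   # simulated camera shake
print(estimate_translation(ref, shifted))            # (-5, 3)
```

Rolling `shifted` by the returned (-5, 3) brings it back onto `ref`, after which the frames can be averaged as above.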
160 Image registration (III) More advanced techniques allow us to correct other kinds of transformations as well, such as the perspective effect. 6 images sequence Without registration With registration 24
161 Multiplexing Often information coming from several cameras is stored together on the same support, interleaving in different frames the signals coming from different cameras in different locations. VCR multiplexer cameras monitor 25
162 Demultiplexing In the analysis of a video it is very inconvenient to search every time for the next frame of a location of interest, since the acquisition sequence usually is not regular and it is not possible to know where the next desired frame is located. The operation which divides a multiplexed stream into separate video sequences is called demultiplexing. Demultiplexing can be done automatically: by the system saving the actual sequence log in the multiplexing stage (very uncommon); by later separation using frame similarity criteria (difference, correlation...). 26
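Similarity-based demultiplexing can be sketched as follows: each incoming frame is assigned to the camera whose last frame it most resembles (mean absolute difference here; the threshold value is an assumption for the synthetic data):

```python
import numpy as np

def demultiplex(frames, threshold=30.0):
    """Split a multiplexed stream by frame similarity: each frame goes to
    the camera whose last frame it resembles most; if no camera is
    similar enough, a new one is created."""
    cameras = []                      # list of [last_frame, frame_indices]
    for i, frame in enumerate(frames):
        scores = [np.abs(frame - last).mean() for last, _ in cameras]
        if scores and min(scores) < threshold:
            best = int(np.argmin(scores))
            cameras[best][0] = frame
            cameras[best][1].append(i)
        else:
            cameras.append([frame, [i]])
    return [indices for _, indices in cameras]

# Two synthetic locations with very different average brightness,
# interleaved as a multiplexer would store them.
rng = np.random.default_rng(0)
frames = [np.full((32, 32), 50.0 if t % 2 == 0 else 200.0)
          + rng.normal(0, 5, (32, 32)) for t in range(6)]

print(demultiplex(frames))   # [[0, 2, 4], [1, 3, 5]]
```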
163 Demultiplexing Sometimes we also need to deinterlace the image... 27
164 Motion detection Often surveillance systems have automatic alarms to alert the responsible personnel in case motion is perceived in the acquired scene in locations where nothing should happen, for example in a forbidden place. The way to accomplish this is called motion detection. The most basic techniques consist of calculating how much the current frame differs from the previous one or from a reference one (sometimes called background); if this difference, which can be calculated in several ways, is bigger than a certain threshold, then the system alerts the operator. 28
165 Motion detection Image with no motion Image with motion Reference image Difference image Difference image 29
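The basic scheme described above fits in a few lines (mean absolute difference as the measure; the threshold and the synthetic frames are assumptions for the demonstration):

```python
import numpy as np

def motion_detected(current, background, threshold=15.0):
    """Basic motion detection: mean absolute difference between the
    current frame and a reference (background) frame vs a threshold."""
    diff = np.abs(current.astype(float) - background.astype(float)).mean()
    return diff > threshold

rng = np.random.default_rng(2)
background = rng.uniform(0, 255, size=(48, 48))

still = background + rng.normal(0, 2, background.shape)   # sensor noise only
moving = background.copy()
moving[10:30, 10:30] += 120                               # an intruding object

print(motion_detected(still, background),                 # False
      motion_detected(moving, background))                # True
```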
166 Conclusions Working on movies rather than single images gives us more powerful techniques to restore the footage. In order to efficiently store and transmit movies, they must be encoded in some compressed format. 30
167 Forensic Image Processing Lesson 7 3d reconstruction
168 Outline Stereoscopic vision 3d reconstruction and photogrammetry Perspective correction and rectification Geometric distortion correction 2
169 Stereoscopic vision To get three-dimensional information from images (which are bidimensional), stereoscopic vision is necessary. The real world is projected in a different way on the two human eyes, and thanks to this difference we are able to evaluate relative distances between objects. Closer objects are projected farther apart on our retinas, while far away objects are projected closer together. 3
170 Pin-hole camera model In a camera the image is projected on a plane (while the retina is curved). The fundamental model for image formation is called the pin-hole camera model. It assumes that every point of the image is generated as a direct projection of the real point through an optical center. 4
171 Photogrammetry The techniques employed to calculate distances in 3d space from their 2d representations (images) are called photogrammetry. Photogrammetry is used in many different fields: forensics; architecture; geology; archeology; topography;... 5
172 Projection matrix In order to use a camera to take 3d measures it must be calibrated: this means finding the mathematical relationship between three-dimensional points in the real world and where they appear in the image. This relationship is called the projection matrix:

$$\begin{pmatrix} su \\ sv \\ s \end{pmatrix} = \begin{pmatrix} q_{11} & q_{12} & q_{13} & q_{14} \\ q_{21} & q_{22} & q_{23} & q_{24} \\ q_{31} & q_{32} & q_{33} & q_{34} \end{pmatrix} \begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix}$$

On the left are scaled image coordinates (they must be divided by s to obtain the real coordinates); on the right, the projection matrix applied to the 3d coordinates. 6
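Applying the projection matrix is a matrix product followed by division by the scale s. The matrix below is a toy example (focal length 100, camera at the origin looking down the Z axis - an assumption for the sketch, not a slide value):

```python
import numpy as np

def project(Q, point3d):
    """Project a 3d point with a 3x4 projection matrix Q: the homogeneous
    image coordinates (su, sv, s) are divided by s to get (u, v)."""
    X = np.append(point3d, 1.0)        # homogeneous 3d coordinates
    su, sv, s = Q @ X
    return su / s, sv / s

f = 100.0                              # assumed focal length
Q = np.array([[f, 0, 0, 0],
              [0, f, 0, 0],
              [0, 0, 1, 0]])

print(project(Q, np.array([1.0, 2.0, 4.0])))   # (25.0, 50.0)
```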
173 Calibration Calibration needs a test object on which some accurately measured 3d positions have been located. These 3d positions are related, by the projection matrix, to the image positions where the points appear. 7
174 3d reconstruction From the relation between two or more photos it is possible to reconstruct the 3d features (and thus the positions) of a scene. 8
175 Single view metrology Humans are able to get a lot of 3d information from a single photograph. From this consideration other measurement techniques have been developed. 9
176 Perspective Perspective is the effect that allows us to get 3d information about a scene from its bidimensional representation. Two parallel lines viewed in perspective will converge to a point. Two sets of parallel lines in different directions in the plane determine two vanishing points. Two vanishing points determine the vanishing line of the plane. All lines lying on planes parallel to this one will converge to points belonging to the vanishing line. Finding vanishing points and lines is needed to reconstruct 3d information from perspective images. 10
177 Vanishing lines and vanishing points Vertical vanishing line Vanishing point Vanishing line Vanishing point 11
178 Cross ratio If we have four aligned points, the ratio of the ratios between the lengths of the segments that they determine remains constant also after a perspective transformation:

$$\frac{(X_3 - X_1)/(X_3 - X_2)}{(X_4 - X_1)/(X_4 - X_2)} = \text{constant}$$

12
179 To understand... Cross ratio = (AC/BC) / (AD/BD), with the measured lengths AC = 488, AD = 596, BC = 173. 13
180 In practice... The cross ratio measured on an image is equal to the same cross ratio measured in the real world. If we can measure lengths of reference objects in the real world, we can calculate any unknown length in the image. With known lengths and an unknown length x, setting the real-world cross ratio (AC/BC) / (AD/BD) equal to the image cross ratio and solving for the unknown gives x = 2. 14
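The invariance claim is easy to verify numerically: the cross ratio of four collinear points survives any 1d projective transformation (the coefficients below are arbitrary, chosen only for the demonstration):

```python
def cross_ratio(x1, x2, x3, x4):
    """Cross ratio of four collinear points given by 1d coordinates."""
    return ((x3 - x1) / (x3 - x2)) / ((x4 - x1) / (x4 - x2))

def perspective_1d(x, a=2.0, b=1.0, c=0.3, d=1.0):
    """A 1d projective transformation: the effect of perspective on a line.
    The coefficients are arbitrary example values."""
    return (a * x + b) / (c * x + d)

pts = [0.0, 1.0, 2.0, 4.0]
before = cross_ratio(*pts)
after = cross_ratio(*[perspective_1d(x) for x in pts])
print(before, after)   # equal: the cross ratio is invariant
```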
181 Very useful! 15
182 Rectification Rectification allows us to remove the perspective effect on a plane, making it parallel to the image plane. Useful: to measure and calculate bidimensional ratios; to improve visualization of a scene. 16
183 Rectification Rectified sidewalk Rectified fence 17
184 Rectification As seen from top Original picture 18
185 This is difficult... Noticeable result, but far from perfect because of the geometric distortions and the strong perspective to correct. 19
186 Geometric distortion The pin-hole camera model is not accurate in most real cases. Cameras actually introduce geometric distortion that may modify the aspect of objects in the scene. Most distortions are caused by the optics and depend on lens characteristics. The most common effect is to transform straight lines into curves; this effect is most noticeable with wide-angle optics. 20
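A minimal sketch of radial distortion correction, assuming the one-coefficient model xd = x(1 + k1 r^2) (real lenses need more terms); the inverse has no closed form, so a fixed-point iteration is used:

```python
def undistort_point(xd, yd, k1, iterations=10):
    """Remove first-order radial distortion from a normalized image point
    by fixed-point iteration on the model xd = x * (1 + k1 * r^2)."""
    x, y = xd, yd
    for _ in range(iterations):
        r2 = x * x + y * y
        x = xd / (1 + k1 * r2)
        y = yd / (1 + k1 * r2)
    return x, y

# Distort a known point with barrel distortion, then recover it.
k1 = -0.2
x_true, y_true = 0.4, 0.3
r2 = x_true**2 + y_true**2
xd, yd = x_true * (1 + k1 * r2), y_true * (1 + k1 * r2)
print(undistort_point(xd, yd, k1))   # close to (0.4, 0.3)
```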
187 Distortion correction (I) 21
188 Distortion correction (II) 22
189 Why is it important? It's a preparatory step before other transformations (for example perspective correction). Distortion can heavily modify the characteristics of the scene or of the subjects, especially at the borders of the image. Distortion must be corrected before measuring positions and distances, otherwise the calculation will be wrongly influenced. 23
190 Geometric transformations (I) Perspective modification and distortion correction are geometric transformations. Many different geometric transformations exist, for example enlargement (which we've seen with interpolation) and rotation. The procedure for all geometric transformations consists of calculating, by mathematical operations, the new position of each point starting from the position of that point in the original image. 24
191 Geometric transformations (II) Original image grid Transformed image grid 25
192 Interpolation again At the positions calculated by a geometric transform we have to estimate the values of the corresponding pixels. In general we don't know the pixel values at these positions, since they differ from the original grid. We must calculate output pixel values starting from the original ones using the interpolation techniques seen with image enlargement. 26
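Bilinear interpolation, the simplest useful choice for this step, estimates the value at a non-integer position from the four surrounding pixels:

```python
import numpy as np

def bilinear(img, y, x):
    """Estimate the intensity at non-integer coordinates (y, x) from the
    four surrounding pixels, weighted by proximity."""
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    dy, dx = y - y0, x - x0
    return ((1 - dy) * (1 - dx) * img[y0, x0] +
            (1 - dy) * dx * img[y0, x0 + 1] +
            dy * (1 - dx) * img[y0 + 1, x0] +
            dy * dx * img[y0 + 1, x0 + 1])

img = np.array([[0.0, 10.0],
                [20.0, 30.0]])
print(bilinear(img, 0.5, 0.5))   # 15.0: the average of the four pixels
```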
193 Conclusions Obtaining information about the 3d world from a 2d image is not easy, but can be very important. We can reconstruct the real aspect of a scene. We can (partially) modify the point of view of a scene. We can measure something in a scene. 27
194 Further reading Photogrammetry: "Single view metrology", Criminisi, Reid, Zisserman, International Journal of Computer Vision
195 Forensic Image Processing Lesson 8 Biometrics
196 Biometric recognition Biometric recognition employs physiological or behavioral characteristics of a person to determine his/her identity. 2
197 Necessary features (I) What kinds of measures can be considered biometric? Every characteristic that is: universal: everyone should have it; distinguishable: for any two different persons, this characteristic must be sufficiently different; permanent: it must be sufficiently invariant over a certain time period; measurable: it must be quantitatively measurable. 3
198 Practical features (II) In real identification systems other aspects must be taken into consideration: Performance: accuracy and speed of the recognition; necessary resources; external factors influencing the characteristics. Acceptability: willingness of people to accept a certain identification method in their everyday life. Cheating possibility: how easily the system could be cheated. 4
199 Biometric identifiers Physiological: DNA, ear shape, face, facial thermogram, fingerprint, hand geometry, hand vein, iris, odor, handprint, retina... Behavioral: handwriting, voice, gait, posture, keyboard typewriting... 5
200 Characteristics Every method has different features. In the table, the features of different biometrics are compared according to the perception of the author of the article "An introduction to biometric recognition" (see Further reading). 6
201 Biometric systems A biometric system is composed of four basic components: 1. sensor module: acquires the biometric data; 2. feature extraction module: processes the acquired data to obtain some numeric parameters from them; 3. matching module: the parameters are compared with reference ones; 4. decision-making module: establishes the subject's identity, or confirms or rejects the declared identity. 7
202 Identification and verification Biometric systems can be used for two different purposes: identification: who is this person? verification: is this person really who he/she declares to be? Identification is much more complex than verification. When we speak about recognition we must clearly distinguish between identification and verification. For many practical applications verification is sufficient. 8
203 Procedure example 9
204 Performances and errors (I) Two different acquisitions of the same biometric feature of the same person are never exactly the same because of: imperfections in the sensor and differences in the acquisition process; modifications in the characteristics of the subject; environmental conditions; interaction of the subject with the sensor. Generally the output of a biometric system is a number indicating the matching score between the acquired data and one element of the database. 10
205 Performances and errors (II) Decisions of the system are regulated by a threshold t: pairs of biometric data with a matching score bigger than t are considered as belonging to the same person. A biometric system can commit two types of error: false match: declaring that two measures actually taken on two different persons belong to the same person; false non-match: declaring that two measures actually taken on the same person belong to different persons. We must find the right compromise between the false match rate (FMR) and the false non-match rate (FNMR) in every biometric system. 11
206 Performances and errors (III) 12
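The FMR/FNMR trade-off can be illustrated on synthetic matching scores (the two Gaussian score distributions below are an assumption for the sketch, not real biometric data):

```python
import numpy as np

rng = np.random.default_rng(3)

# Genuine comparisons score high, impostor comparisons score low.
genuine = rng.normal(0.8, 0.1, 2000)
impostor = rng.normal(0.4, 0.1, 2000)

def rates(t):
    """False match rate and false non-match rate at threshold t."""
    fmr = np.mean(impostor >= t)    # impostors wrongly accepted
    fnmr = np.mean(genuine < t)     # genuine users wrongly rejected
    return fmr, fnmr

for t in (0.5, 0.6, 0.7):
    print(t, rates(t))   # raising t lowers the FMR but raises the FNMR
```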
207 Some biometric techniques A little introduction on some biometric techniques: face recognition; iris recognition; fingerprint recognition. 13
208 Face recognition Two main different approaches: position and shape of some features (nose, lips, eyes...); global analysis of the face as a weighted average of some standard faces. Very sensitive to changing light conditions and different angles of view. Not very obtrusive, it may be very useful in some situations, less effective in others. 14
209 Basic idea Database composed by faces normalized in size and position. average face 15
210 Basic idea A certain statistical analysis allows us to calculate some reference faces ("eigenfaces"). Every face can be described as the average face plus a weighted sum of the reference faces. A face can thus be encoded by the weights of the reference faces needed to represent it. Eigenface 1 Eigenface 2 Eigenface 3 Eigenface 4 Eigenface 5 16
211 Reconstruction Face reconstruction starting from the average face, adding reference faces. Original face Reconstructed face 17
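The average-face-plus-weights idea can be sketched with a tiny synthetic database (random 8x8 "faces" stand in for real images; the eigenfaces are the principal components of the centered database, here obtained via SVD):

```python
import numpy as np

rng = np.random.default_rng(4)

# A tiny synthetic "database": 20 faces of 8x8 pixels, flattened to vectors.
faces = rng.uniform(0, 255, size=(20, 64))

average_face = faces.mean(axis=0)
centered = faces - average_face

# The eigenfaces are the principal components of the centered database.
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = Vt                        # one eigenface per row

# Encode a face as weights, then reconstruct it from the average face.
weights = eigenfaces @ centered[0]
reconstructed = average_face + weights @ eigenfaces

print(np.allclose(reconstructed, faces[0]))   # True with all components kept
```

In practice only the first few eigenfaces are kept, so the reconstruction is approximate and the weight vector is a compact face code.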
212 Iris recognition Encoding iris features allows a very fast and precise recognition: the encodings of two different eyes differ on average in 50% of the values; the probability that they differ in less than 30% is almost zero. The iris is a unique feature of every person, even in the case of twins. It's very difficult to cheat the system. But: it's very obtrusive and needs user interaction; systems are rather expensive. 18
213 Basic idea (I) Iris image is unrolled 19
214 Basic idea (II) Iris lines are encoded by their pattern. 20
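The comparison of two iris codes is a fractional Hamming distance on binary strings. The sketch below uses random bits as stand-ins for real codes (an assumption: codes of unrelated eyes behave like independent random bit strings, which is what produces the ~50% figure on the slide):

```python
import numpy as np

rng = np.random.default_rng(5)

def hamming(a, b):
    """Fraction of differing bits between two binary iris codes."""
    return np.mean(a != b)

# Codes of two unrelated eyes: they differ in about 50% of the positions.
code_a = rng.integers(0, 2, 2048)
code_b = rng.integers(0, 2, 2048)
print(hamming(code_a, code_b))        # close to 0.5

# Two acquisitions of the SAME eye differ only by acquisition noise.
noise = rng.random(2048) < 0.05       # assume 5% of bits flipped
code_a_again = code_a ^ noise
print(hamming(code_a, code_a_again))  # close to 0.05: clearly the same eye
```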
215 Fingerprints (I) 21
216 Fingerprints (II) A fingerprint is characterized by ridges and valleys. Fingerprints differ from person to person (even in twins). Recognition systems are quite cheap and precise enough. Recognition methods (especially for identification) are computationally heavy. For some categories of subjects or environmental conditions the method can be inaccurate (manual workers with cuts on the skin, sweating hands...). 22
217 Features (I) Federal Bureau of Investigation Educational Internet Publication Some particular patterns are more frequent. 23
218 Features (II) 24
219 Features (III) Fingerprints are characterized by: patterns (loop, whorl, arch...); ridges between couples of minutiae; type, direction, and position of minutiae; position of the pores. A minutia is a bifurcation or a termination: a clear distinction between them is generally ignored, since one is the negative of the other and a small modification in the print can transform a bifurcation into a termination and vice versa. 25
220 Minutiae (I) Position of minutiae of a fingerprint Position and direction of minutiae of a fingerprint 26
221 Minutiae (II) The comparison of all the possible couples of minutiae is a complicated and heavy process. 27
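A deliberately naive minutiae-matching score gives a feel for the process: count how many minutiae of one print have a counterpart nearby in the other. Real matchers also use minutia direction and solve the global alignment first, which is what makes the comparison heavy; the coordinates below are invented for the example:

```python
import math

def match_score(minutiae_a, minutiae_b, tol=10.0):
    """Fraction of minutiae of print A that have a minutia of print B
    within `tol` pixels. Ignores minutia direction and alignment."""
    matched = 0
    for (xa, ya) in minutiae_a:
        dists = [math.hypot(xa - xb, ya - yb) for (xb, yb) in minutiae_b]
        if min(dists) <= tol:
            matched += 1
    return matched / len(minutiae_a)

print_a = [(10, 12), (40, 80), (95, 33), (60, 60)]
print_a_again = [(12, 11), (38, 83), (97, 30), (61, 58)]  # same finger, small shifts
print_other = [(5, 90), (70, 15), (20, 40), (88, 88)]     # a different finger

print(match_score(print_a, print_a_again))   # 1.0
print(match_score(print_a, print_other))     # 0.0
```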
222 Pattern recognition (I) Biometric techniques belong to the field widely called pattern recognition. Pattern recognition techniques aim at extracting a numeric description from any kind of information (in our case a visual one) for comparison and recognition against some reference features. Application examples are the classification of footprints, tire marks, logos on drug pills, bullets... 28
223 Pattern recognition (II) Pills database Shoeprint database 29
224 Conclusions Usage of biometric recognition is widely expanding. One of the main problems is the acceptance by the subjects involved in the recognition process. Employed methods must be very reliable and not obtrusive. Similar techniques are used also to treat non-biometric data. 30
225 Further reading Biometrics: "An introduction to biometric recognition", Jain, A.K.; Ross, A.; Prabhakar, S.; IEEE Transactions on Circuits and Systems for Video Technology, Volume 14, Issue 1, Jan. 2004, pp. 4-20. Pattern recognition: "Overview of Pattern recognition and image processing in forensic science", Zeno Geradts and Jurrien Bijhold, Netherlands Forensic Institute 31
Lecture 16: A Camera s Image Processing Pipeline Part 1. Kayvon Fatahalian CMU 15-869: Graphics and Imaging Architectures (Fall 2011)
Lecture 16: A Camera s Image Processing Pipeline Part 1 Kayvon Fatahalian CMU 15-869: Graphics and Imaging Architectures (Fall 2011) Today (actually all week) Operations that take photons to an image Processing
Adaptive Coded Aperture Photography
Adaptive Coded Aperture Photography Oliver Bimber, Haroon Qureshi, Daniel Danch Institute of Johannes Kepler University, Linz Anselm Grundhoefer Disney Research Zurich Max Grosse Bauhaus University Weimar
Building an Advanced Invariant Real-Time Human Tracking System
UDC 004.41 Building an Advanced Invariant Real-Time Human Tracking System Fayez Idris 1, Mazen Abu_Zaher 2, Rashad J. Rasras 3, and Ibrahiem M. M. El Emary 4 1 School of Informatics and Computing, German-Jordanian
How To Use Trackeye
Product information Image Systems AB Main office: Ågatan 40, SE-582 22 Linköping Phone +46 13 200 100, fax +46 13 200 150 [email protected], Introduction TrackEye is the world leading system for motion
Solving Simultaneous Equations and Matrices
Solving Simultaneous Equations and Matrices The following represents a systematic investigation for the steps used to solve two simultaneous linear equations in two unknowns. The motivation for considering
Video Conferencing Display System Sizing and Location
Video Conferencing Display System Sizing and Location As video conferencing systems become more widely installed, there are often questions about what size monitors and how many are required. While fixed
How To Fix Out Of Focus And Blur Images With A Dynamic Template Matching Algorithm
IJSTE - International Journal of Science Technology & Engineering Volume 1 Issue 10 April 2015 ISSN (online): 2349-784X Image Estimation Algorithm for Out of Focus and Blur Images to Retrieve the Barcode
Multimodal Biometric Recognition Security System
Multimodal Biometric Recognition Security System Anju.M.I, G.Sheeba, G.Sivakami, Monica.J, Savithri.M Department of ECE, New Prince Shri Bhavani College of Engg. & Tech., Chennai, India ABSTRACT: Security
Understanding Video Latency What is video latency and why do we care about it?
By Pete Eberlein, Sensoray Company, Inc. Understanding Video Latency What is video latency and why do we care about it? When choosing components for a video system, it is important to understand how the
Algorithms for the resizing of binary and grayscale images using a logical transform
Algorithms for the resizing of binary and grayscale images using a logical transform Ethan E. Danahy* a, Sos S. Agaian b, Karen A. Panetta a a Dept. of Electrical and Computer Eng., Tufts University, 161
Sampling Theorem Notes. Recall: That a time sampled signal is like taking a snap shot or picture of signal periodically.
Sampling Theorem We will show that a band limited signal can be reconstructed exactly from its discrete time samples. Recall: That a time sampled signal is like taking a snap shot or picture of signal
Introduction to Digital Resolution
Introduction to Digital Resolution 2011 Copyright Les Walkling 2011 Adobe Photoshop screen shots reprinted with permission from Adobe Systems Incorporated. Version 2011:02 CONTENTS Pixels of Resolution
Common Core Unit Summary Grades 6 to 8
Common Core Unit Summary Grades 6 to 8 Grade 8: Unit 1: Congruence and Similarity- 8G1-8G5 rotations reflections and translations,( RRT=congruence) understand congruence of 2 d figures after RRT Dilations
EPSON SCANNING TIPS AND TROUBLESHOOTING GUIDE Epson Perfection 3170 Scanner
EPSON SCANNING TIPS AND TROUBLESHOOTING GUIDE Epson Perfection 3170 Scanner SELECT A SUITABLE RESOLUTION The best scanning resolution depends on the purpose of the scan. When you specify a high resolution,
Computer Vision. Image acquisition. 25 August 2014. Copyright 2001 2014 by NHL Hogeschool and Van de Loosdrecht Machine Vision BV All rights reserved
Computer Vision Image acquisition 25 August 2014 Copyright 2001 2014 by NHL Hogeschool and Van de Loosdrecht Machine Vision BV All rights reserved [email protected], [email protected] Image acquisition
Understanding astigmatism Spring 2003
MAS450/854 Understanding astigmatism Spring 2003 March 9th 2003 Introduction Spherical lens with no astigmatism Crossed cylindrical lenses with astigmatism Horizontal focus Vertical focus Plane of sharpest
White paper. HDTV (High Definition Television) and video surveillance
White paper HDTV (High Definition Television) and video surveillance Table of contents Introduction 3 1. HDTV impact on video surveillance market 3 2. Development of HDTV 3 3. How HDTV works 4 4. HDTV
Jitter Measurements in Serial Data Signals
Jitter Measurements in Serial Data Signals Michael Schnecker, Product Manager LeCroy Corporation Introduction The increasing speed of serial data transmission systems places greater importance on measuring
CS 325 Computer Graphics
CS 325 Computer Graphics 01 / 25 / 2016 Instructor: Michael Eckmann Today s Topics Review the syllabus Review course policies Color CIE system chromaticity diagram color gamut, complementary colors, dominant
Introduction. www.imagesystems.se
Product information Image Systems AB Main office: Ågatan 40, SE-582 22 Linköping Phone +46 13 200 100, fax +46 13 200 150 [email protected], Introduction Motion is the world leading software for advanced
E190Q Lecture 5 Autonomous Robot Navigation
E190Q Lecture 5 Autonomous Robot Navigation Instructor: Chris Clark Semester: Spring 2014 1 Figures courtesy of Siegwart & Nourbakhsh Control Structures Planning Based Control Prior Knowledge Operator
DIGITAL IMAGE PROCESSING AND ANALYSIS
DIGITAL IMAGE PROCESSING AND ANALYSIS Human and Computer Vision Applications with CVIPtools SECOND EDITION SCOTT E UMBAUGH Uffi\ CRC Press Taylor &. Francis Group Boca Raton London New York CRC Press is
NAPCS Product List for NAICS 51219: Post Production Services and Other Motion Picture and Video Industries
National 51219 1 Postproduction Providing computerized and electronic image and sound processing (film, video, digital media, etc.). Includes editing, transfer, color correction, digital restoration, visual
How to Choose the Right Network Cameras. for Your Surveillance Project. Surveon Whitepaper
How to Choose the Right Network Cameras for Your Surveillance Project Surveon Whitepaper From CCTV to Network, surveillance has changed from single professional-orientated technology to one integrated
Choosing a digital camera for your microscope John C. Russ, Materials Science and Engineering Dept., North Carolina State Univ.
Choosing a digital camera for your microscope John C. Russ, Materials Science and Engineering Dept., North Carolina State Univ., Raleigh, NC One vital step is to choose a transfer lens matched to your
Planetary Imaging Workshop Larry Owens
Planetary Imaging Workshop Larry Owens Lowell Observatory, 1971-1973 Backyard Telescope, 2005 How is it possible? How is it done? Lowell Observatory Sequence,1971 Acquisition E-X-P-E-R-I-M-E-N-T-A-T-I-O-N!
Automatic Labeling of Lane Markings for Autonomous Vehicles
Automatic Labeling of Lane Markings for Autonomous Vehicles Jeffrey Kiske Stanford University 450 Serra Mall, Stanford, CA 94305 [email protected] 1. Introduction As autonomous vehicles become more popular,
Face detection is a process of localizing and extracting the face region from the
Chapter 4 FACE NORMALIZATION 4.1 INTRODUCTION Face detection is a process of localizing and extracting the face region from the background. The detected face varies in rotation, brightness, size, etc.
Image Authentication Scheme using Digital Signature and Digital Watermarking
www..org 59 Image Authentication Scheme using Digital Signature and Digital Watermarking Seyed Mohammad Mousavi Industrial Management Institute, Tehran, Iran Abstract Usual digital signature schemes for
International Journal of Advanced Information in Arts, Science & Management Vol.2, No.2, December 2014
Efficient Attendance Management System Using Face Detection and Recognition Arun.A.V, Bhatath.S, Chethan.N, Manmohan.C.M, Hamsaveni M Department of Computer Science and Engineering, Vidya Vardhaka College
Parametric Comparison of H.264 with Existing Video Standards
Parametric Comparison of H.264 with Existing Video Standards Sumit Bhardwaj Department of Electronics and Communication Engineering Amity School of Engineering, Noida, Uttar Pradesh,INDIA Jyoti Bhardwaj
A Learning Based Method for Super-Resolution of Low Resolution Images
A Learning Based Method for Super-Resolution of Low Resolution Images Emre Ugur June 1, 2004 [email protected] Abstract The main objective of this project is the study of a learning based method
Digital exposure-based workflow Digital Imaging II classes Columbia College Chicago Photography Department Revised 20100522
Digital exposure-based workflow Digital Imaging II classes Columbia College Chicago Photography Department Revised 20100522 Goal The goal of this workflow is to allow you to create master image files of
1. Introduction to image processing
1 1. Introduction to image processing 1.1 What is an image? An image is an array, or a matrix, of square pixels (picture elements) arranged in columns and rows. Figure 1: An image an array or a matrix
