Video stabilization for high-resolution image reconstruction

Advanced Project S9
HIMMICH Youssef, KEROUANTON Thomas, PATIES Rémi, VILCHES José

Abstract: Super-resolution reconstruction produces one high-resolution image, or a set of them, from a set of low-resolution images that are each slightly different from the others. Many super-resolution methods have been developed during the last thirty years, each with its strengths and flaws depending on the type of data it is applied to. In this project, the images are extracted from a video stream, either live from a webcam or from a recorded video stored on the hard drive. The super-resolution of such an image therefore cannot be performed without a precise estimation of the movement between a template image and each other image. We developed a motion estimation program that provides the necessary parameters to a super-resolution algorithm, chosen among the many available after comparing their efficiency in terms of speed and results.

I. INTRODUCTION

The transfer of a scene from reality to a digital image generates many losses: atmospheric blur, in particular for space imaging; motion blur in most cases of moving subjects; camera blur, which is increased by the reduction of pixel size on recent digital image sensors; a down-sampling effect due to the limited number of photosensitive pixels on the sensor; and finally non-linear distortion created by the lens.

Fig 1: Examples of effects

These issues can be partially resolved with hardware solutions. For instance, increasing the number of photosensitive pixels reduces the down-sampling issue, but it also decreases the quality of the light signal perceived by each pixel, which accentuates the camera blur. The motion issue can be limited with shorter acquisition times and by stabilizing the sensor, and the atmospheric blur can be avoided with solutions such as the Hubble telescope. However, each of these solutions implies an additional cost, and their limits are quickly reached. This is why, during the last decades, software solutions have been developed: unlike hardware solutions, they can be applied in any situation and provide a consistent enhancement of image quality. Super-resolution algorithms only need a set of low-resolution (LR) images to create one or several high-resolution (HR) images with enhanced resolution and quality. In this project, the source of LR images is a video stream, more precisely a region of interest that is tracked in the video.

Super-resolution requires the transformation that was applied from one image to another in order to create the HR image. The Lucas-Kanade algorithm [1] is a reference in image alignment. It is used in this project to estimate the motion parameters that locate the region of interest in the video frames, so that patches of it can be extracted to play the role of LR images; the parameters it provides are then transferred, after an adaptation, to the SR algorithm.

There are many possible applications of this project. For instance, it could help surveillance and face recognition by computing HR images from a video surveillance recording; in effect, it turns a video camera into a still camera with a higher resolution. Another application uses a vibrating lens on a classical camera: the lens vibrates while the camera takes pictures, each slightly different from the others, and those many images are used to create a single HR image with a resolution and quality the camera alone could not provide.

Fig 2: Example of the jitter camera, which shifts the position of the sensor between the integration times of the pixels in order to produce shifted images without any blur effect

In this project, we tried to fulfil the following objectives:
- track a region of interest in a video stream;
- reconstruct a high-resolution image of this ROI using a super-resolution algorithm;
- design an interface that allows the user to change some parameters of the SR algorithm.

Section II details the image alignment process and how it tracks the region of interest chosen by the user. Section III presents different super-resolution algorithms and a comparison of their performances. Section IV presents the user interface that was developed and the possibilities offered to the user. Finally, Section V concludes this project.

II. MOTION PARAMETERS ESTIMATION

A. Introduction

Super-resolution reconstruction needs motion information between the LR images, including shifts and a possible rotation in the image plane. Image alignment allows not only estimating these motion parameters, but also tracking the region of interest in order to extract the LR images.

Fig 3: Schematic overview of the super-resolution principle (the LR images from the video, the template and the SR factor feed the motion parameter estimation and tracking stage, which feeds the super-resolution reconstruction producing the HR image)

B. Lucas-Kanade forward compositional algorithm

This technique consists in moving, and possibly deforming, a template image so as to minimize the difference between the template and an image. Many algorithms can be used for image alignment; in this project, the Lucas-Kanade forward compositional algorithm is used to estimate the motion parameters needed by the super-resolution reconstruction.

The goal of the Lucas-Kanade algorithm is to minimize the sum of squared errors between two images, the template T and the image I warped back onto the coordinate frame of the template:

    \sum_{x} \left[ I(W(x; p)) - T(x) \right]^2    (1)

where p = (p_1, \dots, p_n)^T is the vector of warp parameters.

The Lucas-Kanade algorithm assumes that a current estimate of p is known and then iteratively solves for an increment Δp to the parameters; in the forward compositional formulation, the following expression is minimized with respect to Δp:

    \sum_{x} \left[ I(W(W(x; \Delta p); p)) - T(x) \right]^2    (2)
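As an illustration of the objective (1), the following MATLAB sketch warps the current frame with a Euclidean warp p = (tx, ty, θ) and evaluates the sum of squared errors against the template. The function name, the bilinear interpolation through interp2 and the handling of pixels falling outside the image are our own illustrative choices under the Euclidean model introduced later in Section II.D, not the project's actual code.

```matlab
function err = ssd_error(I, T, p)
% SSD_ERROR  Sum of squared errors between the template T and the image I
% warped with a Euclidean warp p = [tx; ty; theta] (illustrative sketch).
    [h, w] = size(T);
    [x, y] = meshgrid(1:w, 1:h);                      % template coordinate frame
    c = cos(p(3));  s = sin(p(3));
    xw = c*x - s*y + p(1);                            % W(x;p): rotation about the origin...
    yw = s*x + c*y + p(2);                            % ...followed by a translation
    Iw = interp2(double(I), xw, yw, 'linear', NaN);   % I(W(x;p)) by bilinear interpolation
    d = Iw - double(T);
    d(isnan(d)) = 0;                                  % ignore pixels that fall outside I
    err = sum(d(:).^2);
end
```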

The non-linear expression in (2) is linearized by performing a first-order Taylor expansion with respect to Δp. Using the fact that W(x; 0) is the identity warp, which simplifies the notation, this gives:

    \sum_{x} \left[ I(W(x; p)) + \nabla I \, \frac{\partial W}{\partial p} \, \Delta p - T(x) \right]^2    (3)

In this expression, ∇I = (∂I/∂x, ∂I/∂y) is the gradient of the image evaluated at W(x; p), i.e. ∇I is computed in the coordinate frame of I and then warped back onto the coordinate frame of T using the current estimate of the warp W(x; p). The term ∂W/∂p is the Jacobian of the warp: if W(x; p) = (W_x(x; p), W_y(x; p))^T, then

    \frac{\partial W}{\partial p} = \begin{pmatrix} \frac{\partial W_x}{\partial p_1} & \cdots & \frac{\partial W_x}{\partial p_n} \\ \frac{\partial W_y}{\partial p_1} & \cdots & \frac{\partial W_y}{\partial p_n} \end{pmatrix}    (4)

In the forward compositional formulation, this Jacobian is evaluated at (x; 0) and can therefore be pre-computed. Minimizing the expression in (3) is a least-squares problem with a closed-form solution, which can be derived by taking the partial derivative with respect to Δp:

    2 \sum_{x} \left[ \nabla I \, \frac{\partial W}{\partial p} \right]^T \left[ I(W(x; p)) + \nabla I \, \frac{\partial W}{\partial p} \, \Delta p - T(x) \right]    (5)

Setting this expression equal to zero and solving gives the closed-form solution for the minimum of the expression in (2):

    \Delta p = H^{-1} \sum_{x} \left[ \nabla I \, \frac{\partial W}{\partial p} \right]^T \left[ T(x) - I(W(x; p)) \right]    (6)

where H is the Hessian matrix:

    H = \sum_{x} \left[ \nabla I \, \frac{\partial W}{\partial p} \right]^T \left[ \nabla I \, \frac{\partial W}{\partial p} \right]    (7)

Once the increment vector is computed, the warp is updated by composition:

    W(x; p) \leftarrow W(x; p) \circ W(x; \Delta p)    (8)

The Lucas-Kanade algorithm (Lucas and Kanade, 1981) then consists of iteratively applying (6) and (8). The algorithm is:

Pre-compute:
- evaluate the Jacobian ∂W/∂p at (x; 0).
Iterate until ‖Δp‖ < ε:
- warp I with W(x; p) to compute I(W(x; p));
- compute the error image T(x) − I(W(x; p));
- compute the gradient ∇I of the warped image;
- compute the steepest-descent images ∇I ∂W/∂p;
- compute the Hessian matrix H using equation (7);
- compute Σ_x [∇I ∂W/∂p]^T [T(x) − I(W(x; p))];
- compute Δp using equation (6);
- update the warp: W(x; p) ← W(x; p) ∘ W(x; Δp).
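To make the update loop concrete, here is a minimal MATLAB sketch of one Gauss-Newton iteration of this forward compositional step for the Euclidean warp p = (tx, ty, θ) used in this project. The function name, the 3x3 homogeneous matrix representation of the warp, and the finite-difference gradient are our own illustrative assumptions; the project's actual implementation may differ.

```matlab
function M = lk_fc_iteration(I, T, M)
% One forward compositional Lucas-Kanade iteration for a Euclidean warp.
% M is the current 3x3 homogeneous warp matrix (template frame -> image I).
    [h, w] = size(T);
    [x, y] = meshgrid(1:w, 1:h);
    % Warp I back onto the template frame with the current estimate W(x;p)
    xw = M(1,1)*x + M(1,2)*y + M(1,3);
    yw = M(2,1)*x + M(2,2)*y + M(2,3);
    Iw = interp2(double(I), xw, yw, 'linear', 0);   % I(W(x;p))
    err = double(T) - Iw;                           % error image T(x) - I(W(x;p))
    [Ix, Iy] = gradient(Iw);                        % approximation of the warped gradient
    % Steepest-descent images for p = (tx, ty, theta), Jacobian evaluated at (x;0)
    SD = [Ix(:), Iy(:), -y(:).*Ix(:) + x(:).*Iy(:)];
    H  = SD' * SD;                                  % Hessian, equation (7)
    dp = H \ (SD' * err(:));                        % increment, equation (6)
    % Compositional update, equation (8): W(x;p) <- W(x;p) o W(x;dp)
    c = cos(dp(3));  s = sin(dp(3));
    M = M * [c -s dp(1); s c dp(2); 0 0 1];
end
```

In the full estimator this step would be repeated until ‖Δp‖ falls below ε, as in the listing above.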

C. Tracking of the region of interest

Once the motion parameters have been computed between the first two frames, the tracker is easy to set up over the video sequence: an initial motion estimate is derived from the previous frames and used to move the tracking window, at the start of the estimation process, to a place likely to be close to the actual solution. This initial shift is usually close enough to the true estimate that one iteration of the Lucas-Kanade estimator is enough to recover the motion within the patch (see the example of region of interest (ROI) tracking in Fig 4). With the current estimate of the position of the ROI in the current warped image, the extraction of the LR image is performed as shown in Fig 5.

Fig 4: Example of the tracking of a region of interest

Fig 5: Estimated positions of specific points after motion, and detail of the difference between the estimated and actual positions of the center of the ROI

These figures show that the precision of the algorithm is quite remarkable: it finds the exact positions of the two test points (red and blue) placed on the template and matches them to their positions in the warped current frame.

To extract the LR image, the coordinates of the center of the ROI in the current frame are obtained through the motion estimation matrix:

    c' = M \, c    (9)

where c is the center of the ROI in the ROI referential, c' is the center of the region of interest in the current frame referential, and M is the estimated transformation matrix. Then c' is rounded to an integer position, which becomes the center of the LR image extracted from the current frame. The extracted LR image has the same size as the template.

Fig 6: Schematic explaining the transition between the video frame referential, the ROI referential, and the extracted LR image
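The following MATLAB sketch illustrates equation (9): it maps the ROI center through the estimated transform, rounds it to an integer position, and extracts an LR patch with the same size as the template. The function name and the 3x3 homogeneous representation of M are our own assumptions, and the sketch assumes the warped ROI stays fully inside the frame.

```matlab
function LR = extract_lr_patch(frame, M, cRoi, tmplSize)
% EXTRACT_LR_PATCH  Extract a low-resolution patch centred on the warped ROI centre.
% frame    : current video frame (grayscale matrix)
% M        : estimated 3x3 homogeneous transformation matrix
% cRoi     : [x; y] centre of the ROI in the ROI referential
% tmplSize : [height, width] of the template
    c = M * [cRoi(:); 1];                    % equation (9): centre in the current frame referential
    c = round(c(1:2) ./ c(3));               % rounded to an integer pixel position
    halfH = floor(tmplSize(1)/2);
    halfW = floor(tmplSize(2)/2);
    rows = c(2)-halfH : c(2)-halfH+tmplSize(1)-1;
    cols = c(1)-halfW : c(1)-halfW+tmplSize(2)-1;
    LR = frame(rows, cols);                  % same size as the template
end
```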

D. Geometric transformation model and motion parameters adaptation

Since the super-resolution reconstruction only makes sense for a planar object, the transformation model used for motion estimation has been restricted to a Euclidean motion model: only three degrees of freedom are considered, the two translations along the image axes and the rotation in the image plane.

Fig 7: Example of a Euclidean transformation (Tx = 2 px, Ty = 2 px, plus a rotation)

The motion estimator assumes that the rotation is made around the origin of the image, as shown in Fig 7. However, some super-resolution algorithms use another transformation model and assume that the rotation is made around the center of the ROI, so the parameters have to be adapted: θ' denotes the rotation around the center of the ROI, (x_c, y_c) the translation to the center of the ROI, and (t'_x, t'_y) the corresponding shift.

On the one hand, the motion estimator assumes that:

    \begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} + \begin{pmatrix} t_x \\ t_y \end{pmatrix}    (10)

where (x', y') are the coordinates in the current warped frame referential and (x, y) are the coordinates in the aligned template referential. On the other hand, the super-resolution algorithm assumes that:

    \begin{pmatrix} x' \\ y' \end{pmatrix} = R(\theta') \begin{pmatrix} x - x_c \\ y - y_c \end{pmatrix} + \begin{pmatrix} x_c \\ y_c \end{pmatrix} + \begin{pmatrix} t'_x \\ t'_y \end{pmatrix}    (11)

with

    R(\theta') = \begin{pmatrix} \cos\theta' & -\sin\theta' \\ \sin\theta' & \cos\theta' \end{pmatrix}    (12)

where (x_c, y_c) are the coordinates of the center of the ROI in the ROI referential.

Fig 8: Schematic of the adaptation of the parameters

In order to calculate the adapted parameters, the two models have to be identified term by term. Expanding (11) gives:

    \begin{aligned} x' &= x\cos\theta' - y\sin\theta' - x_c\cos\theta' + y_c\sin\theta' + x_c + t'_x \\ y' &= x\sin\theta' + y\cos\theta' - x_c\sin\theta' - y_c\cos\theta' + y_c + t'_y \end{aligned}    (13)

According to (10) and (13), we can conclude that

    \theta' = \theta    (14)

and

    \begin{aligned} t'_x &= t_x + x_c\cos\theta - y_c\sin\theta - x_c \\ t'_y &= t_y + x_c\sin\theta + y_c\cos\theta - y_c \end{aligned}    (15)
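Below is a minimal MATLAB sketch of equations (14) and (15), converting the estimator's origin-centered parameters (θ, tx, ty) into the center-of-ROI parameters (θ', t'x, t'y) expected by the SR algorithms. The function name is our own.

```matlab
function [thetaC, txC, tyC] = adapt_motion_params(theta, tx, ty, xc, yc)
% ADAPT_MOTION_PARAMS  Convert a rotation about the origin plus a shift into an
% equivalent rotation about the ROI centre (xc, yc) plus a shift, eqs. (14)-(15).
    thetaC = theta;                                    % equation (14)
    txC = tx + xc*cos(theta) - yc*sin(theta) - xc;     % equation (15)
    tyC = ty + xc*sin(theta) + yc*cos(theta) - yc;
end
```

For θ = 0 the shift is unchanged, as expected, since the two models then coincide.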

III. SUPER-RESOLUTION RECONSTRUCTION

A. Presentation of the algorithms

This part presents the different super-resolution algorithms that were studied and their performance, concerning both the speed of computation and the subjective quality of the enhanced image.

The first algorithm, interpolation-based reconstruction [2], is the most elementary and consists in a simple interpolation of the LR images. To perform this interpolation, a grid of the size of the desired HR image is created, obtained by multiplying the LR resolution by the SR factor. This grid is centered on zero in order to apply the estimated rotation with the center of the image as reference; the grid is then de-centered and translated according to the estimated translation between the images. The result is a grid at HR resolution that moves according to the estimated rotation and translation between the template image and the current image. This grid is then cubically interpolated with the current image using the griddata MATLAB function, which provides the HR image.

The second algorithm is the Papoulis-Gerchberg algorithm. It assumes that enough pixel values are known in the HR grid, and sacrifices the high frequencies of the final enhanced image. Starting from an HR grid built as explained above, the first step is to copy the known pixel values from the LR images into the HR grid; to do so, the LR pixels are moved according to both the SR factor and the estimated motion, the estimated motion being rounded so as to provide an integer position into which each pixel can be copied. Then the Fourier transform of the resulting grid is computed, the high-frequency components are set to zero, and an inverse Fourier transform yields the new grid. The same process is repeated until it converges. The main idea behind this algorithm is to correct the aliasing created by the up-sampling by interpolating the unknown values of the HR image (a sketch of this iteration is given after the algorithm descriptions).

The third algorithm studied is Projection Onto Convex Sets (POCS). The previous algorithm is known to be a particular case of POCS, so they share similarities. An HR grid is created and, as for Papoulis-Gerchberg, the known pixel values are copied into it. Then a normalized blur filter is iteratively applied to the resulting grid, and the known pixel values are copied again to the nearest integer pixels. This process is repeated until the difference between two successive images falls below a threshold, 10^-4 in our case.

Finally, the fourth and last algorithm, Fast Robust Super-Resolution, is based on the Robust Super-Resolution approach of Farsiu, Elad and Milanfar [3]. Its two main steps are a non-iterative data fusion followed by an iterative deblurring-interpolation. The data fusion consists in taking the median value of all the nearby pixels after the usual interpolation step described above; the deblurring-interpolation then consists in a gradient back-projection that stops after a chosen number of iterations. This last algorithm requires more input images than the others to provide an acceptable result, which is not a problem in this application since the LR images are extracted from a video.
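As announced above, here is a minimal MATLAB sketch of the Papoulis-Gerchberg iteration. It assumes purely translational shifts (one [dx, dy] pair per LR frame, in LR pixels), a square low-frequency cut-off, and a fixed number of iterations; these simplifications and all names are our own, not the report's implementation.

```matlab
function HR = papoulis_gerchberg(LRs, shifts, factor, nIter)
% PAPOULIS_GERCHBERG  Simple multi-frame super-resolution sketch.
% LRs    : cell array of grayscale LR images (same size)
% shifts : K-by-2 matrix of [dx, dy] shifts of each LR frame w.r.t. the template (LR pixels)
% factor : super-resolution factor
% nIter  : number of iterations
    [h, w] = size(LRs{1});
    HR = zeros(h*factor, w*factor);
    known = false(size(HR));
    % Copy the known LR pixel values into the HR grid at rounded positions
    for k = 1:numel(LRs)
        rows = round((0:h-1)*factor + shifts(k,2)*factor) + 1;
        cols = round((0:w-1)*factor + shifts(k,1)*factor) + 1;
        okR = rows >= 1 & rows <= h*factor;
        okC = cols >= 1 & cols <= w*factor;
        HR(rows(okR), cols(okC)) = double(LRs{k}(okR, okC));
        known(rows(okR), cols(okC)) = true;
    end
    knownVals = HR(known);
    % Low-pass mask keeping only the frequencies representable at LR resolution
    mask = false(size(HR));
    mask([1:ceil(h/2), end-floor(h/2)+1:end], [1:ceil(w/2), end-floor(w/2)+1:end]) = true;
    for it = 1:nIter
        F = fft2(HR);
        F(~mask) = 0;              % set the high-frequency components to zero
        HR = real(ifft2(F));
        HR(known) = knownVals;     % re-impose the known pixel values
    end
end
```

The POCS variant described above replaces the Fourier-domain low-pass step with a normalized blur filter applied in the spatial domain, the rest of the iteration staying the same.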
B. Algorithm results

All the algorithms have been tested in the same conditions: 8 low-resolution black-and-white images with a resolution of 28 by 28 pixels, and a super-resolution factor equal to 4. However, the Fast Robust Super-Resolution algorithm could not be compared to the other three, as its result with only 8 images was not satisfying, so there was no point in comparing its performance in those conditions. Several indicators were used to evaluate the algorithms.

First of all, the execution times are shown in the following diagram:

Fig 9: Execution time (in seconds) for each algorithm

The Papoulis-Gerchberg algorithm is clearly slower than the other two; such a computation time remains acceptable for the final application.

The PSNR (peak signal-to-noise ratio) is used to measure the quality of the reconstructed images compared to the original image:

Fig 10: PSNR (in dB) for each algorithm

The PSNR is computed as

    \mathrm{PSNR} = 10 \, \log_{10}\left( \frac{MAX^2}{MSE} \right)

where MAX is the maximum possible pixel value of the image and MSE is the mean squared error, i.e. the sum of all squared pixel differences divided by the image size. For this performance test, higher values are better.
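A one-function MATLAB sketch of this PSNR computation, assuming 8-bit grayscale images (MAX = 255); the function name is ours.

```matlab
function val = psnr_db(recon, ref)
% PSNR_DB  Peak signal-to-noise ratio (in dB) between a reconstruction and a reference image.
    recon = double(recon);
    ref   = double(ref);
    mse = mean((recon(:) - ref(:)).^2);   % sum of squared differences divided by the image size
    val = 10 * log10(255^2 / mse);        % MAX = 255 for 8-bit images
end
```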

According to this objective test, the most accurate algorithm is POCS and the worst is Papoulis-Gerchberg.

These objective tests can be completed with a subjective test: the Mean Opinion Score (MOS). The MOS indicator is obtained by averaging the subjective quality scores given by several people, the best possible score being 10. The results are shown below:

Fig 11: MOS results

Once again, the Papoulis-Gerchberg algorithm gives the worst result, while interpolation and POCS are similar.

To conclude these tests, the interpolation algorithm is the fastest and gives the best results in terms of image quality according to viewers. The POCS algorithm has results similar to the interpolation one, and its PSNR is the highest. The Papoulis-Gerchberg algorithm is the worst in all aspects. The Fast Robust Super-Resolution algorithm, as explained before, could not easily be compared to the other algorithms. To get an idea of its performance, other tests were carried out in different conditions: this algorithm takes about forty seconds to compute an SR image with an SR factor of 4, an initial resolution of 96 by 28 pixels and 55 images. It can be assumed that, if it were efficient with only 4 images, it would be much faster than the other tested algorithms. With 55 images, the subjective result is very good, as details that were totally blurred become clear and readable. This algorithm is therefore interesting in situations where many similar images can be provided, which is not the case in this project: in surveillance videos, for instance, the subject moves a lot and such a set of images cannot be acquired.

IV. USER INTERFACE

A. Objective

The aim of this project was to design software for surveillance use. People hired for surveillance purposes work either on a real-time capture device or on video recordings. The software takes this into account and proposes two functioning modes: Real Time mode and Video Lecture (playback) mode. The software allows the user to select the region he is interested in, zoom in on it, and apply an SR algorithm. It may help identify suspects or detect important details in the background of a scene.

B. Install

Here is the list of the components needed to run the demo version of the software: a video capture device supporting the resolutions 160x120, 320x240 and 640x480; MATLAB; VLC with the ActiveX VLC plugin version 2; and the Image Acquisition Toolbox. Drivers for the camera device can add additional functions to the program.

C. Interface description

The interface allows the user to interact directly with the camera device. Image parameters and other options can be set through the webcam drivers; refer to your driver documentation for more details.

Once the software is launched, Real Time mode is active and the camera device is automatically switched on. As shown below, the button "Ouvrir fichier .avi" (open an .avi file) lets the user switch from Real Time mode to Video Lecture mode.

Fig 12: Main interface of the program

The interface is divided into four parts:
o the center panel, named Video_win, contains the video stream;
o the top right button is the "Ouvrir fichier .avi" button;
o the top right panel, named Configuration Webcam, allows configuring the video capture device plugged into the computer;
o the bottom right panel, named Super Resolution, allows configuring and executing the super-resolution algorithms.

The options proposed in the Configuration Webcam panel are:
o enable or disable the webcam acquisition;
o set the number of frames per second: 3, 24, or 2;
o set the resolution of the video stream: 640x480, 320x240 or 160x120.

The Super Resolution panel allows setting:
o the zone of the video to which the super-resolution is applied;
o the SR factor, in the menu "Facteur d'agrandissement" (magnification factor);
o the algorithm used to compute the HR image, in the menu "Algorithme SR";
o the number of images used to compute the image.
Remark: the number of images used is linked to the quality of the obtained image.

The bottom right button executes the SR algorithm according to all the selected options.

As said earlier, the button "Ouvrir un fichier .avi" switches to the Video Lecture mode. Once the button is clicked, a file explorer pops up and the user can choose the .avi file to be played and analyzed. Once the file is chosen, a new window pops up (see below).

Fig 13: Lecture mode interface

This window contains all the usual actions of a media player: play, pause, and move backwards or forwards. The right panel, named Super Resolution, works exactly as in Real Time mode. Contrary to the Real Time window, this window can be resized as much as desired.

D. Use case

To get an HR image of an object, the user must click on the button "Choisir zone d'intérêt" (choose a region of interest). Whatever the selected mode, the software starts recording a set of images. Once enough images are stored, a region of interest (ROI) has to be chosen; two clicks are needed, defining the diagonal of the square of the chosen zone. To get the high-resolution image, the user must then click on the "Execute" button. A popup then appears with the new image. The user can try different algorithms with different parameters, and save the computed image like any MATLAB figure.

V. CONCLUSION

This project puts in place an efficient tracking algorithm, based on the Lucas-Kanade motion estimator, which provides accurate movement estimation and is an essential step of the SR process. Concerning the SR algorithm, after both objective and subjective comparisons, the interpolation algorithm gives the best result, although the user is still allowed to choose the algorithm in the software. The SR algorithms studied in this project are essentially based on the movement of the subjects in the scene or of the camera. By contrast, in [4], Elad and Feuer addressed the interesting problem of motion-free super-resolution, in which an HR image is derived from a set of blurred and down-sampled versions of the original image.

REFERENCES

[1] S. Baker and I. Matthews, "Lucas-Kanade 20 Years On: A Unifying Framework".
[2] S. C. Park, M. K. Park and M. G. Kang, "Super-Resolution Image Reconstruction: A Technical Overview", IEEE Signal Processing Magazine.
[3] S. Farsiu, M. D. Robinson, M. Elad and P. Milanfar, "Fast and Robust Multiframe Super Resolution", IEEE Transactions on Image Processing, vol. 13, no. 10, October 2004.
[4] M. Elad and A. Feuer, "Restoration of a Single Superresolution Image from Several Blurred, Noisy, and Undersampled Measured Images", IEEE Transactions on Image Processing, 6(12):1646-1658, December 1997.