3D Morphable Model Fitting For Low-Resolution Facial Images


Pouria Mortazavian    Josef Kittler    William Christmas
Centre for Vision, Speech and Signal Processing, University of Surrey, United Kingdom.

Abstract

This paper proposes a new algorithm for fitting a 3D morphable face model on low-resolution (LR) facial images. We analyse the criterion commonly used by the main fitting algorithms and, by comparing it with an image formation model, show that this criterion is only valid if the resolution of the input image is high. We then derive an imaging model to describe the process of LR image formation given the 3D model. Finally, we use this imaging model to improve the fitting criterion. Experimental results show that our algorithm significantly improves fitting results on LR images and yields similar parameters to those that would have been obtained if the input image had a higher resolution. We also show that our algorithm can be used for face recognition at low resolutions where the conventional fitting algorithms fail.

1. Introduction

Since the pioneering work of Blanz and Vetter ([14], [3], [4]), 3D morphable face models have been at the centre of attention in many research areas involving human faces. For many applications it is necessary to estimate the parameters of the model from a single 2D image, a process known as model fitting. The aim of model fitting is to estimate the parameters of a 3D morphable face model such that the model represents the input 2D face image. The process includes estimating the 3D shape and texture of the given face. Different approaches have been proposed for this challenging task (e.g. [4], [10], [9]). The problem of fitting a 3D morphable model to a 2D image is an inverse, ill-posed problem. It is generally approached by minimising an energy function designed to model the error between the model's appearance and the input image in a maximum a posteriori (MAP) estimation framework.
In general, such an energy function is highly non-convex with numerous local minima, and to make things worse, the ambiguity of the problem increases at low resolutions due to the lack of information in the input images. Low-resolution faces lose a significant amount of useful information due to many factors, including the optical blur caused by the camera optics, the finite density of CCD elements, and noise from various sources. Most applications involving low-resolution images deal with these problems by explicitly modelling the image formation process. For instance, in the super-resolution work of [2], the problem of constructing a high-resolution (HR) image from a number of low-resolution (LR) images is approached by constraining the result such that the sought HR image would yield the LR observations through an image formation model. In another example which deals specifically with low-resolution faces, Dedeoglu et al. [5] formulated a method for fitting Active Appearance Models to LR faces by modelling the low-resolution image formation process. We approach the problem of fitting a 3D morphable model to a low-resolution face image in a similar manner, by modelling the low-resolution image formation process. We start by critically analysing two of the main fitting algorithms proposed in the literature [3], [10]. By comparing the image synthesis model assumed by these conventional fitting algorithms with the continuous image formation model, we argue that although these algorithms work well for high-resolution inputs, their application becomes less relevant as the resolution of the input image decreases. We then propose an alternative imaging model which takes into account the point-spread function of the virtual camera, and use this model in the fitting algorithm. Experimental results show that incorporating this imaging model into the fitting algorithm improves performance for low-resolution images.

2. Background
2.1. 3D Morphable Model

A morphable model can be constructed from a collection of 3D face scans, each represented by a shape (S_i) and a texture (T_i) vector. Any face in the collection, as well as any new 3D face, can be synthesised as a linear combination

of the faces in the collection:

S_new = Σ_i a_i S_i ,   T_new = Σ_i b_i T_i   (Σ_i a_i = Σ_i b_i = 1)   (1)

where all the 3D scans are assumed to be in dense point-to-point correspondence, such that for any given j, the j-th vertex corresponds to the same location on all face scans. The face scans used by Blanz and Vetter are represented in cylindrical coordinates, and the authors introduced a dedicated optical-flow-based algorithm to put these scans in dense correspondence. Tena et al. [13], [7] proposed an alternative method called Iterative Multi-Resolution Dense 3D Registration (IMDR), which they used for building a 3D morphable model [7], [12]. Once the correspondence between the vertices of all scans is established, Principal Components Analysis (PCA) is performed separately on the shape and texture vectors to decorrelate the data:

S_mod = S̄ + Σ_{i=1}^{N_S} α_i s_i ,   T_mod = T̄ + Σ_{i=1}^{N_T} β_i t_i   (2)

where S̄ and T̄ are the mean shape and texture values respectively, s_i and t_i are the i-th eigenvectors of the shape and texture covariance matrices respectively, and N_S and N_T are the numbers of shape and texture eigenvectors to be used. Also, α_i and β_i are the mixture coefficients known as the model's shape and texture parameters, respectively. The model used in our work is built using the methodology of [7]. The face scans are obtained using a 3dMD(TM) sensor. The 3D meshes are registered using the IMDR algorithm.

2.2. Model Fitting

Blanz and Vetter also proposed a method for fitting the 3D morphable model to a given 2D face image [4]. Given a 2D face image, I_inp, and a set of manually defined landmarks, L, the fitting algorithm estimates the model parameters, α and β, together with a set of projection parameters, ρ, and a set of illumination parameters, γ, such that rendering the model with the estimated parameters produces an image which resembles the input image. The projection parameters include 3D rotations and translations, and the focal length of a virtual camera.
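The linear PCA model of Equation 2 is straightforward to implement. The sketch below is our own illustration (names and toy dimensions are not from the paper): it synthesises a new shape and texture from the mean vectors and the eigenvector matrices.

```python
import numpy as np

def synthesise_face(S_bar, T_bar, shape_eigvecs, tex_eigvecs, alpha, beta):
    """Eq. 2: new face = mean + linear combination of PCA eigenvectors."""
    S_mod = S_bar + shape_eigvecs @ alpha   # (3N,) shape vector
    T_mod = T_bar + tex_eigvecs @ beta      # (3N,) texture vector
    return S_mod, T_mod

# Toy example: 2 vertices (6 coordinates), N_S = N_T = 2 components.
S_bar = np.zeros(6)
T_bar = np.zeros(6)
eigvecs = np.eye(6)[:, :2]                  # two canonical "eigenvectors"
alpha = np.array([1.5, -0.5])
beta = np.array([0.25, 0.75])
S_mod, T_mod = synthesise_face(S_bar, T_bar, eigvecs, eigvecs, alpha, beta)
```

In a real model the eigenvector matrices come from the PCA of the registered scans, and α, β are the unknowns recovered by fitting.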
The illumination parameters include the parameters of the Phong illumination model, such as light intensity, direction, etc. The synthesised image is supposed to be closest to the input image in terms of Euclidean distance:

E_I = Σ_m ||I_inp(m) − I_mod(m)||²   (3)

where m ∈ Z² is the 2D coordinate of each pixel. Also, the error associated with the manually defined landmarks is expressed as E_F = Σ_j ||q_j − x_{k_j}||², where q_j is the position of the j-th landmark and x_{k_j} is the image position of its corresponding vertex (k_j). The fitting procedure is then formulated as maximising the posterior probability:

{α, β, η} = argmax_{α,β,η} p(α, β, η | I_inp, L) = argmax_{α,β,η} p(I_inp | α, β, η) p(L | α, β, η) p(α) p(β) p(η)   (4)

where η = {ρ, γ} is the set of projection and illumination parameters. The priors p(α) and p(β) are given by PCA, and a Gaussian distribution is assumed for p(η) with ad hoc values for the means, η̄_i, and standard deviations, σ_{R,i}. For independent, identically distributed Gaussian pixel noise with standard deviation σ_I, we have p(I_inp | α, β, η) ∝ exp(−E_I / σ_I²). Furthermore, the locations of the landmarks are assumed to be affected by Gaussian noise, so p(L | α, β, η) ∝ exp(−E_F / σ_F²). Finally, model fitting is performed by minimising:

E = E_I / σ_I² + E_F / σ_F² + Σ_{i=1}^{N_S} α_i² / σ_{S,i}² + Σ_{i=1}^{N_T} β_i² / σ_{T,i}² + Σ_i (η_i − η̄_i)² / σ_{R,i}²   (5)

The above fitting criterion is non-convex and suffers from a large number of local minima. Hence, in order to reduce the risk of getting stuck in local minima, a stochastic optimisation method is used. In [10], Romdhani and Vetter proposed an alternative fitting framework which, in addition to the input image and landmarks, uses multiple features extracted from the input image. The objective function obtained this way is generally smoother and has fewer local minima.
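For concreteness, the MAP criterion of Equation 5 is just a weighted sum of two data terms and three Gaussian priors. A minimal sketch, with our own variable names (E_I and E_F would in practice come from the rendering and landmark residuals):

```python
import numpy as np

def fitting_energy(E_I, E_F, alpha, beta, eta, eta_bar,
                   sigma_I, sigma_F, sigma_S, sigma_T, sigma_R):
    """Eq. 5: image and landmark data terms plus PCA/projection priors."""
    data = E_I / sigma_I**2 + E_F / sigma_F**2
    shape_prior = np.sum(alpha**2 / sigma_S**2)
    tex_prior = np.sum(beta**2 / sigma_T**2)
    proj_prior = np.sum((eta - eta_bar)**2 / sigma_R**2)
    return data + shape_prior + tex_prior + proj_prior

E = fitting_energy(E_I=4.0, E_F=1.0,
                   alpha=np.array([1.0, 2.0]), beta=np.array([1.0]),
                   eta=np.array([0.5]), eta_bar=np.array([0.0]),
                   sigma_I=2.0, sigma_F=1.0,
                   sigma_S=np.array([1.0, 2.0]), sigma_T=np.array([1.0]),
                   sigma_R=np.array([0.5]))
```

The standard deviations act as per-term weights: a small σ makes the corresponding residual or prior dominate the total energy.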
This Multi-Feature Fitting (MFF) algorithm maximises the posterior probability p(θ | f_1(I_inp), f_2(I_inp), ..., f_N(I_inp)), where θ is the set of all sought parameters and each f_i(I_inp) extracts a specific feature. Assuming the features are statistically independent and that deterministic feature extractors are used, it can be shown that maximising the posterior is equivalent to minimising:

− ln p(f_1(I_inp) | θ) − ... − ln p(f_N(I_inp) | θ) − ln p(θ)   (6)

where each negative logarithm of a probability defines a cost function for a specific feature. The features used by Romdhani and Vetter include pixel colour, edges, landmarks, and specular highlights. In addition to these image-based features and the cost corresponding to the prior probabilities of the model parameters, an additional cost is added to constrain the admissible range of texture values. Among the various cost functions used in [8], we mainly focus on the pixel colour value cost function, E_I, which is

defined, similarly to the fitting algorithm of [4], as the Euclidean distance between the synthesised and input images. We also present a novel edge cost function. The synthesised image in the above two fitting algorithms at a given pixel, m, can be expressed ([8]) in terms of the model parameters as:

I_mod(m) = T_c(u; α, β, ρ, γ) ∘ P⁻¹(x; α, ρ)   (7)

where x is the 2D location of the centre of pixel m, and T_c is the model texture defined in the model's reference coordinates (u), illuminated by the Phong reflectance model. P⁻¹ denotes the inverse of the mapping P(u; α, ρ) = x that projects each point u from the model's reference coordinates to the 2D image coordinates. The operator ∘ denotes composition with this mapping. Equation 7 states that in order to obtain the value of a point on the image plane, the 2D coordinates, x, of the point are projected to the model's coordinates, and the value of the illuminated texture at the obtained point is taken as the image value at point x. By investigating an imaging model in the next section, we argue that this process is not suitable for modelling the image formation process in LR images. Hence, any objective function that uses such a synthesised image is not suitable for model fitting on low-resolution input images.

3. The Low-Resolution Problem

We consider the image formation model and argue that the conventional fitting criteria described previously are not suitable for low-resolution images, since the imaging model they assume for the synthesised image becomes invalid as the resolution of the input image decreases. Let m = (m, n) ∈ Z² be the pixel coordinates on the image plane of I. The continuous image formation model can be expressed as [6]:

I(m) = (E ∗ PSF)(m) = ∫_X E(x) · PSF(x − m) dx   (8)

where E(·) is the continuous irradiance light-field that would reach the image plane under the pinhole model, PSF(·)
is the point spread function of the virtual camera, and the integral is taken over x = (x, y) ∈ R², the continuous pixel coordinates on the image plane. The PSF can further be decomposed into two components:

PSF(x) = (w ∗ a)(x)   (9)

where w models the optical blur and a models the spatial integration performed by the virtual CCD sensors. In the simplest case, the optical blur can be neglected (w(x) = δ(x)). Furthermore, we assume that the CCD elements are square with side length s. Hence, a(x) takes the form:

a(x) = 1/A  if |x| < s/2 and |y| < s/2 ;  0 otherwise   (10)

where A = s² is the area of each pixel. The synthesised image used by the conventional fitting algorithms (Eq. 7) is actually the irradiance, E(x), sampled at the pixel-centre locations of the image plane. Such algorithms neglect the effect of the convolution with the camera point spread function, effectively assuming PSF(x) = δ(x). For a high-resolution image, this assumption can be justified, since the pixels can be assumed to have an infinitesimally small size (A → 0). However, as the resolution of the image decreases, this assumption becomes less and less valid, since the effect of the spatial integration becomes more and more significant, making the fitting criterion (Eq. 3) increasingly sub-optimal. Thus, for low-resolution inputs, the image formation model used to synthesise an image from the 3D morphable model must consider the spatial integration in order to synthesise a more realistic image. In order to consider the effects of spatial integration in the imaging model, we need to consider the continuous irradiance field, E(x), that reaches the image plane, as opposed to the conventional imaging models which only sample this irradiance field at the locations of the pixel centres. The continuous irradiance field at the image plane can be obtained in two steps. First, each model vertex is projected to the image plane and its illuminated texture value is computed.
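With the optical blur neglected (w = δ), Equations 8-10 reduce to averaging the irradiance over each s × s pixel footprint. On a discrete high-resolution rendering of the irradiance, this is plain block-averaging; the sketch below is our own illustration of that reduction, not code from the paper.

```python
import numpy as np

def box_downsample(E, s):
    """Eqs. 8-10 with w = delta: integrate the irradiance E over square
    s x s CCD cells, i.e. average non-overlapping s x s blocks."""
    h, w = E.shape
    E = E[:(h // s) * s, :(w // s) * s]          # crop to a multiple of s
    return E.reshape(h // s, s, w // s, s).mean(axis=(1, 3))

E = np.arange(16, dtype=float).reshape(4, 4)     # toy 4x4 irradiance field
I_lr = box_downsample(E, 2)                      # 2x2 low-resolution image
```

Sampling E only at pixel centres, as the conventional criterion does, discards exactly the averaging that this block reduction performs.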
The projection of the vertices is done through the projection P, which comprises a 3D rigid transform (rotation and translation) to map each point of the model's surface from object-centred coordinates to a position relative to the camera in world coordinates, followed by a weak perspective projection which projects this point to the image plane. The illuminated texture values are computed using Phong's reflection model. This gives the irradiance value at certain points over the image plane which correspond to the projected model vertices. The second step for calculating the continuous irradiance field is to compute the irradiance over the rest of the image plane, which can be performed by interpolating between the known irradiance values using the triangle structure of the model. However, for the purpose of synthesising a low-resolution image, the variations of the texture values within each triangle are negligible. Therefore, we assign a single texture value, T̂(k), to each triangle, defined as the average texture of its vertices. Thus, the triangle structure of the model, together with the irradiance values assigned to each triangle, yields a continuous, piecewise-constant irradiance field over the whole face area of the image plane. Given this piecewise-constant irradiance field, the imaging model of Equation 8 can then be estimated by a weighted average of the illuminated texture values of each

triangle within a given pixel:

I_mod(m) = (1/A) Σ_{k∈K} T̂(k) · W(k, m)   (11)

where K is the set of triangles which, after projection to the image plane, overlap with pixel m, and W(k, m) is the area of overlap between pixel m and the k-th triangle after it is projected to the image plane. Equation 11 defines an imaging model for synthesising an image from the model considering the spatial integration over the low-resolution pixels. We use this model to formulate a suitable criterion for fitting a 3D morphable model on LR images.

4. Fitting Algorithm for Low Resolution

In addition to the sub-optimality of the cost function for low-resolution images, the optimisation methods used for optimising the cost function in the fitting algorithms of [7] and [4] are not suitable for the low-resolution case. Assuming that the contributions of the image pixels to the overall cost are redundant, these methods use a stochastic optimisation method which only evaluates the cost over a small number of vertices in each iteration. The reason for using stochastic optimisation was to gain efficiency and avoid local minima, at the cost of limited convergence properties (such as the convergence radius). For a low-resolution input, however, the initial assumption of redundant contributions from the image pixels is no longer valid. In fact, due to the lack of information in a low-resolution image, it is crucial to make sure all available information is used. Hence, we do not use a stochastic optimisation algorithm. Instead, we deal with the problem of local minima by using a multi-feature fitting strategy similar to that of [10]. Due to the use of multiple features, the overall cost function in this framework is smoother, and a stochastic optimisation is not necessary to avoid local minima ([8]). Levenberg-Marquardt optimisation can therefore be used in order to optimise the cost function. We use the landmarks, image edges, and pixel colour values as features in the MFF framework.
A cost function is associated with each of these features. Furthermore, we use two cost functions to account for the shape and texture priors. The landmark cost function and the prior costs in our method are similar to those of the conventional MFF algorithm (see [8] for more details). However, for the edge and pixel colour features, we use the novel cost functions described in the following sections.

4.1. Edge Cost Function

The aim of the edge cost function is to maximise the likelihood p(f_e(I) | α, ρ). Romdhani and Vetter used a deterministic edge detector (Canny) in order to obtain the edge map of the input image. The distance transform of this edge map was then used as a cost surface to evaluate the edge cost function, defined as:

− ln p(f_e(I) | α, ρ) ∝ C_e = e_e^T · e_e   (12)

with e_e^i(α, ρ) = q_e^i − p_i, where q_e^i is the 2D coordinate of the i-th edge point of the input image, and p_i is the 2D location of its corresponding model edge point. However, a single edge detector with a fixed set of parameters (e.g. thresholds) does not always recover suitable edges in real-world images without tuning its parameters. Noting this, in [1] Amberg et al. constructed a smooth silhouette cost surface by integrating the information obtained from multiple edge detectors with different thresholds. We use a similar approach and extend it to multiple resolutions of the input image. Given an input image, we first construct its Gaussian pyramid with l levels. If the input image is HR, we place it at the lowest level (highest resolution) of the pyramid and construct the higher levels (lower resolutions) by blurring and downsampling the original input. On the other hand, if the input is LR, we place it at a higher level of the pyramid, based on its resolution, and construct the lower levels by bilinear interpolation. In the next step, we use the Canny edge detector with a range of different thresholds, {th_1, th_2, ...,
th_n}, to obtain multiple edge maps for each image in the pyramid, and for each edge map we compute its distance transform. Finally, the edge cost surface, S, is constructed by integrating all the available distance transforms as:

S = (1/nl) Σ_{i=1}^{l} Σ_{j=1}^{n} D_i^{th_j} / (D_i^{th_j} + k)

where D_i^{th_j} is the distance transform of the i-th level of the pyramid obtained with the j-th threshold, and k is a smoothing constant whose value is approximately 1/20th of the size of the input face. The cost surface constructed in this manner has the desirable property of having strong minima at locations where the edge is consistent over the range of resolutions and thresholds, while still retaining weaker minima at the locations of weaker edges which are only detected by fewer combinations of resolution and threshold.

4.2. Colour Cost Function

As was argued in Section 3, the conventional pixel colour cost becomes sub-optimal at low resolutions. By replacing the conventional imaging model with our low-resolution imaging model (Eq. 11), we improve the pixel colour cost function. The pixel colour cost function aims to maximise the likelihood of the input image given the model, projection, and illumination parameters, p(I_inp | θ), where θ = {α, β, ρ, γ}, or equivalently to minimise the negative log-likelihood, − ln p(I_inp | θ). By assuming that the image pixels are

affected by independent, identically distributed Gaussian noise, the pixel colour cost function can be expressed as:

− ln p(I_inp | θ) ∝ (1/(2σ_I²)) Σ_m [I_inp(m) − I_mod(m)]²   (13)

where I_mod(m) is given by Equation 11. Unlike conventional fitting algorithms, in which the pixel cost function is summed over the model vertices or polygons, in our formulation of the pixel colour cost function (Eq. 13) the sum is taken over the pixels of the LR input image. As mentioned previously, we do not use a stochastic optimisation method. This means all visible polygons of the model are used in every iteration for evaluating the pixel colour cost function, and the sum in Equation 13 is taken over all pixels of the LR face. Hence, our algorithm is computationally more expensive than the conventional MFF.

5. Experimental Results

We evaluate our theoretical framework on the PIE dataset [11]. The LR images are synthesised by downsampling the original images using different down-sampling factors (DSF). We experiment using a subset of the PIE dataset which covers different poses and illuminations. More specifically, we use pictures of all 68 available individuals in 2 poses, frontal and side (poses 27 and 05, respectively¹), and 3 illumination conditions (illuminations 01, 02, and 13). In total, the subset we use for our experiments contains 408 images covering a broad range of facial shape and texture, as well as different imaging conditions. Figure 1 shows fitting results of a sample image with DSF ranging from 4 to 12. While the conventional fitting fails to recover detailed texture even for DSF = 4, our approach manages to recover a reasonable amount of detail even at much lower resolutions, notably outperforming the conventional method at lower resolutions. Once the model fitting has been performed, one can use the recovered parameters to render the face in any desired resolution, pose, or illumination.
Figure 2 shows an example of the model fitted on an LR image with DSF = 8, rendered at the resolution equivalent to DSF = 2, in other words 4 times enlarged. For comparison, we also include the results of the same rendering when the model was fitted using the conventional MFF algorithm, and bilinear interpolation. Note that the conventional algorithm has completely failed to fit the model when the input resolution is very low (the input resolution in this figure is equivalent to the third column of Figure 1). In the next experiment we compare the similarity of the parameters recovered from LR images with those recovered from HR images. We take the model parameters obtained using the conventional fitting on the original HR images (DSF = 1) as ground truth and measure the similarity of the parameters recovered from lower resolutions to these ground-truth parameters. Note that this is a plausible choice of ground truth since a) the true 3D parameters for these real images are not known, b) the similarity of the HR parameters to the true 3D parameters is outside the scope of this paper and has been addressed in other works [2], and c) in a realistic scenario, these HR parameters are the ones that would be used for most applications (e.g. face recognition) where the input is a 2D image. We measure the similarity in parameter space in terms of the normalised correlation (NC) between the parameter vectors of the HR and LR images. We compare both shape (α) and texture (β) similarity in the parameter space. Figure 3 compares the similarity scores for shape and texture vectors obtained using our algorithm with those obtained using the conventional MFF.

¹ See [11] for details.

Figure 1. Comparison of model fitting on LR images. Columns from left to right show images with DSF = 4, 6, 8, and 12, respectively. Top row: original image; middle row: model fitted using the conventional MFF algorithm; bottom row: model fitted using our LR-MFF approach.
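The normalised-correlation similarity used here is simply the cosine of the angle between two parameter vectors. A minimal sketch (our own helper, with made-up toy vectors):

```python
import numpy as np

def normalised_correlation(u, v):
    """NC similarity between two parameter vectors (1.0 = same direction)."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

hr = np.array([1.0, 2.0, -1.0])      # parameters fitted on the HR image
lr = np.array([0.9, 2.1, -0.8])      # parameters fitted on the LR image
score = normalised_correlation(hr, lr)
```

Because NC is scale-invariant, it compares the direction of the recovered parameter vectors rather than their magnitudes, which makes it robust to a global scaling of the coefficients.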
It is clear from Figure 3 that our algorithm performs significantly better than the conventional MFF at low resolutions. Note that at higher resolutions (DSF < 4), the conventional MFF outperforms our method. This is an expected observation, since our algorithm is specifically designed for low resolutions, where multiple polygons are projected to the same image pixel. In HR images, where this assumption does not hold, our algorithm performs worse than the conventional MFF due to its higher ambiguity. However, since during the early stages of the fitting it can easily be confirmed whether the input image is HR or LR (by considering the estimated focal length and distance to the camera, or by measuring the average area of the projected polygons), it is straightforward to propose a hybrid algorithm which uses the conventional MFF for HR inputs and switches to LR-MFF if the input is LR.

Figure 2. Enlarging the face after fitting the model. (a) Original image at DSF = 2. (b) Bilinear interpolation of DSF = 8 to DSF = 2. (c) Model fitted using the proposed LR-MFF on DSF = 8, rendered at the resolution of DSF = 2. (d) Model fitted using the conventional MFF on DSF = 8, rendered at the resolution of DSF = 2.

Figure 3. Similarity of the model parameters in the parameter space. 3(a): shape similarity; 3(b): texture similarity.

Finally, we use our fitting algorithm for face recognition at low resolution. We fit the morphable model on the frontal HR images (DSF = 2) with ambient illumination only (illumination 00) and take the optimised parameters as the gallery. We then use both algorithms for fitting on all images in our subset and use the optimised parameters as probes. Similarly to [10], in addition to the global model fitting, we fit 4 segments of the face, namely the eyes, nose, mouth, and the rest of the face, separately, in order to increase the descriptiveness of the model. The parameters obtained by the segmented fitting are stacked together with the global parameters to form a descriptive vector. The most common way of forming an identity vector from the model parameters is to stack all the shape and texture parameters of both the global and segmented models in one vector (Equation 14), each normalised by its respective standard deviation. This vector is then used as the identity vector of the given face for identification (e.g. see [10]):

V = [ α_1^g/σ_{S,1} , ..., α_{n_α}^g/σ_{S,n_α} , α_1^{s1}/σ_{S,1} , ..., α_{n_α}^{s1}/σ_{S,n_α} , ..., α_{n_α}^{s4}/σ_{S,n_α} , β_1^g/σ_{T,1} , ..., β_{n_β}^g/σ_{T,n_β} , β_1^{s1}/σ_{T,1} , ..., β_{n_β}^{s4}/σ_{T,n_β} ]   (14)

where superscript g denotes the global model and superscripts s1 to s4 denote the 4 segments of the model, σ_{S,i} and σ_{T,i} denote the standard deviations of the i-th shape and texture parameters respectively, and n_α and n_β denote the number of α and β parameters, respectively. The normalised correlation between the gallery and probe identity vectors is then used as a similarity score to identify the face. We present identification results for two different settings, corresponding respectively to pose and illumination variations. Table 1 shows the rank-1 identification results for the pose variation setting, where the gallery image is frontal with ambient illumination (pose 27, illumination 01) and the probe image is a non-frontal image with similar illumination (pose 05, illumination 01). Table 2 shows these results for the illumination variation setting, where

both the gallery and probe are frontal, but the probe image is subject to side-wise directional light in addition to the ambient light (pose 27, illumination 02). As expected, our proposed method does not perform as well as the conventional MFF at high resolutions. However, its performance is considerably more stable over the whole range of resolutions, and it clearly outperforms the conventional MFF at low resolutions in both settings.

Table 1. Rank-1 identification results for the pose variation setting over a range of resolutions (columns: DSF, MFF, LR-MFF).

Table 2. Rank-1 identification results for the illumination variation setting over a range of resolutions (columns: DSF, MFF, LR-MFF).

6. Conclusions

The problem of fitting a 3D morphable face model on 2D images has been approached by a number of researchers, and different algorithms have been proposed in the literature. We investigated this problem under the assumption that the input 2D image has a low resolution. We argued that the objective function used by the common conventional fitting algorithms becomes sub-optimal in such a scenario, since the image formation model used by these algorithms does not consider the effect of the virtual camera's point spread function, implicitly assuming a continuous image. While this assumption has little impact for model fitting on HR images, it renders the fitting criterion invalid as the image resolution decreases. We proposed a new image formation model, given the model parameters, that incorporates a camera model and takes into account the spatial integration over pixels, resulting in a more accurate modelling of the low-resolution image formation process. We showed that the fitting performance can be improved by incorporating this model into the fitting objective function. Experimental results show that the new fitting algorithm outperforms traditional algorithms in low-resolution scenarios in terms of visual quality and the similarity of the obtained parameters to the HR parameters.
Furthermore, we experimentally confirmed that our fitting is robust enough to be used for face recognition at low resolution with relatively high performance.

7. Acknowledgement

This work was partially supported by EPSRC project ACASVA (Adaptive cognition for automated sports video annotation) under grant EP/F06941/1.

References

[1] B. Amberg, A. Blake, A. W. Fitzgibbon, S. Romdhani, and T. Vetter. Reconstructing high quality face-surfaces using model based stereo. In ICCV, pages 1-8, 2007.
[2] S. Baker and T. Kanade. Limits on super-resolution and how to break them. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(9):1167-1183, September 2002.
[3] V. Blanz and T. Vetter. A morphable model for the synthesis of 3D faces. In A. Rockwood, editor, SIGGRAPH 1999, Computer Graphics Proceedings, pages 187-194, Los Angeles, 1999. Addison Wesley Longman.
[4] V. Blanz and T. Vetter. Face recognition based on fitting a 3D morphable model. IEEE Trans. Pattern Anal. Mach. Intell., 25(9):1063-1074, September 2003.
[5] G. Dedeoglu, S. Baker, and T. Kanade. Resolution-aware fitting of active appearance models to low-resolution images. In Proceedings of the 9th European Conference on Computer Vision, ECCV 2006, volume II. Springer-Verlag, May 2006.
[6] V. S. Nalwa. A Guided Tour of Computer Vision. Addison-Wesley Longman Publishing Co., Inc., Boston, MA, USA, 1993.
[7] J. T. Rodriguez. 3D Face Modelling for 2D+3D Face Recognition. PhD thesis, Centre for Vision, Speech, and Signal Processing, University of Surrey, November 2007.
[8] S. Romdhani. Face Image Analysis Using a Multiple Feature Fitting Strategy. PhD thesis, University of Basel, January 2005.
[9] S. Romdhani and T. Vetter. Efficient, robust and accurate fitting of a 3D morphable model. In ICCV, pages 59-66, 2003.
[10] S. Romdhani and T. Vetter. Estimating 3D shape and texture using pixel intensity, edges, specular highlights, texture constraints and a prior. In CVPR (2), pages 986-993, 2005.
[11] T. Sim, S. Baker, and M. Bsat. The CMU pose, illumination, and expression database. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(12):1615-1618, December 2003.
[12] J. Tena, R. Smith, M. Hamouz, J. Kittler, A. Hilton, and J. Illingworth. 2D face pose normalisation using a 3D morphable model. In Proceedings of the International Conference on Video and Signal Based Surveillance, pages 1-6, September 2007.
[13] J. R. Tena, M. Hamouz, A. Hilton, and J. Illingworth. A validation method for dense non-rigid 3D face registration. In Proceedings of the IEEE International Conference on Advanced Video and Signal-based Surveillance, November 2006.
[14] T. Vetter and V. Blanz. Estimating coloured 3D face models from single images: An example based approach. In ECCV (2), 1998.


Image Interpolation by Pixel Level Data-Dependent Triangulation

Image Interpolation by Pixel Level Data-Dependent Triangulation Volume xx (200y), Number z, pp. 1 7 Image Interpolation by Pixel Level Data-Dependent Triangulation Dan Su, Philip Willis Department of Computer Science, University of Bath, Bath, BA2 7AY, U.K. mapds,

More information

A Comparison of Photometric Normalisation Algorithms for Face Verification

A Comparison of Photometric Normalisation Algorithms for Face Verification A Comparison of Photometric Normalisation Algorithms for Face Verification James Short, Josef Kittler and Kieron Messer Centre for Vision, Speech and Signal Processing University of Surrey Guildford, Surrey,

More information

Tracking Moving Objects In Video Sequences Yiwei Wang, Robert E. Van Dyck, and John F. Doherty Department of Electrical Engineering The Pennsylvania State University University Park, PA16802 Abstract{Object

More information

Component Ordering in Independent Component Analysis Based on Data Power

Component Ordering in Independent Component Analysis Based on Data Power Component Ordering in Independent Component Analysis Based on Data Power Anne Hendrikse Raymond Veldhuis University of Twente University of Twente Fac. EEMCS, Signals and Systems Group Fac. EEMCS, Signals

More information

Image Segmentation and Registration

Image Segmentation and Registration Image Segmentation and Registration Dr. Christine Tanner (tanner@vision.ee.ethz.ch) Computer Vision Laboratory, ETH Zürich Dr. Verena Kaynig, Machine Learning Laboratory, ETH Zürich Outline Segmentation

More information

Canny Edge Detection

Canny Edge Detection Canny Edge Detection 09gr820 March 23, 2009 1 Introduction The purpose of edge detection in general is to significantly reduce the amount of data in an image, while preserving the structural properties

More information

Low-resolution Character Recognition by Video-based Super-resolution

Low-resolution Character Recognition by Video-based Super-resolution 2009 10th International Conference on Document Analysis and Recognition Low-resolution Character Recognition by Video-based Super-resolution Ataru Ohkura 1, Daisuke Deguchi 1, Tomokazu Takahashi 2, Ichiro

More information

PASSIVE DRIVER GAZE TRACKING WITH ACTIVE APPEARANCE MODELS

PASSIVE DRIVER GAZE TRACKING WITH ACTIVE APPEARANCE MODELS PASSIVE DRIVER GAZE TRACKING WITH ACTIVE APPEARANCE MODELS Takahiro Ishikawa Research Laboratories, DENSO CORPORATION Nisshin, Aichi, Japan Tel: +81 (561) 75-1616, Fax: +81 (561) 75-1193 Email: tishika@rlab.denso.co.jp

More information

Lecture 3: Linear methods for classification

Lecture 3: Linear methods for classification Lecture 3: Linear methods for classification Rafael A. Irizarry and Hector Corrada Bravo February, 2010 Today we describe four specific algorithms useful for classification problems: linear regression,

More information

Accurate and robust image superresolution by neural processing of local image representations

Accurate and robust image superresolution by neural processing of local image representations Accurate and robust image superresolution by neural processing of local image representations Carlos Miravet 1,2 and Francisco B. Rodríguez 1 1 Grupo de Neurocomputación Biológica (GNB), Escuela Politécnica

More information

2.2 Creaseness operator

2.2 Creaseness operator 2.2. Creaseness operator 31 2.2 Creaseness operator Antonio López, a member of our group, has studied for his PhD dissertation the differential operators described in this section [72]. He has compared

More information

Multidimensional Scaling for Matching. Low-resolution Face Images

Multidimensional Scaling for Matching. Low-resolution Face Images Multidimensional Scaling for Matching 1 Low-resolution Face Images Soma Biswas, Member, IEEE, Kevin W. Bowyer, Fellow, IEEE, and Patrick J. Flynn, Senior Member, IEEE Abstract Face recognition performance

More information

A Multiresolution 3D Morphable Face Model and Fitting Framework

A Multiresolution 3D Morphable Face Model and Fitting Framework A Multiresolution 3D Morphable Face Model and Fitting Framework Patrik Huber 1, Guosheng Hu 2, Rafael Tena 1, Pouria Mortazavian 3, Willem P. Koppen 1, William Christmas 1, Matthias Rätsch 4 and Josef

More information

Super-resolution method based on edge feature for high resolution imaging

Super-resolution method based on edge feature for high resolution imaging Science Journal of Circuits, Systems and Signal Processing 2014; 3(6-1): 24-29 Published online December 26, 2014 (http://www.sciencepublishinggroup.com/j/cssp) doi: 10.11648/j.cssp.s.2014030601.14 ISSN:

More information

A Sampled Texture Prior for Image Super-Resolution

A Sampled Texture Prior for Image Super-Resolution A Sampled Texture Prior for Image Super-Resolution Lyndsey C. Pickup, Stephen J. Roberts and Andrew Zisserman Robotics Research Group Department of Engineering Science University of Oxford Parks Road,

More information

Lighting Estimation in Indoor Environments from Low-Quality Images

Lighting Estimation in Indoor Environments from Low-Quality Images Lighting Estimation in Indoor Environments from Low-Quality Images Natalia Neverova, Damien Muselet, Alain Trémeau Laboratoire Hubert Curien UMR CNRS 5516, University Jean Monnet, Rue du Professeur Benoît

More information

DYNAMIC RANGE IMPROVEMENT THROUGH MULTIPLE EXPOSURES. Mark A. Robertson, Sean Borman, and Robert L. Stevenson

DYNAMIC RANGE IMPROVEMENT THROUGH MULTIPLE EXPOSURES. Mark A. Robertson, Sean Borman, and Robert L. Stevenson c 1999 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or

More information

ROBUST VEHICLE TRACKING IN VIDEO IMAGES BEING TAKEN FROM A HELICOPTER

ROBUST VEHICLE TRACKING IN VIDEO IMAGES BEING TAKEN FROM A HELICOPTER ROBUST VEHICLE TRACKING IN VIDEO IMAGES BEING TAKEN FROM A HELICOPTER Fatemeh Karimi Nejadasl, Ben G.H. Gorte, and Serge P. Hoogendoorn Institute of Earth Observation and Space System, Delft University

More information

3 Image-Based Photo Hulls. 2 Image-Based Visual Hulls. 3.1 Approach. 3.2 Photo-Consistency. Figure 1. View-dependent geometry.

3 Image-Based Photo Hulls. 2 Image-Based Visual Hulls. 3.1 Approach. 3.2 Photo-Consistency. Figure 1. View-dependent geometry. Image-Based Photo Hulls Greg Slabaugh, Ron Schafer Georgia Institute of Technology Center for Signal and Image Processing Atlanta, GA 30332 {slabaugh, rws}@ece.gatech.edu Mat Hans Hewlett-Packard Laboratories

More information

Information Fusion in Low-Resolution Iris Videos using Principal Components Transform

Information Fusion in Low-Resolution Iris Videos using Principal Components Transform Information Fusion in Low-Resolution Iris Videos using Principal Components Transform Raghavender Jillela, Arun Ross West Virginia University {Raghavender.Jillela, Arun.Ross}@mail.wvu.edu Patrick J. Flynn

More information

3D Model-Based Face Recognition in Video

3D Model-Based Face Recognition in Video The 2 nd International Conference on Biometrics, eoul, Korea, 2007 3D Model-Based Face Recognition in Video Unsang Park and Anil K. Jain Department of Computer cience and Engineering Michigan tate University

More information

Logistic Regression. Jia Li. Department of Statistics The Pennsylvania State University. Logistic Regression

Logistic Regression. Jia Li. Department of Statistics The Pennsylvania State University. Logistic Regression Logistic Regression Department of Statistics The Pennsylvania State University Email: jiali@stat.psu.edu Logistic Regression Preserve linear classification boundaries. By the Bayes rule: Ĝ(x) = arg max

More information

A PHOTOGRAMMETRIC APPRAOCH FOR AUTOMATIC TRAFFIC ASSESSMENT USING CONVENTIONAL CCTV CAMERA

A PHOTOGRAMMETRIC APPRAOCH FOR AUTOMATIC TRAFFIC ASSESSMENT USING CONVENTIONAL CCTV CAMERA A PHOTOGRAMMETRIC APPRAOCH FOR AUTOMATIC TRAFFIC ASSESSMENT USING CONVENTIONAL CCTV CAMERA N. Zarrinpanjeh a, F. Dadrassjavan b, H. Fattahi c * a Islamic Azad University of Qazvin - nzarrin@qiau.ac.ir

More information

Least Squares Estimation

Least Squares Estimation Least Squares Estimation SARA A VAN DE GEER Volume 2, pp 1041 1045 in Encyclopedia of Statistics in Behavioral Science ISBN-13: 978-0-470-86080-9 ISBN-10: 0-470-86080-4 Editors Brian S Everitt & David

More information

Normalisation of 3D Face Data

Normalisation of 3D Face Data Normalisation of 3D Face Data Chris McCool, George Mamic, Clinton Fookes and Sridha Sridharan Image and Video Research Laboratory Queensland University of Technology, 2 George Street, Brisbane, Australia,

More information

Face Recognition in Low-resolution Images by Using Local Zernike Moments

Face Recognition in Low-resolution Images by Using Local Zernike Moments Proceedings of the International Conference on Machine Vision and Machine Learning Prague, Czech Republic, August14-15, 014 Paper No. 15 Face Recognition in Low-resolution Images by Using Local Zernie

More information

Mean-Shift Tracking with Random Sampling

Mean-Shift Tracking with Random Sampling 1 Mean-Shift Tracking with Random Sampling Alex Po Leung, Shaogang Gong Department of Computer Science Queen Mary, University of London, London, E1 4NS Abstract In this work, boosting the efficiency of

More information

Assessment. Presenter: Yupu Zhang, Guoliang Jin, Tuo Wang Computer Vision 2008 Fall

Assessment. Presenter: Yupu Zhang, Guoliang Jin, Tuo Wang Computer Vision 2008 Fall Automatic Photo Quality Assessment Presenter: Yupu Zhang, Guoliang Jin, Tuo Wang Computer Vision 2008 Fall Estimating i the photorealism of images: Distinguishing i i paintings from photographs h Florin

More information

Epipolar Geometry. Readings: See Sections 10.1 and 15.6 of Forsyth and Ponce. Right Image. Left Image. e(p ) Epipolar Lines. e(q ) q R.

Epipolar Geometry. Readings: See Sections 10.1 and 15.6 of Forsyth and Ponce. Right Image. Left Image. e(p ) Epipolar Lines. e(q ) q R. Epipolar Geometry We consider two perspective images of a scene as taken from a stereo pair of cameras (or equivalently, assume the scene is rigid and imaged with a single camera from two different locations).

More information

Environmental Remote Sensing GEOG 2021

Environmental Remote Sensing GEOG 2021 Environmental Remote Sensing GEOG 2021 Lecture 4 Image classification 2 Purpose categorising data data abstraction / simplification data interpretation mapping for land cover mapping use land cover class

More information

An Iterative Image Registration Technique with an Application to Stereo Vision

An Iterative Image Registration Technique with an Application to Stereo Vision An Iterative Image Registration Technique with an Application to Stereo Vision Bruce D. Lucas Takeo Kanade Computer Science Department Carnegie-Mellon University Pittsburgh, Pennsylvania 15213 Abstract

More information

Extracting a Good Quality Frontal Face Images from Low Resolution Video Sequences

Extracting a Good Quality Frontal Face Images from Low Resolution Video Sequences Extracting a Good Quality Frontal Face Images from Low Resolution Video Sequences Pritam P. Patil 1, Prof. M.V. Phatak 2 1 ME.Comp, 2 Asst.Professor, MIT, Pune Abstract The face is one of the important

More information

Feature Tracking and Optical Flow

Feature Tracking and Optical Flow 02/09/12 Feature Tracking and Optical Flow Computer Vision CS 543 / ECE 549 University of Illinois Derek Hoiem Many slides adapted from Lana Lazebnik, Silvio Saverse, who in turn adapted slides from Steve

More information

3D Face Modeling. Vuong Le. IFP group, Beckman Institute University of Illinois ECE417 Spring 2013

3D Face Modeling. Vuong Le. IFP group, Beckman Institute University of Illinois ECE417 Spring 2013 3D Face Modeling Vuong Le IFP group, Beckman Institute University of Illinois ECE417 Spring 2013 Contents Motivation 3D facial geometry modeling 3D facial geometry acquisition 3D facial deformation modeling

More information

OBJECT TRACKING USING LOG-POLAR TRANSFORMATION

OBJECT TRACKING USING LOG-POLAR TRANSFORMATION OBJECT TRACKING USING LOG-POLAR TRANSFORMATION A Thesis Submitted to the Gradual Faculty of the Louisiana State University and Agricultural and Mechanical College in partial fulfillment of the requirements

More information

EM Clustering Approach for Multi-Dimensional Analysis of Big Data Set

EM Clustering Approach for Multi-Dimensional Analysis of Big Data Set EM Clustering Approach for Multi-Dimensional Analysis of Big Data Set Amhmed A. Bhih School of Electrical and Electronic Engineering Princy Johnson School of Electrical and Electronic Engineering Martin

More information

Fully Automatic Pose-Invariant Face Recognition via 3D Pose Normalization

Fully Automatic Pose-Invariant Face Recognition via 3D Pose Normalization MITSUBISHI ELECTRIC RESEARCH LABORATORIES http://www.merl.com Fully Automatic Pose-Invariant Face Recognition via 3D Pose Normalization Asthana, A.; Marks, T.K.; Jones, M.J.; Tieu, K.H.; Rohith, M.V. TR2011-074

More information

Factor analysis. Angela Montanari

Factor analysis. Angela Montanari Factor analysis Angela Montanari 1 Introduction Factor analysis is a statistical model that allows to explain the correlations between a large number of observed correlated variables through a small number

More information

The Visual Internet of Things System Based on Depth Camera

The Visual Internet of Things System Based on Depth Camera The Visual Internet of Things System Based on Depth Camera Xucong Zhang 1, Xiaoyun Wang and Yingmin Jia Abstract The Visual Internet of Things is an important part of information technology. It is proposed

More information

Convolution. 1D Formula: 2D Formula: Example on the web: http://www.jhu.edu/~signals/convolve/

Convolution. 1D Formula: 2D Formula: Example on the web: http://www.jhu.edu/~signals/convolve/ Basic Filters (7) Convolution/correlation/Linear filtering Gaussian filters Smoothing and noise reduction First derivatives of Gaussian Second derivative of Gaussian: Laplacian Oriented Gaussian filters

More information

3D Model based Object Class Detection in An Arbitrary View

3D Model based Object Class Detection in An Arbitrary View 3D Model based Object Class Detection in An Arbitrary View Pingkun Yan, Saad M. Khan, Mubarak Shah School of Electrical Engineering and Computer Science University of Central Florida http://www.eecs.ucf.edu/

More information

Finding people in repeated shots of the same scene

Finding people in repeated shots of the same scene Finding people in repeated shots of the same scene Josef Sivic 1 C. Lawrence Zitnick Richard Szeliski 1 University of Oxford Microsoft Research Abstract The goal of this work is to find all occurrences

More information

Image Super-Resolution for Improved Automatic Target Recognition

Image Super-Resolution for Improved Automatic Target Recognition Image Super-Resolution for Improved Automatic Target Recognition Raymond S. Wagner a and Donald Waagen b and Mary Cassabaum b a Rice University, Houston, TX b Raytheon Missile Systems, Tucson, AZ ABSTRACT

More information

Are we ready for Autonomous Driving? The KITTI Vision Benchmark Suite

Are we ready for Autonomous Driving? The KITTI Vision Benchmark Suite Are we ready for Autonomous Driving? The KITTI Vision Benchmark Suite Philip Lenz 1 Andreas Geiger 2 Christoph Stiller 1 Raquel Urtasun 3 1 KARLSRUHE INSTITUTE OF TECHNOLOGY 2 MAX-PLANCK-INSTITUTE IS 3

More information

SUPER RESOLUTION FROM MULTIPLE LOW RESOLUTION IMAGES

SUPER RESOLUTION FROM MULTIPLE LOW RESOLUTION IMAGES SUPER RESOLUTION FROM MULTIPLE LOW RESOLUTION IMAGES ABSTRACT Florin Manaila 1 Costin-Anton Boiangiu 2 Ion Bucur 3 Although the technology of optical instruments is constantly advancing, the capture of

More information

A unified representation for interactive 3D modeling

A unified representation for interactive 3D modeling A unified representation for interactive 3D modeling Dragan Tubić, Patrick Hébert, Jean-Daniel Deschênes and Denis Laurendeau Computer Vision and Systems Laboratory, University Laval, Québec, Canada [tdragan,hebert,laurendeau]@gel.ulaval.ca

More information

Image Normalization for Illumination Compensation in Facial Images

Image Normalization for Illumination Compensation in Facial Images Image Normalization for Illumination Compensation in Facial Images by Martin D. Levine, Maulin R. Gandhi, Jisnu Bhattacharyya Department of Electrical & Computer Engineering & Center for Intelligent Machines

More information

Least-Squares Intersection of Lines

Least-Squares Intersection of Lines Least-Squares Intersection of Lines Johannes Traa - UIUC 2013 This write-up derives the least-squares solution for the intersection of lines. In the general case, a set of lines will not intersect at a

More information

Superresolution images reconstructed from aliased images

Superresolution images reconstructed from aliased images Superresolution images reconstructed from aliased images Patrick Vandewalle, Sabine Süsstrunk and Martin Vetterli LCAV - School of Computer and Communication Sciences Ecole Polytechnique Fédérale de Lausanne

More information

High Quality Image Deblurring Panchromatic Pixels

High Quality Image Deblurring Panchromatic Pixels High Quality Image Deblurring Panchromatic Pixels ACM Transaction on Graphics vol. 31, No. 5, 2012 Sen Wang, Tingbo Hou, John Border, Hong Qin, and Rodney Miller Presented by Bong-Seok Choi School of Electrical

More information

Determining optimal window size for texture feature extraction methods

Determining optimal window size for texture feature extraction methods IX Spanish Symposium on Pattern Recognition and Image Analysis, Castellon, Spain, May 2001, vol.2, 237-242, ISBN: 84-8021-351-5. Determining optimal window size for texture feature extraction methods Domènec

More information

3D Human Face Recognition Using Point Signature

3D Human Face Recognition Using Point Signature 3D Human Face Recognition Using Point Signature Chin-Seng Chua, Feng Han, Yeong-Khing Ho School of Electrical and Electronic Engineering Nanyang Technological University, Singapore 639798 ECSChua@ntu.edu.sg

More information

Lecture Topic: Low-Rank Approximations

Lecture Topic: Low-Rank Approximations Lecture Topic: Low-Rank Approximations Low-Rank Approximations We have seen principal component analysis. The extraction of the first principle eigenvalue could be seen as an approximation of the original

More information

Algorithms for the resizing of binary and grayscale images using a logical transform

Algorithms for the resizing of binary and grayscale images using a logical transform Algorithms for the resizing of binary and grayscale images using a logical transform Ethan E. Danahy* a, Sos S. Agaian b, Karen A. Panetta a a Dept. of Electrical and Computer Eng., Tufts University, 161

More information

Single Image Super-Resolution using Gaussian Process Regression

Single Image Super-Resolution using Gaussian Process Regression Single Image Super-Resolution using Gaussian Process Regression He He and Wan-Chi Siu Department of Electronic and Information Engineering The Hong Kong Polytechnic University {07821020d, enwcsiu@polyu.edu.hk}

More information

Highlight Removal by Illumination-Constrained Inpainting

Highlight Removal by Illumination-Constrained Inpainting Highlight Removal by Illumination-Constrained Inpainting Ping Tan Stephen Lin Long Quan Heung-Yeung Shum Microsoft Research, Asia Hong Kong University of Science and Technology Abstract We present a single-image

More information

Template-based Eye and Mouth Detection for 3D Video Conferencing

Template-based Eye and Mouth Detection for 3D Video Conferencing Template-based Eye and Mouth Detection for 3D Video Conferencing Jürgen Rurainsky and Peter Eisert Fraunhofer Institute for Telecommunications - Heinrich-Hertz-Institute, Image Processing Department, Einsteinufer

More information

Practical Tour of Visual tracking. David Fleet and Allan Jepson January, 2006

Practical Tour of Visual tracking. David Fleet and Allan Jepson January, 2006 Practical Tour of Visual tracking David Fleet and Allan Jepson January, 2006 Designing a Visual Tracker: What is the state? pose and motion (position, velocity, acceleration, ) shape (size, deformation,

More information

Efficient online learning of a non-negative sparse autoencoder

Efficient online learning of a non-negative sparse autoencoder and Machine Learning. Bruges (Belgium), 28-30 April 2010, d-side publi., ISBN 2-93030-10-2. Efficient online learning of a non-negative sparse autoencoder Andre Lemme, R. Felix Reinhart and Jochen J. Steil

More information

Announcements. Active stereo with structured light. Project structured light patterns onto the object

Announcements. Active stereo with structured light. Project structured light patterns onto the object Announcements Active stereo with structured light Project 3 extension: Wednesday at noon Final project proposal extension: Friday at noon > consult with Steve, Rick, and/or Ian now! Project 2 artifact

More information

Machine Learning and Pattern Recognition Logistic Regression

Machine Learning and Pattern Recognition Logistic Regression Machine Learning and Pattern Recognition Logistic Regression Course Lecturer:Amos J Storkey Institute for Adaptive and Neural Computation School of Informatics University of Edinburgh Crichton Street,

More information

From Few to Many: Illumination Cone Models for Face Recognition Under Variable Lighting and Pose. Abstract

From Few to Many: Illumination Cone Models for Face Recognition Under Variable Lighting and Pose. Abstract To Appear in the IEEE Trans. on Pattern Analysis and Machine Intelligence From Few to Many: Illumination Cone Models for Face Recognition Under Variable Lighting and Pose Athinodoros S. Georghiades Peter

More information

A Fully Automatic Approach for Human Recognition from Profile Images Using 2D and 3D Ear Data

A Fully Automatic Approach for Human Recognition from Profile Images Using 2D and 3D Ear Data A Fully Automatic Approach for Human Recognition from Profile Images Using 2D and 3D Ear Data S. M. S. Islam, M. Bennamoun, A. S. Mian and R. Davies School of Computer Science and Software Engineering,

More information

P164 Tomographic Velocity Model Building Using Iterative Eigendecomposition

P164 Tomographic Velocity Model Building Using Iterative Eigendecomposition P164 Tomographic Velocity Model Building Using Iterative Eigendecomposition K. Osypov* (WesternGeco), D. Nichols (WesternGeco), M. Woodward (WesternGeco) & C.E. Yarman (WesternGeco) SUMMARY Tomographic

More information

High Performance GPU-based Preprocessing for Time-of-Flight Imaging in Medical Applications

High Performance GPU-based Preprocessing for Time-of-Flight Imaging in Medical Applications High Performance GPU-based Preprocessing for Time-of-Flight Imaging in Medical Applications Jakob Wasza 1, Sebastian Bauer 1, Joachim Hornegger 1,2 1 Pattern Recognition Lab, Friedrich-Alexander University

More information

Sachin Patel HOD I.T Department PCST, Indore, India. Parth Bhatt I.T Department, PCST, Indore, India. Ankit Shah CSE Department, KITE, Jaipur, India

Sachin Patel HOD I.T Department PCST, Indore, India. Parth Bhatt I.T Department, PCST, Indore, India. Ankit Shah CSE Department, KITE, Jaipur, India Image Enhancement Using Various Interpolation Methods Parth Bhatt I.T Department, PCST, Indore, India Ankit Shah CSE Department, KITE, Jaipur, India Sachin Patel HOD I.T Department PCST, Indore, India

More information

Colorado School of Mines Computer Vision Professor William Hoff

Colorado School of Mines Computer Vision Professor William Hoff Professor William Hoff Dept of Electrical Engineering &Computer Science http://inside.mines.edu/~whoff/ 1 Introduction to 2 What is? A process that produces from images of the external world a description

More information

Vision based Vehicle Tracking using a high angle camera

Vision based Vehicle Tracking using a high angle camera Vision based Vehicle Tracking using a high angle camera Raúl Ignacio Ramos García Dule Shu gramos@clemson.edu dshu@clemson.edu Abstract A vehicle tracking and grouping algorithm is presented in this work

More information

USING SPECTRAL RADIUS RATIO FOR NODE DEGREE TO ANALYZE THE EVOLUTION OF SCALE- FREE NETWORKS AND SMALL-WORLD NETWORKS

USING SPECTRAL RADIUS RATIO FOR NODE DEGREE TO ANALYZE THE EVOLUTION OF SCALE- FREE NETWORKS AND SMALL-WORLD NETWORKS USING SPECTRAL RADIUS RATIO FOR NODE DEGREE TO ANALYZE THE EVOLUTION OF SCALE- FREE NETWORKS AND SMALL-WORLD NETWORKS Natarajan Meghanathan Jackson State University, 1400 Lynch St, Jackson, MS, USA natarajan.meghanathan@jsums.edu

More information

Edge tracking for motion segmentation and depth ordering

Edge tracking for motion segmentation and depth ordering Edge tracking for motion segmentation and depth ordering P. Smith, T. Drummond and R. Cipolla Department of Engineering University of Cambridge Cambridge CB2 1PZ,UK {pas1001 twd20 cipolla}@eng.cam.ac.uk

More information

How To Fix Out Of Focus And Blur Images With A Dynamic Template Matching Algorithm

How To Fix Out Of Focus And Blur Images With A Dynamic Template Matching Algorithm IJSTE - International Journal of Science Technology & Engineering Volume 1 Issue 10 April 2015 ISSN (online): 2349-784X Image Estimation Algorithm for Out of Focus and Blur Images to Retrieve the Barcode

More information

Image Hallucination Using Neighbor Embedding over Visual Primitive Manifolds

Image Hallucination Using Neighbor Embedding over Visual Primitive Manifolds Image Hallucination Using Neighbor Embedding over Visual Primitive Manifolds Wei Fan & Dit-Yan Yeung Department of Computer Science and Engineering, Hong Kong University of Science and Technology {fwkevin,dyyeung}@cse.ust.hk

More information

Automatic Calibration of an In-vehicle Gaze Tracking System Using Driver s Typical Gaze Behavior

Automatic Calibration of an In-vehicle Gaze Tracking System Using Driver s Typical Gaze Behavior Automatic Calibration of an In-vehicle Gaze Tracking System Using Driver s Typical Gaze Behavior Kenji Yamashiro, Daisuke Deguchi, Tomokazu Takahashi,2, Ichiro Ide, Hiroshi Murase, Kazunori Higuchi 3,

More information

Image Super-Resolution via Sparse Representation

Image Super-Resolution via Sparse Representation 1 Image Super-Resolution via Sparse Representation Jianchao Yang, Student Member, IEEE, John Wright, Student Member, IEEE Thomas Huang, Life Fellow, IEEE and Yi Ma, Senior Member, IEEE Abstract This paper

More information

Modelling, Extraction and Description of Intrinsic Cues of High Resolution Satellite Images: Independent Component Analysis based approaches

Modelling, Extraction and Description of Intrinsic Cues of High Resolution Satellite Images: Independent Component Analysis based approaches Modelling, Extraction and Description of Intrinsic Cues of High Resolution Satellite Images: Independent Component Analysis based approaches PhD Thesis by Payam Birjandi Director: Prof. Mihai Datcu Problematic

More information

Super-resolution Reconstruction Algorithm Based on Patch Similarity and Back-projection Modification

Super-resolution Reconstruction Algorithm Based on Patch Similarity and Back-projection Modification 1862 JOURNAL OF SOFTWARE, VOL 9, NO 7, JULY 214 Super-resolution Reconstruction Algorithm Based on Patch Similarity and Back-projection Modification Wei-long Chen Digital Media College, Sichuan Normal

More information

Interactive person re-identification in TV series

Interactive person re-identification in TV series Interactive person re-identification in TV series Mika Fischer Hazım Kemal Ekenel Rainer Stiefelhagen CV:HCI lab, Karlsruhe Institute of Technology Adenauerring 2, 76131 Karlsruhe, Germany E-mail: {mika.fischer,ekenel,rainer.stiefelhagen}@kit.edu

More information

Redundant Wavelet Transform Based Image Super Resolution
