Evaluation of wavelet based linear subspace techniques for face recognition


Hima Deepthi Vankayalapati

Evaluation of wavelet based linear subspace techniques for face recognition

Master's thesis submitted in partial fulfillment of the requirements for the academic degree Diplom-Ingenieurin in the Information Technology programme

Alpen-Adria-Universität Klagenfurt, Fakultät für Technische Wissenschaften

Supervisor: Univ.-Prof. Dr.-Ing. Kyandoghere Kyamakya, Institute for Smart System-Technologies, Transportation Informatics

Klagenfurt, October 2008

Statutory Declaration

I hereby declare in lieu of oath that I have written this academic thesis independently and have carried out the work directly connected with it myself. I further declare that I have used no aids other than those indicated. All formulations and concepts taken verbatim or in substance from printed or unprinted sources or from the Internet have been cited in accordance with the rules for academic work and identified by footnotes or other precise references. The support received during the work, including significant supervision, is fully acknowledged. This thesis has not been submitted to any other examination authority. The thesis has been submitted in printed and electronic form, and I confirm that the content of the digital version is identical to that of the printed version. I am aware that a false declaration will have legal consequences.

(Signature) (Place, date)

Abstract

Face recognition is a complex visual classification task which plays an important role in computer vision, image processing and pattern recognition. Research on face recognition started in the early 1960s. The initial research directions were based on locating features such as the eyes, ears, nose and mouth in a photograph and calculating the distances between the reference image and the stored images. In the 1990s, researchers introduced linear subspace techniques, which are statistical techniques, to the face recognition problem. The introduction of the linear subspace techniques is a milestone in face recognition. The main objective of this thesis is to improve the recognition rate of existing face recognition methods for large databases with varying pose, expression and environmental conditions (lighting, weather, etc.). The human skill of identifying thousands of people even after many years, under different aging, lighting and viewing conditions, has motivated many researchers to focus on face recognition systems. Researchers have developed various biometric techniques to identify or recognize persons from physical characteristics such as fingerprints, voice and face. These biometric techniques have their own advantages and drawbacks. Among all biometric techniques, face recognition has the distinct advantage that the required data or image can be collected without the individual's cooperation. In this work, we attempt to answer the following research questions:

- Is the face recognition system invariant to facial expressions?
- Is the face recognition system invariant to environmental conditions (such as background and climate changes)?
- Is the face recognition system invariant to viewing conditions?
- Does the face recognition system give a high recognition rate for large databases?
- Is the wavelet based linear subspace technique a good alternative for face recognition tasks?
- What is the efficiency of the wavelet based linear subspace technique compared with its counterparts?

In this thesis, different linear subspace techniques, namely principal component analysis (PCA), independent component analysis (ICA) and linear discriminant analysis (LDA),

are studied and implemented. Experiments are conducted to improve the performance of face recognition algorithms with varying side views, expressions and illumination conditions. An emphasis is placed on the importance of testing the algorithms on a larger database in order to check their robustness. Among the three linear subspace techniques, LDA gives considerably better performance on larger databases. However, the LDA recognition rate is not sufficient for real time applications, due to its inability to extract nonlinear features. In real time applications, the person must be recognized among a large number of stored images with different background and environmental conditions. In this thesis, wavelets are proposed to extract the nonlinear features in images before applying the linear subspace techniques. Wavelets use nonlinear bases and give both the spatial and the frequency information of the images at the same time. The results obtained show that the proposed wavelet based linear discriminant analysis (WLDA) performs better than LDA on larger databases. Finally, the proposed WLDA technique is implemented in Simulink and on an FPGA as a soft core with the help of the DSP Builder from Altera.

Keywords: Face recognition, linear subspace techniques, wavelets.

Acknowledgments

I would like to thank Univ.-Prof. Dr.-Ing. Kyandoghere Kyamakya for giving me the opportunity to work in the institute and for his constant encouragement, support and advice throughout my work. It was a great learning experience working with the Institute for Smart System-Technologies team, and especially with the other students working in the institute. My special thanks go to Dr.-Ing. Chedjou and Dr. Tuan for their valuable suggestions. Last but not least, I am deeply indebted to my husband for his constant moral support and valuable suggestions at the right time.

Contents

List of Figures
List of Abbreviations
Nomenclature

1 Motivation
1.1 Motivation
1.2 Problem statement
1.3 Major contributions of the thesis
1.4 Thesis outline

2 Background
2.1 Introduction
2.2 Digital image
2.2.1 Image-Scalar operations
2.2.2 Image-Image operations
2.3 Image statistics
2.3.1 Histogram of the image
2.3.2 Mean of the image
2.3.3 Variance of the image
2.4 Image acquisition
2.5 Feature selection and extraction
2.5.1 Feature selection
2.5.2 Feature extraction

3 Face Recognition
3.1 Introduction
3.1.1 History of face recognition
3.1.2 Face recognition basics
3.1.3 Face detection and recognition
3.2 Face recognition approaches
3.2.1 Feature based face recognition
3.2.2 Model based face recognition
3.2.3 Template based face recognition
3.2.4 Appearance based face recognition
3.2.5 Face recognition applications

4 Linear subspace techniques
4.1 Introduction
4.2 Linear subspace techniques
4.2.1 Image representation
4.2.2 Mean image from the face database
4.2.3 Covariance matrix from the face database
4.3 Principal Component Analysis
4.3.1 Calculation of mean and covariance matrix from the given database
4.3.2 Calculation of eigenvalues and eigenvectors
4.3.3 Formation of a feature vector
4.3.4 Derivation of a new data set
4.3.5 PCA example
4.3.6 Eigen face method
4.3.7 Face recognition using eigen face technique
4.4 Linear Discriminant Analysis
4.4.1 LDA approaches
4.4.2 Fisher's linear discriminant technique
4.4.3 Subspace LDA
4.4.4 Direct LDA for face recognition
4.5 Independent Component Analysis
4.5.1 Definitions of linear ICA
4.5.2 Objective or contrast function of ICA
4.5.3 Optimization algorithms of ICA
4.5.4 ICA based face recognition
4.6 Comparison of linear subspace techniques
4.7 Summary

5 Nonlinear Analysis
5.1 Introduction
5.2 Kernel based face recognition
5.2.1 Kernel PCA
5.2.2 Kernel PCA for face recognition
5.2.3 Kernel FLD for face recognition
5.2.4 Kernel ICA for face recognition
5.3 Disadvantages of kernel based nonlinear analysis
5.4 Summary

6 Wavelet Transform
6.1 Introduction
6.2 Difference between fourier analysis and wavelet analysis
6.3 Wavelet transform classification
6.3.1 Continuous wavelet transform
6.3.2 Discrete wavelet transform
6.4 Daubechies wavelet transform
6.5 Wavelet based linear subspace techniques
6.6 Summary

7 Distance Measures
7.1 Introduction
7.2 Distance measures for face recognition
7.2.1 Euclidean distance
7.2.2 Standardized euclidean distance
7.2.3 Mahalanobis distance
7.2.4 City block distance metric
7.2.5 Minkowski distance metric
7.2.6 Cosine distance metric
7.3 Face recognition evaluation
7.3.1 Watch list ROC
7.3.2 Verification ROC
7.4 Summary

8 Database
8.1 Introduction
8.2 ORL database
8.3 FERET face database
8.4 AR face database
8.5 CVL face database
8.6 Essex face94 database
8.7 Summary

9 Field Programmable Gate Array
9.1 Introduction
9.2 Cyclone II FPGA
9.3 DSP Builder tool
9.4 Summary

10 Experimental results
10.1 Eigen vectors versus recognition rate
10.2 Comparison of linear subspace techniques
10.3 Wavelets based linear subspace techniques
10.3.1 Different wavelet functions versus recognition rate
10.3.2 Different wavelet coefficients versus recognition rate
10.4 Effect of database size on recognition
10.5 Importance of test image on the recognition rate
10.6 Impact of database quality on recognition rate
10.7 FPGAs deployment

Conclusion
Future work

Bibliography

List of Figures

2.1 Representation of the rectangular shape digital image into pixels P(X, Y)
2.2 Gray scale representation of the example image
2.3 Color storage representation in the color images
2.4 R, G and B color channels of the example color image
2.5 Image scalar operations (a) Image from essex face94 database (b) Image after scalar operation
2.6 Image image operations (a) First Lenna image (b) Second Lenna image with different pose
2.7 Blended version of two different pose Lenna images
2.8 Gray scale histogram of the example gray scale image
2.9 Color image histogram of the example color image
2.10 Feature selection (a) Selection of wrong feature (b) Selection of correct feature
3.1 Comparison of biometric techniques [18]
3.2 Classification of face recognition techniques
3.3 Example model of the labeled graphs
3.4 3D Morphable face model method
4.1 Linear subspace technique general algorithm
4.2 Matrix representation of N images present in the database
4.3 Principal component analysis (a) Given original data and (b) After applying PCA
4.4 PCA classification of given data (a) Worse classification of data (b) The best classification of data
4.5 Dimensionality reduction of the given data after applying PCA
4.6 First eight eigen faces obtained from essex face94 database by using PCA
4.7 Mean image obtained from essex face94 database by using PCA
4.8 Principal component analysis algorithm for face recognition
4.9 Flow chart of linear discriminant analysis algorithm
4.10 Fisher's linear discriminant algorithm for face recognition
4.11 Mean face obtained from essex face94 database by using FLD method
4.12 Highest fisher faces obtained for essex face94 database by using FLD method
4.13 General linear discriminant analysis algorithm
4.14 Four images of X1 class
4.15 Four images of X2 class
4.16 Mean images of two classes, X1 and X2, and total mean image
4.17 Mean image obtained from essex face94 database by using ICA method
4.18 Independent component analysis algorithm
4.19 ICA basis obtained from essex face94 database by using ICA
4.20 Comparison between different linear subspace techniques
5.1 Example kernel function of the nonlinear data set
6.1 Discrete time fourier analysis function on signals
6.2 Discrete time wavelet analysis function on signals
6.3 Representation of the filter bank in wavelets
6.4 Splitting the signal spectrum by using subband coding
6.5 Discrete wavelet transformation decomposition tree
6.6 Discrete wavelet transformation reconstruction tree
6.7 Haar wavelet transform
6.8 Scaling and shifting of the haar wavelets
6.9 Usage of wavelet in the feature extraction
6.10 Wavelet coefficients decomposition in discrete wavelet transform
7.1 Recognition accuracy of the standardized euclidean distance and the euclidean distance [3]
7.2 Recognition accuracy of the mahalanobis and the euclidean distance [71]
7.3 Recognition accuracy of the city block distance and the euclidean distance [71]
7.4 Recognition accuracy of the cosine distance and the euclidean distance [3]
7.5 Watch list receiver operating characteristic curve [71]
7.6 Verification receiver operating characteristic curve [72]
8.1 Sample images of a single person in the ORL database
8.2 Sample images of a single person in the FERET database
8.3 Sample images of a single person in the AR database
8.4 Sample images of a single person in the CVL database
9.1 Screen shot of the DSP Builder tool box in Simulink
9.2 Signal compiler block in the DSP Builder tool box
9.3 Example Simulink model by placing the signal compiler block
9.4 Cyclone II DSP Board Library in the DSP Builder tool
10.1 Importance of eigen vectors in linear subspace techniques
10.2 Number of eigen values versus corresponding eigen value
10.3 Performance comparison of different linear subspace techniques (PCA, ICA and LDA)
10.4 Performance comparison of different wavelet functions db1 and db
10.5 Performance comparison of different wavelet based linear subspace techniques (WPCA, WICA and WLDA)
10.6 Performance comparison of WLDA and LDA
10.7 Mean images of the ORL and the FERET databases by using PCA technique
10.8 Eigen faces of the ORL and the FERET databases by using PCA technique

List of Abbreviations

PCA    Principal Component Analysis
ICA    Independent Component Analysis
LDA    Linear Discriminant Analysis
FLD    Fisher's Linear Discriminant
KPCA   Kernel based Principal Component Analysis
KICA   Kernel based Independent Component Analysis
KLDA   Kernel based Linear Discriminant Analysis
WPCA   Wavelet based principal component analysis
WICA   Wavelet based independent component analysis
WLDA   Wavelet based linear discriminant analysis
ROC    Receiver Operating Characteristics
ORL    Olivetti Research Laboratories
FERET  Face Recognition Technology
CVC    Computer Vision Center
DSP    Digital Signal Processors
FPGA   Field Programmable Gate Array
HDL    Hardware Description Language
VHDL   Very high speed integrated circuit Hardware Description Language
CPLD   Complex Programmable Logic Devices
ASIC   Application Specific Integrated Circuit
PLL    Phase Locked Loops
LAB    Logic Array Blocks
LE     Logic Elements
IP     Intellectual Property
NIST   The National Institute of Standards and Technology
DARPA  The Defense Advanced Research Projects Agency

12 Nomenclature A C D b g H h I J m i n c P b S b S i S m S w T u W fld W opt W pca The mixing matrix The covariance matrix The whitening transform matrix The high pass filter Differential entropy The low pass filter The mutual information The negentropy Mean image of the class i in the database Random noise The centered images matrix of the database Between class scatter matrix Sum of all the class covariance matrices present in the database The overall scattering matrix of the database Within class scatter matrix Transpose operation Independent source signal The transformation matrix obtained by FLD method The optimal transformation matrix The transformation matrix obtained by PCA method xi

Chapter 1

Motivation

1.1 Motivation

Face recognition is a complex visual classification task. Research on face recognition started in the 1960s [1]. The face plays a primary role in identifying a person and is the most easily remembered part of the human body. We can recognize thousands of faces in our lifetime. The human face is a complex model because it contains almost the same feature set (eyes, lips, nose, etc.) in every face. Even though the feature set is the same in each face, its properties are quite different from face to face. The human recognition skill is very robust, because it is possible to identify a person even after many years, under different aging, lighting and viewing conditions [2].

Face recognition plays an important role in various applications (e.g. computer vision, image processing and pattern recognition). The ideal computer vision models work more or less like human vision. In computer vision related applications, face recognition is useful for taking decisions based on the information present in images or video. There are many computer vision based applications of image analysis, such as recognizing and tracking humans in public and private areas. Even in future driver assistance systems, driver face observation will play an important role.

Person identification and verification is one of the main issues in many security related applications such as banking and border checks. In these applications persons must be recognized or identified. Researchers have developed various biometric techniques to identify or recognize persons from physical characteristics such as fingerprints, voice and face. These biometric techniques have advantages and drawbacks. Among all biometric techniques, face recognition has the distinct advantage that the required data (the image) can be collected without the individual's cooperation. The face recognition system is very useful in criminal identification. In this application, the images of a criminal can be stored in the face recognition system database.

In recognition algorithms based on matching methods, image acquisition is one of the important tasks. The image of a person must be taken directly from a digital camera or from a video sequence such that it contains as much information as possible about that person. The images must be taken quickly and with a small resolution or size in order to speed up the algorithms. If we take high resolution images, it takes much longer to recognize the persons.

The matching algorithms then compare the acquired image with the images in the database to identify the criminal. In real time face recognition, the system must analyze the images and recognize the person very fast. The face recognition system can only recognize persons stored in the database.

1.2 Problem statement

In person identification and verification, face recognition plays a key role. Research on face recognition started in the early 1960s. At that time, a face was identified from the locations of the features present in the images, such as the eyes, nose and mouth. In the 1970s, Goldstein, Harmon and Lesk used 21 specific characteristics of the face, such as hair color and lip thickness, for face recognition. The measurements performed by the methods developed until the end of the 1970s were manual. The performance of a real time face recognition system depends upon the given input data or image [3]. The general problems faced in real time face recognition are:

- Images vary under different environmental conditions such as lighting conditions, background variation and climate changes. If we take the image in darkness (or low lighting) or with very bright lighting, we cannot extract the correct features, so the recognition rate is low.
- Images vary with different facial expressions such as smiling or crying.
- Images vary in different poses. If we take an image in one pose while the database contains that person's image in another pose, then it is difficult to recognize the person.

Due to the variations outlined above, new information is added to the face image compared with the stored database images, so we cannot directly compare the features of the test image and the stored data set images. There are techniques to overcome these image variation problems. To this end, Kirby and Sirovich introduced (in 1988) principal component analysis, which is a statistical technique [1]. The introduction of principal component analysis is a milestone in face recognition and led to the development of various algorithms for face recognition based on statistical techniques. Linear subspace techniques are statistical techniques which are used to reduce the dimensionality and classify the given data. Principal Component Analysis (PCA), Independent Component Analysis (ICA) and Linear Discriminant Analysis (LDA) are the most commonly used linear subspace techniques.

For recognition algorithms based on linear subspace techniques, we create test and training set databases. A training set database contains:

- images of each person with different poses, expressions and environmental conditions,
- many persons (the number depends on the database).

A class contains one person with different face variations. The test database contains different images of the persons present in the training database. The linear subspace techniques are successful to some extent, but they are not effective on large databases [4, 5, 6]. Eigenvectors are calculated using statistical measures such as the mean and covariance; an eigenvector captures the variation in the images.

- PCA uses second order statistics. Second order statistics give information about amplitude spectra but not about their phase spectra. This makes PCA ineffective with pose variant test images.
- PCA does not use information about the class [4, 5, 6]. It takes all persons' images as one class. Because of that, it extracts only the features that are similar in all images, making it difficult to recognize the person.
- PCA is able to recognize faces with varying facial expressions but not with varying poses.
- PCA is not effective for recognizing a person in large databases, because of the lack of class information.
- ICA tries to maximize the statistical independence between the images [7]. It looks for statistically independent and non-Gaussian components in the images.
- The ICA recognition rate is better for pose variant images because of the use of higher order statistics.
- In ICA, the fixed point algorithm is used to calculate the weight matrix [8, 3, 9]. It is an iterative process, so it takes a long time to calculate the uncorrelated features.
- With ICA, it is difficult to recognize a person in large databases because of the lack of class information and the time factor.
- LDA also uses only second order statistics, which give information about amplitude spectra but not about their phase spectra. This explains the sensitivity of LDA to viewing conditions.
- LDA uses information about the class [10]. It tries to maximize the between class variance and minimize the within class variance [4, 5, 6] (the Fisher criterion sketched below). In other words, it decreases the distance between images of the same class and increases the distance between images of different classes [11]. Because of that, LDA easily recognizes faces in large databases.
- LDA is invariant to facial expressions and lighting conditions.

Linear discriminant analysis performs considerably better than PCA and ICA for large databases. However, LDA needs to be combined with other methods to perform much better on big databases.
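For reference, the class separation that LDA maximizes is usually written as the Fisher criterion; the formulation below is the standard one (using the between class scatter S_b and within class scatter S_w of the nomenclature), not a derivation specific to this thesis:

\[
W_{opt} = \arg\max_{W} \frac{\left| W^{T} S_b W \right|}{\left| W^{T} S_w W \right|},
\qquad
S_b = \sum_{i=1}^{c} N_i (m_i - m)(m_i - m)^{T},
\qquad
S_w = \sum_{i=1}^{c} \sum_{x \in X_i} (x - m_i)(x - m_i)^{T},
\]

where c is the number of classes, N_i the number of images in class i, m_i the mean image of class i, X_i the set of images of class i and m the total mean image.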

1.3 Major contributions of the thesis

The contributions of the thesis are:

- Presentation of a survey of the relevant literature on visual classification techniques
- Study of different linear subspace techniques (PCA, ICA and LDA) and nonlinear methods (KPCA, KLDA)
- Implementation of the PCA, ICA and LDA techniques and comparison of the techniques using databases
- Use of the Euclidean distance approach to calculate the best match
- Collection of different face databases such as Essex face94, the ORL database and the FERET database to compare the results
- Proposal of a novel wavelet based linear discriminant analysis (WLDA) technique

Wavelets extract the nonlinear features and form the feature vector. This feature vector is given as input to the linear subspace technique (a minimal sketch of this pipeline is given at the end of this chapter). Wavelets reduce the complexity as well. Nonlinear features carry the most important information present in the image, and these nonlinear features improve the performance of the face recognition system. Therefore wavelet based techniques are invariant to environmental conditions and viewing conditions. As part of this thesis, my focus is on the following aspects:

- Development of wavelet based linear subspace techniques which are invariant to different environmental conditions
- Comparison of linear subspace techniques with wavelet based linear subspace techniques
- Comparison of the effect of different wavelet functions and wavelet coefficients on the recognition rate
- Comparison of the WLDA recognition rate with the WPCA and WICA recognition rates

This thesis also covers the implementation of the WLDA technique as a soft core on FPGAs with the help of the DSP Builder tool from Altera. The Altera DSP Builder converts Matlab/Simulink models into hardware description language based design files.

1.4 Thesis outline

The thesis is organized as follows:

- Chapter 2 gives the introduction to digital images and various operations on images
- Chapter 3 gives the introduction, history and basics of face recognition

- Chapter 4 presents details about the linear subspace techniques and a comparison of these subspace techniques
- Chapter 5 presents some methods for the nonlinear analysis of data
- Chapter 6 gives the introduction to wavelets and to wavelet based linear subspace techniques
- Chapter 7 presents different approaches to measure the distance (between the pixels belonging to different images) and explains each approach with its advantages and drawbacks
- Chapter 8 gives details about various face databases and the differences between them
- Chapter 9 gives details about field programmable gate arrays (FPGA) and the DSP Builder tool used to deploy the Simulink model to FPGAs
- Chapter 10 presents the results obtained by using the different techniques with different databases
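As referenced in Section 1.3, the following is a minimal, illustrative sketch of the wavelet based linear subspace pipeline in Python (NumPy, PyWavelets, scikit-learn). It is not the thesis's Matlab/Simulink implementation; the wavelet family ("db1"), the single decomposition level and the use of scikit-learn's LDA are assumptions made only for this example.

import numpy as np
import pywt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def wavelet_features(img, wavelet="db1"):
    # One-level 2D DWT; the approximation coefficients serve as the feature vector.
    cA, (cH, cV, cD) = pywt.dwt2(img.astype(float), wavelet)
    return cA.ravel()  # reduced-dimension representation of the face image

def train_wlda(train_imgs, train_labels):
    # train_imgs: list of equally sized 2D gray scale face arrays, train_labels: person id per image.
    X = np.vstack([wavelet_features(im) for im in train_imgs])
    lda = LinearDiscriminantAnalysis()
    lda.fit(X, train_labels)          # linear subspace step applied to wavelet features
    return lda

def recognize(lda, test_img):
    return lda.predict(wavelet_features(test_img)[None, :])[0]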

Chapter 2

Background

2.1 Introduction

For all image processing applications, the input is a digital image and the output can be an image or image related data. In order to understand how different operations can be performed on an image, we need to understand the basics of the digital image: What is meant by a digital image? How are images stored? What is the color representation of the image? In this chapter, the fundamentals required to understand an image are presented in detail.

2.2 Digital image

Generally, an image is assumed to be a rectangular matrix with X rows and Y columns, and the resolution of the image is taken as X x Y. The image is stored as a set of small square regions, or picture elements, called pixels [12]. In an image, P(0, 0) represents the top left corner pixel, as shown in figure 2.1, and P(X-1, 0) represents the bottom left corner pixel of the image. In a digital image, each pixel contains a color value stored in 8 bits (bits 0 to 7). Most commonly, an image has one of two representations: a gray scale image or a color image [12]. A gray scale image encodes the intensity of light and uses 8 bits, or one byte, per pixel, giving 256 possible values:

2^8 = 256    (2.1)

These 256 values represent the gray scale image. Each pixel in the gray scale image takes one of the values 0 to 255, i.e. one of 256 gray shades. The value 0 represents black and means minimum brightness. The value 255 represents white and means maximum brightness [13].

Figure 2.1: Representation of the rectangular shape digital image into pixels P(X, Y)

The remaining values between the boundary values 0 and 255 represent intermediate shades between black and white, i.e. gray shades, as shown in figure 2.2. Images with only two colors (black and white) are different from these gray scale images [12]. Such two-color (black and white) images are called binary images. A binary representation of an image does not contain shades between black and white.

Figure 2.2: Gray scale representation of the example image

A given image contains some information. The amount of information depends upon the resolution of the image, which in turn depends on the camera properties. The memory and processing requirements are also related to the image resolution. Gray scale images contain less information than color images.
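As a small illustration of the pixel representation just described (not taken from the thesis), a gray scale image can be held as a two dimensional array of 8 bit values; the file name used here is a hypothetical placeholder:

import numpy as np
from PIL import Image

# Load a face image and convert it to gray scale; "face.jpg" is a placeholder path.
img = np.array(Image.open("face.jpg").convert("L"), dtype=np.uint8)

X, Y = img.shape                 # X rows, Y columns, resolution X x Y
top_left = img[0, 0]             # pixel P(0, 0)
bottom_left = img[X - 1, 0]      # pixel P(X-1, 0)
print(img.min(), img.max())      # values always lie in the range 0..255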

A color image gives both the intensity and the chrominance of light. A color image uses 24 bits, or 3 bytes, per pixel [14]. Each byte represents one primary color (a shade of that primary color): the first byte represents red, the second byte green and the third byte blue. Each byte has 256 values from 0 to 255. The pixel value 0 means that none of the primary color is present; the pixel value 255 means the maximum amount of the primary color, as shown in figure 2.3. The pixel or color value can also be stored in different color spaces such as HSV (Hue, Saturation, Value). An example RGB color image is shown in figure 2.4.

Figure 2.3: Color storage representation in the color images

Figure 2.4: R, G and B color channels of the example color image
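To make the three byte storage concrete, the following sketch (illustrative only, with a hypothetical file name) splits a color image into its R, G and B channels:

import numpy as np
from PIL import Image

rgb = np.array(Image.open("face_color.jpg").convert("RGB"))  # shape (X, Y, 3), dtype uint8
r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]              # one byte (0..255) per primary color

# A simple per-pixel average of the three channels gives a quick gray scale version.
gray = rgb.mean(axis=2).astype(np.uint8)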

2.2.1 Image-Scalar operations

An image supports different arithmetic operations, such as addition, subtraction, multiplication and division, with a scalar value [12]. These are pixel wise operations. For example, take a scalar value c and choose the addition operation; then take every pixel in the image and add c to it. A new image is obtained with a higher brightness than the original image, as shown in figure 2.5.

Figure 2.5: Image scalar operations (a) Image from essex face94 database (b) Image after scalar operation
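A minimal sketch of such an image-scalar operation (not from the thesis); the clipping to the 0..255 range is an assumption needed to keep the result a valid 8 bit image:

import numpy as np

def add_scalar(img, c):
    # Pixel wise addition of a scalar c; the result is clipped to the valid 8 bit range.
    out = img.astype(np.int16) + c          # widen first to avoid uint8 overflow
    return np.clip(out, 0, 255).astype(np.uint8)

brighter = add_scalar(img, 40)              # img: gray scale array from the earlier sketch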

2.2.2 Image-Image operations

In these operations, we take another image instead of a scalar value. The two images must have the same resolution. The operations are performed on pairs of pixels with the same coordinates in the two images (i.e. the first pixel of the first image is combined with the first pixel of the second image, and so on) [12]. The basic operations addition, subtraction, multiplication and division are supported.

Figure 2.6: Image image operations (a) First Lenna image (b) Second Lenna image with different pose

For example, suppose we want to generate a blended image from the two given images g1 and g2 shown in figure 2.6. We take a variable a with the value 0.5 in the function f as shown in equation 2.2:

f = a g1 + (1 - a) g2    (2.2)

The new image is formed by applying the above operation to the two images. The blended image is shown in figure 2.7.

Figure 2.7: Blended version of two different pose Lenna images
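The blending operation of equation 2.2 can be sketched as follows (illustrative only; g1 and g2 are assumed to be gray scale arrays of identical size):

import numpy as np

def blend(g1, g2, a=0.5):
    # Pixel wise blend f = a*g1 + (1-a)*g2 for two images of the same resolution.
    assert g1.shape == g2.shape, "image-image operations require identical resolution"
    f = a * g1.astype(float) + (1.0 - a) * g2.astype(float)
    return f.astype(np.uint8)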

2.3 Image statistics

The arithmetic mean, the standard deviation and the variance are statistics of the image. The histogram gives a statistical representation of the image data [12].

2.3.1 Histogram of the image

The histogram is a very important and simple tool for analyzing images. It counts how many times a particular color level (a value from 0 to 255) is present in the image; in other terms, it summarizes the pixel intensity values. For gray scale images, there are 256 different pixel values. Figure 2.8 shows the histogram of an example gray scale image. For color images, we get three histograms (the first histogram is for the red channel, the second for the green and the third for the blue). The graph represents the color spread in the image. In the histogram, the X-axis gives the pixel values and the Y-axis gives the frequency of each pixel value. Figure 2.9 shows the histogram of an example color image.

Figure 2.8: Gray scale histogram of the example gray scale image

Figure 2.9: Color image histogram of the example color image

The histogram is useful to determine the best contrast in the image and how the intensity is spread in the image. The disadvantage of the histogram is that it is unable to give information about the spatial relationships between the pixels.
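A small sketch of the histogram computation described above (illustrative, not the thesis code):

import numpy as np

def gray_histogram(img):
    # Count how often each of the 256 gray levels occurs in an 8 bit image.
    hist = np.bincount(img.ravel(), minlength=256)
    return hist                                  # hist[v] = number of pixels with value v

def color_histograms(rgb):
    # One histogram per R, G and B channel of a color image.
    return [gray_histogram(rgb[..., ch]) for ch in range(3)]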

2.3.2 Mean of the image

The mean is a central quantity in image statistics. The mean of an image is the average pixel value in that image. For a gray scale image, the mean gives the average brightness or intensity value. The mean of the image is calculated from equation 2.4:

u = E[f]    (2.3)

E[f] = \frac{1}{X Y} \sum_{y=0}^{Y-1} \sum_{x=0}^{X-1} f(x, y)    (2.4)

where f(x, y) is the image, E[f] is the expected value or mean of the image, X is the number of rows and Y is the number of columns of the image.

2.3.3 Variance of the image

The variance gives the spread of the pixel values around the image mean, i.e. the average of the squared distance of the possible values from the mean. Var[f] denotes the variance. The variance is calculated using equation 2.6 or equation 2.8. The square root of the variance gives the standard deviation, which indicates the range of pixel values in the image.

Var[f] = E[(f - u)^2]    (2.5)

Var[f] = \frac{1}{X Y} \sum_{y=0}^{Y-1} \sum_{x=0}^{X-1} (f(x, y) - u)^2    (2.6)

Var[f] = E[f^2] - u^2    (2.7)

Var[f] = \left( \frac{1}{X Y} \sum_{y=0}^{Y-1} \sum_{x=0}^{X-1} f(x, y)^2 \right) - \left( \frac{1}{X Y} \sum_{y=0}^{Y-1} \sum_{x=0}^{X-1} f(x, y) \right)^2    (2.8)
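Equations 2.3 to 2.8 correspond directly to the following sketch (illustrative only):

import numpy as np

f = img.astype(float)                  # img: gray scale array from the earlier sketch
u = f.mean()                           # equation 2.4: mean of the image
var_direct = ((f - u) ** 2).mean()     # equation 2.6: mean squared deviation
var_moment = (f ** 2).mean() - u ** 2  # equations 2.7/2.8: E[f^2] - u^2
std = np.sqrt(var_direct)              # standard deviation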

2.4 Image acquisition

Image acquisition is the process of obtaining an image from a camera and is the first step of any vision based system [13]. In image acquisition, we take an image under the given system considerations. The acquisition can come from different sources: a face image from a real time electro optical camera acquiring a live picture of an object, an image from a scanner, an image from a magnetic disk, an image captured by a frame grabber, or a video as a sequence of still images.

2.5 Feature selection and extraction

Feature selection and extraction are used in many image processing applications such as edge detection, motion detection and so on. A feature contains a piece of information and specifies a property or characteristic of an object in an image. There are several types of features [15]:

- General features: features like color, texture and shape. They can be further divided into
  - Pixel level features: features at every pixel, for example color and location.
  - Local features: features obtained after applying image processing operations such as segmentation and edge detection.
  - Global features: features computed over the entire image.
- Domain specific features: features that depend upon the application.

Finally, a feature could be an edge, a color, a corner, a blob (group of pixels) or any other property of the image.

2.5.1 Feature selection

In every image, many features are present. Some of them are relevant and some are irrelevant (characterized depending upon the application and also upon different lighting conditions). There are generally many low level (irrelevant) features and few high level features. Relevant features contain more useful information [16], so we take only the relevant features from all features. In feature selection, the first step is to capture the image. The second step is to list all features in that image. Depending upon the application, the features in that image are classified. The next step is to list the most relevant features among all the features present in the image. Feature selection is a very important step in image processing: application performance depends upon the selected features.

Relevant feature selection gives better and faster results [17] and also improves the understanding of the image data. However, feature selection needs an exhaustive search to select the relevant features. If a large number of features is available in the image, then it takes a lot of time to select the main features and the complexity increases. Figure 2.10 shows how to select good features.

Figure 2.10: Feature selection (a) Selection of wrong feature (b) Selection of correct feature

2.5.2 Feature extraction

To classify an object in an image, we must extract features from the image [17]. So, after feature selection, we have to extract those relevant features from the image. Feature extraction is a dimensionality reduction method. In this step, we take only the selected features and discard the remaining ones [15]. Very little information is lost, because only the irrelevant features are discarded. Even when the amount of data is high, only a few useful values are kept, so the accuracy stays almost the same. The best results are achieved if we take the most relevant features. Dimensionality reduction techniques are useful for the feature extraction step, and linear subspace techniques are generally used for dimensionality reduction. These dimensionality reduction techniques, or linear subspace techniques, are discussed in Chapter 4.
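As a preview of the linear subspace idea used for dimensionality reduction (a minimal sketch, not the thesis's implementation; the number of retained components is an arbitrary choice here):

import numpy as np

def pca_project(images, k=20):
    # Project flattened, equally sized face images onto the top-k principal components.
    X = np.vstack([im.ravel().astype(float) for im in images])   # one row per image
    mean = X.mean(axis=0)
    Xc = X - mean                                                 # centered image matrix
    # Eigenvectors of the covariance matrix obtained via SVD of the centered data.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    W = Vt[:k].T                                                  # projection matrix (pixels x k)
    return Xc @ W, mean, W                                        # reduced features per image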

Chapter 3

Face Recognition

3.1 Introduction

Face recognition is one of the various biometric techniques used for identifying humans. Biometrics use physical and behavioral properties of the human. All biometric systems need some records in a database against which they can search for matches [18]. The face plays a primary role in identifying a person, and the face is also the most easily remembered part of the human body. We can recognize thousands of faces in our lifetime. This skill is very robust, because we can identify a person even after many years, under different aging, lighting and viewing conditions [2]. Biometric techniques such as fingerprint, iris and signature need some cooperation from the person for identification. The major advantage of face recognition is that it does not need any physical cooperation of the person at the time of recognition. Researchers are trying to implement robust face recognition systems, similar to the human recognition capabilities under various conditions, for security and multimedia information access applications where identification of the subject is necessary. The task of machine recognition of humans is becoming increasingly difficult and complex as the conditions/constraints and the sample size grow. In this chapter, the history of face recognition development and various face recognition methods are explained in detail.

3.1.1 History of face recognition

Research on face recognition systems started actively in the 1960s. The initial research directions were based on features such as the eyes, ears, nose and mouth in the photograph; the distance from the reference data was calculated [19]. In the 1970s, Goldstein, Harmon and Lesk used 21 specific characteristics of the face, such as hair color and lip thickness, for face recognition [19]. The measurements for the methods developed until the end of the 1970s were calculated manually. In 1988, Kirby and Sirovich introduced principal component analysis, a statistical technique, to face recognition problems [1]. The introduction of principal component analysis is a milestone in face recognition and led to the development of various algorithms for face recognition based on statistical techniques.

3.1.2 Face recognition basics

Face recognition is one of the most widely used biometric techniques [18], as shown in figure 3.1. Face recognition systems, irrespective of the method used for recognition, follow the principle of fingerprint matching methods: the images of the people to be recognized are collected and placed in the database, and the images in the database are used to verify a person whenever recognition is needed. The major difference between face recognition and fingerprint matching is that a fingerprint must be taken from the person during identification and matched with the one in the database, whereas in face recognition the image can be acquired without the direct knowledge of the person, irrespective of the quality of the image in the database [20, 21]. This type of recognition without directly involving the person is highly useful in security and monitoring applications.

Figure 3.1: Comparison of biometric techniques [18]

Face recognition knowledge is also highly useful in the area of image analysis. Face recognition is related to object recognition in many aspects. The difference between the two recognition techniques is that face recognition is only used to identify persons or individuals in an image, while object recognition is used to identify all objects present in the given image [22]. Many researchers also classify face recognition as a subtask of object recognition [22].

3.1.3 Face detection and recognition

Face recognition algorithms calculate the similarities and differences between the stored faces and a given test face [23]. If the face recognition technique is applied to images that are not cropped to the face region, the system produces unexpected results. Therefore an important step in the face recognition process is detecting the face in a given image. Face detection, or object detection, is also one of the important applications in computer vision. There exist many algorithms to detect a face or object based on its features. These algorithms determine the location and size of the face in the image, i.e. they detect the human face features [24, 25]. If the given image contains more than one human face, these algorithms must detect all the faces present in that image. So face detection is sometimes also a part of a face recognition system. In developing the face recognition system in this work, we generally consider only face images of different persons, so face detection is not required [26]. Face recognition is used to recognize known faces in everyday life. In face detection, storage is not a major issue, as the faces are detected from the image and only the detected face is given to the recognition algorithm for the recognition process. For face recognition, however, we must store the face images in different poses and under varying conditions in the database for matching at testing time. This matching is based on distance measures, which are explained in Chapter 7.

3.2 Face recognition approaches

Almost all faces have the same features, such as eyes, nose and mouth, arranged roughly in the same manner [27]. So feature selection and feature extraction are very important tasks, and several methods have been proposed to extract good features. There are many difficulties in developing a real time face recognition system because of pose (frontal, profile), expressions, occlusion, lighting conditions, etc. Therefore researchers are paying more attention to face recognition methods [3]. A number of face recognition algorithms have been proposed during the last decades and research is still ongoing. The current face recognition techniques are categorized into:

- Feature based
- Model based or Knowledge based
- Template based
- Appearance based or View based

This classification is shown in figure 3.2.

Figure 3.2: Classification of face recognition techniques

3.2.1 Feature based face recognition

In this technique, a selected set of features from the face image is considered for recognition. A face can be recognized by examining features such as the eyes, nose or mouth in detail. The algorithm first finds small parts in the image and matches these parts with the stored template image. The overall technique describes the position and size of each feature (eye, nose, mouth or face outline) [28].

The feature based technique is accurate even under small lighting and expression changes, but it is sensitive to scaling and rotation of the face. In the case of pose variation, it is difficult to identify the feature points. To overcome the difficulties with pose variation, graph matching techniques are used. An example labeled graph is shown in figure 3.3.

Figure 3.3: Example model of the labeled graphs

3D graph matching: A 3D mesh is used to identify the feature points. High curvature points on the face are identified and marked as points. These points and the relations between them are represented as a graph.

Elastic bunch graph matching method: Almost all faces have the same topological structure [29]. Faces are represented as a graph in which the main points, such as the eyes and nose, are the nodes

and the edges are represented as distance vectors. Each node has a set of 40 Gabor wavelet coefficients (including phase and magnitude) [29]. This labeled graph is a set of nodes connected by edges. This method is called graph matching for 2D face recognition.

3.2.2 Model based face recognition

In the model based face recognition technique, a model is generated based on the facial variations of the face image. The important information related to the image is used to construct this model. The model based approach is divided into three main steps [3]: the first step is to develop the face model based on prior knowledge, the second step is to fit its 2D projections to the given 2D face image, and the third step is to take the parameters of the fitted model. These parameters form the feature vector. A shape model is constructed by identifying the positions of the feature points, while a texture model represents the gray intensities. After constructing the model, the similarity between the given test face and the faces in the database is calculated. In this technique, recognition is an iterative process. The model parameters are calculated by using eigen face analysis.

3D Morphable model: The face is represented as a 3D model for handling different poses and illuminations. Individual faces are combined into a single morphable model by computing dense point to point correspondences to a reference face. The 3D model is obtained from the best fit between its 2D projections and the given 2D face image. The 3D morphable model is illustrated in figure 3.4.

Figure 3.4: 3D Morphable face model method

The 3D morphable model (a statistical model) combines the 3D shape and texture information of all example individual faces into one vector space of faces [3]. After

forming the vector space of faces, it calculates the average face, and individual faces are characterized by their deviation from this average. Manually labeled features are marked on the face, such as the tip of the nose and the eye corners. The number of features required for marking varies depending on the application. The correspondence of all these points or features gives the morphable model.

3.2.3 Template based face recognition

The template based face recognition technique is one of the digital signal processing techniques [30]. In this technique, some sample templates of the face image are stored in the database. It involves the use of a two dimensional pixel array of intensity values, either the original gray scale values or only specific data [28]. The Euclidean distance measure is used to calculate the match between the stored template and the given face image. The performance depends on the resolution of the image. To improve the performance, the normalized cross correlation coefficient is used (a small numerical sketch of this coefficient is given at the end of this section):

\rho(I_T, T) = \frac{E(I_T T) - E(I_T) E(T)}{\sigma(I_T)\, \sigma(T)}    (3.1)

where I_T denotes the pixel values of the image region I under the template, T denotes the pixel values of the template, and E(.) represents the expected value or mean. Brunelli and Poggio compared feature based and template based face recognition techniques [28]. In their work, they used feature templates such as the eyes, nose and mouth in addition to the whole face template.

3.2.4 Appearance based face recognition

Many approaches in computer graphics work on the images directly and are not based on 3D models [27]. Faces are stored as two dimensional intensity matrices. The vector space contains the different images, and each point in the vector space represents an image. Almost all appearance based techniques use statistical properties such as the mean and covariance to analyze the images [3]. In this approach, given a test image, the algorithm finds the similarity between the test image and the stored images using a feature vector. The appearance based face recognition approach is again classified into:

- Linear analysis
- Nonlinear analysis

Linear (subspace) analysis: The linear analysis is described by a number of linear principal manifolds. Manifolds are important in mathematics and physics; a manifold explains a complex structure in terms of the important properties of a simpler space [31, 32]. In these methods, the average image of all the persons (and also of each person) in the database is calculated. Each image is translated to the average face by subtracting the average image from each face image. The face space is a linear subspace of the image space. In the linear analysis, we have three classical statistical techniques.
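As referenced above, a minimal sketch of the normalized cross correlation coefficient of equation 3.1 (illustrative only; the image region and the template are assumed to be arrays of the same size):

import numpy as np

def ncc(region, template):
    # Normalized cross correlation coefficient between an image region and a template (eq. 3.1).
    r = region.astype(float).ravel()
    t = template.astype(float).ravel()
    num = (r * t).mean() - r.mean() * t.mean()     # E(I_T T) - E(I_T) E(T)
    den = r.std() * t.std()                        # sigma(I_T) * sigma(T)
    return num / den if den != 0 else 0.0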


More information

Nonlinear Iterative Partial Least Squares Method

Nonlinear Iterative Partial Least Squares Method Numerical Methods for Determining Principal Component Analysis Abstract Factors Béchu, S., Richard-Plouet, M., Fernandez, V., Walton, J., and Fairley, N. (2016) Developments in numerical treatments for

More information

DIGITAL IMAGE PROCESSING AND ANALYSIS

DIGITAL IMAGE PROCESSING AND ANALYSIS DIGITAL IMAGE PROCESSING AND ANALYSIS Human and Computer Vision Applications with CVIPtools SECOND EDITION SCOTT E UMBAUGH Uffi\ CRC Press Taylor &. Francis Group Boca Raton London New York CRC Press is

More information

Machine vision systems - 2

Machine vision systems - 2 Machine vision systems Problem definition Image acquisition Image segmentation Connected component analysis Machine vision systems - 1 Problem definition Design a vision system to see a flat world Page

More information

Palmprint as a Biometric Identifier

Palmprint as a Biometric Identifier Palmprint as a Biometric Identifier 1 Kasturika B. Ray, 2 Rachita Misra 1 Orissa Engineering College, Nabojyoti Vihar, Bhubaneswar, Orissa, India 2 Dept. Of IT, CV Raman College of Engineering, Bhubaneswar,

More information

Lecture 9: Introduction to Pattern Analysis

Lecture 9: Introduction to Pattern Analysis Lecture 9: Introduction to Pattern Analysis g Features, patterns and classifiers g Components of a PR system g An example g Probability definitions g Bayes Theorem g Gaussian densities Features, patterns

More information

Subspace Analysis and Optimization for AAM Based Face Alignment

Subspace Analysis and Optimization for AAM Based Face Alignment Subspace Analysis and Optimization for AAM Based Face Alignment Ming Zhao Chun Chen College of Computer Science Zhejiang University Hangzhou, 310027, P.R.China zhaoming1999@zju.edu.cn Stan Z. Li Microsoft

More information

Low-resolution Image Processing based on FPGA

Low-resolution Image Processing based on FPGA Abstract Research Journal of Recent Sciences ISSN 2277-2502. Low-resolution Image Processing based on FPGA Mahshid Aghania Kiau, Islamic Azad university of Karaj, IRAN Available online at: www.isca.in,

More information

Assessment of Camera Phone Distortion and Implications for Watermarking

Assessment of Camera Phone Distortion and Implications for Watermarking Assessment of Camera Phone Distortion and Implications for Watermarking Aparna Gurijala, Alastair Reed and Eric Evans Digimarc Corporation, 9405 SW Gemini Drive, Beaverton, OR 97008, USA 1. INTRODUCTION

More information

SIGNATURE VERIFICATION

SIGNATURE VERIFICATION SIGNATURE VERIFICATION Dr. H.B.Kekre, Dr. Dhirendra Mishra, Ms. Shilpa Buddhadev, Ms. Bhagyashree Mall, Mr. Gaurav Jangid, Ms. Nikita Lakhotia Computer engineering Department, MPSTME, NMIMS University

More information

Analecta Vol. 8, No. 2 ISSN 2064-7964

Analecta Vol. 8, No. 2 ISSN 2064-7964 EXPERIMENTAL APPLICATIONS OF ARTIFICIAL NEURAL NETWORKS IN ENGINEERING PROCESSING SYSTEM S. Dadvandipour Institute of Information Engineering, University of Miskolc, Egyetemváros, 3515, Miskolc, Hungary,

More information

Review Jeopardy. Blue vs. Orange. Review Jeopardy

Review Jeopardy. Blue vs. Orange. Review Jeopardy Review Jeopardy Blue vs. Orange Review Jeopardy Jeopardy Round Lectures 0-3 Jeopardy Round $200 How could I measure how far apart (i.e. how different) two observations, y 1 and y 2, are from each other?

More information

MACHINE LEARNING IN HIGH ENERGY PHYSICS

MACHINE LEARNING IN HIGH ENERGY PHYSICS MACHINE LEARNING IN HIGH ENERGY PHYSICS LECTURE #1 Alex Rogozhnikov, 2015 INTRO NOTES 4 days two lectures, two practice seminars every day this is introductory track to machine learning kaggle competition!

More information

Mathematical Model Based Total Security System with Qualitative and Quantitative Data of Human

Mathematical Model Based Total Security System with Qualitative and Quantitative Data of Human Int Jr of Mathematics Sciences & Applications Vol3, No1, January-June 2013 Copyright Mind Reader Publications ISSN No: 2230-9888 wwwjournalshubcom Mathematical Model Based Total Security System with Qualitative

More information

CS 591.03 Introduction to Data Mining Instructor: Abdullah Mueen

CS 591.03 Introduction to Data Mining Instructor: Abdullah Mueen CS 591.03 Introduction to Data Mining Instructor: Abdullah Mueen LECTURE 3: DATA TRANSFORMATION AND DIMENSIONALITY REDUCTION Chapter 3: Data Preprocessing Data Preprocessing: An Overview Data Quality Major

More information

Fingerprint s Core Point Detection using Gradient Field Mask

Fingerprint s Core Point Detection using Gradient Field Mask Fingerprint s Core Point Detection using Gradient Field Mask Ashish Mishra Assistant Professor Dept. of Computer Science, GGCT, Jabalpur, [M.P.], Dr.Madhu Shandilya Associate Professor Dept. of Electronics.MANIT,Bhopal[M.P.]

More information

15.062 Data Mining: Algorithms and Applications Matrix Math Review

15.062 Data Mining: Algorithms and Applications Matrix Math Review .6 Data Mining: Algorithms and Applications Matrix Math Review The purpose of this document is to give a brief review of selected linear algebra concepts that will be useful for the course and to develop

More information

Linear Threshold Units

Linear Threshold Units Linear Threshold Units w x hx (... w n x n w We assume that each feature x j and each weight w j is a real number (we will relax this later) We will study three different algorithms for learning linear

More information

Image Segmentation and Registration

Image Segmentation and Registration Image Segmentation and Registration Dr. Christine Tanner (tanner@vision.ee.ethz.ch) Computer Vision Laboratory, ETH Zürich Dr. Verena Kaynig, Machine Learning Laboratory, ETH Zürich Outline Segmentation

More information

LOCAL SURFACE PATCH BASED TIME ATTENDANCE SYSTEM USING FACE. indhubatchvsa@gmail.com

LOCAL SURFACE PATCH BASED TIME ATTENDANCE SYSTEM USING FACE. indhubatchvsa@gmail.com LOCAL SURFACE PATCH BASED TIME ATTENDANCE SYSTEM USING FACE 1 S.Manikandan, 2 S.Abirami, 2 R.Indumathi, 2 R.Nandhini, 2 T.Nanthini 1 Assistant Professor, VSA group of institution, Salem. 2 BE(ECE), VSA

More information

Characterizing Digital Cameras with the Photon Transfer Curve

Characterizing Digital Cameras with the Photon Transfer Curve Characterizing Digital Cameras with the Photon Transfer Curve By: David Gardner Summit Imaging (All rights reserved) Introduction Purchasing a camera for high performance imaging applications is frequently

More information

Java Modules for Time Series Analysis

Java Modules for Time Series Analysis Java Modules for Time Series Analysis Agenda Clustering Non-normal distributions Multifactor modeling Implied ratings Time series prediction 1. Clustering + Cluster 1 Synthetic Clustering + Time series

More information

Scanners and How to Use Them

Scanners and How to Use Them Written by Jonathan Sachs Copyright 1996-1999 Digital Light & Color Introduction A scanner is a device that converts images to a digital file you can use with your computer. There are many different types

More information

Simultaneous Gamma Correction and Registration in the Frequency Domain

Simultaneous Gamma Correction and Registration in the Frequency Domain Simultaneous Gamma Correction and Registration in the Frequency Domain Alexander Wong a28wong@uwaterloo.ca William Bishop wdbishop@uwaterloo.ca Department of Electrical and Computer Engineering University

More information

HANDS-FREE PC CONTROL CONTROLLING OF MOUSE CURSOR USING EYE MOVEMENT

HANDS-FREE PC CONTROL CONTROLLING OF MOUSE CURSOR USING EYE MOVEMENT International Journal of Scientific and Research Publications, Volume 2, Issue 4, April 2012 1 HANDS-FREE PC CONTROL CONTROLLING OF MOUSE CURSOR USING EYE MOVEMENT Akhil Gupta, Akash Rathi, Dr. Y. Radhika

More information

Introduction to Pattern Recognition

Introduction to Pattern Recognition Introduction to Pattern Recognition Selim Aksoy Department of Computer Engineering Bilkent University saksoy@cs.bilkent.edu.tr CS 551, Spring 2009 CS 551, Spring 2009 c 2009, Selim Aksoy (Bilkent University)

More information

Illumination, Expression and Occlusion Invariant Pose-Adaptive Face Recognition System for Real- Time Applications

Illumination, Expression and Occlusion Invariant Pose-Adaptive Face Recognition System for Real- Time Applications Illumination, Expression and Occlusion Invariant Pose-Adaptive Face Recognition System for Real- Time Applications Shireesha Chintalapati #1, M. V. Raghunadh *2 Department of E and CE NIT Warangal, Andhra

More information

Automatic 3D Reconstruction via Object Detection and 3D Transformable Model Matching CS 269 Class Project Report

Automatic 3D Reconstruction via Object Detection and 3D Transformable Model Matching CS 269 Class Project Report Automatic 3D Reconstruction via Object Detection and 3D Transformable Model Matching CS 69 Class Project Report Junhua Mao and Lunbo Xu University of California, Los Angeles mjhustc@ucla.edu and lunbo

More information

ROBUST VEHICLE TRACKING IN VIDEO IMAGES BEING TAKEN FROM A HELICOPTER

ROBUST VEHICLE TRACKING IN VIDEO IMAGES BEING TAKEN FROM A HELICOPTER ROBUST VEHICLE TRACKING IN VIDEO IMAGES BEING TAKEN FROM A HELICOPTER Fatemeh Karimi Nejadasl, Ben G.H. Gorte, and Serge P. Hoogendoorn Institute of Earth Observation and Space System, Delft University

More information

Current Standard: Mathematical Concepts and Applications Shape, Space, and Measurement- Primary

Current Standard: Mathematical Concepts and Applications Shape, Space, and Measurement- Primary Shape, Space, and Measurement- Primary A student shall apply concepts of shape, space, and measurement to solve problems involving two- and three-dimensional shapes by demonstrating an understanding of:

More information

Multimodal Biometric Recognition Security System

Multimodal Biometric Recognition Security System Multimodal Biometric Recognition Security System Anju.M.I, G.Sheeba, G.Sivakami, Monica.J, Savithri.M Department of ECE, New Prince Shri Bhavani College of Engg. & Tech., Chennai, India ABSTRACT: Security

More information

Data, Measurements, Features

Data, Measurements, Features Data, Measurements, Features Middle East Technical University Dep. of Computer Engineering 2009 compiled by V. Atalay What do you think of when someone says Data? We might abstract the idea that data are

More information

Digital Electronics Detailed Outline

Digital Electronics Detailed Outline Digital Electronics Detailed Outline Unit 1: Fundamentals of Analog and Digital Electronics (32 Total Days) Lesson 1.1: Foundations and the Board Game Counter (9 days) 1. Safety is an important concept

More information

Image Normalization for Illumination Compensation in Facial Images

Image Normalization for Illumination Compensation in Facial Images Image Normalization for Illumination Compensation in Facial Images by Martin D. Levine, Maulin R. Gandhi, Jisnu Bhattacharyya Department of Electrical & Computer Engineering & Center for Intelligent Machines

More information

A Study on SURF Algorithm and Real-Time Tracking Objects Using Optical Flow

A Study on SURF Algorithm and Real-Time Tracking Objects Using Optical Flow , pp.233-237 http://dx.doi.org/10.14257/astl.2014.51.53 A Study on SURF Algorithm and Real-Time Tracking Objects Using Optical Flow Giwoo Kim 1, Hye-Youn Lim 1 and Dae-Seong Kang 1, 1 Department of electronices

More information

Blind Deconvolution of Barcodes via Dictionary Analysis and Wiener Filter of Barcode Subsections

Blind Deconvolution of Barcodes via Dictionary Analysis and Wiener Filter of Barcode Subsections Blind Deconvolution of Barcodes via Dictionary Analysis and Wiener Filter of Barcode Subsections Maximilian Hung, Bohyun B. Kim, Xiling Zhang August 17, 2013 Abstract While current systems already provide

More information

Palmprint Recognition. By Sree Rama Murthy kora Praveen Verma Yashwant Kashyap

Palmprint Recognition. By Sree Rama Murthy kora Praveen Verma Yashwant Kashyap Palmprint Recognition By Sree Rama Murthy kora Praveen Verma Yashwant Kashyap Palm print Palm Patterns are utilized in many applications: 1. To correlate palm patterns with medical disorders, e.g. genetic

More information

Implementation of Canny Edge Detector of color images on CELL/B.E. Architecture.

Implementation of Canny Edge Detector of color images on CELL/B.E. Architecture. Implementation of Canny Edge Detector of color images on CELL/B.E. Architecture. Chirag Gupta,Sumod Mohan K cgupta@clemson.edu, sumodm@clemson.edu Abstract In this project we propose a method to improve

More information

Correcting the Lateral Response Artifact in Radiochromic Film Images from Flatbed Scanners

Correcting the Lateral Response Artifact in Radiochromic Film Images from Flatbed Scanners Correcting the Lateral Response Artifact in Radiochromic Film Images from Flatbed Scanners Background The lateral response artifact (LRA) in radiochromic film images from flatbed scanners was first pointed

More information

A new similarity measure for image segmentation

A new similarity measure for image segmentation A new similarity measure for image segmentation M.Thiyagarajan Professor, School of Computing SASTRA University, Thanjavur, India S.Samundeeswari Assistant Professor, School of Computing SASTRA University,

More information

This document is downloaded from DR-NTU, Nanyang Technological University Library, Singapore.

This document is downloaded from DR-NTU, Nanyang Technological University Library, Singapore. This document is downloaded from DR-NTU, Nanyang Technological University Library, Singapore. Title Transcription of polyphonic signals using fast filter bank( Accepted version ) Author(s) Foo, Say Wei;

More information

Calculation of Minimum Distances. Minimum Distance to Means. Σi i = 1

Calculation of Minimum Distances. Minimum Distance to Means. Σi i = 1 Minimum Distance to Means Similar to Parallelepiped classifier, but instead of bounding areas, the user supplies spectral class means in n-dimensional space and the algorithm calculates the distance between

More information

Wavelet analysis. Wavelet requirements. Example signals. Stationary signal 2 Hz + 10 Hz + 20Hz. Zero mean, oscillatory (wave) Fast decay (let)

Wavelet analysis. Wavelet requirements. Example signals. Stationary signal 2 Hz + 10 Hz + 20Hz. Zero mean, oscillatory (wave) Fast decay (let) Wavelet analysis In the case of Fourier series, the orthonormal basis is generated by integral dilation of a single function e jx Every 2π-periodic square-integrable function is generated by a superposition

More information

Choosing a digital camera for your microscope John C. Russ, Materials Science and Engineering Dept., North Carolina State Univ.

Choosing a digital camera for your microscope John C. Russ, Materials Science and Engineering Dept., North Carolina State Univ. Choosing a digital camera for your microscope John C. Russ, Materials Science and Engineering Dept., North Carolina State Univ., Raleigh, NC One vital step is to choose a transfer lens matched to your

More information

Supervised and unsupervised learning - 1

Supervised and unsupervised learning - 1 Chapter 3 Supervised and unsupervised learning - 1 3.1 Introduction The science of learning plays a key role in the field of statistics, data mining, artificial intelligence, intersecting with areas in

More information

Image Processing Based Automatic Visual Inspection System for PCBs

Image Processing Based Automatic Visual Inspection System for PCBs IOSR Journal of Engineering (IOSRJEN) ISSN: 2250-3021 Volume 2, Issue 6 (June 2012), PP 1451-1455 www.iosrjen.org Image Processing Based Automatic Visual Inspection System for PCBs Sanveer Singh 1, Manu

More information

Analysis of kiva.com Microlending Service! Hoda Eydgahi Julia Ma Andy Bardagjy December 9, 2010 MAS.622j

Analysis of kiva.com Microlending Service! Hoda Eydgahi Julia Ma Andy Bardagjy December 9, 2010 MAS.622j Analysis of kiva.com Microlending Service! Hoda Eydgahi Julia Ma Andy Bardagjy December 9, 2010 MAS.622j What is Kiva? An organization that allows people to lend small amounts of money via the Internet

More information

Predict the Popularity of YouTube Videos Using Early View Data

Predict the Popularity of YouTube Videos Using Early View Data 000 001 002 003 004 005 006 007 008 009 010 011 012 013 014 015 016 017 018 019 020 021 022 023 024 025 026 027 028 029 030 031 032 033 034 035 036 037 038 039 040 041 042 043 044 045 046 047 048 049 050

More information

Video-Rate Stereo Vision on a Reconfigurable Hardware. Ahmad Darabiha Department of Electrical and Computer Engineering University of Toronto

Video-Rate Stereo Vision on a Reconfigurable Hardware. Ahmad Darabiha Department of Electrical and Computer Engineering University of Toronto Video-Rate Stereo Vision on a Reconfigurable Hardware Ahmad Darabiha Department of Electrical and Computer Engineering University of Toronto Introduction What is Stereo Vision? The ability of finding the

More information

CHAPTER 3: DIGITAL IMAGING IN DIAGNOSTIC RADIOLOGY. 3.1 Basic Concepts of Digital Imaging

CHAPTER 3: DIGITAL IMAGING IN DIAGNOSTIC RADIOLOGY. 3.1 Basic Concepts of Digital Imaging Physics of Medical X-Ray Imaging (1) Chapter 3 CHAPTER 3: DIGITAL IMAGING IN DIAGNOSTIC RADIOLOGY 3.1 Basic Concepts of Digital Imaging Unlike conventional radiography that generates images on film through

More information

Introduction to Robotics Analysis, Systems, Applications

Introduction to Robotics Analysis, Systems, Applications Introduction to Robotics Analysis, Systems, Applications Saeed B. Niku Mechanical Engineering Department California Polytechnic State University San Luis Obispo Technische Urw/carsMt Darmstadt FACHBEREfCH

More information

MACHINE VISION MNEMONICS, INC. 102 Gaither Drive, Suite 4 Mount Laurel, NJ 08054 USA 856-234-0970 www.mnemonicsinc.com

MACHINE VISION MNEMONICS, INC. 102 Gaither Drive, Suite 4 Mount Laurel, NJ 08054 USA 856-234-0970 www.mnemonicsinc.com MACHINE VISION by MNEMONICS, INC. 102 Gaither Drive, Suite 4 Mount Laurel, NJ 08054 USA 856-234-0970 www.mnemonicsinc.com Overview A visual information processing company with over 25 years experience

More information

jorge s. marques image processing

jorge s. marques image processing image processing images images: what are they? what is shown in this image? What is this? what is an image images describe the evolution of physical variables (intensity, color, reflectance, condutivity)

More information

Automatic Detection of PCB Defects

Automatic Detection of PCB Defects IJIRST International Journal for Innovative Research in Science & Technology Volume 1 Issue 6 November 2014 ISSN (online): 2349-6010 Automatic Detection of PCB Defects Ashish Singh PG Student Vimal H.

More information

A secure face tracking system

A secure face tracking system International Journal of Information & Computation Technology. ISSN 0974-2239 Volume 4, Number 10 (2014), pp. 959-964 International Research Publications House http://www. irphouse.com A secure face tracking

More information

How To Cluster

How To Cluster Data Clustering Dec 2nd, 2013 Kyrylo Bessonov Talk outline Introduction to clustering Types of clustering Supervised Unsupervised Similarity measures Main clustering algorithms k-means Hierarchical Main

More information

APPM4720/5720: Fast algorithms for big data. Gunnar Martinsson The University of Colorado at Boulder

APPM4720/5720: Fast algorithms for big data. Gunnar Martinsson The University of Colorado at Boulder APPM4720/5720: Fast algorithms for big data Gunnar Martinsson The University of Colorado at Boulder Course objectives: The purpose of this course is to teach efficient algorithms for processing very large

More information

Accurate and robust image superresolution by neural processing of local image representations

Accurate and robust image superresolution by neural processing of local image representations Accurate and robust image superresolution by neural processing of local image representations Carlos Miravet 1,2 and Francisco B. Rodríguez 1 1 Grupo de Neurocomputación Biológica (GNB), Escuela Politécnica

More information

International Journal of Computer Sciences and Engineering Open Access. A novel technique to hide information using Daubechies Transformation

International Journal of Computer Sciences and Engineering Open Access. A novel technique to hide information using Daubechies Transformation International Journal of Computer Sciences and Engineering Open Access Research Paper Volume-4, Special Issue-1 E-ISSN: 2347-2693 A novel technique to hide information using Daubechies Transformation Jyotsna

More information

SOURCE SCANNER IDENTIFICATION FOR SCANNED DOCUMENTS. Nitin Khanna and Edward J. Delp

SOURCE SCANNER IDENTIFICATION FOR SCANNED DOCUMENTS. Nitin Khanna and Edward J. Delp SOURCE SCANNER IDENTIFICATION FOR SCANNED DOCUMENTS Nitin Khanna and Edward J. Delp Video and Image Processing Laboratory School of Electrical and Computer Engineering Purdue University West Lafayette,

More information

Using MATLAB to Measure the Diameter of an Object within an Image

Using MATLAB to Measure the Diameter of an Object within an Image Using MATLAB to Measure the Diameter of an Object within an Image Keywords: MATLAB, Diameter, Image, Measure, Image Processing Toolbox Author: Matthew Wesolowski Date: November 14 th 2014 Executive Summary

More information

PIXEL-LEVEL IMAGE FUSION USING BROVEY TRANSFORME AND WAVELET TRANSFORM

PIXEL-LEVEL IMAGE FUSION USING BROVEY TRANSFORME AND WAVELET TRANSFORM PIXEL-LEVEL IMAGE FUSION USING BROVEY TRANSFORME AND WAVELET TRANSFORM Rohan Ashok Mandhare 1, Pragati Upadhyay 2,Sudha Gupta 3 ME Student, K.J.SOMIYA College of Engineering, Vidyavihar, Mumbai, Maharashtra,

More information

Low-resolution Character Recognition by Video-based Super-resolution

Low-resolution Character Recognition by Video-based Super-resolution 2009 10th International Conference on Document Analysis and Recognition Low-resolution Character Recognition by Video-based Super-resolution Ataru Ohkura 1, Daisuke Deguchi 1, Tomokazu Takahashi 2, Ichiro

More information

Part-Based Recognition

Part-Based Recognition Part-Based Recognition Benedict Brown CS597D, Fall 2003 Princeton University CS 597D, Part-Based Recognition p. 1/32 Introduction Many objects are made up of parts It s presumably easier to identify simple

More information