A New Method for Face Recognition Using Variance Estimation and Feature Extraction

Walaa Mohamed 1, Mohamed Heshmat 2, Moheb Girgis 3 and Seham Elaw 4
1, 2, 4 Faculty of Science, Mathematical and Computer Science Department, Sohag University, 82524, Sohag, Egypt.
3 Faculty of Science, Department of Computer Science, Minia University, El-Minia, Egypt.

Abstract: Face recognition has been one of the most studied topics in the last decades, and its research has expanded into many applications in communications and automatic access control systems. In this paper, a new face recognition method based on the OpenCV face detection technique is introduced. The goal of the proposed method is to detect the faces that exist in an image and to locate each face in a previously prepared database, using simple variance calculations and the Euclidean distance of extracted facial features. The features under consideration are the eyes, nose and mouth. Furthermore, a new method to extract facial features is developed, based on feature location with respect to the face dimensions. The proposed algorithm has been tested on various images and its performance is found to be good in most cases. Experimental results show that our method of detection and verification achieves very encouraging results with good accuracy and simple computations.

Keywords: face detection, facial feature extraction, variance, RGB color space.

1. INTRODUCTION

The great progress of modern communication technology has led to growing demands for image and video applications in medicine, remote sensing, security, entertainment and education. Fast and reliable face and facial feature detection are required abilities for any human-computer interaction approach based on computer vision. One of the main challenging problems in building automated systems that perform face recognition and verification tasks is face detection and facial feature extraction. Though people are good at face identification, recognizing human faces automatically by computer is very difficult. Face recognition has been widely applied in security systems, credit-card verification, criminal identification, teleconferencing and so on. Face recognition is influenced by many complications, such as differences of facial expression, the light direction of imaging, and variations of posture, size and angle. Even for the same person, images taken in different surroundings may be unlike. The problem is so complicated that the achievement in the field of automatic face recognition by computer is not as satisfactory as that of fingerprints [1].

A lot of methods have been proposed for face detection in still images, based on texture, depth, shape and color information. Many methods depend on the observation that human faces are characterized by their oval shape and skin color. The major difficulties encountered in face recognition are due to variations in luminance, facial expressions, visual angles and other potential features such as glasses, beard, etc. This leads to a need for employing several rules in the algorithms that are used to tackle these problems [2], [3], [4].
Nikolaidis et al. [2] proposed a combined approach for facial feature extraction and determination of gaze direction that employs improved variations of the adaptive Hough transform for curve detection, minima analysis of feature candidates, template matching for inner facial feature localization, active contour models for inner face contour detection, and projective geometry properties for accurate pose determination. Koo et al. [5] suggested defining 20 facial features. Their method detects the facial candidate regions with a Haar classifier, detects the eye candidate region and extracts eye features by a dilate operation, and then detects the lip candidate region using these features. The relative color difference of a* in the L*a*b* color space is used to extract the lip feature and to detect the nose candidate region, and 20 features are detected from the 2D image by analyzing the end of the nose. Yen et al. [6] presented an automatic facial feature extraction method based on the edge density distribution of the image. In the preprocessing stage, a face is approximated by an ellipse, and a genetic algorithm is applied to search for the best ellipse region match. In the feature extraction stage, a genetic algorithm is applied to extract the facial features, such as the eyes, nose and mouth, in predefined subregions. Gu et al. [1] proposed a method to extract feature points from faces automatically. It provides a feasible way to locate the positions of the two eyeballs, the near and far corners of the eyes, the midpoint of the nostrils and the mouth corners from a face image. Viola and Jones [7]-[9] described a face detection framework that is capable of processing images extremely rapidly while achieving high detection rates. There are three key contributions. The first is the introduction of a new image representation, called the integral image, which allows the features used by their detector to be computed very quickly. The second is a simple and efficient classifier, built using the AdaBoost learning algorithm to select a small number of critical visual features from a very large set of potential features. The third contribution is a method for combining classifiers in a cascade, which allows background regions of the image to be quickly discarded while spending more computation on promising face-like regions. Srivastava [10] proposed an efficient algorithm for a facial expression recognition system, which performs facial expression analysis in near real time from a live webcam feed. The system is composed of two different entities: trainer and evaluator. Each frame of the video feed is passed through a series of steps, including Haar classifiers, skin detection, feature extraction, feature point tracking and a learned support vector machine model to classify emotions, achieving a tradeoff between accuracy and result rate. Radha et al. [11] described a comparative analysis of face recognition methods based on the curvelet transform: principal component analysis (PCA), linear discriminant analysis (LDA) and independent component analysis (ICA). The algorithms were tested on the ORL database. Kumar et al. [12] presented an automated system for human face recognition against a real-time background for a large homemade dataset of persons' faces. To detect real-time human faces, AdaBoost with a Haar cascade is used, and simple, fast PCA and LDA are used to recognize the detected faces. The matched face is then used to mark attendance in the laboratory, in their case.

This paper presents a new face recognition method. The proposed method uses OpenCV [13] to load boosted Haar classifier cascades, which allow an initial detection of the faces within a given image. The variance of the RGB components is computed to compare the extracted faces with the faces in the database used for comparison. In addition, a new facial feature extraction method is proposed based on feature location. The Euclidean distance between the facial features of the faces extracted from the test image and the faces extracted from the database after a variance test is then used.

The rest of this paper is organized as follows. Section 2 describes the methodology of the proposed method with its stages: face detection, variance estimation, feature extraction, method representation and the proposed algorithm. Section 3 includes the results and method analysis. Section 4 draws the conclusion of this work and possible points for future work.

2. METHODOLOGY

Given an image that contains many objects, our goal is to detect human faces, extract these faces and identify each face using a database of known humans. So, our algorithm is divided into three main steps. First: face detection; the proposed algorithm depends heavily on the open-source library OpenCV in this step. Second: facial feature extraction; an effective method is developed to extract facial features like the eyes, nose and mouth depending on their locations with respect to the face region. Third: similar face identification, or image searching; the goal of this step is to scan the database of known faces to find the faces most similar to the faces extracted from the test image in the first step.

2.1 Face Detection

The proposed method starts with a piece of code named facedetect.cpp. It detects all faces within a given image via Haar classifier cascades and draws rectangles around them, as shown in figure 1.

Figure 1: Face detection using the OpenCV algorithm

Haar-like feature classifier cascades [9], [14], [15] are composed of multiple classifiers, or conditions, that are used to distinguish a unique object (e.g., a face or an eye) from anything that is not that object. Classifiers are sets of values representing sums of pixel brightness on a region-by-region basis; these specifically structured regions are called Haar-like features. Haar-like feature classifiers are created using the integral image, an intermediate image representation that allows the features used by the detector to be computed very quickly; in particular, rectangle features can be computed very rapidly from it. The integral image [9] at location (x, y) contains the sum of the pixels above and to the left of (x, y), inclusive, so each pixel of the integral image accumulates the brightness of all pixels above and to the left of the corresponding pixel in the original image:

    ii(x, y) = \sum_{x' \le x,\, y' \le y} i(x', y'),   (1)

where ii(x, y) is the integral image and i(x, y) is the original image, as illustrated in figure 2.

Figure 2: The integral image representation
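
As an illustration of this stage (a minimal C++/OpenCV sketch, not the authors' original facedetect.cpp; the cascade file name and the helper name detectAndExtractFaces are assumptions), the detection and face-extraction step could look as follows:

    // Detect faces with a boosted Haar cascade and cut each one out as its
    // own image, as the first stage of the proposed method does.
    #include <opencv2/objdetect/objdetect.hpp>
    #include <opencv2/imgproc/imgproc.hpp>
    #include <vector>

    std::vector<cv::Mat> detectAndExtractFaces(const cv::Mat& image)
    {
        cv::CascadeClassifier cascade;
        if (!cascade.load("haarcascade_frontalface_alt.xml")) // boosted Haar cascade
            return std::vector<cv::Mat>();

        cv::Mat gray;
        cv::cvtColor(image, gray, cv::COLOR_BGR2GRAY);   // detector works on grayscale
        cv::equalizeHist(gray, gray);                    // common preprocessing step

        std::vector<cv::Rect> rects;
        cascade.detectMultiScale(gray, rects, 1.1, 3, 0, cv::Size(30, 30));

        std::vector<cv::Mat> faces;
        for (size_t k = 0; k < rects.size(); ++k)
            faces.push_back(image(rects[k]).clone());    // extract each face region
        return faces;
    }

Each detected rectangle is cloned into a separate image, so that the variance and feature tests of the later stages can operate on the faces directly.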

2.2 Variance Estimation

Variance is a very light calculation and is considered an important constraint for establishing similarity between two images. Let x be a vector of dimension n; the variance of x can be calculated as follows:

    var = \frac{1}{n} \sum_{i=1}^{n} (x_i - \bar{x})^2,   (2)

where \bar{x} is the mean value of x. However, two images that have the same variance are not necessarily the same image, because different images may have the same variance value. So the variance is used first to filter the codebook (database of faces) and extract the faces whose variance value is the same as, or close to, that of the input face image; then another test is applied to choose the faces most similar to this test face.

When working with 3-D or RGB images, the important problem that appears is that there are three values for each pixel in the image, representing the red, green and blue colors. To compute the variance of an RGB image, the variance of each color is calculated separately, so there are three variance values: one for the red values, another for the green values and a third for the blue values:

    v_{red} = \frac{1}{n} \sum_{i=1}^{n} (x_{r,i} - \bar{x}_r)^2, \quad v_{green} = \frac{1}{n} \sum_{i=1}^{n} (x_{g,i} - \bar{x}_g)^2, \quad v_{blue} = \frac{1}{n} \sum_{i=1}^{n} (x_{b,i} - \bar{x}_b)^2.   (3)

To simplify the comparison, the average of the three variance values is computed as follows:

    v = \frac{v_{red} + v_{green} + v_{blue}}{3}.   (4)
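
A sketch of this variance test, assuming 8-bit BGR images as loaded by OpenCV (the helper name averageVariance is an assumption), is given below. cv::meanStdDev returns per-channel standard deviations computed with the population formula of eq. (2), so squaring them yields the per-channel variances of eq. (3):

    #include <opencv2/core/core.hpp>

    // Average of the per-channel variances of a color image, eqs. (3) and (4).
    double averageVariance(const cv::Mat& bgr)
    {
        cv::Scalar mean, stddev;
        cv::meanStdDev(bgr, mean, stddev);      // per-channel mean and std. deviation
        double vBlue  = stddev[0] * stddev[0];  // variance of the blue values
        double vGreen = stddev[1] * stddev[1];  // variance of the green values
        double vRed   = stddev[2] * stddev[2];  // variance of the red values
        return (vRed + vGreen + vBlue) / 3.0;   // eq. (4)
    }

The codebook filtering step then simply compares this averaged value for the test face and for each database face.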

2.3 Feature Extraction

The proposed method uses the face detector developed by Viola and Jones [7]-[9] to extract the face regions from the experiment images. This step makes it easier to work on the faces directly when searching the database for the faces most similar to the face we are looking for. The OpenCV face detection code based on Viola and Jones's algorithm depends on converting the image to grayscale and then applying the face detection method.

In this portion of our work, the aim is to compare two color faces to detect the similarity between them. The RGB (Red, Green, Blue) color space, which is used here, is an additive color system based on tri-chromatic theory. It is often found in systems that use a CRT to display images. The RGB color system is very common, being used in virtually every computer system as well as in television, video, etc. [16]. In the RGB color model, any source color F can be matched by a linear combination of the three color primaries, i.e. red, green and blue, provided that none of the three can be matched by a combination of the other two, see figure 3. Here, F can be represented as:

    F = rR + gG + bB,   (5)

where r, g and b are scalars indicating how much of each of the three primaries (R, G and B) is contained in F. The normalized form of F is:

    F = R'R + G'G + B'B,   (6)

where

    R' = r / (r + g + b), \quad G' = g / (r + g + b), \quad B' = b / (r + g + b).   (7)

Figure 3: RGB color model

A new facial feature extraction method, based on feature location with respect to the whole face region, is proposed in this paper. We try to locate the eyes, nose and mouth in the faces extracted using the OpenCV [7]-[9] face detection algorithm. The candidate regions of the left eye, right eye, nose and mouth were detected by manual training; then the obtained dimensions of each region were applied to several other faces of the same size, and the results were very good, as shown in figure 4.

Figure 4: Feature extraction based on feature location

Given a face image of 200 pixel height and 200 pixel width, after training with a lot of images, we found that the candidate region of the eyes is located between rows 60 and 95, with columns 25 to 80 for the right eye and columns 115 to 170 for the left eye. The candidate region for the nose is located between rows 110 and 145 and columns 75 and 125, and the candidate region for the mouth is located between rows 145 and 185 and columns 60 and 135. When applying the dimensions obtained by training to many face images, we found that they were suitable for any face image with the same width and height, as shown in figure 5.

Figure 5: Examples of feature extraction

2.4 Method Representation

The proposed algorithm consists of four parts. Firstly, the OpenCV face detection code is used to detect and extract the faces within the input image. Secondly, variance estimation is applied to extract the database images that have a variance value close to that of the test image. Thirdly, the cut-features method is used to extract the facial features from the face images. Finally, the Euclidean distance between facial features is computed by the following equation, to compare the similarity between features:

    d = |test_R - matched_R| + |test_G - matched_G| + |test_B - matched_B|,   (8)

where test_C and matched_C denote the RGB color values of the test feature and the matched feature, respectively.
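
The feature cut and the distance of eq. (8) can be sketched as follows, assuming faces already scaled to the 200x200 geometry above. The paper does not state whether eq. (8) is applied per pixel or to aggregate channel values; this sketch uses the mean R, G and B values of each feature region as one plausible reading, and the region constants and helper names are illustrative:

    #include <opencv2/core/core.hpp>
    #include <vector>
    #include <cmath>

    // Candidate regions from Section 2.3; cv::Rect takes (x, y, width, height),
    // i.e. columns first, then rows.
    static const cv::Rect kRightEye(25, 60, 55, 35);   // rows 60-95,   cols 25-80
    static const cv::Rect kLeftEye(115, 60, 55, 35);   // rows 60-95,   cols 115-170
    static const cv::Rect kNose(75, 110, 50, 35);      // rows 110-145, cols 75-125
    static const cv::Rect kMouth(60, 145, 75, 40);     // rows 145-185, cols 60-135

    // Cut the four facial features out of a 200x200 face image.
    std::vector<cv::Mat> cutFeatures(const cv::Mat& face)
    {
        std::vector<cv::Mat> feats;
        feats.push_back(face(kRightEye));
        feats.push_back(face(kLeftEye));
        feats.push_back(face(kNose));
        feats.push_back(face(kMouth));
        return feats;
    }

    // Eq. (8): sum of absolute R, G and B differences between two features.
    double featureDistance(const cv::Mat& testFeat, const cv::Mat& matchedFeat)
    {
        cv::Scalar t = cv::mean(testFeat);
        cv::Scalar m = cv::mean(matchedFeat);
        return std::abs(t[2] - m[2])    // red
             + std::abs(t[1] - m[1])    // green
             + std::abs(t[0] - m[0]);   // blue
    }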

The steps of the proposed algorithm, summarized in figure 6, are as follows:

1. Read the input image.
2. Apply the face detection code to the input image and detect the faces.
3. Extract each face as a separate image.
4. Read a test face from those extracted in the previous step.
5. Read the codebook array (database of images).
6. Divide the codebook into n images.
7. Calculate the variance of each image obtained in step 6, using eqs. (3) and (4), and put the variance values in an array.
8. Calculate the variance of the test image (step 4) using eqs. (3) and (4).
9. Compare the variance value of the test image with that of each image in the codebook, and keep in an array the locations of the images most similar to the test image, i.e. those satisfying the condition -100 <= variance difference <= 100.
10. For i = 1 to the number of similar images extracted in step 9:
   a) Extract the facial features from each image according to location (right eye, left eye, nose, mouth).
   b) Calculate the Euclidean distance between the three arrays containing the RGB color values of each feature, using eq. (8).
11. Detect the minimum distance (d) and the location of the image that has the minimum distance from step 10.
12. Display the best matched image from the codebook.
13. Repeat steps 4 to 12 for the other faces in the input image.

Figure 6: The proposed algorithm steps
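
Putting the pieces together, the search loop of steps 7-12 might be sketched as below, reusing the averageVariance, cutFeatures and featureDistance helpers from the earlier sketches; the codebook is assumed to be a vector of 200x200 face images, and varRange stands in for the experimentally chosen variance difference range:

    #include <opencv2/core/core.hpp>
    #include <vector>
    #include <cmath>
    #include <limits>

    // Defined in the earlier sketches:
    double averageVariance(const cv::Mat& bgr);
    std::vector<cv::Mat> cutFeatures(const cv::Mat& face);
    double featureDistance(const cv::Mat& testFeat, const cv::Mat& matchedFeat);

    // Return the codebook index of the best match for testFace, or -1 if no
    // codebook face passes the variance test (steps 7-12 of the algorithm).
    int findBestMatch(const cv::Mat& testFace,
                      const std::vector<cv::Mat>& codebook,
                      double varRange)
    {
        double testVar = averageVariance(testFace);             // step 8
        std::vector<cv::Mat> testFeats = cutFeatures(testFace);

        int best = -1;
        double bestDist = std::numeric_limits<double>::max();
        for (size_t i = 0; i < codebook.size(); ++i) {
            // Step 9: keep only faces whose variance is close to the test face.
            if (std::abs(averageVariance(codebook[i]) - testVar) > varRange)
                continue;
            // Step 10: accumulate eq. (8) over the four facial features.
            std::vector<cv::Mat> feats = cutFeatures(codebook[i]);
            double d = 0.0;
            for (size_t f = 0; f < feats.size(); ++f)
                d += featureDistance(testFeats[f], feats[f]);
            if (d < bestDist) {                                  // step 11
                bestDist = d;
                best = static_cast<int>(i);
            }
        }
        return best;                                             // step 12
    }

Only the codebook faces that survive the variance filter reach the distance computation, which is the source of the efficiency argument made in Section 3.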

3. RESULTS AND DISCUSSION

The experiments were performed on a computer with a 2.20 GHz processor and 4 GB of RAM, using several 200x200 color images containing humans and three codebooks of different sizes: 7 images of two persons under different conditions; 10 different images (as shown in step 5 in figure 6); and 20 images, including several images of the same persons. A sample of the database images used is shown in figure 7.

Figure 7: A sample of the used database

Table 1 shows the results obtained using 20 different 200x200 RGB images and the database of RGB images shown in figure 7. The first column of the table shows the test face obtained by the face detection algorithm. The second column shows the range of the variance difference between the test face and the faces in the database. The third column shows the similar images returned after the variance test and their locations in the used database. The fourth column shows the facial features extracted from the test image, and the last two columns show the best matched face and its location in the database, after applying the cut-features method and calculating the Euclidean distance between the facial features of the test face and the faces extracted from the database after the variance test.

For example, using the proposed algorithm to locate face #2 and find the most similar face to it within the database with a variance range close to 0, the right matched face is obtained. But there are some other face images belonging to the same person under different conditions (closed eyes and smile); when the variance difference range is extended, all the face images belonging to the same person are returned. The execution proceeds as follows: after the first test (the variance test) with a variance difference range equal to 56 (experimentally obtained), the algorithm returned 3 locations of faces whose variance values are close to the variance of test face #2. In order to know which of them is the same as, or the closest to, the test face, the facial features of the test face and of the three face images are extracted, and the Euclidean distance between their RGB components is calculated by eq. (8). Then the face image with the minimum distance (d) is considered the best matched image and its location is returned.

To see the importance of the second test (feature similarity matching), consider cases like images #1, #6, #12 and #16, which are all images of the same person. In these cases, the test started with a variance difference value close to zero, the algorithm worked well and the right image was returned. But when the variance difference was increased to get other images of the same person, some images such as the child image were returned, because the variance of the child's face image is somewhat close to the variance values of some test face images under different conditions. After applying the second test, however, the right close face images were returned.

The search efficiency is evaluated by how many times the distance (d) computation is performed on average, compared to the size of the database. In the proposed method, the total number of distance calculations is small, because the variance test first finds the face images whose variance value is close to that of the input face image; the distance computation is then performed only on those images, whose number is always small compared to the database size.

4. CONCLUSION

In this paper, a new method of face recognition based on variance estimation and facial feature cutting is proposed. It can be used in face recognition systems such as video surveillance, human-computer interfaces, image database management and smart home applications. The proposed algorithm has been tested using a database of faces, and the results showed that it is able to recognize a variety of different faces, in spite of different expressions and illumination conditions, with light calculations. However, the failures of the OpenCV face detection code on zoomed and rotated images indicate a need for further work. In addition, facial expressions and their effect on the recognition of humans need further investigation.

Table 1: Some results of the proposed algorithm: each face obtained by the OpenCV face detection code is compared with the database images by variance estimation and the Euclidean distance of features. All test images are of size 200x200.

Face no. | Variance difference range | Similar images returned after the variance test (locations in database) | Best matched image location
#1  | 0-391  | 1, 3, 7, 12, 16       | 1
#2  | 0-56   | 2, 6, 11              | 2
#3  | 0-31   | 1, 3                  | 3
#4  | 0-45   | 4, 20                 | 4
#5  | 0-27   | 5, 19                 | 5
#6  | 0-56   | 2, 6, 11              | 6
#7  | 0-370  | 1, 5, 7, 12, 16, 19   | 7
#8  | 0-447  | 8, 10                 | 8
#9  | 0-2216 | 8, 9                  | 9
#10 | 0-437  | 8, 10                 | 10

Table 1: Continued

Face no. | Variance difference range | Similar images returned after the variance test (locations in database) | Best matched image location
#11 | 0-34   | 2, 6, 11               | 11
#12 | 0-391  | 1, 2, 5, 7, 12, 16, 19 | 12
#13 | 0-370  | 4, 13, 14, 15, 18, 20  | 13
#14 | 17-556 | 4, 13, 14, 15, 18, 20  | 14
#15 | 0-162  | 14, 15                 | 15
#16 | 0-231  | 1, 3, 7, 12, 16        | 16
#17 | 0-244  | 17, 18                 | 17
#18 | 0-574  | 4, 13, 14, 17, 18, 20  | 18
#19 | 0-27   | 5, 19                  | 19
#20 | 0-423  | 4, 13, 14, 17, 18, 20  | 20

REFERENCES

[1] H. Gu, G. Su, C. Du, "Feature Points Extraction from Faces", Image and Vision Computing NZ, Palmerston North, pp. 154-158, (2003).
[2] A. Nikolaidis, I. Pitas, "Facial Feature Extraction and Pose Determination", Pattern Recognition, Elsevier, vol. 33, no. 11, pp. 1783-1791, (2000).
[3] I. Aldasouqi, M. Hassan, "Smart Human Face Detection System", International Journal of Computers, vol. 5, no. 2, pp. 210-216, (2011).
[4] G. Kim, J. Suhr, H. Jung, J. Kim, "Face Occlusion Detection by using B-spline Active Contour and Skin Color Information", 11th International Conference on Control, Automation, Robotics and Vision (ICARCV), Singapore, pp. 627-632, (2010).
[5] H. Koo, H. Song, "Facial Feature Extraction for Face Modeling Program", International Journal of Circuits, Systems and Signal Processing, vol. 4, no. 4, pp. 169-176, (2010).
[6] G. Yen, N. Nithianandan, "Facial Feature Extraction Using Genetic Algorithm", Proceedings of the Congress on Evolutionary Computation, vol. 2, pp. 1895-1900, IEEE, (2002).
[7] P. Viola, M. Jones, "Rapid Object Detection using a Boosted Cascade of Simple Features", IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2001), Kauai, HI, USA, pp. 511-518, (2001).
[8] P. Viola, M. Jones, "Face Recognition Using Boosted Local Features", IEEE International Conference on Computer Vision, (2003).
[9] P. Viola, M. Jones, "Robust Real-Time Face Detection", International Journal of Computer Vision, vol. 57, no. 2, pp. 137-154, (2004).
[10] S. Srivastava, "Real Time Facial Expression Recognition Using A Novel Method", The International Journal of Multimedia & Its Applications (IJMA), vol. 4, no. 2, pp. 49-57, (2012).
[11] V. Radha, N. Nallammal, "Comparative Analysis of Curvelets Based Face Recognition Methods", Proceedings of the World Congress on Engineering and Computer Science, vol. 1, (2011).
[12] K. Kumar, S. Prasad, V. Semwal, R. Tripathi, "Real Time Face Recognition Using AdaBoost Improved Fast PCA Algorithm", International Journal of Artificial Intelligence & Applications (IJAIA), vol. 2, no. 3, (2011).
[13] http://en.wikipedia.org/wiki/opencv.
[14] Z. Yao, H. Li, "Tracking a detected face with dynamic programming", Image and Vision Computing, vol. 24, no. 6, pp. 573-580, (2006).
[15] R. Padilla, C. F. F. Costa Filho, M. G. F. Costa, "Evaluation of Haar Cascade Classifiers Designed for Face Detection", World Academy of Science, Engineering and Technology, vol. 64, (2012).
[16] A. Ford, A. Roberts, "Colour Space Conversions", Technical Report, http://www.poynton.com/pdfs/coloureq.pdf, (1998).

AUTHORS

Walaa M. Abd-Elhafiez received her B.Sc. and M.Sc. degrees from South Valley University, Sohag branch, Sohag, Egypt, in 2002, and from Sohag University, Sohag, Egypt, in January 2007, respectively, and her Ph.D. degree from Sohag University, Sohag, Egypt. She has authored and co-authored more than 29 scientific papers in national and international journals and conferences. Her research interests include image segmentation, image enhancement, image recognition, image coding and video coding, and their applications in image processing.

Mohamed Heshmat received his B.Sc. and M.Sc. degrees from South Valley University, Sohag branch, Sohag, Egypt, in 2002, and from Sohag University, Sohag, Egypt, in 2010, respectively, and his Ph.D. degree from Sohag University, Sohag, Egypt, and Bauhaus-University, Weimar, Germany. His research interests include computer vision, 3D data acquisition, object reconstruction, image segmentation, image enhancement and image recognition.

Seham Elaw received her B.Sc. and M.Sc. degrees from South Valley University, Sohag branch, Sohag, Egypt, in 2004, and from Sohag University, Sohag, Egypt, in January 2011, respectively. Her research interests include image segmentation, image enhancement, image recognition and their applications in image processing.