Spoofing Attempt Detection using Gaze Colocation
Asad Ali, Farzin Deravi and Sanaul Hoque
School of Engineering and Digital Arts
University of Kent, Canterbury, Kent, CT2 7NT, United Kingdom
{aa623, f.deravi,

Abstract: Spoofing attacks on biometric systems are one of the major impediments to their use in secure unattended applications. This paper presents a novel method for face liveness detection that tracks the gaze of the user with an ordinary webcam. In the proposed system, an object appears at random locations on the display screen and the user is required to look at it while their gaze is measured. The visual stimulus appears in such a way that it repeatedly directs the gaze of the user to specific points on the screen. Features extracted from images captured at these sets of colocated points are used to estimate the liveness of the user. A scenario is investigated where genuine users track the challenge with head/eye movements whereas impostors hold a photograph of the target user and attempt to follow the stimulus during simulated spoofing attacks. The results from the experiments indicate the effectiveness of the gaze colocation feature in detecting spoofing attacks.

1 Introduction

Despite the successes of biometric recognition systems in recent decades, they remain vulnerable to increasingly sophisticated spoofing attacks using fake artifacts. These artifacts may be created from the biometric information of genuine users and presented at the system sensor(s). An impostor can present a fake biometric sample of a genuine user to a biometric recognition system to gain access to unauthorised data or premises. This type of spoofing is a direct attack on the sensor (also known as a presentation attack); the impostor does not require any a priori knowledge of the internal operation of the biometric system. To prevent such sensor-level attacks, biometric systems need to establish the liveness of the source of an acquired sample.
Amongst biometric modalities, face recognition has emerged as socially acceptable, accurate and convenient, and is therefore used for a variety of security applications. However, face recognition systems may be more vulnerable to abuse than other biometric modalities, because a simple photograph or video of a genuine user can be used to deceive such systems [Tr11]. Therefore, by introducing a liveness detection mechanism, the security of biometric systems can be substantially improved.
Photographs, masks, and videos are the spoofing artifacts that may be used for attacks at the sensor level. Photo spoofing can be prevented by detecting motion, smiles, eye blinks, etc. However, such techniques can be deceived by presenting a video of the genuine user to the face recognition system. The subtle differences between a photograph (or video) of an individual and the live person need to be used to establish the liveness of the presentation at the sensor. An important source of liveness information is the direct user interaction with the system, captured and assessed in real time. In this paper we present a novel challenge/response mechanism for face recognition systems, using a standard webcam, that tracks the gaze of the user moving in response to a visual stimulus. The stimulus is designed to facilitate the acquisition of distinguishing features based on the colocation of sets of points along the gaze trajectory.

The paper is organized as follows. Section 2 presents a brief overview of the state of the art. Section 3 describes the proposed technique, while Section 4 reports on its experimental evaluation. Finally, Section 5 provides conclusions and offers suggestions for further work.

2 Related Work

Various approaches have been presented in the literature to establish liveness and to detect presentation attacks. Liveness detection approaches can be grouped into two broad categories: active and passive. Active approaches require user engagement to enable the facial recognition system to establish the liveness of the source through the sample captured at the sensor. Passive approaches do not require user co-operation or even user awareness, but exploit involuntary physical movements, such as spontaneous eye blinks, and 3D properties of the image. Passive anti-spoofing techniques are usually based on the detection of signs of life, e.g. eye blinks, facial expressions, etc.
For example, Pan et al. [PWL07] proposed a liveness detection method that extracts temporal information from the process of eye blinking. They used Conditional Random Fields to model and detect eye blinks over a sequence of images. The method of Jee et al. [JJY06] uses a single ordinary camera and analyses the sequence of images captured. They locate the centre of both eyes in the facial image. If the variance of each eye region is larger than a preset threshold, the image is considered a live facial image; otherwise it is classified as a photograph. Wang et al. [WDF09] presented a liveness detection method in which physiological motion is detected by estimating eye blinks with an eye contour extraction algorithm. They use active shape models with a random forest classifier trained to recognize the local appearance around each landmark. They also showed that if any motion in the face region is detected, the sample is considered to be captured from an impostor. Kollreider et al. [Ko09, Ko08, Ko05] combined the detection of facial components (nose, ears, etc.) with optical flow estimation to determine a liveness score. They assumed that a 3D face produces a characteristic 2D motion, which is higher at central face parts (e.g. the nose) than at the outer face regions (e.g. the ears). Parts nearer to the
camera move differently from parts which are further away in a live face. A translated photograph, by contrast, generates constant motion across the various face regions. They also proposed a method which uses lip motion (without audio information) to assess liveness [Ko05].

Some anti-spoofing techniques are based on the analysis of skin reflectance, texture, noise signatures, etc. Li et al. [Li04] explored a technique based on the analysis of the 2-D Fourier spectra of the face image. Their work is based on the principle that, because a photograph is smaller than the real face and is flat, its image has fewer high-frequency components than images of real faces. Kim et al. [Ki12] proposed a method for detecting a single fake image based on frequency and texture analyses. They exploited frequency information using the power spectrum and used Local Binary Pattern (LBP) features for analyzing the textures. They fused the decision values from the frequency-based classifier and the texture-based classifier to detect fake faces. Pinto et al. [Pi12] used the noise signatures generated by recaptured video to discriminate between live and fake attempts. They suggested that this noise is an artifact of video captured from other video rather than from real scenes. They used the Fourier spectrum, computation of the visual rhythm and extraction of grey-level co-occurrence matrices as feature descriptors.

Systems based on the challenge-response approach belong to the active category, where the user is asked to perform specific activities, such as uttering digits or changing his or her head pose, to ascertain liveness. For instance, Frischholz et al. [FW03] investigated a challenge-response approach to enhance the security of a face recognition system. The users were required to look in certain directions, which were chosen by the system randomly.
The system estimated the head pose and compared the real-time movement (response) to the instructions issued by the system (challenge) to verify the user's authenticity. Ali et al. [ADH12] presented a method of liveness detection based on gaze tracking. Users are required to follow a moving object with their head/gaze while a camera captures images of the user's face. The path of the object is designed in such a way that a number of collinear points are visited. Work has also been reported on using the gaze trajectory as a source of biometric information [DG11].

The work presented here explores a new feature set, hereby referred to as the gaze colocation feature set, for the detection of presentation attacks. Although a setup similar to the one in [ADH12] has been used, the novel features proposed here exploit the ability of the natural gaze to return to the same location consistently. Here the user's gaze is directed to some pre-selected random positions on the display and features are extracted from sets of gazes at these colocated targets. The underlying hypothesis is that the variance in gazes for colocated positions should be small in genuine user attempts. This phenomenon is then exploited to differentiate between a photo spoof attack and a genuine user input. Video spoofing presents an even greater challenge. A video camera is required and, as reported, sophisticated methods such as video background control, 3D masks, 3D facial images and placing fiducial points in the background are all being employed to prevent video spoofing [Tr11, Pa11]. In this paper, however, we do not report results of tests on the proposed system under video spoofing attacks.
3 Liveness Detection through Gaze Tracking

The scenario considered in this paper is that of a face verification system using an ordinary camera (webcam). A block diagram of the proposed system is shown in Figure 1. An object appears on the display and the camera (sensor) captures frames as the position of the object on the display changes. The gaze colocation features are extracted from the pupil centres in the captured frames, which are then classified as genuine or fake.

Figure 1: System block diagram (Sensor, Control, Landmarks Extraction, Classifier, Decision, Display)

3.1 Visual Stimulus

A small object appears at random locations on the screen and the user is required to find and follow it with head/gaze movement. It is not necessary to space these targets uniformly, but ideally they should not be too close to one another and each should be visited multiple times. At each appearance of the stimulus, the camera captures an image of the user's face. The presentation of the challenge takes approximately 130 seconds to complete, capturing 90 still images, one at each location of the challenge. The object appears in a random sequence to prevent predictive video attacks and visits each position at least three times. In this way a number of colocated sets of gaze can be identified. In Figure 2(a) a genuine user is seen tracking the challenge to establish liveness, while in Figure 2(b) an impostor is responding to the challenge by carefully shifting a high-quality printed photo to gain access to the system.

Figure 2: Example of (a) a genuine attempt, and (b) a spoof attempt
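The randomised challenge described above can be sketched as follows. This is a minimal illustration, not the authors' code; the function name, the target coordinates and the no-immediate-repeat constraint are our assumptions about a reasonable implementation.

```python
import random

def make_stimulus_sequence(positions, visits=3, seed=None):
    """Build a randomised challenge sequence in which each screen
    position is visited `visits` times and consecutive targets differ,
    so the gaze must move between appearances (a sketch, not the
    authors' implementation)."""
    rng = random.Random(seed)
    sequence = [p for p in positions for _ in range(visits)]
    rng.shuffle(sequence)
    # Re-shuffle until no position appears twice in a row.
    while any(a == b for a, b in zip(sequence, sequence[1:])):
        rng.shuffle(sequence)
    return sequence

# Hypothetical pixel coordinates of five on-screen targets.
targets = [(100, 100), (960, 100), (100, 900), (960, 900), (530, 500)]
seq = make_stimulus_sequence(targets, visits=3, seed=42)
```

Because the order is random, an attacker cannot pre-record a video that anticipates the target positions, yet each target is still revisited often enough to form colocated sets.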
3.2 Facial Landmark Detection

The images captured during the challenge-response were analysed using STASM [MN08] to extract facial landmark points. STASM returns 68 different landmarks on the face region using an active shape model technique. The coordinates of the centres of the pupils were used for feature extraction in the proposed scheme.

3.3 Gaze Colocation Features

The gaze colocation features are extracted from the images captured when the stimulus is at a given location. The x and y coordinates of the object on the display are the same whenever it reappears at a given place during this exercise. It can therefore be assumed that the x and y coordinates of the pupil centres in the corresponding frames should also be very close, resulting in a very small variance in the observed x- and y-coordinates of the pupil centres in genuine attempts. A feature vector is thus formed from the variances of pupil centre coordinates for all the occasions where the stimulus is colocated. Similar features can be extracted from other facial landmarks, but were not used in the results reported here.

4 Experiments

The system setup was similar to the one shown in Figure 3. The setup consists of a webcam, a PC and a display monitor. The camera used is a Logitech QuickCam Pro 5000, centrally mounted on top of a 21.5" LCD screen, a commonly used monitor type, with a resolution of pixels and a 5 ms response time. The distance between the camera and the user was approximately 750 mm. This distance was not a tight constraint but had to be such that the facial features could be clearly acquired by the camera.

Figure 3: System setup (user at distance d from the camera mounted on the screen)
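The colocation feature described in Section 3.3 (variance of the pupil-centre coordinates over frames sharing a stimulus location) can be sketched as below. This is our illustration of the idea, not the authors' code, and the function name and data layout are assumptions.

```python
import numpy as np

def colocation_features(pupil_xy, target_ids):
    """Variance of pupil-centre x and y coordinates over the frames in
    which the stimulus occupied the same screen location.

    pupil_xy   : (n_frames, 2) array of pupil-centre coordinates
    target_ids : length-n_frames labels identifying the stimulus location
    Returns one var(x) and one var(y) per distinct location.
    """
    pupil_xy = np.asarray(pupil_xy, dtype=float)
    features = []
    for t in sorted(set(target_ids)):
        idx = [i for i, lab in enumerate(target_ids) if lab == t]
        features.extend(pupil_xy[idx].var(axis=0))  # var(x), var(y)
    return np.array(features)

# Toy data: a steady (genuine-like) gaze returns close to the same
# pupil position at each revisit, so the variances stay small.
xy = [[10.0, 5.0], [10.2, 5.1], [9.9, 4.9],
      [30.0, 20.0], [30.1, 19.8], [29.8, 20.2]]
ids = [0, 0, 0, 1, 1, 1]
f = colocation_features(xy, ids)  # 2 locations -> 4 features
```

A hand-held photograph drifting between revisits of the same target would produce much larger variances, which is exactly what the classifier exploits.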
Data was collected from 8 subjects in 3 sessions. Each subject provided data for both genuine and impostor attempts, creating 26 sets of each. During spoofing attacks a high-quality colour photo of a genuine user was held in front of the camera while attempting to follow the stimulus. Each attempt acquired 90 image frames at a resolution of pixels. This resolution provided good enough picture quality to recognize the facial landmarks. In total, 30 sets of x-y coordinates of the pupil centres from colocated gaze targets were extracted, resulting in a feature vector of size 60 for each eye. There were a small number of frames where the pupil centres were not detected by STASM, and such frames (and the associated colocation points) were excluded from the feature extraction process.

For this data, the (x, y) coordinates of the pupil centres from frames captured while users were looking at the central stimulus location are plotted in Figure 4, showing deviations from their mean for the genuine and fake attempts respectively. It can be observed that the range of the points in genuine attempts is much smaller than in the spoof attempts. This is because the impostor, relying on hand-eye coordination, is unable to align the photo back to the same spot as accurately as a genuine user.

Figure 4: Pupil centre deviations captured during (a) a genuine attempt and (b) a spoof attempt for the central location of the target

4.1 Evaluation Framework

Face liveness detection is a two-class problem and there are four possible outcomes of the classification process: true positive, true negative, false negative and false positive. When a genuine (live/non-spoof) attempt is classified as genuine and a fake (spoof) attempt is classified as genuine, these are termed true positive (TP) and false positive (FP) classifications respectively.
Similarly, when a genuine attempt is classified as fake and a fake attempt is classified as fake, these are called false negative (FN) and true negative (TN) respectively. FP and FN are the erroneous outcomes of the process, and the rates of their occurrence are reported as the False Positive Rate (FPR) and False Negative Rate (FNR) in this report in order to facilitate the assessment and comparison of system performance. The term True Positive Rate (TPR) is also used and is equal to 1 - FNR. The Total Error Rate (TER) is defined as the proportion of misclassified attempts out of all the attempts, including both genuine and fake.
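The rates defined above can be computed directly from the counts of the four outcomes; a minimal sketch (the function name and label convention are ours):

```python
def liveness_rates(y_true, y_pred):
    """FPR, FNR and TER for a two-class liveness decision,
    with 1 = genuine/live and 0 = fake/spoof, following the
    definitions in the text."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    fpr = fp / (fp + tn)            # fakes accepted as genuine
    fnr = fn / (fn + tp)            # genuine attempts rejected
    ter = (fp + fn) / len(y_true)   # proportion of all misclassifications
    return fpr, fnr, ter

# Toy example: one genuine attempt rejected, one fake accepted.
fpr, fnr, ter = liveness_rates([1, 1, 1, 0, 0, 0], [1, 1, 0, 0, 0, 1])
tpr = 1 - fnr  # TPR as defined in the text
```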
For the experiments reported here, the database was divided into two disjoint sets for training and testing purposes. Of the 52 samples, 12 were chosen for testing and the remaining 40 for training the classifier. For training, 20 random samples from fake and 20 from genuine attempts were chosen. The experiments were repeated 50 times, each time with randomly selected samples for testing and training. The mean error rates are reported here.

4.2 Experimental Results

Error rates were calculated for a range of system parameters and are reported in this section. True Positive Rates at a set of predefined FPR values were obtained and used for comparison. The ROC curve of the proposed scheme using features from a single eye is presented in Figure 7. It is apparent that the system did not perform very accurately when the entire feature vector was used. However, the performance improved significantly when subsets of the available features were used for training and testing. The forward feature selection method was used to rank the features [BL97]. The best results were achieved when a subset of the best 15 features was used (as shown in Figure 7). At 10% FPR, the TPR was above 90%, compared with only around 40% when using the entire feature set.

Figure 7: Performance with features extracted from a single eye

4.3 Combination Schemes

While the colocation features from each eye may be used in isolation, it is interesting to explore whether these feature sets are complementary and whether greater accuracy can be achieved by their combination. Therefore, both feature and score fusion schemes were explored to find whether accuracy can be gained by combining information from the features extracted from the two eyes. The following sub-sections cover each of these fusion schemes in turn.
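The forward feature selection used above to rank the features [BL97] greedily grows a subset, adding at each step the feature that most improves a chosen score. The sketch below is our illustration of the generic procedure; the scoring function is a toy stand-in, not the criterion used in the paper.

```python
import numpy as np

def forward_select(X, y, score_fn, k):
    """Greedy forward selection: repeatedly add the feature whose
    inclusion maximises score_fn(X[:, subset], y), until k features
    have been chosen. Returns the selected column indices in order."""
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < k:
        best = max(remaining, key=lambda j: score_fn(X[:, selected + [j]], y))
        selected.append(best)
        remaining.remove(best)
    return selected

def toy_score(Xs, y):
    # Toy separability criterion: distance between the class means.
    m0 = Xs[y == 0].mean(axis=0)
    m1 = Xs[y == 1].mean(axis=0)
    return float(np.linalg.norm(m0 - m1))

# Feature 0 separates the two classes; features 1 and 2 do not.
X = np.array([[0.0, 5.0, 1.0], [0.1, 5.1, 1.2],
              [3.0, 5.0, 1.1], [3.1, 4.9, 0.9]])
y = np.array([0, 0, 1, 1])
picked = forward_select(X, y, toy_score, k=1)
```

Ranking features this way and keeping only the best ~15 is what produced the large TPR improvement reported in Section 4.2.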
4.3.1 Feature Fusion

The features extracted from the two eyes were concatenated to form a larger feature vector, which was then used for training and testing. All 60 features from the left eye and the 60 features from the right eye were combined in this feature-level fusion scheme, illustrated in Figure 8. A feature selection method was incorporated to find the optimum feature subsets for this scheme.

Figure 8: Feature fusion using the left and right eyes

Figure 9 shows the ROC curves for different feature dimensions. The TPR of the system was found to be lower than when only one eye was used. Reducing the number of features improved the performance, but the best TPR (at 10% FPR) was about 80%, while for the single-eye system it was above 90%.

Figure 9: Feature fusion performance

4.3.2 Score Fusion

As an alternative to the feature fusion strategy, a score fusion scheme was implemented. In this scheme, features were extracted from the right and left eyes and independent classifiers were used to obtain classification scores for each eye. In this multi-classifier system two k-NN classifiers were used, one for each eye. The a posteriori probabilities from the two classifiers were combined using the product rule for liveness
detection [Ki98]. Figure 10 illustrates the scheme and Figure 11 shows the corresponding ROC curves. The scheme achieved a TPR of 99% at an FPR of 10%.

Figure 10: Score fusion scheme (per-eye feature extraction, selection and classification, followed by fusion)

Figure 11: Score fusion performance

In order to establish the tradeoff between feature dimensionality and liveness detection accuracy, experiments were performed to establish the performance of the system as the number of dimensions was steadily reduced. Figure 12 illustrates the total error rates for different feature dimensions selected using the forward feature selection method. It can be seen that the lowest total error rate was observed when the feature dimension was reduced to around 15. The total error rate started increasing when the feature set was further reduced. The system produced higher total error rates when the feature dimension was large. The reason for this might be that only a small amount of data was available for training given the size of the feature set.
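The product-rule fusion of per-eye k-NN posteriors can be sketched as below. This is our minimal stand-in for the paper's classifiers, not the authors' code: the k-NN posterior is a plain neighbour vote, the data is synthetic, and all names are assumptions. The product rule assumes the two classifiers do not simultaneously assign zero probability to opposite classes.

```python
import numpy as np

def knn_posterior(X_train, y_train, x, k=3):
    """A-posteriori class probabilities from a k-NN vote
    (labels: 0 = fake, 1 = genuine)."""
    d = np.linalg.norm(X_train - x, axis=1)
    votes = y_train[np.argsort(d)[:k]]
    p1 = np.mean(votes)              # fraction of genuine neighbours
    return np.array([1 - p1, p1])    # [P(fake), P(genuine)]

def product_rule(posteriors):
    """Combine per-classifier posteriors with the product rule [Ki98]."""
    p = np.prod(posteriors, axis=0)
    return p / p.sum()

# Synthetic per-eye feature sets: fakes around 0, genuine around 4.
rng = np.random.default_rng(0)
Xl = np.vstack([rng.normal(0, 1, (10, 4)), rng.normal(4, 1, (10, 4))])
Xr = np.vstack([rng.normal(0, 1, (10, 4)), rng.normal(4, 1, (10, 4))])
y = np.array([0] * 10 + [1] * 10)

# One probe attempt, seen through the left-eye and right-eye features.
probe_l, probe_r = np.full(4, 4.0), np.full(4, 4.0)
fused = product_rule([knn_posterior(Xl, y, probe_l),
                      knn_posterior(Xr, y, probe_r)])
decision = int(fused[1] > fused[0])  # 1 = accept as genuine
```

Because both per-eye posteriors must agree for the product to stay high, the fused score is more conservative than either classifier alone, which is consistent with the accuracy gain reported for score fusion.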
Figure 12: Variation in accuracy with feature dimension

Table 1 presents a comparative performance of the proposed methods at various levels of FPR. The feature fusion scheme gave the highest error rates in all cases. Using features from only one eye, the system TPR was up to 91%. This improved vastly when the score fusion approach was implemented. At 1% FPR, a TPR of 93% was achieved using score fusion. At 10% FPR, this rose to 99%.

Table 1: Performance comparison of the three schemes (TPR at increasing FPR levels, from 1% FPR on the left to 10% FPR on the right)

  Single Eye:       %, 86%, 90%, 91%
  Feature Fusion:  47%, 62%, 74%, 78%
  Score Fusion:    93%, 94%, 97%, 99%

Table 2 shows a comparative analysis of our experimental observations against the performances reported for similar spoof attacks in the literature. Although the results are from different databases, they suggest a possible comparative ranking of these methods and indicate that the proposed method compares favourably with these schemes.
Table 2: Comparative performance analysis

  Method                     FPR     FNR     TER
  Ali et al [ADH12]          13.3%   0.0%    6.7%
  Kollreider et al [Ko07]    1.5%    19.0%   10.3%
  Tan et al [PMR11]          9.3%    17.6%   13.5%
  Peixoto et al [PMR11]      6.3%    7.0%    7.0%
  Proposed method            1.0%    7.0%    4.0%

5 Conclusion

This paper presents a novel feature set for liveness detection in the presence of photo spoofing for face verification systems. A challenge-response approach is described which uses a visual stimulus to direct the gaze. The test scenario did not constrain the users to move either their head or eyes exclusively. However, the proposed gaze colocation features provided a robust measure for discriminating between live and fake attempts. Initial experiments indicate the potential viability of this approach; however, more data is required to establish the performance of the proposed approach with confidence. Although video attacks are excluded from the tests, it is expected that within the proposed challenge-response framework they would be difficult to mount due to the need for synchronisation with the challenge sequence. Future work will expand the experiments to include a larger database of users and will also explore the incorporation of additional features for improving the anti-spoofing capabilities of the system in response to more sophisticated attacks. In particular, the relative position of the eye centres within the face will be a subject of further study.

References

[ADH12] Ali, A.; Deravi, F.; Hoque, S.: Liveness detection using gaze collinearity. In Proc. of 3rd Intl Conference on Emerging Security Technologies, Lisbon, Portugal. Pages 62-65, Sept.

[BL97] Blum, A. L.; Langley, P.: Selection of relevant features and examples in machine learning. Artificial Intelligence, 97(1), December.

[DG11] Deravi, F.; Guness, S. P.: Gaze Trajectory as a Biometric Modality. In Proceedings of the BIOSIGNALS Conference, Rome, Italy. January.

[FW03] Frischholz, R.
W.; Werner, A.: Avoiding replay-attacks in a face recognition system using head-pose estimation. In Proc. of IEEE Intl Workshop on Analysis and Modeling of Faces and Gestures (AMFG 2003), Nice, France. 2003.
[JJY06] Jee, H. K.; Jung, S. U.; Yoo, J. H.: Liveness detection for embedded face recognition system. International Journal of Biological and Medical Sciences, 1(4).

[Ki12] Kim, G.; Eum, S.; Suhr, J. K.; Kim, D. I.; Park, K. R.; Kim, J.: Face liveness detection based on texture and frequency analyses. 5th IAPR International Conference on Biometrics (ICB), New Delhi, India. Pages 67-72.

[Ki98] Kittler, J.; Hatef, M.; Duin, R. P. W.; Matas, J.: On combining classifiers. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(3), March.

[Ko09] Kollreider, K.; Fronthaler, H.; Bigun, J.: Non-intrusive liveness detection by face images. Image and Vision Computing, 27(3).

[Ko08] Kollreider, K.; Fronthaler, H.; Bigun, J.: Verifying liveness by multiple experts in face biometrics. In Proc. of IEEE Computer Vision and Pattern Recognition Workshop on Biometrics, Anchorage, AK, USA. June.

[Ko05] Kollreider, K.; Fronthaler, H.; Bigun, J.: Evaluating liveness by face images and the structure tensor. In Proc. of 4th IEEE Workshop on Automatic Identification Advanced Technologies, Washington DC, USA. Pages 75-80, October.

[Ko07] Kollreider, K.; Fronthaler, H.; Faraj, M. I.; Bigun, J.: Real-Time Face Detection and Motion Analysis with Application in Liveness Assessment. IEEE Transactions on Information Forensics and Security, 2(3).

[Li04] Li, J.; Wang, Y.; Tan, T.; Jain, A. K.: Live face detection based on the analysis of Fourier spectra. In Proc. of Biometric Technology for Human Identification, Orlando, FL, USA (SPIE 5404). April.

[MN08] Milborrow, S.; Nicolls, F.: Locating facial features with an extended active shape model. In Proc. of the 10th European Conference on Computer Vision (ECCV), Marseille, France. October.

[Pi12] Pinto, A. D. S.; Pedrini, H.; Schwartz, W.; Rocha, A.: Video-Based Face Spoofing Detection through Visual Rhythm Analysis.
25th SIBGRAPI Conference on Graphics, Patterns and Images, Ouro Preto, Brazil.

[PMR11] Peixoto, B.; Michelassi, C.; Rocha, A.: Face liveness detection under bad illumination conditions. 18th IEEE Intl Conf on Image Processing (ICIP), Brussels, Belgium.

[Pa11] Pan, G.; Sun, L.; Wu, Z.; Wang, Y.: Monocular camera-based face liveness detection by combining eyeblink and scene context. Telecommunication Systems, 47(3-4), August.

[PWL07] Pan, G.; Lin, S.; Wu, Z.; Lao, S.: Eyeblink-based anti-spoofing in face recognition from a generic webcamera. In Proc. of 11th IEEE Intl Conf. on Computer Vision, Rio de Janeiro, Brazil. Pages 1-8.

[Tr11] Tronci, R.; Muntoni, M.; Fadda, G.; Pili, M.; Sirena, N.; Murgia, G.; Ristori, M.; Roli, F.: Fusion of multiple clues for photo-attack detection in face recognition systems. International Joint Conference on Biometrics (IJCB), pages 1-6, October.

[WDF09] Wang, L.; Ding, X.; Fang, C.: Face Live Detection Method Based on Physiological Motion Analysis. Tsinghua Science & Technology, 14(6), 2009.
Face Recognition: Some Challenges in Forensics Anil K. Jain, Brendan Klare, and Unsang Park Forensic Identification Apply A l science i tto analyze data for identification Traditionally: Latent FP, DNA,
Monitoring Head/Eye Motion for Driver Alertness with One Camera
Monitoring Head/Eye Motion for Driver Alertness with One Camera Paul Smith, Mubarak Shah, and N. da Vitoria Lobo Computer Science, University of Central Florida, Orlando, FL 32816 rps43158,shah,niels @cs.ucf.edu
Interactive person re-identification in TV series
Interactive person re-identification in TV series Mika Fischer Hazım Kemal Ekenel Rainer Stiefelhagen CV:HCI lab, Karlsruhe Institute of Technology Adenauerring 2, 76131 Karlsruhe, Germany E-mail: {mika.fischer,ekenel,rainer.stiefelhagen}@kit.edu
A Learning Based Method for Super-Resolution of Low Resolution Images
A Learning Based Method for Super-Resolution of Low Resolution Images Emre Ugur June 1, 2004 [email protected] Abstract The main objective of this project is the study of a learning based method
2.11 CMC curves showing the performance of sketch to digital face image matching. algorithms on the CUHK database... 40
List of Figures 1.1 Illustrating different stages in a face recognition system i.e. image acquisition, face detection, face normalization, feature extraction, and matching.. 10 1.2 Illustrating the concepts
Classification of Fingerprints. Sarat C. Dass Department of Statistics & Probability
Classification of Fingerprints Sarat C. Dass Department of Statistics & Probability Fingerprint Classification Fingerprint classification is a coarse level partitioning of a fingerprint database into smaller
Securing Electronic Medical Records using Biometric Authentication
Securing Electronic Medical Records using Biometric Authentication Stephen Krawczyk and Anil K. Jain Michigan State University, East Lansing MI 48823, USA, [email protected], [email protected] Abstract.
DESIGN OF DIGITAL SIGNATURE VERIFICATION ALGORITHM USING RELATIVE SLOPE METHOD
DESIGN OF DIGITAL SIGNATURE VERIFICATION ALGORITHM USING RELATIVE SLOPE METHOD P.N.Ganorkar 1, Kalyani Pendke 2 1 Mtech, 4 th Sem, Rajiv Gandhi College of Engineering and Research, R.T.M.N.U Nagpur (Maharashtra),
Face Recognition in Low-resolution Images by Using Local Zernike Moments
Proceedings of the International Conference on Machine Vision and Machine Learning Prague, Czech Republic, August14-15, 014 Paper No. 15 Face Recognition in Low-resolution Images by Using Local Zernie
Document Image Retrieval using Signatures as Queries
Document Image Retrieval using Signatures as Queries Sargur N. Srihari, Shravya Shetty, Siyuan Chen, Harish Srinivasan, Chen Huang CEDAR, University at Buffalo(SUNY) Amherst, New York 14228 Gady Agam and
The Visual Internet of Things System Based on Depth Camera
The Visual Internet of Things System Based on Depth Camera Xucong Zhang 1, Xiaoyun Wang and Yingmin Jia Abstract The Visual Internet of Things is an important part of information technology. It is proposed
Circle Object Recognition Based on Monocular Vision for Home Security Robot
Journal of Applied Science and Engineering, Vol. 16, No. 3, pp. 261 268 (2013) DOI: 10.6180/jase.2013.16.3.05 Circle Object Recognition Based on Monocular Vision for Home Security Robot Shih-An Li, Ching-Chang
Visual-based ID Verification by Signature Tracking
Visual-based ID Verification by Signature Tracking Mario E. Munich and Pietro Perona California Institute of Technology www.vision.caltech.edu/mariomu Outline Biometric ID Visual Signature Acquisition
False alarm in outdoor environments
Accepted 1.0 Savantic letter 1(6) False alarm in outdoor environments Accepted 1.0 Savantic letter 2(6) Table of contents Revision history 3 References 3 1 Introduction 4 2 Pre-processing 4 3 Detection,
Assessment. Presenter: Yupu Zhang, Guoliang Jin, Tuo Wang Computer Vision 2008 Fall
Automatic Photo Quality Assessment Presenter: Yupu Zhang, Guoliang Jin, Tuo Wang Computer Vision 2008 Fall Estimating i the photorealism of images: Distinguishing i i paintings from photographs h Florin
User Authentication using Combination of Behavioral Biometrics over the Touchpad acting like Touch screen of Mobile Device
2008 International Conference on Computer and Electrical Engineering User Authentication using Combination of Behavioral Biometrics over the Touchpad acting like Touch screen of Mobile Device Hataichanok
An Active Head Tracking System for Distance Education and Videoconferencing Applications
An Active Head Tracking System for Distance Education and Videoconferencing Applications Sami Huttunen and Janne Heikkilä Machine Vision Group Infotech Oulu and Department of Electrical and Information
Video-Based Eye Tracking
Video-Based Eye Tracking Our Experience with Advanced Stimuli Design for Eye Tracking Software A. RUFA, a G.L. MARIOTTINI, b D. PRATTICHIZZO, b D. ALESSANDRINI, b A. VICINO, b AND A. FEDERICO a a Department
Original Research Articles
Original Research Articles Researchers Mr.Ramchandra K. Gurav, Prof. Mahesh S. Kumbhar Department of Electronics & Telecommunication, Rajarambapu Institute of Technology, Sakharale, M.S., INDIA Email-
PRODUCT SHEET. [email protected] [email protected] www.biopac.com
EYE TRACKING SYSTEMS BIOPAC offers an array of monocular and binocular eye tracking systems that are easily integrated with stimulus presentations, VR environments and other media. Systems Monocular Part
CS231M Project Report - Automated Real-Time Face Tracking and Blending
CS231M Project Report - Automated Real-Time Face Tracking and Blending Steven Lee, [email protected] June 6, 2015 1 Introduction Summary statement: The goal of this project is to create an Android
Support Vector Machines for Dynamic Biometric Handwriting Classification
Support Vector Machines for Dynamic Biometric Handwriting Classification Tobias Scheidat, Marcus Leich, Mark Alexander, and Claus Vielhauer Abstract Biometric user authentication is a recent topic in the
Performance Comparison of Visual and Thermal Signatures for Face Recognition
Performance Comparison of Visual and Thermal Signatures for Face Recognition Besma Abidi The University of Tennessee The Biometric Consortium Conference 2003 September 22-24 OUTLINE Background Recognition
Illumination, Expression and Occlusion Invariant Pose-Adaptive Face Recognition System for Real- Time Applications
Illumination, Expression and Occlusion Invariant Pose-Adaptive Face Recognition System for Real- Time Applications Shireesha Chintalapati #1, M. V. Raghunadh *2 Department of E and CE NIT Warangal, Andhra
Chapter 6. The stacking ensemble approach
82 This chapter proposes the stacking ensemble approach for combining different data mining classifiers to get better performance. Other combination techniques like voting, bagging etc are also described
Multi-Factor Biometrics: An Overview
Multi-Factor Biometrics: An Overview Jones Sipho-J Matse 24 November 2014 1 Contents 1 Introduction 3 1.1 Characteristics of Biometrics........................ 3 2 Types of Multi-Factor Biometric Systems
High Resolution Fingerprint Matching Using Level 3 Features
High Resolution Fingerprint Matching Using Level 3 Features Anil K. Jain and Yi Chen Michigan State University Fingerprint Features Latent print examiners use Level 3 all the time We do not just count
PARTIAL FINGERPRINT REGISTRATION FOR FORENSICS USING MINUTIAE-GENERATED ORIENTATION FIELDS
PARTIAL FINGERPRINT REGISTRATION FOR FORENSICS USING MINUTIAE-GENERATED ORIENTATION FIELDS Ram P. Krish 1, Julian Fierrez 1, Daniel Ramos 1, Javier Ortega-Garcia 1, Josef Bigun 2 1 Biometric Recognition
CHAPTER 1 INTRODUCTION
21 CHAPTER 1 INTRODUCTION 1.1 PREAMBLE Wireless ad-hoc network is an autonomous system of wireless nodes connected by wireless links. Wireless ad-hoc network provides a communication over the shared wireless
A Various Biometric application for authentication and identification
A Various Biometric application for authentication and identification 1 Karuna Soni, 2 Umesh Kumar, 3 Priya Dosodia, Government Mahila Engineering College, Ajmer, India Abstract: In today s environment,
SIGNATURE VERIFICATION
SIGNATURE VERIFICATION Dr. H.B.Kekre, Dr. Dhirendra Mishra, Ms. Shilpa Buddhadev, Ms. Bhagyashree Mall, Mr. Gaurav Jangid, Ms. Nikita Lakhotia Computer engineering Department, MPSTME, NMIMS University
Efficient Background Subtraction and Shadow Removal Technique for Multiple Human object Tracking
ISSN: 2321-7782 (Online) Volume 1, Issue 7, December 2013 International Journal of Advance Research in Computer Science and Management Studies Research Paper Available online at: www.ijarcsms.com Efficient
Introduction to Pattern Recognition
Introduction to Pattern Recognition Selim Aksoy Department of Computer Engineering Bilkent University [email protected] CS 551, Spring 2009 CS 551, Spring 2009 c 2009, Selim Aksoy (Bilkent University)
Environmental Remote Sensing GEOG 2021
Environmental Remote Sensing GEOG 2021 Lecture 4 Image classification 2 Purpose categorising data data abstraction / simplification data interpretation mapping for land cover mapping use land cover class
Automated Monitoring System for Fall Detection in the Elderly
Automated Monitoring System for Fall Detection in the Elderly Shadi Khawandi University of Angers Angers, 49000, France [email protected] Bassam Daya Lebanese University Saida, 813, Lebanon Pierre
Multimedia Document Authentication using On-line Signatures as Watermarks
Multimedia Document Authentication using On-line Signatures as Watermarks Anoop M Namboodiri and Anil K Jain Department of Computer Science and Engineering Michigan State University East Lansing, MI 48824
Vision-based Real-time Driver Fatigue Detection System for Efficient Vehicle Control
Vision-based Real-time Driver Fatigue Detection System for Efficient Vehicle Control D.Jayanthi, M.Bommy Abstract In modern days, a large no of automobile accidents are caused due to driver fatigue. To
Tracking performance evaluation on PETS 2015 Challenge datasets
Tracking performance evaluation on PETS 2015 Challenge datasets Tahir Nawaz, Jonathan Boyle, Longzhen Li and James Ferryman Computational Vision Group, School of Systems Engineering University of Reading,
Securing Electronic Medical Records Using Biometric Authentication
Securing Electronic Medical Records Using Biometric Authentication Stephen Krawczyk and Anil K. Jain Michigan State University, East Lansing MI 48823, USA {krawcz10,jain}@cse.msu.edu Abstract. Ensuring
Static Environment Recognition Using Omni-camera from a Moving Vehicle
Static Environment Recognition Using Omni-camera from a Moving Vehicle Teruko Yata, Chuck Thorpe Frank Dellaert The Robotics Institute Carnegie Mellon University Pittsburgh, PA 15213 USA College of Computing
Detection and Recognition of Mixed Traffic for Driver Assistance System
Detection and Recognition of Mixed Traffic for Driver Assistance System Pradnya Meshram 1, Prof. S.S. Wankhede 2 1 Scholar, Department of Electronics Engineering, G.H.Raisoni College of Engineering, Digdoh
Regional fusion for high-resolution palmprint recognition using spectral minutiae representation
Published in IET Biometrics Received on 1st September 2013 Revised on 15th January 2014 Accepted on 10th February 2014 Special Issue: Integration of Biometrics and Forensics ISSN 2047-4938 Regional fusion
Automated Image Forgery Detection through Classification of JPEG Ghosts
Automated Image Forgery Detection through Classification of JPEG Ghosts Fabian Zach, Christian Riess and Elli Angelopoulou Pattern Recognition Lab University of Erlangen-Nuremberg {riess,elli}@i5.cs.fau.de
CSC574 - Computer and Network Security Module: Intrusion Detection
CSC574 - Computer and Network Security Module: Intrusion Detection Prof. William Enck Spring 2013 1 Intrusion An authorized action... that exploits a vulnerability... that causes a compromise... and thus
A General Framework for Tracking Objects in a Multi-Camera Environment
A General Framework for Tracking Objects in a Multi-Camera Environment Karlene Nguyen, Gavin Yeung, Soheil Ghiasi, Majid Sarrafzadeh {karlene, gavin, soheil, majid}@cs.ucla.edu Abstract We present a framework
RUN-LENGTH ENCODING FOR VOLUMETRIC TEXTURE
RUN-LENGTH ENCODING FOR VOLUMETRIC TEXTURE Dong-Hui Xu, Arati S. Kurani, Jacob D. Furst, Daniela S. Raicu Intelligent Multimedia Processing Laboratory, School of Computer Science, Telecommunications, and
Distributed Vision-Based Reasoning for Smart Home Care
Distributed Vision-Based Reasoning for Smart Home Care Arezou Keshavarz Stanford, CA 9435 [email protected] Ali Maleki Tabar Stanford, CA 9435 [email protected] Hamid Aghajan Stanford, CA 9435 [email protected]
Efficient Attendance Management: A Face Recognition Approach
Efficient Attendance Management: A Face Recognition Approach Badal J. Deshmukh, Sudhir M. Kharad Abstract Taking student attendance in a classroom has always been a tedious task faultfinders. It is completely
A CLASSIFIER FUSION-BASED APPROACH TO IMPROVE BIOLOGICAL THREAT DETECTION. Palaiseau cedex, France; 2 FFI, P.O. Box 25, N-2027 Kjeller, Norway.
A CLASSIFIER FUSION-BASED APPROACH TO IMPROVE BIOLOGICAL THREAT DETECTION Frédéric Pichon 1, Florence Aligne 1, Gilles Feugnet 1 and Janet Martha Blatny 2 1 Thales Research & Technology, Campus Polytechnique,
Emotion Recognition Using Blue Eyes Technology
Emotion Recognition Using Blue Eyes Technology Prof. Sudan Pawar Shubham Vibhute Ashish Patil Vikram More Gaurav Sane Abstract We cannot measure the world of science in terms of progress and fact of development.
Volume 2, Issue 9, September 2014 International Journal of Advance Research in Computer Science and Management Studies
Volume 2, Issue 9, September 2014 International Journal of Advance Research in Computer Science and Management Studies Research Article / Survey Paper / Case Study Available online at: www.ijarcsms.com
How To Filter Spam Image From A Picture By Color Or Color
Image Content-Based Email Spam Image Filtering Jianyi Wang and Kazuki Katagishi Abstract With the population of Internet around the world, email has become one of the main methods of communication among
Strategic White Paper
Strategic White Paper Increase Security Through Signature Verification Technology Takeaway Understand the benefits of automatic signature verification Learn about the types of automatic signature verification
Under Real Spoofing Attacks
Security Evaluation of Biometric Authentication Systems Under Real Spoofing Attacks Battista Biggio, Zahid Akhtar, Giorgio Fumera, Gian Luca Marcialis, and Fabio Roli Department of Electrical and Electronic
Low-resolution Character Recognition by Video-based Super-resolution
2009 10th International Conference on Document Analysis and Recognition Low-resolution Character Recognition by Video-based Super-resolution Ataru Ohkura 1, Daisuke Deguchi 1, Tomokazu Takahashi 2, Ichiro
Framework for Biometric Enabled Unified Core Banking
Proc. of Int. Conf. on Advances in Computer Science and Application Framework for Biometric Enabled Unified Core Banking Manohar M, R Dinesh and Prabhanjan S Research Candidate, Research Supervisor, Faculty
CS 534: Computer Vision 3D Model-based recognition
CS 534: Computer Vision 3D Model-based recognition Ahmed Elgammal Dept of Computer Science CS 534 3D Model-based Vision - 1 High Level Vision Object Recognition: What it means? Two main recognition tasks:!
Application-Specific Biometric Templates
Application-Specific Biometric s Michael Braithwaite, Ulf Cahn von Seelen, James Cambier, John Daugman, Randy Glass, Russ Moore, Ian Scott, Iridian Technologies Inc. Introduction Biometric technologies
Face Recognition For Remote Database Backup System
Face Recognition For Remote Database Backup System Aniza Mohamed Din, Faudziah Ahmad, Mohamad Farhan Mohamad Mohsin, Ku Ruhana Ku-Mahamud, Mustafa Mufawak Theab 2 Graduate Department of Computer Science,UUM
