Biometric Access Control System Using Automated Iris Recognition


Waqas Ahmad, Department of Electrical Engineering, Air University, E-9, Islamabad, Pakistan, engr.waqas_ahmad@hotmail.com
S. Saad Azhar Ali, Faculty of Computer Science & Engineering, Iqra University, Karachi, Pakistan, saadazhar@iqra.edu.pk

Abstract
A biometric access control system is developed using automated iris recognition. Since the human iris contains patterns that are well suited for identification, the use of low-cost equipment can help iris recognition become a standard in security systems. In this paper a database of images captured with a low-cost camera is assessed and the overall recognition performance is measured. Pattern matching is performed using statistical metrics. Real-time experimental results on a group of 50 persons are presented. The results show that a system using low-cost equipment can be constructed with a promising 95% accuracy rate.

Keywords: Biometric, Iris Recognition, Correlation.

I. INTRODUCTION
Due to higher security needs, biometrics-based systems are becoming among the most secure and reliable systems. The iris, owing to its unique biological properties, is well suited for identifying an individual, as it contains a large amount of discriminating information in its pattern. According to a survey [1] performed by the National Physical Laboratory, UK, iris recognition outperforms other biometric identification methods (e.g. fingerprint, voice and face recognition), proving the technology to be the best of these. One of the earliest developments in iris recognition was presented by Daugman [2, 3, 4]. The algorithms developed by Daugman in [2] are currently used by many commercial and public entities, including British Telecom, US Sandia Labs, UK National Physical Laboratory, NCR, Oki, IriScan, Iridian, Sensar, and Sarnoff [6]. Their results report an error rate of almost zero percent over a database of millions of iris samples [5]. A survey on neural approaches to iris recognition is presented in [6].

In most of these techniques, the whole or part of the iris image is used. This requires computations on the image, demanding high computational power from the processor and hence a high capital investment. In this research, instead of using the iris image for pattern matching, only the iris-pupil boundary is used as the iris signature. The signatures are processed and converted to one-dimensional arrays, and pattern matching is performed using simple statistical operations, resulting in:
- low computational power requirements;
- cost reduction in the imaging device compared to the dedicated and sophisticated imaging devices needed for 2-D pattern matching.

Different databases, e.g. the CASIA database, were also used for training purposes. However, for completeness of the project, a database of 50 users was also developed, and the algorithm was then tested on these users in real time.

II. PROBLEM STATEMENT
This project aims at devising an access control system for a secure perimeter which allows access only to those people who are identified by the officials, using biometric identification based on iris recognition. It is further required that the computations needed by the algorithm be minimized.

III. THE IRIS RECOGNITION ALGORITHM
In this research project, the main goal was to create a biometric access control system based on iris signature recognition.
The proposed system is completely automated: it grants or denies access based on the decision whether the person is an authorized user or an intruder. The algorithm consists of two main parts:
1. Signature extraction
2. Signature matching
The following subsections discuss the iris recognition algorithm.

A. Signature Extraction
The first step in the iris recognition system is to extract the signature from the image. It is worth mentioning that computations on a 2-D image are far more complicated and intricate than on a 1-D vector. Since we are looking for a technique that is simple, efficient and effective, we preferred to work with the iris-pupil boundary rather than the pattern on the iris or other similar features. Hence, the main feature to be extracted is the iris-pupil boundary, which from now on we will call the iris signature. It is known that every individual has a different iris signature; even identical twins have distinct iris signatures, and biometric scientists report that the iris signatures of a person's two eyes are themselves distinct [3, 6]. The iris signature extraction is performed using the method discussed in the following subsections.

Obtaining the Image
The first step in signature extraction is to obtain a picture rich enough to offer the desired features. The image acquisition device used in this research is an HSK-6700 Iriscope, which acquires a 640x480-pixel RGB photograph of the eye. An RGB sample is shown in Fig. 1.

Fig. 1 RGB input image of an eye

B. Greyscale Conversion
The acquired image is then converted to greyscale to remove the colour information. Since a colour picture carries more data, converting it to greyscale reduces the overall computation on the image. The greyscale image is shown in Fig. 2.

Fig. 2 Greyscale image of the eye

Detecting the Edges
The iris-pupil boundary can be roughly marked using edge detection, which returns a binary image of zeros and ones. The threshold of the edge detector is tuned to remove less contrasting edges. In this research, edge detection is performed using the Canny edge detection algorithm, which consists of three main steps [7].

1. Noise reduction through convolution with a Gaussian filter. In two dimensions, the Gaussian filter is the product of two one-dimensional Gaussian filters, one per direction [8][9]:

G(x, y) = \frac{1}{2\pi\sigma^2} \exp\left(-\frac{x^2 + y^2}{2\sigma^2}\right)

where x is the distance from the origin along the horizontal axis, y is the distance from the origin along the vertical axis, and \sigma is the standard deviation of the Gaussian distribution. Applied in two dimensions, this formula produces a surface whose contours are concentric circles with a Gaussian distribution around the centre point. Values from this distribution are used to build a convolution matrix which is applied to the original image. Each pixel's new value is set to a weighted average of that pixel's neighbourhood: the original pixel receives the heaviest weight (having the highest Gaussian value), and neighbouring pixels receive smaller weights as their distance from the original pixel increases.

2. Estimation of edge strength using the Sobel filter pair. Mathematically, the Sobel operator uses two 3x3 kernels which are convolved with the original image to approximate the derivatives, one for horizontal changes and one for vertical. If A is the source image, and G_x and G_y are two images which at each point contain the horizontal and vertical derivative approximations, the computations are as follows [10]:

G_x = \begin{bmatrix} -1 & 0 & +1 \\ -2 & 0 & +2 \\ -1 & 0 & +1 \end{bmatrix} * A, \qquad G_y = \begin{bmatrix} +1 & +2 & +1 \\ 0 & 0 & 0 \\ -1 & -2 & -1 \end{bmatrix} * A

The x-coordinate is defined here as increasing in the "right" direction and the y-coordinate as increasing in the "down" direction. At each point in the image, the resulting gradient approximations can be combined to give the gradient magnitude:

G = \sqrt{G_x^2 + G_y^2}

3. Edge tracking. Edge tracking is then performed along the edges in the image, starting at positions whose edge strength is higher than an upper threshold. Tracking continues, marking only local edge maxima as edges, until the edge strength falls below a lower threshold. The binary image after edge detection is shown in Fig. 3, with ones on the dark pixels and zeros on the bright pixels.

Fig. 3 Canny edge detected image of an eye
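As an illustration of the three steps above, the following MATLAB sketch builds a Gaussian smoothing kernel, applies the Sobel pair and keeps the strong edges. It is only a simplified sketch, not the paper's implementation: the file name, kernel size, sigma and threshold are assumptions. In practice the single Image Processing Toolbox call edge(greyEye, 'canny') performs the complete algorithm, including the hysteresis edge tracking of step 3.

% Simplified sketch of the Canny pre-processing chain described above.
% File name, kernel size, sigma and threshold are illustrative assumptions.
greyEye = rgb2gray(imread('eye_sample.jpg'));      % greyscale conversion (Fig. 2)

% Step 1: noise reduction with a 2-D Gaussian kernel G(x, y)
sigma  = 1.4;
[x, y] = meshgrid(-2:2, -2:2);
G      = exp(-(x.^2 + y.^2) ./ (2*sigma^2)) ./ (2*pi*sigma^2);
G      = G ./ sum(G(:));                           % normalise kernel weights
smoothed = conv2(double(greyEye), G, 'same');

% Step 2: Sobel derivative approximations and gradient magnitude
Kx = [-1 0 +1; -2 0 +2; -1 0 +1];
Ky = [+1 +2 +1; 0 0 0; -1 -2 -1];
Gx = conv2(smoothed, Kx, 'same');
Gy = conv2(smoothed, Ky, 'same');
Gmag = sqrt(Gx.^2 + Gy.^2);

% Step 3 (simplified): keep only strong edges. The full Canny algorithm,
% e.g. edge(greyEye, 'canny'), adds non-maximum suppression and hysteresis
% tracking between an upper and a lower threshold.
edgeMap = Gmag > 0.3 * max(Gmag(:));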
C. Finding the Centre of the Pupil
Since we need the iris-pupil boundary, the pupil has to be located in the image. The pupil, being a large dark spot in the middle of the image, holds most of the weight of the image. It was observed after many experiments that the centroid of the image always lies within the pupil. Once a point within the pupil is detected, the boundary can also be found. The centroid of the image is calculated by dividing the sum of the x-coordinates and the sum of the y-coordinates by the area of the image. It was checked individually for all the images in the database that the centroid lies within the pupil. In Fig. 4 the white dot inside the pupil represents the centroid of the image.

Fig. 4 Image with calculated centroid
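A minimal sketch of the centroid step, continuing the variables from the sketch above, is given below. Treating the dark pixels as the "weight" of the image is an assumption about how the weighting was done; the paper only states that the centroid is obtained from the coordinate sums divided by the image area.

% Sketch: intensity-weighted centroid, assuming dark (pupil) pixels carry
% the most weight. (cx, cy) gives a point that falls inside the pupil.
[rows, cols] = size(greyEye);
[X, Y]  = meshgrid(1:cols, 1:rows);
weights = 255 - double(greyEye);                   % dark pixels -> large weight
cx = sum(X(:) .* weights(:)) / sum(weights(:));    % weighted mean x-coordinate
cy = sum(Y(:) .* weights(:)) / sum(weights(:));    % weighted mean y-coordinate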

The centroid of the image can now be used to find the iris-pupil boundary. Points on the iris-pupil boundary are searched for in the counter-clockwise direction by finding the distance between the image centroid and the first pixel with a value of one (denoting the iris-pupil boundary). This procedure is repeated for a whole revolution. The distances can be seen in Fig. 5.

Fig. 5 Distances of the iris-pupil boundary from the centroid of the image

These distances help in finding the coordinates of the iris-pupil boundary using the Pythagoras theorem. Although all the points on the iris-pupil boundary can be found using the above procedure, a vector of these coordinates still cannot be used as a feature vector, because the pupil can be at any location in the image and the centroid of the image is never exactly at the centre of the pupil.

D. Cropping the Pupil
Using the distances measured from the centroid of the image, the highest and lowest boundary pixels are found, as are the left-most and right-most pixels. These extreme points give the four sides of the region to crop, and the pupil can now be extracted from the image, as seen in Fig. 6.

Fig. 6 Pupil extracted from the original image

This cropped image is now used to find the centroid of the pupil. Several experiments with images taken under different conditions verified that this centroid always lies at the same location in the pupil. A definite centroid now enables the algorithm to extract the correct feature vector, i.e. the iris signature. The centroid of the pupil is shown in Fig. 7.

Fig. 7 Image with calculated centre of the pupil

E. Extracting the Signature
We are now able to extract the real feature of interest, the iris signature: a vector of distances from the centroid of the pupil to the iris-pupil boundary. The distances are measured in the counter-clockwise direction, as shown in Fig. 8, and stored in a vector.

Fig. 8 Finding the iris signature

This vector can vary if the image is slightly tilted, or if the iris is contracted or expanded under different lighting conditions; to avoid such situations some basic operations are required. To make the signature rotation invariant (against camera tilt), the signature values are rearranged so that the signature starts with the maximum distance found and continues from there in the counter-clockwise direction. In this way the rotation problem is resolved.
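The signature extraction and the rotation alignment described above can be sketched as follows, continuing from the earlier variables (edgeMap, cx, cy). The number of angular samples and the maximum search radius are illustrative assumptions; the final line anticipates the scale normalization described next.

% Sketch: build the iris signature as distances from the pupil centroid to
% the first edge pixel along each direction, swept counter-clockwise.
nAngles = 900;
thetas  = linspace(0, 2*pi, nAngles);
sig     = zeros(1, nAngles);
[rows, cols] = size(edgeMap);
for k = 1:nAngles
    for r = 1:150                                  % assumed maximum radius (pixels)
        px = round(cx + r*cos(thetas(k)));
        py = round(cy - r*sin(thetas(k)));         % image rows grow downwards
        if px < 1 || px > cols || py < 1 || py > rows
            break;                                 % left the image without a hit
        end
        if edgeMap(py, px)                         % first boundary pixel found
            sig(k) = r;
            break;
        end
    end
end

% Rotation invariance: start the signature at its maximum distance
[~, iMax] = max(sig);
sig = circshift(sig, -(iMax - 1));

% Scale invariance (described next): normalise against contraction/expansion
sig = sig / max(sig);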

To handle the situation where the iris is contracted or expanded under different light intensities or focal points, the fact that the shape of the iris-pupil boundary remains the same is used: contraction or expansion varies all the distances from the centre to the boundary in the same proportion, so the distance vector can be made scale invariant by normalization. Hence, the variations related to different lighting conditions and focus are also resolved. A typical iris signature is shown in Fig. 9.

Fig. 9 A sample iris signature (normalized distance from the centroid plotted against the index of the boundary point)

F. Signature Matching
Signature matching is the phase where the signature patterns are classified as belonging to an authorized or an unauthorized user. Many approaches can be found in the literature. We selected four statistical properties for pattern classification, namely correlation, Euclidean distance, mean and standard deviation. The main decision is taken by comparing correlations: the correlation of the input signature with the signatures of each individual is computed, and it is high if the input signature belongs to that individual and low otherwise. The correlation match is then verified by the other three properties, which confirms whether the input signature is actually the signature of that particular individual.

Pattern Matching Using Correlation
In general statistical usage, correlation refers to the departure of two random variables from independence [8]. If we have a series of n measurements of two signals x_i and y_i, where i = 1, 2, ..., n, the sample correlation r_{xy} between the two signals can be computed as

r_{xy} = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}{(n - 1)\, s_x s_y}

where \bar{x} and \bar{y} are the sample means and s_x and s_y are the sample standard deviations.

A ±5% threshold value was set for each coordinate of the iris signatures present in the database. The input signature is compared with each signature in the database; if its coordinates lie within ±5% of the coordinates of a training signature, a simple calculation determines which signature it resembles most. After this step, other properties such as the mean, median and standard deviation of the two signatures are also compared for precision in the results.

Correlation is used as a matching metric in a number of pattern matching applications. For example, a correlation technique for pattern matching has been used in MRI, where an advantage is that it is less sensitive to spurious phase that may arise in MRI [12]. It is likewise used in face recognition along with 2-D Fourier transforms [13]. The technique has also been used in nuclear power plant monitoring systems, where simplified schemes to compare signal envelopes were developed using cross-correlation and eigenvalues [14], and in lip tracking, where correlation is used to track lip patterns [15]. Correlation has thus proven to be a useful tool for pattern matching, especially for static data, and we have therefore applied the correlation method to iris signature matching. The correlation results were further verified by other statistical properties, namely the Euclidean distance, mean and standard deviation of the two signatures.
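The matching stage can be sketched as below, continuing from the variables above. Here database is an assumed N-by-L matrix holding one enrolled (normalized) signature per row, and the secondary acceptance tolerances are illustrative values rather than figures from the paper; only the ±5% per-coordinate band and the use of correlation, Euclidean distance, mean and standard deviation follow the description above.

% Sketch of correlation-based matching against an assumed enrolment matrix
% 'database' (one normalised signature per row). Tolerances are illustrative.
bestCorr = -Inf;
bestUser = 0;
for u = 1:size(database, 1)
    R = corrcoef(sig, database(u, :));             % sample correlation r_xy
    if R(1, 2) > bestCorr
        bestCorr = R(1, 2);
        bestUser = u;
    end
end

ref        = database(bestUser, :);
withinBand = all(abs(sig - ref) <= 0.05 * ref);    % +/-5% per co-ordinate check

% Secondary verification with the other statistical properties
euclideanOK = norm(sig - ref)            < 1.0;    % assumed limit
meanOK      = abs(mean(sig) - mean(ref)) < 0.05;   % assumed limit
stdOK       = abs(std(sig)  - std(ref))  < 0.05;   % assumed limit

accessGranted = withinBand && euclideanOK && meanOK && stdOK;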
IV. REAL-TIME EXPERIMENTAL RESULTS
To verify the proposed algorithm, a case study on a group of 50 students of Air University was prepared. The real-time experiment was performed using a dedicated camera, the HSK-6700 Iriscope, a 1.3-megapixel imaging device. The purpose of using this camera was also to show that even a low-cost imaging device is able to support a reliable recognition system. The interfacing was performed on a Pentium IV with 1 GB of memory, and the programming platform was MATLAB. Image acquisition was performed in different situations with variable light intensity and ambience. A total of 550 samples were collected; 400 samples were used for training, keeping the other 150 samples blind to the algorithm. Access control was tested and verified using these 150 samples, of which 142 were classified correctly (142/150, i.e. about 95%), so the algorithm proved to be quite efficient with almost 95% accuracy. A summary of the experiment is given in TABLE I.

TABLE I
EXPERIMENTAL RESULTS

Observation                           Result
Total users                           50
Samples of each user                  11
Total samples                         550
Total training samples                400
Total samples tested                  150
Threshold value (per co-ordinate)     ±5%
Legal access                          142
Illegal access                        8
Percentage of correct access          95%

V. CONCLUSIONS
A biometric access control system has been developed using iris-recognition-based individual identification. The iris-pupil boundary is used as the feature for identification, which reduces the computational load on the processor by avoiding 2-D computations on the image. Pattern matching is performed using correlation and other statistical measures. Real-time experimental results on a database of 50 individuals are promising, with an overall classification accuracy of about 95% obtained using simple mathematical operations at very low cost.

REFERENCES
[1] T. Mansfield, G. Kelly, D. Chandler, and J. Kan, "Biometric product testing final report," Technical report, National Physical Laboratory, UK, 2001.
[2] J. Daugman, "High confidence visual recognition of persons by a test of statistical independence," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 15, pp. 1148-1161, 1993.
[3] J. Daugman, "Biometric signature security system," Harvard University, Cambridge, MA, Tech. Rep., 1990.
[4] J. Daugman, "High confidence personal identification by rapid video analysis of iris texture," Proc. IEEE International Carnahan Conference on Security Technology, pp. 1-11, 1992.
[5] J. Daugman, "High confidence recognition of persons by iris patterns," Proc. IEEE 35th International Carnahan Conference on Security Technology, pp. 254-263, 2001.
[6] M. Moinuddin, M. Deriche, and S. Saad Azhar Ali, "A new iris recognition method based on neural networks," WSEAS Transactions on Information Science and Applications, vol. 1, issue 6, Dec. 2004.
[7] B. Green, "Canny edge detection tutorial," http://www.pages.drexel.edu/~weg22/can_tut.html
[8] L. G. Shapiro and G. C. Stockman, Computer Vision, pp. 137, 150, Prentice Hall, 2001.
[9] M. S. Nixon and A. S. Aguado, Feature Extraction and Image Processing, Academic Press, 2008, p. 88.
[10] K. Engel, Real-Time Volume Graphics, 2006.
[11] J. Cohen, P. Cohen, S. G. West, and L. S. Aiken, Applied Multiple Regression/Correlation Analysis for the Behavioral Sciences, 3rd ed., Hillsdale, NJ: Lawrence Erlbaum Associates, 2003.
[12] M. S. Sussman and G. A. Wright, "The correlation coefficient technique for pattern matching," Department of Medical Biophysics, Sunnybrook and Women's College Health Science Centre, Toronto, Canada.
[13] B. V. K. V. Kumar, M. Savvides, and Chunyan Xie, "Correlation pattern recognition for face recognition," Proceedings of the IEEE, vol. 94, no. 11, pp. 1963-1976, Nov. 2006.
[14] Y. W. Cheng and P. H. Seong, "A signal pattern matching and verification method using interval means cross correlation and eigenvalues in the nuclear power plant monitoring systems," Annals of Nuclear Energy, vol. 29, issue 15, pp. 1795-1807, Oct. 2002.
[15] M. Barnard, E. J. Holden, and R. Owens, "Lip tracking using pattern matching snakes," The 5th Asian Conference on Computer Vision, 23-25 January 2002, Melbourne, Australia.