Currency Recognition in Mobile Application for Visually Challenged

Manikandan K 1, Sumithra T 2
1 Sri Krishna College of Engineering and Technology, Coimbatore, India, manikandank@skcet.ac.in
2 Sri Krishna College of Engineering and Technology, Coimbatore, India, sumithhra@gmail.com

Abstract

Mobile phones have become essential devices for everyone, and with advances in technology Android smartphones can be equipped with applications that aid visually challenged people. The visually challenged have a limited perception of the world around them and face great difficulty in identifying currency. They cannot identify damaged currency notes, and Indian currency notes differ in size by only ten millimetres between consecutive denominations, which makes it very hard for a blind person to determine the denomination correctly. Techniques already developed for currency recognition for the visually challenged include Gaussian mixture models, texture-based recognition, and neural networks. This paper proposes a technique for extracting the denomination of an Indian currency note: a localization algorithm locates the note in the image, and a color-matching technique identifies it. The recognition system includes a preprocessing stage consisting of edge detection, dimensionality reduction, and feature extraction; the note is then recognized with the help of SIFT feature extraction. More informative descriptors, together with neighborhood constraints, are used to produce an audible message from the descriptor feature values of the image, which is an advantage for blind users. The currency recognition system is found to work efficiently, with an accuracy of 93% on 165 images.

Keywords: Android phone, SIFT, Neural Network, Visually Impaired.

1. Introduction

With the recent enormous growth of technology, the mobile phone has become a basic item that everyone owns. When a smartphone is bought, the first question is usually: which applications (apps) should I get? Apps are the main reason smartphones are so valuable. A huge number of mobile applications are developed every day for various purposes, and they are both helpful and entertaining. They are developed for all types of users: children, adults, and even the elderly. For blind people, however, few such applications have been developed.

Currency notes are very important, and the world runs on them. Indian currency notes are printed by the Reserve Bank of India and carry many unique identification marks. Currently, the Indian currency system has the denominations Rs. 1, Rs. 2, Rs. 5, Rs. 10, Rs. 20, Rs. 50, Rs. 100, Rs. 500, and Rs. 1000, and each denomination is unique in one feature or another. The notes are provided with a few special identification marks meant for blind people, so that they may easily recognize the denomination correctly. Every currency note has its denomination engraved at the top right end in a form that is sensitive to touch, but this mark fades away after the note has been in circulation for some time. This again makes it difficult for visually impaired people to correctly determine the denomination of the currency note [1][2].
A currency recognition system for the visually impaired is developed in this paper. It uses a currency localization technique [2] to extract the currency note from a color image; the technique relies on feature-based currency note localization and is applied using the Image Processing Toolbox available in Matlab.

It has been observed that neural networks (NN) are commonly used to recognize currency notes, in combination with texture- or pattern-based recognition techniques.

The identification of objects in an image is called recognition. This process usually starts with image processing techniques such as noise removal, followed by low-level feature extraction to locate lines, regions, and possibly areas with certain textures. The difficult part is interpreting collections of these shapes as single objects, e.g. cars on a road, boxes on a conveyor belt, or cancerous cells on a microscope slide. One reason this is hard is that an object can appear very different when viewed from different angles or under different lighting. Another problem is deciding which features belong to which object and which belong to the background or to shadows. The human visual system performs these tasks unconsciously, but a computer requires skilful programming and a lot of processing power to approach human performance.

Presently, there are several methods for paper currency recognition [7]. One approach uses the properties of the HSV (Hue, Saturation, Value) color space, with emphasis on the visual perception of variations in the hue, saturation, and intensity values of image pixels [1]; in that technique, essential features are extracted from Indian banknotes and a neural network classifier performs currency verification and recognition. A simple statistical test based on a univariate Gaussian distribution has also been used as the verification step in currency recognition [8]. Another proposal uses the probability density formed by a multivariable Gaussian function, where the input data space is transformed to a lower-dimensional subspace; because of the structure of this model, the overall processing system acts as a hybrid neural network, and its performance has been demonstrated on real data with a recognition machine.

This work takes a computer-vision approach on mobile devices and develops an application that can run on smartphones. Mobile application development in general has been revolutionized by Android utilities. Here a robust banknote recognition application for Android is attempted; it can be used by visually impaired users to recognize banknotes and is very user friendly in nature.

2. Proposed Methodology

The proposed work is implemented on the Android OS, since the majority of smartphones are Android based. The proposed methodology follows the process described in Figure 1. The main aim of the system is to identify currencies by their printed design patterns. First the image is captured, and then an identifiable pattern is extracted from the captured image; an identifiable pattern is obtained for each image in the image extraction phase. These patterns are stored as binary values, from which an NN file is built. If a match is found, the system reports the recognized denomination and nationality of the given currency.
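The abstract and the conclusion describe the identification step as a color-matching technique based on a color histogram. As a rough, non-authoritative sketch of how such a step could look, the snippet below compares HSV color histograms with OpenCV; the bin counts, the correlation metric, and the reference_notes dictionary are illustrative assumptions rather than details taken from the paper.

import cv2

# Hedged sketch of a color-histogram matching step (bin sizes and the
# correlation metric are assumptions; reference_notes is a hypothetical
# database mapping a denomination label to a reference BGR image).
def hsv_histogram(bgr_image, bins=(30, 32)):
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, list(bins), [0, 180, 0, 256])
    return cv2.normalize(hist, hist)

def best_color_match(query_bgr, reference_notes):
    query_hist = hsv_histogram(query_bgr)
    scores = {label: cv2.compareHist(hsv_histogram(note), query_hist,
                                     cv2.HISTCMP_CORREL)
              for label, note in reference_notes.items()}
    return max(scores, key=scores.get)   # denomination with the best correlation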
The paper currency recognition system is implemented using the following steps.

2.1. Image Capturing

The currency notes are initially acquired by the mobile camera. While acquiring the image, the note must be under proper lighting conditions, and occlusion or shadowing must be avoided. The resolution of the image is fixed so that any basic camera can be used for the purpose. The currency notes should also be of good quality, i.e. not heavily stained or dusty.

2.2. Pre-Processing

After the image is captured, it is subjected to preprocessing, an essential step that must be performed before any recognition or segmentation [3]. If the currency image is corrupted by noise, the noise is first removed with a median filter. This is an efficient and simple technique that removes impulse (salt-and-pepper) noise from the image. The median filter belongs to the class of nonlinear filters, unlike the mean filter, but it follows the same moving-window principle: a 3x3, 5x5, or 7x7 kernel of pixels is scanned over the pixel matrix of the entire image, the median of the pixel values in the window is computed, and the center pixel of the window is replaced with the computed median. In other words, median filtering sorts all the pixel values in the surrounding neighborhood into numerical order and replaces the pixel under consideration with the middle value. The image is then enhanced by histogram equalization, which increases the contrast of the image and equalizes its histogram.
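A minimal sketch of this preprocessing stage with OpenCV is shown below; the 3x3 window is one of the kernel sizes mentioned above, and treating the captured note as a grayscale image is a simplifying assumption.

import cv2

def preprocess(gray):
    # Median filter with a 3x3 window: each pixel is replaced by the median of
    # its neighborhood, which suppresses impulse (salt-and-pepper) noise.
    denoised = cv2.medianBlur(gray, 3)
    # Histogram equalization spreads the intensity values over the full range
    # and increases the contrast of the note image.
    return cv2.equalizeHist(denoised)

# Usage (hypothetical file name):
# gray = cv2.imread("note.jpg", cv2.IMREAD_GRAYSCALE)
# enhanced = preprocess(gray)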

Figure 1: Block diagram of the proposed methodology (Image Capturing -> Pre-Processing -> Feature Extraction -> Currency Matching & Recognition).

2.3. Feature Extraction

The preprocessed images are then passed to feature extraction. Here, morphological functions are used for the recognition of the currency note. There are three primary morphological functions: erosion, dilation, and hit-or-miss. Morphological operations are usually performed on binary images whose pixel values are either 0 or 1. For simplicity, the image is converted into a binary image containing only the values 0 and 1, where a value of zero is shown as black and a value of 1 as white [10]. Thinning is a morphological operation that generates a minimally connected line equidistant from the boundaries, so some of the structure of the object is maintained. It is useful when the binary sense of the image is reversed, creating black objects on a white background; the function produces minimally connected lines that form equidistant boundaries between the objects. After binarization, the skeleton of the binary image is obtained using the morphological thinning operation [4]. After thinning, a pruning algorithm based on mathematical morphology removes unwanted components, which are often created by edge detection algorithms or digitization. The standard pruning algorithm removes all branches shorter than a given number of points: it starts at the end points and recursively removes a given number of pixels from each branch, then applies dilation to the new end points with a structuring element of 1s and intersects the result with the original image.
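A simplified sketch of the binarize, thin, and prune steps is given below using OpenCV and scikit-image; the Otsu threshold and the endpoint-deletion pruning loop are assumptions made for illustration, and the final dilation-and-intersection step described above is omitted.

import cv2
import numpy as np
from skimage.morphology import skeletonize

def thin_and_prune(gray, prune_iterations=10):
    # Binarize with Otsu's threshold: note pixels become 1, background 0.
    _, binary = cv2.threshold(gray, 0, 1, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Morphological thinning / skeletonization of the binary image.
    pruned = skeletonize(binary.astype(bool)).astype(np.uint8)
    # Simple pruning: repeatedly delete end points (skeleton pixels with exactly
    # one 8-connected neighbor), which removes branches shorter than
    # prune_iterations pixels.
    kernel = np.ones((3, 3), np.uint8)
    for _ in range(prune_iterations):
        neighbor_count = cv2.filter2D(pruned, -1, kernel) - pruned
        endpoints = (pruned == 1) & (neighbor_count == 1)
        pruned[endpoints] = 0
    return pruned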

2.4. Currency Matching & Recognition

The matching algorithm is an important technique for the recognition of images. Features are taken from a large number of images in an image database, where each currency note (Rs. 5 to Rs. 1000) has its own unique features, and these are matched against the current image. The features of a particular denomination in the database match those of the current image, and in this way images are matched and recognized. The stored image whose features give the minimum distance is taken as the recognized currency image, and from it the denomination is identified.

3. System

Our currency reader does not require extra or specialized hardware, since the algorithm relies on existing visual features for recognition. It can work on either side of a bill, and the recognition result can be spoken out via the phone speaker or communicated through vibration. (Figure: feature areas used for currency recognition.)

3.1. Scale-Invariant Detectors

To obtain invariant descriptors, we detect scale-invariant interest points (regions) and characterize each of them by a scale-, rotation-, and illumination-invariant descriptor. Two different scale-invariant detectors are used: Harris-Laplace [8] and DoG (Difference-of-Gaussian) [6]. Harris-Laplace detects multiscale Harris points and then selects characteristic points in scale space with the Laplace operator. DoG interest points are local scale-space maxima of the Difference-of-Gaussian.

3.1.1. Scale and Rotation Invariant Descriptors

The output of the two detectors is a set of scale-invariant regions of different sizes. These regions are first mapped to circular regions of a fixed radius. Point neighborhoods that are larger than the normalized region are smoothed before the size normalization. Rotation invariance is obtained by rotating in the direction of the average gradient orientation (within a small point neighborhood). Affine illumination changes of the pixel intensities are eliminated by normalizing the image region with the mean and the standard deviation of the intensities within the point neighborhood. These normalized regions are then described by the SIFT descriptor (Scale-Invariant Feature Transform).
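A minimal sketch of the keypoint detection and description step using OpenCV's SIFT (DoG keypoints plus SIFT descriptors) is shown below. The Harris-Laplace detector mentioned above is omitted, and the brute-force ratio-test matching is an illustrative assumption rather than the paper's exact matching procedure.

import cv2

def sift_features(gray):
    # DoG keypoint detection and SIFT description (scale- and rotation-invariant).
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(gray, None)
    return keypoints, descriptors

def count_good_matches(desc_query, desc_reference, ratio=0.75):
    # Lowe's ratio test on brute-force matches: more surviving matches
    # indicates a better match with a stored reference note.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = 0
    for pair in matcher.knnMatch(desc_query, desc_reference, k=2):
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good += 1
    return good

# Usage sketch (hypothetical names): pick the reference note whose descriptors
# yield the most good matches with the query image.
# _, d_query = sift_features(query_gray)
# best = max(reference_descs, key=lambda k: count_good_matches(d_query, reference_descs[k]))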
3.2. Background Subtraction

To detect and recognize the bill, irrelevant background is first removed. After binarization, black pixels touching the boundary of the image are regarded as background, since the bill always has a white border that separates it from the background. After background subtraction some noise might still remain, so the location of the bill is further refined by running a breadth-first search from the image centre to remove the remaining noise. The complexity of this step is linear in the number of pixels in the image, and after processing the exact position of the feature area is known. The area is then normalized to a rectangle with an aspect ratio of 4:1 for recognition.

3.3. Training and Recognition

The training set consists of 1000 captured samples of each side of the most common Indian currency notes; each note has four potential areas to recognize, two on the front and two on the back. A further 10000 samples of general scenes that are not currency were also collected. For each side of a given currency note, a strong classifier is trained from a set of weak classifiers. The weak classifiers must be computationally efficient because hundreds of them must be evaluated in less than 0.1 second. A weak classifier uses 32 random pairs of pixels in the image; a useful random pair of pixels has a relatively stable relationship, in that one pixel is consistently brighter than the other. An example of a random pair is shown in Figure 3, where pixel A is brighter than pixel B. The advantage of using pixel pairs is that their relative brightness is not affected by environmental lighting variations. Since the same relationship may also occur in general scenes, the pairs that appear more frequently in the inliers (currency images) and less frequently in the outliers (non-currency images) are selected. A weak classifier gives a positive result if more than 2/3 of its pairs are satisfied, and a negative result otherwise. The 10 selected weak classifiers form a strong classifier that identifies a bill whenever it appears in the image.
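A hedged sketch of such a pixel-pair weak classifier is shown below. The purely random pair generation and the unanimous-vote combination into a strong classifier are illustrative assumptions; the paper selects pairs by how often they hold on currency images versus general scenes, a step omitted here.

import numpy as np

class PixelPairWeakClassifier:
    # Weak classifier over 32 random pixel pairs of a normalized feature area.
    def __init__(self, height, width, n_pairs=32, seed=0):
        rng = np.random.default_rng(seed)
        self.a = rng.integers(0, height * width, n_pairs)
        self.b = rng.integers(0, height * width, n_pairs)

    def predict(self, area):
        # area: 2-D grayscale array normalized to the 4:1 feature rectangle.
        flat = np.asarray(area).reshape(-1)
        satisfied = np.count_nonzero(flat[self.a] > flat[self.b])
        # Positive if more than 2/3 of the pairs keep the expected ordering.
        return satisfied > (2 * len(self.a)) / 3

def strong_classify(weak_classifiers, area):
    # Assumed combination rule (the paper does not state it): the bill is
    # accepted only if every selected weak classifier votes positive.
    return all(c.predict(area) for c in weak_classifiers)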

4. Results

The designed algorithm was applied to acquired images of Indian currency notes to find the denomination of each note. The captured currency note images were recognized successfully by following the design steps of the proposed recognition system. The system offers several improvements over existing systems, confirmed by the observations in Table 1.1. It is unique in its application and allows the user to identify the currency note from images taken at different orientations and varied distances.

Table 1.1: Accuracy of each classification method

Classification method     Accuracy
Gaussian mixture model    75%
SIFT key cluster          93.83%

5. Conclusion

This paper presents an idea for currency recognition on mobile devices using image processing techniques. The currency note is captured, preprocessed, and features are extracted from it; those features help to efficiently match the captured note to the corresponding denomination among Rs. 1, 5, 10, 20, 50, 100, 500, and 1000. The stages are pre-processing, feature extraction, classification, and auditory presentation of the result. A localization algorithm is used in pre-processing to localize the note in the input image and to reduce noise. A color histogram is used to filter the intensity of the color pixels, and color-matching techniques identify the currency note. Feature extraction is done with the SIFT mechanism, which creates multiple features and distinct keypoints for the captured image, and a segmentation mechanism is used for the image retrieval process. Finally, a neural network technique is used to generate an audible message for the visually challenged user, providing an efficient way to identify the denomination of the currency. Implemented on the Android platform, this becomes a very useful application: an Android app that turns a regular smartphone into a powerful tool for the blind. In future, the work can be extended to determine whether a currency note is fake or genuine.

Acknowledgements

In future work, currency recognition systems for the currencies of different countries could be considered, together with the methods and algorithms used to develop those systems and an assessment of which approach is best.

References

1. Journal Articles
[1] Hanish Aggarwal, Padam Kumar, "Indian Currency Note Denomination Recognition in Color Images," International Journal on Advanced Computer Engineering and Communication Technology, Vol. 1, Issue 1, 2014, ISSN 2278-5140.
[2] Amol A. Shirsath, S. D. Bharkad, "Survey of Currency Recognition System Using Image Processing," International Journal of Computational Engineering Research, Vol. 03, Issue 7, pp. 36-40, 2013.
[3] Dipti Pawade, Pranchal Chaudhari, Harshada Sonkamble, "Comparative Study of Different Paper Currency and Coin Currency Recognition Method," International Journal of Computer Applications (0975-8887), Vol. 66, No. 23, March 2013.
[4] Trupti Pathrabe G., Swapnili Karmore, "A Novel Approach of Embedded System for Indian Paper Currency Recognition," International Journal of Computer Trends and Technology, May-June Issue, 2011, ISSN 2231-2803.

2. Book
[5] Carlos Miguel Correia da Costa, Multiview Banknote Recognition with Component and Shape Analysis.

3. Conference Proceedings
[6] Trupti Pathrabe G., "A Novel Approach of Embedded System for Indian Paper Currency Recognition," International Journal of Computer Trends and Technology, May-June Issue, 2011, ISSN 2231-2803.
[7] Faiz M. H., YingLi, "Robust and Effective Component-Based Banknote Recognition for the Blind," IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, pp. 1-10, 2010.
[8] Ji Qian, D. Zhang, "A Digit Recognition System for Paper Currency Identification Based on Virtual Instruments," IEEE Transactions, 1-4244-0555-6/06, 2006.
[9] Tanaka M. F., "Recognition of Paper Currencies by Hybrid Neural Network," IEEE Transactions on Neural Networks, 0-7803-4859-1/98, 1998.
[10] Soille P., "Geodesic Transformations in Mathematical Morphology: An Overview," Technical Report N-24/94/MM, School of Mines of Paris, 1994.