Navigation Aid And Label Reading With Voice Communication For Visually Impaired People



A. Manikandan (1), R. Madhuranthi (2)
(1) M.Kumarasamy College of Engineering, Karur, India, mani85a@gmail.com
(2) M.Kumarasamy College of Engineering, Karur, India, online.ranthi@gmail.com

Abstract: The main aim of this project is to design a system that helps blind persons recognize hand-held objects and products. We design and develop a system that identifies products and announces them by voice. To help blind people navigate safely and quickly, the system also performs obstacle detection using ultrasonic sensors together with USB camera-based visual navigation. The proposed system detects obstacles up to 300 cm away via sonar and gives audio feedback to inform the user of their location. The image and the text associated with an object are displayed on an Android mobile, and the same text is announced by voice from the phone. A prototype electronic travel aid has been developed and experimentally verified on blindfolded persons to analyse the device's performance in a laboratory set-up.

Keywords: PIC16F87, ultrasonic sensor, pit sensor, optical character recognition (OCR).

1. Introduction

Reading is obviously essential in today's society. Printed text is everywhere: reports, receipts, bank statements, restaurant menus, classroom handouts, product packages, instructions on medicine bottles, and so on. The ability of people who are blind or have significant visual impairments to read printed labels and product packages enhances independent living and fosters economic and social self-sufficiency. Today there are already a few systems with some promise for portable use, but they cannot handle product labeling.
For example, portable bar code readers designed to help blind people identify different products in an extensive product database can enable users who are blind to access information about these products through speech and braille. A major limitation, however, is that it is very hard for blind users to find the position of the bar code and to point the reader at it correctly. Reading-assistive devices such as pen scanners might be employed in these and similar situations; such systems integrate OCR software to scan and recognize text, and some have integrated voice output. A number of portable reading assistants have been designed specifically for the visually impaired. K-Reader Mobile, for instance, runs on a cell phone and allows the user to read mail, receipts, fliers, and many other documents. However, the document to be read must be nearly flat, placed on a clear, dark surface (i.e., a non-cluttered background), and contain mostly text. In addition, K-Reader Mobile accurately reads black print on a white background, but has problems recognizing colored text or text on a colored background, and it cannot read text with complex backgrounds.

Furthermore, these systems in most cases require a blind user to manually localize the areas of interest and text regions on the objects.

2 Software Specifications And Framework

2.1 Software Specifications

Language: C and C++
Platform: OpenCV (Linux library)

2.2 Framework

This paper presents a prototype system for assistive text reading. The system framework consists of three functional components: scene capture, data processing, and audio output. The scene capture component collects scenes containing objects of interest in the form of images or video; in our prototype it corresponds to a camera attached to a pair of sunglasses. The data processing component deploys our proposed algorithms, including 1) object-of-interest detection, to selectively extract the image of the object held by the blind user from the cluttered background or other neutral objects in the camera view, and 2) text localization, to obtain image regions containing text, followed by text recognition to transform the image-based text information into readable codes. We use a mini laptop as the processing device in our current prototype. The audio output component informs the blind user of the recognized text.

3 Image Capturing And Pre-Processing

Live video is captured by a webcam using the OpenCV libraries. The image format delivered by the webcam is RGB24. Frames are extracted from the video and passed to pre-processing. The captured video is displayed in a window of size 320x240, and the webcam captures 10 frames per second. First, we grab frames continuously from the camera and send them for processing. Once the object of interest has been extracted from the camera image and a cascade classifier has been applied to recognize characters on the object, the system is ready to run our automatic text extraction algorithm.

Fig 1: Block Diagram of Text Reading

To extract the hand-held object of interest from other objects in the camera view, we ask users to shake the hand-held object containing the text they wish to identify, and then employ a motion-based method to localize the object against the cluttered background.

4 Automatic Text Extraction

Text localization is then performed on the camera-captured image. A cascade AdaBoost classifier confirms the existence of text information.

4.1 Cascade classifier:

Working with a cascade classifier involves two major stages: training and detection. The detection stage is described in the objdetect module of the general OpenCV documentation, which also gives basic information about cascade classifiers.

4.2 Training data preparation:

Training requires a set of samples of two types: negative and positive. Negative samples correspond to non-object images; positive samples correspond to images containing the objects to be detected. The set of negative samples must be prepared manually, whereas the set of positive samples is created using the opencv_createsamples utility.

5 Character Detection

We use the OpenCV (Open Source Computer Vision) library to process the images so that features for each letter can be extracted.

5.1 Image capturing:

First, we grab frames continuously from the camera and send them for processing. Once the object of interest is extracted from the camera image using the cascade classifier, the subsequent processing is done in the following steps.

Fig.2. Original captured color image

5.2 Conversion:

Conversion to gray-scale can be done in OpenCV. We convert the image to gray-scale because thresholding can be applied to monochrome pictures only. In an 8-bit image each pixel is represented by one number from 0 to 255, where 0 is black and 255 is white. The simplest way to convert the image to black and white would be to select one value, say 128, and consider all pixels with a higher value to be white and the rest black.
The biggest problem with this approach is that brightness can vary from picture to picture, so with a fixed threshold some images might come out totally black while others come out entirely white. The OpenCV function minMaxLoc finds the minimum and maximum element values in an image and their positions, which can be used to pick a threshold suited to each frame.

6 Text Recognition And Audio Output

Text recognition is performed by off-the-shelf OCR before the informative words from the localized text regions are output. A text region labels the minimum rectangular area accommodating the characters inside it, so the border of the text region touches the edge boundary of the text characters. However, our experiments show that OCR performs better when text regions are first given proper margin areas and binarized to segment the text characters from the background. The recognized text codes are recorded in script files. We then employ the Microsoft Speech Software Development Kit to load these files and produce the audio output of the text information. Blind users can adjust the speech rate, volume, and tone according to their preferences.

7 Ultrasound-Based Distance Measurement

The sonar system is based on four ultrasonic transducers mounted together. The sensors are fixed at shoulder height to widen the field of sensing and aid side determination. Each sensor is equipped with two transducers: one emits an ultrasonic wave while the other measures the echo. From the delay between the emitted and received signals, a PIC16F87 microcontroller computes the distance to the nearest obstacle. This information is then transmitted as a PWM signal to the receiver.

Fig.3. Sonar Sensor

8 Hardware Description

8.1 Navigation Aid:

Fig 4: Block Diagram of Navigation Aid

The PIC16F877A is a CMOS flash-based 8-bit microcontroller featuring 200 ns instruction execution, 256 bytes of EEPROM data memory, self-programming, an ICD, 2 comparators, 8 channels of 10-bit analog-to-digital (A/D) conversion, 2 capture/compare/PWM functions, a synchronous serial port configurable as either a 3-wire SPI or a 2-wire I2C bus, a USART, and a Parallel Slave Port. Ultrasonic sensors generate high-frequency sound waves and evaluate the echo received back by the sensor; the time interval between sending the signal and receiving the echo determines the distance to an object. A water detector is a small electronic device designed to detect the presence of water. According to Hamelain, as soon as the sensor touches water, the water closes the circuit and the desired output is obtained. The water sensor is useful in a normally occupied area near any appliance that has the potential to leak water.

8.2 Label Reading:

Transmitter:

Receiver:

Fig 5: Block Diagram of Vision Based Assistive System

The block diagram consists of a microcontroller, a USB camera, a power supply, and Bluetooth. The camera is connected to an LPC2148 over USB and captures the hand-held object that appears in the camera view. A LAN9512 is interfaced with the system to drive the monitor. SRAM is used for temporary storage and flash memory for permanent storage. The LPC2148 performs the processing and provides audio output through an earphone. USB cameras are imaging cameras that use USB 2.0 or USB 3.0 technology to transfer image data; they are designed to interface easily with dedicated computer systems using the same USB technology found on most computers. Static random-access memory (SRAM) is a type of semiconductor memory that uses bistable latching circuitry to store each bit.

9 Conclusion And Future Work

The proposed system reads printed text on hand-held objects to assist blind persons. To solve the common aiming problem for blind users, a motion-based method is proposed to detect the object of interest while the blind user simply shakes the object for a couple of seconds. This method can effectively distinguish the object of interest from the background or other objects in the camera view. Future development will add an obstacle detection process that uses a Haar cascade classifier to recognize obstacles. Such a system would act as a basic platform for a new generation of cost-effective devices for the visually impaired. As far as localization is concerned, with the help of GPS the system would be able to provide accurate details of the blind user's location in case they get lost, which would be a real boon for the blind. The developed prototype gives good results in detecting obstacles placed at a distance in front of the user.

ISSN: 2393-994X, KARPAGAM JOURNAL OF ENGINEERING RESEARCH (KJER)