OBSTACLES DETECTION FOR VISUALLY IMPAIRED PEOPLE USING SMART PHONES

KARTHICK M., M.E. (PG Scholar), Department of CSE, SKR Engineering College, Poonamallee, Chennai, Tamil Nadu
Dr. R. SUGUNA, Ph.D., Dean, Department of CSE, SKR Engineering College, Poonamallee, Chennai, Tamil Nadu

ABSTRACT - Computer vision and human-powered services can give blind people access to visual information in the world around them, but their efficacy depends on high-quality photo inputs. Blind people often have difficulty identifying the obstacles in their way. Camera-equipped smart phones are now ubiquitous, so images are easy to acquire; yet while humans easily recognize the objects and other semantic content in images, doing so automatically has proved much harder. The main aim of this paper is to develop a navigation aid for blind and visually impaired people. We propose a mobile application that offers visually impaired people a new way to identify obstacles. To support real-time scanning of objects, a key frame extraction algorithm is developed that automatically retrieves high-quality frames from the continuous live camera video of a mobile phone. Captured frames are segmented and compared with stored templates to recognize obstacles, and each recognized obstacle is conveyed to the user through a Text-To-Speech (TTS) converter.

Index Terms: obstacles, TTS, ubiquitous, recognition.

I. INTRODUCTION

Vision, a God-given sense, is an important aspect of human life. With their eyes, humans are able to see the beauty of nature and the things that happen in day-to-day life. But some unfortunate people lack the ability to experience these things. They face many problems in their daily chores, and the problems get worse when they are in an unfamiliar place.
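As a rough illustration of the key frame extraction idea described in the abstract, the sketch below scores each frame by gradient energy (a simple sharpness proxy) and keeps the best frame per window. The frame format, the scoring function, and the per_second/fps parameters are assumptions for illustration only, not the paper's actual algorithm.

```python
# Hypothetical key frame extraction sketch: pick the sharpest frame from
# each fixed-size window of a video stream. Frames are 2D lists of
# grayscale values; "gradient energy" is an assumed quality score.

def sharpness(frame):
    """Sum of squared horizontal/vertical pixel differences of a 2D frame."""
    score = 0
    rows, cols = len(frame), len(frame[0])
    for r in range(rows):
        for c in range(cols):
            if c + 1 < cols:
                score += (frame[r][c + 1] - frame[r][c]) ** 2
            if r + 1 < rows:
                score += (frame[r + 1][c] - frame[r][c]) ** 2
    return score

def key_frames(frames, per_second=3, fps=30):
    """Keep the sharpest frame from each group of fps // per_second frames."""
    window = max(1, fps // per_second)
    return [max(frames[i:i + window], key=sharpness)
            for i in range(0, len(frames), window)]
```

With 30 fps input and per_second=3, each window spans 10 frames, which would yield roughly the 3 key frames per second mentioned later in the paper.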
Hence, to minimize their difficulties with maximum ease, a concept has been devised that guides visually impaired people to their destination(s). This paper presents an obstacle detection system for visually impaired people. The user can be alerted to nearby obstacles while traveling through their environment. The proposed system detects the nearest obstacle via any Android smart phone: the live video camera captures the nearest obstacles, and to support real-time detection of objects, a key frame extraction algorithm automatically retrieves high-quality frames from the phone's continuous camera video stream. The key frame extraction captures approximately 3 frames per second. The captured frames are matched against images stored on the memory card to recognize the obstacle seen by the phone, and the name of the matched image is converted into voice using the TTS (Text To Speech) converter technique. The goal of the paper is to use the Android platform to develop a mobile phone guide: essentially, the user recognizes obstacles by taking video with the smart phone camera.

II. PROBLEM DESCRIPTION

Visually impaired people cannot navigate easily in their day-to-day life. They need the help of a cane, other electronic mobility devices, or guide dogs to be led in an appropriate manner. They therefore need a self-assistive device that guides them and frees them from depending on others for navigation. The first and most significant problem is detecting the obstacles in front of them and avoiding them. To address this bottleneck, the system needs to classify objects, recognize obstacles so they can be identified, track them through the key frame extraction algorithm and text-to-speech technique, and provide an alternative path.

III. EXISTING SYSTEM

In the existing system the obstacles are classified based on the blind-language characters they portray: all A's form one class, all E's another, and so on. Round-shaped features are grouped together as class A, line-shaped features as class E, etc. Each class is thus represented by a feature vector consisting of values for a number of features. When a new image is presented to the system, its feature vector is computed and compared to the predefined feature vector of each class. The unknown image is given the label that corresponds to the closest predefined feature vector.

A. ISSUES IN EXISTING SYSTEM

Planned enhancements include reducing the third-party help used to adapt the system to the user, investigating a self-adaptive wearable obstacle detection system, using other available functions in the calibration tool, and investigating a complete path-navigation and obstacle-aware system.

B. PROPOSED SYSTEM

With over 39 million visually impaired people worldwide, the need for an assistive device that allows a blind user to navigate freely is crucial. This paper presents an off-line navigation device that conveys voice output (i.e., text converted into speech) to provide navigation instructions to the user.
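The nearest-feature-vector classification used by the existing system described above can be sketched as follows. The class vectors are hypothetical placeholders, and Euclidean distance is assumed as the similarity measure; the paper does not specify either.

```python
# Sketch of nearest-feature-vector classification: each class has one
# predefined feature vector, and a new image's vector is given the label
# of the closest one. The vectors below are illustrative, not from the paper.
import math

CLASS_VECTORS = {
    "A": [0.9, 0.1, 0.4],   # e.g. round-shaped features
    "E": [0.1, 0.9, 0.5],   # e.g. line-shaped features
}

def classify(feature_vector):
    """Return the label whose predefined vector is nearest (Euclidean)."""
    return min(CLASS_VECTORS,
               key=lambda label: math.dist(feature_vector, CLASS_VECTORS[label]))
```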
The device relays directional information to the user through special Audio Bone headphones, which use bone conduction technology. Text-To-Speech is used to convert obstacle names into voice commands delivered to the visually impaired user. The application makes use of the TalkBack feature of Android smart phones, which helps visually impaired people operate a smart phone, and the Text-To-Speech (TTS) API of Android is used to give the user voice commands specifying the object name and the user's current location.

IV. SYSTEM ARCHITECTURE DESIGN

The system architecture diagram describes how real-time images of obstacles are captured through the live video camera of the smart phone, and how the video is extracted into a number of frames using the key frame extraction algorithm. Each extracted frame is converted into pixel format and compared with the images present in the database. Once a match for the outline of the object is found, the audio associated with the identified image is fed into the handheld device, which reads it out so that the obstacle can be identified by the visually challenged user.

V. MOTION DETECTION

The system comprises the following modules:

A. Frame Extraction
B. Tracking
C. Background Color Removal
D. Feature Extraction
E. Voice Conversion

A. FRAME EXTRACTION

This module reads the input video and extracts a number of frames from the live video on the smart phone using a key frame extraction algorithm.

A.1.1 OBSTACLES DETECTION

The obstacle detection module exploits information from two images at once, obtaining dense disparity maps between the left and right stereo views at each frame; every pixel with a valid disparity value can then be located with respect to the camera coordinate frame in order to detect obstacles in any scenario. The obstacle detection method relies on the creation of a virtual cumulative grid, which represents the area of interest ahead of the visually impaired user. In other words, the grid is the area where potential obstacles have to be detected to keep the user from running into them. The goal is to accumulate in the bins of the grid all pixels that may belong to obstacles, according to their position, depth and height.

Motion detection works on the basis of frame differencing, i.e. comparing how pixels (usually blobs) change location from frame to frame. There are two common approaches. The first looks for a bulk change in the image:

  calculate the average of a selected color in frame 1
  wait X seconds
  calculate the average of a selected color in frame 2
  if abs(avg_frame_1 - avg_frame_2) > threshold then motion is detected

The second looks at the motion of the middle mass:

  calculate the middle mass in frame 1
  wait X seconds
  calculate the middle mass in frame 2
  if abs(mm_frame_1 - mm_frame_2) > threshold then motion is detected

The problem with these methods is that neither detects very slowly moving objects, as determined by the sensitivity of the threshold; but if the threshold is too sensitive, it will also detect things like shadows and changes in sunlight.
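The two frame-differencing tests above can be sketched in Python as follows. Frames are assumed to be flat lists of grayscale pixel values laid out row-major; the width and threshold values are assumed tuning parameters, not values from the paper.

```python
# Sketch of the two motion detection methods: bulk intensity change and
# movement of the intensity-weighted centroid ("middle mass").

def average(frame):
    """Mean intensity of a frame (the bulk-change statistic)."""
    return sum(frame) / len(frame)

def middle_mass(frame, width):
    """Intensity-weighted centroid (x, y) of a row-major grayscale frame."""
    total = sum(frame) or 1
    x = sum((i % width) * v for i, v in enumerate(frame)) / total
    y = sum((i // width) * v for i, v in enumerate(frame)) / total
    return x, y

def bulk_motion(frame1, frame2, threshold=10):
    """Method 1: motion if the mean intensity shifts beyond the threshold."""
    return abs(average(frame1) - average(frame2)) > threshold

def mass_motion(frame1, frame2, width, threshold=0.5):
    """Method 2: motion if the middle mass moves beyond the threshold."""
    x1, y1 = middle_mass(frame1, width)
    x2, y2 = middle_mass(frame2, width)
    return abs(x1 - x2) + abs(y1 - y2) > threshold
```

As the text notes, both tests trade sensitivity for robustness: a low threshold flags shadows and lighting changes, while a high one misses slow objects.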
B. TRAINING OF MOVABLE OBJECTS

In the first step, training of movable objects, the object of interest is trained according to its shape and its color. The corresponding object description and name are stored in a database together with shape images, color histogram information, and basic color terms. For segmentation of the outline, an alpha blend between one of the color images and the disparity image of the live video camera is used: the color image is weighted by a factor of 0.9 and the disparity image is added with a factor of 0.1. The resulting image is used to segment the region of interest with a standard region-growing algorithm; during segmentation the disparity information ensures that only image parts at the correct depth are chosen. For the selected region an image mask is generated and stored in the database together with the pertinent color histogram and the basic color term of the region. The color histogram consists of 16 hue ranges of the HSV color model, and the basic color term is determined with an algorithm that takes the region's color environment into account, reflecting human color perception, which is influenced by surrounding colors. To allow for increased independence of the viewing direction during the search, several image masks containing shape information can be stored for one object in the database.

C. BACKGROUND COLOR REMOVAL

This module constructs the color histogram of each frame and removes the colors that appear most frequently in the scene. The removed pixels are not considered in subsequent detection processes. Performing background color removal not only reduces the image information to be processed but also speeds up the detection process. Background subtraction is the related method of removing pixels that do not move, focusing only on objects that do:

  capture two frames
  compare the pixel colors in each frame
  if the colors are the same, replace the pixel with white
  else, keep the new pixel

D. FEATURE EXTRACTION

This module extracts features from the image frame: edge detection, corner detection, color transformation and color classification together determine the obstacle and the image recognition result. A feature is a specific identified point in the image that a tracking algorithm can lock onto and follow through multiple frames. Features are often selected because they are bright or dark spots, edges or corners, depending on the particular tracking algorithm; template matching is also quite common. What matters is that each feature represents a specific point on the surface of a real object. As a feature is tracked it becomes a series of two-dimensional coordinates representing the feature's position across a series of frames; this series is referred to as a track. Once tracks have been created they can be used immediately for 2D motion tracking, or be used to calculate 3D information.

Figure: data input and intermediate output.

E. VOICE CONVERSION

The name of the obstacle matched against the stored template is converted into voice and conveyed to the visually impaired user.
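The background color removal step can be sketched as follows, assuming a frame given as a list of (r, g, b) tuples. Treating only the single most common color as background (top_n=1) is an illustrative assumption; a real frame would likely need several colors removed.

```python
# Sketch of background color removal: histogram the frame's colors and
# blank out the most frequent ones, assuming they belong to the background.
from collections import Counter

WHITE = (255, 255, 255)

def remove_background_colors(pixels, top_n=1):
    """Replace the top_n most common colors with white; keep the rest."""
    common = {color for color, _ in Counter(pixels).most_common(top_n)}
    return [WHITE if p in common else p for p in pixels]
```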
VI. CONCLUSION

A smart phone based obstacle detection system for visually impaired people has been implemented in this paper. It provides a practical solution that lets blind people detect the obstacles near them. In this first phase of the work, obstacles are captured using the built-in camera of the smart phone in real time; the captured video undergoes preprocessing and key frame extraction, and the recognized obstacle is conveyed to the visually impaired user through the TTS (Text To Speech) converter, so that blind people can identify everyday objects around them. In future work, compass (magnetic) direction, GPS (Global Positioning System) guidance and location tracking will be added.