Indoor Navigation for Blind using Android Smartphone




CS676A Introduction to Computer Vision & Image Processing
Prof. Vinay P. Namboodiri, Department of Computer Science & Engineering, IIT Kanpur
Done by Group 27: Pankaj Gupta (11483), 3rd year, Dept. of Mathematics & Statistics; Satyam Kumar Shivam (11658), 3rd year, Dept. of Mathematics & Statistics

Abstract

This project has also been submitted as an entry for a competition organised by Goldman Sachs ABLE Solutions. The competition is currently ongoing and the problem statement reads: "Participating teams are to develop a working prototype of a mobile app (on the Android platform) that will help an individual having visual impairment navigate an indoor environment (e.g. an office building)."

In this report, we propose a navigation system for smartphones capable of guiding users accurately to their destinations in an unfamiliar indoor environment, without requiring any expensive alterations to the infrastructure or any prior knowledge of the site's layout. We have used the OpenCV libraries for Android together with the inbuilt accelerometer and magnetometer in our application. We tested our prototype on a Google Nexus 4 in the CSE building, IIT Kanpur, and it gave fairly good results. All the programming has been done in Android. The phone has to be held by the user at approximately 30 degrees from the horizontal so that it can detect obstacles in the path as well as recognise the position markers (which we have proposed in our solution).

Motivation

Navigation in an unknown indoor environment is nearly impossible for a visually impaired person; even in a familiar indoor environment it is very difficult. The aim is to make offices and workplaces more open and inclusive to people with disabilities, and to help attract and retain differently-abled talent in the workforce. We talked to a blind person before starting our app development process. The following points came up as a summary of the talk:
1 - The app should be able to detect doors, stairs, walls and any other obstacles.
2 - It should recognise which locations are allocated to what purpose.
3 - It must work for different buildings.

4 - Obstacles should be detected at an appropriate distance.
5 - Sound feedback should be clear.

Salient Features of our Approach

High accuracy: The application should consistently guide users to their destinations and also detect obstacles from a reasonable distance.

Low cost: No external sensors were used; only the phone's back camera. Even though external sensors would increase accuracy, they would also add discomfort for the user and carry their own maintenance cost. Hence, we decided not to use any external sensors.

No pre-loaded indoor maps: Requiring a preloaded map of the entire building would diminish the flexibility of the solution.

Intuitive user interface and feedback: The app should be easy to use and should give proper feedback so that the user may act accordingly. We use sound feedback in our app.

There are three major parts of our app:
1 - Direction indicators/markers: These markers encode information about directions and destinations and are easy to detect.
2 - Obstacle detection: Computer vision libraries are used to detect obstacles from an appropriate distance.
3 - Dead reckoning: If the user loses their way, the inertial navigation system lets them return to the previous marker.

Previous Work

In the past few years, a great amount of interest has been shown in developing indoor navigation systems for the common user. Researchers have explored indoor positioning systems that use Wi-Fi signal intensities to determine the subject's position [1][2]. Other wireless technologies, such as Bluetooth [1], ultra-wideband (UWB) [4] and radio-frequency identification (RFID) [3], have also been proposed. Although some of these techniques have achieved fairly accurate results, they are either highly dependent on fixed-position beacons or have not been ported successfully to a ubiquitous hand-held device.

Many have approached the problem of indoor localisation by means of inertial sensors. A foot-mounted unit has recently been developed to track the movement of a pedestrian [5]. Some have also exploited the smartphone accelerometer and gyroscope to build a reliable indoor positioning system; a system developed by Microsoft [6] relies upon a pre-loaded indoor floor map and does not yet support any navigation.

An altogether different approach applies vision. In robotics, simultaneous localisation and mapping (SLAM) is used by robots to navigate in unknown environments [7]. In 2011, a thesis considered the SLAM problem using inertial sensors and a monocular camera [8]; it also looked at calibrating an optical see-through head-mounted display with augmented reality to overlay visual information. Localisation using markers has also been proposed. One such technique uses QR codes to determine the current location of the user [9]. There is also a smartphone solution which scans square fiducial markers in real time to establish the user's position and orientation for indoor positioning [10].

Direction Indicators/Markers

Markers should stand out from the surrounding environment. We had several options: barcode scanning, location fingerprinting, triangulation and QR codes. We decided upon QR codes.

Circle with an embedded letter: We first tried a very simple marker: a circle with a letter (e.g. L or R) written inside it. Due to the orientation of the phone, its detection with the Hough circle transform was very problematic.

Barcode scanning: Barcodes could be placed in various locations across the building, encoded with their respective grid coordinates. The smartphone camera could then be used to take a picture of a barcode and decode the embedded data. However, a barcode has many parallel lines and may therefore interfere with properly detecting stairs and some obstacles, and its detection is orientation dependent. Hence we rejected it.

Location fingerprinting: Location fingerprinting compares the received signal strength (RSS) from each wireless access point in the area with a set of pre-recorded values taken from several locations. The location with the closest match is used to calculate the position of the mobile unit. With a great deal of calibration, this solution can yield very accurate results. However, the calibration process is time-consuming and has to be repeated at every new site.

QR codes: They are cheap to produce, they have the advantage over barcodes of being orientation independent, and their detection is less dependent on lighting conditions. Hence we decided to use them. Integrating this technique with technologies like RFID might give better results, but we have not tried that.

Obstacle Detection

We have explored several methods for obstacle detection:

1 - Canny edge detection and Hough lines: After applying Canny edge detection on the image received from the camera, we apply the probabilistic Hough line transform. These functions are available in the imgproc module of the OpenCV library:

    void Canny(InputArray image, OutputArray edges, double threshold1, double threshold2)
    void HoughLinesP(InputArray image, OutputArray lines, double rho, double theta, int threshold, double minLineLength, double maxLineGap)

For detecting an obstacle:

    if (lines.cols() > lineThreshold) {
        playSound("wait");
    }

For detecting a QR code, we divide the screen into 9 parts (bins). Where a QR code appears, the line intensity becomes very high. Hence we compare the intensity in each bin against a threshold:

    for (int i = 0; i < 3; i++) {
        for (int j = 0; j < 3; j++) {
            if (bins[i][j] > 110)
                playSound("Possibility of QR-code");
        }
    }

2 - Homography estimation: Two image frames are compared to match corresponding feature points under a homography. This helps to track an obstacle, which is required in order to give the user directions to avoid it:

    Mat findHomography(InputArray srcPoints, InputArray dstPoints, int method=0, double ransacReprojThreshold=3, OutputArray mask=noArray())
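The bin-thresholding step above can be sketched in plain Java, independent of OpenCV. This is a minimal sketch under stated assumptions: it takes line segments in the {x1, y1, x2, y2} row format that HoughLinesP produces, bins their midpoints into a 3x3 grid, and flags any bin whose count exceeds the threshold of 110 from our snippet. The class and method names here are illustrative, not the actual app code.

```java
// Sketch: bin Hough-line midpoints into a 3x3 grid over the frame and
// flag dense bins as possible QR-code regions. The threshold of 110
// line hits per bin follows the report; names are illustrative.
class QrBinDetector {
    static final int DENSITY_THRESHOLD = 110;

    // lines: each row is {x1, y1, x2, y2} as produced by HoughLinesP.
    static int[][] binLineMidpoints(int[][] lines, int width, int height) {
        int[][] bins = new int[3][3];
        for (int[] l : lines) {
            int mx = (l[0] + l[2]) / 2;
            int my = (l[1] + l[3]) / 2;
            int bi = Math.min(2, mx * 3 / width);   // column index 0..2
            int bj = Math.min(2, my * 3 / height);  // row index 0..2
            bins[bi][bj]++;
        }
        return bins;
    }

    static boolean possibleQrCode(int[][] bins) {
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                if (bins[i][j] > DENSITY_THRESHOLD)
                    return true;
        return false;
    }
}
```

In the app, the frame would instead come from the camera callback and the detection would trigger the "Possibility of QR-code" sound.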

3 - Optical flow: Optical flow information is extracted from the image sequence for use in the navigation algorithm. Optical flow provides very important information about the environment, such as the disposition of obstacles, the agent's heading, the time to collision, and depth. The strategy consists in balancing the amount of left-side and right-side flow; this allows the agent to navigate without colliding with obstacles. We tested two optical flow algorithms, both present in the video module of the OpenCV library for Android:

1 - calcOpticalFlowSF: calculates optical flow using the SimpleFlow algorithm.

    void calcOpticalFlowSF(Mat& from, Mat& to, Mat& flow, int layers, int averaging_block_size, int max_flow)

2 - calcOpticalFlowPyrLK: calculates optical flow for a sparse feature set using the iterative Lucas-Kanade method with pyramids.

    void calcOpticalFlowPyrLK(InputArray prevImg, InputArray nextImg, InputArray prevPts, InputOutputArray nextPts, OutputArray status, OutputArray err)

Dead Reckoning

1 - Direction and orientation estimation: The accelerometer and geomagnetic field sensors are used to provide an accurate representation of the device's orientation. Orientation angles for each axis are calculated from the rotation matrix. The Android sensor API functions getRotationMatrixFromVector() and getOrientation() are used to calculate these angles.
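The left/right flow-balancing strategy can be sketched as follows, assuming the sparse flow from calcOpticalFlowPyrLK has already been turned into rows of {x, y, dx, dy} (point position plus displacement). The FlowBalancer name and the returned strings are hypothetical, not the app's actual interface; the idea is only that nearer obstacles produce larger flow, so the user is steered toward the side with less flow.

```java
// Sketch of balancing left-side vs right-side optical flow: sum the
// flow-vector magnitudes on each half of the frame and steer toward
// the side with the smaller sum (larger flow = nearer obstacle).
class FlowBalancer {
    // flow: rows of {x, y, dx, dy}; width: frame width in pixels.
    static String steer(double[][] flow, int width) {
        double left = 0, right = 0;
        for (double[] f : flow) {
            double mag = Math.hypot(f[2], f[3]); // flow magnitude at (x, y)
            if (f[0] < width / 2.0) left += mag; else right += mag;
        }
        return left > right ? "turn right" : "turn left";
    }
}
```

In the app the resulting direction would be spoken to the user as sound feedback.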

2 - Pedometer: A pedometer counts the number of footsteps the visually impaired person has taken. The pedometer algorithm is based on detecting peaks and troughs in the time-domain acceleration waveform: when a peak is detected, the next trough is searched for; similarly, when a trough is detected, the next peak is searched for. The difference between a peak and a trough is found by taking the first derivative of the acceleration data.

(Figure: characteristics of an acceleration waveform used for step detection.)

The X and Y coordinates are updated as follows:

    X_new = X_old + cos(orientation) * strideLength
    Y_new = Y_old + sin(orientation) * strideLength
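The position update above can be sketched as a small Java class: each detected step advances the (x, y) estimate along the current heading by a fixed stride length. The DeadReckoner name is illustrative, and a fixed stride length is an assumption (the report does not state how it is chosen).

```java
// Sketch of the dead-reckoning position update: every detected footstep
// moves the estimate by strideLength metres along the current heading.
class DeadReckoner {
    double x, y;                 // position estimate in metres
    final double strideLength;   // assumed-constant stride length

    DeadReckoner(double strideLength) { this.strideLength = strideLength; }

    // orientation: heading in radians, e.g. the azimuth from getOrientation().
    void step(double orientation) {
        x += Math.cos(orientation) * strideLength;
        y += Math.sin(orientation) * strideLength;
    }
}
```

In the app, step() would be called once per step detected by the peak/trough pedometer, with the heading taken from the orientation sensors, which is how the user can be guided back to the previous marker.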

Summary

Our approach has three main parts: markers, obstacle avoidance and dead reckoning. We used QR codes as markers. For obstacle avoidance and reading QR codes we used the OpenCV computer vision library. For dead reckoning we used the phone's sensors. The app gave fairly good results when tested.

Future Work

1 - MonoSLAM: The core of this algorithm[7] is the online creation of a sparse but persistent map of natural landmarks within a probabilistic framework. The paper presents an active approach to mapping and measurement, the use of a general motion model for smooth camera movement, and solutions for monocular feature initialisation and feature orientation estimation. We could use this to make our app more robust.

2 - Background subtraction: Subtracting the background would eliminate floor patterns and hence give more accuracy in obstacle detection.

3 - Staircase detection: Staircase detection would enable navigation between floors, which would add generality to our app. The paper[11] by Stephen Se and Michael Brady looks very promising for this.

4 - Improving pedometer accuracy: Our current pedometer implementation performs well only if the user walks with some extra force. We have to improve it to handle normal steps.

Bibliography

[1] Kamol Kaemarungsi and Prashant Krishnamurthy. Modeling of indoor positioning systems based on location fingerprinting, 2004.
[2] P. Bahl and V.N. Padmanabhan. RADAR: an in-building RF-based user

location and tracking system. Volume 2, pages 775-784, March 2000.
[3] Ahmed Wasif Reza and Tan Kim Geok. Investigation of indoor location sensing via RFID reader network utilizing grid covering algorithm. Wireless Personal Communications, volume 49, pages 67-80, June 2008.
[4] Sinan Gezici, Zhi Tian, G.B. Giannakis, et al. Localization via ultra-wideband radios: a look at positioning aspects for future sensor networks. IEEE Signal Processing Magazine, 22:70-84, July 2005.
[5] Oliver Woodman and Robert Harle. Pedestrian localisation for indoor environments. In Proceedings of the Tenth International Conference on Ubiquitous Computing (UbiComp '08), pages 362-368, 2008.
[6] Fan Li, Chunshui Zhao, Guanzhong Ding, et al. A reliable and accurate indoor localization method using phone inertial sensors. In Proceedings of the 2012 ACM Conference on Ubiquitous Computing, pages 421-430, September 2012.
[7] Andrew J. Davison. Real-time simultaneous localisation and mapping with a single camera. In Proceedings of the Ninth IEEE International Conference on Computer Vision, volume 2, pages 1403-1410, October 2003.
[8] Martin A. Skoglund. Visual Inertial Navigation and Calibration. Linköping University Electronic Press, 2011.
[9] Sung Hyun Jang. A QR code-based indoor navigation system using augmented reality. In RoboCup 2004, March 2012.
[10] Alessandro Mulloni, Daniel Wagner, Dieter Schmalstieg, and Istvan Barakonyi. Indoor positioning and navigation with camera phones. IEEE Pervasive Computing, volume 8, pages 22-31, April 2009.
[11] Stephen Se (Department of Computer Science, University of British Columbia) and Michael Brady (Department of Engineering Science, University of Oxford). Vision-based detection of staircases.