Pedestrian Traffic Light Recognition For The Visually Impaired

Department of Electrical Engineering, Ben-Gurion University
Amir Shalev; Ben Lauterbach

ABSTRACT
Our project's main focus is to give a better and more efficient solution to a well-known problem. We chose to tackle a problem that concerns the visually impaired community, road crossing, and more precisely: pedestrian traffic light recognition. In this work we employ computer vision to solve this issue. Our approach is based mainly on color segmentation. The system is able to detect and track the green and red traffic lights used by pedestrians. We observed a high rate of correctly recognized traffic lights and very few false alarms.

Keywords: traffic light recognition; computer vision; color detection; classification.

1. INTRODUCTION AND PROBLEM DESCRIPTION
According to the statistics of the World Health Organization, approximately 40 million people worldwide suffer from visual impairment. Blind people usually rely on a basic aid such as the white cane. It allows them to detect obstacles at close range, but it cannot help them determine the state of a traffic light. The visually impaired have difficulty seeing traffic signals, but they can hear: some crossings employ audio beepers that indicate the color of the traffic light by means of different frequencies. However, such equipment is not available at every road crossing, and it also produces considerable noise, especially in populated areas. With the fast growth of smartphone use around the world and the advance of wearable electronic products such as eyewear cameras (e.g., Google Glass), a computer vision solution is inevitable. Our goal was to design a real-time recognition system based on a video stream input. The system recognizes the traffic light and informs the user of its state (red/green light).
The system should work in both day and night time and should have high reliability. Such a system would increase the mobility of blind people, freeing them from the special facilities provided by different cities or districts, and would increase their sense of security when crossing roads.

In recent years, several different approaches have been taken to resolve the problem [1-4]: color detection, shape-based detection, spot-light detection, etc. Each picture (video frame) is a mathematical combination of three basic colors, also known as the RGB color model (where red, green, and blue light are added together in various ways to reproduce a broad array of colors). Since we wish to find red/green colors, this led us to look for a color-model-based solution. However, we found that using the RGB model to detect specific colors is not an easy task in a real-time complex scene such as a road crossing. Moreover, even when we do succeed in detecting all the red and green colors in a frame, how can we be sure which one is the red/green light of a pedestrian traffic light? Many road-crossing junctions include several traffic lights, not necessarily facing the same direction. Accordingly, another problem is to determine which traffic light is relevant for the pedestrian at a certain point in time.

In general, our approach was based on color segmentation in the HSI color model. Similar to the HSV color model (Fig.), the HSI model is also a cylindrical representation of the points of an RGB color model, where H stands for Hue, S for Saturation and I for Intensity.

Fig.: HSV color model

In order to successfully recognize the red and green lights of the traffic light, after extracting all the suspicious areas we perform a series of tests to distinguish between false and true results.

2. OUR SOLUTION
The input of the system is a continuous traffic video sequence, which is first separated into individual image frames for further processing.

Step 1: The image frames, composed in the RGB tricolor model, are converted into the HSI color model (Fig.):

I = (R + G + B) / 3                                                    (1)
S = 1 - [3 / (R + G + B)] · min(R, G, B)                               (2)
H = cos⁻¹ { [(R - G) + (R - B)] / 2 / √[(R - G)² + (R - B)(G - B)] }   (3)

First we tried to extract the red and green colors using the values defined in [4]: a red light is defined by the condition of a hue value of 0 plus or minus 5 and an intensity value of 0.8; a green light is defined by a hue value of 0 plus or minus 5 and an intensity value of 0.8 as well. Unfortunately, we discovered that those values are not very accurate. We decided to examine a large group of still photos (frames) to derive the best thresholds for extracting only the red and green lights. At this stage we collected the results in an Excel sheet, which made it easier to calculate the best thresholds (see the appendices for further information).

Step 2: After computing the most efficient threshold, we used it to produce a binary image containing only the red and green areas (Fig.). For a cleaner result we also applied morphological opening and closing functions.

Fig.(a): Binary red areas. Fig.(b): Binary green areas.

Step 3: One can see that even after applying the most efficient threshold and minimizing the false results with the opening and closing functions, we still cannot get a clear, satisfying result. To overcome the situation where every frame can yield multiple candidate locations for the traffic light, we chose to process each frame through a number of functions (filters). Each function gives a grade to each suspicious pixel cluster, and at the end of the procedure the cluster with the highest score is chosen as the traffic light coordinate.
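The RGB-to-HSI conversion of Step 1 (equations (1)-(3)) can be sketched in Python; our implementation was in MATLAB, so this stand-alone function, its epsilon guard against division by zero, and its hue-in-degrees convention are assumptions of the sketch rather than the paper's code:

```python
import math

def rgb_to_hsi(r, g, b, eps=1e-9):
    """Convert one RGB pixel (each component in [0, 1]) to HSI.

    Follows equations (1)-(3): intensity is the channel mean,
    saturation measures the distance from gray, and hue is the
    inverse-cosine angle, reflected to 360 - theta when B > G.
    """
    i = (r + g + b) / 3.0                              # (1) intensity
    s = 0.0 if i < eps else 1.0 - min(r, g, b) / i     # (2) saturation
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b)) + eps
    theta = math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
    h = 360.0 - theta if b > g else theta              # (3) hue, degrees
    return h, s, i
```

A pure red pixel, `rgb_to_hsi(1.0, 0.0, 0.0)`, yields a hue near 0° at full saturation, while pure green lands near 120°; this separation is what makes thresholding on H and I a plausible way to isolate the two lamp colors.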
First we count and locate all the suspicious candidates (using the bwlabel function). Next we calculate their center-of-mass coordinates. (From this point on, each center-of-mass (CM) coordinate represents one candidate.)

Fig.(a): HSI color space. Fig.(b): HSI color space, traffic light zoom.

Traffic light pole detection: considering the fact that each pedestrian traffic light is mounted on a pole, we decided to detect all the relevant poles in each frame. By doing so, we give a high score to those CM coordinates found closest to a pole. We perform three major steps for pole detection (Fig.4): edge detection using Canny, the Hough transform for straight-line detection, and finally the ACE algorithm (applying an inner product to each line's angle and radius) to extract the orthogonal lines, meaning the poles (most of the straight lines in a frame are horizontal).

Fig.4(a): Edge detection using Canny. Fig.4(b): Hough transform for straight-line detection. Fig.4(c): ACE algorithm, extracting poles.

As we already mentioned, road-crossing scenes are very complex and sometimes include several traffic lights. After detecting all the poles in a frame and scoring each CM coordinate accordingly, we must next deal with the issue of determining which traffic light is relevant for the pedestrian at a certain point in time. Because we use just one camera and cannot analyze a 3D scene, we had to come up with a simple but efficient solution. After examining a significant number of road crossings, we came to the conclusion that the designated traffic light (its center of mass) is always higher in the frame than the rest of the traffic lights. A simple function therefore scores the CM coordinates from the highest to the lowest.

In order to improve and accelerate the process even more, we wrote a function that takes into account the last location at which the traffic light was found. Because people (and especially the visually impaired) tend to cross an intersection at a slow walking pace, the location of the traffic light (relative to the observer) also changes slowly from frame to frame. This function gives a higher score to suspicious CM coordinates that are relatively close to the old CM found in the previous frame.

By now we got consistently satisfying results, except in extreme cases such as green poles (Fig.5). Accordingly, the final frame-processing step treats the traffic light as a source of light: we used the imtophat and imadjust functions for spot-light detection [1].

Fig.5: False identification

Fig.6 summarizes the traffic light recognition algorithm used in this paper.

Fig.6: Algorithm flow chart
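The grading functions described above (closeness to a detected pole, height in the frame, and closeness to the previous frame's CM) can be sketched as follows; the weights and the inverse-distance scoring form are illustrative assumptions, not the paper's actual MATLAB functions:

```python
import math

def score_candidates(candidates, poles_x, prev_cm=None,
                     w_pole=1.0, w_height=1.0, w_track=1.0):
    """Pick the best traffic-light candidate by combining three grades.

    candidates : list of (x, y) center-of-mass (CM) coordinates; image
                 y grows downward, so a smaller y means "higher".
    poles_x    : x-coordinates of detected vertical pole lines.
    prev_cm    : CM chosen in the previous frame, if any.
    """
    def score(cm):
        x, y = cm
        s = 0.0
        if poles_x:                     # grade 1: near a detected pole
            s += w_pole / (1.0 + min(abs(x - px) for px in poles_x))
        s += w_height / (1.0 + y)       # grade 2: higher in the frame
        if prev_cm is not None:         # grade 3: temporal consistency
            dx, dy = x - prev_cm[0], y - prev_cm[1]
            s += w_track / (1.0 + math.hypot(dx, dy))
        return s
    return max(candidates, key=score)
```

With two candidates, a pole next to the first, and a previous CM nearby, the first candidate wins; drop the previous CM and move the pole toward the second candidate, and the second takes over.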

3. EXPERIMENTS AND RESULTS
We examined our algorithm on still photos and on video, in both day and night time. As one can see, we were able to get good and reliable results. However, we struggled to get the same quality of results just before night, at twilight (Fig.7); most false results were related to red-light detection. Fig.6(a-d) shows the results we accomplished using the above algorithm.

Fig.6(a): Green traffic light, day time. Fig.6(b): Red traffic light, day time. Fig.6(c): Green traffic light, night time. Fig.6(d): Red traffic light, night time.

Fig.7(a): False result, wrong traffic light. Fig.7(b): False result, wrong red recognition.

Another weakness is our program's running time: we did not manage to get a clean, smooth video.

4. CONCLUSIONS AND FUTURE IDEAS
In this work we designed and implemented a real-time traffic light recognition system based on computer vision. To achieve our goal we used a variety of techniques. The most significant thing we learned from this project is that accomplishing a specific task (red/green color recognition) requires a whole set of tools; there is a good chance we would never have reached such results had we tried to solve the recognition problem by color segmentation methods alone. As we have shown in this paper, traffic light recognition is possible by means of computer vision, and implementing such a system can, without a doubt, benefit the lives of the visually impaired in the near future.

Future ideas: to make this algorithm effective and truly useful for everyday use, a better running time is required (less time-consuming functions), and a solution for twilight is needed. The project can also be expanded to other road-crossing themes, such as zebra crossing detection. We dealt with this issue a little bit: similarly to the pole detection, we assumed that finding the zebra-crossing pattern would help us recognize the traffic light more effectively (perhaps using a vanishing point?). Unfortunately, our function did not work on a video stream and only gave some satisfactory results on still photos (Fig.8).

Fig.8: Zebra crossing detection

5. REFERENCES
1. R. de Charette and F. Nashashibi, "Real time visual traffic lights recognition based on Spot Light Detection and adaptive traffic lights templates," 2009 IEEE Intelligent Vehicles Symposium, Xi'an, China: IEEE, 2009, pp. 358-363.
2. Jianwei Gong, Yanhua Jiang, Guangming Xiong, Chaohua Guan, Gang Tao and Huiyan Chen, "The Recognition and Tracking of Traffic Lights Based on Color Segmentation and CAMSHIFT for Intelligent Vehicles," 2010 IEEE Intelligent Vehicles Symposium, University of California, San Diego, CA, USA, June 21-24, 2010.
3. B. Ando, "Electronic Sensory Systems for the Visually Impaired," IEEE Instrumentation & Measurement Magazine, 2003.
4. Shun-Hsien Taso, "Pedestrian Traffic Light Recognition For The Visually Impaired," Biomedical Engineering: Applications, Basis and Communications, Vol. 19, No. 5 (2007), pp. 89-94.

6. APPENDICES
In the Google Drive shared folder of the course we put some Excel sheets that illustrate the work we did on finding the best green/red thresholds. (Project No 9)