Vision-Based Blind Spot Detection Using Optical Flow




Vision-Based Blind Spot Detection Using Optical Flow

M.A. Sotelo 1, J. Barriga 1, D. Fernández 1, I. Parra 1, J.E. Naranjo 2, M. Marrón 1, S. Alvarez 1, and M. Gavilán 1

1 Department of Electronics, University of Alcalá, Alcalá de Henares, Madrid, Spain
{sotelo, llorca, marta, parra}@depeca.uah.es
2 Industrial Automation Institute, CSIC, Arganda del Rey, Madrid, Spain
jnaranjo@iai.csic.es

Abstract. This paper describes a vision-based system for blind spot detection in intelligent vehicle applications. A camera is mounted in the lateral mirror of a car with the intention of visually detecting cars that cannot be perceived by the vehicle driver because they are located in the so-called blind spot. The detection of cars in the blind spot is carried out using computer vision techniques, based on optical flow and data clustering, as described in the following sections.

Keywords: Computer Vision, Optical Flow, Blind Spot, Intelligent Vehicles.

1 Introduction

A vision-based blind spot detection system has been developed for intelligent vehicle applications. The main sensor for detecting cars is vision. Images are analyzed using optical flow techniques [1] in order to detect pixels that move in the same direction as the ego-vehicle. Pixels exhibiting such movement are grouped using the clustering techniques described in [2]. The resulting clusters are considered potential vehicles overtaking the ego-vehicle. A double-stage detection mechanism has been devised to provide robust vehicle detection. In the first stage, a pre-detector computes the mass center of the resulting clusters and determines whether a detected cluster is a potential vehicle according to the size of the detected pixel group. In the second stage, another detector looks for the appearance of vehicle frontal parts. Any object resembling the frontal part of a vehicle is considered a potential vehicle, provided the mass-center pre-detector has triggered the pre-detection signal.
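The first filtering step described above, keeping only pixels whose flow points in the ego-vehicle's direction of motion, could be sketched as follows. This is a minimal illustration, not the paper's implementation; the angular threshold and the sample flow values are hypothetical.

```python
import math

def directional_filter(flow_vectors, ego_direction, max_angle_deg=30.0):
    """Keep flow vectors roughly aligned with the ego-vehicle's motion.

    flow_vectors: list of (x, y, dx, dy) tuples -- an image point and its flow.
    ego_direction: (dx, dy) direction of ego motion projected in the image.
    Returns the subset whose flow lies within max_angle_deg of ego_direction.
    """
    ex, ey = ego_direction
    enorm = math.hypot(ex, ey)
    cos_thresh = math.cos(math.radians(max_angle_deg))
    kept = []
    for x, y, dx, dy in flow_vectors:
        fnorm = math.hypot(dx, dy)
        if fnorm == 0:
            continue  # static point: no flow, ignore
        # Cosine of the angle between the flow vector and the ego direction
        cos_angle = (dx * ex + dy * ey) / (fnorm * enorm)
        if cos_angle >= cos_thresh:
            kept.append((x, y, dx, dy))
    return kept

# Example: ego-vehicle motion appears horizontal in the image plane
flows = [(10, 20, 5.0, 0.5),   # nearly horizontal -> kept
         (30, 40, -4.0, 0.0),  # opposite direction -> rejected
         (50, 60, 0.0, 3.0)]   # perpendicular -> rejected
print(directional_filter(flows, (1.0, 0.0)))  # -> [(10, 20, 5.0, 0.5)]
```

Points surviving this filter would then be passed on to the clustering stages that follow.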
Thus, a sufficiently big object in the image plane, producing optical flow in the same direction as the ego-vehicle, and exhibiting a part similar to the frontal part of a car, is validated as a car entering the blind spot. The position of the vehicle in the image plane is computed and tracked using a Kalman filter. Tracking continues until the vehicle disappears from the scene, and an alarm signal is triggered indicating to the driver that a vehicle has entered the blind spot zone.

2 System Description

The algorithm is described in figure 1 in the form of a flow diagram. As can be observed, there are several computation steps: optical flow computing at image level, pixel-wise clustering, analysis of clusters, and final vehicle detection. As previously stated, the system relies on the computation of optical flow, using vision as the main sensor providing information about the road scene. In order to reduce computation time, optical flow is computed only on relevant points in the image. These points are characterized by features that make it possible to discriminate them from the rest of the points in their neighbourhood. Normally, such salient features have prominent values of energy, entropy, or similar statistics. In this work, a salient feature point has been defined as one exhibiting a relevant differential value. Accordingly, a Canny edge extractor is applied to the original incoming image. Pixels providing a positive value after the Canny filter are considered for the calculation of optical flow. The reason is that relevant points are needed for optical flow computation, since point matching has to be performed between two consecutive frames.

Fig. 1. Flow Diagram of the Blind-spot detection algorithm

After that, Canny edge pixels are matched and grouped together in order to detect clusters of pixels that can be considered candidate vehicles in the image. Classical clustering techniques are used to determine groups of pixels, as well as their likelihood of forming a single object. Figure 2 depicts a typical example of matched points after computing optical flow and performing pixel clustering.

R. Moreno-Díaz et al. (Eds.): EUROCAST 2007, LNCS 4739, pp. 1113-1118, 2007. Springer-Verlag Berlin Heidelberg 2007
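The selection of salient points for flow computation can be illustrated with a simple gradient-magnitude threshold. This is a stand-in for the Canny extractor the paper actually uses (Canny adds non-maximum suppression and hysteresis on top of the gradient); the threshold and the toy image are illustrative only.

```python
def salient_points(image, threshold=100.0):
    """Select edge-like points by Sobel gradient magnitude.

    A simplified stand-in for the Canny extractor used in the paper:
    points whose gradient magnitude exceeds `threshold` are kept as
    candidates for optical flow matching between consecutive frames.

    image: 2D list of grayscale intensities.
    Returns a list of (row, col) salient points.
    """
    h, w = len(image), len(image[0])
    points = []
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            # Sobel responses in the x and y directions
            gx = (image[r-1][c+1] + 2*image[r][c+1] + image[r+1][c+1]
                  - image[r-1][c-1] - 2*image[r][c-1] - image[r+1][c-1])
            gy = (image[r+1][c-1] + 2*image[r+1][c] + image[r+1][c+1]
                  - image[r-1][c-1] - 2*image[r-1][c] - image[r-1][c+1])
            if (gx * gx + gy * gy) ** 0.5 > threshold:
                points.append((r, c))
    return points

# A tiny image with a vertical step edge between columns 1 and 2
img = [[0, 0, 255, 255],
       [0, 0, 255, 255],
       [0, 0, 255, 255],
       [0, 0, 255, 255]]
print(salient_points(img))  # -> [(1, 1), (1, 2), (2, 1), (2, 2)]
```

Only the points returned here would be matched across frames, which keeps the optical flow computation tractable in real time.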

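The second grouping stage, which merges clusters lying close to each other into a single blob, can be sketched with a union-find over cluster mass centers. The distance threshold and the sample centers are hypothetical; the paper only states that simple distance criteria are used.

```python
import math

def merge_close_clusters(centers, max_dist=20.0):
    """Merge clusters whose mass centers are closer than max_dist.

    Clusters form one blob if they are connected by a chain of
    pairwise-close centers (union-find with path compression).
    centers: list of (x, y) cluster mass centers.
    Returns a list of merged groups, each a sorted list of center indices.
    """
    parent = list(range(len(centers)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for i in range(len(centers)):
        for j in range(i + 1, len(centers)):
            if math.dist(centers[i], centers[j]) < max_dist:
                parent[find(i)] = find(j)  # union the two blobs

    groups = {}
    for i in range(len(centers)):
        groups.setdefault(find(i), []).append(i)
    return sorted(groups.values())

# Three clusters: the first two are close enough to merge, the third is not
print(merge_close_clusters([(10, 10), (18, 12), (200, 50)]))  # -> [[0, 1], [2]]
```

Each merged group would then yield one candidate blob whose mass center seeds the vehicle search described next.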
Fig. 2. Clustering of pixels providing relevant optical flow

Fig. 3. Grouping of close clusters in a second clustering stage

Pixels in blue represent edge points that have produced relevant optical flow in two consecutive frames. Red ellipses stand for possible groups (clusters) of objects. Violet ellipses represent ambiguous groups of objects that could possibly be split in two. Gray pixels represent the mass centers of detected clusters. Even after pixel clustering, some clusters can still be clearly regarded as belonging to the same real object. A second grouping stage is then carried out among the different clusters in order to determine which of them can be further merged into a single blob. For this purpose, simple distance criteria are considered. As depicted in figure 3, two objects that are very close to each other are finally grouped together in the same cluster. The reason for computing a two-stage clustering process is that selecting a small distance parameter in the first stage preserves useful information about the clusters in the scene. Otherwise, i.e. using a large distance parameter in a single clustering process, very coarse clusters would be obtained, losing all information about the granular content of the points providing optical flow in the image.

The selected clusters constitute the starting point for locating candidate vehicles in the image. For that purpose, the detected positions of the clusters are used as seed points for finding collections of horizontal edges that could potentially represent the lower part of a car. The candidate is located on detected horizontal edges that meet certain conditions of entropy and vertical symmetry. Some of the most critical aspects in blind spot detection are listed below:

1. Shadows on the asphalt due to lampposts, other artifacts, or a large vehicle overtaking the ego-vehicle on the right lane.
2. Self-shadow reflected on the asphalt (especially problematic in sharp turns, such as roundabouts), or self-shadow reflected on road protection fences.

Fig. 4. Pre-detection Flow Diagram

3. Robust performance in tunnels.
4. Avoiding false alarms due to vehicles on the third lane.

The flow diagram of the double-stage detection algorithm is depicted in figure 4. As can be observed, a pre-detector discriminates whether the detected object is behaving like a vehicle or not. If so, the frontal part of the vehicle is located in the region of interest and a pre-warning is issued. In addition, the vehicle mass center is computed. If the frontal part of the vehicle is properly detected and its mass center can also be computed, a final warning message is issued. Vehicle tracking starts at that point. Tracking is stopped when the vehicle gets out of the image. Sometimes, the shadow of the vehicle remains in the image for a while after the vehicle disappears from the scene, causing the warning alarm to persist for one or two seconds. This is not a problem, since the overtaking car is running in parallel with the ego-vehicle during that time, although it is out of the image scene. Thus, maintaining the alarm in such cases turns out to be a desirable side effect. After locating vehicle candidates, they are classified using an SVM classifier previously trained with samples obtained from real road images.

3 Implementation and Results

A digital camera was mounted in the lateral mirror of a real car equipped with a Pentium IV 2.8 GHz PC running Linux Knoppix 3.7 and OpenCV libraries 0.9.6. The car was manually driven for several hours on real highways and roads. In the experiments, the system achieved a detection rate of 99% (one missed vehicle), producing 5 false positive detections. Figure 5 shows an example of blind spot detection in a sequence of images. The indicator depicted in the upper-right part of the figure toggles from green to blue when a vehicle enters the blind spot area (indicated by a green polygon). A blue bounding box depicts the position of the detected vehicle.

Fig. 5. Example of blind spot detection in a sequence of images. The indicator in the upper-right part of the figure toggles from green to blue when a car is detected in the blind spot.

Our current research focuses on the development of SVM-based vehicle recognition for increasing the detection rate and decreasing the false alarm rate, as demonstrated in [3], where SVM was used for vehicle detection in an ACC application.

Acknowledgments. This work has been funded by Research Project CICYT DPI2005-07980-C03-02 (Ministerio de Educación y Ciencia, Spain).

References

1. Giachetti, A., Campani, M., Torre, V.: The use of optical flow for road navigation. IEEE Transactions on Robotics and Automation 14(1) (1998)
2. Kailunailen, J.: Clustering algorithms: basics and visualization. Helsinki University of Technology, Laboratory of Computer and Information Science, T61.195, Special Assignment 1 (2002)
3. Sotelo, M.A., Nuevo, J., Ocaña, M., Bergasa, L.M.: A Monocular Solution to Vision-Based ACC in Road Vehicles. In: Moreno Díaz, R., Pichler, F., Quesada Arencibia, A. (eds.) EUROCAST 2005. LNCS, vol. 3643, pp. 507-512. Springer, Heidelberg (2005)