FUSION OF LASERSCANNER AND VIDEO FOR ADVANCED DRIVER ASSISTANCE SYSTEMS
Nico Kaempchen, Klaus C.J. Dietmayer
University of Ulm, Department of Measurement, Control and Microtechnology
Albert-Einstein-Allee 41, Ulm, Germany
{Nico.Kaempchen, Klaus.Dietmayer}@e-technik.uni-ulm.de

ABSTRACT

In established driver assistance systems, each application relies on its own sensor, which observes the vehicle's environment. Advanced driver assistance systems (ADAS) have an increasing demand for several sensor systems. The described Laserscanner and video based sensor fusion approach serves as a general platform for multiple active safety and comfort applications. A large field of view is obtained, and the certainty and precision of the estimates in the relevant regions are increased significantly. Vehicle position measurement and classification, lane estimation and pedestrian recognition enable broad support for applications such as Lane Departure Warning, Automatic Emergency Braking, PreCrash, Pedestrian Protection, ACC Stop&Go and Low Speed Following.

INTRODUCTION

Recent projects on driver assistance systems have focused on applications such as PreCrash (Chameleon) [1], ACC Stop&Go (Carsense) [2] and recognition of vulnerable road users (PROTECTOR). These advanced driver assistance systems (ADAS) have an increasing demand for several sensor systems, which are mainly complementary but also redundant. Some research has therefore focused on multi-sensor fusion [2,3,4,5,6]. The aim of such a system is to provide a fused description of the traffic scene surrounding the vehicle which is relevant for ADAS, but not specific to a certain application. The fusion system incorporates the data of diverse sensors into a single description. Thus the field of view of a single sensor is enlarged, the certainty and precision of the estimates are increased, and the system design is economically efficient, as different applications share a set of sensors.

SENSORS

A Laserscanner and a monocular camera are combined in order to enable reliable environment recognition at distances of up to 100 m and over a wide angular field of view.

LASERSCANNER

Figure 1: ALASCA Laserscanner of the company IBEO AS integrated into a test vehicle.

VIDEO

The monocular camera is mounted behind the windscreen beside the inner rear-view mirror. The camera is equipped with a ½" CCD chip with a standard VGA resolution of 640×480 pixels. With an 8 mm lens, a horizontal field of view of 44 degrees is realised at an average angular resolution of 0.07 degrees per pixel.

SPATIO-TEMPORAL ALIGNMENT

In order to synchronise the sensors, the camera is triggered when the rotating Laserscanner head is aligned with the direction of the optical axis of the camera [7]. The sensors are calibrated in order to enable spatial alignment. By means of accurate synchronisation and calibration, image regions can be associated directly with Laserscanner measurements (Fig. 2). It is therefore possible to assign a distance to certain image parts, which is a major advantage of this fusion approach.

Figure 2: Left: Laserscanner measurement in bird's-eye view; the different layers of the multilayer Laserscanner are colour coded, and the field of view of the camera is shown in grey. Right: Laserscanner data projected into the image domain.

SENSOR FUSION

The Laserscanner and the video sensors are combined in the fusion system in a way which enables many synergetic effects. The Laserscanner tracks and classifies vehicles, motorbikes and pedestrians in the environment in front of the ego vehicle. The image processing unit estimates the lane and the position of the ego vehicle in the lane. Another image processing module refines the lateral position and velocity estimates of the objects.

Figure 3: System architecture involving different levels of sensor fusion (high-level fusion for combination and verification, on top of lane recognition and object tracking and classification modules fed by the Laserscanner, the video sensor and ESP data).

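The direct association of image regions with Laserscanner measurements can be sketched as a pinhole projection. This is a minimal sketch, assuming the camera sits at the scanner origin with aligned axes (the real system uses a calibrated extrinsic transform, which is omitted here); the intrinsics are derived from the camera specifications above and are illustrative.

```python
import math

# Hypothetical intrinsics derived from the paper's camera specs:
# 640x480 pixels, 44 deg horizontal field of view (8 mm lens).
WIDTH, HEIGHT = 640, 480
FX = WIDTH / (2.0 * math.tan(math.radians(44.0 / 2.0)))  # focal length in pixels
FY = FX                                                  # square pixels assumed
CX, CY = WIDTH / 2.0, HEIGHT / 2.0

def project_scan_point(x, y, z):
    """Project a Laserscanner point (vehicle coordinates: x forward,
    y left, z up, in metres) into the image; returns (u, v) pixel
    coordinates or None if the point lies behind the camera or
    outside the image."""
    if x <= 0.0:
        return None
    u = CX - FX * (y / x)   # points to the left (y > 0) map left of centre
    v = CY - FY * (z / x)   # points above the horizon map to the upper half
    if 0 <= u < WIDTH and 0 <= v < HEIGHT:
        return (u, v)
    return None
```

A point 50 m ahead on the optical axis projects to the image centre, which is how a scan point acquires a pixel location and, conversely, how an image region acquires a distance.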
The high-level fusion combines the ego motion estimation, the lane recognition and the detected objects into a unified description. Based on this high-level fusion, applications can detect lane departures of the ego vehicle and obstacles in the driving path, determine objects of interest for ACC and Low Speed Following, or detect potentially endangered pedestrians.

The sensors are used mainly complementarily. Environment recognition in the near field (< 50 m) is based on the very accurate object tracking of the Laserscanner, which covers a wide angular field of view. In the frontal far field the lateral offset estimate is refined by an image processing module (Fig. 4). The highly reliable object detection of the Laserscanner is used to control the attention of the video system. The distance measurements are combined with the viewing angle measurements of the video, resulting in precise width and position measurements of the object without the flat-world assumption often used in monocular image processing approaches.

Figure 4: Synergetic effect of sensor fusion in the far field: accurate distance measurements of the Laserscanner (blue) and the viewing angle of the video (green) result in precise width and position measurements (red).

The sensor fusion architecture also enables a data flow from the high-level modules to lower levels. For instance, the Laserscanner and the video object tracking are provided with information about the ego motion, whereas the lane recognition module might incorporate the object positions.

LASERSCANNER TRACKING AND CLASSIFICATION

The distance measurements generated by one revolution of the rotating Laserscanner head (a scan) are divided into segments, which are assumed to belong to the same object. The appearance of a segment depends on the orientation and distance of the object in the Laserscanner's coordinate system.

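The grouping of a scan into segments can be sketched as distance-based clustering of consecutive scan points. This is a hedged illustration, not the paper's implementation; the 0.5 m gap threshold is an assumed value.

```python
import math

def segment_scan(points, max_gap=0.5):
    """Group consecutive scan points into segments assumed to belong to
    the same object. points: list of (x, y) coordinates in metres,
    ordered by scan angle. A new segment starts whenever the Euclidean
    gap to the previous point exceeds max_gap (illustrative threshold)."""
    segments = []
    for p in points:
        if segments and math.dist(segments[-1][-1], p) <= max_gap:
            segments[-1].append(p)   # continue the current segment
        else:
            segments.append([p])     # gap too large: start a new segment
    return segments
```

Two cars side by side at different ranges would thus yield two segments, each of which is then tracked and classified individually.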
Tracking is based on these segments using a Kalman filter approach, where the state vector consists of the object's position, velocity, orientation, width and length [8]. The object is classified using the filtered width, length and velocity as features [9,10]. Possible object types are pedestrian, car, truck, (motor-)bike, small and big.

VIDEO OBJECT TRACKING

One advantage of the Laserscanner measurement principle is that a measurement is performed only where the laser beam is actually reflected by an object. The Laserscanner is therefore very reliable in object detection. Additionally, the distance is measured very accurately. Both characteristics are weak points of monocular image processing approaches. On the other hand, a video image exhibits a higher angular resolution than a scan. We therefore decided to take advantage of both systems and design a video image processing module which is attention driven by the Laserscanner's object detection and which refines the lateral position and velocity as well as the width measurements.

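The segment-based Kalman filter tracking mentioned above estimates position, velocity, orientation, width and length per object; as a hedged one-dimensional illustration, a constant-velocity predict/update cycle for a single coordinate can be sketched as follows. The noise values q and r are assumptions, not the paper's tuning.

```python
# Minimal constant-velocity Kalman filter for one coordinate of the
# object state; the paper's filter additionally tracks orientation,
# width and length. q (process noise) and r (measurement noise) are
# illustrative assumptions.

def kalman_step(x, v, P, z, dt=0.1, q=0.5, r=0.25):
    """One predict/update cycle. State (x, v), 2x2 covariance P as a
    nested list, scalar position measurement z from the segment."""
    # Predict with the constant-velocity model F = [[1, dt], [0, 1]].
    x, v = x + dt * v, v
    P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
          P[0][1] + dt * P[1][1]],
         [P[1][0] + dt * P[1][1],
          P[1][1] + q]]
    # Update with the position measurement (H = [1, 0]).
    s = P[0][0] + r                      # innovation covariance
    k0, k1 = P[0][0] / s, P[1][0] / s    # Kalman gains
    y = z - x                            # innovation
    x, v = x + k0 * y, v + k1 * y
    P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
         [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]
    return x, v, P
```

Fed with noisy segment positions, the filtered state converges to the object's position and velocity, and the filtered width, length and velocity then serve as classification features.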
ATTENTION CONTROL

A region of interest (ROI) in the image is calculated from the object's width, as measured by the Laserscanner, and the type of the object, as classified by the Laserscanner's object recognition (Fig. 5). The ROI of each object is then searched for features which permit recognition of the presence of a vehicle. Depending on the classification of the Laserscanner's object recognition into cars and trucks, different features are searched for. Using adaptive thresholds, the applied image processing is widely independent of illumination changes.

Figure 5: Attention control by the Laserscanner's object detection.

PATTERN MATCHING FOR CARS

Figure 6: Illumination and car type independent dark areas underneath cars.

The appearance of cars depends on the type of the car, its colour and the illumination. There are, however, certain features which are independent of these characteristics (Fig. 6). The tyres, the lower part of the rear bumper and the shadow underneath the car are reliable features which appear in the image as dark regions and are widely independent of both the illumination and the car's brand and colour. In order to search for this pattern of dark areas, a binary image is calculated from the ROI. The threshold is calculated from the histogram of the ROI: the 10% darkest pixels are selected for the further processing steps (Fig. 7).

Figure 7: Image in the ROI, histogram, adaptive threshold and binary image.

Possible shapes and sizes of the template's appearance in the image domain are motivated by the possible widths of passenger cars as well as the real sizes of their wheels. The templates are scaled with respect to the distance of the vehicle as measured by the Laserscanner.

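The adaptive darkest-10% threshold described above can be sketched from the histogram. This is a pure-Python sketch over 8-bit grey values; a real system would operate on image arrays.

```python
def darkest_fraction_threshold(pixels, fraction=0.10):
    """Return the grey value t such that pixels <= t make up at least
    the requested fraction of the ROI. pixels: iterable of 8-bit
    grey values (0..255)."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1                 # histogram of the ROI
    target = fraction * len(pixels)  # how many pixels should fall below t
    cumulative = 0
    for t in range(256):
        cumulative += hist[t]
        if cumulative >= target:
            return t
    return 255

def binarise(pixels, t):
    """Binary mask of candidate dark pixels (1 = dark)."""
    return [1 if p <= t else 0 for p in pixels]
```

Because the threshold is recomputed from each ROI's histogram, the resulting binary image adapts to the current illumination, which is what makes the subsequent pattern matching widely illumination independent.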
Figure 8: Wheel patterns with different tyre sizes (a) and of different widths (b).

The wheel pattern is searched for in every row of the ROI, taking the minimal and maximal possible parameters for the width of cars and their wheels into account (Fig. 8). Every match is taken as a hypothesis. In Fig. 9 (a) the green horizontal lines show the regions where a matching wheel template was found.

Figure 9: Matched wheel patterns (green), bumper patterns (red) and shadow patterns (blue).

Then the ROI is searched for shaded regions which lie above and below the hypothesised wheel patterns and stem from either the dark areas of the lower part of the bumper or the shadow underneath the car (Fig. 9 (b)). For all hypothesised combinations a probability is calculated. If the probability of the best hypothesis exceeds a chosen threshold, it is taken as a match for the recognition of the wheels. From the position and width of the wheel pattern in the image domain, together with the distance measured by the Laserscanner, a very accurate position and width of the object in the 3D ego-vehicle coordinate system can be calculated (Fig. 10).

Figure 10: Resulting position and width measurements.

PATTERN MATCHING FOR TRUCKS

Trucks do not exhibit the same common patterns as cars, as can be seen in Fig. 11. Even the often used symmetry operator performs poorly, as there are often highly unsymmetrical writings, drawings and illuminations on the rear of lorries.

Figure 11: Trucks exhibiting strong vertical edges.

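The final measurement step for cars (Fig. 10) can be sketched with the pinhole model: once the wheel pattern is located, the metric width follows from its pixel width and the Laserscanner distance, with no flat-world assumption. The focal length below is derived from the camera specifications (640 px across a 44 degree field of view) and the fronto-parallel approximation is an assumption of this sketch.

```python
import math

# Illustrative focal length in pixels, from the paper's camera specs.
FX = 640 / (2.0 * math.tan(math.radians(44.0 / 2.0)))

def object_width_m(pixel_width, laser_distance_m):
    """Metric width of the vehicle from the pixel width of the matched
    wheel pattern and the Laserscanner range (fronto-parallel rear
    assumed)."""
    return laser_distance_m * pixel_width / FX
```

At 50 m, a passenger car of 1.8 m width spans only about 29 pixels, which illustrates why the accurate laser range is needed to turn the pixel measurement into a metric one.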
The rear of most trucks is, however, characterised by strong vertical edges. The strongest edges are selected by analysing the sums of the edge magnitudes of each column in the ROI.

Figure 12: Detection of strong vertical edges and the lower edge of the rear.

The edges delimiting the rear of the truck are selected from the hypotheses by incorporating a priori knowledge about the viewing angle and the edge measurements of the Laserscanner (Fig. 12). The lower edge of the rear is determined by analysing the shadow underneath the truck in the binary image of the 10% darkest pixels in the ROI.

CORRELATION TRACKING

In order to stabilise the estimates, a feature tracking approach is applied to the vehicle in the ROI. Additionally, this tracking approach delivers an estimate of the lateral velocity when scaled with the distance measured by the Laserscanner.

LANE RECOGNITION

In our approach, lane recognition based on image sequences aims at determining the position of the ego vehicle in the lane in terms of lateral offset, yaw angle and lane width. Taking the speed of the ego vehicle into account, the time of lane departure can be predicted. Additionally, the linear extrapolation of the estimated lane can be used by the high-level fusion system for a lane assignment of tracked preceding vehicles.

Figure 13: Lane recognition based on lane markings.

The lane recognition is based on multiple features, namely bright patches of a certain size and strong edges (Fig. 13). The gradient direction of the edges is considered as well. This enables a precise and robust extraction of the lane markings.

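The lane departure time prediction mentioned above can be sketched from the estimated lane state. This is a hedged illustration under a straight-lane, small-curvature assumption; the variable names and the sign convention are choices of this sketch, not the paper's.

```python
import math

def time_to_lane_departure(lateral_offset, yaw_angle_rad, speed_mps, lane_width):
    """Seconds until the ego vehicle crosses the nearer lane marking.
    lateral_offset: metres from the lane centre (positive = left);
    yaw_angle_rad: heading relative to the lane direction (positive =
    drifting left). Returns None when the vehicle is not drifting
    towards either marking."""
    lateral_velocity = speed_mps * math.sin(yaw_angle_rad)
    if lateral_velocity > 0:
        margin = lane_width / 2.0 - lateral_offset   # distance to left marking
    elif lateral_velocity < 0:
        margin = lane_width / 2.0 + lateral_offset   # distance to right marking
    else:
        return None
    return margin / abs(lateral_velocity)
```

For example, a vehicle centred in a 3.6 m lane, driving at 20 m/s with a lateral drift of 1 m/s, would cross the marking after 1.8 s, which is the quantity a Lane Departure Warning application would threshold.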
RESULTS

A major problem of the first generation of radar based ACC systems is the inaccuracy of the estimation of the lateral offset of preceding vehicles. Lane changes of objects in front of the ego vehicle are therefore detected only when the vehicle is already completely in the target lane. In case of an object cutting into the ego vehicle's lane, the ACC system consequently often reacts too late, so that the driver is forced to take over.

Figure 14: Distance x, width and lateral offset y of a preceding car.

The proposed fusion was tested with respect to the accuracy of the estimation of the lateral offset. Figure 14 shows the distance x of a preceding car over time. The car was detected in every instance at a distance range between 50 m and slightly above 100 m. The second plot shows the width of the car estimated by the Laserscanner compared to the fused estimate, and the third plot shows the estimated lateral offset y. No tracking was applied in this case; every parameter was derived from a single scan and image. The estimates of width and lateral offset of the Laserscanner exhibit a certain discretisation which originates from its angular resolution. The image processing algorithms are able to refine both the width and the lateral offset estimates.

CONCLUSION

A sensor fusion system was presented which combines the object tracking of a Laserscanner and of a video system as well as lane recognition performed by an image processing unit. The environment recognition in the near field (< 50 m) is mainly based on the very accurate object tracking of the Laserscanner, which covers a wide angular field of view. In the frontal far field the lateral offset estimate is refined by an image processing module which searches for features that are independent of illumination and vehicle appearance. The fused estimate exhibits a high precision, which is necessary for early lane change detection of preceding vehicles. The environment description is application independent and can be used by multiple automotive active safety and comfort applications.

REFERENCES

1. Fuerstenberg, K.; Baraud, P.; Caporaletti, G.; Citelli, S.; Eitan, Z.; Lages, U.; Lavergne, C.: Development of a Pre-Crash sensorial system - the CHAMELEON Project, in VDI Berichte 1653: Fahrzeugkonzepte für das 2. Jahrhundert Automobiltechnik 2001, Wolfsburg, Germany, 2001.
2. Langheim, J.; Buchanan, A.J.; Lages, U.; Wahl, M.: CARSENSE - New environment sensing for advanced driver assistance systems, in Proceedings of the IEEE Intelligent Vehicle Symposium 2001, Tokyo, Japan, 2001.
3. Stiller, C.; Hipp, J.; Rössig, C.; Ewald, A.: Multisensor obstacle detection and tracking, Image and Vision Computing, vol. 18, 2000.
4. Gruyer, D.; Royere, C.; Berge-Cherfaoui, V.: Credibilist multi-sensor fusion for the mapping of dynamic environment, in Fusion 2000, Paris, France, July 2000.
5. Vukotich, A.; Kirchner, A.: Sensor fusion for driver-assistance-systems, in Elektronik im Kraftfahrzeug, Baden-Baden, Germany.
6. Dietmayer, K.C.J.; Kirchner, A.; Kaempchen, N.: Fusionsarchitekturen zur Umfeldwahrnehmung für zukünftige Fahrerassistenzsysteme, in Fahrerassistenzsysteme, Springer Verlag, Germany, 2004 (accepted).
7. Kaempchen, N.; Dietmayer, K.C.J.: Data synchronization strategies for multi-sensor fusion, in Proceedings of the 10th World Congress on Intelligent Transport Systems and Services, Madrid, Spain, 2003.
8. Fuerstenberg, K.Ch.; Dietmayer, K.C.J.; Eisenlauer, S.; Willhoeft, V.: Multilayer Laserscanner for robust Object Tracking and Classification in Urban Traffic Scenes, in Proceedings of ITS 2002, 9th World Congress on Intelligent Transport Systems, Chicago, USA, October 2002.
9. Fuerstenberg, K.Ch.; Dietmayer, K.C.J.: Object Tracking and Classification for Multiple Active Safety and Comfort Applications using a Multilayer Laserscanner, in Proceedings of the IEEE Intelligent Vehicles Symposium 2004, Parma, Italy, 2004.
10. Fuerstenberg, K.Ch.; Dietmayer, K.C.J.; Willhoeft, V.: Pedestrian Recognition in Urban Traffic using a vehicle based Multilayer Laserscanner, in Proceedings of the IEEE Intelligent Vehicles Symposium 2002, Versailles, France, 2002, Paper IV-80.
More informationRIEGL VZ-400 NEW. Laser Scanners. Latest News March 2009
Latest News March 2009 NEW RIEGL VZ-400 Laser Scanners The following document details some of the excellent results acquired with the new RIEGL VZ-400 scanners, including: Time-optimised fine-scans The
More informationCanny Edge Detection
Canny Edge Detection 09gr820 March 23, 2009 1 Introduction The purpose of edge detection in general is to significantly reduce the amount of data in an image, while preserving the structural properties
More informationEfficient Background Subtraction and Shadow Removal Technique for Multiple Human object Tracking
ISSN: 2321-7782 (Online) Volume 1, Issue 7, December 2013 International Journal of Advance Research in Computer Science and Management Studies Research Paper Available online at: www.ijarcsms.com Efficient
More informationTube Control Measurement, Sorting Modular System for Glass Tube
Tube Control Measurement, Sorting Modular System for Glass Tube Tube Control is a modular designed system of settled instruments and modules. It comprises measuring instruments for the tube dimensions,
More informationVehicle Tracking System Robust to Changes in Environmental Conditions
INORMATION & COMMUNICATIONS Vehicle Tracking System Robust to Changes in Environmental Conditions Yasuo OGIUCHI*, Masakatsu HIGASHIKUBO, Kenji NISHIDA and Takio KURITA Driving Safety Support Systems (DSSS)
More informationInformation Contents of High Resolution Satellite Images
Information Contents of High Resolution Satellite Images H. Topan, G. Büyüksalih Zonguldak Karelmas University K. Jacobsen University of Hannover, Germany Keywords: satellite images, mapping, resolution,
More informationDetection and Restoration of Vertical Non-linear Scratches in Digitized Film Sequences
Detection and Restoration of Vertical Non-linear Scratches in Digitized Film Sequences Byoung-moon You 1, Kyung-tack Jung 2, Sang-kook Kim 2, and Doo-sung Hwang 3 1 L&Y Vision Technologies, Inc., Daejeon,
More informationBasler. Area Scan Cameras
Basler Area Scan Cameras VGA to 5 megapixels and up to 210 fps Selected high quality Sony and Kodak CCD sensors Powerful Gigabit Ethernet interface Superb image quality at all resolutions and frame rates
More informationWHITE PAPER. Are More Pixels Better? www.basler-ipcam.com. Resolution Does it Really Matter?
WHITE PAPER www.basler-ipcam.com Are More Pixels Better? The most frequently asked question when buying a new digital security camera is, What resolution does the camera provide? The resolution is indeed
More informationElectric Power Steering Automation for Autonomous Driving
Electric Power Steering Automation for Autonomous Driving J. E. Naranjo, C. González, R. García, T. de Pedro Instituto de Automática Industrial (CSIC) Ctra. Campo Real Km.,2, La Poveda, Arganda del Rey,
More informationOptimal Vision Using Cameras for Intelligent Transportation Systems
WHITE PAPER www.baslerweb.com Optimal Vision Using Cameras for Intelligent Transportation Systems Intelligent Transportation Systems (ITS) require intelligent vision solutions. Modern camera technologies
More informationIntegrated sensors for robotic laser welding
Proceedings of the Third International WLT-Conference on Lasers in Manufacturing 2005,Munich, June 2005 Integrated sensors for robotic laser welding D. Iakovou *, R.G.K.M Aarts, J. Meijer University of
More informationA method of generating free-route walk-through animation using vehicle-borne video image
A method of generating free-route walk-through animation using vehicle-borne video image Jun KUMAGAI* Ryosuke SHIBASAKI* *Graduate School of Frontier Sciences, Shibasaki lab. University of Tokyo 4-6-1
More information3D Vision An enabling Technology for Advanced Driver Assistance and Autonomous Offroad Driving
3D Vision An enabling Technology for Advanced Driver Assistance and Autonomous Offroad Driving AIT Austrian Institute of Technology Safety & Security Department Manfred Gruber Safe and Autonomous Systems
More informationAdaptive Cruise Control
IJIRST International Journal for Innovative Research in Science & Technology Volume 3 Issue 01 June 2016 ISSN (online): 2349-6010 Adaptive Cruise Control Prof. D. S. Vidhya Assistant Professor Miss Cecilia
More informationFace Recognition in Low-resolution Images by Using Local Zernike Moments
Proceedings of the International Conference on Machine Vision and Machine Learning Prague, Czech Republic, August14-15, 014 Paper No. 15 Face Recognition in Low-resolution Images by Using Local Zernie
More informationHow To Use Trackeye
Product information Image Systems AB Main office: Ågatan 40, SE-582 22 Linköping Phone +46 13 200 100, fax +46 13 200 150 info@imagesystems.se, Introduction TrackEye is the world leading system for motion
More informationScanners and How to Use Them
Written by Jonathan Sachs Copyright 1996-1999 Digital Light & Color Introduction A scanner is a device that converts images to a digital file you can use with your computer. There are many different types
More informationCorrecting the Lateral Response Artifact in Radiochromic Film Images from Flatbed Scanners
Correcting the Lateral Response Artifact in Radiochromic Film Images from Flatbed Scanners Background The lateral response artifact (LRA) in radiochromic film images from flatbed scanners was first pointed
More informationEFFECT OF VEHICLE DESIGN ON HEAD INJURY SEVERITY AND THROW DISTANCE VARIATIONS IN BICYCLE CRASHES
EFFECT OF VEHICLE DESIGN ON HEAD INJURY SEVERITY AND THROW DISTANCE VARIATIONS IN BICYCLE CRASHES S. Mukherjee A. Chawla D. Mohan M. Singh R. Dey Transportation Res. and Injury Prevention program Indian
More informationCollision Prevention and Area Monitoring with the LMS Laser Measurement System
Collision Prevention and Area Monitoring with the LMS Laser Measurement System PDF processed with CutePDF evaluation edition www.cutepdf.com A v o i d...... collisions SICK Laser Measurement Systems are
More informationTestimony of Ann Wilson House Energy & Commerce Committee Subcommittee on Commerce, Manufacturing and Trade, October 21, 2015
House Energy & Commerce Committee Subcommittee on Commerce, Manufacturing and Trade, October 21, 2015 Introduction Chairman Burgess, Ranking Member Schakowsky, members of the subcommittee: Thank you for
More informationA Study on SURF Algorithm and Real-Time Tracking Objects Using Optical Flow
, pp.233-237 http://dx.doi.org/10.14257/astl.2014.51.53 A Study on SURF Algorithm and Real-Time Tracking Objects Using Optical Flow Giwoo Kim 1, Hye-Youn Lim 1 and Dae-Seong Kang 1, 1 Department of electronices
More informationOverview. Proven Image Quality and Easy to Use Without a Frame Grabber. Your benefits include:
Basler runner Line Scan Cameras High-quality line scan technology meets a cost-effective GigE interface Real color support in a compact housing size Shading correction compensates for difficult lighting
More informationDistributed Vision Processing in Smart Camera Networks
Distributed Vision Processing in Smart Camera Networks CVPR-07 Hamid Aghajan, Stanford University, USA François Berry, Univ. Blaise Pascal, France Horst Bischof, TU Graz, Austria Richard Kleihorst, NXP
More informationBasler scout AREA SCAN CAMERAS
Basler scout AREA SCAN CAMERAS VGA to 2 megapixels and up to 120 fps Selected high quality CCD and CMOS sensors Gigabit Ethernet and FireWire-b interfaces Perfect fit for a variety of applications - extremely
More informationPREDICTING THROW DISTANCE VARIATIONS IN BICYCLE CRASHES
PREDICTING THROW DISTANCE VARIATIONS IN BICYCLE CRASHES MUKHERJEE S, CHAWLA A, MOHAN D, CHANDRAWAT S, AGARWAL V TRANSPORTATION RESEARCH AND INJURY PREVENTION PROGRAMME INDIAN INSTITUTE OF TECHNOLOGY NEW
More informationENGINEERING METROLOGY
ENGINEERING METROLOGY ACADEMIC YEAR 92-93, SEMESTER ONE COORDINATE MEASURING MACHINES OPTICAL MEASUREMENT SYSTEMS; DEPARTMENT OF MECHANICAL ENGINEERING ISFAHAN UNIVERSITY OF TECHNOLOGY Coordinate Measuring
More informationVIAVISION. Intelligent Driving NO 03. April 2012. The Car with an Independent Mind. ¼ of a second. 99 out of 1oo new cars
VOLKSWAGEN GROUP NO 03 April 2012 SHAPING THE FUTURE OF MOBILITY Editorial Dr. Ulrich Hackenberg 2 Clever Companions Safety and Comfort on the Road 2 Connected When Cars Learn to Communicate 6 Vision of
More informationA System for Capturing High Resolution Images
A System for Capturing High Resolution Images G.Voyatzis, G.Angelopoulos, A.Bors and I.Pitas Department of Informatics University of Thessaloniki BOX 451, 54006 Thessaloniki GREECE e-mail: pitas@zeus.csd.auth.gr
More informationBasler pilot AREA SCAN CAMERAS
Basler pilot AREA SCAN CAMERAS VGA to 5 megapixels and up to 210 fps Selected high quality CCD sensors Powerful Gigabit Ethernet interface Superb image quality at all Resolutions and frame rates OVERVIEW
More informationDigitization of Old Maps Using Deskan Express 5.0
Dražen Tutić *, Miljenko Lapaine ** Digitization of Old Maps Using Deskan Express 5.0 Keywords: digitization; scanner; scanning; old maps; Deskan Express 5.0. Summary The Faculty of Geodesy, University
More informationA semi-autonomous sewer surveillance and inspection vehicle
A semi-autonomous sewer surveillance and inspection vehicle R.M. Gooch, T.A. Clarke, & T.J. Ellis. Dept of Electrical, Electronic and Information Engineering, City University, Northampton Square, LONDON
More informationDepartment of Information Engineering University of Pisa. Automotive Radar. Maria S. Greco. 2012 IEEE Radar Conference, May 7-11, Atlanta
Department of Information Engineering University of Pisa. Automotive Radar Maria S. Greco Automotive RADAR Why? Automotive RADARs as core sensor (range, speed) of driver assistance systems: long range
More informationA vehicle independent low cost platform for naturalistic driving studies Henning Mosebach, DLR
A vehicle independent low cost platform for naturalistic driving studies Henning Mosebach, DLR A low cost platform for naturalistic driving studies > 14th of may 2009 > Folie 1 Institute of transportation
More informationMACHINE VISION MNEMONICS, INC. 102 Gaither Drive, Suite 4 Mount Laurel, NJ 08054 USA 856-234-0970 www.mnemonicsinc.com
MACHINE VISION by MNEMONICS, INC. 102 Gaither Drive, Suite 4 Mount Laurel, NJ 08054 USA 856-234-0970 www.mnemonicsinc.com Overview A visual information processing company with over 25 years experience
More informationVECTORAL IMAGING THE NEW DIRECTION IN AUTOMATED OPTICAL INSPECTION
VECTORAL IMAGING THE NEW DIRECTION IN AUTOMATED OPTICAL INSPECTION Mark J. Norris Vision Inspection Technology, LLC Haverhill, MA mnorris@vitechnology.com ABSTRACT Traditional methods of identifying and
More information