3D Vehicle Extraction and Tracking from Multiple Viewpoints for Traffic Monitoring by using Probability Fusion Map
Electronic Letters on Computer Vision and Image Analysis 7(2), 2008

3D Vehicle Extraction and Tracking from Multiple Viewpoints for Traffic Monitoring by using Probability Fusion Map

Zhencheng Hu, Chenhao Wang and Keiichi Uchimura
The Graduate School of Science and Technology, Kumamoto University, Kurokami, Kumamoto, Japan

Received 15th May 2008; accepted 12th June 2008

Abstract

This paper presents a novel solution to vehicle occlusion and 3D measurement for traffic monitoring, based on data fusion from multiple stationary cameras. Compared with conventional single-camera methods for traffic monitoring, our approach fuses video data from different viewpoints into a common probability fusion map (PFM) and extracts targets from it. The proposed PFM efficiently handles and fuses data to estimate the probability of vehicle appearance, and real outdoor experiments verify that it is more reliable than a single-camera solution. An AMF-based shadow modeling algorithm is also proposed to remove shadows on the road area and extract the proper vehicle regions.

Key Words: Probability Fusion Map, 3D Modeling, Multiple View, Traffic Monitoring

1 Introduction

Intelligent traffic monitoring is an active area of research, since increasing volumes of vehicular traffic around the world lead to problems such as environmental degradation and economic inefficiency. The acquisition of accurate traffic data is essential for optimizing traffic management systems. Approaches to traffic monitoring can be classified into two major categories: spot monitoring and area monitoring. Microwave transducers and underground magnetic loop sensors are examples of spot monitoring systems, while area monitoring systems generally rely on video cameras. Ground (magnetic) loop detectors are accurate but have some major drawbacks.
Installation of these detectors requires the excavation of road surfaces and is therefore expensive and complex, and loop detectors can only acquire the vehicle count and speed. Video (camera) monitoring systems, on the other hand, have several advantages. They are easy to install and maintain, and visual information can potentially provide richer data, including lane-changing frequency, vehicle trajectories, and driving behavior analysis through continuous tracking of the target vehicle. However, previous approaches are generally based on a single camera's output, which is sensitive to occlusion, shadows and varying illumination conditions, and traditional video monitoring systems cannot measure a target vehicle's 3D size for vehicle classification and recognition.

Correspondence to: <[email protected]> Recommended for acceptance by João Manuel Tavares. ELCVIA ISSN: Published by Computer Vision Center / Universitat Autònoma de Barcelona, Barcelona, Spain
In the work presented here, we describe a 3D vehicle extraction and tracking approach based on the fusion of video data acquired from multiple un-calibrated cameras. We exploit the fact that images of the same scene acquired by different cameras must have some degree of self-consistency when projected onto a common framework. Because points on the road surface satisfy the constraint that their inverse-mapped points lie on a horizontal plane, a set of common road features can be defined for registering the inverse-mapped images from the different cameras. Thus, the individual background-subtracted and inverse-mapped images can be merged to create a Probability Fusion Map (PFM), where the intensity represents the probability of a vehicle being present. The length and width of the target vehicle can be derived directly from the PFM, and the height is calculated from the original background-subtracted binary result. In this paper, we also propose an Approximated Median Filter (AMF) based shadow modeling algorithm to remove shadows and extract the proper vehicle regions. Real road tests show a correct recognition rate of more than 95 percent and a vehicle speed estimation accuracy of 92 percent. The vehicle's 3D measurements are fed into a further recognition process for vehicle classification and recognition.

2 Review of Previous Works

Tracking of moving vehicles from video streams has been an active area of research in both traffic surveillance and computer vision. Recently, many single-camera works have focused on improving robustness and precision under different weather and traffic conditions. The most popular self-adaptive background subtraction algorithms are summarized in [1].
Non-recursive techniques, such as Frame Differencing (FD), the Median Filter (MF) and the Linear Predictive Filter (LPF), and recursive techniques, such as the Approximated Median Filter (AMF), the Kalman Filter (KF) and the Mixture of Gaussians (MoG), are evaluated there with retrieval measures (recall and precision) that quantify how well each algorithm matches the ground truth under different weather conditions. However, a vital problem of these single-camera solutions remains: target areas occlude each other because of the limitations of the camera's viewpoint and angle (Figure 1). Occlusion causes under-segmentation of the extracted regions and leads to miscounting and mistracking. Most single-camera solutions use target tracking to overcome the occlusion problem within certain frames. Jung [2] provided a real-time system for measuring traffic parameters through adaptive background extraction, and used the concepts of explicit and implicit occlusion to deal with the occlusion problem; however, this work was effective only for straight-line movement in highway traffic, and dense traffic was not discussed. Malik [3], at the University of California, Berkeley, used a feature-based method along with occlusion reasoning for tracking vehicles in congested traffic, and in [5], vehicle parameters are extracted and then tracked by Kalman filtering. The University of Reading has done extensive work on three-dimensional tracking of vehicles and classification of the tracked vehicles using 3D model matching methods. However, tracking is effective only in less dense traffic scenes, and the targets must not be occluded in the first couple of frames. In addition, because a single camera provides little depth information, 3D measurement of the target vehicle is extremely difficult.
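As a concrete illustration of the recursive AMF background subtraction scheme evaluated in [1], here is a minimal per-pixel sketch; the function names and the step/threshold values are illustrative, not taken from the paper:

```python
import numpy as np

def amf_update(background, frame, step=1.0):
    """Approximated Median Filter: nudge each background pixel toward the
    new frame by a fixed step; the estimate converges to the temporal median."""
    background = background.astype(np.float64)
    frame = frame.astype(np.float64)
    return background + step * np.sign(frame - background)

def foreground_mask(background, frame, threshold=30):
    """Binarize |frame - background| to extract candidate vehicle pixels."""
    diff = np.abs(frame.astype(np.float64) - background.astype(np.float64))
    return diff > threshold
```

In a monitoring loop, `amf_update` would be applied to every incoming frame so the background adapts to slow illumination changes, while `foreground_mask` yields the binary regions passed on to morphological processing and blob analysis.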
Stereo cameras are rarely used for traffic monitoring, since the inherent complexity of stereo algorithms and the need to solve the correspondence problem make them difficult for real-time applications. Another critical issue is shadows. Shadows significantly affect the extraction of target regions, especially in urban areas where targets frequently move in and out of the shadows of roadside buildings. Geometric projection [7] uses binocular vision with two registered cameras to eliminate shadows. Color constancy [8] models pixel chromaticity and its distortion in HSV space. In [9], the shadow model is concerned with deriving a progression of shadow-free image representations. However, none of the shadow-removal algorithms proposed so far provides an effective solution for traffic surveillance applications, since the environmental light changes continuously and the algorithm has to deal with unevenly shadowed regions at distances of up to 100 meters.
Figure 1: Example of occlusion in a vision-based traffic monitoring system

Figure 2: Block diagram of a conventional traffic analysis algorithm

3 Probability Fusion Map (PFM)

To overcome the problems of the single-camera solution, visual data from multiple viewpoints are fused based on the concept of a probability fusion map (PFM). The PFM represents the probability of a vehicle being present in the scene, and is calculated by merging the individual inversely mapped images within a common framework. In addition, since the PFM is built on the inversely mapped images, the target's 3D information, including length, width and height, can be derived directly through blob analysis on the PFM. In the following, Section 3.1 describes the common processing steps of a conventional traffic analysis system, and Section 3.2 describes the PFM-based solution together with an effective shadow removal algorithm.

3.1 Conventional Traffic Analysis

The common processing steps for single-camera vehicle detection and tracking are listed here; a block diagram of the procedure is shown in Figure 2.

Background Generation and Update: AMF or MoG based automatic background generation algorithms are generally employed to adaptively track illumination changes in the background.

Background Subtraction: Each new frame is fed into the background subtraction module and the result is binarized adaptively to extract the foreground regions. Morphological processing such as opening/closing is necessary to segment the proper target regions.

Vehicle Detection: To improve the accuracy of identification and classification, a vehicle is recognized and analyzed by blob analysis in image coordinates and projection coordinates.
Tracking: The vehicle's position, velocity and driving state are derived in real time through vehicle tracking.

Although conventional solutions have improved in precision and efficiency, the occlusion problem still affects the detection results.

3.2 Image data fusion solution

Our solution (PFM) focuses on fusing the data provided by the different viewpoints; the probability of vehicle presence is calculated, and vehicles and their parameters are judged from the fusion results. Therefore, even if one of the viewpoints fails because of occlusion or backlighting, the object can still be extracted correctly from the remaining viewpoints, as the final detection result is synthesized from the weight and probability of each viewpoint. Furthermore, this simplifies the traditional algorithms for object extraction and tracking: noise caused by weather, sudden lighting changes and weak shadows can be eliminated effectively.

The basis of data fusion by PFM is to set up a common inverse image of the same road scene from different viewpoints with omnidirectional monitoring. Common feature points are set on the road ahead or referred to landmarks on the road. They are used to register the cameras through inverse projection, and the probability of a vehicle appearing on the road is calculated from the fusion map of the multiple views. Finally, all information about the vehicle, including location, 3D size and velocity, can be acquired as a direct result of the PFM. In the following, the theory of inverse projection is described in subsection 3.2.1, and the calculation of the PFM and the vehicle extraction algorithm are described in the subsections that follow.

3.2.1 Inverse Projection

The relationship among the cameras is realized by a transform matrix, calculated by inverse projection with reference to the feature points on the road.
The model function of the inverse projection is expressed as follows:

$$\begin{pmatrix} x_i \\ y_i \\ 1 \end{pmatrix} = A P \begin{pmatrix} X_i \\ Y_i \\ Z_i \\ 1 \end{pmatrix} = \begin{pmatrix} f_x & 0 & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{pmatrix} P \begin{pmatrix} X_i \\ Y_i \\ Z_i \\ 1 \end{pmatrix} \qquad (1)$$

where $i = 0, 1, 2, \ldots$ is the index of the camera, $(X_i, Y_i, Z_i, 1)^T$ is the position of a feature point in 3D world coordinates, and $(x_i, y_i, 1)^T$ is the corresponding position in image coordinates. The matrix $A$ is the camera's intrinsic parameter matrix, with focal lengths $f_x$, $f_y$ and optical center $(u_0, v_0)$. The extrinsic matrix $P$ is acquired by the following steps:

1. The 3D world coordinates $(X_i, Y_i, Z_i, 1)^T$ of the feature points are acquired on the road pavement surface. Assuming the road surface is flat, $Z_i$ can be set to 0, and the planar positions of these points are acquired from prior knowledge or real measurements.

2. Meanwhile, the feature points are marked in the scene and their image coordinates are acquired.

3. Finally, the relationship between the image and the real world is calculated by the least-squares method in [6].

Figure 3 illustrates the inverse projection from different viewpoints.
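Since setting $Z_i = 0$ reduces Eq. (1) to a 3x3 plane-to-image homography, the least-squares registration of step 3 can be sketched with a standard Direct Linear Transform fit. This is a sketch under that assumption; the paper's exact estimator from [6] may differ:

```python
import numpy as np

def fit_homography(world_pts, image_pts):
    """Direct Linear Transform: least-squares fit of the 3x3 homography H
    mapping road-plane points (X, Y, 1) to image points (x, y, 1), i.e. the
    composite A.P of Eq. (1) with the Z = 0 column dropped. Needs >= 4
    non-degenerate correspondences."""
    rows = []
    for (X, Y), (x, y) in zip(world_pts, image_pts):
        rows.append([X, Y, 1, 0, 0, 0, -x * X, -x * Y, -x])
        rows.append([0, 0, 0, X, Y, 1, -y * X, -y * Y, -y])
    M = np.asarray(rows, dtype=float)
    # The least-squares solution is the right singular vector of the
    # smallest singular value of M.
    _, _, vt = np.linalg.svd(M)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]          # fix the arbitrary scale
```

With more than four marked road features the fit is overdetermined, which is what makes the registration robust to small measurement errors in the feature positions.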
Figure 3: Inverse projection onto the common framework derived from different viewpoints

Figure 4: Inverse projection from a real scene

3.2.2 Fusion of multiple views

With the inverse projection and the transform matrix, every pixel in the scene from each viewpoint can be transformed by its individual inverse matrix onto a common framework, similar to a virtual platform scene seen from a virtual bird's-eye viewpoint. Equation (2) can be derived from Eq. (1) with $Z_i$ set to zero:

$$\begin{pmatrix} X_i \\ Y_i \\ 0 \\ 1 \end{pmatrix} = P^{-1} A^{-1} \begin{pmatrix} x_i \\ y_i \\ 1 \end{pmatrix} \qquad (2)$$

As a result, each road scene from the different viewpoints can be transformed onto the virtual platform through its respective inverse projection. An example of an inverse projection result is shown in Figure 4. However, because of vehicle height and camera location, the inverse-projected images from different viewpoints will not correspond to each other except for the regions lying on the road plane. Here, we propose a judgment factor called the PFM factor, which synthesizes the inverse-projected images using Inverse Projection Factors (IPF) and Perspective Accuracy Factors (PAF). The IPF (expressed as $\alpha_i(x, y)$) is determined by the pixel's class: for example, if the pixel belongs to a vehicle, its IPF value is set to 0.95, and to 0.05 otherwise. The PAF, expressed as $\beta_i(x, y)$, is calculated from the projection principle:

$$\beta_i(x, y) = 1 - \kappa_i \log_{10} \frac{P_i(x, y)}{P_{i0}} \qquad (3)$$

where $P_{i0}$ is the camera's distance to the nearest feature point, $\kappa_i$ is an adjustment weight parameter, and $P_i(x, y)$ is the distance from the target to the camera, calculated on the inverse-projected map.
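The per-pixel inverse mapping of Eq. (2) and the PAF weight of Eq. (3) can be sketched as follows. Here `H` is assumed to be the 3x3 plane-to-image homography obtained from the feature-point registration (an illustrative formulation; the paper works with $A$ and $P$ separately):

```python
import numpy as np

def inverse_project(H, x, y):
    """Map an image pixel back to road-plane coordinates (X, Y) on Z = 0
    by applying the inverse homography, as in Eq. (2)."""
    w = np.linalg.inv(H) @ np.array([x, y, 1.0])
    return w[0] / w[2], w[1] / w[2]   # dehomogenize

def paf(dist, dist_nearest, kappa=0.3):
    """Perspective Accuracy Factor of Eq. (3): confidence decays with the
    log of the target's distance relative to the nearest feature point.
    The kappa default is an illustrative choice."""
    return 1.0 - kappa * np.log10(dist / dist_nearest)
```

Note how `paf` equals 1 at the nearest feature point and falls off slowly with distance, reflecting the reduced ground-plane accuracy of far-away pixels after inverse projection.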
Figure 5: PFM result from 3 different viewpoints

Figure 6: The bounding box in the PFM is re-projected onto the original images

As a result, the probability contributed by each camera is calculated as follows:

$$P_i(x, y) = \alpha_i(x, y)\, \beta_i(x, y) \qquad (4)$$

Finally, a target pixel's PFM value is acquired by (5):

$$P(x, y) = \sum_{i=0}^{n} P_i(x, y) = \sum_{i=0}^{n} \alpha_i(x, y)\, \beta_i(x, y) \qquad (5)$$

Figure 5 shows the result of the PFM. The red, green and blue regions on the right map indicate the inverse-projected results from the left, middle and right cameras respectively. The vehicle extraction result is shown as a white region with a higher probability value calculated by the PFM. Since the three cameras are located behind the vehicle, there is a salient region in the rear part; this salient region could be eliminated if another camera were installed in the opposite direction.

3.2.3 Vehicle 3D Measurement from PFM

The vehicle region, as the high-probability blob region in the PFM, can be extracted with general morphological processing and blob analysis. To obtain the 3D information of the target, the 4 corners of the target's bounding box in the PFM are re-projected onto the original images (shown in Figure 6). The height of the vehicle is then calculated from the lower coordinate by blob analysis of the remaining part of the object, while the length and width of the vehicle are acquired from the PFM directly. The 3D information of the vehicle is shown in Figure 7.

3.2.4 Shadow Removing

Shadows influence the accuracy of detection, and the PFM solution may not work well if vehicles are connected by shadow regions, since a shadow appears from every viewpoint. Normally, shadows in traffic surveillance are cast by two kinds of objects: structures on the roadside and moving objects such as vehicles or clouds. Because monitoring is outdoors, the size, intensity and direction of shadows vary as the illumination changes.
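Putting Eqs. (4)-(5) together, the fusion step can be sketched as follows. The 0.95/0.05 IPF values follow the text; the function names and the extraction threshold are illustrative:

```python
import numpy as np

def probability_fusion_map(vehicle_masks, pafs, ipf_fg=0.95, ipf_bg=0.05):
    """Fuse inverse-projected foreground masks from n cameras into a PFM.
    Per Eqs. (4)-(5), each camera contributes alpha_i(x,y) * beta_i(x,y):
    alpha is the IPF (0.95 for vehicle pixels, 0.05 otherwise) and beta the
    per-pixel PAF weight; the contributions are summed over the cameras."""
    pfm = np.zeros_like(pafs[0], dtype=float)
    for mask, beta in zip(vehicle_masks, pafs):
        alpha = np.where(mask, ipf_fg, ipf_bg)
        pfm += alpha * beta
    return pfm

def extract_vehicles(pfm, threshold):
    """High-probability blobs in the PFM are the vehicle regions."""
    return pfm > threshold
```

A pixel flagged as vehicle by every camera accumulates close to the maximum possible score, while a pixel flagged by only one viewpoint (e.g. an occlusion artifact) stays well below the threshold, which is what makes the fused decision robust to single-viewpoint failures.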
Fortunately, shadows cast by structures can be eliminated as road background during the background update. However, shadows that accompany moving objects have to be removed. In this paper, a global filter for shadow removal is proposed that requires no prior knowledge such as the road surface reflection factor or the shadow orientation. The main idea of this algorithm is to create
a shadow model with the intensity of a typical shadow area, which can be extracted from the foreground once the shadow region is confirmed. Based on the AMF algorithm, the parameters of this filter are updated automatically with changing illumination and distance from the cameras.

Figure 7: Vehicle 3D information extracted from the PFM

The detailed processing steps are as follows:

1. Background subtraction and foreground extraction as described in Section 3.1.

2. Blob analysis on the foreground extraction result, removing blobs with low average luminance values (most of these regions are dark vehicles). The result is shown in Figure 8.

3. Extraction of pixels from the remaining foreground regions under the constraint that the pixel value is lower than the luminance of the corresponding background pixel; these pixels belong to the typical shadow regions.

4. The average luminance value of the shadow is assigned to the non-shadow pixels as the initial shadow value to build a shadow background. Figure 9 shows one result of the shadow background.

5. The shadow background values are kept updated with the Approximated Median Filter algorithm as in Eq. (6):

$$S_i(x, y) = \begin{cases} S_{i-1}(x, y) + k & \text{if } F_i(x, y) > S_{i-1}(x, y) \\ S_{i-1}(x, y) - k & \text{if } F_i(x, y) < S_{i-1}(x, y) \\ S_{i-1}(x, y) & \text{if } F_i(x, y) = S_{i-1}(x, y) \end{cases} \qquad (6)$$

where $S_i(x, y)$ is the updated shadow background value at pixel $(x, y)$ at time $i$, $F_i(x, y)$ is the newly detected shadow pixel's luminance, and $k$ is the updating weight.

6. The extracted foreground regions of each frame are subtracted again with the shadow background to remove the shadows.

The proposed shadow removal filter updates itself with the luminance values of real shadow pixels and does not rely on prior knowledge such as road pavement reflection or shadow orientation. It is therefore robust to variation in the size, intensity and orientation of the shadow area.
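Steps 5 and 6 above can be sketched as a minimal per-pixel implementation of the Eq. (6) update; the `tol` margin in the final subtraction is an illustrative choice, not a value from the paper:

```python
import numpy as np

def update_shadow_background(S_prev, F, shadow_mask, k=1.0):
    """Eq. (6): approximated-median update of the shadow background S using
    newly detected shadow pixel luminances F; pixels outside the detected
    shadow region are left unchanged."""
    S = S_prev.astype(float).copy()
    delta = k * np.sign(F.astype(float) - S)
    S[shadow_mask] += delta[shadow_mask]
    return S

def remove_shadow(foreground_mask, frame, S, tol=10):
    """Step 6: drop foreground pixels whose luminance lies within tol of the
    shadow background, i.e. subtract the shadow model once more."""
    return foreground_mask & (np.abs(frame.astype(float) - S) > tol)
```

Because the update moves each shadow-background pixel by at most `k` per frame, the model tracks gradual illumination changes while remaining insensitive to a few mis-detected shadow pixels.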
4 Experiment and Result

To evaluate the performance of our approach, outdoor experiments were carried out on roads of different widths and shapes. Figure 10 shows the extraction and tracking result for a three-lane road in Shanghai, China. The left part shows the extracted vehicles and the right part is the PFM result. Even when a target is visually occluded from one viewpoint, our approach still obtains the correct result, since the target regions are well isolated in the PFM.
Figure 8: (a) Background subtraction result. (b) Extracted foreground. (c) Extracted shadow regions

Figure 9: Shadow background generated for shadow removal

Table 1 compares the vehicle counts obtained with the conventional single-camera method and with our three-camera method. PFM shows very high accuracy in almost all time segments and is much more stable than the single-camera solution, since our approach eliminates the effects of occlusion and shadows. Table 2 shows the accuracy of the vehicle speed estimated by PFM. The performance of PFM is related to two factors: the density of vehicles on the road and the arrangement of the cameras. Occluded vehicles may be distinguished easily with just two cameras if the traffic flow density is low; on the other hand, PFM may fail even with 2 or 3 viewpoints in cases of heavy traffic congestion. Optimizing the number of cameras and the distribution of camera locations will be addressed in our future work.

Table 1: Comparison between PFM and the conventional solution (per time segment in minutes: ground truth, and detected vehicles with recall rate for the PFM solution and for the conventional solution, plus a Total row)
Table 2: Accuracy of the speed estimated by PFM (per run: ground-truth speed in km/h, PFM estimate in km/h, and accuracy)

Figure 10: Fusion result of vehicle identification and tracking

5 Conclusion

A novel solution to vehicle occlusion and 3D measurement for traffic monitoring has been proposed in this paper. Compared with conventional single-camera methods, our approach fuses video data from different viewpoints into a common probability fusion map (PFM) and extracts targets from it. The proposed PFM efficiently handles and fuses data to estimate the probability of vehicle appearance, and real outdoor experiments verify that it is more reliable than a single-camera solution. An AMF-based shadow modeling algorithm is also proposed to remove shadows on the road area and extract the proper vehicle regions. Real outdoor experimental results show the excellent performance of our approach.

6 Acknowledgment

This work was partially supported by a MEXT Grant.

References

[1] S.-C. S. Cheung, C. Kamath, Robust Techniques for Background Subtraction in Urban Traffic Video, Proceedings of SPIE, 2004.

[2] Y.K. Jung, Y.S. Ho, Traffic Parameter Extraction using Video-based Vehicle Tracking, Proc. IEEE Intelligent Transportation Systems Conf. '99, 1999.
[3] D. Beymer, P. McLauchlan, B. Coifman and J. Malik, A Real-time Computer Vision System for Measuring Traffic Parameters, Proc. IEEE Conf. on Computer Vision and Pattern Recognition, Puerto Rico, 1997.

[4] S. Gupte, O. Masoud, R.F.K. Martin and N.P. Papanikolopoulos, Detection and Classification of Vehicles, IEEE Transactions on Intelligent Transportation Systems 3(1), 2002.

[5] Z. Qiu, D. Yao, Kalman Filtering Used in Video-Based Traffic Monitoring System, Journal of Intelligent Transportation Systems 10(1), 2006.

[6] Z. Zhang, A Flexible New Technique for Camera Calibration, IEEE Transactions on Pattern Analysis and Machine Intelligence 22(11): 1330-1334, 2000.

[7] K. Onoguchi, Shadow Elimination Method for Moving Object Detection, Proc. Int'l Conf. on Pattern Recognition, 1998.

[8] R. Cucchiara, C. Grana, M. Piccardi, A. Prati and S. Sirotti, Improving Shadow Suppression in Moving Object Detection with HSV Color Information, Proc. IEEE Intelligent Transportation Systems Conf., 2001.

[9] G.D. Finlayson, S.D. Hordley, C. Lu and M.S. Drew, On the Removal of Shadows from Images, IEEE Transactions on Pattern Analysis and Machine Intelligence 28(1), 2006.

[10] J.M. Ferryman, S.J. Maybank and A.D. Worrall, Visual Surveillance for Moving Vehicles, International Journal of Computer Vision 37(2): 187-197, 2000.
How To Calculate Traffic Density On A Motorway On A Smart Camera
Robust traffic state estimation on smart cameras Felix Pletzer, Roland Tusch, Laszlo Böszörmenyi, Bernhard Rinner Alpen-Adria-Universität Klagenfurt and Lakeside Labs Klagenfurt, Austria {felix.pletzer,
Video Surveillance System for Security Applications
Video Surveillance System for Security Applications Vidya A.S. Department of CSE National Institute of Technology Calicut, Kerala, India V. K. Govindan Department of CSE National Institute of Technology
Vision-Based Blind Spot Detection Using Optical Flow
Vision-Based Blind Spot Detection Using Optical Flow M.A. Sotelo 1, J. Barriga 1, D. Fernández 1, I. Parra 1, J.E. Naranjo 2, M. Marrón 1, S. Alvarez 1, and M. Gavilán 1 1 Department of Electronics, University
Building an Advanced Invariant Real-Time Human Tracking System
UDC 004.41 Building an Advanced Invariant Real-Time Human Tracking System Fayez Idris 1, Mazen Abu_Zaher 2, Rashad J. Rasras 3, and Ibrahiem M. M. El Emary 4 1 School of Informatics and Computing, German-Jordanian
Real Time Target Tracking with Pan Tilt Zoom Camera
2009 Digital Image Computing: Techniques and Applications Real Time Target Tracking with Pan Tilt Zoom Camera Pankaj Kumar, Anthony Dick School of Computer Science The University of Adelaide Adelaide,
CCTV - Video Analytics for Traffic Management
CCTV - Video Analytics for Traffic Management Index Purpose Description Relevance for Large Scale Events Technologies Impacts Integration potential Implementation Best Cases and Examples 1 of 12 Purpose
Modelling, Extraction and Description of Intrinsic Cues of High Resolution Satellite Images: Independent Component Analysis based approaches
Modelling, Extraction and Description of Intrinsic Cues of High Resolution Satellite Images: Independent Component Analysis based approaches PhD Thesis by Payam Birjandi Director: Prof. Mihai Datcu Problematic
How To Fuse A Point Cloud With A Laser And Image Data From A Pointcloud
REAL TIME 3D FUSION OF IMAGERY AND MOBILE LIDAR Paul Mrstik, Vice President Technology Kresimir Kusevic, R&D Engineer Terrapoint Inc. 140-1 Antares Dr. Ottawa, Ontario K2E 8C4 Canada [email protected]
Real-Time Background Estimation of Traffic Imagery Using Group-Based Histogram *
JOURNAL OF INFORMATION SCIENCE AND ENGINEERING 24, 411-423 (2008) Real-Time Background Estimation of Traffic Imagery Using Group-Based Histogram KAI-TAI SONG AND JEN-CHAO TAI + Department of Electrical
Scalable Traffic Video Analytics using Hadoop MapReduce
Scalable Traffic Video Analytics using Hadoop MapReduce Vaithilingam Anantha Natarajan Subbaiyan Jothilakshmi Venkat N Gudivada Department of Computer Science and Engineering Annamalai University Tamilnadu,
Method for Traffic Flow Estimation using Ondashboard
Method for Traffic Flow Estimation using Ondashboard Camera Image Kohei Arai Graduate School of Science and Engineering Saga University Saga, Japan Steven Ray Sentinuwo Department of Electrical Engineering
An Approach for Utility Pole Recognition in Real Conditions
6th Pacific-Rim Symposium on Image and Video Technology 1st PSIVT Workshop on Quality Assessment and Control by Image and Video Analysis An Approach for Utility Pole Recognition in Real Conditions Barranco
Real time vehicle detection and tracking on multiple lanes
Real time vehicle detection and tracking on multiple lanes Kristian Kovačić Edouard Ivanjko Hrvoje Gold Department of Intelligent Transportation Systems Faculty of Transport and Traffic Sciences University
LIST OF CONTENTS CHAPTER CONTENT PAGE DECLARATION DEDICATION ACKNOWLEDGEMENTS ABSTRACT ABSTRAK
vii LIST OF CONTENTS CHAPTER CONTENT PAGE DECLARATION DEDICATION ACKNOWLEDGEMENTS ABSTRACT ABSTRAK LIST OF CONTENTS LIST OF TABLES LIST OF FIGURES LIST OF NOTATIONS LIST OF ABBREVIATIONS LIST OF APPENDICES
OBJECT TRACKING USING LOG-POLAR TRANSFORMATION
OBJECT TRACKING USING LOG-POLAR TRANSFORMATION A Thesis Submitted to the Gradual Faculty of the Louisiana State University and Agricultural and Mechanical College in partial fulfillment of the requirements
A General Framework for Tracking Objects in a Multi-Camera Environment
A General Framework for Tracking Objects in a Multi-Camera Environment Karlene Nguyen, Gavin Yeung, Soheil Ghiasi, Majid Sarrafzadeh {karlene, gavin, soheil, majid}@cs.ucla.edu Abstract We present a framework
A Movement Tracking Management Model with Kalman Filtering Global Optimization Techniques and Mahalanobis Distance
Loutraki, 21 26 October 2005 A Movement Tracking Management Model with ing Global Optimization Techniques and Raquel Ramos Pinho, João Manuel R. S. Tavares, Miguel Velhote Correia Laboratório de Óptica
Removing Moving Objects from Point Cloud Scenes
1 Removing Moving Objects from Point Cloud Scenes Krystof Litomisky [email protected] Abstract. Three-dimensional simultaneous localization and mapping is a topic of significant interest in the research
Topographic Change Detection Using CloudCompare Version 1.0
Topographic Change Detection Using CloudCompare Version 1.0 Emily Kleber, Arizona State University Edwin Nissen, Colorado School of Mines J Ramón Arrowsmith, Arizona State University Introduction CloudCompare
Computer Vision for Quality Control in Latin American Food Industry, A Case Study
Computer Vision for Quality Control in Latin American Food Industry, A Case Study J.M. Aguilera A1, A. Cipriano A1, M. Eraña A2, I. Lillo A1, D. Mery A1, and A. Soto A1 e-mail: [jmaguile,aciprian,dmery,asoto,]@ing.puc.cl
VEHICLE TRACKING USING ACOUSTIC AND VIDEO SENSORS
VEHICLE TRACKING USING ACOUSTIC AND VIDEO SENSORS Aswin C Sankaranayanan, Qinfen Zheng, Rama Chellappa University of Maryland College Park, MD - 277 {aswch, qinfen, rama}@cfar.umd.edu Volkan Cevher, James
HANDS-FREE PC CONTROL CONTROLLING OF MOUSE CURSOR USING EYE MOVEMENT
International Journal of Scientific and Research Publications, Volume 2, Issue 4, April 2012 1 HANDS-FREE PC CONTROL CONTROLLING OF MOUSE CURSOR USING EYE MOVEMENT Akhil Gupta, Akash Rathi, Dr. Y. Radhika
Robust and accurate global vision system for real time tracking of multiple mobile robots
Robust and accurate global vision system for real time tracking of multiple mobile robots Mišel Brezak Ivan Petrović Edouard Ivanjko Department of Control and Computer Engineering, Faculty of Electrical
PHOTOGRAMMETRIC TECHNIQUES FOR MEASUREMENTS IN WOODWORKING INDUSTRY
PHOTOGRAMMETRIC TECHNIQUES FOR MEASUREMENTS IN WOODWORKING INDUSTRY V. Knyaz a, *, Yu. Visilter, S. Zheltov a State Research Institute for Aviation System (GosNIIAS), 7, Victorenko str., Moscow, Russia
VEHICLE TRACKING AND SPEED ESTIMATION SYSTEM CHAN CHIA YIK. Report submitted in partial fulfillment of the requirements
VEHICLE TRACKING AND SPEED ESTIMATION SYSTEM CHAN CHIA YIK Report submitted in partial fulfillment of the requirements for the award of the degree of Bachelor of Computer System & Software Engineering
Analecta Vol. 8, No. 2 ISSN 2064-7964
EXPERIMENTAL APPLICATIONS OF ARTIFICIAL NEURAL NETWORKS IN ENGINEERING PROCESSING SYSTEM S. Dadvandipour Institute of Information Engineering, University of Miskolc, Egyetemváros, 3515, Miskolc, Hungary,
A feature-based tracking algorithm for vehicles in intersections
A feature-based tracking algorithm for vehicles in intersections Nicolas Saunier and Tarek Sayed Departement of Civil Engineering, University of British Columbia 6250 Applied Science Lane, Vancouver BC
Mouse Control using a Web Camera based on Colour Detection
Mouse Control using a Web Camera based on Colour Detection Abhik Banerjee 1, Abhirup Ghosh 2, Koustuvmoni Bharadwaj 3, Hemanta Saikia 4 1, 2, 3, 4 Department of Electronics & Communication Engineering,
BACnet for Video Surveillance
The following article was published in ASHRAE Journal, October 2004. Copyright 2004 American Society of Heating, Refrigerating and Air-Conditioning Engineers, Inc. It is presented for educational purposes
Traffic Flow Monitoring in Crowded Cities
Traffic Flow Monitoring in Crowded Cities John A. Quinn and Rose Nakibuule Faculty of Computing & I.T. Makerere University P.O. Box 7062, Kampala, Uganda {jquinn,rnakibuule}@cit.mak.ac.ug Abstract Traffic
Author: Hamid A.E. Al-Jameel (Research Institute: Engineering Research Centre)
SPARC 2010 Evaluation of Car-following Models Using Field Data Author: Hamid A.E. Al-Jameel (Research Institute: Engineering Research Centre) Abstract Traffic congestion problems have been recognised as
Low-resolution Image Processing based on FPGA
Abstract Research Journal of Recent Sciences ISSN 2277-2502. Low-resolution Image Processing based on FPGA Mahshid Aghania Kiau, Islamic Azad university of Karaj, IRAN Available online at: www.isca.in,
Automatic Calibration of an In-vehicle Gaze Tracking System Using Driver s Typical Gaze Behavior
Automatic Calibration of an In-vehicle Gaze Tracking System Using Driver s Typical Gaze Behavior Kenji Yamashiro, Daisuke Deguchi, Tomokazu Takahashi,2, Ichiro Ide, Hiroshi Murase, Kazunori Higuchi 3,
HIGH-PERFORMANCE INSPECTION VEHICLE FOR RAILWAYS AND TUNNEL LININGS. HIGH-PERFORMANCE INSPECTION VEHICLE FOR RAILWAY AND ROAD TUNNEL LININGS.
HIGH-PERFORMANCE INSPECTION VEHICLE FOR RAILWAYS AND TUNNEL LININGS. HIGH-PERFORMANCE INSPECTION VEHICLE FOR RAILWAY AND ROAD TUNNEL LININGS. The vehicle developed by Euroconsult and Pavemetrics and described
Canny Edge Detection
Canny Edge Detection 09gr820 March 23, 2009 1 Introduction The purpose of edge detection in general is to significantly reduce the amount of data in an image, while preserving the structural properties
Automatic parameter regulation for a tracking system with an auto-critical function
Automatic parameter regulation for a tracking system with an auto-critical function Daniela Hall INRIA Rhône-Alpes, St. Ismier, France Email: [email protected] Abstract In this article we propose
Effective Use of Android Sensors Based on Visualization of Sensor Information
, pp.299-308 http://dx.doi.org/10.14257/ijmue.2015.10.9.31 Effective Use of Android Sensors Based on Visualization of Sensor Information Young Jae Lee Faculty of Smartmedia, Jeonju University, 303 Cheonjam-ro,
SYNTHESIZING FREE-VIEWPOINT IMAGES FROM MULTIPLE VIEW VIDEOS IN SOCCER STADIUM
SYNTHESIZING FREE-VIEWPOINT IMAGES FROM MULTIPLE VIEW VIDEOS IN SOCCER STADIUM Kunihiko Hayashi, Hideo Saito Department of Information and Computer Science, Keio University {hayashi,saito}@ozawa.ics.keio.ac.jp
Rafael Martín & José M. Martínez
A semi-supervised system for players detection and tracking in multi-camera soccer videos Rafael Martín José M. Martínez Multimedia Tools and Applications An International Journal ISSN 1380-7501 DOI 10.1007/s11042-013-1659-6
Development of an automated Red Light Violation Detection System (RLVDS) for Indian vehicles
CS11 59 Development of an automated Red Light Violation Detection System (RLVDS) for Indian vehicles Satadal Saha 1, Subhadip Basu 2 *, Mita Nasipuri 2, Dipak Kumar Basu # 2 # AICTE Emeritus Fellow 1 CSE
IMPLICIT SHAPE MODELS FOR OBJECT DETECTION IN 3D POINT CLOUDS
IMPLICIT SHAPE MODELS FOR OBJECT DETECTION IN 3D POINT CLOUDS Alexander Velizhev 1 (presenter) Roman Shapovalov 2 Konrad Schindler 3 1 Hexagon Technology Center, Heerbrugg, Switzerland 2 Graphics & Media
Object tracking & Motion detection in video sequences
Introduction Object tracking & Motion detection in video sequences Recomended link: http://cmp.felk.cvut.cz/~hlavac/teachpresen/17compvision3d/41imagemotion.pdf 1 2 DYNAMIC SCENE ANALYSIS The input to
Assessment of Camera Phone Distortion and Implications for Watermarking
Assessment of Camera Phone Distortion and Implications for Watermarking Aparna Gurijala, Alastair Reed and Eric Evans Digimarc Corporation, 9405 SW Gemini Drive, Beaverton, OR 97008, USA 1. INTRODUCTION
SmartMonitor An Intelligent Security System for the Protection of Individuals and Small Properties with the Possibility of Home Automation
Sensors 2014, 14, 9922-9948; doi:10.3390/s140609922 OPEN ACCESS sensors ISSN 1424-8220 www.mdpi.com/journal/sensors Article SmartMonitor An Intelligent Security System for the Protection of Individuals
Advanced Methods for Pedestrian and Bicyclist Sensing
Advanced Methods for Pedestrian and Bicyclist Sensing Yinhai Wang PacTrans STAR Lab University of Washington Email: [email protected] Tel: 1-206-616-2696 For Exchange with University of Nevada Reno Sept. 25,
Poker Vision: Playing Cards and Chips Identification based on Image Processing
Poker Vision: Playing Cards and Chips Identification based on Image Processing Paulo Martins 1, Luís Paulo Reis 2, and Luís Teófilo 2 1 DEEC Electrical Engineering Department 2 LIACC Artificial Intelligence
