DETECTING AND COUNTING VEHICLES FROM SMALL LOW-COST UAV IMAGES
Penggen Cheng 1, Guoqing Zhou 1,2, and Zezhong Zheng 2,3

1 School of Geosciences and Geomatics, East China Institute of Technology, 56 Xuefu Road, Fuzhou City, Jiangxi, P. R. China
2 Department of Civil Engineering and Technology, Old Dominion University, Norfolk, VA 23529, USA; Tel: (757) ; Fax: (757) ; [email protected]
3 School of Civil Engineering, Southwest Jiaotong University, Chengdu, Sichuan, China

ABSTRACT

In recent years, many civil users have become interested in unmanned aerial vehicles (UAVs) for traffic monitoring and traffic data collection, because UAVs can cover a large area, focus resources on current problems, travel at higher speeds than ground vehicles, and are not restricted to traveling on the road network. This paper presents a method for detecting and counting vehicles from a UAV video flow. An algorithm for vision-based detection and counting of vehicles in monocular image sequences of traffic scenes has been developed. In this algorithm, frame-to-frame video matching to track vehicles is one of the important steps. Dynamic vehicles are identified using both background elimination and background registration techniques. The background elimination method uses the concept of least squares to compare the accuracy of the current algorithm with existing algorithms. The background registration method uses background subtraction, which improves the adaptive background mixture model and makes the system learn faster and more accurately, as well as adapt effectively to changing environments. In addition, because of the high data sampling rate of the video flow, re-sampling of the video flow is also analyzed and discussed. The objective of this research is to monitor activities at traffic intersections for detecting congestion, and then to predict the traffic flow.

INTRODUCTION

Traffic congestion has been a significantly challenging problem.
It has been widely realized that additions to conventional transportation infrastructure, e.g., more pavement and widened roads, have not been able to relieve city congestion. As a result, many investigators have turned their attention to intelligent transportation systems (ITS), for example predicting traffic flow by monitoring the activities at traffic intersections to detect congestion. To better understand traffic flow, the growing reliance on traffic surveillance has created a need for better, wide-area vehicle detection. Traditionally, high-cost detectors are mounted at the edge of the pavement. In recent years, many civil users have become interested in unmanned aerial vehicles (UAVs) for traffic monitoring and traffic data collection, because UAVs can cover a large area, focus resources on current problems, travel at higher speeds than ground vehicles, and are not restricted to traveling on the road network. This paper presents a method for detecting and counting vehicles from a UAV video flow. One motivation is that vehicle tracks, or trajectories, can be measured over a length of roadway rather than at a single point, as was traditionally done with fixed detectors. Thus, it is possible to measure true density instead of simply recording detector occupancy. In fact, when trajectories from a UAV-based video flow are averaged over space and time, the traditional traffic parameters are more stable than the corresponding measurements from point detectors. Additional information available from UAV-based video, such as lane changes and acceleration/deceleration patterns, could lead to improved incident detection. The trajectory data could also be used to automate previously labor-intensive traffic studies, such as examining vehicle maneuvers in weaving sections or bottlenecks.
UAV PLATFORM AND DATA COLLECTION

Old Dominion University was recently awarded a grant by the U.S. National Science Foundation for the development and testing of a low-cost small civilian UAV system. The goal of this project is to investigate and develop techniques for generating high-resolution video flow from UAV video for fast response to time-critical events, such as incidents. In collaboration with Air-O-Space International L.L.C. (AOSI), a small, low-cost UAV system was implemented to collect aerial video streams. This system, illustrated in Figure 1, is divided into two parts: the UAV platform and the UAV ground control. On the UAV platform, a video camera, a GPS receiver and an INS sensor are mounted on a small UAV aircraft, and a command parser is installed to respond to ground control instructions. The UAV ground control mainly includes: monitoring the UAV flying status, downloading the video stream and GPS/INS navigation information, planning the UAV flight route, and remotely controlling the UAV flight route, height, and attitude. On April 3, 2005, high-resolution video as well as telemetry data from the onboard GPS/INS orientation system was collected using the UAV system. The collected UAV video is an MPEG-encoded stream captured by a non-metric Topica color CCD video camera (TP-6001A). The video flow remains basically nadir-looking. The UTC time from the onboard GPS was overlaid on the video frames in the lower left-hand corner. Telemetry data, including the UAV's position and attitude, was recorded by the GPS/INS sensors at 2-second intervals. Besides the video stream and telemetry data, ancillary data including a USGS 10-meter grid DEM and a 1-meter DOQQ reference image covering the experimental area were collected as well, to provide ground control points for UAV system calibration and check points for evaluating the accuracy of the UAV data collection (see Table 1).

Table 1. Data collected in this project
- MPEG-based video stream: collected by the video camera onboard the UAV and used to generate orthoimages.
- Telemetry data: collected by the GPS/INS system onboard the UAV and used to record UAV position and attitude.
- Ancillary data (USGS DEM / reference image): collected over the experimental area near the AOSI office in Picayune, Mississippi, and used for collecting ground control points (GCPs) to solve camera parameters and for providing check points to evaluate the accuracy of the orthoimages.

Figure 1. Small low-cost UAV developed by Zhou et al. (2006).

ASPRS 2009 Annual Conference, Baltimore, Maryland, March 9-13, 2009
UAV-BASED VIDEO FLOW PROCESSING

Real-time UAV Video Flow Resampling

Because the original video is recorded at a sampling rate of 30 frames per second, re-sampling has to be conducted to reduce information redundancy. We therefore used DirectShow, the streaming-media architecture of Microsoft DirectX on the Windows platform. The core of DirectShow is a modular architecture, called a filter graph, that mixes and matches software components, called filters, for specific tasks on multimedia streams. In this paper, a filter graph (Figure 2) is implemented to re-sample desired video frames in real time from the collected MPEG-encoded UAV video stream. Each filter in Figure 2 responds to user requirements, and how stream data pass through the filters is fully controlled simply by issuing high-level commands such as "Run" (to move data through the graph), "Stop" (to stop the flow of data), or sampling calls (to extract any time-stamped sample from the stream). Directly accessing the MPEG-decoded stream and seeking any sample frame in memory is therefore achievable, which means video re-sampling can be performed with high efficiency. (Please refer to the Microsoft DirectX developer documentation for a more detailed explanation of using DirectShow to process video streams.) For a video frame pair <N_i, N_i+1> (i > 1), the re-sampling steps are:

1. Extract frame N_i+1 from the video stream and calculate the coarse parallax vector (dx_i+1, dy_i+1) for the frame pair <N_i, N_i+1>. Suppose the sampling time interval of the last frame pair <N_i-1, N_i> is T_i, its average parallax vector calculated from all tie points is (dx_i, dy_i), and the desired sampling time interval is T_i+1 (e.g., one second). The time-stamp for frame N_i+1 is then calculated as the sum of T_k for k = 1 to i+1 (the first frame N_0 is re-sampled at the beginning, i.e., at 0 seconds). Through simple linear interpolation, the coarse parallax vector for the pair <N_i, N_i+1> is calculated as (dx_i+1, dy_i+1) = (dx_i, dy_i) T_i+1 / T_i.

2. Generate tie point candidates for the video pair <N_i, N_i+1>. With the coarse parallax vector (dx_i+1, dy_i+1), potential conjugate points in frame N_i+1 can be predicted from the feature points extracted in frame N_i; using the cross-correlation of these feature points between frames N_i and N_i+1, tie points are determined from local extrema. To enlarge the pull-in range, hierarchical matching based on image pyramids is introduced in this step.

3. Detect blunders among the tie point candidates and give a quantitative estimate T_e of the quality of the remaining tie points. With the two-view coplanarity constraint (explained later) constructed from the conjugate points and the air base, the tie point candidates are refined by removing detected blunders and calculating the statistical error T_e as a quantitative estimate of the quality of the remaining tie points.

4. If T_e is bigger than a given threshold T_0, repeat steps 1 through 3 to generate valid tie points for the video pair <N_i, N_i+1>. When T_e is not bigger than T_0, tie points for the next video pair <N_i+1, N_i+2> are generated by enlarging the time-stamp for re-sampling frame N_i+2.

The re-sampling method proposed in Step 4 rests on the fact that linear interpolation of the parallax vector is reliable when the sampling time interval is small, even when the UAV flies unsteadily.

Figure 2. Flowchart for real-time video re-sampling.

UAV-based Vehicle Tracking

Many algorithms for detecting and counting vehicles from video have been developed in the computer vision community; a detailed review is beyond the scope of this paper.
Briefly, the different tracking approaches for video data can be classified as follows (Koller et al., 1993; Sullivan, 1992):

3D Model-Based Tracking: this method uses a prior 3D vehicle model and performs 3D-to-2D matching. Its critical weakness lies in the detailed geometric object models: it is unrealistic to expect detailed models for all vehicles that could be found on the roadway.

Region-Based Tracking: this approach uses traditional gray-value image matching to track vehicles over time. It works fairly well in free-flowing traffic. However, under congested traffic conditions vehicles partially occlude one another instead of being spatially isolated, which makes the task of segmenting individual vehicles difficult. Moreover, the algorithm is very sensitive to image quality.

Active Contour-Based Tracking: the basic idea of this approach is to maintain a representation of the bounding contour of the object and keep it dynamically updated (Koller et al., 1994a; 1994b). The advantage of a contour-based representation over a region-based one is reduced computational complexity. However, this approach is also sensitive to occlusions and image quality.

Feature-Based Tracking: this approach tracks distinct features, such as corners and edge lines. Its advantages are tolerance of partial occlusion and, relative to the other tracking methods, insensitivity to image quality.

This paper presents our algorithm for detecting and counting vehicles from UAV-based video data. The main idea is to track pixels across consecutive video frames, since the video flow provides overlapping image pairs at any extent of overlap. The major steps are as follows.

A. Interest Point (IP) Feature Extraction. The Moravec operator (Moravec, 1977) is implemented to extract interest point (IP) features from the left frame. A small but useful improvement to the Moravec operator in this paper is that we cover the target frame with a virtual grid and, for each cell, extract at most one point feature if one exists (see Figure 3).
With this improvement, not only is the time cost of point feature extraction reduced, but possible ambiguity problems in point matching are also greatly relieved.

Figure 3. Method for extracting IPs within the grid and the prediction of the conjugate points (blue crosses).

B. Conjugate Point Prediction. Based on the hypothesized coarse parallax vector, the corresponding conjugate points in the right frame can be predicted. Figure 4 shows the predicted conjugate points in the right frame.

Figure 4. The prediction of the conjugate points.
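As a rough illustration of steps A and B, the following Python sketch extracts at most one interest point per virtual grid cell and shifts each point by the coarse parallax vector to predict its conjugate. The function names, the grid cell size, and the simplified single-scale Moravec response are our assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def moravec_response(img, y, x, win=2):
    """Minimum directional sum-of-squared-differences at (y, x):
    a simplified Moravec interest measure."""
    patch = img[y - win:y + win + 1, x - win:x + win + 1].astype(float)
    best = np.inf
    for dy, dx in ((0, 1), (1, 0), (1, 1), (1, -1)):
        shifted = img[y + dy - win:y + dy + win + 1,
                      x + dx - win:x + dx + win + 1].astype(float)
        best = min(best, float(np.sum((patch - shifted) ** 2)))
    return best

def extract_grid_ips(img, cell=16, win=2, thresh=100.0):
    """Keep at most one interest point (the strongest response) per
    virtual grid cell, mirroring the paper's improvement to Moravec."""
    h, w = img.shape
    ips = []
    for gy in range(win + 1, h - win - 1, cell):
        for gx in range(win + 1, w - win - 1, cell):
            best_r, best_pt = thresh, None
            for y in range(gy, min(gy + cell, h - win - 1)):
                for x in range(gx, min(gx + cell, w - win - 1)):
                    r = moravec_response(img, y, x, win)
                    if r > best_r:
                        best_r, best_pt = r, (y, x)
            if best_pt is not None:
                ips.append(best_pt)
    return ips

def predict_conjugates(ips, parallax):
    """Step B: shift each IP by the coarse parallax vector (dy, dx)."""
    dy, dx = parallax
    return [(y + dy, x + dx) for (y, x) in ips]
```

Restricting extraction to one point per cell is what keeps the candidate set sparse and spatially even, which in turn reduces the matching ambiguity discussed above.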
C. Implementing Cross Correlation. For area-based matching, the similarity between gray-value windows is usually defined as a function of the differences between the corresponding gray values. In this paper, this function is the cross-correlation coefficient

ρ = S_xy / sqrt(S_xx · S_yy)

where S_xy is the covariance of the target window and the matching window, and S_xx, S_yy are the variances of the target and matching windows, respectively. By comparing the calculated cross-correlation coefficient to a given threshold, conjugate points for each extracted point feature can be determined. Figure 5 shows the result when the cross-correlation calculation is applied to Figures 3 and 4.

Figure 5. Tie point candidates from cross correlation.

D. Verification of Matched Results. Since the cross-correlation method applies only local-extremum information to determine conjugate points, a reliability analysis of global consistency is performed to detect conjugate points mismatched in the previous step and to estimate the quality of the tie point candidates. Suppose <a_1, a_2> is one conjugate pair among the tie point candidates. The homologous rays S_1a_1 and S_2a_2 from the projection centers S_1 and S_2 should intersect at the same object point A; that is, the vectors S_1a_1, S_2a_2, and S_1S_2 (called the air base B) are coplanar, which can be expressed as

F = S_1S_2 · (S_1a_1 × S_2a_2) = 0    (1)

From the viewpoint of relative orientation in the photogrammetry community, Eq. 1 is a function of the five relative orientation parameters φ_1, κ_1, φ_2, ω_2, κ_2 and can be linearized as

F = F_0 + (∂F/∂φ_1)Δφ_1 + (∂F/∂κ_1)Δκ_1 + (∂F/∂φ_2)Δφ_2 + (∂F/∂ω_2)Δω_2 + (∂F/∂κ_2)Δκ_2 = 0    (2)

For each conjugate pair, one error equation can be set up based on Eq. 2; thus, for all tie point candidates, the error equations can be expressed in matrix form as

v = AX − L    (3)

With redundant observations, the five relative orientation parameters X and the standard deviation can be estimated.
With the standard deviation σ_0, tie point candidates with errors beyond 3σ_0 can be regarded as blunders and removed on the basis of probability theory. Figure 6 shows the result when the coplanarity constraint is applied to Figure 5.
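Steps C and D can be sketched in Python as follows. This is a minimal illustration under our own assumptions (grayscale numpy frames, invented function names): the correlation coefficient ρ = S_xy / sqrt(S_xx · S_yy) is computed directly, and a simplified 3σ test on parallax residuals stands in for the full coplanarity adjustment of Eqs. 1 through 3:

```python
import numpy as np

def ncc(win_a, win_b):
    """Cross-correlation coefficient rho = S_xy / sqrt(S_xx * S_yy)."""
    a = win_a.astype(float) - win_a.mean()
    b = win_b.astype(float) - win_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def match_ip(left, right, ip, guess, search=5, win=4, thresh=0.8):
    """Step C: search around the predicted conjugate point for the
    NCC maximum; return None if no candidate exceeds the threshold."""
    y, x = ip
    target = left[y - win:y + win + 1, x - win:x + win + 1]
    best_rho, best_pt = thresh, None
    gy, gx = guess
    for cy in range(gy - search, gy + search + 1):
        for cx in range(gx - search, gx + search + 1):
            cand = right[cy - win:cy + win + 1, cx - win:cx + win + 1]
            if cand.shape != target.shape:
                continue
            rho = ncc(target, cand)
            if rho > best_rho:
                best_rho, best_pt = rho, (cy, cx)
    return best_pt

def reject_blunders(pairs, k=3.0):
    """Step D, simplified: drop tie points whose parallax deviates from
    the mean parallax by more than k sigma (a stand-in for residual
    testing against the coplanarity adjustment)."""
    d = np.array([(ry - ly, rx - lx) for (ly, lx), (ry, rx) in pairs], float)
    resid = np.linalg.norm(d - d.mean(axis=0), axis=1)
    sigma = resid.std() or 1.0
    return [p for p, r in zip(pairs, resid) if r <= k * sigma]
```

In the paper the residuals come from the relative-orientation adjustment of Eq. 3 rather than from raw parallax statistics, but the 3σ rejection logic is the same.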
Figure 6. Verification of matched tie points.

EXPERIMENTS ON DETECTING AND COUNTING VEHICLES

Table 2 is an example of the results of UAV-based video matching. As seen from Table 2, using the average parallax vector calculated from all tie points in the last video stereo pair as the prediction for the next consecutive stereo pair, solved by linear interpolation, is reasonable; e.g., the predicted parallax vector for the pair of the 10th and 11th frames is (11.2, -68.8) = (2.8, -16.7) × (2.19 s − 1.39 s)/(1.39 s − 1.19 s). The predicted parallax vector is appropriately close to the average parallax vector measured in the same stereo pair (e.g., the averaged parallax vector of the 10th and 11th frames is (14.3, )). Also, when the average parallax vector calculated from the last stereo pair is not as good a prediction for the next stereo pair (e.g., the 14th and 15th frames), self-adjusting the parallax vector by changing the time interval to generate a new stereo pair (e.g., the 14th and 16th frames) works well. With the proposed matching method, vehicles can be tracked; Figure 7 shows the result. With the vehicles tracked, vehicle counts can be obtained from the UAV-based video.

Table 2. Example of searching tie points (source: Wu and Zhou, 2006)

Figure 7. Tracked and counted vehicles from video.
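The linear prediction used in these experiments, (dx_i+1, dy_i+1) = (dx_i, dy_i) · T_i+1 / T_i, amounts to a one-line calculation. The sketch below uses interval values in the spirit of the Table 2 example; the function name is our own:

```python
def predict_parallax(prev_parallax, prev_interval, next_interval):
    """Scale the previous pair's average parallax vector by the ratio
    of sampling intervals: (dx_i+1, dy_i+1) = (dx_i, dy_i) * T_i+1 / T_i."""
    dx, dy = prev_parallax
    scale = next_interval / prev_interval
    return (dx * scale, dy * scale)

# If the previous stereo pair, sampled 0.2 s apart, showed an average
# parallax of (2.8, -16.7) pixels, a pair sampled 0.8 s apart is
# predicted to show roughly four times that parallax.
prediction = predict_parallax((2.8, -16.7), 0.2, 0.8)
```

When the prediction error T_e exceeds the threshold, Step 4 of the re-sampling procedure simply re-runs this prediction with an adjusted interval.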
CONCLUSIONS

This paper presents UAV-based real-time video data processing, with emphasis on the video frame matching algorithm for vehicle detection, tracking and counting. The algorithm takes advantage of the parallax inherent in the UAV video system, which is used to predict corresponding points and to delete mismatched points from the video stream. This paper also discussed re-sampling of the video data. The experimental results demonstrate that the proposed image matching algorithm is efficient and effective: vehicles in UAV video can be detected, tracked and counted.

ACKNOWLEDGMENTS

This work was funded by the U.S. National Science Foundation. Dr. Jun Wu, a postdoctoral researcher, conducted all code development and data processing; the authors thank him very much for his important contribution.

REFERENCES

Koller, D., K. Daniilidis, and H. Nagel, 1993. Model-based object tracking in monocular image sequences of road traffic scenes, International Journal of Computer Vision, 10.
Koller, D., J. Weber, and J. Malik, 1994a. Robust multiple car tracking with occlusion reasoning, ECCV, Stockholm, Sweden.
Koller, D., J. Weber, T. Huang, J. Malik, G. Ogasawara, B. Rao, and S. Russell, 1994b. Towards robust automatic traffic scene analysis in real-time, ICPR, Israel, vol. 1.
Moravec, H.P., 1977. Towards automatic visual obstacle avoidance, Proc. 5th Int. Joint Conf. on Artificial Intelligence, Cambridge, MA, p. 584.
Sullivan, G., 1992. Visual interpretation of known objects in constrained scenes, Phil. Trans. Roy. Soc. (B), 337.
Wu, J. and G. Zhou, 2006. Low-cost unmanned aerial vehicle (UAV) video orthorectification, 2006 IEEE International Geoscience and Remote Sensing Symposium, Denver, Colorado, USA, July 31 - August 4, 2006.
Zhou, G., J. Wu, S. Wright, and J. Gao, 2006. High-resolution UAV video data processing for forest fire surveillance, Technical Report to National Science Foundation, Old Dominion University, Norfolk, Virginia, USA, August, 8 p.
Analecta Vol. 8, No. 2 ISSN 2064-7964
EXPERIMENTAL APPLICATIONS OF ARTIFICIAL NEURAL NETWORKS IN ENGINEERING PROCESSING SYSTEM S. Dadvandipour Institute of Information Engineering, University of Miskolc, Egyetemváros, 3515, Miskolc, Hungary,
High Quality Image Magnification using Cross-Scale Self-Similarity
High Quality Image Magnification using Cross-Scale Self-Similarity André Gooßen 1, Arne Ehlers 1, Thomas Pralow 2, Rolf-Rainer Grigat 1 1 Vision Systems, Hamburg University of Technology, D-21079 Hamburg
Vehicle Tracking System Robust to Changes in Environmental Conditions
INORMATION & COMMUNICATIONS Vehicle Tracking System Robust to Changes in Environmental Conditions Yasuo OGIUCHI*, Masakatsu HIGASHIKUBO, Kenji NISHIDA and Takio KURITA Driving Safety Support Systems (DSSS)
Color Segmentation Based Depth Image Filtering
Color Segmentation Based Depth Image Filtering Michael Schmeing and Xiaoyi Jiang Department of Computer Science, University of Münster Einsteinstraße 62, 48149 Münster, Germany, {m.schmeing xjiang}@uni-muenster.de
Leica Photogrammetry Suite Project Manager
Leica Photogrammetry Suite Project Manager Copyright 2006 Leica Geosystems Geospatial Imaging, LLC All rights reserved. Printed in the United States of America. The information contained in this document
Circle Object Recognition Based on Monocular Vision for Home Security Robot
Journal of Applied Science and Engineering, Vol. 16, No. 3, pp. 261 268 (2013) DOI: 10.6180/jase.2013.16.3.05 Circle Object Recognition Based on Monocular Vision for Home Security Robot Shih-An Li, Ching-Chang
CHAPTER 1 INTRODUCTION
CHAPTER 1 INTRODUCTION 1.1 Background of the Research Agile and precise maneuverability of helicopters makes them useful for many critical tasks ranging from rescue and law enforcement task to inspection
MetropoGIS: A City Modeling System DI Dr. Konrad KARNER, DI Andreas KLAUS, DI Joachim BAUER, DI Christopher ZACH
MetropoGIS: A City Modeling System DI Dr. Konrad KARNER, DI Andreas KLAUS, DI Joachim BAUER, DI Christopher ZACH VRVis Research Center for Virtual Reality and Visualization, Virtual Habitat, Inffeldgasse
Real-Time Tracking of Pedestrians and Vehicles
Real-Time Tracking of Pedestrians and Vehicles N.T. Siebel and S.J. Maybank. Computational Vision Group Department of Computer Science The University of Reading Reading RG6 6AY, England Abstract We present
A Wide Area Tracking System for Vision Sensor Networks
A Wide Area Tracking System for Vision Sensor Networks Greg Kogut, Mohan Trivedi Submitted to the 9 th World Congress on Intelligent Transport Systems, Chicago, Illinois, October, 2002 Computer Vision
CCTV - Video Analytics for Traffic Management
CCTV - Video Analytics for Traffic Management Index Purpose Description Relevance for Large Scale Events Technologies Impacts Integration potential Implementation Best Cases and Examples 1 of 12 Purpose
A Vision-Based Tracking System for a Street-Crossing Robot
Submitted to ICRA-04 A Vision-Based Tracking System for a Street-Crossing Robot Michael Baker Computer Science Department University of Massachusetts Lowell Lowell, MA [email protected] Holly A. Yanco
Density Map Visualization for Overlapping Bicycle Trajectories
, pp.327-332 http://dx.doi.org/10.14257/ijca.2014.7.3.31 Density Map Visualization for Overlapping Bicycle Trajectories Dongwook Lee 1, Jinsul Kim 2 and Minsoo Hahn 1 1 Digital Media Lab., Korea Advanced
T-REDSPEED White paper
T-REDSPEED White paper Index Index...2 Introduction...3 Specifications...4 Innovation...6 Technology added values...7 Introduction T-REDSPEED is an international patent pending technology for traffic violation
Integer Computation of Image Orthorectification for High Speed Throughput
Integer Computation of Image Orthorectification for High Speed Throughput Paul Sundlie Joseph French Eric Balster Abstract This paper presents an integer-based approach to the orthorectification of aerial
3-D Object recognition from point clouds
3-D Object recognition from point clouds Dr. Bingcai Zhang, Engineering Fellow William Smith, Principal Engineer Dr. Stewart Walker, Director BAE Systems Geospatial exploitation Products 10920 Technology
Traffic Estimation and Least Congested Alternate Route Finding Using GPS and Non GPS Vehicles through Real Time Data on Indian Roads
Traffic Estimation and Least Congested Alternate Route Finding Using GPS and Non GPS Vehicles through Real Time Data on Indian Roads Prof. D. N. Rewadkar, Pavitra Mangesh Ratnaparkhi Head of Dept. Information
BACnet for Video Surveillance
The following article was published in ASHRAE Journal, October 2004. Copyright 2004 American Society of Heating, Refrigerating and Air-Conditioning Engineers, Inc. It is presented for educational purposes
High-accuracy ultrasound target localization for hand-eye calibration between optical tracking systems and three-dimensional ultrasound
High-accuracy ultrasound target localization for hand-eye calibration between optical tracking systems and three-dimensional ultrasound Ralf Bruder 1, Florian Griese 2, Floris Ernst 1, Achim Schweikard
VEHICLE TRACKING AND SPEED ESTIMATION SYSTEM CHAN CHIA YIK. Report submitted in partial fulfillment of the requirements
VEHICLE TRACKING AND SPEED ESTIMATION SYSTEM CHAN CHIA YIK Report submitted in partial fulfillment of the requirements for the award of the degree of Bachelor of Computer System & Software Engineering
Edge tracking for motion segmentation and depth ordering
Edge tracking for motion segmentation and depth ordering P. Smith, T. Drummond and R. Cipolla Department of Engineering University of Cambridge Cambridge CB2 1PZ,UK {pas1001 twd20 cipolla}@eng.cam.ac.uk
Vision-Based Blind Spot Detection Using Optical Flow
Vision-Based Blind Spot Detection Using Optical Flow M.A. Sotelo 1, J. Barriga 1, D. Fernández 1, I. Parra 1, J.E. Naranjo 2, M. Marrón 1, S. Alvarez 1, and M. Gavilán 1 1 Department of Electronics, University
IMPROVEMENT OF DIGITAL IMAGE RESOLUTION BY OVERSAMPLING
ABSTRACT: IMPROVEMENT OF DIGITAL IMAGE RESOLUTION BY OVERSAMPLING Hakan Wiman Department of Photogrammetry, Royal Institute of Technology S - 100 44 Stockholm, Sweden (e-mail [email protected]) ISPRS Commission
Practical Tour of Visual tracking. David Fleet and Allan Jepson January, 2006
Practical Tour of Visual tracking David Fleet and Allan Jepson January, 2006 Designing a Visual Tracker: What is the state? pose and motion (position, velocity, acceleration, ) shape (size, deformation,
Introduction. C 2009 John Wiley & Sons, Ltd
1 Introduction The purpose of this text on stereo-based imaging is twofold: it is to give students of computer vision a thorough grounding in the image analysis and projective geometry techniques relevant
Synthetic Aperture Radar: Principles and Applications of AI in Automatic Target Recognition
Synthetic Aperture Radar: Principles and Applications of AI in Automatic Target Recognition Paulo Marques 1 Instituto Superior de Engenharia de Lisboa / Instituto de Telecomunicações R. Conselheiro Emídio
The Big Data methodology in computer vision systems
The Big Data methodology in computer vision systems Popov S.B. Samara State Aerospace University, Image Processing Systems Institute, Russian Academy of Sciences Abstract. I consider the advantages of
UNIVERSITY OF CENTRAL FLORIDA AT TRECVID 2003. Yun Zhai, Zeeshan Rasheed, Mubarak Shah
UNIVERSITY OF CENTRAL FLORIDA AT TRECVID 2003 Yun Zhai, Zeeshan Rasheed, Mubarak Shah Computer Vision Laboratory School of Computer Science University of Central Florida, Orlando, Florida ABSTRACT In this
Extended Floating Car Data System - Experimental Study -
2011 IEEE Intelligent Vehicles Symposium (IV) Baden-Baden, Germany, June 5-9, 2011 Extended Floating Car Data System - Experimental Study - R. Quintero, A. Llamazares, D. F. Llorca, M. A. Sotelo, L. E.
ACCURACY ASSESSMENT OF BUILDING POINT CLOUDS AUTOMATICALLY GENERATED FROM IPHONE IMAGES
ACCURACY ASSESSMENT OF BUILDING POINT CLOUDS AUTOMATICALLY GENERATED FROM IPHONE IMAGES B. Sirmacek, R. Lindenbergh Delft University of Technology, Department of Geoscience and Remote Sensing, Stevinweg
