Microscopic Traffic Data Collection by Remote Sensing



S. P. Hoogendoorn 1, H. J. Van Zuylen 1, M. Schreuder 2, B. Gorte 3, and G. Vosselman 3

Abstract. To gain more insight into the behavior of drivers during congestion, and to develop and test theories and models describing congested driving behavior, very detailed data are needed. This paper describes a prototype of a new data collection system for determining individual vehicle trajectories from sequences of digital aerial images. Software was developed to detect and track vehicles from image sequences. Besides the longitudinal and lateral positions as a function of time, the system also determines vehicle lengths and widths. Before vehicle detection and tracking can take place, the software corrects for lens distortion, applies radiometric correction, and orthorectifies the images. The software was tested on data collected from a helicopter, using a digital camera gathering high-resolution monochrome images covering 280 m of a Dutch motorway. From the test, it is concluded that the techniques for analyzing the digital images can be applied automatically without many problems. However, given the limited stability of the helicopter, only 210 m of the motorway could be used for vehicle detection and tracking. Furthermore, the poor weather conditions at the time of data collection had a significant influence on the accuracy and reliability of the collected data: 94% of the vehicles could be detected and tracked automatically, with a resolution of 22 cm. This percentage will increase substantially under better weather conditions. Furthermore, equipment stabilizing the camera (a so-called gyroscopic mounting) and the use of color images can be applied to further improve the system.
Keywords: congestion, data collection, vehicle trajectories, remote sensing, vehicle tracking

Word count: abstract 246, main text 3970, figures and tables (7 x 250) 1750, total 5966.

BACKGROUND

Congestion is an important issue from both an economic and a societal viewpoint. Understanding its causes, and in particular the influence of driver behavior on the origin of congestion, is very important and may lead to improved measures to either prevent congestion or to reduce its negative effects. Research at the microscopic level, however, requires much more detailed data collection than research at the macroscopic level in order to be useful. Empirical study of driver behavior is not possible without real-life data consisting of complete vehicle trajectories, recorded along a long roadway stretch and at a high frequency, as well as information on the vehicles themselves. Longitudinal and lateral positions thus have to be collected very precisely in order to determine the exact lane-changing and acceleration/deceleration behavior of the vehicles. On top of this, data are usually collected only at a number of fixed locations where detectors are positioned, rather than over an entire stretch. Among the drawbacks of such local measurements is the fact that instantaneous characteristics, such as distance headways, densities, and space-mean speeds, cannot be determined directly, let alone their dynamics.

1 Transportation and Traffic Engineering Section, Faculty of Civil Engineering and Geosciences, Delft University of Technology.
2 Traffic Research Center, Dutch Ministry of Transportation.
3 Photogrammetry and Remote Sensing Section, Faculty of Civil Engineering and Geosciences, Delft University of Technology.

In most cases, microscopic simulation models are calibrated and validated using macroscopic traffic data. That is, the microscopic parameters are tuned such that they yield macroscopic flow characteristics consistent with the traffic flow data. Given the large number of unobservable microscopic parameters, and their complex and non-linear relation with the resulting macroscopic flow characteristics, such indirect model calibration may not always be good practice. For this reason, Brackstone and McDonald (1995, 1998) argue that calibration of true microscopic simulation models (microscopic representation and microscopic behavioral rules describing the interaction between vehicles) requires detailed information on the dynamic behavior of vehicle pairs. The existence of stable or metastable traffic states in congested circumstances, and the behavioral hypotheses postulated to explain these phenomena (Kerner, 1999), can also be studied in detail when detailed microscopic information is available. Finally, the microscopic processes that cause congestion to occur 500-1000 m downstream of an on-ramp (Cassidy and Bertini, 1999), rather than at the location of the on-ramp itself, can be analyzed in detail. The availability of such data is limited and dated (Treiterer and Myers, 1974).

Research questions

Typical research questions for studying driver behavior during congestion are the following: How can the car-following behavior of drivers be described? Are the models that were developed during the seventies (Herman and Rothery, 1963) still valid? Are different parameter settings required? Is the gap that drivers accept when entering the motorway different from the gap that is maintained when following a vehicle (Dijker, 1997)? How does the overtaking behavior of drivers depend on the width of the lanes and on obstacles and obstructions? How does driver behavior change in case of narrow lanes?
What about behavior at roadworks? Are multiple stable states identifiable during congested traffic flow? Can microscopic phenomena be identified that explain macroscopic traffic states and the transitions between them? Currently available data collection systems, such as inductive loops, pneumatic tubes, DGPS, and video, are not suitable to answer these fundamental research questions satisfactorily.

Study objective

To answer these research questions, a new approach to individual traffic data collection is needed that enables studying the dynamics of individual vehicles and the interdependencies between them. The objective of this study is to develop a data collection method for vehicle trajectories (longitudinal and lateral position of the center of the vehicle, represented by a rectangle, as a function of time) and individual vehicle characteristics (vehicle length and width), in particular during congested traffic flow conditions.

System requirements

Given the fundamental requirements of research into driver behavior during congestion, specific demands on the monitoring system pertain to both the temporal and the spatial resolution. For the latter, it was decided that the final system must have a resolution of 0.4 m. The roadway length that can be observed by a single camera is thus determined by the resolution of the camera. A high-resolution B&W digital camera has a resolution of 1300 x 1030 pixels, implying that 1300 x 0.4 m = 520 m is the maximum roadway length that can be observed. Given the average headways between vehicles and their average speeds, it was decided that the time between two observations should not exceed 0.1 s. It can be shown that when these specifications are met, the locations of the vehicles can be determined with an accuracy of 1/4 pixel (= 0.1 m). The resolution of the speeds determined from the vehicle positions is thus 1 m/s.
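The requirement figures above fit together arithmetically; a quick check using only the numbers quoted in the text:

```python
# Quick check of the requirement figures quoted above (all values from the text).
pixels_longitudinal = 1300    # longitudinal pixel count of the B&W camera
target_resolution_m = 0.4     # required spatial resolution, m per pixel
frame_interval_s = 0.1        # maximum time between two observations, s
localisation_px = 0.25        # quarter-pixel position accuracy

max_roadway_m = pixels_longitudinal * target_resolution_m     # 520 m
position_accuracy_m = localisation_px * target_resolution_m   # 0.1 m
speed_resolution_ms = position_accuracy_m / frame_interval_s  # 1 m/s

print(max_roadway_m, position_accuracy_m, speed_resolution_ms)
```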
MEASUREMENT AND EXPERIMENTAL SET-UP

Having considered a number of alternative systems, such as taking video measurements from VMS gantries, it was decided that the data must be collected from an elevated position, more specifically from a helicopter to which a digital camera system was attached. The camera system had to meet very high standards with respect to the resolution of the images as well as the frequency at which the images could be collected.

Camera and helicopter

The camera used in the measurement system provides grayscale images at a resolution of 1300 x 1030 pixels at a maximum frequency of 8.6 Hz. The camera, a Basler A101f, is very sensitive to light, allowing a short integration time with little loss of image quality due to the vibrations of the helicopter. The camera can collect color images as well; in that case, its resolution is reduced to 650 x 515 pixels. Given the number of pixels in the longitudinal direction (1300 pixels) and the spatial resolution (40 cm per pixel), in theory the roadway length that can be observed equals 1300 x 0.4 m = 520 m. The area that each pixel represents in reality (in this case, 40 x 40 cm) is determined by the specifications of the camera (light-sensitive chip and lens) and the height at which the images are collected. To decrease the probability of clouds obstructing the observations, it was decided not to fly higher than 500 m. It was decided to use a camera with a 2/3" chip and a 16 mm lens. Regrettably, a camera with a different chip was delivered. As a result, each pixel represents an area of approximately 20 x 20 cm, leaving only 280 m of the roadway observed.

The camera itself does not have sufficient memory to store all the images needed for the analyses. On top of this, compression of the images is not an option due to the loss of image quality. Instead, a personal computer equipped with a frame grabber was attached to the camera, enabling real-time storage of the digital images. The helicopter, a Bell 206 JetRanger, and its pilot were hired from the Dutch firm Heli Holland. The camera was attached to the helicopter in a fixed position.
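The relation between flying height, chip, and lens that determines the per-pixel ground area can be sketched with a simple pinhole-camera model. The pixel pitch below is an assumption for illustration (it is not stated in the paper); with it, the formula reproduces a ground sample distance close to the roughly 20 cm actually obtained at 500 m with the 16 mm lens:

```python
def ground_sample_distance(height_m, pixel_pitch_m, focal_length_m):
    """Ground footprint of one pixel under a simple pinhole model:
    GSD = H * p / f (flying height times pixel pitch over focal length)."""
    return height_m * pixel_pitch_m / focal_length_m

# Assumed pixel pitch of ~6.7 micrometers (illustrative, not from the paper).
gsd = ground_sample_distance(height_m=500.0, pixel_pitch_m=6.7e-6,
                             focal_length_m=0.016)
print(round(gsd, 3))  # about 0.209 m per pixel
```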
No gyroscopic mounting was used to attach the camera, on the assumption that the resulting vibrations and movements of the helicopter would not influence the quality of the collected data too much.

Measurement location

The data were collected at different motorway sites near the Dutch city of Utrecht, in particular on the A2 motorway. The sites were selected for the very high probability of congestion occurring during the afternoon peak hour (between 15:30 and 17:30), the type of bottleneck causing the congestion, and the possibility to observe the traffic without too many obstructions (e.g. viaducts). The list below describes the main measurement sites (see Figure 1):

1. Merge / bottleneck near the south-east of Rijnsweerd, on-ramp De Meern, at heights of 500 m and 300 m (see top-left inlay of Figure 1).
2. Queue spilling back over the viaduct of the A27 over the A2, at heights of 500 m and 300 m (see lower-right inlay of Figure 1).
3. 2x2 weaving section, north-west of Rijnsweerd, at heights of 500 m and 300 m.

Figure 1 Overview of the study location near the Dutch city of Utrecht. Data were collected mostly on the A2 motorway.

Weather conditions

The data were collected on the 25th of April, 2002. Although it was rather cloudy, the images from the digital cameras looked sufficiently clear to go ahead with the data collection. During the flight, the weather conditions changed constantly: at some times the cloud cover was rather thin, while at other times it was too thick to see the road at all. Due to the varying weather conditions, the ambient light conditions also changed constantly. As a result, the shadows of the vehicles were not the same in all the image sequences. Finally, the wind conditions were unfavorable, and it was difficult for the pilot to keep the helicopter at a fixed location. Together with the instability of the helicopter itself (in terms of its pitch and yaw), the movements of the helicopter substantially reduced the part of the roadway that was constantly observed (approximately 200 m instead of 280 m). Figure 2 shows an example of the raw image data.

Figure 2 Example reflecting the movement of the helicopter and its effect on the collected images. The time gap between the two images shown equals 6 seconds.

The instability of the images increases the requirements on the vehicle detection software, as will be discussed in the remainder of the paper. In addition, the duration of the usable sequence was rather short (approximately 35 s).

PROCESSING OF THE IMAGES

The objective of the image data analysis is to automatically determine the vehicle trajectories from the raw images. Before the vehicles can be detected, the following operations are applied to correct the raw images and convert them into standard pictures:

1. Lens distortion correction
2. Orthorectification
3. Radiometric correction

These steps are described in the remainder of this chapter. After these steps have been completed, the images can be used to detect and track the vehicles, which is described in the following chapter. Subsequently, the screen coordinates are converted into world coordinates.

Correction for lens distortion

Due to the movements of the helicopter, the distortion of the lens complicated the orthorectification of the images: the considered part of the roadway is sometimes at the top of the image and sometimes at the bottom. As a result, an essential step is to correct for the distortion of the lens. For this specific camera, a radial distortion was present: the corners of the image were 7 pixels (1%) too far to the inside.

Orthorectification

In an aerial photograph of a rectangular object, the image of this object will only be rectangular if the camera is located exactly above the middle of the rectangle (neglecting the lens distortion discussed above). Otherwise, the perspective of the image will be distorted, depending on the location and the angle of the camera.
On top of this, the size of the rectangle will depend on the height at which the images are collected, and the image will be rotated around the vertical axis. During orthorectification, the perspective distortion, the scale, and the rotation of the images are adjusted such that the objects in the image are projected at the same location as the same objects in the reference image R. Orthorectification needs control points: points that are visible in both the reference image and the processed image. In theory, only 4 control points are needed. However, because the locations of the control points cannot be determined with 100% reliability, 10 to 30 points were used instead of four, which also gives an indication of the accuracy of the process.
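Since the road surface is (nearly) planar, the mapping between a processed image and the reference image R can be modelled as a plane homography estimated from the control points. The paper does not name its exact estimation algorithm; the direct linear transform below is one standard choice, sketched in numpy, and using 10-30 points makes the fit overdetermined so that control-point localisation noise averages out:

```python
import numpy as np

def fit_homography(src, dst):
    """Estimate the 3x3 homography mapping src -> dst (each Nx2, N >= 4)
    by the direct linear transform, solved in a least-squares sense via
    the SVD null vector of the stacked constraint matrix."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    return Vt[-1].reshape(3, 3)

def warp_points(H, pts):
    """Apply homography H to an Nx2 array of points."""
    pts_h = np.column_stack([pts, np.ones(len(pts))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]
```

A real implementation would normalise the coordinates before the SVD for better conditioning; the sketch keeps only the core idea.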

To handle the fact that the control points in the reference image and the processed image may be far apart, a special process was developed that uses the locations of the control points in image I-1 to determine the locations of these points in image I. This process starts from the reference image R. If R is not the first image of the sequence, the process is also performed backwards (for images R-1, R-2, etc.).

Two sets of control points have been used. The first set, the so-called characteristic control points, is used for coarse matching. The areas around these points are unique within the entire image and can thus be used to match images where the amount of perspective distortion is large (i.e. the objects in the image and the objects in the reference image are far apart). Typical objects in this set are lampposts, gantries, etc. The second set, the road-surface control points, contains points on the roadway surface, i.e. on the reference plane, such as the lane markings. These points are not characteristic and are used for fine matching only. Figure 3 illustrates both sets. The two phases of the process are:

1. Coarse matching finds the control points of the characteristic set of the reference image R in image I. This is achieved using an iterative approach that maximizes the cross-correlation coefficient of the pixels around the control points in the characteristic set. The search window contains 50 x 50 pixels. The accuracy of coarse matching is approximately 1 pixel. The results of coarse matching are used to determine the transformation from image I-1 to image I.
2. Fine matching uses the same approach as coarse matching. In this case, however, the road-surface control points are used instead of the characteristic control points, while the search window is only 7 x 7 pixels large.

Figure 3 The two sets of control points: characteristic control points (left) and road-surface control points (right).
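The matching idea, maximising the cross-correlation coefficient of the pixels around a control point within a search window, can be sketched as an exhaustive normalised cross-correlation search (an illustrative brute-force version; the paper's implementation is iterative):

```python
import numpy as np

def ncc_match(image, template, center, search_radius):
    """Find the (row, col) offset near `center` where `template` best
    matches `image`, scoring candidates by the normalised
    cross-correlation coefficient."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t * t).sum())
    best_score, best_pos = -2.0, None
    cy, cx = center
    for y in range(cy - search_radius, cy + search_radius + 1):
        for x in range(cx - search_radius, cx + search_radius + 1):
            if y < 0 or x < 0 or y + th > image.shape[0] or x + tw > image.shape[1]:
                continue  # candidate window falls outside the image
            patch = image[y:y + th, x:x + tw] - image[y:y + th, x:x + tw].mean()
            denom = np.sqrt((patch * patch).sum()) * t_norm
            if denom == 0:
                continue  # flat patch: correlation undefined
            score = (patch * t).sum() / denom
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos, best_score
```

For coarse matching the paper uses a 50 x 50-pixel search window around the characteristic points; for fine matching only a 7 x 7 window.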
Radiometric correction

Since the ambient light conditions change during data collection, some images are brighter than others. Vehicle detection is based on differences in intensity between the pixels of image I and the background image (an image of the roadway without vehicles on it). It turned out that the vehicle detection process was very sensitive to these differences in intensity. It was therefore decided to normalize the images using so-called histogram matching: by comparing the histograms of the reference image and the current image, the intensities of the current image are adjusted such that they agree with those of the reference image as much as possible.

VEHICLE DETECTION AND TRACKING

The previous sections discussed the operations that need to be applied to the image sequence before the actual vehicle detection and tracking can be performed. In this section, we briefly discuss the approach to detection and tracking itself. The approach consists of the following steps:

1. Determination of the background
2. Vehicle detection (determination of the center of the vehicle, its length, and its width)
3. Vehicle tracking
4. Conversion of image coordinates to world coordinates
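The histogram matching used above for radiometric correction can be sketched in a few lines of numpy: each grey level of the current image is remapped so that its cumulative histogram follows that of the reference image (a standard formulation; the paper's exact implementation may differ):

```python
import numpy as np

def match_histogram(image, reference):
    """Monotone grey-level remap so that `image`'s cumulative histogram
    agrees with that of `reference` as closely as possible."""
    src_vals, src_counts = np.unique(image.ravel(), return_counts=True)
    ref_vals, ref_counts = np.unique(reference.ravel(), return_counts=True)
    src_cdf = np.cumsum(src_counts) / image.size
    ref_cdf = np.cumsum(ref_counts) / reference.size
    # A source grey level with cumulative frequency c maps to the
    # reference grey level whose cumulative frequency is (about) c.
    mapped = np.interp(src_cdf, ref_cdf, ref_vals)
    return np.interp(image, src_vals, mapped)
```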

Determination of the background image

In this first step, the background image (i.e. the empty roadway) is determined. The approach used is very straightforward: for each pixel of an image sequence, the different intensity values are stored, and the median of these intensity values is assumed to be the intensity value of the background. Figure 4 shows the result of this operation. Note that the main assumption is that, for each pixel, the probability that the roadway surface is empty is larger than the probability that a vehicle is present. This implies that the applicability of the method may be restricted to periods where congestion is not too severe. Other approaches (such as morphological operations) are not restricted by this requirement.

Figure 4 Background image.

Vehicle detection

For any image, vehicle detection is based on the difference between the current image I and the background image B. A first approximation is to use a threshold value to decide whether a pixel represents a vehicle or not. If so, neighboring pixels can also be identified as belonging to a vehicle or not. In practice, a number of complicating factors occur:

1. Both light and dark vehicles cast shadows, which are generally darker than the roadway surface.
2. Light vehicles have dark spots (the windshield, etc.).
3. On occasion, a small vehicle drives completely in the shadow of a big vehicle (e.g. a truck or a bus). As a result, the shadow of the small vehicle disappears. Furthermore, the intensity of the vehicle itself may be close to the intensity of the background image.

The biggest problems are caused by vehicles that have the same intensity as the roadway surface or vehicles that have the same intensity as their shadow. Different approaches have been implemented to resolve these issues (morphological grayscale operations, binary morphological operations, split-and-merge image segmentation, etc.). Table 1 shows an example.

No definitive algorithm could be chosen based on performance: in most cases, most vehicles are detected (about 94%). It is likely that under better weather conditions, and with the use of color images, nearly 100% of all vehicles will be detected and subsequently tracked.
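A compact sketch of the background estimation and basic detection chain described above: per-pixel median background, background subtraction with a threshold, a binary morphological opening and closing (with a plus-shaped 3x3 structuring element), and bounding-box extraction. This covers light vehicles only; shadow handling and the grayscale-morphology variants the paper mentions are omitted:

```python
import numpy as np

def estimate_background(frames):
    """Per-pixel median over the sequence: assumes each pixel shows the
    empty road in more than half of the frames."""
    return np.median(np.stack(frames), axis=0)

def _dilate(mask):
    """Binary dilation with a plus-shaped 3x3 structuring element."""
    out = mask.copy()
    out[1:, :] |= mask[:-1, :]
    out[:-1, :] |= mask[1:, :]
    out[:, 1:] |= mask[:, :-1]
    out[:, :-1] |= mask[:, 1:]
    return out

def _erode(mask):
    """Binary erosion as dilation of the complement."""
    return ~_dilate(~mask)

def detect_light_vehicles(image, background, threshold):
    diff = image.astype(float) - background.astype(float)  # difference image
    mask = diff > threshold                                # light vehicles
    mask = _dilate(_erode(mask))                           # opening: removes speckle
    mask = _erode(_dilate(mask))                           # closing: fills dark spots
    # Extract bounding boxes of 4-connected components via flood fill.
    boxes, seen = [], np.zeros_like(mask)
    for y, x in zip(*np.nonzero(mask)):
        if seen[y, x]:
            continue
        stack, ys, xs = [(y, x)], [], []
        seen[y, x] = True
        while stack:
            cy, cx = stack.pop()
            ys.append(cy)
            xs.append(cx)
            for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not seen[ny, nx]):
                    seen[ny, nx] = True
                    stack.append((ny, nx))
        boxes.append((min(ys), min(xs), max(ys), max(xs)))
    return boxes
```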

Table 1 Overview of the vehicle detection algorithm (in this case, for light vehicles only). A similar approach can be applied to detect dark vehicles (only step 6 is skipped, since the shadows will also be identified as part of the dark vehicle).

1. Determine the difference image D by subtracting the background image B from the current image.
2. Determine the pixels in D with values larger than a threshold value.
3. Morphological opening.
4. Morphological closing.
5. Determine bounding boxes.
6. Expand the bounding boxes to include the vehicle shadows.

Once the vehicles are detected, their positions as well as their lengths and widths can be established easily. Figure 5 shows an example of the final result of vehicle detection, where the vehicles are described by rectangles.

Figure 5 Vehicle positions and their dimensions resulting from vehicle detection. In this example, all vehicles are detected. However, the figure also shows a false detection, namely the shadow of a truck. Moreover, some of the vehicle dimensions are not detected correctly.

Vehicle tracking

The aim of vehicle tracking is to follow the vehicles detected in an image, i.e. to determine their positions in the other images. In most cases, tracking is done using an approach similar to the control-point approach used in the orthorectification step (using both coarse matching and fine matching). For the application at hand, only coarse matching was required, since it provided sufficient accuracy for vehicle tracking. Application of coarse matching yields a unique label for each vehicle detected during the vehicle detection step, enabling determination of the vehicle trajectories in the following step. Figure 6 shows the results of the vehicle tracking process.

Figure 6 Vehicle tracking in four subsequent images. The lines behind the vehicles are indicative of the speeds of the vehicles. Note that vehicle detection is only applied at designated time instants and not to all collected images; this is why some vehicles in the figure have not yet been detected.

Conversion of image coordinates to world coordinates

In the final step, the image coordinates of the vehicles are translated into world coordinates. Scaling and translation determine both the longitudinal position of the rear bumper of the vehicle relative to an arbitrary location on the roadway, and the lateral position of the vehicle relative to the right lane demarcation. To this end, maps of the roadway are used. Furthermore, the lengths and widths of the vehicles are determined as well.

Application example

Figure 7 shows an example of the results of applying the data collection approach described in this contribution. Besides the trajectories indicating the longitudinal positions of the vehicles on the roadway as well as the roadway lane, the approach also yields the lateral positions and the dimensions of the vehicles.
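For orthorectified images with a known scale, the conversion to world coordinates described above reduces to a scaling and a translation. A sketch with illustrative parameter names (the real system takes the reference locations from the roadway maps mentioned in the text):

```python
def image_to_world(col, row, gsd_m, origin_col, lane_edge_row):
    """Convert orthorectified image coordinates (col along the road,
    row across it) to roadway coordinates: longitudinal position
    relative to an arbitrary reference column, lateral position
    relative to the right lane demarcation. Parameter names are
    illustrative assumptions, not the paper's interface."""
    longitudinal_m = (col - origin_col) * gsd_m
    lateral_m = (lane_edge_row - row) * gsd_m
    return longitudinal_m, lateral_m
```

For example, at 22 cm per pixel, a vehicle 900 pixels downstream of the reference column sits 198 m along the observed stretch.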

Figure 7 Example of trajectories derived from observations at an on-ramp during congested traffic flow operations. (Annotations in the figure mark an overtaking maneuver, a vehicle entering the main road, the on-ramp, and the right and left lanes.)

Verification of data collection

To verify the approach, two image sequences were analyzed. The first sequence is 45 s long and pertains to a situation where traffic flow is near capacity, but not yet congested. The images were collected from a height of 500 m; the observed roadway was 210 m long. Since the positions of the vehicles can be determined at one-pixel accuracy, the spatial resolution for the first sequence equals 22 cm. The results for the first sequence were very positive: after correcting for lens distortion, the images from the sequence could be rectified using the approach described in the previous sections. Application of the vehicle detection procedure to 40 images of the sequence showed that 98% of the vehicles were detected. After detection, vehicle tracking was applied, determining the trajectories of the vehicles at an accuracy of 1 pixel. It should be noted that the error made while tracking a vehicle across the images is cumulative.

The second sequence pertains to an on-ramp situation where traffic conditions are congested (see Figure 7). The sequence lasts 52 s and was collected from a height of 300 m. As a result, only 120 m of the roadway was observed, while the spatial resolution is approximately 13 cm. The ambient conditions during the second sequence were less favorable than during the first, mainly because clouds frequently moved in front of the sun. The radiometric correction was not able to handle this sufficiently. This is why only 90% of the vehicles were correctly detected and tracked. Also, the cumulative error while tracking slow-moving vehicles caused a maximum error in the position of a vehicle of 1.3 m by the time the vehicle left the roadway section.
CONCLUSIONS AND LESSONS LEARNED

This paper describes a new data collection system that was developed to determine individual vehicle trajectories from high-resolution grayscale images. These images were collected using a digital camera mounted underneath a helicopter and stored on a personal computer. After applying a number of photogrammetric operations to the images, approximately 94% of the vehicles could be detected and tracked, yielding both vehicle positions and vehicle dimensions. The spatial resolution was 22 cm; the temporal resolution was 8.6 Hz. The spatial resolution is even finer than the required 40 cm, because a different chip was used than was anticipated. Using the measurement set-up and the detection and tracking algorithms described in this paper, it was possible to track vehicles over an area of 210 m length. However, the maximum duration of the usable image sequence was only 35 s. There appears to be no limit to the number of vehicles that can be detected.

Firstly, it turned out that weather conditions have quite an adverse effect on the quality of the collected data, thereby reducing the accuracy of detection and tracking; furthermore, the windy conditions caused the helicopter to move even more. Secondly, the use of a gyroscopic mounting can increase the stability of the images. Besides increasing the effective part of the images that can be used for vehicle detection and tracking, the increased stability will also substantially increase the duration of the sequences that can be used. Using a different camera set-up and gyroscopic stabilizing devices, the observed area (using a single camera) may be increased to 500 m (with a resolution of 40 cm), and the sequence duration will be longer than 15 minutes. Thirdly, the approach is labor intensive, especially the data collection. The data collection effort can be reduced somewhat by using unmanned helicopters or by collecting data from a fixed location (e.g. a high building). The analysis of the images and the trajectories will in time be completely automated.

Future research is aimed at further refining the data collection approach. To this end, a second helicopter flight is planned to collect data under more favorable conditions, i.e. fewer clouds, less wind, a gyroscopic mounting, etc. The resulting footage will be of higher quality and will be used to fine-tune the methods. We will also consider whether post-processing the data (e.g. by Kalman filtering) can further improve the quality of the data.
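The kind of Kalman-filter post-processing mentioned above can be illustrated with a minimal sketch: a one-dimensional constant-velocity filter applied to noisy pixel-derived positions sampled at 8.6 Hz. This is an illustrative assumption about how such smoothing could be set up, not the authors' implementation; the noise parameters (`q`, `r`) are invented, with `r` chosen to match the 13 cm pixel size of the second sequence.

```python
# Minimal sketch (assumed, not the paper's implementation): smoothing a
# vehicle trajectory with a 1-D constant-velocity Kalman filter.
import numpy as np

def kalman_smooth(positions, dt=1/8.6, q=0.5, r=0.13**2):
    """Filter noisy longitudinal positions (m); returns filtered positions.

    q: assumed process-noise intensity; r: measurement variance,
    here (0.13 m)^2, i.e. one pixel of the second sequence.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity dynamics
    H = np.array([[1.0, 0.0]])              # only position is measured
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                      [dt**2 / 2, dt]])     # discretized process noise
    R = np.array([[r]])
    x = np.array([positions[0], 0.0])       # initial state [position, speed]
    P = np.eye(2)                           # initial state covariance
    out = []
    for z in positions:
        # Predict one time step ahead.
        x = F @ x
        P = F @ P @ F.T + Q
        # Update with the new position measurement.
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([z]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)
```

After a short transient while the speed estimate converges, the filtered positions should track a smoothly moving vehicle more closely than the raw one-pixel-accurate measurements do.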
After the second flight and the algorithmic refinements, the maximum length of the measured roadway as well as the maximum duration of the data collection can be determined. The collected microscopic traffic data will be used for scientific research (theory building and model development, e.g. the microscopic origin of congestion, gap acceptance under different traffic states, etc.), practical research (such as ex-post evaluation studies of measures, e.g. behavior on narrow lanes), and the calibration and validation of microscopic simulation models. It is important to note that traffic is not influenced during data collection. Furthermore, current research is aimed at investigating the applicability of the system as an on-line monitoring system, in particular in urban areas (e.g. traffic monitoring from a high building for intersection control).

Acknowledgements - The research described here was performed by the Delft University of Technology on behalf of the Traffic Research Center AVV of the Dutch Ministry of Transportation, Public Works and Water Management.

REFERENCES

Brackstone, M. and M. McDonald (1995). The microscopic modelling of traffic flow: weaknesses and potential developments. Traffic and Granular Flow.

Brackstone, M. and M. McDonald (1998). Modeling of motorway operations. Transportation Research Record 1485.

Cassidy, M.J. and R.L. Bertini (1999). Observations at a freeway bottleneck. Proceedings of the 14th International Symposium on Transportation and Traffic Theory, Jerusalem.

Dijker, T. (1997). Verkeersafwikkeling bij congestie (Traffic Operations during Congestion). Graduation Thesis (in Dutch), Transportation and Traffic Engineering Section, Delft University of Technology.

Herman, R. and R.W. Rothery (1963). Car-following and Steady-State Flow. Theory of Traffic Flow Symposium Proceedings, 1-11.

Hoogendoorn, S.P. and T.P. Alkim (1999). Expert views on traffic flow operations during congestion.
Research Report on behalf of the Traffic Research Centre of the Dutch Ministry of Transport.

Kerner, B.S. (1999). Theory of Congested Traffic Flow: Self-Organization without Bottlenecks. Proceedings of the 14th International Symposium on Transportation and Traffic Theory, 147-172.

Treiterer, J. and J.A. Myers (1974). The hysteresis phenomenon in traffic flow. Proceedings of the 6th International Symposium on Transportation and Traffic Theory, Reed Pty Ltd, Artamon N.S.W., 13-38.