SAFE/T Tool for Analyzing Driver Video Data, Part 2: Software Development
Carol Martell, Rob Foss (UNC Highway Safety Research Center)
Ken Gish, Loren Staplin (TransAnalytics)
Objective
Overall: Develop analytic methods for the SHRP2 driving behavior and crash risk study, such as:
- Crash surrogates
- Exposure-based collision risk
- Driver / vehicle / roadway environment
Project: Develop software to automatically output the driver's head direction
- Software for Automatic Feature Extraction/Tracking (SAFE/T)
Why is SAFE/T needed?
1) Driver video data permits analysis of driving behavior in context.
2) Processing all driver video data using current methods is not feasible.
3) Without efficient video processing tools, SHRP2 may differ from its predecessors only in the volume of unused driver video data.
SAFE/T Main Features
Early processing:
- Low-pass filter
- Histogram equalization
- Edge detection
Feature detection:
- Forward = eyes + nose + mouth
- Side = ear/hairline
Tracking:
- Motion prediction
- Reacquiring track
Early Processing: Histogram Equalization
(Images: original vs. equalized)
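Histogram equalization stretches a low-contrast frame (e.g., a dim cab interior) over the full intensity range. OpenCV's `equalizeHist` does this directly; the following is a minimal NumPy sketch of the underlying CDF remapping, with a synthetic low-contrast frame standing in for driver video:

```python
import numpy as np

def equalize_histogram(img):
    """Spread pixel intensities over 0-255 using the cumulative
    distribution function (CDF) of the image histogram."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()  # CDF value of the lowest occurring intensity
    # Map each intensity through the normalized CDF.
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[img]

# Synthetic low-contrast frame: all values squeezed into [100, 110]
np.random.seed(0)
frame = np.random.randint(100, 111, size=(48, 64), dtype=np.uint8)
eq = equalize_histogram(frame)
```

After equalization the occupied intensities span the full 0-255 range, which makes faint facial features easier for later edge detection to pick up.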
Early Processing: Edge Detection
(Images: original, Laplace, Canny)
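The Laplace operator shown in the slide responds to intensity discontinuities via the second derivative; OpenCV provides it as `cv2.Laplacian` (and Canny as `cv2.Canny`). A self-contained NumPy sketch of Laplacian edge detection on a synthetic frame (the kernel is standard; the threshold value is illustrative):

```python
import numpy as np

def laplace_edges(img, threshold=40):
    """Convolve with the 3x3 Laplacian kernel and threshold the magnitude.
    Large second-derivative responses mark intensity discontinuities."""
    k = np.array([[0,  1, 0],
                  [1, -4, 1],
                  [0,  1, 0]], dtype=np.int32)
    img = img.astype(np.int32)
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.int32)
    for dy in range(3):            # manual 'valid' convolution
        for dx in range(3):
            out += k[dy, dx] * img[dy:dy + h - 2, dx:dx + w - 2]
    return (np.abs(out) > threshold).astype(np.uint8)  # 1 = edge pixel

# Synthetic frame: a bright square on a dark background
frame = np.zeros((20, 20), dtype=np.uint8)
frame[5:15, 5:15] = 200
edges = laplace_edges(frame)       # edges fire only along the square's border
```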
Manual Video Data Extraction
Digitization (completed):
- Video from AAA Driver Distraction study
- Format: 640x480 @ 30 fps
- Codec: DivX
- Quantity: 2 hours of video for 4 drivers
Manual coding (in progress at UNC):
- Observer Pro
- Main coding variables: head direction, features visible (1 trip per person)
Tracking Challenges
Obstructions:
- Eyewear
- Hand/arm
Shadows:
- Full face
- Partial face
Glare:
- Sun
- Headlights
Physical Obstructions
Frame Obstruction: Side Glances
Frame Obstruction: Forward Glance
(Track lost during side glance!)
System Performance Questions
- Can SAFE/T reliably detect "not forward"?
- What are the system requirements for recording, processing, and data output?
- Is the system cost-beneficial?
Assessing Tracking Performance
Threshold for "not forward":
- False positives: p(not forward | forward)
- False negatives: p(forward | not forward)
- Decision matrix: FP worse?
Sensitivity analysis:
- Development: before/after software modification
- Future evaluations: Tracker #1 vs. Tracker #2; dependence on resolution and frame rate
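The two conditional error rates above come straight out of a decision matrix that cross-tabulates tracker output against manually coded ground truth. A small worked example (the frame counts are purely illustrative, not project data):

```python
# Hypothetical frame counts: tracker output vs. manual (true) coding.
#                      tracker says:  forward  |  not forward
truly_forward     = {"forward": 920, "not_forward": 80}
truly_not_forward = {"forward": 30,  "not_forward": 170}

# False positive rate: p(tracker says "not forward" | truly forward)
fp_rate = truly_forward["not_forward"] / sum(truly_forward.values())
# False negative rate: p(tracker says "forward" | truly not forward)
fn_rate = truly_not_forward["forward"] / sum(truly_not_forward.values())

print(f"FP rate = {fp_rate:.2f}, FN rate = {fn_rate:.2f}")
# FP rate = 0.08, FN rate = 0.15
```

Raising the "not forward" detection threshold trades one rate against the other, which is what the sensitivity analysis and the later ROC analysis quantify.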
Is tracking accurate enough?
Compared to...
- Current status?
- Operational requirements?
- Ideal world?
Can tracking accuracy and reliability be improved for...
- Pre-recorded video?
- To-be-recorded video?
How can tracking be improved?
Optics:
- NIR filters
- NIR sources
Electronics:
- Camera dynamic range
- Electronic synchronization
- Depth processing using stereo camera setup
Software
High Dynamic Range Camera
Background Subtraction & Face Detection
- Image 1: Ambient + NIR (camera with NIR filter; NIR source on)
- Image 2: Ambient only (camera with NIR filter; NIR source off)
Background Subtraction & Face Detection
(Ambient + NIR) minus (Ambient only) equals (NIR only) → AdaBoost face detection
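The differencing step can be sketched in NumPy: subtracting the ambient-only frame from the ambient+NIR frame cancels the sunlit background and leaves the NIR-illuminated foreground (the driver's face). In the real pipeline an AdaBoost cascade (e.g., OpenCV's Haar-cascade face detector, `cv2.CascadeClassifier`) would then run on the NIR-only image; this sketch stops at the foreground mask, with synthetic frames and an illustrative threshold:

```python
import numpy as np

def nir_foreground(frame_ambient_nir, frame_ambient, threshold=25):
    """Subtract the ambient-only frame from the ambient+NIR frame.
    What remains is the NIR-lit foreground; thresholding it gives a
    candidate face mask for the downstream face detector."""
    diff = frame_ambient_nir.astype(np.int16) - frame_ambient.astype(np.int16)
    nir_only = np.clip(diff, 0, 255).astype(np.uint8)
    mask = nir_only > threshold
    return nir_only, mask

# Synthetic pair of frames: a patch brightened by the NIR source
ambient = np.full((60, 80), 90, dtype=np.uint8)   # background scene
ambient_nir = ambient.copy()
ambient_nir[20:40, 30:50] += 100                  # NIR-lit "face" region
nir_only, mask = nir_foreground(ambient_nir, ambient)
```

Because both frames see the same ambient light, even a bright, cluttered background subtracts out, which is the point of synchronizing the NIR source with alternate frames.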
Vector Keying
(Images: original scene vs. after setting vector key)
- Background defined mathematically
- Sharp, dynamic edges around foreground object
Feature Tracking
OpenCV library:
- Background subtraction
- Feature tracking
- Motion prediction
Other video processing libraries
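Motion prediction is what lets the tracker reacquire a face after a brief occlusion: the predicted position seeds the search window for the next frame. OpenCV offers a full Kalman filter (`cv2.KalmanFilter`) for this; a minimal constant-velocity sketch of the same idea (class and coordinates are illustrative, not the project's implementation):

```python
class ConstantVelocityPredictor:
    """Predict the next head position from the last two observations.
    When the tracker loses the face (occlusion, glare), the predicted
    position seeds the search window for reacquiring the track."""

    def __init__(self):
        self.prev = None
        self.curr = None

    def update(self, x, y):
        self.prev, self.curr = self.curr, (x, y)

    def predict(self):
        if self.curr is None:
            return None              # no observations yet
        if self.prev is None:
            return self.curr         # single observation: assume static
        vx = self.curr[0] - self.prev[0]
        vy = self.curr[1] - self.prev[1]
        return (self.curr[0] + vx, self.curr[1] + vy)

pred = ConstantVelocityPredictor()
pred.update(100, 50)
pred.update(104, 52)                 # head drifting right and down
print(pred.predict())                # (108, 54)
```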
How can tracking be optimized?
(Images: shadow, sun glare, shadow + obstruction)
- Shadows: reduce or eliminate with optics & electronics
- Glare: reduce with optics & electronics
- Obstructions: improve with independent feature tracking
- Pre-recording vs. post-recording
What types of safety analyses does SAFE/T permit?
- How often do drivers direct their head away from the forward roadway?
- What are the durations of these glances?
- Combinations of frequency & duration
- Dependence on roadway & environment
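Once SAFE/T emits a per-frame head-direction label, glance frequency and duration fall out of a simple run-length pass over the sequence. A sketch under the assumption of a two-label ("forward" / "not_forward") output at 30 fps (the function name and label strings are illustrative):

```python
def glance_stats(labels, fps=30):
    """Collapse a per-frame head-direction sequence into 'not forward'
    glance events; return their count and durations in seconds."""
    durations = []
    run = 0
    for label in labels + ["forward"]:   # sentinel closes a trailing run
        if label == "not_forward":
            run += 1
        elif run:
            durations.append(run / fps)
            run = 0
    return len(durations), durations

# 30 fps sequence with two glances away: 3 frames and 6 frames long
frames = (["forward"] * 10 + ["not_forward"] * 3 +
          ["forward"] * 5 + ["not_forward"] * 6 + ["forward"] * 2)
count, durations = glance_stats(frames)
print(count, durations)                  # 2 glances: 0.1 s and 0.2 s
```

Cross-tabulating these events against roadway and environment variables is what turns raw video into exposure-based measures.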
Summary of Potential Benefits
System:
- Quality & quantity of driver behavior data
- Efficiency of video processing
Safety:
- True baseline (in many contexts)
- Exposure-based crash risk
- Prioritize countermeasure efforts
Future Work
Phase I Software Development:
- Develop SAFE/T prototype
- Use AAA data to evaluate output
- Optimization: modification + evaluation cycles
Phase II Field Experiment:
- Electronic and optical configuration
- Compare true state with tracking output
- Multiple resolutions/frame rates
- ROC analysis
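The planned ROC analysis sweeps the "not forward" decision threshold and plots the hit rate against the false-alarm rate at each setting. A minimal sketch, assuming the tracker produces a per-frame confidence score (all scores and thresholds below are illustrative):

```python
def roc_points(scores_forward, scores_not_forward, thresholds):
    """One ROC point per threshold: a frame is classified 'not forward'
    when its score exceeds the threshold; count hits and false alarms."""
    points = []
    for t in thresholds:
        tpr = sum(s > t for s in scores_not_forward) / len(scores_not_forward)
        fpr = sum(s > t for s in scores_forward) / len(scores_forward)
        points.append((fpr, tpr))
    return points

# Illustrative tracker scores (higher = more confident "not forward")
scores_fwd = [0.1, 0.2, 0.3, 0.4]    # truly forward frames
scores_nf  = [0.6, 0.7, 0.8, 0.9]    # truly not-forward frames
print(roc_points(scores_fwd, scores_nf, [0.05, 0.5, 0.95]))
# [(1.0, 1.0), (0.0, 1.0), (0.0, 0.0)]
```

Repeating the sweep at multiple resolutions and frame rates gives one ROC curve per recording configuration, which is how the Phase II comparisons could be summarized.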