Mining the Situation: Spatiotemporal Traffic Prediction with Big Data




Mining the Situation: Spatiotemporal Traffic Prediction with Big Data

Jie Xu, Dingxiong Deng, Ugur Demiryurek, Cyrus Shahabi, Mihaela van der Schaar

Abstract: With the vast availability of traffic sensors from which traffic information can be derived, a lot of research effort has been devoted to developing traffic prediction techniques, which in turn improve route navigation, traffic regulation, urban area planning, etc. One key challenge in traffic prediction is how much to rely on prediction models that are constructed using historical data in real-time traffic situations, which may differ from those of the historical data and change over time. In this paper, we propose a novel online framework that learns from the current traffic situation (or context) in real time and predicts the future traffic by matching the current situation to the most effective prediction model trained using historical data. As real-time traffic arrives, the traffic context space is adaptively partitioned in order to efficiently estimate the effectiveness of each base predictor in different situations. We obtain and prove both short-term and long-term performance guarantees (bounds) for our online algorithm. The proposed algorithm also works effectively in scenarios where the true labels (i.e., realized traffic) are missing or become available with delay. Using the proposed framework, the context dimension that is most relevant to traffic prediction can also be revealed, which can further reduce the implementation complexity as well as inform traffic policy making. Our experiments with real-world data in real-life conditions show that the proposed approach significantly outperforms existing solutions.

I. INTRODUCTION

Traffic congestion causes tremendous loss in terms of both time and energy wasted.
According to a recent report from the Texas Transportation Institute [1], in 2007, all 439 metropolitan areas experienced 4.2 billion vehicle-hours of delay, which is equivalent to 2.8 billion gallons in wasted fuel and $87.2 billion in lost productivity, or about 0.7% of the nation's GDP. Traffic congestion is caused when the traffic demand approaches or exceeds the available capacity of the traffic system. In the United States, the Federal Highway Administration [2] [3] has observed that the number of miles of vehicle travel increased by 76 percent from 1980 to 1999, while the total miles of highway increased by merely 1.5 percent, which hardly accommodates the growth in travel. It is now generally conceded that it is impossible to build our way out of congestion, mainly because increased capacity results in induced demand. These factors motivate an information-based approach to address these problems. Fortunately, due to thorough sensor instrumentation of road networks in major cities as well as the vast availability of auxiliary commodity sensors from which traffic information can be derived (e.g., CCTV cameras, GPS devices), a large volume of real-time and historical traffic data at very high spatial and temporal resolutions has become available. Several companies, such as Inrix, now sell both types of data, and at our research center we have had access to both datasets from Los Angeles County for the past three years.

(J. Xu and M. van der Schaar are with the Department of Electrical Engineering, University of California, Los Angeles. Emails: jiexu@ucla.edu, mihaela@ee.ucla.edu. D. Deng, U. Demiryurek and C. Shahabi are with the Department of Computer Science, University of Southern California. Emails: dingxiod@usc.edu, demiryur@usc.edu, shahabi@usc.edu.)
As shown by many studies [4] [5] [6] [7] [8], these traffic datasets can be used to predict traffic congestion, which in turn enables drivers to avoid congested areas (e.g., through intelligent navigation systems), policy makers to decide about changes to traffic regulations (e.g., replace a carpool lane with a toll lane), urban planners to design better pathways (e.g., adding an extra lane) and civil engineers to plan better for construction zones (e.g., how a short-term construction would impact traffic). One major challenge in predicting traffic is how much to rely on the prediction model constructed using historical data in the real-time traffic situation, which may differ from that of the historical data due to the fact that traffic situations are numerous and change over time. Previous studies showed that, depending on the traffic situation, one prediction model may be more useful than another. For example, in [7] it is shown that a hybrid forecasting model that selects in real time, depending on the current situation, between an Auto-Regressive Integrated Moving Average (ARIMA) model and a Historical Average Model (HAM) yields significantly better prediction accuracy. It is shown that the ARIMA prediction model is more effective in predicting the speed in normal conditions, but at the edges of the rush-hour time (i.e., the beginning and the end of rush hour), the HAM model is more useful. This becomes even more challenging when considering different causes of congestion, e.g., recurring (e.g., daily rush hours), occasional (e.g., weather conditions), unpredictable (e.g., accidents), and temporary, whether short-term (e.g., a basketball game) or long-term (e.g., road construction), congestion. However, there is no holistic approach on when and in which situations to switch from one prediction model to the other for a more effective prediction.
The exhaustive method that trains a prediction model for each traffic situation is obviously impractical, since it would induce extremely high complexity due to the numerous possible traffic situations. Our main thesis in this paper is to learn from the current traffic situation in real time and predict the future traffic by matching the current situation to the most effective prediction model that we constructed using historical data. First, a finite (possibly small) number of traffic predictors are constructed for the same number of representative traffic conditions using historical data. Using a small set of base predictors reduces the training and maintenance costs. Given this set of base predictors, we learn to select the most effective predictor that best suits the current traffic situation in real time. For instance, suppose we have two traffic predictors trained on historical datasets in different weather conditions, sunny and rainy. We will learn online which predictor to use for prediction in cloudy weather, which does not have a predictor trained for it. The basic idea to learn and select the most effective predictor is based on estimating the reward of a predictor in different situations. The reward estimate is calculated based on how accurate each predictor has been in predicting, say, the speed value, given the actual speed values we have observed in the recent past via the real-time data. However, significant challenges still remain, as we will explain shortly. Many features can be used to identify a traffic situation, which henceforth are called context. Example features include: location, time, weather condition, number of lanes, area type (e.g., business district, residential), etc. Therefore, the context space is a multidimensional space with D dimensions, where D is the number of features. Since D can be very large and the value space can be continuous, learning the most effective predictor in each individual context using reward estimates for this individual context (i.e., a D-dimensional point in the context space) can be extremely slow. For example, there are numerous possible weather conditions (characterized by temperature, humidity, wind speed, etc.) but each specific weather condition only appears occasionally in real time. Thus, we may initially group weather conditions into rough categories such as sunny, rainy, cloudy, etc. and then refine each category to improve prediction. However, how to adaptively group contexts and partition the context space poses a significant challenge for fast learning of the best predictor for different traffic contexts.
Moreover, a rigorous performance characterization of such a method is missing. These are the problems that we solve in this paper. Our approach has three important byproducts. First, since the reward is continuously being updated/aggregated, we are utilizing what we learn in real time to adapt to both short-term and long-term changes. For example, suppose that at a location that is a 3-lane freeway, one of the lanes is closed for a week due to construction. Our approach, which originally uses the predictor that is most effective for a 3-lane subspace, would start observing that the predictor does not work well for the 2-lane case and thus switch to a new predictor. Later, once the construction is complete, the reward mechanism would guide our approach back to the previous 3-lane predictor. Second, our approach is agnostic to the congestion cause. For example, our reward mechanism may guide us to select a predictor that is trained for a rush-hour subspace even though the current time is not a rush-hour time, perhaps because of an incident (e.g., an unknown object in the middle of the freeway) that resulted in a traffic condition similar to that of a rush hour at that location. Therefore, we can still predict the traffic condition successfully in the presence of that event. Finally, since location and time are two features of our context space, our approach is inherently spatiotemporal and takes into consideration the sensor readings that are spatially and temporally closest to each other. To evaluate our approach, we obtain and prove both short-term and long-term performance guarantees (bounds) for our online algorithm.
This provides not only the assurance that our algorithm will converge over time to the optimal predictor for each possible traffic situation (i.e., there is no loss in terms of the average reward) but also a bound on the speed of convergence of our algorithm to the optimal predictor (i.e., our algorithm converges quickly to the optimal performance). In addition, we conducted a number of experiments to verify our approach with real-world data in real-life conditions. The results show our approach significantly outperforms existing approaches that do not adapt to the varying traffic situations. The remainder of the paper is organized as follows. Section II reviews the related work and highlights the distinctions of our approach. Section III formulates the traffic prediction problem and defines the performance metric. Section IV describes our context-aware adaptive traffic prediction algorithm. Section V discusses several ways to optimize our algorithm. Section VI reports our experimental results with real-world traffic datasets. Section VII concludes the paper.

II. RELATED WORK

In this section, we first compare our scheme against existing traffic prediction works (i.e., application-related work) and afterwards we compare our work against various classes of online learning techniques (i.e., algorithm- and theory-related work).

A. Traffic prediction

Several traffic prediction techniques have been studied in the past. The majority of these techniques focus on predicting traffic in typical conditions (e.g., morning rush hours) [4] [7] [9] [10], and more recently in the presence of accidents, e.g., [5] [4]. Both qualitative [11] and quantitative [12] approaches have been used to measure the impact of an accident on road networks, and various machine learning techniques have been applied to predict the typical traffic conditions and the impact of accidents, including Naive Bayesian [13], Decision Tree Learning [6], and Nearest Neighbor [14].
The main differences between our work and the existing studies on traffic prediction are: 1) All existing techniques for traffic prediction are aimed at predicting traffic in specific traffic situations, e.g., either typical conditions or when accidents occur. Instead, our scheme is applicable to all traffic situations and learns to match the current traffic situation to the best traffic prediction model, by exploiting spatiotemporal and other context similarity information. 2) All existing techniques used for traffic prediction deploy models learned offline (i.e., they rely on a priori training sessions) or retrained after long periods, and thus they cannot adapt to (learn from) dynamically changing traffic situations. Instead, our scheme is able to dynamically adapt to the changing traffic situations on the fly and improve the traffic prediction over time as additional traffic data is received. 3) Most existing works are based on empirical studies and do not offer rigorous performance guarantees for traffic prediction. Instead, our scheme is able to provide both short-term and long-term performance bounds in dynamically changing traffic situations.

B. Ensemble learning

Our framework builds a hybrid traffic predictor on top of a set of base predictors and thus, it appertains to the class of ensemble learning techniques. Traditional ensemble schemes [15] [16] for data analysis are mostly focused on analyzing offline datasets; examples of these techniques include bagging [15] and boosting [16]. In the past decade much work has been done to develop online versions of such ensemble techniques. For example, an online version of AdaBoost is described in [17]. Another strand of literature on online ensemble learning is represented by the weight update schemes [18] [19] [20] [21] [22] that maintain a collection of given predictors, predict using a weighted majority rule, and update online the weights associated with the learners. Most of these schemes develop multiplicative update rules [18] [19] [20]. For example, weighted majority in [18] decreases the weights of predictors in the pool that disagree with the true label whenever the ensemble makes a mistake. Additive weight update is adopted in [21], where the weights of learners that predict correctly are increased by a certain amount. In [22], weights of the learners are updated based on stochastic gradient descent. Most prior works on ensemble learning provide algorithms which only asymptotically converge to an optimal or locally-optimal solution, without providing any rates of convergence. On the contrary, we not only prove convergence results, but we are also able to explicitly characterize the performance loss incurred at each time step with respect to the omniscient oracle (i.e., complete knowledge) benchmark which knows the accuracies of all classifiers for the data. Specifically, we prove regret bounds that hold uniformly over time. Moreover, existing works do not exploit the contextual information associated with the data.
In contrast, we focus on how contextual specialization of predictors can be discovered over time to create a strong predictor from many weak predictors in different contexts.

C. Contextual multi-armed bandits

The proposed algorithm is analyzed using techniques developed for contextual multi-armed bandit (MAB) problems. Previously, MAB methods were applied to solve problems in clinical trials [23] [24], multi-user communication networks [25], web advertising [26], recommender systems [27] [28] and stream mining systems [29] [30]. A key advantage of MAB methods as compared to other online learning methods is that they can provide a bound on the convergence speed as well as a bound on the loss due to learning compared to an oracle solution which requires knowledge of the stochastic model of the system; this loss is called the regret. To the authors' best knowledge, this paper is the first attempt to develop MAB methods to solve spatiotemporal traffic prediction problems for intelligent transportation systems. In existing works on MAB, the learner can only observe the reward of the selected action (which in our case is the base predictor). In contrast, in this paper, the learner can observe the rewards of all predictors, since the prediction action does not have an explicit impact on reward realization. This makes the learning much faster than in existing works. Moreover, this paper also studies which context dimension (or set of context dimensions) is most relevant to the traffic prediction problem, thereby reducing the implementation complexity and providing guidelines for traffic policy making by identifying the causes of various traffic situations.

III. PROBLEM FORMULATION

A. Problem setting

Figure 1 illustrates the system model under consideration. We consider a set of locations L where traffic sensors are deployed. These locations can be either on highways or arterial streets. We consider an infinite-horizon discrete time system t = 1, 2, ...
where in each slot t a traffic prediction request from one of the locations l_o ∈ L arrives to the system in sequence. Given the current traffic speed x_t at this location, the goal is to predict the traffic speed ŷ_t at some predetermined future time, e.g., in the next 15 minutes or in the next 2 hours. Note that the notation t is only used to order the requests according to their relative arrival time. Each request can come from any location in L at any time in a day, thereby posing a spatiotemporal prediction problem. Each request is associated with a set of traffic context information which is provided by the road sensors. The context information can include but is not limited to:
- The location context, e.g., the longitude and latitude of the requested location l_o, the location type (highway, arterial way), the area type (business district, residential).
- The time context, e.g., whether on a weekday or weekend, at daytime or night, in the rush hour or not, etc.
- The incident context, e.g., whether a traffic incident occurred nearby and how far away from l_o, the type of the incident, the number of affected lanes, etc.
- Other contexts such as weather (temperature, humidity, wind speed, etc.), temporary events, etc.

We use the notation θ_t ∈ Θ to denote the context information associated with the t-th request, where Θ is a D-dimensional space and D is the number of types of context used. Without loss of generality, we normalize the context space Θ to be [0, 1]^D. For example, time in a day can be normalized with respect to 24 hours. The system maintains a set of K base predictors F that take as input the current speed x_t, sent by the road sensors, and output the predicted speed f(x_t) at the predetermined future time at location l_o. These base predictors are trained and constructed using historical data for K representative traffic situations before the system operates. However, their performance is unknown for the other traffic situations, since these change over time.
We aim to build a hybrid predictor that selects the most effective predictor for the real-time traffic situations by exploiting the traffic context information. Thus, for each request, the system selects the prediction result of one of the base predictors as the final traffic prediction result, denoted by y_t. The prediction result can be consumed by third-party applications such as navigation.
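The context normalization assumed in the formulation above (mapping each raw feature into [0, 1] so that Θ = [0, 1]^D) can be sketched as follows. This is a minimal illustration, not the authors' code; the feature names and value ranges are invented for the example:

```python
def normalize_context(raw, ranges):
    """Map each raw feature value into [0, 1] given (lo, hi) bounds.

    `raw` is a dict of feature name -> raw value; `ranges` gives the
    assumed bounds per feature. Values are clamped to the unit interval.
    """
    ctx = []
    for name, (lo, hi) in ranges.items():
        v = (raw[name] - lo) / (hi - lo)
        ctx.append(min(max(v, 0.0), 1.0))  # clamp to [0, 1]
    return tuple(ctx)

# Illustrative feature ranges (assumptions, not from the paper).
ranges = {
    "hour_of_day": (0.0, 24.0),    # time in a day normalized by 24 hours
    "temperature_f": (-20.0, 120.0),
    "miles_from_ref": (0.0, 50.0),
}
theta = normalize_context(
    {"hour_of_day": 7.5, "temperature_f": 68.0, "miles_from_ref": 5.0},
    ranges,
)
# theta is now a point in [0, 1]^3
```

Each request's context θ_t would then be such a point in the unit hypercube, regardless of the original units of its features.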

[Fig. 1. System Diagram.]

Eventually, the real traffic at the predetermined future time for the t-th request, denoted by ŷ_t, is revealed. We also call ŷ_t the ground-truth label for the t-th request. For now we assume that the label is revealed for each request at the end of each prediction. In reality, the label can arrive with delay or even be missing. We will consider these scenarios in Section V. By comparing the system-predicted traffic y_t and the true traffic ŷ_t, a reward r_t is obtained according to a general reward function r_t = R(y_t, ŷ_t). For example, a simple reward function indicates the accuracy of the prediction, i.e., R(y_t, ŷ_t) = I(y_t = ŷ_t), where I(·) is the indicator function. The system obtains a reward of 1 only if the prediction is correct and 0 otherwise. Other reward functions that depend on how close the prediction is to the true label can also be adopted. As mentioned, each base predictor is a function f of the current traffic x_t which outputs the future traffic prediction y_t. Since for a given x_t the true future traffic ŷ_t is a random variable, the reward obtained by selecting a predictor f, i.e., R(f(x_t), ŷ_t), is also a random variable at each t. The effectiveness of a base predictor is measured by its expected reward, which depends on the underlying unknown joint distribution of x_t and ŷ_t. The effectiveness of a base predictor in a traffic context θ is thus its expected reward conditional on θ, determined by the underlying unknown joint distribution of x_t and ŷ_t conditional on the event θ. Let π_f(θ) = E{R(f(x), ŷ) | θ} be the expected reward of a predictor f in context θ.
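As a concrete illustration of such reward functions, the snippet below sketches the indicator reward and one possible "closeness" variant. The linear shape and the scale constant of the second function are our own illustrative choices, not specified in the paper:

```python
def indicator_reward(y_pred, y_true):
    """R(y, y_hat) = I(y = y_hat): 1 for an exact match, 0 otherwise."""
    return 1.0 if y_pred == y_true else 0.0

def closeness_reward(y_pred, y_true, scale=60.0):
    """A reward in [0, 1] that grows as the predicted speed approaches
    the realized speed. The linear form and the `scale` divisor (here,
    a nominal speed range in mph) are illustrative assumptions."""
    return max(0.0, 1.0 - abs(y_pred - y_true) / scale)
```

For speeds quantized into discrete congestion levels, the indicator reward applies directly; for raw speed values, a closeness-style reward avoids assigning zero reward to near-misses.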
However, the base predictors are constructed using historical data and thus, their expected reward is unknown a priori for real-time situations, which may vary over time. Therefore, the system will continuously revise its selection of base predictors as it learns the base predictors' expected rewards in the current context better and better.

B. Spatiotemporal prediction and multi-predictor diversity gain

By taking the traffic context information into consideration when making traffic predictions, we exploit multi-predictor diversity to improve the prediction performance. To get a sense of where the multi-predictor diversity gain comes from, consider the simple example in Figure 2, which shows the expected rewards of various base predictors. Since traffic prediction is a spatiotemporal problem, we use both the time and the location of the traffic as the context information. Given a location 5 miles from the reference location, we have three predictors constructed for three representative traffic situations: morning around 6am, afternoon around 2pm and evening around 7pm. These predictors work effectively in their corresponding situations but may not work well in other time contexts, due to the different traffic conditions at different times of the day. If we use the same predictor for the entire day, then the average prediction performance can be very bad. Instead, if we use the predictors for traffic situations that are similar to their representative situations, then much better prediction performance can be obtained. However, the challenge is when to use which predictor, since the effectiveness of the base predictors is unknown for every traffic context. For example, the three base predictors (0 miles, 5 miles, 10 miles) given time 12pm have complex expected reward curves, which need to be learned over time to determine which predictor is best at different locations.

[Fig. 2. Spatiotemporal prediction and multi-predictor diversity gain.]

C. Performance metric for our algorithm

The goal of our system is to learn the optimal hybrid predictor which selects the most effective base predictor for each traffic situation. Since we do not have complete knowledge of the performance of all base predictors for all contexts in the online environment, we will develop online learning algorithms that learn to select the best predictors for different traffic contexts over time. The benchmark when evaluating the performance of our learning algorithm is the optimal hybrid predictor that is constructed by an oracle that has complete information about the expected rewards of all base predictors in all situations. For a traffic context θ, the optimal base predictor selected in the oracle benchmark is

f_opt(θ) := arg max_{f ∈ F} π_f(θ), ∀θ ∈ Θ   (1)

Let σ be a learning algorithm and σ(t) be the predictor selected by σ at time t; then the regret of learning by time T is defined as the aggregate reward difference between our learning algorithm and the oracle solution up to T, i.e.,

Reg(T) := Σ_{t=1}^{T} π_{f_opt(θ_t)}(θ_t) − E[ Σ_{t=1}^{T} R(f_{σ(t)}(x_t), ŷ_t) ]   (2)

where the expectation is taken with respect to the randomness of the prediction, the true traffic realization and the predictors selected. The regret characterizes the loss incurred due to the unknown transportation system dynamics and gives the convergence rate of the total expected reward of the learning algorithm to the value of the optimal hybrid predictor in (1). The regret is non-decreasing in the total number of requests T, but we want it to increase as slowly as possible. Any algorithm whose regret is sublinear in T, i.e., Reg(T) = O(T^γ) with γ < 1, will converge to the optimal solution in terms of the average reward, i.e., lim_{T→∞} Reg(T)/T = 0. The regret of learning also gives a measure of the rate of learning.
A smaller γ results in faster convergence to the optimal average reward, and thus learning the optimal hybrid predictor is faster if γ is smaller.

IV. CONTEXT-AWARE ADAPTIVE TRAFFIC PREDICTION

A natural way to learn a base predictor's performance in a non-representative traffic context is to record and update its sample mean reward as additional data (i.e., traffic requests) in the same context arrives. Using such a sample mean-based approach to construct a hybrid predictor is the basic idea of our learning algorithm; however, significant challenges still remain. On the one hand, exploiting the context information can potentially boost the prediction performance, as it provides ways to construct a strong hybrid predictor, as suggested in Section III(B). Without the context information, we would only learn the average performance of each predictor over all contexts, and thus a single base predictor would always be selected even though on average it does not perform well. On the other hand, building the optimal hybrid predictor can be very difficult since the context space Θ can be very large and the value space can even be continuous. Thus, the sample mean reward approach would fail to work efficiently due to the small number of samples for each individual context θ. Our method to overcome this problem is to dynamically partition the entire context space into multiple smaller context subspaces and maintain and update the sample mean reward estimates for each subspace. This is justified by the fact that the expected rewards of a predictor are likely to be similar for similar contexts. For instance, similar weather conditions would have similar impacts on the traffic at close locations. Next, we propose an online prediction algorithm that adaptively partitions the context space according to the traffic prediction request arrivals on the fly and guarantees sublinear learning regret.
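The per-subspace bookkeeping described above can be sketched as a small data structure. This is our own illustrative rendering, not the paper's implementation: each subspace holds, for every base predictor, a running sample-mean reward and an arrival counter:

```python
class SubspaceStats:
    """Sample-mean reward estimates for one context subspace C."""

    def __init__(self, num_predictors):
        self.mean = [0.0] * num_predictors   # mean reward of each predictor f in C
        self.count = [0] * num_predictors    # samples behind each mean
        self.arrivals = 0                    # M_C: context arrivals to C

    def record(self, f, reward):
        """Fold one observed reward for predictor f into its running mean."""
        self.count[f] += 1
        # incremental sample-mean update, avoids storing all past rewards
        self.mean[f] += (reward - self.mean[f]) / self.count[f]

    def best_predictor(self):
        """Index of the predictor with the highest sample-mean reward."""
        return max(range(len(self.mean)), key=lambda f: self.mean[f])
```

The incremental update keeps memory constant per subspace, which matters once the partition grows to many small subspaces.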

A. Algorithm description

In this subsection, we describe the proposed online adaptive traffic prediction algorithm (CA-Traffic). First we introduce several useful concepts for describing the proposed algorithm.

Context subspace. A context subspace $C$ is a subspace of the entire context space $\Theta$, i.e. $C \subseteq \Theta$. In this paper, we consider only context subspaces that are created by uniformly partitioning the context space on each dimension, which is enough to guarantee sublinear learning regrets. Thus, each context subspace is a $D$-dimensional hypercube with side length $2^{-l}$ for some $l$. We call such a hypercube a level-$l$ subspace. For example, when the entire context space is $[0, 1]$, namely the context dimension is $D = 1$, the entire context space is a level-0 subspace; $[0, 1/2)$ and $[1/2, 1]$ are two level-1 subspaces; $[0, 1/4)$, $[1/4, 1/2)$, $[1/2, 3/4)$, $[3/4, 1]$ are four level-2 subspaces, etc.

Context space partition. A context space partition $P$ is a set of non-overlapping context subspaces that cover the entire context space. For example, when $D = 1$, $\{[0, 1]\}$ and $\{[0, 1/2), [1/2, 3/4), [3/4, 1]\}$ are two context space partitions. Since our algorithm adaptively partitions the context space by removing subspaces from the partition and adding new subspaces into it, the context space partition is time-varying, depending on the context arrival process of the traffic requests. Initially, the context space partition includes only the entire context space, i.e. $P_0 = \{\Theta\}$.

Active context subspace. A context subspace $C$ is active if it is in the current context space partition $P_t$ at time $t$. For each active context subspace $C \in P_t$, the algorithm maintains the sample mean reward estimate $\bar{r}_f^t(C)$ for each predictor $f$ for the context arrivals to this subspace. For each active subspace $C \in P_t$, the algorithm also maintains a counter $M_C^t$ that records the number of context arrivals to $C$. The algorithm works as follows (see Algorithm 1).
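The dyadic subspaces above admit a very compact sketch: the level-$l$ cube of $[0,1]^D$ containing a context vector is found per dimension by truncation. This is a minimal illustration under our own naming, not code from the paper.

```python
# Sketch: locate the level-l hypercube (side length 2**-l) of [0,1]^D that
# contains a context vector, following the uniform dyadic partitioning
# described above. Function and variable names are ours.

def level_l_subspace(theta, l):
    """Return the lower corner of the level-l hypercube containing theta."""
    side = 2.0 ** -l
    # Clamp 1.0 into the last cell so the closed upper boundary is covered.
    return tuple(min(int(x / side), 2 ** l - 1) * side for x in theta)

# D = 1 examples matching the text: [0,1/4), [1/4,1/2), ... at level 2.
print(level_l_subspace((0.3,), 2))      # -> (0.25,): the cell [1/4, 1/2)
print(level_l_subspace((0.7, 0.2), 1))  # -> (0.5, 0.0): [1/2,1] x [0,1/2)
```

The closed right endpoint (e.g. $[3/4, 1]$) is handled by clamping, mirroring the convention used in the level-2 example in the text.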
We describe the algorithm in two parts. The first part (lines 3-8) is the predictor selection and reward estimate update. When a traffic prediction request comes, the current and past traffic speed vector $x_t$ along with the traffic context information $\theta_t$ are sent to the system. The algorithm first checks to which active subspace $C_t \in P_t$ in the current partition $P_t$ the context $\theta_t$ belongs (line 3). Next, the algorithm activates all predictors and obtains their predictions $f(x_t), \forall f \in F$, given the input $x_t$ (line 4). However, it selects only one of the predictions as the final prediction $y_t$, according to the following rule (line 5):

$$y_t = f^*(x_t) \text{ where } f^* = \arg\max_{f \in F} \bar{r}_f^t(C_t) \quad (3)$$

In words, the selected base predictor has the highest reward estimate for the context subspace $C_t$ among all predictors. This is an intuitive selection based on the sample mean rewards. Next, the counter $M_C^t$ is incremented by 1 since we have one more sample in $C$. When the true traffic pattern $\hat{y}_t$ is revealed (line 6), the sample mean reward estimates for all predictors are updated (lines 7-8).

Fig. 3. An illustration of the context space partitioning in a 2-dimensional space: the lower left subspace is further partitioned into 4 smaller subspaces because the partition condition is satisfied.

The second part of the algorithm, namely the adaptive context space partitioning, is the key of our algorithm (lines 9-11). At the end of each slot $t$, the algorithm decides whether to further partition the current subspace $C_t$, depending on whether we have seen sufficiently many request arrivals in $C_t$. More specifically, if $M_C^t \geq A 2^{pl}$, then $C_t$ will be further partitioned (line 9), where $l$ is the subspace level of $C_t$, and $A > 0$ and $p > 0$ are two design parameters.
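The selection rule (3) and the sample-mean update can be sketched as a small per-subspace statistics object; the class name and the illustrative reward values are ours, not the paper's.

```python
# Sketch of the per-subspace bookkeeping: select the predictor with the
# highest sample-mean reward (rule (3)) and update every predictor's
# estimate once the true label is revealed. Reward values are illustrative.

class SubspaceStats:
    def __init__(self, n_predictors):
        self.reward_sum = [0.0] * n_predictors
        self.count = [0] * n_predictors
        self.arrivals = 0  # the counter M_C^t

    def select(self):
        # f* = argmax_f rbar_f(C): highest sample-mean reward so far
        # (ties and unsampled predictors default to the first index).
        means = [s / c if c else 0.0 for s, c in zip(self.reward_sum, self.count)]
        return max(range(len(means)), key=means.__getitem__)

    def update(self, rewards):
        # Once y_hat is revealed, all predictors' estimates are updated.
        self.arrivals += 1
        for f, r in enumerate(rewards):
            self.reward_sum[f] += r
            self.count[f] += 1

stats = SubspaceStats(3)
stats.update([1.0, 0.0, 1.0])   # predictors 0 and 2 were correct
stats.update([0.0, 0.0, 1.0])   # only predictor 2 was correct
print(stats.select())           # -> 2
```

Because every base predictor's prediction is generated anyway (line 4 of Algorithm 1), all estimates can be updated from a single revealed label, which is what `update` reflects.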
When partitioning is needed, $C_t$ is uniformly partitioned into $2^D$ smaller hypercubes (each hypercube is a level-$(l+1)$ subspace with side length half that of $C_t$). Then $C_t$ is removed from the active context subspace set $P$ and the new subspaces are added into $P$ (line 11). In this way, $P$ remains a partition whose subspaces are non-overlapping and cover the entire context space. Figure 3 provides an illustrative example of the context space partitioning for a 2-dimensional context space. The current context space partition $P_t$ is shown in the left plot and the current subspace $C_t$ is the shaded bottom left square. When the partitioning condition is satisfied, $C_t$ is further split into four smaller squares. Intuitively, the context space partitioning process helps refine the learning in smaller subspaces. In the next subsection, we show that by carefully choosing the design parameters $A$ and $p$, we can achieve sublinear learning regret in time, which implies that the optimal time-average prediction performance can be achieved.

B. Learning regret analysis

In this subsection, we analyze the regret of the proposed traffic prediction algorithm. To enable this analysis, we make a technical assumption that each base predictor achieves similar expected rewards (accuracies) for similar contexts; this is formalized in terms of a Hölder condition.

Assumption. For each $f \in F$, there exist $L > 0$, $\alpha > 0$ such that for all $\theta, \theta' \in \Theta$, we have

$$|\pi_f(\theta) - \pi_f(\theta')| \leq L \|\theta - \theta'\|^{\alpha} \quad (4)$$

This is a natural and reasonable assumption in traffic prediction problems since similar contexts lead to similar impacts on the prediction outcomes. Note that $L$ is not required
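The split step described above can be sketched as follows; the parameter values for $A$ and $p$ are illustrative placeholders, and the function name is ours.

```python
# Sketch of the partitioning step: when a level-l cube has seen at least
# A * 2**(p*l) arrivals, split it into the 2**D level-(l+1) children.
# A and p are the design parameters from the text; values here are illustrative.
import itertools

def maybe_split(corner, l, arrivals, A=1.0, p=2.0):
    """Return the 2**D child lower-corners, or None if no split is due."""
    if arrivals < A * 2 ** (p * l):
        return None
    D = len(corner)
    half = 2.0 ** -(l + 1)   # child side length is half the parent's
    return [tuple(c + b * half for c, b in zip(corner, bits))
            for bits in itertools.product((0, 1), repeat=D)]

# A 2-D level-0 cube with enough arrivals splits into its 4 quadrants:
print(maybe_split((0.0, 0.0), l=0, arrivals=1))
```

The returned corners replace the parent in the active partition, so the partition keeps covering the context space without overlap, exactly as required in line 11 of Algorithm 1.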

Algorithm 1 Context-Aware Traffic Prediction (CA-Traffic)
1: Initialize $P_0 = \{\Theta\}$, $\bar{r}_f(\Theta) = 0, \forall f \in F$, $M_\Theta^0 = 0$.
2: for each traffic prediction request (time slot $t$) do
3:   Determine $C_t \in P_t$ such that $\theta_t \in C_t$.
4:   Generate the prediction results for all predictors $f(x_t), \forall f \in F$.
5:   Select the final prediction $y_t = f^*(x_t)$ according to (3).
6:   The true traffic pattern $\hat{y}_t$ is revealed.
7:   Update the sample mean rewards $\bar{r}_f(C_t), \forall f \in F$.
8:   $M_C^t = M_C^t + 1$.
9:   if $M_C^t \geq A 2^{pl}$ then
10:    $C_t$ is further partitioned.
11:  end if
12: end for

to be known and that an unknown $\alpha$ can be estimated online using the sample mean estimates of accuracies for similar contexts; our proposed algorithm can be modified to include the estimation of $\alpha$. To facilitate the analysis, we artificially create two learning steps in the algorithm: each traffic prediction request at time $t$ belongs to either a virtual exploration step or a virtual exploitation step. We introduce a control function $\zeta(t) = 2^{2\alpha l} \log(t)$ that determines which virtual step a time slot $t$ belongs to, where $l$ is the level of the current active subspace $C_t$. Specifically, the decision depends on the counter $M_C^t$ of the current active context subspace: if $M_C^t \leq \zeta(t)$, then $t$ is a virtual exploration step; otherwise, it is a virtual exploitation step. Note that these steps are not part of the algorithm; they are used solely for the regret analysis. Next, we introduce some notation for the regret analysis. Let $E_{f,C}^t$ be the set of rewards collected from choosing predictor $f$ by time $t$ for subspace $C$. For each subspace $C$, let $f^*_C$ be the predictor which is optimal for the center context of that subspace. Let $\bar{\pi}_{f,C} := \sup_{\theta \in C} \pi_f(\theta)$ and $\underline{\pi}_{f,C} := \inf_{\theta \in C} \pi_f(\theta)$. For a level-$l$ subspace $C$, we define the set of suboptimal predictors as

$$S_{C,l,B} := \{f : \underline{\pi}_{f^*_C,C} - \bar{\pi}_{f,C} > B 2^{-\alpha l}\} \quad (5)$$

where $l$ is the level of subspace $C$, $B > 0$ is a constant, and $L$ and $\alpha$ are the Hölder condition parameters. We also define $\beta_a := \sum_{t=1}^{\infty} 1/t^a$.
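The virtual-step classification used in the analysis is a simple threshold test against the control function $\zeta(t)$; a minimal sketch follows, with an illustrative value for $\alpha$.

```python
# Sketch of the virtual step classification used only in the analysis:
# a slot counts as exploration while the subspace counter is below
# zeta(t) = 2**(2*alpha*l) * log(t). The alpha value is illustrative.
import math

def is_virtual_exploration(M_C, t, l, alpha=0.5):
    """True if slot t is a virtual exploration step for a level-l subspace."""
    return M_C <= 2 ** (2 * alpha * l) * math.log(t)

# Early slots in a fresh level-1 subspace are exploration, later ones are not:
print(is_virtual_exploration(M_C=1, t=100, l=1))   # True: 1 <= 2*log(100)
print(is_virtual_exploration(M_C=50, t=100, l=1))  # False
```

Because the threshold grows only logarithmically in $t$, exploration slots become a vanishing fraction of all slots, which is what drives the sublinear exploration regret bounded in Lemma 2.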
The prediction regret can be written as a sum of three terms:

$$Reg(T) = Reg_e(T) + Reg_s(T) + Reg_n(T) \quad (6)$$

where $Reg_e(T)$ is the regret due to virtual exploration steps by time $T$, $Reg_s(T)$ is the regret due to sub-optimal predictor selection in the virtual exploitation steps by time $T$, and $Reg_n(T)$ is the regret due to near-optimal predictor selection in the virtual exploitation steps by time $T$. We bound these three terms separately to obtain the complete regret. The first lemma gives an upper bound on the highest level of an active subspace at any time $t$.

Lemma 1. Any active subspace $C \in P_t$ has a level at most $\lceil \log(t)/p \rceil + 1$.

The next lemmas bound the regrets for any subspace of level $l$.

Lemma 2. For any level-$l$ subspace, the regret due to virtual explorations by time $t$ is bounded above by $2^{2\alpha l} \log(t) + 1$.

Proof. The result follows from the fact that the number of exploration slots for contexts arriving to context subspace $C$ by time $T$ is upper bounded by $2^{2\alpha l} \log T + 1$.

Lemma 3. For any level-$l$ subspace, given that $2L(\sqrt{D}/2^l)^{\alpha} + (2 - B)2^{-\alpha l} \leq 0$, the regret due to choosing a sub-optimal predictor in the exploitation slots is bounded above by $2K\beta_2$.

Proof. We bound the probability that the algorithm follows a suboptimal predictor in the virtual exploitation slots in a context subspace $C$ with level $l$, and use this to bound the expected number of times a sub-optimal predictor is selected by the algorithm in the virtual exploitation slots in $C$. Let $\lambda_{f,C}^t$ be the event that a sub-optimal predictor $f \in S_C$ is selected at time $t$. Let $\gamma_C^t$ be the event that the request arrival to $C$ belongs to an exploitation slot.
We have R s,c (T ) For some H t > 0, we have T t=1 S C P (λ t,c, γ t C) (7) P (λ t,c, γt C ) P ( rt (C) rt (C), γt C ) P ( r t (C) πt (C) + H t, γ t C ) +P ( r t (C) πt (C) H t, γ t C ) +P ( r t (C) rt (C), r t (C) < πt (C) + H t, r t (C) > πt (C) H t, γ t C ) and or a sub-optimal predictor S C, (8) P ( r t (C) rt (C), rt (C) < πt (C) + H t, r t (C) > πt (C) H t, γc t ) P ( r t,best (C) r t,worst (C), r t,best (C) < π t (C) + L( D/2 l ) α + H t, r t,worst (C) > π t (C) L( D/2 l ) α H t, γc t ) (9) For S C, when 2L( D/2 l ) α + 2H t B2 αl 0 (10) we have (9) equal to zeros. Let H t = 2 αl. Assume that (10) holds. Using a Cherno-Hoeding bound, or any S C, since on the event γc t, the number o samples is greater than 2 2αl log t, we have P ( r t (C) π t (C) + H t, γ t C) e 2(Ht)2 2 2αl log t = 1 t 2 (11) and similarly, P ( r t (C) 1 t 2 (12) Hence, when (10) holds, summing over time and the suboptimal predictors, we get R s,c (T ) 2Kβ 2.

Lemma 4. For any level-$l$ subspace, the regret due to selecting a near-optimal predictor in the virtual exploitation slots is bounded above by $A B 2^{l(p-\alpha)}$.

Proof. If a near-optimal predictor is selected for a level-$l$ subspace $C$, then the one-slot contribution to the regret is at most $B 2^{-\alpha l}$. We multiply this by $A 2^{pl}$, which is the maximum number of slots that $C$ can stay active.

Next, we combine the results from the previous lemmas to obtain our regret bound. Let $W_l(T)$ denote the number of level-$l$ subspaces that are activated by time $T$; we know that $W_l(T) \leq 2^{Dl}$ for any $l$ and $T$. Moreover, from the result of Lemma 1, we know that $W_l(T) = 0$ for $l > \log(T)/p + 1$.

Theorem 1. The regret is upper bounded by

$$Reg(T) \leq \sum_{l=1}^{\lceil \log(T)/(3\alpha) \rceil + 1} W_l(T)\left(2^{2\alpha l}(2LD^{\alpha/2} + 2 + \log(T)) + 2K\beta_2\right) \quad (13)$$

Proof. Consider a subspace $C$; the highest order of regret comes from $Reg_{e,C}(T)$ and $Reg_{n,C}(T)$. The former is on the order of $O(2^{2\alpha l})$ and the latter is on the order of $O(2^{l(p-\alpha)})$. These two are balanced for $p = 3\alpha$. Although choosing $p$ smaller than $3\alpha$ will not make the regret in $C$ larger, it will increase the number of subspaces activated by time $T$, causing an increase in the regret. Since we sum over all activated hypercubes, it is best to choose $p$ as large as possible. In order for the condition in Lemma 3 to hold, $B$ needs to satisfy $B \geq 2(LD^{\alpha/2} + 1)$. Choosing $B = 2(LD^{\alpha/2} + 1)$ minimizes $Reg_{n,C}(T)$.

We have shown that the regret upper bound is sublinear in time, implying that the average traffic prediction reward (e.g. accuracy) achieves the optimal reward as time goes to infinity. Moreover, the bound also holds for any finite time $T$ rather than only asymptotically. Ensuring a fast convergence rate is important for the algorithm to quickly adapt to the dynamically changing environment.

The following corollary establishes the regret bound when the context arrivals are uniformly distributed over the context space. For example, if the context is the location, then the requests come uniformly from the area. This is the worst-case scenario because the algorithm has to learn over the entire context space.

Corollary 1. If the context arrivals by time $T$ are uniformly distributed over the context space, we have

$$Reg(T) \leq T^{\frac{D+2\alpha}{D+3\alpha}} 2^{D+2\alpha}(2LD^{\alpha/2} + 2 + \log(T)) \quad (14)$$
$$\qquad + T^{\frac{D}{D+3\alpha}} 2^{D} \cdot 2K\beta_2 \quad (15)$$

Proof. First we calculate the highest level when context arrivals are uniform. In the worst case, all level-$l$ subspaces stay active and are then deactivated once all level-$(l+1)$ subspaces become active, and so on. Let $l_{max}$ be the maximum subspace level under this scenario. We have

$$\sum_{l=1}^{l_{max}-1} 2^{Dl} 2^{3\alpha l} < T \quad (16)$$

Thus, we must have $l_{max} < 1 + \log T/(D + 3\alpha)$. Therefore,

$$Reg(T) \leq \sum_{l=1}^{1+\log T/(D+3\alpha)} 2^{Dl}\left(2^{2\alpha l}(2LD^{\alpha/2} + 2 + \log(T)) + 2K\beta_2\right) \leq T^{\frac{D+2\alpha}{D+3\alpha}} 2^{D+2\alpha}(2LD^{\alpha/2} + 2 + \log(T)) + T^{\frac{D}{D+3\alpha}} 2^{D} \cdot 2K\beta_2 \quad (17)$$

V. EXTENSIONS

A. Dimension reduction

In the previous section, the context space partitioning is performed on all context dimensions simultaneously. In particular, each context subspace $C$ has dimension $D$, and each time it is further partitioned, $2^D$ new subspaces are added into the context space partition $P$. Thus, learning can be very slow when $D$ is large, since many traffic requests are required to learn the best predictors for all these subspaces. One way to reduce the number of new subspaces created during the partitioning process is to maintain the context partition and subspaces, and to perform the partitioning, for each dimension separately. In this way, each time a partitioning is needed for one dimension, only two new subspaces are created for this dimension. Therefore, at most $2D$ more subspaces are created for each request arrival. Note, however, that most of the time a partitioning is not needed.

The modified algorithm works as follows. For each context dimension (e.g. time, type and distance), we maintain a similar context space and partition structure as in Section III (in other words, the context space dimension is 1 but we have $D$ such spaces). Denote by $P_d^t$ the context space partition for dimension $d$ and by $C_d^t$ the current context subspace for dimension $d$ at time $t$. Note that since we consider only one dimension at a time, $C_d^t$ is a one-dimensional subspace for each $d$. Each time a traffic instance $x_t$ with context $\theta_t$ arrives, we obtain the prediction results of all base predictors given $x_t$. The final prediction $y_t$ is selected according to a different rule than (3), as follows:

$$y_t = f^*(x_t) \text{ where } f^* = \arg\max_{f \in F} \left\{\max_d \bar{r}_f^t(C_d^t)\right\} \quad (18)$$

In words, the algorithm selects the predictor that has the highest reward estimate over all current subspaces among all context dimensions. Figure 4 shows an illustrative example of the predictor selection when we only use the time and location as the contexts. In this example, the time context (10:05am) falls into the subspace at the leftmost quarter (7am - 11am) and the location context (3.7 miles away from a reference location) falls into the right half subspace (2.5 - 5 miles).
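The dimension-wise selection rule (18) can be sketched as follows; the per-dimension reward estimates are hypothetical numbers chosen to mirror the Figure 4 example, and the function name is ours.

```python
# Sketch of the dimension-reduction selection rule (18): keep one 1-D
# partition per context dimension and pick the predictor whose best
# per-dimension sample-mean reward is highest. Estimates are illustrative.

def select_predictor(reward_estimates):
    """reward_estimates[d][f] = rbar_f(C_d^t) for dimension d, predictor f."""
    n_predictors = len(reward_estimates[0])
    # score[f] = max over dimensions d of rbar_f(C_d^t)
    score = [max(reward_estimates[d][f] for d in range(len(reward_estimates)))
             for f in range(n_predictors)]
    return max(range(n_predictors), key=score.__getitem__)

# Time dimension favors predictor 0, location favors predictor 1;
# predictor 1's location estimate (0.9) is the overall maximum, so it wins.
estimates = [
    [0.8, 0.6],  # estimates in the current time-context subspace
    [0.5, 0.9],  # estimates in the current location-context subspace
]
print(select_predictor(estimates))  # -> 1
```

This mirrors the example in the text: each dimension nominates its own best predictor, and the nominee with the largest estimate overall is selected.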

Fig. 5. An illustrative example for context space partition with relevant context: partitioning only occurs on the location context since the partitioning condition is satisfied.

According to the time context dimension, the predictor with the highest reward estimate is Predictor 1, while according to the location context dimension, the predictor with the highest reward estimate is Predictor 2. Overall, the best estimated predictor is Predictor 2, which is selected by the algorithm. After the true traffic $\hat{y}_t$ is observed, the reward estimates for all predictors in all $D$ one-dimensional context subspaces $C_d^t$, $\forall d$, are updated. The $D$ partitions $P_d^t$, $\forall d$, are also updated in a similar way as before, depending on whether there have been sufficiently many traffic requests with contexts in the current subspaces. Figure 5 illustrates the context space partition for each individual dimension. In this example, only the location context satisfies the partitioning condition and hence its right half subspace is further partitioned.

B. Relevant context dimension

While using all context dimensions is supposed to provide the most refined information and thus lead to the best performance, it is equally important to investigate which dimension or set of dimensions is the most informative for a specific traffic situation. The benefits of revealing the most relevant context dimension (or set of dimensions) are manifold, including reduced cost of context information retrieval and transmission, reduced algorithmic and computational complexity, and targeted active traffic control. In the extreme case, a context dimension (e.g. the time) is not informative at all if, for all values of the context along this dimension, the best traffic predictor is the same. Hence, having this context dimension does not add benefits for the traffic prediction but only incurs additional cost.
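Identifying the most informative dimension amounts to comparing the average reward each dimension achieves on its own; a minimal sketch follows, with made-up reward sequences and our own function name.

```python
# Sketch of estimating the most relevant context dimension: track the
# time-average prediction reward obtained when using each dimension alone,
# then report the argmax. All reward values are made up for illustration.

def most_relevant_dimension(rewards_per_dim):
    """rewards_per_dim[d] is the list of rewards collected using dimension d."""
    averages = [sum(r) / len(r) for r in rewards_per_dim]
    return max(range(len(averages)), key=averages.__getitem__)

# Dimension 0 (e.g. time) yields a higher average reward than dimension 1:
print(most_relevant_dimension([[1, 1, 0, 1], [1, 0, 0, 1]]))  # -> 0
```

Since each single-dimension learner has vanishing time-average regret, these running averages converge to the true per-dimension rewards, so the argmax eventually identifies the genuinely most relevant dimension.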
For expositional clarity, in the following we focus only on the most relevant context. The extension to the $k$ most relevant context dimensions ($k < D$) is straightforward. Let $\pi_f(\theta_d)$ be the expected prediction reward of predictor $f \in F$ when the context along the $d$-th dimension is $\theta_d$, and let $f^*(\theta_d) = \arg\max_f \pi_f(\theta_d)$ be the predictor with the highest expected reward given $\theta_d$. Then the expected reward if we only use the $d$-th dimension context information is $R_d = E_{\theta_d}\{\pi_{f^*(\theta_d)}(\theta_d)\}$, where the expectation is taken over the distribution of the $d$-th dimension context. The most relevant context dimension is defined to be $d^* = \arg\max_d R_d$. Our framework can easily be extended to determine the most relevant context dimension. For each dimension, we maintain the same partition and subspace structure as in Section III (with $D = 1$). In addition, we maintain the time-average prediction reward $\bar{R}_d^t$ for each dimension $d$. The estimated most relevant dimension at time $t$ is thus $(d^*)^t = \arg\max_d \bar{R}_d^t$.

Theorem 2. The estimated most relevant dimension converges to the true most relevant dimension, i.e. $\lim_{t\to\infty} (d^*)^t = d^*$.

Proof. Since for each dimension $d$ the time-average regret tends to 0 as $t \to \infty$, the time-average reward $\bar{R}_d^t \to R_d$ as $t \to \infty$. Therefore, the most relevant dimension is also revealed as $t \to \infty$.

C. Missing and delayed feedback

The proposed algorithm requires knowledge of the true label $\hat{y}_t$ of the predicted traffic to update the reward estimates of the different predictors so that their true performance can be learned. In practice, the feedback about the true traffic label $\hat{y}_t$ can be missing or delayed due to, for example, delayed traffic reports and temporary sensor outages. In this subsection, we make small modifications to the proposed algorithm to deal with such scenarios. Consider first the case when the feedback is missing with probability $p_m$.
The algorithm is modified so that it updates the sample mean rewards and performs context space partitioning only for requests in which the true label is revealed. Let $Reg_m(T)$ denote the regret of the modified algorithm with missing feedback; we have the following result.

Proposition 1. Suppose the feedback about the true label is missing with probability $p_m$; then we have

$$Reg_m(T) \leq \sum_{l=1}^{\lceil \log(T)/(3\alpha) \rceil + 1} W_l(T)\left(2^{2\alpha l}\left(2LD^{\alpha/2} + 2 + \frac{1}{1-p_m}\log(T)\right) + 2K\beta_2\right) \quad (19)$$

Proof. Missing labels cause more virtual exploration slots to be needed to learn the performance of the base predictors accurately enough. In expectation, $\frac{1}{1-p_m}$ times as many virtual exploration slots are required. Hence, the regret due to virtual exploration increases to $\frac{1}{1-p_m}$ of its former value. The regret due to virtual exploitation slots is not affected since the control function $\zeta(t)$ ensures the reward estimates are accurate enough. Using the original regret bound and taking into account the increased regret due to virtual exploration, we obtain the new regret bound.

Consider next the case when the feedback is delayed. We assume that the true label of the request at $t$ is observed at most $L_{max}$ slots later. The algorithm is modified so that it keeps in its memory the last $L_{max}$ labels, and the reward estimates are updated whenever the corresponding true label is revealed. Let $Reg_d(T)$ denote the regret of the modified algorithm with delayed feedback. We then have the following result.

Proposition 2. Suppose the feedback about the true label is delayed by at most $L_{max}$ slots; then we have

$$Reg_d(T) \leq L_{max} + Reg(T) \quad (20)$$

Proof. A new sample is added to the sample mean accuracy whenever the true label of a previous prediction arrives. The

Fig. 4. An illustrative example for predictor selection with separately maintained context partitions: a request with context (10:05am and 3.7 miles away from the reference location) arrives; Predictor 1 is the best for the time context and Predictor 2 is the best for the location context; Predictor 2 is the finally selected predictor.

worst case is when all labels are delayed by $L_{max}$ time steps. This is equivalent to starting the algorithm with an $L_{max}$ delay.

The above two propositions show that missing and delayed labels reduce the learning speed. However, since the regret bounds are still sublinear in time $T$, the time-average reward converges to the optimal reward as $T \to \infty$. This shows that our algorithm is robust to errors caused by uncertain traffic conditions.

VI. EXPERIMENTS

A. Experimental setup

1) Dataset: Our experiment utilizes a very large real-world traffic dataset, which includes both real-time and historically archived data since 2010. The dataset consists of two parts: (i) Traffic sensor data from 9300 traffic loop-detectors located on the highways and arterial streets of Los Angeles County (covering 5400 miles cumulatively). Several main traffic parameters such as occupancy, volume and speed are collected in this dataset at the rate of 1 reading per sensor per minute. (ii) Traffic incident data. This dataset contains the traffic incident information in the same area as the traffic sensor dataset. On average, 400 incidents occur per day, and the dataset includes detailed information on each incident, including its severity and location as well as the incident type, etc.

2) Evaluation Method: The proposed method is suitable for any spatiotemporal traffic prediction problem.
In our experiments, the prediction requests come from a freeway segment of 3.4 miles on interstate freeway 405 (I-405) during daytime, 8am to 5pm. Figure 6 shows the freeway segment used in the experiment. Locations are referred to using the distance from the reference location A. For each request from location $l_o$, the system aims to predict whether the traffic will be congested at $l_o$ in the next 15 minutes using the current traffic speed data. If the traffic speed drops below a threshold $\lambda$, then the location is labeled as congested, denoted by $\hat{y} = 1$; otherwise, the location is labeled as not congested, denoted by $\hat{y} = -1$. We will show the results for different values of $\lambda$.

Fig. 6. Freeway segment used in the experiment.

We use the simple binary reward function for evaluation. That is, the system obtains a reward of 1 if the prediction is correct and 0 otherwise. Therefore, the reward represents the prediction accuracy. The context information that we use in the experiments includes the time when the prediction request is made and the location where the request comes from. These contexts capture the spatiotemporal features of the considered problem. Nevertheless, other context information mentioned in Section III(A) can also be adopted in our algorithm. Using historical data, we construct 6 base predictors (Naive Bayes) for 6 representative situations with context information from the set {8am, 12pm, 4pm} × {0 miles, 3.4 miles}. These are representative traffic situations since 8am represents the morning rush hour, 12pm represents non-rush hours, 4pm represents the afternoon rush hour, 0 miles is at the freeway intersection and 3.4 miles is the farthest location from the intersection in the considered freeway segment.
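The labeling rule above can be sketched in a few lines; the speed values are hypothetical, and the default threshold matches one of the $\lambda$ values reported in the experiments.

```python
# Sketch of the experimental labeling rule: a location is "congested"
# (y = +1) when the measured speed drops below threshold lambda,
# otherwise y = -1. Speed values below are hypothetical.

def congestion_label(speed_mph, lam=50.0):
    """Binary congestion label used for the prediction target."""
    return +1 if speed_mph < lam else -1

print(congestion_label(32.0))             # -> 1  (below the 50 mph threshold)
print(congestion_label(62.0))             # -> -1
print(congestion_label(32.0, lam=30.0))   # -> -1 (not congested at 30 mph)
```

Note how the same speed reading flips label as $\lambda$ changes, which is why the experiments report accuracies for several thresholds.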

TABLE I: OVERALL PREDICTION ACCURACY.

             CA-Traffic   MU     AU     GDU
λ = 50mph    0.94         0.83   0.83   0.82
λ = 30mph    0.91         0.82   0.80   0.78

3) Benchmarks: Since our scheme belongs to the class of online ensemble learning techniques, we compare it against several such approaches. These benchmark solutions assign weights to the base predictors but use different rules to update the weights. Denote the weight for base predictor $f$ by $w_f$. The final traffic prediction depends on the weighted combination of the predictions of the base predictors:

$$y = \begin{cases} +1, & \text{if } \sum_{f \in F} w_f y_f \geq 0 \\ -1, & \text{otherwise} \end{cases} \quad (21)$$

Three approaches are used to update the weights:

Multiplicative Update (MU) [18] [19]: If the prediction is correct for predictor $f$, i.e. $y_f = \hat{y}$, then $w_f \leftarrow \alpha w_f$, where $\alpha > 1$ is a constant; otherwise, $w_f \leftarrow w_f/\alpha$.

Additive Update (AU) [21]: If the prediction is correct for predictor $f$, i.e. $y_f = \hat{y}$, then $w_f \leftarrow w_f + 1$; otherwise, $w_f \leftarrow w_f$.

Gradient Descent Update (GDU) [22]: The weight of predictor $f$ is updated as $w_f \leftarrow (1-\beta)w_f - 2\beta(w_f y_f - \hat{y})\hat{y}$, where $\beta \in (0, 1)$ is a constant.

B. Prediction accuracies

In Table I, we report the prediction accuracies of our proposed algorithm (CA-Traffic) and the benchmark solutions for $\lambda = 50$mph and $\lambda = 30$mph. Our algorithm outperforms the benchmark solutions by more than 10% in terms of prediction accuracy. Note that this is a large improvement, since random guessing already achieves 50% accuracy and thus the improvement cannot exceed 50%. Tables II and III further report the prediction accuracies in different traffic situations. In Table II, the location context is fixed at 0.8 miles from the reference location and the accuracies for various time contexts (i.e. 10am, 2pm and 5pm) are presented for our proposed algorithm and the benchmarks. In Table III, the time context is fixed at 10am and the accuracies for various location contexts (i.e. 0.8 miles, 2.1 miles, 3.1 miles) are reported.
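The three benchmark weight-update rules and the weighted vote (21) can be sketched as follows; the constants $\alpha$ and $\beta$ and all numeric values are illustrative choices, and the GDU form follows our reconstruction of the garbled update from the source.

```python
# Sketch of the benchmark weight-update rules (MU, AU, GDU) and the
# weighted vote (21). Constants alpha and beta are illustrative choices.

def weighted_vote(weights, votes):
    """Eq. (21): +1 if the weighted sum of predictor votes is >= 0, else -1."""
    return +1 if sum(w * y for w, y in zip(weights, votes)) >= 0 else -1

def mu_update(w, correct, alpha=2.0):    # multiplicative update [18][19]
    return w * alpha if correct else w / alpha

def au_update(w, correct):               # additive update [21]
    return w + 1 if correct else w

def gdu_update(w, y, y_true, beta=0.1):  # gradient-descent-style update [22]
    return (1 - beta) * w - 2 * beta * (w * y - y_true) * y_true

weights = [1.0, 1.0, 1.0]
votes = [+1, -1, -1]
print(weighted_vote(weights, votes))     # -> -1 (majority of weight says -1)
weights[0] = mu_update(weights[0], correct=False)  # predictor 0 was wrong
print(weights[0])                        # -> 0.5
```

Unlike these global weighting schemes, CA-Traffic keeps separate statistics per context subspace, which is what lets it match different traffic situations to different predictors.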
In all traffic situations, the proposed algorithm significantly outperforms the benchmark solutions, since it is able to match specific traffic situations to the best predictors.

C. Convergence of learning

Since our algorithm is an online algorithm, it is also important to investigate its convergence rate. Figures 7 and 8 illustrate the prediction accuracies of our proposed algorithm over time, where the horizontal axis is the number of requests. As we can see, the proposed algorithm converges fast, requiring only a couple of hundred traffic prediction requests.

TABLE II: TRAFFIC PREDICTION ACCURACY AT 0.8 MILES.

(λ = 50mph)   CA-Traffic   MU     AU     GDU
10am          0.93         0.81   0.85   0.83
2pm           0.93         0.86   0.81   0.80
5pm           0.99         0.86   0.87   0.88
(λ = 30mph)   CA-Traffic   MU     AU     GDU
10am          0.93         0.83   0.83   0.81
2pm           0.91         0.80   0.83   0.82
5pm           0.99         0.81   0.85   0.83

TABLE III: TRAFFIC PREDICTION ACCURACY AT 10AM.

(λ = 50mph)   CA-Traffic   MU     AU     GDU
0.8 miles     0.92         0.84   0.83   0.82
2.1 miles     0.96         0.81   0.85   0.85
3.1 miles     0.93         0.85   0.83   0.81
(λ = 30mph)   CA-Traffic   MU     AU     GDU
0.8 miles     0.92         0.81   0.83   0.82
2.1 miles     0.93         0.83   0.82   0.81
3.1 miles     0.94         0.81   0.82   0.82

D. Missing context information

The context information associated with the requests may occasionally be missing due to, for example, missing reports and recording mistakes. However, our modified algorithm (described in Section V(A)), denoted by CA-Traffic(R), can easily handle these scenarios. In this set of experiments, we show the performance of the modified algorithm for the extreme cases in which one type of context information is always missing. Table IV reports the accuracies of our algorithms (CA-Traffic and CA-Traffic(R)) as well as the benchmarks. Although CA-Traffic(R) performs slightly worse than CA-Traffic when there is no missing context, it performs much better than CA-Traffic and the benchmark solutions when context can be

Fig. 7. Accuracy over time with different time contexts at 0.8 miles. (λ = 50mph)

Fig. 8. Accuracy over time with different location contexts at 10am. (λ = 50mph)

Fig. 9. Relative importance of contexts.

Fig. 10. Prediction accuracy with missing and delayed labels. λ = 50mph.

Fig. 11. Prediction accuracy with missing and delayed labels. λ = 30mph.

missing because it maintains the context partition separately for each context type and hence is robust to missing context information.

E. Relevant context

In this set of experiments, we unravel the most relevant context, i.e. the one that leads to the best prediction performance. To do so, we run the algorithm using only a single context (i.e. either time or location) and record the average reward. The most relevant context is the one leading to the highest average reward. Figure 9 shows the relative importance (e.g. Reward(time)/(Reward(time) + Reward(location))) of each context for different congestion thresholds λ = 20mph, 30mph, 50mph. The result indicates that time is the more important context for the traffic prediction problem in our experiment.

F. Missing and delayed labels

Finally, we investigate the impact of missing and delayed labels on the prediction accuracy, as shown in Figures 10 and 11. In the missing-label case, the system observes the true traffic label with probability 0.8. In the delayed-label case, the true label of the traffic comes at most five prediction requests later. In both cases, the prediction accuracy is lower than without missing or delayed labels. However, the proposed algorithm is still able to achieve very high accuracy, exceeding 90%.

VII.
CONCLUSIONS

In this paper, we proposed a framework for online traffic prediction, which discovers online the contextual specialization of predictors to create a strong hybrid predictor from several weak predictors. The proposed framework matches the real-time traffic situation to the most effective predictor constructed using historical data, thereby self-adapting to the dynamically changing traffic situations. We systematically proved both short-term and long-term performance guarantees for our algorithm, which provide not only the assurance that our algorithm will converge over time to the optimal hybrid predictor for each possible traffic situation, but also a bound on the speed of convergence to the optimal predictor.

TABLE IV: TRAFFIC PREDICTION ACCURACY WITH INCOMPLETE CONTEXT INFORMATION.

(λ = 50mph)      CA-Traffic   CA-Traffic(R)   MU     AU     GDU
time&distance    0.94         0.89            0.83   0.83   0.82
time             0.78         0.92            0.76   0.76   0.78
distance         0.72         0.88            0.79   0.77   0.78
(λ = 30mph)      CA-Traffic   CA-Traffic(R)   MU     AU     GDU
time&distance    0.91         0.80            0.81   0.83   0.78
time             0.76         0.86            0.70   0.72   0.75
distance         0.75         0.89            0.72   0.71   0.74

Our experiments on a real-world dataset verified the efficacy of the proposed scheme and showed that it significantly outperforms existing online learning approaches for traffic prediction. As future work, we plan to extend the current framework to distributed scenarios where traffic data is gathered by distributed entities and thus coordination among distributed entities is required to achieve a global traffic prediction goal [31] [32] [33].

REFERENCES

[1] "What congestion means to your town, 2007 urban area totals," Texas Transportation Institute. [Online]. Available: http://mobility.tamu.edu/ums/congestion data/tables/national/table 2.pdf
[2] "Annual vehicle miles of travel," Federal Highway Administration, 2003-02-14. [Online]. Available: http://www.fhwa.dot.gov/ohim/onh00/graph1.htm
[3] "The highway system," FHWA, Office of Highway Policy Information. [Online]. Available: http://www.fhwa.dot.gov/ohim/onh00/onh2p5.htm
[4] W. Liu, Y. Zheng, S. Chawla, J. Yuan, and X. Xing, "Discovering spatio-temporal causal interactions in traffic data streams," in KDD, 2011, pp. 1010-1018.
[5] M. Miller and C. Gupta, "Mining traffic incidents to forecast impact," ser. UrbComp '12. New York, NY, USA: ACM, 2012, pp. 33-40.
[6] W. Kim, S. Natarajan, and G.-L. Chang, "Empirical analysis and modeling of freeway incident duration," in Intelligent Transportation Systems, 2008. ITSC 2008, Oct 2008, pp. 453-457.
[7] B. Pan, U. Demiryurek, and C. Shahabi, "Utilizing real-world transportation data for accurate traffic prediction," ser. ICDM '12. Washington, DC, USA: IEEE Computer Society, 2012, pp. 595-604.
[8] B. Pan, U. Demiryurek, C. Shahabi, and C.
Gupta, Forecasting spatiotemporal impact o traic incidents on road networks, in ICDM, 2013, pp. 587 596. [9] S. Lee and D. B. Fambro, Application o subset autoregressive integrated moving average model or short-term reeway tra?c volume orecasting, in TRR98, 1998. [10] X. Li, Z. Li, J. Han, and J.-G. Lee, Temporal outlier detection in vehicle traic data, in Data Engineering, 2009. ICDE 09. IEEE 25th International Conerence on, 2009, pp. 1319 1322. [11] G. Giuliano, Incident characteristics, requency, and duration on a high volume urban reeway, Transportation Research Part A: General, vol. 23, no. 5, 1989. [12] Y. Chung and W. Recker, A methodological approach or estimating temporal and spatial extent o delays caused by reeway accidents, Intelligent Transportation Systems, IEEE Transactions on, vol. 13, no. 3, pp. 1454 1461, Sept 2012. [13] B. Stephen, F. David, and W. S. Travis, Naive bayesian classiier or incident duration prediction, in Transportation Research Board, 2007. [14] J. Kwon, M. Mauch, and P. Varaiya, The components o congestion: delay rom incidents, special events, lane closures, weather, potential ramp metering gain, and demand, in in Proceedings o the 85th annual meeting o the Transportation Research, 2006. [15] L. Breiman, Bagging predictors, Machine learning, vol. 24, no. 2, pp. 123 140, 1996. [16] Y. Freund, H. S. Seung, E. Shamir, and N. Tishby, Selective sampling using the query by committee algorithm, Machine learning, vol. 28, no. 2-3, pp. 133 168, 1997. [17] W. Fan, S. J. Stolo, and J. Zhang, The application o adaboost or distributed, scalable and on-line learning, in Proceedings o the ith ACM SIGKDD international conerence on Knowledge discovery and data mining. ACM, 1999, pp. 362 366. [18] N. Littlestone and M. K. Warmuth, The weighted majority algorithm, Inormation and computation, vol. 108, no. 2, pp. 212 261, 1994. [19] A. 
Blum, Empirical support or winnow and weighted-majority algorithms: Results on a calendar scheduling domain, Machine Learning, vol. 26, no. 1, pp. 5 23, 1997. [20] M. Herbster and M. K. Warmuth, Tracking the best expert, Machine Learning, vol. 32, no. 2, pp. 151 178, 1998. [21] A. Fern and R. Givan, Online ensemble learning: An empirical study, Machine Learning, vol. 53, no. 1-2, pp. 71 109, 2003. [22] S. Shalev-Shwartz, Y. Singer, N. Srebro, and A. Cotter, Pegasos: Primal estimated sub-gradient solver or svm, Mathematical programming, vol. 127, no. 1, pp. 3 30, 2011. [23] T. L. Lai and H. Robbins, Asymptotically eicient adaptive allocation rules, Advances in applied mathematics, vol. 6, no. 1, pp. 4 22, 1985. [24] P. Auer, N. Cesa-Bianchi, and P. Fischer, Finite-time analysis o the multiarmed bandit problem, Machine learning, vol. 47, no. 2-3, pp. 235 256, 2002. [25] K. Liu and Q. Zhao, Distributed learning in multi-armed bandit with multiple players, Signal Processing, IEEE Transactions on, vol. 58, no. 11, pp. 5667 5681, 2010. [26] A. Slivkins, Contextual bandits with similarity inormation, arxiv preprint arxiv:0907.3986, 2009. [27] L. Li, W. Chu, J. Langord, and R. E. Schapire, A contextual-bandit approach to personalized news article recommendation, in Proceedings o the 19th international conerence on World wide web. ACM, 2010, pp. 661 670. [28] C. Tekin, S. Zhang, and M. van der Schaar, Distributed online learning in social recommender systems, arxiv preprint arxiv:1309.6707, 2013. [29] C. Tekin and M. van der Schaar, Distributed online big data classiication using context inormation, in Proceedings o 51st annual allerton conerence on communication, control, and computing, 2013. [30] J. Xu, C. Tekin, and M. van der Schaar, Learning optimal classiier chains or real-time big data mining, in Proceedings o 51st annual allerton conerence on communication, control, and computing, 2013. [31] G. Mateos, J. A. Bazerque, and G. B. 
Giannakis, Distributed sparse linear regression, Signal Processing, IEEE Transactions on, vol. 58, no. 10, pp. 5262 5276, 2010. [32] A. H. Sayed, S.-Y. Tu, J. Chen, X. Zhao, and Z. J. Towic, Diusion strategies or adaptation and learning over networks: an examination o distributed strategies and network behavior, Signal Processing Magazine, IEEE, vol. 30, no. 3, pp. 155 171, 2013. [33] H. Zheng, S. R. Kulkarni, and H. V. Poor, Attribute-distributed learning: models, limits, and algorithms, Signal Processing, IEEE Transactions on, vol. 59, no. 1, pp. 386 398, 2011.