Adaptive Online Gradient Descent




Peter L. Bartlett
Division of Computer Science and Department of Statistics, UC Berkeley, Berkeley, CA 94709
bartlett@cs.berkeley.edu (corresponding author)

Elad Hazan
IBM Almaden Research Center, 650 Harry Road, San Jose, CA 950
hazan@us.ibm.com

Alexander Rakhlin
Division of Computer Science, UC Berkeley, Berkeley, CA 94709
rakhlin@cs.berkeley.edu

Abstract

We study the rates of growth of the regret in online convex optimization. First, we show that a simple extension of the algorithm of Hazan et al. eliminates the need for a priori knowledge of the lower bound on the second derivatives of the observed functions. We then provide an algorithm, Adaptive Online Gradient Descent, which interpolates between the results of Zinkevich for linear functions and of Hazan et al. for strongly convex functions, achieving intermediate rates between √T and log T. Furthermore, we show strong optimality of the algorithm. Finally, we provide an extension of our results to general norms.

1 Introduction

The problem of online convex optimization can be formulated as a repeated game between a player and an adversary. At round t, the player chooses an action x_t from some convex subset K of R^n, and then the adversary chooses a convex loss function f_t. The player aims to ensure that the total loss, Σ_{t=1}^T f_t(x_t), is not much larger than the smallest total loss Σ_{t=1}^T f_t(x) of any fixed action x. The difference between the total loss and its optimal value for a fixed action is known as the regret, which we denote

R_T = Σ_{t=1}^T f_t(x_t) − min_{x∈K} Σ_{t=1}^T f_t(x).

Many problems of online prediction of individual sequences can be viewed as special cases of online convex optimization, including prediction with expert advice, sequential probability assignment, and sequential investment [1]. A central question in all these cases is how the regret grows with the number of rounds of the game.

Zinkevich [2] considered the following gradient descent algorithm, with step size η_t = Θ(1/√t). Here, Π_K(v) denotes the Euclidean projection of v onto the convex set K.

Algorithm 1 Online Gradient Descent (OGD)
1: Initialize x_1 arbitrarily.
2: for t = 1 to T do
3:   Predict x_t, observe f_t.
4:   Update x_{t+1} = Π_K(x_t − η_{t+1} ∇f_t(x_t)).
5: end for

Zinkevich showed that the regret of this algorithm grows as √T, where T is the number of rounds of the game. This rate cannot be improved in general for arbitrary convex loss functions. However, this is not the case if the loss functions are uniformly convex, for instance, if all f_t have second derivative at least H > 0. Recently, Hazan et al. [3] showed that in this case it is possible for the regret to grow only logarithmically with T, using the same algorithm but with the smaller step size η_t = 1/(Ht). Increasing convexity makes online convex optimization easier.

The algorithm that achieves logarithmic regret must know in advance a lower bound on the convexity of the loss functions, since this bound is used to determine the step size. It is natural to ask if this is essential: is there an algorithm that can adapt to the convexity of the loss functions and achieve the same regret rates in both cases, O(log T) for uniformly convex functions and O(√T) for arbitrary convex functions?
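As a concrete illustration (this sketch is ours, not part of the paper), Algorithm 1 with K taken to be a Euclidean ball can be written in a few lines of Python; the helper names and the linear-loss example below are assumptions of the sketch.

```python
import numpy as np

def project_ball(v, D):
    # Euclidean projection onto K = {x : ||x|| <= D}
    n = np.linalg.norm(v)
    return v if n <= D else (D / n) * v

def ogd(grad_oracles, x1, etas, D):
    # x_{t+1} = Pi_K(x_t - eta_{t+1} * grad f_t(x_t))
    x, xs = x1, [x1]
    for eta, grad in zip(etas, grad_oracles):
        x = project_ball(x - eta * grad(x), D)
        xs.append(x)
    return xs

# Linear losses f_t(x) = g_t . x with step sizes eta_t ~ D / sqrt(t)
rng = np.random.default_rng(0)
T, D = 200, 1.0
gs = [rng.standard_normal(2) for _ in range(T)]
xs = ogd([lambda x, g=g: g for g in gs],
         np.zeros(2), [D / np.sqrt(t + 1) for t in range(T)], D)
loss = sum(g @ x for g, x in zip(gs, xs))
best = -D * np.linalg.norm(sum(gs))   # best fixed action in the ball
regret = loss - best
```

For linear losses the best fixed action is the unit vector opposed to the summed gradients, scaled to the boundary of the ball, which is what the `best` line computes.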
In this paper, we present an adaptive algorithm of this kind. The key technique is regularization: we consider the online gradient descent (OGD) algorithm, but we add a uniformly convex function, the quadratic (λ_t/2)‖x‖², to each loss function f_t(x). This corresponds to shrinking the algorithm's actions x_t towards the origin. It leads to a regret bound of the form

R_T ≤ c Σ_{t=1}^T λ_t + P(λ_1, …, λ_T).

The first term on the right-hand side can be viewed as a bias term; it increases with λ_t because the presence of the regularization might lead the algorithm away from the optimum. The second term is a penalty for the flatness of the loss functions that becomes smaller as the regularization increases. We show that choosing the regularization coefficient λ_t so as to balance these two terms in the bound on the regret up to round t is nearly optimal in a strong sense. Not only does this choice give the √T and log T regret rates in the linear and uniformly convex cases, it leads to a kind of oracle inequality: the regret is no more than a constant factor times the bound on the regret that would have been suffered if an oracle had provided in advance the sequence of regularization coefficients λ_1, …, λ_T that minimizes the final regret bound.

To state this result precisely, we introduce the following definitions. Let K be a convex subset of R^n and suppose that sup_{x∈K} ‖x‖ ≤ D. For simplicity, throughout the paper we assume that K is centered around 0 and, hence, D is the diameter of K. Define the shorthand ∇_t = ∇f_t(x_t). Let H_t be the largest value such that, for any x ∈ K,

f_t(x) ≥ f_t(x_t) + ∇_tᵀ(x − x_t) + (H_t/2)‖x − x_t‖².

In particular, if ∇²f_t − H_t I ⪰ 0, then the above inequality is satisfied. Furthermore, suppose ‖∇_t‖ ≤ G_t. Define λ_{1:t} := Σ_{s=1}^t λ_s and H_{1:t} := Σ_{s=1}^t H_s, and let H_{1:0} = λ_{1:0} = 0.

Let us now state the Adaptive Online Gradient Descent algorithm, as well as the theoretical guarantee on its performance.

Algorithm 2 Adaptive Online Gradient Descent
1: Initialize x_1 arbitrarily.
2: for t = 1 to T do
3:   Predict x_t, observe f_t.
4:   Compute λ_t = ½ ( −(H_{1:t} + λ_{1:t−1}) + √( (H_{1:t} + λ_{1:t−1})² + 8G_t²/(3D²) ) ).
5:   Compute η_{t+1} = 1/(H_{1:t} + λ_{1:t}).
6:   Update x_{t+1} = Π_K( x_t − η_{t+1} (∇f_t(x_t) + λ_t x_t) ).
7: end for
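The coefficient computations in steps 4 and 5 can be sketched as follows (the function name and the example sequences are ours; the code simply evaluates the two formulas above):

```python
import math

def adaptive_coeffs(Hs, Gs, D):
    """lambda_t and eta_{t+1} as in steps 4-5 of Algorithm 2: lambda_t is the
    nonnegative root of lambda*(H_{1:t} + lambda_{1:t-1} + lambda) = 2*G_t^2/(3*D^2),
    i.e. it balances (3/2)*D^2*lambda_t = G_t^2 / (H_{1:t} + lambda_{1:t})."""
    lambdas, etas = [], []
    H_cum = lam_cum = 0.0
    for H_t, G_t in zip(Hs, Gs):
        H_cum += H_t
        a = H_cum + lam_cum                       # H_{1:t} + lambda_{1:t-1}
        lam = 0.5 * (-a + math.sqrt(a * a + 8.0 * G_t**2 / (3.0 * D**2)))
        lam_cum += lam
        lambdas.append(lam)
        etas.append(1.0 / (H_cum + lam_cum))      # eta_{t+1}
    return lambdas, etas

# made-up curvatures H_t and gradient bounds G_t for a few rounds
lambdas, etas = adaptive_coeffs([0.0, 1.0, 0.0, 2.0], [1.0, 0.5, 2.0, 1.0], 1.0)
```

Note that, as the paper remarks below, the only degenerate case is λ_1 = H_1 = G_1 = 0, where η_2 would divide by zero; the example above avoids it by taking G_1 > 0.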

Theorem 1. The regret of Algorithm 2 is bounded by

R_T ≤ 3 inf_{λ*_1,…,λ*_T} { D² λ*_{1:T} + Σ_{t=1}^T (G_t + λ*_t D)² / (H_{1:t} + λ*_{1:t}) },

where the infimum is over sequences of nonnegative coefficients λ*_1, …, λ*_T.

While Algorithm 2 is stated with the squared Euclidean norm as a regularizer, we show that it is straightforward to generalize our technique to other regularization functions that are uniformly convex with respect to other norms. This leads to adaptive versions of the mirror descent algorithm analyzed recently in [4, 5].

2 Preliminary results

The following theorem gives a regret bound for the OGD algorithm with a particular choice of step size. The virtue of the theorem is that the step size can be set without knowledge of the uniform lower bound on H_t, which is required in the original algorithm of [3]. The proof is provided in Section 4 (Theorem 4.1), where the result is extended to arbitrary norms.

Theorem 2. Suppose we set η_{t+1} = 1/H_{1:t}. Then the regret of OGD is bounded as

R_T ≤ (1/2) Σ_{t=1}^T G_t² / H_{1:t}.

In particular, loosening the bound,

R_T ≤ (max_t G_t²) / (2 min_t (1/t) Σ_{s=1}^t H_s) · (1 + log T).

Note that nothing prevents H_t from being negative or zero, implying that the same algorithm gives logarithmic regret even when some of the functions are linear or concave, as long as the partial averages (1/t) Σ_{s=1}^t H_s are positive and not too small.

The above result already provides an important extension to the logarithmic-regret algorithm of [3]: no prior knowledge of the uniform convexity of the functions is needed, and the bound is in terms of the observed sequence {H_t}. Yet there is still a problem with the algorithm. If H_1 > 0 and H_t = 0 for all t > 1, then Σ_{s=1}^t H_s = H_1, resulting in a linear regret bound. However, we know from [2] that an O(√T) bound can be obtained. In the next section we provide an algorithm which interpolates between an O(log T) and an O(√T) bound on the regret, depending on the curvature of the observed functions.

3 Adaptive Regularization

Suppose the environment plays a sequence of f_t's with curvature H_t ≥ 0. Instead of performing gradient descent on these functions, we step in the direction of the gradient of f̃_t(x) = f_t(x) + (λ_t/2)‖x‖², where the regularization parameter λ_t ≥ 0 is chosen appropriately at each step as a function of the curvature of the previous functions. We remind the reader that K is assumed to be centered around the origin, for otherwise we would instead use (λ_t/2)‖x − x_0‖² to shrink the actions x_t towards the point x_0. Applying Theorem 2, we obtain the following result.

Theorem 3. If the Online Gradient Descent algorithm is performed on the functions f̃_t(x) = f_t(x) + (λ_t/2)‖x‖² with η_{t+1} = 1/(H_{1:t} + λ_{1:t}), then for any sequence of nonnegative λ_1, …, λ_T,

R_T ≤ (1/2) D² λ_{1:T} + (1/2) Σ_{t=1}^T (G_t + λ_t D)² / (H_{1:t} + λ_{1:t}).   (1)

Proof. By Theorem 2 applied to the functions f̃_t,

Σ_{t=1}^T [ f_t(x_t) + (λ_t/2)‖x_t‖² ] ≤ min_{x∈K} Σ_{t=1}^T [ f_t(x) + (λ_t/2)‖x‖² ] + (1/2) Σ_{t=1}^T (G_t + λ_t D)² / (H_{1:t} + λ_{1:t}).

Indeed, it is easy to verify that the curvature condition for f_t implies the corresponding condition with H̃_t = H_t + λ_t for f̃_t. Furthermore, by linearity, the gradient of f̃_t is bounded: ‖∇f_t(x_t) + λ_t x_t‖ ≤ G_t + λ_t D. Define x* = arg min_{x∈K} Σ_t f_t(x). Then, dropping the (λ_t/2)‖x_t‖² terms on the left and bounding ‖x*‖ ≤ D,

Σ_{t=1}^T f_t(x_t) ≤ Σ_{t=1}^T f_t(x*) + (1/2) D² λ_{1:T} + (1/2) Σ_{t=1}^T (G_t + λ_t D)² / (H_{1:t} + λ_{1:t}),

which proves the theorem.

The following inequality is important in the rest of the analysis, as it allows us to remove the dependence on λ_t from the numerator of the second sum at the expense of increased constants. We have

(1/2) D² λ_{1:T} + (1/2) Σ_{t=1}^T (G_t + λ_t D)²/(H_{1:t} + λ_{1:t})
  ≤ (1/2) D² λ_{1:T} + Σ_{t=1}^T [ G_t² + λ_t² D² ] / (H_{1:t} + λ_{1:t})
  ≤ (3/2) D² λ_{1:T} + Σ_{t=1}^T G_t² / (H_{1:t} + λ_{1:t}),   (2)

where the first inequality holds because (a + b)² ≤ 2a² + 2b² for any a, b ∈ R, and the second because λ_t² D²/(H_{1:t} + λ_{1:t}) ≤ λ_t D².

It turns out that for appropriate choices of {λ_t}, the above theorem recovers the O(√T) bound on the regret for linear functions [2] and the O(log T) bound for strongly convex functions [3]. Moreover, under specific assumptions on the sequence {H_t}, we can define a sequence {λ_t} which produces intermediate rates between log T and √T. These results are exhibited in corollaries at the end of this section.

Of course, it would be nice to be able to choose {λ_t} adaptively, without any restrictive assumptions on {H_t}. Somewhat surprisingly, such a choice can be made near-optimally by simple local balancing. Observe that the upper bound of Eq. (2) consists of two sums: (3/2) D² Σ_t λ_t and Σ_t G_t²/(H_{1:t} + λ_{1:t}). The first sum increases in any particular λ_t and the other decreases. While the influence of the regularization parameter λ_t on the first sum is trivial, the influence on the second sum is more involved, as all terms for t' ≥ t depend on λ_t. Nevertheless, it turns out that a simple choice of λ_t is optimal to within a multiplicative factor of 2. This is exhibited by the next lemma.

Lemma 3.1. Define

H_T({λ_t}) = λ_{1:T} + Σ_{t=1}^T C_t / (H_{1:t} + λ_{1:t}),

where C_t ≥ 0 does not depend on the λ_t's. If λ_t satisfies λ_t = C_t/(H_{1:t} + λ_{1:t}) for t = 1, …, T, then

H_T({λ_t}) ≤ 2 inf_{λ*_t ≥ 0} H_T({λ*_t}).

Proof. We prove this by induction. Let {λ*_t} be an optimal sequence of nonnegative regularization coefficients. The base of the induction is proved by considering two possibilities: either λ_1 ≤ λ*_1 or not. In the first case,

λ_1 + C_1/(H_1 + λ_1) = 2λ_1 ≤ 2λ*_1 ≤ 2 ( λ*_1 + C_1/(H_1 + λ*_1) ).

The other case is proved similarly, since λ*_1 < λ_1 implies C_1/(H_1 + λ*_1) ≥ C_1/(H_1 + λ_1) = λ_1. Now suppose H_{T−1}({λ_t}) ≤ 2 H_{T−1}({λ*_t}). Consider two possibilities. If λ_{1:T} ≤ λ*_{1:T}, then, because the balanced choice makes every term C_t/(H_{1:t} + λ_{1:t}) equal to λ_t,

H_T({λ_t}) = λ_{1:T} + Σ_{t=1}^T C_t/(H_{1:t} + λ_{1:t}) = 2 λ_{1:T} ≤ 2 λ*_{1:T} ≤ 2 H_T({λ*_t}).

If, on the other hand, λ*_{1:T} < λ_{1:T}, then

λ_T = C_T/(H_{1:T} + λ_{1:T}) ≤ C_T/(H_{1:T} + λ*_{1:T}).

Using the inductive assumption, we obtain

H_T({λ_t}) = H_{T−1}({λ_t}) + λ_T + C_T/(H_{1:T} + λ_{1:T}) = H_{T−1}({λ_t}) + 2λ_T ≤ 2 H_{T−1}({λ*_t}) + 2 C_T/(H_{1:T} + λ*_{1:T}) ≤ 2 H_T({λ*_t}).

The lemma above is the key to the proof of the near-optimal bounds for Algorithm 2.

Proof of Theorem 1. By Eq. (2) and Lemma 3.1, provided the λ_t are chosen as solutions to

(3/2) D² λ_t = G_t² / (H_{1:t} + λ_{1:t}),   (3)

we have

R_T ≤ (3/2) D² λ_{1:T} + Σ_{t=1}^T G_t²/(H_{1:t} + λ_{1:t})
   ≤ 2 inf_{λ*} { (3/2) D² λ*_{1:T} + Σ_{t=1}^T G_t²/(H_{1:t} + λ*_{1:t}) }
   ≤ 3 inf_{λ*} { D² λ*_{1:T} + Σ_{t=1}^T (G_t + λ*_t D)²/(H_{1:t} + λ*_{1:t}) }.

It is easy to verify that

λ_t = ½ ( −(H_{1:t} + λ_{1:t−1}) + √( (H_{1:t} + λ_{1:t−1})² + 8G_t²/(3D²) ) )

is the nonnegative root of the quadratic equation (3). We note that division by zero in Algorithm 2 occurs only if λ_1 = H_1 = G_1 = 0. Without loss of generality G_1 ≠ 0, for otherwise x_1 minimizes f_1 and the regret on that round is nonpositive.

Hence, the algorithm has a bound on its performance which is 6 times the bound obtained by the best offline adaptive choice of regularization coefficients. While the constant 6 might not be optimal, it can be shown that a constant strictly larger than one is unavoidable: Lemma 3.1 effectively describes an algorithm for an online problem with competitive ratio of 2, and in the full version of this paper we give a lower bound strictly larger than one on the competitive ratio achievable by any online algorithm for this problem. We also remark that if the diameter D is unknown, the regularization coefficients λ_t can still be chosen by balancing as in Eq. (3), except without the D² term. This choice of λ_t, however, increases the bound on the regret suffered by Algorithm 2 by a factor of O(D).

Let us now consider some special cases and show that Theorem 1 not only recovers the rates of increase of the regret of [3] and [2], but also provides intermediate rates. For each of these special cases, we provide a sequence {λ_t} which achieves the desired rate. Since Theorem 1 guarantees that Algorithm 2 is competitive with the best choice of the parameters, we conclude that Algorithm 2 achieves the same rates.

Corollary 3.1. Suppose G_t ≤ G for all 1 ≤ t ≤ T. Then for any sequence of convex functions {f_t}, the bound on the regret of Algorithm 2 is O(√T).

Proof. Let λ_1 = √T and λ_t = 0 for 1 < t ≤ T. By Eq. (2),

(1/2) D² λ_{1:T} + (1/2) Σ_{t=1}^T (G_t + λ_t D)²/(H_{1:t} + λ_{1:t}) ≤ (3/2) D² λ_{1:T} + Σ_{t=1}^T G_t²/(H_{1:t} + λ_{1:t}) ≤ (3/2) D² √T + G² T/√T ≤ (3/2) √T (D² + G²).
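The factor-2 claim of Lemma 3.1 is easy to probe numerically. The following experiment (ours, not from the paper) compares the balanced choice λ_t = C_t/(H_{1:t} + λ_{1:t}) against a brute-force grid search over λ sequences on a tiny instance; the instance values are made up.

```python
import itertools, math

def objective(lams, Hs, Cs):
    # H_T({lambda_t}) = lambda_{1:T} + sum_t C_t / (H_{1:t} + lambda_{1:t})
    val = H_run = lam_run = 0.0
    for H, C, lam in zip(Hs, Cs, lams):
        H_run += H
        lam_run += lam
        val += lam + C / (H_run + lam_run)
    return val

def balanced(Hs, Cs):
    # greedy balancing: lambda_t solves lambda_t = C_t / (H_{1:t} + lambda_{1:t})
    lams, H_run, lam_run = [], 0.0, 0.0
    for H, C in zip(Hs, Cs):
        H_run += H
        a = H_run + lam_run
        lam = 0.5 * (-a + math.sqrt(a * a + 4.0 * C))
        lam_run += lam
        lams.append(lam)
    return lams

Hs, Cs = [0.5, 0.0, 1.0, 0.2], [1.0, 0.3, 2.0, 0.7]
greedy = objective(balanced(Hs, Cs), Hs, Cs)
grid = [0.25 * i for i in range(13)]              # crude search over [0, 3]^4
best = min(objective(l, Hs, Cs) for l in itertools.product(grid, repeat=4))
```

By construction the balanced sequence makes every penalty term equal to its λ_t, so the greedy objective equals 2·λ_{1:T}, and the lemma guarantees it is at most twice any candidate the grid search finds.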

Hence, the regret of Algorithm 2 can never increase faster than √T. We now consider the assumptions of [3].

Corollary 3.2. Suppose H_t ≥ H > 0 and G_t ≤ G for all 1 ≤ t ≤ T. Then the bound on the regret of Algorithm 2 is O(log T).

Proof. Set λ_t = 0 for all t. It holds that

R_T ≤ (1/2) Σ_{t=1}^T G_t²/H_{1:t} ≤ (1/2) Σ_{t=1}^T G²/(tH) ≤ (G²/(2H)) (log T + 1).

The above proof also recovers the result of Hazan et al. [3]. The following corollary shows a spectrum of rates under assumptions on the curvature of the functions.

Corollary 3.3. Suppose H_t = t^{−α} and G_t ≤ G for all 1 ≤ t ≤ T.
1. If α = 0, then R_T = O(log T).
2. If α > 1/2, then R_T = O(√T).
3. If 0 < α ≤ 1/2, then R_T = O(T^α).

Proof. The first two cases follow immediately from Corollaries 3.2 and 3.1, respectively. For the third case, let λ_1 = T^α and λ_t = 0 for 1 < t ≤ T. Note that

Σ_{s=1}^t H_s ≥ ∫_0^t (x + 1)^{−α} dx = ((t + 1)^{1−α} − 1)/(1 − α).

Hence,

(3/2) D² λ_{1:T} + Σ_{t=1}^T G_t²/(H_{1:t} + λ_{1:t}) ≤ (3/2) D² T^α + G² Σ_{t=1}^T (1 − α)/((t + 1)^{1−α} − 1 + (1 − α) T^α) ≤ (3/2) D² T^α + ((1 − α)/α) G² T^α + O(1) = O(T^α).

4 Generalization to different norms

The original online gradient descent (OGD) algorithm, as analyzed by Zinkevich [2], used the Euclidean distance of the current point from the optimum as a potential function. The logarithmic regret bounds of [3] for strongly convex functions were also stated for the Euclidean norm, and such was the presentation above. However, as observed by Shalev-Shwartz and Singer in [5], the proof technique of [3] extends to arbitrary norms. As such, our results above for adaptive regularization carry over to the general setting, as we state below. Our notation follows that of Gentile and Warmuth [6].

Definition 4.1. A function g over a convex set K is called H-strongly convex with respect to a convex function h if, for all x, y ∈ K,

g(x) ≥ g(y) + ∇g(y)ᵀ(x − y) + H · B_h(x, y).

Here B_h(x, y) is the Bregman divergence with respect to the function h, defined as

B_h(x, y) = h(x) − h(y) − ∇h(y)ᵀ(x − y).

This notion of strong convexity generalizes the Euclidean notion: the function g(x) = ½‖x‖² is 1-strongly convex with respect to h(x) = ½‖x‖² (in this case B_h(x, y) = ½‖x − y‖²). More generally, the Bregman divergence can be thought of as a squared norm, not necessarily Euclidean, i.e., B_h(x, y) ≈ ½‖x − y‖². Henceforth we also refer to the dual of a given norm, defined by ‖y‖* = sup_{‖x‖≤1} yᵀx. For the case of ℓ_p norms, we have ‖y‖* = ‖y‖_q, where q satisfies 1/p + 1/q = 1, and by Hölder's inequality, yᵀx ≤ ‖y‖*‖x‖ ≤ ½‖y‖*² + ½‖x‖² (this holds for norms other than ℓ_p as well).
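As a concrete non-Euclidean instance (our illustration, not an example from the paper): for h(x) = Σᵢ xᵢ log xᵢ on the probability simplex, B_h is the KL divergence, and Pinsker's inequality gives B_h(x, y) ≥ ½‖x − y‖₁², matching the lower-bound assumption on B_h used below with ‖·‖ = ℓ₁ (whose dual is ℓ_∞). Since ∇h(x) = 1 + log x, the implicit update ∇h(y_{t+1}) = ∇h(x_t) − η∇f_t(x_t) becomes a multiplicative-weights step, and the Bregman projection onto the simplex is a renormalization. The vectors, gradient, and step size below are made up.

```python
import numpy as np

def breg_negentropy(x, y):
    # B_h(x, y) for h(x) = sum_i x_i log x_i: the KL divergence KL(x || y)
    return float(np.sum(x * np.log(x / y)))

def mirror_step(x, grad, eta):
    # grad h(y) = grad h(x) - eta * grad, with grad h(x) = 1 + log x, gives
    # y_i = x_i * exp(-eta * grad_i); projecting back onto the simplex under
    # this h amounts to renormalizing y.
    y = x * np.exp(-eta * grad)
    return y / y.sum()

x = np.array([0.2, 0.3, 0.5])
y = np.array([0.4, 0.4, 0.2])
kl = breg_negentropy(x, y)
pinsker = 0.5 * np.sum(np.abs(x - y)) ** 2        # (1/2) * ||x - y||_1^2
x_next = mirror_step(x, np.array([1.0, 0.0, -1.0]), eta=0.5)
```

The assertions one can check here: the divergence dominates the Pinsker bound, and the multiplicative update stays on the simplex while shifting mass toward coordinates with negative gradient.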

For simplicity, the reader may think of the functions g, h as convex and differentiable. (Since the set of points of nondifferentiability of a convex function has measure zero, convexity is the only property that we require: for nondifferentiable functions, the algorithm would choose a point x̃_t, which is x_t with the addition of a small random perturbation; with probability one the functions would be smooth at the perturbed point, and the perturbation could be made arbitrarily small so that the regret rate would not be affected.) The following algorithm is a generalization of the OGD algorithm to general strongly convex functions (see the derivation in [6]). In this extended abstract we state the update rule implicitly, leaving the issues of efficient computation for the full version; these issues are orthogonal to our discussion, and were addressed in [6] for a variety of functions h.

Algorithm 3 General-Norm Online Gradient Descent
1: Input: convex function h.
2: Initialize x_1 arbitrarily.
3: for t = 1 to T do
4:   Predict x_t, observe f_t.
5:   Compute η_{t+1} and let y_{t+1} be such that ∇h(y_{t+1}) = ∇h(x_t) − η_{t+1} ∇f_t(x_t).
6:   Let x_{t+1} = arg min_{x∈K} B_h(x, y_{t+1}) be the projection of y_{t+1} onto K.
7: end for

The methods of the previous sections can now be used to derive similar, dynamically optimal, bounds on the regret. As a first step, let us generalize the bound of [3], as well as Theorem 2, to general norms.

Theorem 4.1. Suppose that, for each t, f_t is an H_t-strongly convex function with respect to h, and let h be such that B_h(x, y) ≥ ½‖x − y‖² for some norm ‖·‖. Let ‖∇f_t(x_t)‖* ≤ G_t for all t. Applying the General-Norm Online Gradient Descent algorithm with η_{t+1} = 1/H_{1:t}, we have

R_T ≤ (1/2) Σ_{t=1}^T G_t²/H_{1:t}.

Proof. The proof follows [3], with the Bregman divergence replacing the Euclidean distance as a potential function. By assumption on the functions f_t, for any x ∈ K,

f_t(x_t) − f_t(x) ≤ ∇f_t(x_t)ᵀ(x_t − x) − H_t B_h(x, x_t).

By a well-known property of Bregman divergences (see [6]), it holds that for any vectors x, y, z,

(x − y)ᵀ(∇h(z) − ∇h(y)) = B_h(x, y) − B_h(x, z) + B_h(y, z).

Combining both observations,

f_t(x_t) − f_t(x) ≤ (1/η_{t+1}) (∇h(x_t) − ∇h(y_{t+1}))ᵀ(x_t − x) − H_t B_h(x, x_t)
  = (1/η_{t+1}) [ B_h(x, x_t) − B_h(x, y_{t+1}) + B_h(x_t, y_{t+1}) ] − H_t B_h(x, x_t)
  ≤ (1/η_{t+1}) [ B_h(x, x_t) − B_h(x, x_{t+1}) + B_h(x_t, y_{t+1}) ] − H_t B_h(x, x_t),

where the last inequality follows from the Pythagorean theorem for Bregman divergences [6], as x_{t+1} is the projection (with respect to the Bregman divergence) of y_{t+1}, and x ∈ K is in the convex set. Summing over all iterations, recalling that η_{t+1} = 1/H_{1:t} (so that 1/η_{t+1} − 1/η_t = H_t and 1/η_2 = H_1), and dropping the negative term −(1/η_{T+1}) B_h(x, x_{T+1}), the B_h(x, x_t) terms telescope away and

R_T ≤ Σ_{t=1}^T (1/η_{t+1}) B_h(x_t, y_{t+1}).   (4)

We proceed to bound B_h(x_t, y_{t+1}). By the definition of the Bregman divergence and the dual-norm inequality stated before,

B_h(x_t, y_{t+1}) + B_h(y_{t+1}, x_t) = (∇h(x_t) − ∇h(y_{t+1}))ᵀ(x_t − y_{t+1})
  = η_{t+1} ∇f_t(x_t)ᵀ(x_t − y_{t+1})
  ≤ (1/2) η_{t+1}² ‖∇f_t(x_t)‖*² + (1/2) ‖x_t − y_{t+1}‖².

Thus, by our assumption B_h(x, y) ≥ ½‖x − y‖², we have

B_h(x_t, y_{t+1}) ≤ (1/2) η_{t+1}² ‖∇f_t(x_t)‖*² + (1/2) ‖x_t − y_{t+1}‖² − B_h(y_{t+1}, x_t) ≤ (1/2) η_{t+1}² G_t².

Plugging this back into Eq. (4), we get

R_T ≤ (1/2) Σ_{t=1}^T η_{t+1} G_t² = (1/2) Σ_{t=1}^T G_t²/H_{1:t}.

The generalization of our technique is now straightforward. Let g be 1-strongly convex with respect to h, and define A = sup_{x∈K} g(x) and B = sup_{x∈K} ‖∇g(x)‖*. The following algorithm is the analogue of Algorithm 2, and Theorem 4.2 is the analogue of Theorem 1, for general norms.

Algorithm 4 Adaptive General-Norm Online Gradient Descent
1: Initialize x_1 arbitrarily. Let g be 1-strongly convex with respect to the convex function h.
2: for t = 1 to T do
3:   Predict x_t, observe f_t.
4:   Compute λ_t = ½ ( −(H_{1:t} + λ_{1:t−1}) + √( (H_{1:t} + λ_{1:t−1})² + 4G_t²/(A + B²) ) ).
5:   Compute η_{t+1} = 1/(H_{1:t} + λ_{1:t}).
6:   Let y_{t+1} be such that ∇h(y_{t+1}) = ∇h(x_t) − η_{t+1} (∇f_t(x_t) + λ_t ∇g(x_t)).
7:   Let x_{t+1} = arg min_{x∈K} B_h(x, y_{t+1}) be the projection of y_{t+1} onto K.
8: end for

Theorem 4.2. Suppose that each f_t is an H_t-strongly convex function with respect to h, let g be 1-strongly convex with respect to h, and let h be such that B_h(x, y) ≥ ½‖x − y‖² for some norm ‖·‖. Let ‖∇f_t(x_t)‖* ≤ G_t. The regret of Algorithm 4 is bounded by

R_T ≤ 2 inf_{λ*_1,…,λ*_T} { (A + B²) λ*_{1:T} + Σ_{t=1}^T (G_t + λ*_t B)²/(H_{1:t} + λ*_{1:t}) }.

If the norm above is the Euclidean norm and g(x) = ½‖x‖², we find that B = sup_{x∈K} ‖x‖ = D and A = D²/2, so that A + B² = (3/2)D², and we recover the results of Theorem 1.

References

[1] Nicolò Cesa-Bianchi and Gábor Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.
[2] Martin Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In ICML, pages 928-936, 2003.
[3] Elad Hazan, Adam Kalai, Satyen Kale, and Amit Agarwal. Logarithmic regret algorithms for online convex optimization. In COLT, pages 499-513, 2006.
[4] Shai Shalev-Shwartz and Yoram Singer. Convex repeated games and Fenchel duality. In B. Schölkopf, J. Platt, and T. Hoffman, editors, Advances in Neural Information Processing Systems 19. MIT Press, Cambridge, MA, 2007.
[5] Shai Shalev-Shwartz and Yoram Singer. Logarithmic regret algorithms for strongly convex repeated games. Technical Report, The Hebrew University, 2007.
[6] C. Gentile and M. K. Warmuth. Proving relative loss bounds for on-line learning algorithms using Bregman divergences. In COLT Tutorial, 2000.