Journal of Machine Learning Research 10 (2009) 2445-2471    Submitted 6/09; Published 11/09

Prediction With Expert Advice For The Brier Game

Vladimir Vovk    VOVK@CS.RHUL.AC.UK
Fedor Zhdanov    FEDOR@CS.RHUL.AC.UK
Computer Learning Research Centre
Department of Computer Science
Royal Holloway, University of London
Egham, Surrey TW20 0EX, England

Editor: Yoav Freund

Abstract

We show that the Brier game of prediction is mixable and find the optimal learning rate and substitution function for it. The resulting prediction algorithm is applied to predict results of football and tennis matches, with well-known bookmakers playing the role of experts. The theoretical performance guarantee is not excessively loose on the football data set and is rather tight on the tennis data set.

Keywords: Brier game, classification, on-line prediction, strong aggregating algorithm, weighted average algorithm

1. Introduction

The paradigm of prediction with expert advice was introduced in the late 1980s (see, e.g., DeSantis et al., 1988, Littlestone and Warmuth, 1994, Cesa-Bianchi et al., 1997) and has been applied to various loss functions; see Cesa-Bianchi and Lugosi (2006) for a recent book-length review. An especially important class of loss functions is that of mixable ones, for which the learner's loss can be made as small as the best expert's loss plus a constant (depending on the number of experts). It is known (Haussler et al., 1998; Vovk, 1998) that the optimal additive constant is attained by the strong aggregating algorithm proposed in Vovk (1990) (we use the adjective "strong" to distinguish it from the weak aggregating algorithm of Kalnishkan and Vyugin, 2008).

There are several important loss functions that have been shown to be mixable and for which the optimal additive constant has been found. The prime examples in the case of binary observations are the log loss function and the square loss function. The log loss function, whose mixability is obvious, has been explored extensively, along with its important generalizations, the Kullback-Leibler divergence and Cover's loss function (see, e.g., the review by Vovk, 2001, Section 2.5). In this paper we concentrate on the square loss function. In the binary case, its mixability was demonstrated in Vovk (1990). There are two natural directions in which this result could be generalized:

Regression: observations are real numbers (square-loss regression is a standard problem in statistics).

© 2009 Vladimir Vovk and Fedor Zhdanov.
Classification: observations take values in a finite set (this leads to the Brier game, to be defined shortly, a standard way of measuring the quality of predictions in meteorology and other applied fields: see, e.g., Dawid, 1986).

The mixability of the square loss function in the case of observations belonging to a bounded interval of real numbers was demonstrated in Haussler et al. (1998); Haussler et al.'s algorithm was simplified in Vovk (2001). Surprisingly, the case of square-loss non-binary classification has never been analysed in the framework of prediction with expert advice. The purpose of this paper is to fill this gap. Its short conference version (Vovk and Zhdanov, 2008a) appeared in the ICML 2008 proceedings.

2. Prediction Algorithm and Loss Bound

A game of prediction consists of three components: the observation space Ω, the decision space Γ, and the loss function λ : Ω × Γ → R. In this paper we are interested in the following Brier game (Brier, 1950): Ω is a finite and non-empty set, Γ := P(Ω) is the set of all probability measures on Ω, and

  λ(ω, γ) = Σ_{o ∈ Ω} (γ{o} − δ_ω{o})²,

where δ_ω ∈ P(Ω) is the probability measure concentrated at ω: δ_ω{ω} = 1 and δ_ω{o} = 0 for o ≠ ω. (For example, if Ω = {1, 2, 3}, ω = 1, γ{1} = 1/2, γ{2} = 1/4, and γ{3} = 1/4, then λ(ω, γ) = (1/2 − 1)² + (1/4 − 0)² + (1/4 − 0)² = 3/8.)

The game of prediction is being played repeatedly by a learner having access to decisions made by a pool of experts, which leads to the following prediction protocol:

Protocol 1 Prediction with expert advice
  L_0 := 0
  L_0^k := 0, k = 1,...,K
  for N = 1, 2,... do
    Expert k announces γ_N^k ∈ Γ, k = 1,...,K
    Learner announces γ_N ∈ Γ
    Reality announces ω_N ∈ Ω
    L_N := L_{N−1} + λ(ω_N, γ_N)
    L_N^k := L_{N−1}^k + λ(ω_N, γ_N^k), k = 1,...,K
  end for

At each step of Protocol 1 Learner is given the K experts' advice and is required to come up with his own decision; L_N is his cumulative loss over the first N steps, and L_N^k is the kth expert's cumulative loss over the first N steps. In the case of the Brier game, the decisions are probability forecasts for the next observation.
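To make the loss function and the protocol concrete, here is a minimal Python sketch (ours, not part of the original paper) of the Brier loss λ(ω, γ) and of one round of Protocol 1; decisions are represented as dictionaries, and the experts' and Reality's moves below are made-up values.

```python
from typing import Dict, List

def brier_loss(omega: str, gamma: Dict[str, float]) -> float:
    """Brier loss: sum over outcomes o of (gamma{o} - delta_omega{o})^2."""
    return sum((gamma[o] - (1.0 if o == omega else 0.0)) ** 2 for o in gamma)

# The worked example from the text: Omega = {1,2,3}, omega = 1, gamma = (1/2, 1/4, 1/4).
gamma = {"1": 0.5, "2": 0.25, "3": 0.25}
assert abs(brier_loss("1", gamma) - 3 / 8) < 1e-12

# One round of Protocol 1 with two hypothetical experts (all numbers made up).
experts: List[Dict[str, float]] = [
    {"1": 0.6, "2": 0.2, "3": 0.2},   # Expert 1's forecast
    {"1": 0.3, "2": 0.4, "3": 0.3},   # Expert 2's forecast
]
learner = {"1": 0.45, "2": 0.3, "3": 0.25}   # Learner's forecast
omega = "1"                                  # Reality's move
L_learner = brier_loss(omega, learner)                      # L_1
L_experts = [brier_loss(omega, g) for g in experts]         # L_1^k
print(L_learner, L_experts)
```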
An optimal (in the sense of Theorem 1 below) strategy for Learner in prediction with expert advice for the Brier game is given by the strong aggregating algorithm (see Algorithm 1). For each expert k, the algorithm maintains its weight w^k, constantly slashing the weights of less successful experts. Its description uses the notation t^+ := max(t, 0). The algorithm will be derived in Section 5. The following result (to be proved in Section 4) gives a performance guarantee for it that cannot be improved by any other prediction algorithm.

Algorithm 1 Strong aggregating algorithm for the Brier game
  w_0^k := 1, k = 1,...,K
  for N = 1, 2,... do
    Read the Experts' predictions γ_N^k, k = 1,...,K
    Set G_N(ω) := −ln Σ_{k=1}^K w_{N−1}^k e^{−λ(ω, γ_N^k)}, ω ∈ Ω
    Solve Σ_{ω ∈ Ω} (s − G_N(ω))^+ = 2 in s ∈ R
    Set γ_N{ω} := (s − G_N(ω))^+ / 2, ω ∈ Ω
    Output prediction γ_N ∈ P(Ω)
    Read observation ω_N
    w_N^k := w_{N−1}^k e^{−λ(ω_N, γ_N^k)}
  end for

Theorem 1 Using Algorithm 1 as Learner's strategy in Protocol 1 for the Brier game guarantees that

  L_N ≤ min_{k=1,...,K} L_N^k + ln K   (1)

for all N = 1, 2,.... If A < ln K, Learner does not have a strategy guaranteeing

  L_N ≤ min_{k=1,...,K} L_N^k + A   (2)

for all N = 1, 2,....
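The following sketch (ours, and only a sketch) implements Algorithm 1 directly from its pseudocode: the weights are multiplied by e^{−λ(ω_N, γ_N^k)}, the generalized prediction G_N is computed from the weights, and the equation Σ_ω (s − G_N(ω))^+ = 2 is solved by bisection, which suffices since the left-hand side is continuous and increasing in s. The class and method names, and the use of dictionaries for decisions, are our own choices.

```python
import math
from typing import Dict, List

def brier_loss(omega: str, gamma: Dict[str, float]) -> float:
    return sum((gamma[o] - (1.0 if o == omega else 0.0)) ** 2 for o in gamma)

class BrierAggregatingAlgorithm:
    """Strong aggregating algorithm for the Brier game (Algorithm 1), eta = 1."""

    def __init__(self, outcomes: List[str], n_experts: int):
        self.outcomes = outcomes
        self.weights = [1.0] * n_experts              # w_0^k := 1

    def predict(self, expert_predictions: List[Dict[str, float]]) -> Dict[str, float]:
        # G_N(omega) := -ln sum_k w^k exp(-lambda(omega, gamma_N^k))
        g = {o: -math.log(sum(w * math.exp(-brier_loss(o, p))
                              for w, p in zip(self.weights, expert_predictions)))
             for o in self.outcomes}
        # Solve sum_omega (s - G_N(omega))^+ = 2 by bisection.
        lo, hi = min(g.values()), max(g.values()) + 2.0
        for _ in range(100):
            s = (lo + hi) / 2
            if sum(max(s - g[o], 0.0) for o in self.outcomes) < 2.0:
                lo = s
            else:
                hi = s
        s = (lo + hi) / 2
        # gamma_N{omega} := (s - G_N(omega))^+ / 2; these values sum to 1.
        return {o: max(s - g[o], 0.0) / 2 for o in self.outcomes}

    def update(self, omega: str, expert_predictions: List[Dict[str, float]]) -> None:
        # w_N^k := w_{N-1}^k exp(-lambda(omega_N, gamma_N^k)); renormalizing only
        # shifts G_N by a constant and leaves the predictions unchanged.
        new = [w * math.exp(-brier_loss(omega, p))
               for w, p in zip(self.weights, expert_predictions)]
        total = sum(new)
        self.weights = [w / total for w in new]
```

Feeding predict() the experts' forecasts for one match and then calling update() with the observed outcome reproduces one step of Protocol 1.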
3. Experimental Results

In our first empirical study of Algorithm 1 we use historical data about 8999 matches in various English football league competitions, namely: the Premier League (the pinnacle of the English football system), the Football League Championship, Football League One, Football League Two, and the Football Conference. Our data, provided by Football-Data, cover four seasons, 2005/2006, 2006/2007, 2007/2008, and 2008/2009. The matches are sorted first by date, then by league, and then by the name of the home team.

In the terminology of our prediction protocol, the outcome of each match is the observation, taking one of three possible values, "home win", "draw", or "away win"; we will encode the possible values as 1, 2, and 3. For each match we have forecasts made by a range of bookmakers. We chose eight bookmakers for which we have enough data over a long period of time, namely Bet365, Bet&Win, Gamebookers, Interwetten, Ladbrokes, Sportingbet, Stan James, and VC Bet. (And the seasons mentioned above were chosen because the forecasts of these bookmakers are available for them.)

A probability forecast for the next observation is essentially a vector (p_1, p_2, p_3) consisting of positive numbers summing to 1. The bookmakers do not announce these numbers directly; instead, they quote three betting odds, a_1, a_2, and a_3. Each number a_i > 1 is the total amount which the bookmaker undertakes to pay out to a client betting on outcome i per unit stake in the event that i happens (if the bookmaker wishes to return the stake to the bettor, it should be included in a_i; i.e., the odds are announced according to the continental rather than traditional system). The inverse value 1/a_i, i ∈ {1, 2, 3}, can be interpreted as the bookmaker's quoted probability for the observation i. The bookmaker's quoted probabilities are usually slightly (because of the competition with other bookmakers) in his favour: the sum 1/a_1 + 1/a_2 + 1/a_3 exceeds 1 by the amount called the overround (at most 0.15 in the vast majority of cases). We use Victor Khutsishvili's (2009) formula

  p_i := a_i^{−γ},   i = 1, 2, 3,   (3)

for computing the bookmaker's probability forecasts, where γ > 0 is chosen such that a_1^{−γ} + a_2^{−γ} + a_3^{−γ} = 1. Such a value of γ exists and is unique since the function a_1^{−γ} + a_2^{−γ} + a_3^{−γ} continuously and strictly decreases from 3 to 0 as γ changes from 0 to ∞. In practice, we usually have γ > 1 as 1/a_1 + 1/a_2 + 1/a_3 > 1 (i.e., the overround is positive). The method of bisection was more than sufficient for us to solve a_1^{−γ} + a_2^{−γ} + a_3^{−γ} = 1 with satisfactory accuracy. Khutsishvili's argument for (3) is outlined in Appendix B.

Typical values of γ in (3) are close to 1, and the difference γ − 1 reflects the bookmaker's target profit margin. In this respect γ − 1 is similar to the overround; indeed, the approximate value of the overround is

  (γ − 1) Σ_{i=1}^3 a_i^{−1} ln a_i,

assuming that the overround is small and none of a_i^{−1} is too close to 0. The coefficient of proportionality Σ_{i=1}^3 a_i^{−1} ln a_i can be interpreted as the entropy of the quoted betting odds.
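The value of γ in (3) can be computed exactly as described above; the sketch below (ours, not the authors' code) uses bisection on Σ_i a_i^{−γ}, which strictly decreases in γ because each a_i > 1. The odds in the usage example are made up.

```python
def khutsishvili_probabilities(odds, tol=1e-12):
    """Convert continental betting odds a_i into probabilities p_i = a_i ** (-gamma),
    with gamma chosen so that the probabilities sum to 1 (formula (3))."""
    def total(gamma):
        return sum(a ** (-gamma) for a in odds)
    lo, hi = 0.0, 1.0
    while total(hi) > 1.0:        # expand the bracket until the sum falls below 1
        hi *= 2.0
    while hi - lo > tol:          # bisection: total() is strictly decreasing in gamma
        mid = (lo + hi) / 2
        if total(mid) > 1.0:
            lo = mid
        else:
            hi = mid
    gamma = (lo + hi) / 2
    return [a ** (-gamma) for a in odds], gamma

# Made-up odds for home win / draw / away win; the overround here is positive,
# so the computed gamma exceeds 1.
probs, gamma = khutsishvili_probabilities([2.1, 3.3, 3.6])
print(probs, sum(probs), gamma)
```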
The results of applying Algorithm 1 to the football data, with 8 experts and 3 possible observations, are shown in Figure 1. Let L_N^k be the cumulative loss of Expert k, k = 1,...,8, over the first N matches and L_N be the corresponding number for Algorithm 1 (i.e., we essentially continue to use the notation of Theorem 1). The dashed line corresponding to Expert k shows the excess loss N ↦ L_N^k − L_N of Expert k over Algorithm 1. The excess loss can be negative, but from the first part of Theorem 1 (Equation (1)) we know that it cannot be less than −ln 8; this lower bound is also shown in Figure 1. Finally, the thick line (the positive part of the x axis) is drawn for comparison: this is the excess loss of Algorithm 1 over itself. We can see that at each moment in time the algorithm's cumulative loss is fairly close to the cumulative loss of the best expert (at that time; the best expert keeps changing over time).

Figure 2 shows the distribution of the bookmakers' overrounds. We can see that in most cases overrounds are between 0.05 and 0.15, but there are also occasional extreme values, near zero or in excess of 0.3.

Figure 3 shows the results of another empirical study, involving data about a large number of tennis tournaments in 2004, 2005, 2006, and 2007, with the total number of matches 10,087. The tournaments include, for example, Australian Open, French Open, US Open, and Wimbledon; the data is provided by Tennis-Data. The matches are sorted by date, then by tournament, and then by the winner's name. The data contain information about the winner of each match and the betting odds of 4 bookmakers for his/her win and for the opponent's win. Therefore, now there are two possible observations (player 1's win and player 2's win). There are four bookmakers: Bet365, Centrebet, Expekt, and Pinnacle Sports. The results in Figure 3 are presented in the same way as in Figure 1. Typical values of the overround are below 0.1, as shown in Figure 4 (analogous to Figure 2).

In both Figure 1 and Figure 3 the cumulative loss of Algorithm 1 is close to the cumulative loss of the best expert. The theoretical bound is not hopelessly loose for the football data and is rather tight for the tennis data. The pictures look almost the same when Algorithm 1 is applied in the more realistic manner where the experts' weights w^k are not updated over the matches that are played simultaneously. Our second empirical study (Figure 3) is about binary prediction, and so the algorithm of Vovk (1990) could have also been used (and would have given similar results). We included it since we are not aware of any empirical studies even for the binary case.
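The excess-loss curves N ↦ L_N^k − L_N plotted in Figures 1 and 3 are straightforward to compute once the per-match losses are available; a minimal sketch (ours, with made-up numbers) follows.

```python
from itertools import accumulate
from typing import List

def excess_loss_curves(learner_losses: List[float],
                       expert_losses: List[List[float]]) -> List[List[float]]:
    """For each expert k, the curve N -> L_N^k - L_N shown in Figures 1 and 3."""
    cum_learner = list(accumulate(learner_losses))
    return [[ck - cl for ck, cl in zip(accumulate(losses), cum_learner)]
            for losses in expert_losses]

# Tiny made-up example: two experts over three rounds.
print(excess_loss_curves([0.3, 0.5, 0.2], [[0.4, 0.5, 0.1], [0.2, 0.7, 0.3]]))
```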
Figure 1: The difference between the cumulative loss of each of the 8 bookmakers (experts) and of Algorithm 1 on the football data. The theoretical lower bound −ln 8 from Theorem 1 is also shown.

For comparison with several other popular prediction algorithms, see Appendix C. The data used for producing all the figures and tables in this section and in Appendix C can be downloaded from http://vovk.net/icml2008.

4. Proof of Theorem 1

This proof will use some basic notions of elementary differential geometry, especially those connected with the Gauss-Kronecker curvature of surfaces. (The use of curvature in this kind of result is standard: see, e.g., Vovk, 1990, and Haussler et al., 1998.) All definitions that we will need can be found in, for example, Thorpe (1979).

A vector f ∈ R^Ω (understood to be a function f : Ω → R) is a superprediction if there is γ ∈ Γ such that, for all ω ∈ Ω, λ(ω, γ) ≤ f(ω); the set Σ of all superpredictions is the superprediction set. For each learning rate η > 0, let Φ_η : R^Ω → (0, ∞)^Ω be the homeomorphism defined by

  Φ_η(f) : ω ∈ Ω ↦ e^{−η f(ω)},   f ∈ R^Ω.   (4)

The image Φ_η(Σ) of the superprediction set will be called the η-exponential superprediction set. It is known that

  L_N ≤ min_{k=1,...,K} L_N^k + (ln K)/η,   N = 1, 2,...,

can be guaranteed if and only if the η-exponential superprediction set is convex (part "if" for all K and part "only if" for K = 2 are proved in Vovk, 1998; part "only if" for all K is proved by Chris Watkins, and the details can be found in Appendix A). Comparing this with (1) and (2) we can see that we are required to prove that
  Φ_η(Σ) is convex when η ≤ 1;
  Φ_η(Σ) is not convex when η > 1.

Figure 2: The overround distribution histogram for the football data, with 200 bins of equal size between the minimum and maximum values of the overround.

Define the η-exponential superprediction surface to be the part of the boundary of the η-exponential superprediction set Φ_η(Σ) lying inside (0, ∞)^Ω. The idea of the proof is to check that, for all η < 1, the Gauss-Kronecker curvature of this surface is nowhere vanishing. Even when this is done, however, there is still uncertainty as to in which direction the surface is bulging (towards the origin or away from it). The standard argument (as in Thorpe, 1979, Chapter 12, Theorem 6) based on the continuity of the smallest principal curvature shows that the η-exponential superprediction set is bulging away from the origin for small enough η: indeed, since it is true at some point, it is true everywhere on the surface. By the continuity in η this is also true for all η < 1. Now, since the η-exponential superprediction set is convex for all η < 1, it is also convex for η = 1.

Let us now check that the Gauss-Kronecker curvature of the η-exponential superprediction surface is always positive when η < 1 and is sometimes negative when η > 1 (the rest of the proof, an elaboration of the above argument, will be easy). Set n := |Ω|; without loss of generality we assume Ω = {1,...,n}. A convenient parametric representation of the η-exponential superprediction surface is

  (x_1, x_2, ..., x_{n−1}, x_n)^T = (e^{−η((u_1−1)² + u_2² + ⋯ + u_n²)}, e^{−η(u_1² + (u_2−1)² + ⋯ + u_n²)}, ..., e^{−η(u_1² + ⋯ + (u_{n−1}−1)² + u_n²)}, e^{−η(u_1² + ⋯ + u_{n−1}² + (u_n−1)²)})^T,   (5)
where u_1,...,u_{n−1} are the coordinates on the surface, u_1,...,u_{n−1} ∈ (0, 1) subject to u_1 + ⋯ + u_{n−1} < 1, and u_n is a shorthand for 1 − u_1 − ⋯ − u_{n−1}.

Figure 3: The difference between the cumulative loss of each of the 4 bookmakers and of Algorithm 1 on the tennis data. Now the theoretical bound is −ln 4.

The derivative of (5) in u_1 is

  (∂/∂u_1)(x_1, x_2, ..., x_{n−1}, x_n)^T = 2η ((u_n − u_1 + 1) e^{−η((u_1−1)² + u_2² + ⋯ + u_n²)}, (u_n − u_1) e^{−η(u_1² + (u_2−1)² + ⋯ + u_n²)}, ..., (u_n − u_1) e^{−η(u_1² + ⋯ + (u_{n−1}−1)² + u_n²)}, (u_n − u_1 − 1) e^{−η(u_1² + ⋯ + u_{n−1}² + (u_n−1)²)})^T
  ∝ ((u_n − u_1 + 1) e^{2ηu_1}, (u_n − u_1) e^{2ηu_2}, ..., (u_n − u_1) e^{2ηu_{n−1}}, (u_n − u_1 − 1) e^{2ηu_n})^T,

the derivative in u_2 is

  (∂/∂u_2)(x_1, x_2, ..., x_{n−1}, x_n)^T ∝ ((u_n − u_2) e^{2ηu_1}, (u_n − u_2 + 1) e^{2ηu_2}, ..., (u_n − u_2) e^{2ηu_{n−1}}, (u_n − u_2 − 1) e^{2ηu_n})^T,

and so on, up to

  (∂/∂u_{n−1})(x_1, x_2, ..., x_{n−1}, x_n)^T ∝ ((u_n − u_{n−1}) e^{2ηu_1}, (u_n − u_{n−1}) e^{2ηu_2}, ..., (u_n − u_{n−1} + 1) e^{2ηu_{n−1}}, (u_n − u_{n−1} − 1) e^{2ηu_n})^T,

all coefficients of proportionality being equal and positive.
Figure 4: The overround distribution histogram for the tennis data.

A normal vector to the surface can be found as

  Z := | e_1 ⋯ e_{n−1} e_n ;
         (u_n − u_1 + 1) e^{2ηu_1} ⋯ (u_n − u_1) e^{2ηu_{n−1}} (u_n − u_1 − 1) e^{2ηu_n} ;
         ⋮ ;
         (u_n − u_{n−1}) e^{2ηu_1} ⋯ (u_n − u_{n−1} + 1) e^{2ηu_{n−1}} (u_n − u_{n−1} − 1) e^{2ηu_n} |,

where e_i is the ith vector in the standard basis of R^n and |·| stands for the determinant (the matrix contains both scalars and vectors, but its determinant can still be computed using the standard rules). The coefficient in front of e_1 is the (n−1) × (n−1) determinant

  | (u_n − u_1) e^{2ηu_2}     ⋯  (u_n − u_1) e^{2ηu_{n−1}}      (u_n − u_1 − 1) e^{2ηu_n} ;
    (u_n − u_2 + 1) e^{2ηu_2} ⋯  (u_n − u_2) e^{2ηu_{n−1}}      (u_n − u_2 − 1) e^{2ηu_n} ;
    ⋮ ;
    (u_n − u_{n−1}) e^{2ηu_2} ⋯  (u_n − u_{n−1} + 1) e^{2ηu_{n−1}}  (u_n − u_{n−1} − 1) e^{2ηu_n} |

  ∝ e^{−2ηu_1} | u_n − u_1      ⋯  u_n − u_1        u_n − u_1 − 1 ;
                 u_n − u_2 + 1  ⋯  u_n − u_2        u_n − u_2 − 1 ;
                 ⋮ ;
                 u_n − u_{n−1}  ⋯  u_n − u_{n−1} + 1  u_n − u_{n−1} − 1 |
(the factor e^{2ηu_j} is taken out of the column corresponding to u_j, j = 2,...,n, which produces a positive coefficient of proportionality, e^{2η}). Subtracting the first row of the last determinant from each of the other rows and then subtracting the last column from each of the other columns, we can rewrite it as

  | 1  1  ⋯  1  u_n − u_1 − 1 ;
    1  0  ⋯  0  u_1 − u_2 ;
    0  1  ⋯  0  u_1 − u_3 ;
    ⋮ ;
    0  0  ⋯  1  u_1 − u_{n−1} |;

expanding this determinant along the last column and then along the first row, we obtain that the coefficient in front of e_1 is proportional to

  e^{−2ηu_1} ((−1)^n (u_n − u_1 − 1) + (−1)^{n+1} (u_1 − u_2) + (−1)^{n+1} (u_1 − u_3) + ⋯ + (−1)^{n+1} (u_1 − u_{n−1}))
  = e^{−2ηu_1} (−1)^n ((u_2 + u_3 + ⋯ + u_n) − (n − 1) u_1 − 1)
  = (−1)^{n−1} n u_1 e^{−2ηu_1}   (6)

(with the positive coefficient of proportionality e^{2η}). Similarly, the coefficient in front of e_i is proportional (with the same coefficient of proportionality) to u_i e^{−2ηu_i} for i = 2,...,n−1; indeed, the (n−1) × (n−1) determinant representing the coefficient in front of e_i can be reduced to the form analogous to (6) by moving the ith row to the top. The coefficient in front of e_n is proportional to

  e^{−2ηu_n} | u_n − u_1 + 1   u_n − u_1       ⋯  u_n − u_1 ;
               u_n − u_2       u_n − u_2 + 1   ⋯  u_n − u_2 ;
               ⋮ ;
               u_n − u_{n−1}   u_n − u_{n−1}   ⋯  u_n − u_{n−1} + 1 |;

subtracting the last column from each of the other columns and then adding all the other rows to the last row turns this determinant into a triangular one with diagonal entries 1,...,1, n u_n, so the coefficient in front of e_n is n u_n e^{−2ηu_n} (with the coefficient of proportionality e^{2η} (−1)^{n−1}).

The Gauss-Kronecker curvature at the point with coordinates (u_1,...,u_{n−1}) is proportional (with a positive coefficient of proportionality, possibly depending on the point) to

  | (∂Z/∂u_1)^T ; ⋮ ; (∂Z/∂u_{n−1})^T ; Z^T |   (7)

(Thorpe, 1979, Chapter 12, Theorem 5, with ^T standing for transposition). A straightforward calculation allows us to rewrite determinant (7) (ignoring the positive coefficient ((−1)^{n−1} n e^{2η})^n) as

  | (1 − 2ηu_1) e^{−2ηu_1}   0   ⋯   0   (2ηu_n − 1) e^{−2ηu_n} ;
    0   (1 − 2ηu_2) e^{−2ηu_2}   ⋯   0   (2ηu_n − 1) e^{−2ηu_n} ;
    ⋮ ;
    0   0   ⋯   (1 − 2ηu_{n−1}) e^{−2ηu_{n−1}}   (2ηu_n − 1) e^{−2ηu_n} ;
    u_1 e^{−2ηu_1}   u_2 e^{−2ηu_2}   ⋯   u_{n−1} e^{−2ηu_{n−1}}   u_n e^{−2ηu_n} |
  ∝ | 1 − 2ηu_1   0   ⋯   0   2ηu_n − 1 ;
      0   1 − 2ηu_2   ⋯   0   2ηu_n − 1 ;
      ⋮ ;
      0   0   ⋯   1 − 2ηu_{n−1}   2ηu_n − 1 ;
      u_1   u_2   ⋯   u_{n−1}   u_n |
  = u_1 (1 − 2ηu_2)(1 − 2ηu_3) ⋯ (1 − 2ηu_n) + u_2 (1 − 2ηu_1)(1 − 2ηu_3) ⋯ (1 − 2ηu_n) + ⋯ + u_n (1 − 2ηu_1)(1 − 2ηu_2) ⋯ (1 − 2ηu_{n−1})   (8)

(with a positive coefficient of proportionality; to avoid calculation of the parities of various permutations, the reader might prefer to prove the last equality by induction in n, expanding the last determinant along the first column).

Our next goal is to show that the last expression in (8) is positive when η < 1 but can be negative when η > 1. If η > 1, set u_1 = u_2 := 1/2 and u_3 = ⋯ = u_n := 0. The last expression in (8) becomes negative. It will remain negative if u_1 and u_2 are sufficiently close to 1/2 and u_3,...,u_n are sufficiently close to 0.

It remains to consider the case η < 1. Set t_i := 1 − 2ηu_i, i = 1,...,n; the constraints on the t_i are

  1 − 2η < t_i < 1,   i = 1,...,n,   t_1 + ⋯ + t_n = n − 2η > n − 2.   (9)

Our goal is to prove that

  (1 − t_1) t_2 t_3 ⋯ t_n + ⋯ + (1 − t_n) t_1 t_2 ⋯ t_{n−1} > 0,

that is,

  t_2 t_3 ⋯ t_n + ⋯ + t_1 t_2 ⋯ t_{n−1} > n t_1 ⋯ t_n.   (10)

This reduces to

  1/t_1 + ⋯ + 1/t_n > n   (11)

if t_1 ⋯ t_n > 0, and to

  1/t_1 + ⋯ + 1/t_n < n   (12)

if t_1 ⋯ t_n < 0. The remaining case is where some of the t_i are zero; for concreteness, let t_n = 0. By (9) we have t_1 + ⋯ + t_{n−1} > n − 2, and so all of t_1,...,t_{n−1} are positive; this shows that (10) is indeed true.

Let us prove (11). Since t_1 ⋯ t_n > 0, all of t_1,...,t_n are positive (if two of them were negative, the sum t_1 + ⋯ + t_n would be less than n − 2; cf. (9)). Therefore,

  1/t_1 + ⋯ + 1/t_n > 1 + ⋯ + 1 (n times) = n.

To establish (10) it remains to prove (12). Suppose, without loss of generality, that t_1 > 0, t_2 > 0, ..., t_{n−1} > 0, and t_n < 0. We will prove a slightly stronger statement allowing t_1,...,t_{n−2} to take value 1 and removing the lower bound on t_n. Since the function t ∈ (0, 1] ↦ 1/t is convex, we can also assume, without loss of generality, t_1 = ⋯ = t_{n−2} = 1. Then t_{n−1} + t_n > 0, and so 1/t_{n−1} + 1/t_n < 0;
therefore,

  1/t_1 + ⋯ + 1/t_{n−2} + 1/t_{n−1} + 1/t_n < n − 2 < n.

Finally, let us check that the positivity of the Gauss-Kronecker curvature implies the convexity of the η-exponential superprediction set in the case η ≤ 1, and the lack of positivity of the Gauss-Kronecker curvature implies the lack of convexity of the η-exponential superprediction set in the case η > 1. The η-exponential superprediction surface will be oriented by choosing the normal vector field directed towards the origin. This can be done since

  (x_1, ..., x_n)^T ∝ (e^{2ηu_1}, ..., e^{2ηu_n})^T,   Z ∝ (−1)^{n−1} (u_1 e^{−2ηu_1}, ..., u_n e^{−2ηu_n})^T,   (13)

with both coefficients of proportionality positive (cf. (5) and the bottom row of the first determinant in (8)), and the sign of the scalar product of the two vectors on the right-hand sides in (13) does not depend on the point (u_1,...,u_{n−1}). Namely, we take (−1)^n Z as the normal vector field directed towards the origin. The Gauss-Kronecker curvature will not change sign after the re-orientation: if n is even, the new orientation coincides with the old, and for odd n the Gauss-Kronecker curvature does not depend on the orientation.

In the case η > 1, the Gauss-Kronecker curvature is negative at some point, and so the η-exponential superprediction set is not convex (Thorpe, 1979, Chapter 13, Theorem 1 and its proof).

It remains to consider the case η ≤ 1. Because of the continuity of the η-exponential superprediction surface in η we can and will assume, without loss of generality, that η < 1. Let us first check that the smallest principal curvature k_1 = k_1(u_1,...,u_{n−1}, η) of the η-exponential superprediction surface is always positive (among the arguments of k_1 we list not only the coordinates u_1,...,u_{n−1} of a point on the surface (5) but also the learning rate η ∈ (0, 1)). At least at some (u_1,...,u_{n−1}, η) the value of k_1(u_1,...,u_{n−1}, η) is positive: take a sufficiently small η and the point on the surface (5) with coordinates u_1 = ⋯ = u_{n−1} = 1/n; a simple calculation shows that this point will be a point of local maximum for x_1 + ⋯ + x_n. Therefore, for all (u_1,...,u_{n−1}, η) the value of k_1(u_1,...,u_{n−1}, η) is positive: if k_1 had different signs at two points in the set

  {(u_1,...,u_{n−1}, η) | u_1 ∈ (0, 1),...,u_{n−1} ∈ (0, 1), u_1 + ⋯ + u_{n−1} < 1, η ∈ (0, 1)},   (14)

we could connect these points by a continuous curve lying completely inside (14); at some point on the curve, k_1 would be zero, in contradiction to the positivity of the Gauss-Kronecker curvature k_1 ⋯ k_{n−1}.

Now it is easy to show that the η-exponential superprediction set is convex. Suppose there are two points A and B on the η-exponential superprediction surface such that the interval [A, B] contains points outside the η-exponential superprediction set. The intersection of the plane OAB, where O is the origin, with the η-exponential superprediction surface is a planar curve; the curvature of this curve at some point between A and B will be negative (remember that the curve is oriented by directing the normal vector field towards the origin), contradicting the positivity of k_1 at that point.

5. Derivation of the Prediction Algorithm

To achieve the loss bound (1) in Theorem 1 Learner can use, as discussed earlier, the strong aggregating algorithm (see, e.g., Vovk, 2001, Section 2, (5)) with η = 1. In this section we will find
a substitution function for the strong aggregating algorithm for the Brier game with η ≤ 1, which is the only component of the algorithm not described explicitly in Vovk (2001). Our substitution function will not require that its input, the generalized prediction, should be computed from the normalized distribution (w^k)_{k=1}^K on the experts; this is a valuable feature for generalizations to an infinite number of experts (as demonstrated in, e.g., Vovk, 2001, Appendix A).

Suppose that we are given a generalized prediction (l_1,...,l_n)^T computed by the aggregating pseudo-algorithm from a normalized distribution on the experts. Since (l_1,...,l_n)^T is a superprediction (remember that we are assuming η ≤ 1), we are only required to find a permitted prediction

  (λ_1, λ_2, ..., λ_n)^T = ((u_1 − 1)² + u_2² + ⋯ + u_n², u_1² + (u_2 − 1)² + ⋯ + u_n², ..., u_1² + u_2² + ⋯ + (u_n − 1)²)^T   (15)

(cf. (5)) satisfying

  λ_1 ≤ l_1, ..., λ_n ≤ l_n.   (16)

Now suppose we are given a generalized prediction (L_1,...,L_n)^T computed by the aggregating pseudo-algorithm from an unnormalized distribution on the experts; in other words, we are given

  (L_1, ..., L_n)^T = (l_1 + c, ..., l_n + c)^T

for some c ∈ R. To find (15) satisfying (16) we can first find the largest t ∈ R such that (L_1 − t,...,L_n − t)^T is still a superprediction and then find (15) satisfying

  λ_1 ≤ L_1 − t, ..., λ_n ≤ L_n − t.   (17)

Since t ≥ c, it is clear that (λ_1,...,λ_n)^T will also satisfy the required (16).

Proposition 2 Define s ∈ R by the requirement

  Σ_{i=1}^n (s − L_i)^+ = 2.   (18)

The unique solution to the optimization problem t → max under the constraints (17) with λ_1,...,λ_n as in (15) will be

  u_i = (s − L_i)^+ / 2,   i = 1,...,n,   (19)
  t = s − 1 − u_1² − ⋯ − u_n².   (20)

There exists a unique s satisfying (18) since the left-hand side of (18) is a continuous, increasing (strictly increasing when positive) and unbounded above function of s. The substitution function is given by (19).
Proof of Proposition 2 Let us denote the u_i and t defined by (19) and (20) as ũ_i and t̃, respectively. To see that they satisfy the constraints (17), notice that the ith constraint can be spelt out as

  u_1² + ⋯ + u_n² − 2u_i + 1 ≤ L_i − t,

which immediately follows from (19) and (20). As a by-product, we can see that the inequality becomes an equality, that is,

  t = L_i + 2u_i − 1 − u_1² − ⋯ − u_n²,   (21)

for all i with u_i > 0. We can rewrite (17) as

  t ≤ L_1 + 2u_1 − 1 − u_1² − ⋯ − u_n², ..., t ≤ L_n + 2u_n − 1 − u_1² − ⋯ − u_n²,   (22)

and our goal is to prove that these inequalities imply t < t̃ (unless u_1 = ũ_1, ..., u_n = ũ_n). Choose i (necessarily with ũ_i > 0 unless u_1 = ũ_1, ..., u_n = ũ_n; in the latter case, however, we can, and will, also choose i with ũ_i > 0) for which ε_i := ũ_i − u_i is maximal. Then every value of t satisfying (22) will also satisfy

  t ≤ L_i + 2u_i − 1 − Σ_{j=1}^n u_j²
    = L_i + 2ũ_i − 2ε_i − 1 − Σ_{j=1}^n (ũ_j − ε_j)²
    ≤ L_i + 2ũ_i − 1 − Σ_{j=1}^n ũ_j² − Σ_{j=1}^n ε_j²
    ≤ t̃.   (23)

The penultimate ≤ in (23) follows from

  −2ε_i + 2 Σ_{j=1}^n ε_j ũ_j = 2 Σ_{j=1}^n (ε_j − ε_i) ũ_j ≤ 0

(we use Σ_{j=1}^n ũ_j = 1). The last ≤ in (23) follows from (21) and becomes < when not all u_j coincide with ũ_j.

The detailed description of the resulting prediction algorithm was given as Algorithm 1 in Section 2. As discussed, that algorithm uses the generalized prediction G_N(ω) computed from unnormalized weights.

6. Conclusion

In this paper we only considered the simplest prediction problem for the Brier game: competing with a finite pool of experts. In the case of square-loss regression, it is possible to find efficient closed-form prediction algorithms competitive with linear functions (see, e.g., Cesa-Bianchi and Lugosi, 2006, Chapter 11). Such algorithms can often be kernelized to obtain prediction algorithms competitive with reproducing kernel Hilbert spaces of prediction rules. This would be an appealing research programme in the case of the Brier game as well.
Acknowledgments

Victor Khutsishvili has been extremely generous in sharing his ideas with us. Comments by Alexey Chernov, Yuri Kalnishkan, Alex Gammerman, Bob Vickers, and the anonymous referees for the conference and journal versions have helped us improve the presentation. The referees for the conference version also suggested comparing our results with the Weighted Average Algorithm and the Hedge algorithm. We are grateful to Football-Data and Tennis-Data for providing access to the data used in this paper. This work was supported in part by EPSRC (grant EP/F002998/1).

Appendix A. Watkins's Theorem

Watkins's theorem is stated in Vovk (1999, Theorem 8) not in sufficient generality: it presupposes that the loss function is mixable. The proof, however, shows that this assumption is irrelevant (it can be made part of the conclusion), and the goal of this appendix is to give a self-contained statement of a suitable version of the theorem. (The reader will notice that the generality of the new version is essential only for our discussion in Section 4, not for Theorem 1 itself.)

In this appendix we will use a slightly more general notion of a game of prediction (Ω, Γ, λ): namely, the loss function λ : Ω × Γ → R̄ is now allowed to take values in the extended real line R̄ := R ∪ {−∞, ∞} (although the value −∞ will be later disallowed). Partly following Vovk (1998), for each K = 1, 2,... and each a > 0 we consider the following perfect-information game G_K(a) (the "global game") between two players, Learner and Environment. Environment is a team of K + 1 players called Expert 1 to Expert K and Reality, who play with Learner according to Protocol 1. Learner wins if, for all N = 1, 2,... and all k ∈ {1,...,K},

  L_N ≤ L_N^k + a;   (24)

otherwise, Environment wins. It is possible that L_N = ∞ or L_N^k = ∞ in (24); the interpretation of inequalities involving infinities is natural.

For each K we will be interested in the set of those a > 0 for which Learner has a winning strategy in the game G_K(a) (we will denote this by L ⊨ G_K(a)). It is obvious that

  L ⊨ G_K(a) & a′ > a ⟹ L ⊨ G_K(a′);

therefore, for each K there exists a unique borderline value a_K such that L ⊨ G_K(a) holds when a > a_K and fails when a < a_K. It is possible that a_K = ∞ (but remember that we are only interested in finite values of a).

These are our assumptions about the game of prediction (similar to those in Vovk, 1998):

  Γ is a compact topological space;
  for each ω ∈ Ω, the function γ ∈ Γ ↦ λ(ω, γ) is continuous (R̄ is equipped with the standard topology);
  there exists γ ∈ Γ such that, for all ω ∈ Ω, λ(ω, γ) < ∞;
  the function λ is bounded below.
We say that the game of prediction (Ω, Γ, λ) is η-mixable, where η > 0, if

  ∀γ_1 ∈ Γ, γ_2 ∈ Γ, α ∈ [0, 1] ∃δ ∈ Γ ∀ω ∈ Ω: e^{−ηλ(ω, δ)} ≥ α e^{−ηλ(ω, γ_1)} + (1 − α) e^{−ηλ(ω, γ_2)}.   (25)

In the case of finite Ω, this condition says that the image of the superprediction set under the mapping Φ_η (see (4)) is convex. The game of prediction is mixable if it is η-mixable for some η > 0.

It follows from Hardy et al. (1952, Theorem 92, applied to the means M_φ with φ(x) = e^{−ηx}) that if the prediction game is η-mixable it will remain η′-mixable for any positive η′ < η. (For another proof, see the end of the proof of Lemma 9 in Vovk, 1998.) Let η* be the supremum of the η for which the prediction game is η-mixable (with η* := 0 when the game is not mixable). The compactness of Γ implies that the prediction game is η*-mixable.

Theorem 3 (Chris Watkins) For any K ∈ {2, 3,...},

  a_K = (ln K)/η*.

In particular, a_K < ∞ if and only if the game is mixable.

The theorem does not say explicitly, but it is easy to check, that L ⊨ G_K(a_K): this follows both from general considerations (cf. Lemma 3 in Vovk, 1998) and from the fact that the strong aggregating algorithm wins G_K(a_K) = G_K(ln K/η*).

Proof of Theorem 3 The proof will use some notions and notation used in the statement and proof of Theorem 1 of Vovk (1998). Without loss of generality we can, and will, assume that the loss function satisfies λ > 1 (add a suitable constant to λ if needed). Therefore, Assumption 4 of Vovk (1998) (the only assumption in that paper not directly made here) is satisfied. In view of the fact that L ⊨ G_K(ln K/η*), we only need to show that L ⊨ G_K(a) does not hold for a < ln K/η*. Fix a < ln K/η*.

The separation curve consists of the points (c(β), c(β)/η) ∈ [0, ∞)², where β := e^{−η} and η ranges over [0, ∞] (see Vovk, 1998, Theorem 1). Since the two-fold convex mixture in (25) can be replaced by any finite convex mixture (apply two-fold mixtures repeatedly), setting η := η* shows that the point (1, 1/η*) is Northeast of (actually belongs to) the separation curve. On the other hand, the point (1, a/ln K) is Southwest and outside of the separation curve (use Lemmas 8-12 of Vovk, 1998). Therefore, E (i.e., Environment) has a winning strategy in the game G(1, a/ln K). It is easy to see from the proof of Theorem 1 in Vovk (1998) that the definition of the game G can be modified, without changing the conclusion about G(1, a/ln K), by replacing the line

  E chooses n ≥ 1 {size of the pool}

in the protocol on p. 153 of Vovk (1998) by

  E chooses n* ≥ 1 {lower bound on the size of the pool}
  L chooses n ≥ n* {size of the pool}

(indeed, the proof in Section 6 of Vovk, 1998, only requires that there should be sufficiently many experts). Let n* be the first move by Environment according to her winning strategy.

Now suppose L ⊨ G_K(a). From the fact that there exists Learner's strategy L_1 winning G_K(a) we can deduce: there exists Learner's strategy L_2 winning G_{K²}(2a) (we can split the K² experts into K groups of K, merge the experts' decisions in each group with L_1, and finally merge the groups' decisions with L_1); there exists Learner's strategy L_3 winning G_{K³}(3a) (we can split the K³ experts
into K groups of K², merge the experts' decisions in each group with L_2, and finally merge the groups' decisions with L_1); and so on. When the number K^m of experts exceeds n*, we obtain a contradiction: Learner can guarantee L_N ≤ L_N^k + ma for all N and all K^m experts k, and Environment can guarantee that for some N and k

  L_N > L_N^k + (a/ln K) ln(K^m) = L_N^k + ma.

Appendix B. Khutsishvili's Theory

  Loss resulting from (3)   Loss resulting from (26)   Difference
  5585.69                   5588.20                    2.52
  5585.94                   5586.67                    0.72
  5586.60                   5587.37                    0.77
  5588.47                   5590.65                    2.18
  5588.61                   5589.92                    1.31
  5591.97                   5593.48                    1.52
  5596.01                   5601.85                    5.84
  5596.56                   5598.02                    1.46

Table 1: The bookmakers' cumulative Brier losses over the football data set when their probability forecasts are computed using formula (3) and formula (26).

In the conference version of this paper (Vovk and Zhdanov, 2008a) we used

  p_i := (1/a_i) / (1/a_1 + 1/a_2 + 1/a_3),   i = 1, 2, 3,   (26)

in place of (3). A natural way to compare formulas (3) and (26) is to compare the losses of the probability forecasts found from the bookmakers' betting odds using those formulas. Using Khutsishvili's formula (3) consistently leads to smaller losses as measured by the Brier loss function: see Tables 1 and 2. The improvement of each bookmaker's total loss over the football data set is in the range 0.72-5.84; over the tennis data set the difference is in the range 1.27-11.64. These differences are of the order of the differences in cumulative loss between different bookmakers, and so the improvement is significant.

The goal of this appendix is to present, in a rudimentary form, Khutsishvili's theory behind (3). The theory is based on a very idealized model of a bookmaker, who is assumed to compute the betting odds a for an event of probability p using a function f,

  a := f(p).
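For illustration, the following snippet (ours, not the authors' code) converts one set of made-up continental odds into probability forecasts with both formula (3) and formula (26); the bisection helper plays the same role as in the sketch in Section 3. With a strong favourite in the book, (3) assigns the favourite a larger probability and the longshots smaller ones than (26) does, in line with the favourite-longshot discussion at the end of this appendix.

```python
def probabilities_khutsishvili(odds, tol=1e-12):
    """Formula (3): p_i = a_i ** (-gamma), gamma chosen so that the p_i sum to 1."""
    lo, hi = 0.0, 1.0
    while sum(a ** (-hi) for a in odds) > 1.0:
        hi *= 2.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if sum(a ** (-mid) for a in odds) > 1.0:
            lo = mid
        else:
            hi = mid
    gamma = (lo + hi) / 2
    return [a ** (-gamma) for a in odds]

def probabilities_normalized(odds):
    """Formula (26): p_i = (1/a_i) / sum_j (1/a_j)."""
    inv = [1.0 / a for a in odds]
    return [x / sum(inv) for x in inv]

odds = [1.3, 5.0, 9.0]   # made-up odds with a strong favourite and two longshots
print(probabilities_khutsishvili(odds))
print(probabilities_normalized(odds))
```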
  Loss resulting from (3)   Loss resulting from (26)   Difference
  3935.32                   3944.02                    8.69
  3943.83                   3945.10                    1.27
  3945.70                   3957.33                    11.64
  3953.83                   3957.75                    3.92

Table 2: The bookmakers' cumulative Brier losses over the tennis data set when their probability forecasts are computed using formula (3) and formula (26).

Different bookmakers (and the same bookmaker at different times) can use different functions f. Therefore, different bookmakers may quote different odds because they may use different f and because they may assign different probabilities to the same event. The following simple corollary of Darboux's theorem describes the set of possible functions f; its interpretation will be discussed straight after the proof.

Theorem 4 (Victor Khutsishvili) Suppose a function f : (0, 1) → (1, ∞) satisfies the condition

  f(pq) = f(p) f(q)   (27)

for all p, q ∈ (0, 1). There exists c > 0 such that f(p) = p^{−c} for all p ∈ (0, 1).

Proof Equation (27) is one of the four fundamental Cauchy equations, which can be easily reduced to each other. For example, introducing a new function g : (0, ∞) → (0, ∞) by g(u) := ln f(e^{−u}) and new variables x, y ∈ (0, ∞) by x := −ln p and y := −ln q, we transform (27) to the most standard Cauchy equation g(x + y) = g(x) + g(y). By Darboux's theorem (see, e.g., Aczél, 1966, Section 2.1, Theorem 1(b)), g(x) = cx for all x > 0, that is, f(p) = p^{−c} for all p ∈ (0, 1).

The function f is defined on (0, 1) since we assume that in real life no bookmaker will assign a subjective probability of exactly 0 or 1 to an event on which he accepts bets. It would be irrational for the bookmaker to have f(p) ≤ 1 for some p, so f : (0, 1) → (1, ∞). To justify the requirement (27), we assume that the bookmaker offers not only single but also double bets (Wikipedia, 2009). If there are two events with quoted odds a and b that the bookmaker considers independent, his quoted odds on the conjunction of the two events will be ab. If the probabilities of the two events are p and q, respectively, the probability of their conjunction will be pq. Therefore, we have (27).

Theorem 4 provides a justification of Khutsishvili's formula (3): we just assume that the bookmaker applies the same function f to all three probabilities p_1, p_2, and p_3. If f(p) = p^{−c}, we have p_i = a_i^{−γ}, where γ = 1/c and i = 1, 2, 3, and γ can be found from the requirement p_1 + p_2 + p_3 = 1.

An important advantage of (3) over (26) is that (3) does not impose any upper limits on the overround that the bookmaker may charge (Khutsishvili, 2009). If the game has n possible outcomes (n = 3 for football and n = 2 for tennis) and the bookmaker uses f(p) = p^{−c}, the overround is

  Σ_{i=1}^n 1/a_i − 1 = Σ_{i=1}^n p_i^c − 1
and so continuously changes between −1 and n − 1 as c ranges over (0, ∞) (in practice, the overround is usually positive, and so c ∈ (0, 1)). Even for n = 2, the upper bound of 1 is too large to be considered a limitation. The situation with (26) is very different: upper bounding the numerator of (26) by 1 and replacing the denominator by 1 + o, where o is the overround, we obtain p_i < 1/(1 + o) for all i, and so o < min_i p_i^{−1} − 1; this limitation on o is restrictive when one of the p_i is close to 1.

An interesting phenomenon in racetrack betting, known since Griffith (1949), is that favourites are usually underbet while longshots are overbet (see, e.g., Snowberg and Wolfers, 2007, for a recent survey and analysis). Khutsishvili's formula (3) can be regarded as a way of correcting this favourite-longshot bias: when a_i is large (the outcome i is a longshot), (3) slashes 1/a_i when computing p_i more than (26) does.

Appendix C. Comparison with Other Prediction Algorithms

Other popular algorithms for prediction with expert advice that could be used instead of Algorithm 1 in our empirical studies reported in Section 3 are, among others, the Weighted Average Algorithm (WdAA, proposed by Kivinen and Warmuth, 1999), the weak aggregating algorithm (WkAA, proposed independently by Kalnishkan and Vyugin, 2008, and Cesa-Bianchi and Lugosi, 2006, Theorem 2.3; we are using Kalnishkan and Vyugin's name), and the Hedge algorithm (HA, proposed by Freund and Schapire, 1997). In this appendix we pay most attention to the WdAA since neither WkAA nor HA satisfy bounds of the form (2). (The reader can consult Vovk and Zhdanov, 2008b, for details of experiments with the latter two algorithms and formula (26) used for extracting probabilities from the quoted betting odds.) We also briefly discuss three more naive algorithms.

The Weighted Average Algorithm is very similar to the strong aggregating algorithm (SAA) used in this paper: the WdAA maintains the same weights for the experts as the SAA, and the only difference is that the WdAA merges the experts' predictions by averaging them according to their weights, whereas the SAA uses a more complicated minimax optimal merging scheme (given by (19) for the Brier game). The performance guarantee for the WdAA applied to the Brier game is weaker than the optimal (1), but of course this does not mean that its empirical performance is necessarily worse than that of the SAA (i.e., Algorithm 1). Figures 5 and 6 show the performance of this algorithm, in the same format as before (see Figures 1 and 3). We can see that for the football data the maximal difference between the cumulative loss of the WdAA and the cumulative loss of the best expert is slightly larger than that for Algorithm 1 but still well within the optimal bound ln K given by (1). For the tennis data the maximal difference is almost twice as large as for Algorithm 1, violating the optimal bound ln K.

In its most basic form (Kivinen and Warmuth, 1999, the beginning of Section 6), the WdAA works in the following protocol. At each step each expert, Learner, and Reality choose an element of the unit ball in R^n, and the loss function is the squared distance between the decision (Learner's or an expert's move) and the observation (Reality's move). This covers the Brier game with Ω = {1,...,n}, each observation ω ∈ Ω represented as the vector (δ_ω{1},...,δ_ω{n}), and each decision γ ∈ P(Ω) represented as the vector (γ{1},...,γ{n}). However, in the Brier game the decision makers' moves are known to belong to the simplex {(u_1,...,u_n) ∈ [0, ∞)^n | Σ_{i=1}^n u_i = 1}, and Reality's move is known
to be one of the vertices of this simplex. Therefore, we can optimize the ball radius by considering the smallest ball containing the simplex rather than the unit ball. This is what we did for the results reported here (although the results reported in the conference version of this paper, Vovk and Zhdanov, 2008a, are for the WdAA applied to the unit ball in R^n).
Figure 5: The difference between the cumulative loss of each of the 8 bookmakers and of the Weighted Average Algorithm (WdAA) on the football data. The chosen value of the parameter c = 1/η for the WdAA, c := 16/3, minimizes its theoretical loss bound. The theoretical lower bound −ln 8 ≈ −2.0794 for Algorithm 1 is also shown (the theoretical lower bound for the WdAA, −11.0904, can be extracted from Table 3 below).

Figure 6: The difference between the cumulative loss of each of the 4 bookmakers and of the WdAA for c := 4 on the tennis data.
  Algorithm          Maximal difference   Theoretical bound
  Algorithm 1        1.238                2.0794
  WdAA (c = 16/3)    1.4076               11.0904
  WdAA (c = 1)       1.2255               none

Table 3: The maximal difference between the loss of each algorithm in the selected set and the loss of the best expert for the football data (second column); the theoretical upper bound on this difference (third column).

The radius of the smallest ball containing the simplex is R := √(1 − 1/n), which is approximately 0.8165 if n = 3, approximately 0.7071 if n = 2, and close to 1 if n is large. As described in Kivinen and Warmuth (1999), the WdAA is parameterized by c := 1/η instead of η, and the optimal value of c is c = 8R², leading to the guaranteed loss bound

  L_N ≤ min_{k=1,...,K} L_N^k + 8R² ln K for all N = 1, 2,...

(see Kivinen and Warmuth, 1999, Section 6). This is significantly looser than the bound (1) for Algorithm 1. The values c = 16/3 and c = 4 used in Figures 5 and 6, respectively, are obtained by minimizing the WdAA's performance guarantee, but minimizing a loose bound might not be such a good idea. Figure 7 shows the maximal difference

  max_{N=1,...,8999} (L_N(c) − min_{k=1,...,8} L_N^k),   (28)

where L_N(c) is the loss of the WdAA with parameter c on the football data over the first N steps and L_N^k is the analogous loss of the kth expert, as a function of c. Similarly, Figure 8 shows the maximal difference

  max_{N=1,...,10087} (L_N(c) − min_{k=1,...,4} L_N^k)   (29)

for the tennis data. And indeed, in both cases the value of c minimizing the empirical loss is far from the value minimizing the bound; as could be expected, the empirical optimal value for the WdAA is not so different from the optimal value for Algorithm 1. The following two figures, 9 and 10, demonstrate that there is no such anomaly for Algorithm 1.

Figures 11 and 12 show the behaviour of the WdAA for the value of parameter c = 1, that is, η = 1, that is optimal for Algorithm 1. They look remarkably similar to Figures 1 and 3, respectively. Precise numbers associated with the figures referred to above are given in Tables 3 and 4: the second column gives the maximal differences (28) and (29), respectively. The third column gives the theoretical upper bound on the maximal difference (i.e., the optimal value of A in (2), if available).
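As a concrete rendering of the description above, here is a minimal sketch (ours) of the WdAA for the Brier game: it keeps the same exponential weights as the SAA with learning rate η = 1/c and predicts with the weighted average of the experts' forecasts. The interface (class name, dictionaries for decisions) is our own choice; c would be set to 16/3 or 4 as discussed above, or to 1 for the comparison in Figures 11 and 12.

```python
import math
from typing import Dict, List

def brier_loss(omega: str, gamma: Dict[str, float]) -> float:
    return sum((gamma[o] - (1.0 if o == omega else 0.0)) ** 2 for o in gamma)

class WeightedAverageAlgorithm:
    """WdAA for the Brier game: SAA-style exponential weights with eta = 1/c,
    merged by weighted averaging of the experts' forecasts."""

    def __init__(self, outcomes: List[str], n_experts: int, c: float):
        self.outcomes = outcomes
        self.eta = 1.0 / c
        self.weights = [1.0] * n_experts

    def predict(self, expert_predictions: List[Dict[str, float]]) -> Dict[str, float]:
        total = sum(self.weights)
        return {o: sum(w * p[o] for w, p in zip(self.weights, expert_predictions)) / total
                for o in self.outcomes}

    def update(self, omega: str, expert_predictions: List[Dict[str, float]]) -> None:
        self.weights = [w * math.exp(-self.eta * brier_loss(omega, p))
                        for w, p in zip(self.weights, expert_predictions)]
```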
Figure 7: The maximal difference (28) for the WdAA as function of the parameter c on the football data. The theoretical guarantee ln 8 for the maximal difference for Algorithm 1 is also shown (the theoretical guarantee for the WdAA, 11.0904, is given in Table 3).

Figure 8: The maximal difference (29) for the WdAA as function of the parameter c on the tennis data. The theoretical bound for the WdAA is 5.5452 (see Table 4).
Figure 9: The maximal difference ((28) with η in place of c) for Algorithm 1 as function of the parameter η on the football data.

Figure 10: The maximal difference ((29) with η in place of c) for Algorithm 1 as function of the parameter η on the tennis data.
Figure 11: The difference between the cumulative loss of each of the 8 bookmakers and of the WdAA on the football data for c = 1 (the value of the parameter minimizing the theoretical performance guarantee for Algorithm 1).

Figure 12: The difference between the cumulative loss of each of the 4 bookmakers and of the WdAA for c = 1 on the tennis data.
  Algorithm          Maximal difference   Theoretical bound
  Algorithm 1        1.19                 1.3863
  WdAA (c = 4)       2.0583               5.5452
  WdAA (c = 1)       1.207                none

Table 4: The maximal difference between the loss of each algorithm in the selected set and the loss of the best expert for the tennis data (second column); the theoretical upper bound on this difference (third column).

The following two algorithms, the weak aggregating algorithm (WkAA) and the Hedge algorithm (HA), make increasingly weaker assumptions about the prediction game being played. Algorithm 1 computes the experts' weights taking full account of the degree of convexity of the loss function and uses a minimax optimal substitution function. Not surprisingly, it leads to the optimal loss bound of the form (2). The WdAA computes the experts' weights in the same way, but uses a suboptimal substitution function; this naturally leads to a suboptimal loss bound. The WkAA does not know that the loss function is strictly convex; it computes the experts' weights in a way that leads to decent results for all convex functions. The WkAA uses the same substitution function as the WdAA, but this appears less important than the way it computes the weights. The HA knows even less: it does not even know that its and the experts' performance is measured using a loss function. At each step the HA decides which expert it is going to follow, and at the end of the step it is only told the losses suffered by all experts.

Both WkAA and HA depend on a parameter, which is denoted c in the case of WkAA and β in the case of HA; the ranges of the parameters are c ∈ (0, ∞) and β ∈ [0, 1). The loss bounds that we give below assume that the loss function takes values in the interval [0, L], in the case of the WkAA, and that the losses are chosen from [0, L], in the case of HA, where L is a known constant. In the case of the Brier loss function, L = 2. In the notation of (1), a simple loss bound for the WkAA is

  L_N ≤ min_{k=1,...,K} L_N^k + 2L √(N ln K)   (30)

(Kalnishkan and Vyugin, 2008, Corollary 14); this is quite different from (1) as the regret term 2L √(N ln K) in (30) depends on N. This bound is guaranteed for c = √(ln K)/L. For c = √(8 ln K)/L, Cesa-Bianchi and Lugosi (2006, Theorem 2.3) prove the stronger bound

  L_N ≤ min_{k=1,...,K} L_N^k + L √(2N ln K) + L √((ln K)/8).

The performance of the WkAA on our data sets is significantly worse than that of the WdAA with c = 1: the maximal difference (28)-(29) does not exceed ln K for all reasonable values of c in the case of football but only for a very narrow range of c (which is far from both Kalnishkan and Vyugin's √(ln K)/2 and Cesa-Bianchi and Lugosi's √(8 ln K)/2) in the case of tennis. Moreover, the WkAA violates the bound for Algorithm 1 for all reasonable values of c on some natural subsets of the football data set: for example, when prediction starts from the second (2006/2007) season. Nothing similar happens for the WdAA with c = 1 on our data sets.
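To see how large the N-dependent regret terms above are on our data-set sizes, the following snippet (ours) evaluates the regret term of (30) and of the Cesa-Bianchi and Lugosi bound as stated above for the football (N = 8999, K = 8) and tennis (N = 10087, K = 4) data, with L = 2; Algorithm 1's regret term ln K is printed for comparison.

```python
import math

def wkaa_regret_simple(N, K, L=2):
    # Regret term of bound (30): 2 * L * sqrt(N * ln K).
    return 2 * L * math.sqrt(N * math.log(K))

def wkaa_regret_cbl(N, K, L=2):
    # Regret term of the Cesa-Bianchi and Lugosi bound quoted above.
    return L * math.sqrt(2 * N * math.log(K)) + L * math.sqrt(math.log(K) / 8)

for name, N, K in [("football", 8999, 8), ("tennis", 10087, 4)]:
    print(name,
          round(wkaa_regret_simple(N, K), 1),
          round(wkaa_regret_cbl(N, K), 1),
          round(math.log(K), 4))   # Algorithm 1's regret term ln K
```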
The loss bound for the HA is

  E[L_N] ≤ (L_N* ln(1/β) + L ln K) / (1 − β)   (31)

(Freund and Schapire, 1997, Theorem 2), where E[L_N] stands for Learner's expected loss (the HA is a randomized algorithm) and L_N* stands for min_{k=1,...,K} L_N^k. In the same framework, the strong aggregating algorithm attains the stronger bound

  E[L_N] ≤ (L_N* ln(1/β) + L ln K) / (K ln(K/(K − 1 + β)))   (32)

(Vovk, 1998, Example 7). Of course, the SAA applied to the HA framework (as described above, with no loss function) is very different from Algorithm 1, which is the SAA applied to the Brier game; we refer to the former algorithm as SAA-HA. Figure 13 shows the ratio of the right-hand side of (32) to the right-hand side of (31) as function of β.

Figure 13: The relative performance of the HA and SAA-HA for various numbers of experts as function of parameter β.

The losses suffered by the HA and the SAA-HA on our data sets are very close and violate Algorithm 1's regret term ln K for all values of β. It is interesting that, for both football and tennis data, the loss of the HA is almost minimized by setting its parameter β to 0 (the qualification "almost" is necessary only in the case of the tennis data). The HA with β = 0 coincides with the Follow the Leader Algorithm (FLA), which chooses the same decision as the best (with the smallest loss up to now) expert; if there are several best experts (which almost never happens after the first step), their predictions are averaged with equal weights. Standard examples (see, e.g., Cesa-Bianchi
and Lugosi, 2006, Section 4.3) show that this algorithm (unlike its version Follow the Perturbed Leader) can fail badly on some data sequences. Its empirical performance on the football data set is not so bad: it violates the loss bound for Algorithm 1 only slightly; however, on the tennis data set the bound is violated badly.

The decent performance of the Follow the Leader Algorithm on the football data set suggests checking the empirical performance of other similarly naive algorithms, such as the following two. The Simple Average Algorithm's decision is defined as the arithmetic mean of the experts' decisions (with equal weights). The Bayes Mixture Algorithm (BMA) is the strong aggregating algorithm applied to the log loss function; this algorithm is in fact optimal, but not for the Brier loss function. The BMA has a very simple description (Cesa-Bianchi and Lugosi, 2006, Section 9.2), and was studied from the point of view of prediction with expert advice already in DeSantis et al. (1988). We have found that none of the three naive algorithms perform consistently poorly, but they always fail badly on some natural part of our data sets. The advantage of the more sophisticated algorithms having strong performance guarantees is that there is no danger of catastrophic performance on any data set.

References

János Aczél. Lectures on Functional Equations and their Applications. Academic Press, New York, 1966.

Glenn W. Brier. Verification of forecasts expressed in terms of probability. Monthly Weather Review, 78:1-3, 1950.

Nicolò Cesa-Bianchi and Gábor Lugosi. Prediction, Learning, and Games. Cambridge University Press, Cambridge, England, 2006.

Nicolò Cesa-Bianchi, Yoav Freund, David Haussler, David P. Helmbold, Robert E. Schapire, and Manfred K. Warmuth. How to use expert advice. Journal of the Association for Computing Machinery, 44:427-485, 1997.

A. Philip Dawid. Probability forecasting. In Samuel Kotz, Norman L. Johnson, and Campbell B. Read, editors, Encyclopedia of Statistical Sciences, volume 7, pages 210-218. Wiley, New York, 1986.

Alfredo DeSantis, George Markowsky, and Mark N. Wegman. Learning probabilistic prediction functions. In Proceedings of the Twenty Ninth Annual IEEE Symposium on Foundations of Computer Science, pages 110-119, Los Alamitos, CA, 1988. IEEE Computer Society.

Yoav Freund and Robert E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55:119-139, 1997.

Richard M. Griffith. Odds adjustments by American horse-race bettors. American Journal of Psychology, 62:290-294, 1949.

Godfrey H. Hardy, John E. Littlewood, and George Pólya. Inequalities. Cambridge University Press, Cambridge, England, second edition, 1952.
David Haussler, Jyrki Kivinen, and Manfred K. Warmuth. Sequential prediction of individual sequences under general loss functions. IEEE Transactions on Information Theory, 44:1906-1925, 1998.

Yuri Kalnishkan and Michael V. Vyugin. The weak aggregating algorithm and weak mixability. Journal of Computer and System Sciences, 74:1228-1244, 2008. Special Issue devoted to COLT 2005.

Victor Khutsishvili. Personal communication. E-mail exchanges (from 27 November 2008), 2009.

Jyrki Kivinen and Manfred K. Warmuth. Averaging expert predictions. In Paul Fischer and Hans U. Simon, editors, Proceedings of the Fourth European Conference on Computational Learning Theory, volume 1572 of Lecture Notes in Artificial Intelligence, pages 153-167, Berlin, 1999. Springer.

Nick Littlestone and Manfred K. Warmuth. The Weighted Majority Algorithm. Information and Computation, 108:212-261, 1994.

Erik Snowberg and Justin Wolfers. Explaining the favorite-longshot bias: Is it risk-love or misperceptions? Available on-line at http://bpp.wharton.upenn.edu/jwolfers/ (accessed on 2 November 2009), November 2007.

John A. Thorpe. Elementary Topics in Differential Geometry. Springer, New York, 1979.

Vladimir Vovk. Aggregating strategies. In Mark Fulk and John Case, editors, Proceedings of the Third Annual Workshop on Computational Learning Theory, pages 371-383, San Mateo, CA, 1990. Morgan Kaufmann.

Vladimir Vovk. A game of prediction with expert advice. Journal of Computer and System Sciences, 56:153-173, 1998.

Vladimir Vovk. Derandomizing stochastic prediction strategies. Machine Learning, 35:247-282, 1999.

Vladimir Vovk. Competitive on-line statistics. International Statistical Review, 69:213-248, 2001.

Vladimir Vovk and Fedor Zhdanov. Prediction with expert advice for the Brier game. In Andrew McCallum and Sam Roweis, editors, Proceedings of the Twenty Fifth International Conference on Machine Learning, pages 1104-1111, New York, 2008a. ACM.

Vladimir Vovk and Fedor Zhdanov. Prediction with expert advice for the Brier game. Technical Report arXiv:0708.2502v2 [cs.LG], arXiv.org e-print archive, June 2008b.

Wikipedia. Glossary of bets offered by UK bookmakers. Wikipedia, The Free Encyclopedia, 2009. Accessed on 2 November 2009.