Beat the Mean Bandit


Yisong Yue, H. John Heinz III College, Carnegie Mellon University, Pittsburgh, PA, USA
Thorsten Joachims, Department of Computer Science, Cornell University, Ithaca, NY, USA

Abstract

The Dueling Bandits Problem is an online learning framework in which actions are restricted to noisy comparisons between pairs of strategies (also called bandits). It models settings where absolute rewards are difficult to elicit but pairwise preferences are readily available. In this paper, we extend the Dueling Bandits Problem to a relaxed setting where preference magnitudes can violate transitivity. We present the first algorithm for this more general Dueling Bandits Problem and provide theoretical guarantees in both the online and the PAC settings. We also show that the new algorithm has stronger guarantees than existing results even in the original Dueling Bandits Problem, which we validate empirically.

1. Introduction

Online learning approaches have become increasingly popular for modeling recommendation systems that learn from user feedback. Unfortunately, conventional online learning methods assume that absolute rewards (e.g., rate A from 1 to 5) are observable and reliable, which is not the case in search engines and other systems that have access only to implicit feedback (e.g., clicks) (Radlinski et al., 2008). However, for search engines there exist reliable methods for inferring preference feedback (e.g., is A better than B?) from clicks (Radlinski et al., 2008). This motivates the K-armed Dueling Bandits Problem (Yue et al., 2009), which formalizes the problem of online learning with preference feedback instead of absolute rewards.

One major limitation of existing algorithms for the Dueling Bandits Problem is the assumption that user preferences satisfy strong transitivity.

Appearing in Proceedings of the 28th International Conference on Machine Learning, Bellevue, WA, USA, 2011. Copyright 2011 by the author(s)/owner(s).
For example, for strategies A, B and C, if users prefer A to B by 55%, and B to C by 60%, then strong transitivity requires that users prefer A to C by at least 60%. Such requirements are often violated in practice (see Section 3.1). In this paper, we extend the K-armed Dueling Bandits Problem to a relaxed setting where stochastic preferences can violate strong transitivity. We present a new algorithm, called Beat-the-Mean, with theoretical guarantees that are not only stronger than previous results for the original setting, but also degrade gracefully with the degree of transitivity violation. We empirically validate our findings and observe that the new algorithm is indeed more robust, and that it has orders-of-magnitude lower variability. Finally, we show that the new algorithm also has PAC-style guarantees for the Dueling Bandits Problem.

2. Related Work

Conventional multi-armed bandit problems have been well studied in both the online (Lai & Robbins, 1985; Auer et al., 2002) and PAC (Mannor & Tsitsiklis, 2004; Even-Dar et al., 2006; Kalyanakrishnan & Stone, 2010) settings. These settings differ from ours primarily in that feedback is measured on an absolute scale. Methods that learn using noisy pairwise comparisons include active learning approaches (Radlinski & Joachims, 2007) and algorithms for finding the maximum element (Feige et al., 1994). The latter setting is similar to our PAC setting, but requires a common stochastic model for all comparisons. In contrast, our analysis explicitly accounts not only for the fact that different pairs of items yield different stochastic preferences, but also that these preferences might not be internally consistent (i.e., may violate strong transitivity).

Yue & Joachims (2009) considered a continuous version of the Dueling Bandits Problem, where bandits
Table 1. Pr(Row > Col) − 1/2 as estimated from interleaving experiments with six retrieval functions (A-F) on ArXiv.org.

are represented as high-dimensional points, and derivatives are estimated by comparing two bandits. More recently, Agarwal et al. (2010) proposed a near-optimal multi-point algorithm for the conventional continuous bandit setting, which may be adaptable to the Dueling Bandits Problem.

Our proposed algorithm is structurally similar to the Successive Elimination algorithm proposed by Even-Dar et al. (2006) for the conventional PAC bandit setting. Our theoretical analysis differs significantly due to having to deal with pairwise comparisons.

3. The Learning Problem

The K-armed Dueling Bandits Problem (Yue et al., 2009) is an iterative learning problem on a set of bandits B = {b_1, ..., b_K} (also called arms or strategies). Each iteration comprises a noisy comparison (duel) between two bandits (possibly the same bandit with itself). We assume the comparison outcomes to have independent and time-stationary distributions. We write the comparison probabilities as

P(b > b') = ε(b, b') + 1/2,

where ε(b, b') ∈ (−1/2, 1/2) represents the distinguishability between b and b'. We assume there exists a total ordering ≻ such that b ≻ b' implies ε(b, b') > 0. We also use the notation ε_{i,j} ≡ ε(b_i, b_j). Note that ε(b', b) = −ε(b, b') and ε(b, b) = 0. For ease of analysis, we also assume WLOG that the bandits are indexed in preferential order b_1 ≻ b_2 ≻ ... ≻ b_K.

Online Setting. In the online setting, algorithms are evaluated on the fly during every iteration. Let (b_1^(t), b_2^(t)) be the bandits chosen at iteration t. Let T be the time horizon. We quantify performance using the following notion of regret,

R_T = Σ_{t=1}^{T} ( ε(b_1, b_1^(t)) + ε(b_1, b_2^(t)) ) / 2.    (1)

In search applications, (1) reflects the fraction of users who would have preferred b_1 over b_1^(t) and b_2^(t).

PAC Setting. In the PAC setting, the goal is to confidently find a near-optimal bandit. More precisely, an (ε,δ)-PAC algorithm will find a bandit b̂ such that P(ε(b_1, b̂) > ε) ≤ δ.
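The setup above can be sketched in Python. This is a minimal illustration, not the authors' code: all names are ours, and the 1/2 factor in the per-step regret follows our reading of eq. (1).

```python
import random

random.seed(0)

# Sketch of the K-armed dueling bandits setup (names are ours).
# eps[i][j] holds the distinguishability eps(b_i, b_j); P(b_i beats b_j) = 1/2 + eps[i][j].

def duel(eps, i, j):
    """One noisy comparison; returns True iff b_i beats b_j."""
    return random.random() < 0.5 + eps[i][j]

def regret_increment(eps, i, j):
    """Per-iteration regret of comparing (b_i, b_j) as in eq. (1), with b_0 the best
    bandit: the fraction of users who would have preferred b_0 over the pair played."""
    return (eps[0][i] + eps[0][j]) / 2

# Toy instance: 3 bandits with eps(b_i, b_j) = 0.1 whenever b_i is preferred to b_j.
K = 3
eps = [[0.0] * K for _ in range(K)]
for i in range(K):
    for j in range(i + 1, K):
        eps[i][j], eps[j][i] = 0.1, -0.1
```

Comparing the best bandit against itself incurs no regret, which is why the exploit phase described later is cost-free when the returned bandit is correct.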
Efficiency is measured via the sample complexity, i.e., the total number of comparisons required. Note that sample complexity penalizes each comparison equally, whereas the regret (1) of a comparison depends on the bandits being compared.

3.1. Modeling Assumptions

Previous work (Yue et al., 2009) relied on two properties, called stochastic triangle inequality and strong stochastic transitivity. In this paper, we assume a relaxed version of strong stochastic transitivity that more accurately characterizes real-world user preferences. Note that we only require the two properties below to be defined relative to the best bandit b_1.

Relaxed Stochastic Transitivity. For any triplet of bandits b_1 ≻ b_j ≻ b_k and some γ ≥ 1, we assume γ ε_{1,k} ≥ max{ε_{1,j}, ε_{j,k}}. This can be viewed as a monotonicity or internal consistency property of user preferences. Strong stochastic transitivity, considered in (Yue et al., 2009), is the special case where γ = 1.

Stochastic Triangle Inequality. For any triplet of bandits b_1 ≻ b_j ≻ b_k, we assume ε_{1,k} ≤ ε_{1,j} + ε_{j,k}. This can be viewed as a diminishing returns property.¹

To understand why relaxed stochastic transitivity is important, consider Table 1, which describes preferences elicited from pairwise interleaving experiments (Radlinski et al., 2008) using six retrieval functions in the full-text search engine of ArXiv.org. We see that user preferences obey a total ordering A ≻ B ≻ ... ≻ F, and satisfy relaxed stochastic transitivity for γ = 1.5 (due to A, B, D) as well as stochastic triangle inequality. A good algorithm should have guarantees that degrade smoothly as γ increases.

4. Algorithm and Analysis

Our algorithm, called Beat-the-Mean, is described in Algorithm 1. The online and PAC settings require different input parameters, and those are specified in Algorithm 2 and Algorithm 3, respectively. Beat-the-Mean proceeds in a sequence of rounds, and maintains a working set W_l of active bandits during each round l.
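The two assumptions above can be checked mechanically for any estimated preference matrix. A minimal sketch (function names and the example numbers are ours, chosen to loosely mirror how A, B, D force γ = 1.5 in Table 1):

```python
import itertools

def transitivity_gamma(eps):
    """Smallest gamma >= 1 satisfying relaxed stochastic transitivity w.r.t. the
    best bandit b_0: gamma * eps[0][k] >= max(eps[0][j], eps[j][k]) for all
    0 < j < k, assuming bandits are indexed in preference order."""
    K = len(eps)
    gamma = 1.0
    for j, k in itertools.combinations(range(1, K), 2):
        if eps[0][k] > 0:
            gamma = max(gamma, max(eps[0][j], eps[j][k]) / eps[0][k])
    return gamma

def satisfies_triangle(eps):
    """Stochastic triangle inequality w.r.t. b_0: eps[0][k] <= eps[0][j] + eps[j][k]."""
    K = len(eps)
    return all(eps[0][k] <= eps[0][j] + eps[j][k] + 1e-12
               for j, k in itertools.combinations(range(1, K), 2))

# Uniform gaps satisfy strong transitivity (gamma = 1); boosting one internal
# gap to 0.15 while eps[0][2] stays at 0.1 forces gamma = 1.5.
strong = [[0.0, 0.1, 0.1], [-0.1, 0.0, 0.1], [-0.1, -0.1, 0.0]]
relaxed = [[0.0, 0.1, 0.1], [-0.1, 0.0, 0.15], [-0.1, -0.15, 0.0]]
```

Note that both checks are stated only relative to the best bandit, matching the weakened form of the assumptions used in the analysis.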
For each active bandit b_i ∈ W_l, an empirical estimate P̂_i (Line 6) is maintained for how often b_i beats the mean bandit b̃_l of W_l, where comparing b_i with b̃_l is functionally identical to comparing b_i

¹ Our results can be extended to the relaxed case where ε_{1,k} ≤ λ(ε_{1,j} + ε_{j,k}) for λ ≥ 1. However, we focus on relaxing strong transitivity since it is far more easily violated in practice.
Algorithm 1 Beat-the-Mean
1: Input: B = {b_1, ..., b_K}, N, T, c_{δ,γ}(·)
2: W_1 ← {b_1, ..., b_K} // working set of active bandits
3: l ← 1 // num rounds
4: ∀b ∈ W_l, n_b ← 0 // num comparisons
5: ∀b ∈ W_l, w_b ← 0 // num wins
6: ∀b ∈ W_l, P̂_b ← w_b/n_b, or 1/2 if n_b = 0
7: n* ← min_{b ∈ W_l} n_b
8: c* ← c_{δ,γ}(n*), or 1 if n* = 0 // confidence radius
9: t ← 0 // total number of iterations
10: while |W_l| > 1 and t < T and n* < N do
11:   b ← argmin_{b ∈ W_l} n_b // break ties randomly
12:   select b' ∈ W_l uniformly at random, compare b vs b'
13:   if b wins, w_b ← w_b + 1
14:   n_b ← n_b + 1
15:   t ← t + 1
16:   if min_{b ∈ W_l} P̂_b + c* ≤ max_{b ∈ W_l} P̂_b − c* then
17:     b' ← argmin_{b ∈ W_l} P̂_b
18:     ∀b ∈ W_l, delete comparisons with b' from w_b, n_b
19:     W_{l+1} ← W_l \ {b'} // update working set
20:     l ← l + 1 // new round
21:   end if
22: end while
23: return argmax_{b ∈ W_l} P̂_b

Algorithm 2 Beat-the-Mean (Online)
1: Input: B = {b_1, ..., b_K}, γ, T
2: δ ← 1/(2T²K²)
3: Define c_{δ,γ}(·) using (4)
4: b̂ ← Beat-the-Mean(B, ∞, T, c_{δ,γ})

with a bandit sampled uniformly from W_l (Line 12). In each iteration, a bandit with the fewest recorded comparisons is selected to compare with b̃_l (Line 11). Whenever the empirically worst bandit b' is separated from the empirically best one by a sufficient confidence margin (Line 16), the round ends, all recorded comparisons involving b' are removed (Line 18), and b' is removed from W_l (Line 19). Afterwards, each remaining P̂_i is again an unbiased estimate of b_i versus the mean bandit b̃_{l+1} of the new W_{l+1}. The algorithm terminates when only one active bandit remains, or when another termination condition is met (Line 10).

Notation and terminology. We call a round all the contiguous comparisons until a bandit is removed. We say b_i defeats b_j if b_i and b_j have the highest and lowest empirical means, respectively, and the difference is sufficiently large (Line 16). Our algorithm makes a mistake whenever it removes the best bandit b_1 from any W_l. We will use the shorthand

P̂_{i,j,n} ≡ P̂_{i,n} − P̂_{j,n},    (2)

where P̂_{i,n} refers to the empirical estimate of b_i versus the mean bandit b̃_l after n comparisons (we often suppress l for brevity).
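As a concrete (unofficial) rendering of the pseudocode above, the following Python sketch implements the main loop of Algorithm 1. The function name, the duel/conf callables, and the history bookkeeping are our own constructions, not the authors' code.

```python
import math
import random

def beat_the_mean(K, duel, conf, N=float("inf"), T=float("inf")):
    """Sketch of Algorithm 1. duel(i, j) -> True iff b_i beats b_j; conf(n) is the
    confidence radius c(n). The per-bandit history lets us delete the comparisons
    that involve a removed bandit, as Line 18 of the pseudocode requires."""
    W = set(range(K))            # working set of active bandits
    n = {b: 0 for b in W}        # comparisons recorded per bandit
    w = {b: 0 for b in W}        # wins recorded per bandit
    hist = {b: [] for b in W}    # (opponent, won) pairs backing n_b and w_b
    t = 0

    def phat(b):                 # empirical estimate vs. the mean bandit
        return w[b] / n[b] if n[b] else 0.5

    while len(W) > 1 and t < T and min(n[b] for b in W) < N:
        b = min(W, key=lambda x: (n[x], random.random()))  # fewest comparisons, random ties
        opp = random.choice(sorted(W))                     # uniform opponent = mean bandit
        won = duel(b, opp)
        w[b] += won
        n[b] += 1
        hist[b].append((opp, won))
        t += 1
        nstar = min(n[x] for x in W)
        c = conf(nstar) if nstar else 1.0
        if min(phat(x) for x in W) + c <= max(phat(x) for x in W) - c:
            worst = min(W, key=phat)         # empirically worst bandit
            W.remove(worst)
            for x in W:                      # drop comparisons against the removed bandit
                hist[x] = [(o, v) for (o, v) in hist[x] if o != worst]
                n[x] = len(hist[x])
                w[x] = sum(v for (_, v) in hist[x])
    return max(W, key=phat)

# Smoke test on a deterministic instance where b_0 wins every duel it plays.
random.seed(0)
best = beat_the_mean(3, duel=lambda i, j: i <= j,
                     conf=lambda m: math.sqrt(2.0 / m), T=100_000)
```

Deleting the removed bandit's comparisons is what keeps each remaining P̂_i an unbiased estimate against the mean of the new, smaller working set.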
Algorithm 3 Beat-the-Mean (PAC)
1: Input: B = {b_1, ..., b_K}, γ, ε, δ
2: Define N using (8)
3: Define c_{δ,γ}(·) using (7)
4: b̂ ← Beat-the-Mean(B, N, ∞, c_{δ,γ})

We call the empirically best and empirically worst bandits the ones with the highest and lowest P̂_{i,n}, respectively. We call the best and worst bandits in W_l those given by argmin_{b_i ∈ W_l} i and argmax_{b_i ∈ W_l} i, respectively. We define the expected performance of any active bandit b_i ∈ W_l to be

E[P̂_i] = (1/|W_l|) Σ_{b ∈ W_l} P(b_i > b).    (3)

For clarity of presentation, all proofs are contained in the appendix. We begin by stating two observations.

Observation 1. Let b_k be the worst bandit in W_l, and let b_1 ∈ W_l. Then E[P̂_{1,k,n}] ≥ ε_{1,k}.

Observation 2. Let b_k be the worst bandit in W_l, and let b_1 ∈ W_l. Then ∀b_j ∈ W_l : E[P̂_{j,k,n}] ≤ 2γ²ε_{1,k}.

Observation 1 implies a margin between the expected performance of b_1 and the worst bandit in W_l. This will be used to bound the comparisons required in each round. Due to relaxed stochastic transitivity, b_1 may not have the best expected performance.³ Observation 2 bounds the difference in expected performance between any bandit and the worst bandit in W_l. This will be used to derive the appropriate confidence intervals so that b_1 ∈ W_{l+1} with sufficient probability.

4.1. Online Setting

We take an explore-then-exploit approach for the online setting, similar to (Yue et al., 2009). For time horizon T, relaxed transitivity parameter γ, and bandits B = {b_1, ..., b_K}, we use Beat-the-Mean in the explore phase (see Algorithm 2). Let b̂ denote the bandit returned by Beat-the-Mean. We then enter an exploit phase by repeatedly choosing (b_1^(t), b_2^(t)) = (b̂, b̂) until reaching T total comparisons. Comparisons in the exploit phase incur no regret assuming b̂ = b_1. We use the following confidence interval c(·),

c_{δ,γ}(n) = 3γ² √((1/n) log(1/δ)),    (4)

where δ = 1/(2K²T²). We do not use the last remaining input N to Beat-the-Mean (i.e., we set N = ∞); it is used only in the PAC setting.⁴
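The confidence radius (4) is straightforward to compute; here is a small sketch, where the value of δ is our reconstruction of the garbled constant in the text:

```python
import math

def c_online(n, T, K, gamma):
    """Confidence radius of eq. (4): c(n) = 3*gamma^2 * sqrt(log(1/delta) / n).
    delta = 1/(2*T**2*K**2) is our reading of the text's constant."""
    delta = 1.0 / (2 * T**2 * K**2)
    return 3 * gamma**2 * math.sqrt(math.log(1 / delta) / n)

# The radius widens with gamma (violated transitivity costs more exploration)
# and shrinks like n^{-1/2}, halving when the sample count quadruples.
a = c_online(100, T=10**6, K=10, gamma=1.0)
b = c_online(100, T=10**6, K=10, gamma=1.5)
```

The γ² scaling is what makes each elimination slower, but safer, when preferences are only relaxed-transitive.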
³ For example, the second best bandit may lose slightly to b_1 but be strongly preferred versus the other bandits.

⁴ When γ = 1, we can use a tighter confidence interval (9).
We will show that Beat-the-Mean correctly returns the best bandit w.p. at least 1 − 1/T. Correspondingly, a suboptimal bandit is returned with probability at most 1/T, in which case we assume maximal regret O(T). We can thus bound the expected regret by

E[R_T] ≤ (1 − 1/T) E[R_T^BtM] + (1/T) O(T) = O( E[R_T^BtM] + 1 ),    (5)

where R_T^BtM denotes the regret incurred from running Beat-the-Mean. Thus the regret bound depends entirely on the regret incurred by Beat-the-Mean.

Theorem 1. For T ≥ K, Beat-the-Mean makes a mistake with probability at most 1/T, or otherwise returns the best bandit b_1 ∈ B and accumulates online regret (1) that is bounded with high probability by

O( Σ_{l=1}^{K−1} min{ γ⁷/ε_l , γ⁵ε_l/ε*² } log T ) = O( (γ⁷K/ε*) log T ),    (6)

where ε_l = ε_{1,k} if b_k is the worst remaining bandit in round l, and ε* = min{ε_{1,2}, ..., ε_{1,K}}.

Corollary 1. For T ≥ K, mistake-free executions of Beat-the-Mean accumulate online regret that is bounded with high probability by

O( Σ_{k=2}^{K} (γ⁸/ε_{1,k}) log T ).

Our algorithm improves on the previously proposed Interleaved Filter (IF) algorithm (Yue et al., 2009) in two ways. First, (6) applies when γ > 1,⁵ whereas the bound for IF does not. Second, while (6) matches the expected regret bound for IF when γ = 1, ours is a high probability bound. Our experiments show that IF can accumulate large regret even when strong stochastic transitivity is slightly violated (e.g., γ = 1.5 as in Table 1), and has high variance when γ = 1. Beat-the-Mean exhibits neither drawback.

4.2. PAC Setting

When running Beat-the-Mean in the PAC setting, one of two things can happen. In the first case, the active set W_l is reduced to a single bandit b̂, in which case we will prove that b̂ = b_1 with sufficient probability. In the second case, the algorithm terminates when the number of comparisons recorded for each remaining bandit is at least N, defined in (8) below, in which

⁵ In practice, γ is typically close to 1, making the somewhat poor dependence on γ less of a concern.
This poor dependence seems primarily due to the bound of Observation 2 often being quite loose. However, it is unclear if one can do better in the general worst case.

Figure 1. Comparing regret of Beat-the-Mean (light) and Interleaved Filter (dark) when γ = 1, plotted against the number of bandits. For the top graph, each ε_{i,j} = 0.1 where b_i ≻ b_j. For the bottom graph, each ε_{i,j} = 1/(1 + exp(µ_j − µ_i)) − 0.5, where each µ_i ∼ N(0, 1). Error bars indicate one standard deviation.

case we prove that every remaining bandit is within ε of b_1 with sufficient probability. It suffices to focus on the second case when analyzing sample complexity.

The input parameters are described in Algorithm 3. We use the following confidence interval c(·),

c_{δ,γ}(n) = 3γ² √((1/n) log(K³N/δ)),    (7)

where N is the smallest positive integer such that

N = (36γ⁶/ε²) log(K³N/δ).    (8)

Note that there are at most K²N total time steps, since there are at most K rounds with at most KN comparisons removed after each round.⁶ We do not use the last remaining input T to Beat-the-Mean (i.e., we set T = ∞); it is used only in the online setting.

Theorem 2. Beat-the-Mean in Algorithm 3 is an (ε, δ)-PAC algorithm with sample complexity

O(KN) = O( (Kγ⁶/ε²) log(KN/δ) ).

5. Evaluating Online Regret

As mentioned earlier, in the online setting, the theoretical guarantees of Beat-the-Mean offer two advantages over the previously proposed Interleaved Filter

⁶ K²N is a trivial bound on the sample complexity of Alg. 3. Thm. 2 gives a high probability bound of O(KN).
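Since (8) defines N only implicitly, it can be solved numerically. The following sketch uses our reading of the garbled constants; the function name and parameters are ours.

```python
import math

def pac_budget(K, gamma, eps, delta):
    """Smallest integer N with N >= (36*gamma**6/eps**2) * log(K**3 * N / delta),
    our reading of eq. (8). Fixed-point iteration converges because the
    right-hand side grows only logarithmically in N."""
    N = 1
    while True:
        rhs = (36 * gamma**6 / eps**2) * math.log(K**3 * N / delta)
        if N >= rhs:
            return N
        N = math.ceil(rhs)

N = pac_budget(K=10, gamma=1.0, eps=0.1, delta=0.05)
```

Because N enters the bound only inside the logarithm, a couple of iterations already land near the fixed point, and the resulting per-bandit budget drives the O(KN) sample complexity of Theorem 2.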
Figure 2. Comparing regret of Beat-the-Mean (light) and Interleaved Filter (dark) when γ > 1. For the top graph, we fix γ = 1.3 and vary the number of bandits K. For the bottom graph, we fix K = 500 and vary γ. Error bars indicate one standard deviation.

(IF) algorithm (Yue et al., 2009) by (A) giving a high probability exploration bound versus one in expectation (and thus ensuring low variance), and (B) being provably robust to relaxations of strong transitivity (γ > 1). We now evaluate these cases empirically.

IF maintains a candidate bandit, and plays the candidate against the remaining bandits until one is confidently superior. Thus, the regret of IF heavily depends on the initial candidate, resulting in increased variance. When strong transitivity is violated (γ > 1), one can create scenarios where b_{i+1} is more likely to defeat b_i than any other bandit is. Thus, IF will sift through O(K) candidates and suffer O(K²) regret in the worst case. Beat-the-Mean avoids this issue by playing every bandit against the mean bandit.

We evaluate using simulations, where we construct stochastic preferences ε_{i,j} that are unknown to the algorithms. Since the guarantees of Beat-the-Mean and IF differ w.r.t. the number of bandits K, we fix the time horizon T and vary K. We first evaluate Case (A), where strong transitivity holds (γ = 1). In this setting, we will use the following confidence interval,

c_δ(n) = √((1/n) log(1/δ)),    (9)

where δ = 1/(2T²K²). This is tighter than (4), leading to a more efficient algorithm that still retains the correctness guarantees when γ = 1.⁷

⁷ The key observation is that cases (b) and (c) in Lemma 1 do not apply when γ = 1. The confidence interval used by IF (Yue et al., 2009) is also equivalent to (9).

We evaluate two settings for Case (A), with the results presented in Figure 1. The first setting (Figure 1, top) defines ε_{i,j} = 0.1 for all b_i ≻ b_j. The second setting (Figure 1, bottom) defines for each b_i a utility µ_i ∼ N(0, 1) drawn i.i.d. from a unit normal distribution.
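The preference constructions used in these simulations (the logistic utilities of Case (A) and the relaxed-transitivity gaps of Case (B), both described in the surrounding text) can be sketched as follows. Function names are ours, and the Case (B) indexing is our reconstruction of the garbled definition.

```python
import math
import random

random.seed(1)

def logistic_eps(K):
    """Case (A), second setting (as we read it): utilities mu_i ~ N(0, 1), sorted
    so that b_0 is best, and eps_{i,j} = 1/(1 + exp(mu_j - mu_i)) - 0.5."""
    mu = sorted((random.gauss(0, 1) for _ in range(K)), reverse=True)
    return [[1.0 / (1.0 + math.exp(mu[j] - mu[i])) - 0.5 for j in range(K)]
            for i in range(K)]

def relaxed_eps(K, gamma):
    """Case (B) (reconstructed): eps_{i,j} = 0.1*gamma for adjacent pairs not
    involving b_0 (i.e. 0 < i = j - 1), and 0.1 otherwise, for b_i preferred to b_j."""
    eps = [[0.0] * K for _ in range(K)]
    for i in range(K):
        for j in range(i + 1, K):
            e = 0.1 * gamma if (i > 0 and j == i + 1) else 0.1
            eps[i][j], eps[j][i] = e, -e
    return eps
```

In the Case (B) matrix, every gap to the best bandit stays at 0.1 while internal adjacent gaps are amplified to 0.1γ, so the construction violates strong transitivity by exactly the factor γ.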
Stochastic preferences are defined using a logistic model, i.e., ε_{i,j} = 1/(1 + exp(µ_j − µ_i)) − 0.5. For both settings, we run repeated trials and observe the average regret of both methods to increase linearly with K, but the regret of IF shows much higher variance, which matches the theoretical results.

We next evaluate Case (B), where stochastic preferences only satisfy relaxed transitivity. For any b_i ≻ b_j, we define ε_{i,j} = 0.1γ if 1 < i = j − 1, and ε_{i,j} = 0.1 otherwise. Figure 2 (top) shows the results for γ = 1.3. We observe similar behavior from Beat-the-Mean as in Case (A),⁸ but IF suffers superlinear regret (and is thus not robust), with significantly higher variance. For completeness, Figure 2 (bottom) shows how regret changes as γ is varied for fixed K = 500. We observe a phase transition near γ = 1.3, where IF begins to suffer superlinear regret (w.r.t. K). As γ increases further, each round in IF also shortens due to each ε_{i,i+1} increasing, causing the regret of IF to decrease slightly (for fixed K). Beat-the-Mean uses confidence intervals (4) that grow as γ increases, causing it to require more comparisons for each bandit elimination. It is possible that Beat-the-Mean can still behave correctly (i.e., be mistake-free) in practice while using tighter confidence intervals (e.g., (9)).

6. Conclusion

We have presented an algorithm for the Dueling Bandits Problem with high probability exploration bounds for the online and PAC settings. The performance guarantees of our algorithm degrade gracefully as one relaxes the strong stochastic transitivity property, a property often violated in practice. Empirical evaluations confirm the advantages of our theoretical guarantees over previous results.

Acknowledgements. This work was funded in part by NSF Award IIS.

A. Extended Analysis

Proof of Observation 1. We can write E[P̂_{1,k,n}] using (2) and (3) as

E[P̂_{1,k,n}] = (1/|W_l|) ( 2ε_{1,k} + Σ_{b_j ∈ W_l \ {b_1, b_k}} (ε_{1,j} − ε_{k,j}) ).

⁸ The regret is larger than in the analogous setting in Case (A) due to the use of wider confidence intervals.
Each b_j in the summation above satisfies b_1 ≻ b_j ≻ b_k. Thus, ε_{1,j} − ε_{k,j} = ε_{1,j} + ε_{j,k} ≥ ε_{1,k}, due to stochastic triangle inequality, implying E[P̂_{1,k,n}] ≥ ε_{1,k}.

Proof of Observation 2. We focus on the non-trivial case where b_j ≠ b_k, b_1. Combining (3) and (2) yields

E[P̂_{j,k,n}] = (1/|W_l|) ( 2ε_{j,k} + Σ_{b_h ∈ W_l \ {b_j, b_k}} (ε_{j,h} − ε_{k,h}) ).

We know that ε_{j,k} ≤ γε_{1,k} from relaxed transitivity. For each b_h in the above summation, either (a) b_h ≻ b_j ≻ b_k, or (b) b_j ≻ b_h ≻ b_k. In case (a) we have ε_{j,h} − ε_{k,h} = ε_{j,h} + ε_{h,k} ≤ ε_{h,k} ≤ γε_{1,k}, since ε_{j,h} ≤ 0. In case (b) we have ε_{j,h} + ε_{h,k} ≤ γ(ε_{1,h} + ε_{1,k}) ≤ 2γ max{ε_{1,h}, ε_{1,k}} ≤ 2γ²ε_{1,k}. This implies that E[P̂_{j,k,n}] ≤ 2γ²ε_{1,k}.

A.1. Online Setting

We assume here that Beat-the-Mean is run according to Alg. 2. We define c_n ≡ c_{δ,γ}(n) using (4).

Lemma 1. For δ = 1/(2T²K²), and assuming that b_1 ∈ W_l, the probability of b_i ∈ W_l \ {b_1} defeating b_1 at the end of round l (i.e., a mistake) is at most 1/(TK²).

Proof. Having b_i defeat b_1 requires that for some n, P̂_{1,n} + c_n < P̂_{i,n} − c_n, and also that b_1 is the first bandit to be defeated in the round. Since at any time all remaining bandits have the same confidence interval size, P̂_{1,n} must be the lowest empirical mean. In particular, this requires P̂_{1,n} ≤ P̂_{k,n}, where b_k is the worst bandit in W_l. We will show that, for any n, the probability of making a mistake is at most 2δ ≤ 1/(T²K²). Thus, by the union bound, the probability of b_i mistakenly defeating b_1 for any n ≤ T is at most 2Tδ ≤ 1/(TK²).

We consider three sufficient cases:

(a) E[P̂_{i,n}] ≤ E[P̂_{1,n}]
(b) E[P̂_{i,n}] > E[P̂_{1,n}] and n < (4/ε²_{1,k}) log(1/δ)
(c) E[P̂_{i,n}] > E[P̂_{1,n}] and n ≥ (4/ε²_{1,k}) log(1/δ)

In case (a), applying Hoeffding's inequality yields P(P̂_{1,n} + c_n < P̂_{i,n} − c_n) ≤ δ² < 2δ. In cases (b) and (c), we have b_i ≠ b_k, since Observation 1 implies E[P̂_{1,n}] ≥ E[P̂_{k,n}].
In case (b) we have c_n > (3/2)γ²ε_{1,k}, which implies via Hoeffding's inequality,

P(P̂_{i,n} − c_n > P̂_{1,n} + c_n)
  ≤ P(P̂_{i,n} − c_n > P̂_{k,n} + c_n)    (10)
  = P(P̂_{i,k,n} − E[P̂_{i,k,n}] > 2c_n − E[P̂_{i,k,n}])
  ≤ P(P̂_{i,k,n} − E[P̂_{i,k,n}] > 2c_n − 2γ²ε_{1,k})    (11)
  ≤ P(P̂_{i,k,n} − E[P̂_{i,k,n}] > (2/3)c_n)
  ≤ exp(−2γ⁴ log(1/δ)) ≤ δ² < 2δ,

where (10) follows from ∀j : ε_{1,j} ≥ ε_{k,j}, and (11) follows from Observation 2.

In case (c) we know that

P(P̂_{k,n} ≥ P̂_{1,n}) = P(P̂_{k,1,n} − E[P̂_{k,1,n}] ≥ −E[P̂_{k,1,n}])
  ≤ P(P̂_{k,1,n} − E[P̂_{k,1,n}] ≥ ε_{1,k})    (12)
  ≤ exp(−nε²_{1,k}/2) ≤ δ² < 2δ.

Lemma 2. Beat-the-Mean makes a mistake with probability at most 1/T.

Proof. By Lemma 1, the probability that b_j ∈ W_l \ {b_1} defeats b_1 is at most 1/(TK²). There are at most K active bandits in any round and at most K rounds. Applying the union bound proves the lemma.

Lemma 3. Let δ = 1/(2T²K²), T ≥ K, and assume b_1 ∈ W_l. If b_k is the worst bandit in W_l, then the number of comparisons each b ∈ W_l needs to accumulate before some bandit is removed (thus ending the round) is with high probability bounded by

O( (γ⁴/ε²_{1,k}) log(TK) ) = O( (γ⁴/ε²_{1,k}) log T ).

Proof. It suffices to bound the comparisons n required to remove b_k. We will show that for any d, there exists an m depending only on d such that

P( n ≥ m γ⁴ ε⁻²_{1,k} log(TK) ) ≤ min{ K^(−d), T^(−d) },

for all K and T sufficiently large. We will focus on the sufficient condition of b_1 defeating b_k: if at any t we have P̂_{1,t} − c_t > P̂_{k,t} + c_t, then b_k is removed from W_l. It follows that for any t, if n > t, then P̂_{1,t} − c_t ≤ P̂_{k,t} + c_t, and so P(n > t) ≤ P(P̂_{1,t} − c_t ≤ P̂_{k,t} + c_t). Note from Observation 1 that E[P̂_{1,k,t}] ≥ ε_{1,k}. Thus,

P(P̂_{1,t} − c_t ≤ P̂_{k,t} + c_t)
  = P(E[P̂_{1,k,t}] − P̂_{1,k,t} ≥ E[P̂_{1,k,t}] − 2c_t)
  ≤ P(E[P̂_{1,k,t}] − P̂_{1,k,t} ≥ ε_{1,k} − 2c_t).    (13)
For a suitable constant m depending only on d, and any t ≥ mγ⁴ log(TK)/ε²_{1,k}, we have c_t ≤ ε_{1,k}/4, and so applying Hoeffding's inequality for this m and t shows that (13) is bounded by

P(E[P̂_{1,k,t}] − P̂_{1,k,t} ≥ ε_{1,k}/2) ≤ exp(−tε²_{1,k}/8).

Since t ≥ mγ⁴ log(TK)/ε²_{1,k} by assumption, we have tε²_{1,k}/8 ≥ (m/8) log(TK), and so exp(−tε²_{1,k}/8) ≤ exp(−(m/8) log(TK)) = (TK)^(−m/8), which is bounded by K^(−m/8) and T^(−m/8). We finally note that for T ≥ K, O(log(TK)) = O(log T).

Lemma 4. Assume b_1 ∈ W_{l'} for all l' ≤ l. Then the number of comparisons removed at the end of round l is bounded with high probability by

O( min{ γ⁴/ε*² , γ⁶/ε_l² } log T ),

where ε_l = ε_{1,k} if b_k is the worst bandit in W_l, and ε* = min{ε_{1,2}, ..., ε_{1,K}}.

Proof. Let b_k be the worst bandit in W_l, and let b_j ∈ W_l denote the bandit removed at the end of round l. By Lemma 3, the total number of comparisons that b_j accumulates during round l is bounded with high probability by

O( (γ⁴/ε²_{1,k}) log T ) = O( (γ⁶/ε_l²) log T ).    (14)

Some bandits may have accumulated more comparisons than (14), since the number of remaining comparisons from previous rounds may exceed (14). However, Lemma 3 implies the number of remaining comparisons is at most

O( (γ⁴/ε̃²) log T ),

where ε̃ = min_{b_q : b_k ⪰ b_q} ε_{1,q} ≥ ε_{1,k}/γ by relaxed transitivity; any round's comparisons are likewise bounded by O( (γ⁴/ε*²) log T ). We can thus bound the number of comparisons accumulated by any bandit at the end of round l by O(D_{l,γ,T}), where

D_{l,γ,T} ≡ min{ γ⁴/ε*² , γ⁶/ε_l² } log T,    (15)

and ε_l = ε_{1,k}. In expectation, the fraction of the comparisons removed at the end of round l (those that involve b_j) is at most

1/|W_l| + (1/|W_l|)(1 − 1/|W_l|) < 2/|W_l|.    (16)

Applying Hoeffding's inequality yields that the number of removed comparisons is with high probability O(D_{l,γ,T}).

Proof of Theorem 1. Lemma 2 bounds the mistake probability by 1/T. We focus here on the case where Beat-the-Mean is mistake-free. We will show that, at the end of each round l, the regret incurred from the removed comparisons is bounded with high probability by O(γε_l D_{l,γ,T}), for D_{l,γ,T} defined in (15).
Since this happens K − 1 times (once per round), and this accounts for the regret of every comparison, the total accumulated regret of Beat-the-Mean is bounded with high probability by

O( Σ_{l=1}^{K−1} γε_l D_{l,γ,T} ) = O( Σ_{l=1}^{K−1} min{ γ⁵ε_l/ε*² , γ⁷/ε_l } log T ).

By Lemma 4, the number of removed comparisons in round l is at most O(D_{l,γ,T}). The incurred regret of a comparison between any b_i and the removed bandit b_j is (ε_{1,i} + ε_{1,j})/2 ≤ γε_l, which follows from relaxed transitivity. Thus, the regret incurred from all removed comparisons of round l is at most O(γε_l D_{l,γ,T}).

Proof of Corollary 1. We know from Theorem 1 that the regret assigned to the removed bandit in each round l is O( (γ⁷/ε_l) log T ), where ε_l = ε_{1,k} if b_k is the worst bandit in W_l. By the pigeonhole principle b_{K+1−l} ⪰ b_k, since in round l there are K + 1 − l bandits. By relaxed transitivity, we have ε_{1,K+1−l} ≤ γε_{1,k}. The desired result naturally follows.

A.2. PAC Setting

We assume here that Beat-the-Mean is run according to Alg. 3. We define c_n ≡ c_{δ,γ}(n) using (7).

Lemma 5. Assuming that b_1 ∈ W_l, the probability of b_i ∈ W_l \ {b_1} defeating b_1 at the end of round l (i.e., a mistake) is at most δ/K³.

Proof. (Sketch.) This proof is structurally identical to that of Lemma 1, except using a different confidence interval. We consider three analogous cases:

(a) E[P̂_{i,n}] ≤ E[P̂_{1,n}]
(b) E[P̂_{i,n}] > E[P̂_{1,n}] and n < (4/ε²_{1,k}) log(K³N/δ)
(c) E[P̂_{i,n}] > E[P̂_{1,n}] and n ≥ (4/ε²_{1,k}) log(K³N/δ)

In case (a), applying Hoeffding's inequality yields P(P̂_{1,n} + c_n < P̂_{i,n} − c_n) ≤ (δ/(K³N))² < δ/(K⁵N). In case (b), following (10) and (11), we have

P(P̂_{i,n} − c_n > P̂_{1,n} + c_n) ≤ P(P̂_{i,k,n} − E[P̂_{i,k,n}] > (2/3)c_n)
  ≤ exp(−2γ⁴ log(K³N/δ)) ≤ (δ/(K³N))² < δ/(K⁵N).
In case (c), following (12) and using K ≥ 2, we have

P(P̂_{1,n} ≤ P̂_{k,n}) ≤ exp(−nε²_{1,k}/2) ≤ exp(−2 log(K³N/δ)) = (δ/(K³N))² < δ/(K⁵N).

So the probability of each failure case is at most δ/(K⁵N). There are at most K²N time steps, so applying the union bound proves the claim.

Lemma 6. Beat-the-Mean makes a mistake with probability at most δ/K.

The proof of Lemma 6 is exactly the same as that of Lemma 2, except leveraging Lemma 5 instead of Lemma 1.

Lemma 7. If a mistake-free execution of Beat-the-Mean terminates due to n* = N in round l, then P(∃ b_j ∈ W_l : ε_{1,j} > ε) ≤ δ/K.

Proof. Suppose Beat-the-Mean terminates due to n* = N after round l and t total comparisons. For any b_i ∈ W_l, we know via Hoeffding's inequality that

P( |P̂_{i,N} − E[P̂_{i,N}]| ≥ c_N ) ≤ 2 exp(−18γ⁴ log(K³N/δ)),

which is at most δ/(K⁴N²). There are at most K²N comparisons, so taking the union bound over all t and b_i yields δ/K. So with probability at least 1 − δ/K, each P̂_i is within c_N of its expectation when Beat-the-Mean terminates. Using (8), this implies

∀b_j ∈ W_l : E[P̂_1] ≥ E[P̂_j] − 2c_N ≥ E[P̂_j] − ε/γ.

In particular, since no bandit was defeated when the algorithm terminated, for the worst bandit b_k in W_l we have

ε/γ ≥ E[P̂_1] − E[P̂_k] ≥ ε_{1,k},    (17)

where the last inequality follows from Observation 1. Since γε_{1,k} ≥ ε_{1,j} for any b_j ∈ W_l, (17) implies ε ≥ ε_{1,j}.

Proof of Theorem 2. We first analyze correctness. We consider two sufficient failure cases:

(a) Beat-the-Mean makes a mistake
(b) Beat-the-Mean is mistake-free and there exists an active bandit b_j upon termination with ε_{1,j} > ε

Lemma 6 implies that the probability of (a) is at most δ/K. If Beat-the-Mean terminates due to n* = N, then Lemma 7 implies that the probability of (b) is also at most δ/K. By the union bound, the total probability of (a) or (b) is bounded by 2δ/K ≤ δ, since K ≥ 2.

We now analyze the sample complexity. Let L denote the round when the termination condition n* = N is satisfied (L = K − 1 if the condition was never satisfied). For rounds 1, ..., L − 1, the maximum number of unremoved comparisons accumulated by any bandit is N.
Using the same argument as for (16), the number of comparisons removed after each round is then with high probability O(N). In round L, each of the K − L + 1 remaining bandits accumulates at most N comparisons, implying that the total number of comparisons made is bounded by

O( (K − L + 1)N + (L − 1)N ) = O( (Kγ⁶/ε²) log(KN/δ) ).

References

Agarwal, Alekh, Dekel, Ofer, and Xiao, Lin. Optimal algorithms for online convex optimization with multi-point bandit feedback. In Conference on Learning Theory (COLT), 2010.

Auer, Peter, Cesa-Bianchi, Nicolò, Freund, Yoav, and Schapire, Robert. The nonstochastic multiarmed bandit problem. SIAM Journal on Computing, 32(1):48-77, 2002.

Even-Dar, Eyal, Mannor, Shie, and Mansour, Yishay. Action elimination and stopping conditions for the multi-armed bandit and reinforcement learning problems. Journal of Machine Learning Research (JMLR), 7:1079-1105, 2006.

Feige, Uriel, Raghavan, Prabhakar, Peleg, David, and Upfal, Eli. Computing with noisy information. SIAM Journal on Computing, 23(5), 1994.

Kalyanakrishnan, Shivaram and Stone, Peter. Efficient selection of multiple bandit arms: Theory and practice. In International Conference on Machine Learning (ICML), 2010.

Lai, T. L. and Robbins, Herbert. Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics, 6:4-22, 1985.

Mannor, Shie and Tsitsiklis, John N. The sample complexity of exploration in the multi-armed bandit problem. Journal of Machine Learning Research (JMLR), 5:623-648, 2004.

Radlinski, Filip and Joachims, Thorsten. Active exploration for learning rankings from clickthrough data. In ACM Conference on Knowledge Discovery and Data Mining (KDD), 2007.

Radlinski, Filip, Kurup, Madhu, and Joachims, Thorsten. How does clickthrough data reflect retrieval quality? In ACM Conference on Information and Knowledge Management (CIKM), 2008.

Yue, Yisong and Joachims, Thorsten. Interactively optimizing information retrieval systems as a dueling bandits problem. In International Conference on Machine Learning (ICML), 2009.
Yue, Yisong, Broder, Josef, Kleinberg, Robert, and Joachims, Thorsten. The K-armed dueling bandits problem. In Conference on Learning Theory (COLT), 2009.