# Lecture 3: UCB Algorithm, Worst-Case Regret Bound


IEOR: Learning and Optimization for Sequential Decision Making, 01/7/16. Instructor: Shipra Agrawal. Scribe: Karl Stratos (Jang Sun Lee).

## 1 UCB

### 1.1 Algorithm

The mechanics of the upper confidence bound (UCB) algorithm are simple. At each round, we pull the arm that has the highest empirical reward estimate up to that point, plus a bonus term that shrinks with the number of times the arm has been played. More formally, let $N$ be the number of arms, and define $n_{i,t}$ to be the number of times arm $i$ has been played up to time $t$. Define $r_t \in [0,1]$ to be the reward we observe at time $t$, and $I_t \in \{1 \ldots N\}$ to be the choice of arm at time $t$. Then the empirical reward estimate of arm $i$ at time $t$ is

$$\hat{\mu}_{i,t} = \frac{1}{n_{i,t}} \sum_{s = 1, \ldots, t:\ I_s = i} r_s \tag{1}$$

UCB assigns the following value to each arm $i$ at each time $t$:

$$\mathrm{UCB}_{i,t} := \hat{\mu}_{i,t} + \sqrt{\frac{2 \ln t}{n_{i,t}}}$$

The UCB algorithm is given below:

**UCB.** Input: $N$ arms, number of rounds $T$.

1. For $t = 1 \ldots N$, play arm $t$.
2. For $t = N+1 \ldots T$, play arm $I_t = \arg\max_{i \in \{1 \ldots N\}} \mathrm{UCB}_{i,t-1}$.

Note that we are assuming, at least in this formulation, that we will play for at least $N$ rounds. Also, we are implicitly updating our empirical estimate (1) whenever we play an arm. Observe that at time $t$ the algorithm uses $\mathrm{UCB}_{i,t-1}$, which can be computed using observations made until time $t-1$.

At an intuitive level, the additional term $\sqrt{2\ln t / n_{i,t}}$ helps us avoid always playing the same arm without checking out other arms: as $n_{i,t}$ increases, the bonus in $\mathrm{UCB}_{i,t}$ decreases. Take the 2-arm example: arm 1 with a fixed reward $0.5$, and arm 2 with a 0-1 reward following a Bernoulli distribution with mean $0.5$. Recall that the greedy strategy (i.e., selecting $\arg\max_{i \in \{1 \ldots N\}} \hat{\mu}_{i,t}$) incurs linear regret $R(T) = \Omega(T)$ with constant probability: with probability $0.5$, arm 2 yields reward $0$ on its first pull, upon which we will always select arm 1 and never revisit arm 2. If we track the UCB values in this situation, we see that we don't have this problem.

- $t = 1$: Arm 1 is played: $\hat{\mu}_{1,1} = 0.5$.
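The algorithm above can be sketched in a few lines of Python. This is a minimal sketch, not code from the original notes; `pull` is a hypothetical callback that returns a reward in $[0,1]$ for the chosen arm.

```python
import math

def ucb(pull, n_arms, horizon):
    """UCB: play each arm once, then play the arm maximizing
    mu_hat_i + sqrt(2 ln t / n_i). `pull(i)` returns a reward in [0, 1]."""
    counts = [0] * n_arms   # n_{i,t}: number of pulls of arm i so far
    sums = [0.0] * n_arms   # cumulative observed reward of arm i
    for t in range(1, horizon + 1):
        if t <= n_arms:
            i = t - 1       # initialization: play arm t at round t
        else:               # maximize UCB_{i,t-1}, computed from rounds < t
            i = max(range(n_arms),
                    key=lambda j: sums[j] / counts[j]
                    + math.sqrt(2 * math.log(t - 1) / counts[j]))
        counts[i] += 1
        sums[i] += pull(i)
    return counts

# Deterministic toy instance: arm 0 always pays 0.8, arm 1 always pays 0.3.
counts = ucb(lambda i: (0.8, 0.3)[i], n_arms=2, horizon=1000)
```

On this toy instance the suboptimal arm is sampled only a logarithmic number of times: `counts[1]` stays small while `counts[0]` absorbs almost all of the 1000 pulls.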

- $t = 2$: Arm 2 is played: $\hat{\mu}_{2,2} = 0$ (with probability $0.5$ this occurs).
- $t = 3$: Arm 1 is played, because $\mathrm{UCB}_{1,2} = 0.5 + \sqrt{2\ln 2} > \mathrm{UCB}_{2,2} = 0 + \sqrt{2\ln 2}$.
- $t = 4, 5, \ldots$: Arm 2 is not abandoned. Its bonus $\sqrt{2\ln t}$ grows while $n_{2,t}$ stays at $1$, and arm 1's bonus shrinks as $n_{1,t}$ grows, so within a few rounds (here at $t = 5$) we get $\mathrm{UCB}_{1,t-1} = 0.5 + \sqrt{2\ln(t-1)/n_{1,t-1}} < \mathrm{UCB}_{2,t-1} = 0 + \sqrt{2\ln(t-1)}$ and arm 2 is played again.

### 1.2 Instance-Dependent Regret Analysis

But there is a more fundamental reason for the choice of the term $\sqrt{2\ln t/n_{i,t}}$: it is a high-confidence upper bound on the empirical error of $\hat{\mu}_{i,t}$. Specifically, for each arm $i$ at time $t$, we must have¹

$$\left| \hat{\mu}_{i,t} - \mu_i \right| < \sqrt{\frac{2 \ln t}{n_{i,t}}} \tag{2}$$

with probability at least $1 - 2/t^4$. Let $\mu^* := \max_i \mu_i$ denote the optimal mean reward, $i^*$ an optimal arm, and $\Delta_i := \mu^* - \mu_i$ the gap of arm $i$. There are two useful bounds we can immediately take from (2):

1. A lower bound for $\mathrm{UCB}_{i,t}$. With probability at least $1 - 2/t^4$,
$$\mathrm{UCB}_{i,t} > \mu_i \tag{3}$$
2. An upper bound for $\hat{\mu}_{i,t}$ with many samples. Given that $n_{i,t} \ge \frac{8 \ln t}{\Delta_i^2}$, with probability at least $1 - 2/t^4$,
$$\hat{\mu}_{i,t} < \mu_i + \frac{\Delta_i}{2} \tag{4}$$

(3) states that the UCB value is probably at least as large as the true reward: in this sense, the UCB algorithm is optimistic. (4) states that given enough samples (specifically, at least $8\ln t/\Delta_i^2$ of them), the reward estimate probably doesn't exceed the true reward by more than $\Delta_i/2$. These bounds can be used to show that UCB quickly figures out a suboptimal arm:

**Lemma 1.1.** At any point $t$, if a suboptimal arm $i$ (i.e., $\mu_i < \mu^*$) has been played $n_{i,t} \ge \frac{8 \ln t}{\Delta_i^2}$ times, then $\mathrm{UCB}_{i,t} < \mathrm{UCB}_{i^*,t}$ with probability at least $1 - 4/t^4$. Therefore, for any such $t$, $I_{t+1} \ne i$ with probability at least $1 - 4/t^4$.

¹This is derived from the Chernoff/Hoeffding bound, which states that for iid samples $x_1, \ldots, x_n \in [0,1]$ with $E[x_i] = \mu$, $P\left( \left| \frac{1}{n}\sum_{i=1}^n x_i - \mu \right| \ge \delta \right) \le 2e^{-2n\delta^2}$. Since at time $t$ we have $n_{i,t}$ iid samples of the reward of arm $i$, we can apply this bound with $\delta = \sqrt{2\ln t/n_{i,t}}$ and get (2).
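The two-arm trace can be checked numerically. This is a small sketch (not from the original notes) assuming the $\sqrt{2\ln t/n}$ bonus defined above; with that constant, arm 2 is revisited at round 5.

```python
import math

def ucb_value(mu_hat, n, t):
    # UCB_{i,t} = mu_hat_{i,t} + sqrt(2 ln t / n_{i,t})
    return mu_hat + math.sqrt(2 * math.log(t) / n)

# Worst case for greedy: arm 2 (Bernoulli(0.5)) paid 0 on its initial pull,
# so mu_hat_2 stays frozen at 0 until arm 2 is pulled again.
assert ucb_value(0.5, 1, 2) > ucb_value(0.0, 1, 2)   # t = 3: arm 1 wins

# Greedy would never revisit arm 2; under UCB its bonus sqrt(2 ln t / 1)
# grows with t while arm 1's bonus shrinks as n_1 grows.
t = 3
while ucb_value(0.5, t - 2, t - 1) > ucb_value(0.0, 1, t - 1):
    t += 1   # arm 1 is played again at round t
print(t)     # first round at which arm 2 is played again -> 5
```

The same loop shows why greedy fails: dropping the bonus term makes the comparison $0.5 > 0$ hold forever, so arm 2 would never be tried again.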

*Proof.* If both (3) and (4) hold,

$$\mathrm{UCB}_{i,t} = \hat{\mu}_{i,t} + \sqrt{\frac{2\ln t}{n_{i,t}}} \le \hat{\mu}_{i,t} + \frac{\Delta_i}{2} \qquad \text{since } n_{i,t} \ge \tfrac{8\ln t}{\Delta_i^2}$$

$$< \mu_i + \frac{\Delta_i}{2} + \frac{\Delta_i}{2} \qquad \text{by (4)}$$

$$= \mu^* \qquad \text{since } \Delta_i := \mu^* - \mu_i$$

$$< \mathrm{UCB}_{i^*,t} \qquad \text{by (3)}$$

The probability of (3) or (4) not holding is at most $4/t^4$ by the union bound. $\blacksquare$

The lemma is useful because once $\mathrm{UCB}_{i,t} < \mathrm{UCB}_{i^*,t}$, we will stop playing arm $i$ and prevent it from causing further regret. This is formalized in the following bound on the expected number of pulls of a suboptimal arm $i$.

**Lemma 1.2.** Let $n_{i,T}$ be the number of times arm $i$ is pulled when the UCB algorithm is run for $T$ rounds on instance $\Theta = \{(\nu_1, \mu_1), \ldots, (\nu_N, \mu_N)\}$ of the stochastic iid multi-armed bandit problem. Then, for any arm $i$ with $\mu_i < \mu^*$,

$$E[n_{i,T}] \le \frac{8 \ln T}{\Delta_i^2} + c$$

where $c$ is a small absolute constant ($c = 7$ suffices).

*Proof.* Let $\mathbf{1}(A)$ denote the indicator of an event $A$, i.e., $\mathbf{1}(A)$ is $1$ if event $A$ is true and $0$ otherwise. Let $L := \lceil 8 \ln T / \Delta_i^2 \rceil$. For any suboptimal arm $i$, the expected number of times it is played up to round $T$ under UCB is

$$E[n_{i,T}] = 1 + E\left[\sum_{t=N}^{T-1} \mathbf{1}(I_{t+1} = i)\right]$$

$$= 1 + E\left[\sum_{t=N}^{T-1} \mathbf{1}(I_{t+1} = i,\ n_{i,t} < L)\right] + E\left[\sum_{t=N}^{T-1} \mathbf{1}(I_{t+1} = i,\ n_{i,t} \ge L)\right]$$

$$\le 1 + (L - 1) + \sum_{t=N}^{T-1} P(I_{t+1} = i \mid n_{i,t} \ge L)\, P(n_{i,t} \ge L)$$

$$\le L + \sum_{t=1}^{\infty} \frac{4}{t^4} \le \frac{8 \ln T}{\Delta_i^2} + 7$$

The $1$ was added in the first equality to account for the one initial pull of every arm by the algorithm. For the first inequality, suppose for contradiction that the indicator $\mathbf{1}(I_{t+1} = i,\ n_{i,t} < L)$ takes value $1$ at more than $L - 1$ time steps. Let $\tau$ be the time step at which this indicator is $1$ for the $(L-1)$th time. Then arm $i$ has been pulled at least $L$ times by the end of round $\tau + 1$ (the $L - 1$ pulls counted by the indicator plus the one initial pull), so $n_{i,t} \ge L$ for all $t > \tau$.
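Two quick numeric sanity checks on the quantities used here (a sketch, not part of the original notes): once $n \ge 8\ln t/\Delta^2$, the confidence width $\sqrt{2\ln t/n}$ indeed drops to at most $\Delta/2$, and the union-bound failure probabilities $4/t^4$ sum to a small constant, which is where the additive constant in Lemma 1.2 comes from.

```python
import math

# Condition of (4): n >= 8 ln t / Delta^2 forces the confidence width
# sqrt(2 ln t / n) down to at most Delta / 2.
for t in (10, 1000, 10**6):
    for delta in (0.05, 0.1, 0.5, 1.0):
        n = math.ceil(8 * math.log(t) / delta**2)
        assert math.sqrt(2 * math.log(t) / n) <= delta / 2 + 1e-12

# The failure terms 4 / t^4 have a convergent sum, so bad rounds
# contribute only O(1) pulls in expectation over the whole horizon.
tail = sum(4 / t**4 for t in range(1, 10**5))
assert tail < 4.4   # sum_{t>=1} 4/t^4 = 4 * pi^4 / 90 ~= 4.33
```

The ceiling in `n` mirrors the integer pull count $L$ in the proof; without it, `n` would be fractional and the width comparison meaningless.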

Since $t \le T$ implies $\ln t \le \ln T$, this also gives $n_{i,t} \ge L \ge \frac{8 \ln T}{\Delta_i^2} \ge \frac{8 \ln t}{\Delta_i^2}$ for all $t > \tau$. Thus the indicator cannot be $1$ for any $t > \tau$, contradicting the assumption that the indicator takes value $1$ more than $L - 1$ times. This bounds $E\left[\sum_{t=N}^{T-1} \mathbf{1}(I_{t+1} = i,\ n_{i,t} < L)\right]$ by $L - 1$. For the second inequality, we use Lemma 1.1 to bound the conditional probability term $P(I_{t+1} = i \mid n_{i,t} \ge L)$ by $4/t^4$, and the fact that probabilities are at most $1$ to bound the second probability term $P(n_{i,t} \ge L)$. $\blacksquare$

**Theorem 1.3.** Let $R(T, \Theta)$ denote the regret of the UCB algorithm over $T$ rounds on instance $\Theta = \{(\nu_1, \mu_1), \ldots, (\nu_N, \mu_N)\}$ of the stochastic iid multi-armed bandit problem. For all instances $\Theta$, and all $T$, the expected regret of the UCB algorithm is bounded as

$$E[R(T, \Theta)] \le \sum_{i:\ \mu_i < \mu^*} \left( \frac{8 \ln T}{\Delta_i} + c\,\Delta_i \right)$$

where $\Delta_i = \mu^* - \mu_i$ and $c$ is the constant from Lemma 1.2.

*Proof.* Using the previous lemma, the expected total regret up to round $T$ is

$$E[R(T, \Theta)] = \sum_{i:\ \mu_i < \mu^*} E[n_{i,T}]\,\Delta_i \le \sum_{i:\ \mu_i < \mu^*} \left( \frac{8 \ln T}{\Delta_i^2} + c \right) \Delta_i = \sum_{i:\ \mu_i < \mu^*} \left( \frac{8 \ln T}{\Delta_i} + c\,\Delta_i \right). \qquad \blacksquare$$

### 1.3 Instance-Independent Regret Analysis

Theorem 1.3 gives an upper bound on $E[R(T, \Theta)]$ that is logarithmic in $T$. This is of an optimal form: recall from the last lecture that any reasonable algorithm must suffer $\Omega(\ln T)$ expected total regret, no matter what instance $\Theta$ it is given. However, note that the bound in Theorem 1.3 depends on the specific instance of arms, parametrized by the gaps $\Delta_i$. Such bounds are called instance-dependent (or problem-dependent) bounds. This bound does not directly imply a very good worst-case bound: for an instance with $\Delta_i = \ln T / T$, the bound is linear in $T$, which is as bad as the naive $\epsilon$-greedy algorithm. But a simple trick can be applied to Theorem 1.3 to obtain the following instance-independent (aka problem-independent, or worst-case) regret bound.

**Theorem 1.4.** For all $T$, the expected total regret achieved by the UCB algorithm over $T$ rounds is

$$E[R(T)] \le 9\sqrt{NT \ln T} + cN = O\!\left(\sqrt{NT \ln T}\right)$$

*Proof.* For analysis purposes only, divide the arms into two groups:

1. Group 1 contains the almost-optimal arms with $\Delta_i < \sqrt{N \ln T / T}$.
2. Group 2 contains the arms with $\Delta_i \ge \sqrt{N \ln T / T}$.

The total regret is the sum of the regret incurred by each group. The maximum total regret incurred due to pulling arms in Group 1 is bounded by

$$\sum_{i \in \text{Group 1}} n_{i,T}\,\Delta_i \le \sqrt{\frac{N \ln T}{T}} \sum_{i \in \text{Group 1}} n_{i,T} \le \sqrt{\frac{N \ln T}{T}} \cdot T = \sqrt{NT \ln T}$$
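The failure mode of the instance-dependent bound can be made concrete numerically (a quick sketch; the constants 8 and 9 follow the bounds in this section, and $c$ is dropped for simplicity).

```python
import math

T, N = 10**8, 10
# A hard instance: every suboptimal gap equals ln T / T.
delta = math.log(T) / T
# Instance-dependent bound (shape of Theorem 1.3): sum of 8 ln T / Delta_i,
# which simplifies to 8 (N - 1) T for this choice of gaps.
instance_bound = (N - 1) * 8 * math.log(T) / delta
# Worst-case bound (shape of Theorem 1.4).
worst_case_bound = 9 * math.sqrt(N * T * math.log(T))

assert instance_bound > T          # vacuous: regret can never exceed T
assert worst_case_bound < T / 10   # sublinear, still meaningful
```

Since rewards lie in $[0,1]$, regret is trivially at most $T$, so a bound exceeding $T$ carries no information; the square-root bound remains far below that ceiling.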

where we used that $\Delta_i \le \sqrt{N \ln T / T}$ for all $i$ in Group 1, and the trivial bound $\sum_i n_{i,T} \le T$ on the total number of pulls. Next, we apply Lemma 1.2 to every arm in Group 2 to bound its expected regret by

$$\sum_{i \in \text{Group 2}} E[n_{i,T}]\,\Delta_i \le \sum_{i \in \text{Group 2}} \left( \frac{8 \ln T}{\Delta_i} + c\,\Delta_i \right) \le N \cdot 8 \ln T \sqrt{\frac{T}{N \ln T}} + cN = 8\sqrt{NT \ln T} + cN$$

where in the second inequality we used that for all $i \in$ Group 2, $\Delta_i \ge \sqrt{N \ln T / T}$ (so that $8 \ln T / \Delta_i \le 8\sqrt{T \ln T / N}$) and $\Delta_i \le 1$. Summing the two inequalities gives the desired result. $\blacksquare$
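The two-group argument can be packaged as a small regret-bound calculator (a sketch under the notation above; `c` is the additive constant from Lemma 1.2, taken as 7 here) and checked against the worst-case bound for a few gap profiles.

```python
import math

def regret_bound(gaps, T, c=7):
    """Regret bound via the proof's group split: arms with gap below
    sqrt(N ln T / T) contribute at most sqrt(N T ln T) in total; each
    remaining arm contributes 8 ln T / gap + c * gap (Lemma 1.2)."""
    N = len(gaps) + 1   # gaps are listed for the suboptimal arms only
    thresh = math.sqrt(N * math.log(T) / T)
    bound = math.sqrt(N * T * math.log(T)) if any(g < thresh for g in gaps) else 0.0
    bound += sum(8 * math.log(T) / g + c * g for g in gaps if g >= thresh)
    return bound

T, N = 10**5, 26
worst_case = 9 * math.sqrt(N * T * math.log(T)) + 7 * N
profiles = ([0.3] * 25,                        # all gaps large
            [i / 100 for i in range(1, 26)],   # mixed gaps
            [0.001] * 25)                      # all gaps tiny
assert all(regret_bound(g, T) <= worst_case for g in profiles)
```

Whatever the gap profile, the group-split bound never exceeds $9\sqrt{NT\ln T} + cN$, which is exactly the instance-independence claimed by Theorem 1.4.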
