National Sun Yat-Sen University CSE Course: Information Theory. Gambling And Entropy



Gambling And Entropy

Outline
There is a strong relationship between the growth rate of investment in a horse race and the entropy of the horse race. The value of side information is related to the mutual information from the perspective of growth rate. We can use a (gambling) game to estimate the entropy of English.

The Horse Race
p_i is the probability that horse i wins the race. o_i is the payoff for 1 dollar bet on horse i if it wins; odds of a-for-1 and r-to-1 are equivalent if r = a − 1. b_i is the proportion of money bet on horse i. S(X) = b(X)o(X) is the factor by which the gambler's wealth is multiplied when horse X wins a race. S_n = Π_{i=1}^n S(X_i) is the factor by which the gambler's wealth is multiplied after n races.

The Doubling Rate
The doubling rate of a horse race is defined by W(b,p) = E[log S(X)] = Σ_{i=1}^m p_i log(b_i o_i). Here the o_i are given (decided by the bookies). Let the horse race outcomes be i.i.d. random variables. Then the gambler's wealth grows exponentially, since (1/n) log S_n = (1/n) Σ_i log S(X_i) → E[log S(X)] = W(b,p) by the law of large numbers, so S_n ≐ 2^{nW(b,p)}.
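A small numerical sketch of the doubling rate: the race below (probabilities, uniform 3-for-1 odds, proportional bets) is an illustrative assumption, and the simulation checks that (1/n) log2 S_n approaches W(b,p).

```python
import math, random

# Illustrative 3-horse race; all numbers here are assumptions for the sketch.
p = [0.5, 0.25, 0.25]   # win probabilities
o = [3.0, 3.0, 3.0]     # uniform 3-for-1 payoffs
b = [0.5, 0.25, 0.25]   # bet fractions (proportional: b = p)

# Doubling rate W(b,p) = sum_i p_i log2(b_i o_i)
W = sum(pi * math.log2(bi * oi) for pi, bi, oi in zip(p, b, o))

# By the law of large numbers, (1/n) log2 S_n approaches W.
random.seed(1)
n = 200_000
log_S = sum(math.log2(b[i] * o[i])
            for i in random.choices(range(3), weights=p, k=n))
print(W, log_S / n)
```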

The Optimal Doubling Rate
Given p and o, the optimal doubling rate is the maximum doubling rate over the choice of b: W*(p) = max_b W(b,p). The optimal doubling rate is W*(p) = Σ_i p_i log o_i − H(p), achieved by the proportional gambling b* = p. Note that the entropy and the optimal doubling rate are complementary.

Proof
W(b,p) = Σ p_i log(b_i o_i) = Σ p_i log((b_i/p_i) p_i o_i) = Σ p_i log o_i − H(p) − D(p‖b) ≤ Σ p_i log o_i − H(p), with equality iff b = p. See Example 6.1.1 for an illustration. Note: while it is tempting to bet all the money on the horse i with maximum p_i o_i, that strategy carries the risk of losing everything.
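The inequality above can be checked numerically: since W(b,p) = W*(p) − D(p‖b), no bet vector should beat b = p. The race (p, o) below is an illustrative assumption.

```python
import math, random

# Numerical check that proportional betting b = p maximizes W(b,p).
p = [0.6, 0.3, 0.1]
o = [2.0, 3.0, 9.0]

def W(b):
    return sum(pi * math.log2(bi * oi) for pi, bi, oi in zip(p, b, o))

random.seed(0)
best = W(p)  # doubling rate of proportional betting
for _ in range(10_000):
    x = [random.random() for _ in p]
    s = sum(x)
    assert W([xi / s for xi in x]) <= best + 1e-12  # no random b beats b = p
print(best)
```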

Fair Odds
Odds are fair if Σ_i 1/o_i = 1. Betting according to b_i = 1/o_i guarantees your money back, no more and no less. Suppose the odds o are fair, and define the probability distribution r_i = 1/o_i. For a bet b, W(b,p) = Σ p_i log(b_i o_i) = Σ p_i log((b_i/p_i)(p_i/r_i)) = D(p‖r) − D(p‖b). Therefore, betting is a competition between the gambler's and the bookie's estimates of p.
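The decomposition W(b,p) = D(p‖r) − D(p‖b) is easy to verify numerically; the distributions and fair odds below are illustrative.

```python
import math

# Check W(b,p) = D(p||r) - D(p||b) under fair odds, where r_i = 1/o_i.
p = [0.5, 0.3, 0.2]          # true win probabilities
o = [2.0, 4.0, 4.0]          # fair: 1/2 + 1/4 + 1/4 = 1
r = [1.0 / oi for oi in o]   # bookie's implied distribution
b = [0.4, 0.35, 0.25]        # some bet vector

def D(u, v):  # relative entropy in bits
    return sum(ui * math.log2(ui / vi) for ui, vi in zip(u, v))

W = sum(pi * math.log2(bi * oi) for pi, bi, oi in zip(p, b, o))
print(W, D(p, r) - D(p, b))
```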

Uniform Fair Odds
Uniform fair odds are m-for-1 on each horse. For uniform fair odds, W*(p) + H(p) = log m. The sum does not depend on p!
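Under uniform fair m-for-1 odds, W*(p) = Σ_i p_i log(m p_i), so the conservation law W*(p) + H(p) = log m follows directly; a quick check with an illustrative m and p:

```python
import math

# W*(p) + H(p) = log2 m under uniform fair m-for-1 odds, for any p.
m = 4
p = [0.4, 0.3, 0.2, 0.1]  # illustrative distribution
W_star = sum(pi * math.log2(m * pi) for pi in p)
H = -sum(pi * math.log2(pi) for pi in p)
print(W_star + H, math.log2(m))
```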

Superfair and Subfair Odds
Odds are superfair if Σ_i 1/o_i < 1. In this case one can adopt a Dutch book strategy: bet b_i = 1/o_i on each horse, keep the remaining 1 − Σ_i 1/o_i in cash, and receive 1 + (1 − Σ_i 1/o_i) > 1 whichever horse wins. While risk-free, this does not optimize the doubling rate. Odds are subfair if Σ_i 1/o_i > 1. This is the real-life scenario: the organizer takes a cut of all bets. If you have to bet, it's probably better to leave some cash for the cab.
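A sketch of the Dutch book on illustrative superfair odds: whichever horse wins, the bet on it returns exactly 1, plus the cash kept aside.

```python
# Superfair odds (illustrative): 1/2.5 + 1/4 + 1/8 = 0.775 < 1.
o = [2.5, 4.0, 8.0]
staked = sum(1.0 / oi for oi in o)

# Dutch book: bet 1/o_i on horse i and keep 1 - staked in cash;
# if horse i wins, the bet returns (1/o_i) * o_i = 1 plus the cash.
returns = [(1.0 / oi) * oi + (1.0 - staked) for oi in o]
print(returns)  # the same guaranteed return > 1, whichever horse wins
```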

Side Information
Suppose the gambler has some information relevant to the outcome of the race. Does such information help the gambler win more money? Let X be the outcome of the race and Y be the side information. Without Y, the optimal doubling rate is W*(X) = max_{b(x)} Σ_x p(x) log(b(x)o(x)) = Σ_x p(x) log o(x) − H(X), since we know that proportional gambling is optimal.

Side Information
With Y, the multiplication factor of a race becomes S(X|Y) = b(X|Y)o(X). So the optimal doubling rate is W*(X|Y) = max_{b(x|y)} E[log S] = max_{b(x|y)} Σ_{x,y} p(x,y) log(b(x|y)o(x)) = Σ_y p(y) max_{b(x|y)} Σ_x p(x|y) log(b(x|y)o(x)).

Value of Side Information
Again the optimal bet is proportional to the conditional probability of x given y, for each y. So the optimal doubling rate is W*(X|Y) = Σ_{x,y} p(x,y) log(p(x|y)o(x)) = Σ_x p(x) log o(x) − H(X|Y). Therefore the optimal doubling rate increases with side information Y by W*(X|Y) − W*(X) = H(X) − H(X|Y) = I(X;Y).
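The gain I(X;Y) can be checked on a small example; the joint distribution p(x,y) below and the uniform fair 2-for-1 odds are illustrative assumptions.

```python
import math

# Check W*(X|Y) - W*(X) = I(X;Y) with uniform fair 2-for-1 odds o(x) = 2.
pxy = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}
px = {x: sum(v for (a, _), v in pxy.items() if a == x) for x in (0, 1)}
py = {y: sum(v for (_, c), v in pxy.items() if c == y) for y in (0, 1)}

# Optimal doubling rates under proportional betting.
W_X = sum(px[x] * math.log2(px[x] * 2) for x in px)
W_XgY = sum(v * math.log2((v / py[y]) * 2) for (x, y), v in pxy.items())

# Mutual information computed independently.
I = sum(v * math.log2(v / (px[x] * py[y])) for (x, y), v in pxy.items())
print(W_XgY - W_X, I)
```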

Red and Black
Consider the game of betting on the color of the next card in a deck of 26 black and 26 red cards, at 2-for-1 odds on each card. One can bet sequentially, based on the conditional probability of black and red given the history of cards. One can also bet on the entire sequence at once, placing 1/C(52,26) on each of the C(52,26) possible color sequences, which are equally likely. These two schemes are equivalent. The resulting wealth is 2^52/C(52,26) ≈ 9.08.
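The final wealth is a one-line computation: the realized sequence compounds 2-for-1 odds over 52 cards, so it pays 2^52 times the 1/C(52,26) staked on it.

```python
from math import comb

# Whole-deck bet: 1/C(52,26) of the wealth on each of the C(52,26)
# equally likely color sequences; the winning sequence pays 2^52 times
# its stake.
wealth = 2**52 / comb(52, 26)
print(round(wealth, 2))  # → 9.08
```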

Simulation of English Text
Choose the alphabet. Use a book. For the first token, open the book to a random page and choose a random token on that page. For the second token, start at a random position on a random page and read until the first token is encountered; record the token that follows it. For the third token, again start at a random position on another random page and read until the second token is encountered; record the token that follows it. Continuing this way approximates English with a first-order Markov model. Examples are in pp. 134-135.
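A sketch of the same idea in code: instead of leafing through a book, sample each character from the empirical distribution of characters that follow the previous one. The `corpus` string is a stand-in assumption for the book used on the slide.

```python
import random

# First-order (Markov) text simulation from an empirical corpus.
corpus = "the quick brown fox jumps over the lazy dog " * 50

# Collect, for each character, the characters observed to follow it.
follow = {}
for cur, nxt in zip(corpus, corpus[1:]):
    follow.setdefault(cur, []).append(nxt)

random.seed(0)
text = [random.choice(corpus)]          # first token: a random character
for _ in range(60):
    text.append(random.choice(follow[text[-1]]))  # token after a match
print("".join(text))
```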

The Entropy of Simulated English
As the order becomes higher, the model gets closer to true English. The entropy of the zeroth-order model is log 27 = 4.76 bits; the first-order model, 4.03 bits; the fourth-order model, 2.8 bits. What is the limit? The dependency between tokens can be backward or long-range. Nevertheless, N-gram (Markov) word models do quite well in some applications.

The Entropy of English: Shannon Game
A human subject is given a sample of English text and asked to guess the next letter. The experimenter records the number of guesses required to figure out each next letter. Assume that the subject can be modeled as a computer making a deterministic choice of guesses based on the history. Then, given this machine and the sequence of guess counts, we can reconstruct the English text, so the entropy of the guess sequence equals the entropy of the English text. The entropy of English is estimated to be 1.3 bits.

The Entropy of English: Gambling Estimate
A human subject gambles on the next letter. The payoff is 27-for-1 for all letters and the space. Since sequential betting is equivalent to betting on the entire sequence, the payoff after n letters is S_n(X_1,…,X_n) = 27^n b(X_1,…,X_n). The expected log wealth satisfies E[(1/n) log S_n] = log 27 + (1/n) E[log b(X_1,…,X_n)] = log 27 + (1/n) Σ_{x^n} p(x^n) log b(x^n) ≤ log 27 − (1/n) H(X_1,…,X_n), with equality for proportional betting. So Ĥ = log 27 − (1/n) log S_n is an estimate of the entropy of English (about 1.34 bits in the experiment).
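The estimator can be sketched on a synthetic i.i.d. "language": a proportional gambler (a stand-in for the human subject) bets b = q at 27-for-1 odds, and Ĥ = log2(27) − (1/n) log2 S_n recovers the source entropy. The letter distribution q here is an illustrative assumption, not real English statistics.

```python
import math, random

random.seed(0)
m = 27
q = [1.0 / (i + 1) for i in range(m)]   # illustrative letter distribution
Z = sum(q)
q = [x / Z for x in q]

# Proportional betting at m-for-1 odds: each letter multiplies wealth
# by m * q[letter], so log2 S_n is a sum over the drawn letters.
n = 100_000
letters = random.choices(range(m), weights=q, k=n)
log_S = sum(math.log2(m * q[i]) for i in letters)

H_hat = math.log2(m) - log_S / n        # gambling estimate of entropy
H_true = -sum(x * math.log2(x) for x in q)
print(H_hat, H_true)
```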