
The `Russian Roulette' problem: a probabilistic variant of the Josephus Problem

Joshua Brulé

2013 May 2

1 Introduction

The classic Josephus problem describes $n$ people arranged in a circle, such that every $m$-th person is removed going around the circle until only one remains. The problem is to find the position $J_{n,m} \in \{1, 2, \ldots, n\}$ at which one should stand in order to be the survivor. [1]

The `Russian Roulette' problem considers $n$ people arranged in a circle, with a (biased) coin passed from person to person. Upon receiving the coin, the $k$-th player flips the coin and is eliminated if and only if it comes up heads. The coin is then passed to the $(k+1)$-th person, continuing until only one person - the survivor - remains. Note that as long as the probability of elimination is non-zero, the game will almost surely terminate.

To give the problem richer structure, we allow the probability of elimination to depend on the number of players still in the game (i.e. $p_k \in (0, 1]$ is the probability of elimination with $k$ players remaining in the game).

1.1 Problem Definition

Given $n$ players and a list of elimination probabilities $P = \{p_n, p_{n-1}, \ldots, p_2\}$, find the probability $R_{n,k}(P)$ that the $k$-th player will be the survivor.
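The experiments in Section 3 verify the analytical results by direct simulation of the game. As a minimal sketch of how such a simulation might look (the function name, data layout, and example parameters below are illustrative assumptions, not taken from the paper):

import random

def simulate_game(probs, n):
    # Play one game with n players seated in a circle; probs[m] is the
    # elimination probability when m players remain. Returns the original
    # (1-based) seat number of the survivor.
    players = list(range(1, n + 1))
    turn = 0                                   # index of the player holding the coin
    while len(players) > 1:
        m = len(players)
        if random.random() < probs[m]:         # heads: current player is eliminated
            players.pop(turn)
            turn %= len(players)               # coin passes to the next player in the circle
        else:                                  # tails: pass the coin along
            turn = (turn + 1) % m
    return players[0]

# estimate each player's survival probability for n = 3 with p_3 = p_2 = 0.5
wins = [0] * 4
for _ in range(10000):
    wins[simulate_game([None, None, 0.5, 0.5], 3)] += 1
print([w / 10000 for w in wins[1:]])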

2 Solution as a Recurrence Relation

Let the list of elimination probabilities be $P = \{p_n, p_{n-1}, \ldots, p_2\}$ and, for convenience, let $q_n = 1 - p_n$. Let $L_{n,k}(P)$ be the probability that the $k$-th player in an $n$-player game will not be the survivor. (For conciseness, the list $P$ is implied in the following calculations.)

Clearly, $L_{1,1} = 0$ (the case of only one player, who wins by default). In addition, $L_{i,0} = 1$ for all $i \geq 1$ (`player 0' represents someone who has already been eliminated, thus the probability of them not being the survivor is 1). Then,

$$L_{n,k} = \frac{p_n}{1 - q_n^n} \sum_{i=1}^{n} q_n^{i-1} L_{n-1,\,(k-i) \bmod n}$$

or equivalently,

$$L_{n,k} = \frac{p_n}{1 - (1 - p_n)^n} \sum_{i=1}^{n} (1 - p_n)^{i-1} L_{n-1,\,(k-i) \bmod n}$$

Proof. Suppose that the $i$-th player is eliminated in an $n$-player game:

- For $k > i$, the $k$-th player becomes player $k - i \equiv (k - i) \bmod n$ in a game with $n - 1$ players and elimination probability $p_{n-1}$.
- For $k = i$, the $k$-th player is eliminated, and has probability 1 of not surviving the remaining rounds, regardless of their outcome. By defining $L_{n,0} = 1$, we can consider the eliminated player as the `0-th' player in an $(n-1)$-player game.
- For $k < i$, the $k$-th player becomes player $n - i + k \equiv (k - i) \bmod n$ in a game with $n - 1$ players and elimination probability $p_{n-1}$.

Finally, in the case of $n$ consecutive non-eliminations, the probability of each player's survival is unchanged (we `made a complete circle' in which no one was eliminated). Thus:

$$L_{n,k} = p_n L_{n-1,\,(k-1)\bmod n} + q_n \Big( p_n L_{n-1,\,(k-2)\bmod n} + q_n \big( \cdots q_n \left( p_n L_{n-1,\,(k-n)\bmod n} + q_n L_{n,k} \right) \cdots \big) \Big)$$

$$= p_n \left( q_n^0 L_{n-1,\,(k-1)\bmod n} + q_n^1 L_{n-1,\,(k-2)\bmod n} + \cdots + q_n^{n-1} L_{n-1,\,(k-n)\bmod n} \right) + q_n^n L_{n,k}$$

Solving for $L_{n,k}$,

$$L_{n,k} = \frac{p_n}{1 - q_n^n} \left( q_n^0 L_{n-1,\,(k-1)\bmod n} + \cdots + q_n^{n-1} L_{n-1,\,(k-n)\bmod n} \right) = \frac{p_n}{1 - q_n^n} \sum_{i=1}^{n} q_n^{i-1} L_{n-1,\,(k-i)\bmod n}$$

Since the game almost surely terminates, the probability of the $k$-th player winning an $n$-player game is simply $1 - L_{n,k}$.

Using dynamic programming, $L_{n,k}$ for all $1 \leq k \leq n$ can be computed in $O(n^3)$ time. An example implementation in Python:

# create a triangular (diagonal half-) matrix for memoization;
# L[n][k] is the probability that the k-th player loses an n-player game
L = [[0 for k in range(0, n + 1)] for n in range(0, max_players + 1)]

# base cases
for i in range(1, max_players + 1):
    L[i][0] = 1          # an already-eliminated player is never the survivor
L[1][1] = 0              # a single player wins by default
L[0][0] = None           # unused

# compute the probability of losing
for n in range(2, max_players + 1):
    for k in range(1, n + 1):
        total = 0
        for i in range(1, n + 1):
            total += (1 - prob[n]) ** (i - 1) * L[n - 1][(k - i) % n]
        L[n][k] = prob[n] / (1 - (1 - prob[n]) ** n) * total

where prob[] is an array of the elimination probabilities (prob[i] = $p_i \in (0, 1]$), or symbolic variables if an appropriate library is available, and max_players is the largest number of players of interest. (The experiments described below were all conducted in Sage [2], which provides a symbolic computing environment.)
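For reference, a self-contained variant of the fragment above, wrapped in a function and checked against the two-player closed form derived in Section 3.1 below (the function name, the exact-arithmetic Fraction type, and the test value $p_2 = 1/3$ are my own illustrative choices):

from fractions import Fraction

def losing_probabilities(prob, max_players):
    # L[n][k] is the probability that the k-th player is not the survivor
    # of an n-player game; prob[n] is the elimination probability with n
    # players remaining (entries 0 and 1 are unused).
    L = [[0] * (n + 1) for n in range(max_players + 1)]
    for i in range(1, max_players + 1):
        L[i][0] = 1                            # an eliminated player never survives
    for n in range(2, max_players + 1):
        for k in range(1, n + 1):
            total = sum((1 - prob[n]) ** (i - 1) * L[n - 1][(k - i) % n]
                        for i in range(1, n + 1))
            L[n][k] = prob[n] / (1 - (1 - prob[n]) ** n) * total
    return L

# sanity check against L_{2,1} = 1/(2 - p_2) and L_{2,2} = (1 - p_2)/(2 - p_2)
p2 = Fraction(1, 3)
L = losing_probabilities([None, None, p2], 2)
assert L[2][1] == 1 / (2 - p2)
assert L[2][2] == (1 - p2) / (2 - p2)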

2.1 Bounds on the complexity of the solution

In the special case of $p_2 = p_3 = \cdots = p_n = p$, $L_{n,k}$ is the ratio of two polynomials in $p$, each of degree at most $\frac{n(n-1)}{2}$.

Proof. Base case ($n = 1$): $L_{1,1} = 0$, which is trivially the ratio of two polynomials, each of degree at most $\frac{1(1-1)}{2} = 0$.

In general,

$$L_{n,k} = \frac{p}{1 - (1-p)^n} \sum_{i=1}^{n} (1-p)^{i-1} L_{n-1,\,(k-i)\bmod n}.$$

Note that $\frac{p}{1 - (1-p)^n} = \frac{1}{\mathrm{poly}(n-1)}$, where $\mathrm{poly}(n-1)$ is a polynomial of degree $n-1$. The summation is the sum of polynomials of degree at most $n-1$, each multiplied by $L_{n-1,j}$ for some $j$, which is, by the inductive hypothesis, the ratio of two polynomials each of degree at most $\frac{(n-1)(n-2)}{2}$. The multiplication of two polynomials of degrees $m_1$ and $m_2$ yields a new polynomial of degree $m_1 + m_2$. Thus, $L_{n,k}$ is the ratio of two polynomials, each of degree at most $(n-1) + \frac{(n-1)(n-2)}{2} = \sum_{i=0}^{n-1} i = \frac{n(n-1)}{2}$.

3 Specific solutions and experimental verification

3.1 n = 2

The case $n = 2$ has a simple solution that matches well with intuition:

$$L_{2,1} = \frac{1}{2 - p_2} \qquad L_{2,2} = \frac{1 - p_2}{2 - p_2}$$

As the probability of elimination tends to 1, the probability that player 1 will not be the survivor also tends to 1. With a probability of elimination near 0, the probability of each player being the survivor is very nearly 1/2. To verify experimentally, a simulation of the game was run and the results are plotted below as the probability of surviving vs. the probability of elimination.
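The closed forms above (and the rational-function structure claimed in Section 2.1) can also be reproduced symbolically. The paper's symbolic computations were done in Sage [2]; the following rough sketch substitutes sympy, and the helper name is mine:

import sympy as sp

def symbolic_losing_probabilities(n_max, p):
    # L[n][k] with every elimination probability equal to the symbol p
    L = [[sp.Integer(0)] * (m + 1) for m in range(n_max + 1)]
    for i in range(1, n_max + 1):
        L[i][0] = sp.Integer(1)
    for n in range(2, n_max + 1):
        for k in range(1, n + 1):
            total = sum((1 - p) ** (i - 1) * L[n - 1][(k - i) % n]
                        for i in range(1, n + 1))
            L[n][k] = sp.cancel(p / (1 - (1 - p) ** n) * total)
    return L

p = sp.symbols('p', positive=True)
L = symbolic_losing_probabilities(3, p)
assert sp.simplify(L[2][1] - 1 / (2 - p)) == 0
assert sp.simplify(L[2][2] - (1 - p) / (2 - p)) == 0
# numerator and denominator degrees of L[3][k]: at most 3(3 - 1)/2 = 3
for k in range(1, 4):
    num, den = sp.fraction(sp.together(L[3][k]))
    print(k, sp.degree(num, p), sp.degree(den, p))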

[Figure: probability of surviving vs. probability of elimination for the n = 2 game. Each point represents the percentage of games lost out of 10000 trials.]

3.2 n = 3

With $n \geq 3$, more interesting results are possible. A demonstrative example, with $p_3$ fixed at 1/2 and the probability of survival for each player (red = 1, blue = 2, green = 3) plotted against $p_2$, is shown below.

[Figure: probability of survival for each player vs. $p_2$, with $p_3 = 1/2$. Each point represents 10000 simulations.]

As can be seen, for certain elimination probabilities, player 1 actually has a better chance of winning than player 2! However, it appears (and indeed, can be proven) that player 3 always has the best probability of winning, regardless of which elimination probabilities are used.

3.3 n > 3

Setting $p_{n-1} = p_n^2$, $p_{n-2} = p_n^3$, $\ldots$ and plotting the probability of survival (red = 1, orange = 2, yellow = 3, ...) against $p_n$ gives the following results:

[Figure: probability of survival for each player vs. $p_n$, with $p_{n-1} = p_n^2$, $p_{n-2} = p_n^3$, $\ldots$]
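A sketch of how elimination-probability lists of this form could be constructed and fed to the dynamic program from Section 2 (the helper name and the value $p_n = 0.3$ are illustrative assumptions; losing_probabilities refers to the sketch given earlier):

def powers_prob_list(p, n):
    # prob[m] = p^(n - m + 1), so prob[n] = p, prob[n-1] = p^2, ..., prob[2] = p^(n-1)
    return [None, None] + [p ** (n - m + 1) for m in range(2, n + 1)]

# survival probabilities for an n = 6 game with p_6 = 0.3
n = 6
L = losing_probabilities(powers_prob_list(0.3, n), n)
print([round(1 - L[n][k], 4) for k in range(1, n + 1)])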

Setting $p_n = p_{n-1} = \cdots = p_2$ and plotting the probability of survival against $p_n$ gives the following results:

[Figure: probability of survival for each player vs. $p_n$, with all elimination probabilities equal.] (Similar behavior is observed for $n < 6$.)

These results (and additional experimentation with various lists of elimination probabilities) are very suggestive and motivate several conjectures.

3.4 Conjectures

- $L_{n,n} \leq L_{n,k}$ for all $k$, $n$, and $p_i$'s.
- Given $p_n = p_{n-1} = \cdots = p_2$, $L_{n,n} \leq L_{n,n-1} \leq \cdots \leq L_{n,1}$ for all $n$ and $p_n$.
- For all permutations $k_1, k_2, \ldots, k_{n-1}$ of $1, 2, \ldots, n-1$, there exist $p_i$'s such that $L_{n,k_1} < L_{n,k_2} < \cdots < L_{n,k_{n-1}}$.

In other words, there always exists a list of elimination probabilities such that sorting the players by $L_{n,k}$ yields every possible permutation of players, excluding player $n$, who always has the lowest probability of losing.

With computer assistance, this last conjecture was directly proven up to $n = 6$ by grid searching the space of elimination probability lists. Every permutation of the players' probabilities of survival was proven to be possible by finding a specific list of elimination probabilities that yields that permutation. Unfortunately, naive grid search is exponential in $n$, and is computationally intractable for $n \geq 7$. A formal proof for all $n$ remains elusive.

4 Acknowledgment

I would like to thank Dr. William Gasarch for his support and guidance while conducting this research.

References

[1] Weisstein, Eric W. "Josephus Problem." MathWorld. http://mathworld.wolfram.com/JosephusProblem.html

[2] William A. Stein et al. Sage Mathematics Software. http://www.sagemath.org

With computer assistance, this last conjecture was directly proven up to n = 6 by grid searching the space of elimination probability lists. Every permutation of players' probability of survival was proven to be possible by nding a specic list of elimination probabilities that yields that permutation. Unfortunately, naive grid search is exponential in n, and is computationally intractable for n 7. A formal proof for all n remains elusive. 4 Acknowledgment I would like to thank Dr. William Gasarch for his support and guidance while conducting this research. References [1] Weisstein, Eric W. "Josephus Problem." MathWorld http://mathworld. wolfram.com/josephusproblem.html [2] William A. Stein et al. Sage Mathematics Software http://www. sagemath.org 9