# Gambling and Data Compression


## 1 Gambling

### 1.1 Horse Race

**Definition** The wealth relative $S(X) = b(X)o(X)$ is the factor by which the gambler's wealth grows if horse $X$ wins the race, where $b(x)$ is the fraction of the gambler's wealth invested in horse $x$ and $o(x)$ is the corresponding odds.

**Definition** The doubling rate of a horse race is

$$W(b, p) = E[\log S(X)] = \sum_{k=1}^{m} p_k \log b_k o_k.$$

**Theorem** Let the race outcomes $X_1, X_2, \ldots$ be i.i.d. $\sim p(x)$. Then the wealth of the gambler using betting strategy $b$ grows exponentially at rate $W(b, p)$; that is,

$$S_n \doteq 2^{nW(b, p)}.$$

**Definition** The optimum doubling rate $W^*(p)$ is the maximum doubling rate over all choices of the portfolio $b$:

$$W^*(p) = \max_b W(b, p) = \max_{b:\, b_i \ge 0,\ \sum_i b_i = 1} \sum_{i=1}^{m} p_i \log b_i o_i.$$

**Theorem (Proportional gambling is log-optimal)** The optimal doubling rate is given by

$$W^*(p) = \sum_i p_i \log o_i - H(p)$$

and is achieved by the proportional gambling scheme $b^* = p$.

**Theorem (Conservation theorem)** For uniform fair odds,

$$W^*(p) + H(p) = \log m.$$

Thus, the sum of the doubling rate and the entropy is a constant.

If the gambler does not always bet all the money, the optimum strategy may depend on the odds and need not have the simple form of proportional gambling. There are three cases:

1. Fair odds with respect to some distribution: $\sum_i 1/o_i = 1$. By betting $b_i = 1/o_i$, one achieves $S(X) = 1$, which is the same as keeping some cash aside. Proportional betting is optimal.
2. Superfair odds: $\sum_i 1/o_i < 1$. By choosing $b_i = c/o_i$, where $c = 1/\sum_i (1/o_i)$, one has $S(X) = c > 1$ with probability 1. In this case, the gambler will always want to bet all the money, and the optimum strategy is again proportional betting.
3. Subfair odds: $\sum_i 1/o_i > 1$. Proportional gambling is no longer log-optimal. The gambler may want to bet only some of the money and keep the rest aside as cash, depending on the odds.

Based on Cover & Thomas, Chapters 5 and 6.
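The doubling rate and the conservation theorem can be checked numerically. A minimal sketch follows; the three-horse race and its win probabilities are illustrative choices, not from the notes:

```python
import math

def doubling_rate(b, p, o):
    """W(b, p) = sum_k p_k * log2(b_k * o_k), in bits per race."""
    return sum(pk * math.log2(bk * ok) for pk, bk, ok in zip(p, b, o) if pk > 0)

# Three-horse race with uniform fair odds o = 3-for-1 (illustrative numbers).
p = [0.5, 0.25, 0.25]
o = [3.0, 3.0, 3.0]

# Proportional (Kelly) betting b* = p achieves the optimum W*(p).
W_star = doubling_rate(p, p, o)
H_p = -sum(pk * math.log2(pk) for pk in p)

# Conservation theorem for uniform fair odds: W*(p) + H(p) = log2(m).
assert abs(W_star + H_p - math.log2(3)) < 1e-12

# Any other portfolio does worse, e.g. betting uniformly on all horses.
W_uniform = doubling_rate([1/3, 1/3, 1/3], p, o)
print(W_star, W_uniform)
```

Here `W_star` is positive (about 0.085 bits per race) while uniform betting earns nothing, illustrating why the log-optimal gambler matches bets to probabilities.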

### 1.2 Side Information and Entropy Rate

**Definition** The increase $\Delta W$ in doubling rate is defined as $\Delta W = W^*(X \mid Y) - W^*(X)$, where

$$W^*(X) = \max_{b(x)} \sum_x p(x) \log b(x)o(x),$$

$$W^*(X \mid Y) = \max_{b(x \mid y)} \sum_{x, y} p(x, y) \log b(x \mid y)o(x).$$

**Theorem** The increase $\Delta W$ in doubling rate due to side information $Y$ for a horse race $X$ is $\Delta W = I(X; Y)$.

### 1.3 Dependent Horse Races and the Entropy Rate

If the horse races are dependent, suppose that the winning horses form a stochastic process $\{X_k\}$. The optimal doubling rate for uniform fair odds ($m$-for-$1$) is

$$W^*(X_k \mid X_{k-1}, X_{k-2}, \ldots, X_1) = \log m - H(X_k \mid X_{k-1}, X_{k-2}, \ldots, X_1),$$

which is achieved by $b^*(x_k \mid x_{k-1}, \ldots, x_1) = p(x_k \mid x_{k-1}, \ldots, x_1)$. The doubling rate then satisfies

$$\frac{1}{n} E[\log S_n] = \log m - \frac{1}{n} H(X_1, \ldots, X_n).$$

Thus in the limit as $n \to \infty$, the doubling rate is related to the entropy rate $H(\mathcal{X})$ of the process:

$$\lim_{n \to \infty} \frac{1}{n} E[\log S_n] = \log m - H(\mathcal{X}).$$

## 2 Data Compression: Codes and Optimality

### 2.1 Definitions and Examples of Codes

**Definition** A source code $C$ for a random variable $X$ is a mapping from $\mathcal{X}$, the range of $X$, to $\mathcal{D}^*$, the set of finite-length strings of symbols from a $D$-ary alphabet.

**Definition** Let $C(x)$ denote the codeword corresponding to $x$ and let $l(x)$ denote its length. Then the expected length of source code $C$ for a random variable $X$ with pmf $p(x)$ is given by

$$L(C) := E_p[l(X)] = \sum_{x \in \mathcal{X}} p(x) l(x).$$

**Definition** A code is said to be nonsingular if every element of the range of $X$ maps into a different string in $\mathcal{D}^*$; that is, $x \ne x' \Rightarrow C(x) \ne C(x')$.
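The theorem $\Delta W = I(X; Y)$ can be verified on a small example. The joint pmf below is a hypothetical choice for illustration; under uniform fair odds, proportional betting $b(x) = p(x)$ and $b(x \mid y) = p(x \mid y)$ achieves the two optima, so the gain in doubling rate should come out equal to the mutual information:

```python
import math

def entropy(dist):
    """Shannon entropy in bits of a list of probabilities."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

# Hypothetical joint pmf p(x, y): a 2-horse race X with binary side info Y.
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}
m = 2  # uniform fair odds: o(x) = m for every horse

px = {x: joint[(x, 0)] + joint[(x, 1)] for x in (0, 1)}
py = {y: joint[(0, y)] + joint[(1, y)] for y in (0, 1)}

# Doubling rates under proportional betting b(x) = p(x) and b(x|y) = p(x|y).
W_X = sum(px[x] * math.log2(px[x] * m) for x in (0, 1))
W_X_given_Y = sum(
    pxy * math.log2((pxy / py[y]) * m) for (x, y), pxy in joint.items()
)

# The gain from side information equals the mutual information I(X; Y).
delta_W = W_X_given_Y - W_X
I_XY = entropy(px.values()) - sum(
    py[y] * entropy([joint[(x, y)] / py[y] for x in (0, 1)]) for y in (0, 1)
)
print(delta_W, I_XY)
```

With this pmf the marginal of $X$ is uniform, so $W^*(X) = 0$ and the entire doubling rate (about 0.28 bits per race) is purchased by the side information.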

**Definition** The extension $C^*$ of a code $C$ is the mapping from finite-length strings of $\mathcal{X}$ to finite-length strings of $\mathcal{D}$, defined by

$$C(x_1 x_2 \cdots x_n) = C(x_1)C(x_2) \cdots C(x_n),$$

where $C(x_1)C(x_2) \cdots C(x_n)$ indicates concatenation of the corresponding codewords.

**Definition** A code is called uniquely decodable if its extension is nonsingular. One may have to inspect the entire string to decode the first codeword.

**Definition** A code is called a prefix code or an instantaneous code if no codeword is a prefix of any other codeword. An instantaneous code is self-punctuating.

### 2.2 Instantaneous Codes and the Kraft Inequality

**Theorem (Kraft inequality)** For any instantaneous code (prefix code) over an alphabet of size $D$, the codeword lengths $l_1, l_2, \ldots, l_m$ must satisfy the inequality

$$\sum_i D^{-l_i} \le 1.$$

Conversely, given a set of codeword lengths that satisfy this inequality, there exists an instantaneous code with these word lengths.

**Theorem (Extended Kraft inequality)** For any countably infinite set of codewords that form a prefix code, the codeword lengths satisfy the extended Kraft inequality

$$\sum_{i=1}^{\infty} D^{-l_i} \le 1.$$

Conversely, given any $l_1, l_2, \ldots$ satisfying the extended Kraft inequality, we can construct a prefix code with these codeword lengths.

## 3 Data Compression: Optimal Codes and Length Bounds

### 3.1 Optimal Codes

**Definition** A probability distribution is called $D$-adic if each of the probabilities is equal to $D^{-n}$ for some integer $n$.

**Theorem (Lower bound on codeword length)** The expected length $L$ of any instantaneous $D$-ary code for a random variable $X$ is greater than or equal to the base-$D$ entropy $H_D(X)$:

$$L \ge H_D(X),$$

with equality iff the distribution of $X$ is $D$-adic, i.e., $D^{-l_i} = p_i$ for all $i$.
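The constructive direction of the Kraft inequality can be sketched in code: walk down the binary code tree, assigning the leftmost available node at each length. The helper names below are our own, and the construction is binary ($D = 2$) for simplicity:

```python
def kraft_sum(lengths, D=2):
    """Left-hand side of the Kraft inequality: sum_i D^(-l_i)."""
    return sum(D ** (-l) for l in lengths)

def binary_prefix_code(lengths):
    """Assign binary codewords to the given lengths (assumes Kraft sum <= 1)."""
    assert kraft_sum(lengths) <= 1, "no prefix code exists for these lengths"
    words, next_val, prev_len = [], 0, 0
    for l in sorted(lengths):
        next_val <<= (l - prev_len)          # descend to depth l in the tree
        words.append(format(next_val, f"0{l}b"))
        next_val += 1                        # skip the subtree just assigned
        prev_len = l
    return words

words = binary_prefix_code([1, 2, 3, 3])     # Kraft sum is exactly 1
print(words)
```

For lengths (1, 2, 3, 3) this yields the codewords 0, 10, 110, 111, and no codeword is a prefix of another.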

### 3.2 Bounds on the Optimal Code Length

The previous theorem suggests finding the $D$-adic distribution $r$ closest to a given source distribution $p$, and then designing a code for $r$. By minimizing $D(p \| r)$ over $D$-adic $r$, we may exhibit a code (not necessarily optimal) whose length $L$ satisfies the following bound:

**Theorem (Optimal expected codeword length)** Let $l_1^*, l_2^*, \ldots, l_m^*$ be optimal codeword lengths for a source distribution $p$ and a $D$-ary alphabet, and let $L^*$ be the associated expected length of an optimal code ($L^* = \sum_i p_i l_i^*$). Then

$$H_D(X) \le L^* < H_D(X) + 1.$$

Consider sending a sequence of $n$ symbols drawn i.i.d. according to $p(x)$ in a block, so that we have a supersymbol from $\mathcal{X}^n$. Let $L_n$ be the expected codeword length per input symbol:

$$L_n := \frac{1}{n} E[l(X_1, X_2, \ldots, X_n)].$$

Then by letting the block length $n$ become large, we may achieve an expected length per symbol arbitrarily close to the entropy:

**Theorem (Distributing the extra overhead bit)** The minimum expected codeword length per symbol satisfies

$$\frac{H(X_1, X_2, \ldots, X_n)}{n} \le L_n^* < \frac{H(X_1, X_2, \ldots, X_n)}{n} + \frac{1}{n}.$$

Moreover, if $X_1, X_2, \ldots$ is a stationary stochastic process, then $L_n^* \to H(\mathcal{X})$, where $H(\mathcal{X})$ is the entropy rate of the process.

The previous theorem confirms that the entropy rate of a stationary stochastic process is indeed the minimum expected number of bits per symbol needed to describe the process.
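The upper bound $L^* < H_D(X) + 1$ is witnessed by Shannon code lengths $l_i = \lceil \log_D (1/p_i) \rceil$, which satisfy the Kraft inequality by construction. A minimal binary sketch, with an illustrative distribution of our own choosing:

```python
import math

def shannon_lengths(p):
    """Codeword lengths l_i = ceil(log2(1/p_i)) for a binary (D=2) code."""
    return [math.ceil(-math.log2(pi)) for pi in p]

p = [0.45, 0.25, 0.2, 0.1]          # illustrative source distribution
lengths = shannon_lengths(p)

# These lengths satisfy the Kraft inequality, so a prefix code exists...
assert sum(2 ** (-l) for l in lengths) <= 1

# ...and the resulting expected length is within one bit of the entropy.
H = -sum(pi * math.log2(pi) for pi in p)
L = sum(pi * li for pi, li in zip(p, lengths))
assert H <= L < H + 1
print(H, L)
```

The per-symbol overhead of up to one bit is exactly what blocking over supersymbols from $\mathcal{X}^n$ amortizes away.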
If we design a code for the wrong input distribution, then the increase in expected description length is given exactly by the relative entropy:

**Theorem (Wrong code)** The expected length under $p(x)$ of the code assignment $l(x) = \lceil \log \frac{1}{q(x)} \rceil$ satisfies

$$H(p) + D(p \| q) \le E_p[l(X)] < H(p) + D(p \| q) + 1.$$

### 3.3 Kraft Inequality for Uniquely Decodable Codes

In the sense of expected length, the class of uniquely decodable codes, while larger, does not improve upon instantaneous codes:

**Theorem (McMillan)** The codeword lengths of any uniquely decodable $D$-ary code must satisfy the Kraft inequality

$$\sum_i D^{-l_i} \le 1.$$

Conversely, given a set of codeword lengths satisfying this inequality, it is possible to construct a uniquely decodable code with these lengths.

**Corollary** A uniquely decodable code for an infinite source alphabet $\mathcal{X}$ also satisfies the Kraft inequality.

### 3.4 Huffman Codes: Optimality and Examples

Consider the tree construction we used earlier to suggest a proof of the Kraft inequality for finite instantaneous codes. It suggests a constructive procedure for assigning codewords so that their lengths are roughly inversely proportional to the corresponding symbol probabilities. Huffman codes formalize this idea: construct a $D$-ary tree from which codewords can be assigned, building up the tree recursively by combining the $D$ lowest-probability symbols at each stage.

A simple algorithm due to Huffman allows for the construction of optimal prefix codes for a given distribution:

**Lemma (Existence of a particular optimal code)** For any distribution, there exists an optimal instantaneous code (with minimum expected length) that satisfies the following properties:

1. The lengths are ordered inversely with the probabilities (i.e., if $p_j > p_k$, then $l_j \le l_k$).
2. The two longest codewords have the same length.
3. Two of the longest codewords differ only in the last bit and correspond to the two least likely symbols.

**Theorem (Optimality of Huffman coding)** Huffman coding is optimal; that is, if $C^*$ is a Huffman code and $C'$ is any other uniquely decodable code, then $L(C^*) \le L(C')$.

### 3.5 Shannon-Fano-Elias Coding

Fano proposed a suboptimal procedure based on recursively partitioning the unit interval, under the assumption that symbol probabilities are given in decreasing order. A related procedure, Shannon-Fano-Elias coding, makes direct use of the cumulative distribution function (cdf) $F(x)$ to assign codewords. By using the midpoint $\bar{F}(x)$ of each jump in the cdf, we may exhibit a prefix-free code $C$ with codeword lengths $l(x) = \lceil \log \frac{1}{p(x)} \rceil + 1$ and expected length $L(C) < H(X) + 2$.
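The binary Huffman procedure (merge the two least likely nodes, prepend a 0 to one subtree and a 1 to the other, repeat) can be sketched with a heap. The symbol probabilities are an illustrative choice, not from the notes:

```python
import heapq
import itertools

def huffman_code(probs):
    """Binary Huffman code: repeatedly merge the two least likely nodes."""
    counter = itertools.count()               # tie-breaker for equal weights
    heap = [(p, next(counter), {sym: ""}) for sym, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p0, _, c0 = heapq.heappop(heap)       # two lowest-probability nodes
        p1, _, c1 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c0.items()}
        merged.update({s: "1" + w for s, w in c1.items()})
        heapq.heappush(heap, (p0 + p1, next(counter), merged))
    return heap[0][2]

probs = {"a": 0.45, "b": 0.25, "c": 0.2, "d": 0.1}
code = huffman_code(probs)
L = sum(probs[s] * len(w) for s, w in code.items())
print(code, L)
```

The resulting lengths (1, 2, 3, 3) display the lemma's properties: lengths ordered inversely with probabilities, and the two longest codewords (for the two least likely symbols) equal in length and differing only in the last bit.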
