Gambling and Data Compression


1 Gambling

1.1 Horse Race

Definition The wealth relative $S(X) = b(X)o(X)$ is the factor by which the gambler's wealth grows if horse $X$ wins the race, where $b(x)$ is the fraction of the gambler's wealth invested in horse $x$ and $o(x)$ is the corresponding odds.

Definition The doubling rate of a horse race is
$$W(b, p) = E[\log S(X)] = \sum_{k=1}^{m} p_k \log b_k o_k.$$

Theorem Let the race outcomes $X_1, X_2, \ldots$ be i.i.d. $\sim p(x)$. Then the wealth of a gambler using betting strategy $b$ grows exponentially at rate $W(b, p)$; that is,
$$S_n \doteq 2^{nW(b,p)}.$$

Definition The optimum doubling rate $W^*(p)$ is the maximum doubling rate over all choices of the portfolio $b$:
$$W^*(p) = \max_{b} W(b, p) = \max_{b:\, b_i \ge 0,\ \sum_i b_i = 1} \sum_{i=1}^{m} p_i \log b_i o_i.$$

Theorem (Proportional gambling is log-optimal) The optimum doubling rate is given by
$$W^*(p) = \sum_i p_i \log o_i - H(p)$$
and is achieved by the proportional gambling scheme $b^* = p$.

Theorem (Conservation theorem) For uniform fair odds,
$$W^*(p) + H(p) = \log m.$$
Thus, the sum of the doubling rate and the entropy is a constant.

If the gambler does not always bet all the money, then the optimum strategy may depend on the odds and will not necessarily have the simple form of proportional gambling. There are three cases:

1. Fair odds with respect to some distribution: $\sum_i 1/o_i = 1$. By betting $b_i = 1/o_i$, one achieves $S(X) = 1$, which is the same as keeping the money aside as cash. Proportional betting is optimal.

2. Superfair odds: $\sum_i 1/o_i < 1$. By choosing $b_i = c/o_i$, where $c = 1/\sum_i (1/o_i)$, one has $S(X) = c > 1$ with probability 1. In this case, the gambler will always want to bet all the money, and the optimum strategy is again proportional betting.

3. Subfair odds: $\sum_i 1/o_i > 1$. Proportional gambling is no longer log-optimal. The gambler may want to bet only some of the money and keep the rest aside as cash, depending on the odds.

(Based on Cover & Thomas, Chapters 5 and 6.)
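The conservation theorem and the log-optimality of proportional betting are easy to check numerically. Below is a minimal Python sketch; the three-horse race, its win probabilities, and the uniform fair odds are illustrative assumptions, not taken from the notes. It computes the doubling rate $W(b,p)$ for a portfolio, verifies that $b^* = p$ attains $\log m - H(p)$, and shows that a uniform portfolio does no better.

```python
import numpy as np

def doubling_rate(b, p, o):
    """W(b, p) = sum_k p_k * log2(b_k * o_k)."""
    return float(np.sum(p * np.log2(b * o)))

# Hypothetical 3-horse race with uniform fair odds o_i = m = 3 (3-for-1).
p = np.array([0.5, 0.25, 0.25])   # win probabilities
o = np.full(3, 3.0)               # odds

H = -np.sum(p * np.log2(p))             # entropy of the race, H(p)
W_star = doubling_rate(p, p, o)         # proportional betting b* = p

# Conservation theorem: W*(p) + H(p) = log2(m) for uniform fair odds.
print(W_star + H, np.log2(3))           # both ~1.585

# Any other portfolio has a smaller doubling rate, e.g. uniform betting:
print(doubling_rate(np.full(3, 1/3), p, o), "<=", W_star)
```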
1.2 Side Information and Entropy Rate

Definition The increase $\Delta W$ is defined as $\Delta W = W^*(X \mid Y) - W^*(X)$, where
$$W^*(X) = \max_{b(x)} \sum_x p(x) \log b(x)o(x), \qquad W^*(X \mid Y) = \max_{b(x \mid y)} \sum_{x,y} p(x, y) \log b(x \mid y)o(x).$$

Theorem The increase $\Delta W$ in doubling rate due to side information $Y$ for a horse race $X$ is
$$\Delta W = I(X; Y).$$

1.3 Dependent Horse Races and the Entropy Rate

If the horse races are dependent, suppose that the winning horses form a stochastic process $\{X_k\}$. The optimal doubling rate for uniform fair odds ($m$-for-1) is
$$W^*(X_k \mid X_{k-1}, X_{k-2}, \ldots, X_1) = \log m - H(X_k \mid X_{k-1}, X_{k-2}, \ldots, X_1),$$
which is achieved by $b^*(x_k \mid x_{k-1}, \ldots, x_1) = p(x_k \mid x_{k-1}, \ldots, x_1)$. The doubling rate then satisfies
$$\frac{1}{n} E[\log S_n] = \log m - \frac{H(X_1, \ldots, X_n)}{n}.$$
Thus, in the limit as $n \to \infty$, the doubling rate is related to the entropy rate by
$$\lim_{n \to \infty} \frac{1}{n} E[\log S_n] = \log m - H(\mathcal{X}),$$
where $H(\mathcal{X})$ is the entropy rate of the process.

2 Data Compression: Codes and Optimality

2.1 Definitions and Examples of Codes

Definition A source code $C$ for a random variable $X$ is a mapping from $\mathcal{X}$, the range of $X$, to $D^*$, the set of finite-length strings of symbols from a $D$-ary alphabet.

Definition Let $C(x)$ denote the codeword corresponding to $x$ and let $l(x)$ denote its length. Then the expected length of the source code $C$ for a random variable $X$ with pmf $p(x)$ is
$$L(C) := E_p[l(X)] = \sum_{x \in \mathcal{X}} p(x) l(x).$$

Definition A code is said to be nonsingular if every element of the range of $X$ maps into a different string in $D^*$; that is, $x \ne x' \Rightarrow C(x) \ne C(x')$.
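Under uniform fair odds, the theorem $\Delta W = I(X;Y)$ follows from $W^*(X) = \log m - H(X)$ and $W^*(X \mid Y) = \log m - H(X \mid Y)$, and it can be checked directly on a small joint distribution. The sketch below uses a hypothetical joint pmf $p(x,y)$ for a two-horse race with binary side information; the numbers are illustrative only.

```python
import numpy as np

# Hypothetical joint pmf p(x, y): rows index the winning horse x, columns the side information y.
p_xy = np.array([[0.4, 0.1],
                 [0.1, 0.4]])
m = p_xy.shape[0]                  # number of horses; uniform fair odds o(x) = m

def entropy(p):
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

p_x = p_xy.sum(axis=1)             # marginal of X
p_y = p_xy.sum(axis=0)             # marginal of Y

H_X = entropy(p_x)
H_X_given_Y = sum(p_y[j] * entropy(p_xy[:, j] / p_y[j]) for j in range(len(p_y)))

W_X = np.log2(m) - H_X                      # optimal rate betting b*(x) = p(x)
W_X_given_Y = np.log2(m) - H_X_given_Y      # optimal rate betting b*(x|y) = p(x|y)

delta_W = W_X_given_Y - W_X
I_XY = H_X - H_X_given_Y                    # mutual information I(X; Y)
print(delta_W, I_XY)                        # the two values coincide
```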
Definition The extension $C^*$ of a code $C$ is the mapping from finite-length strings of $\mathcal{X}$ to finite-length strings of $D^*$, defined by $C(x_1 x_2 \cdots x_n) = C(x_1)C(x_2)\cdots C(x_n)$, where $C(x_1)C(x_2)\cdots C(x_n)$ indicates concatenation of the corresponding codewords.

Definition A code is called uniquely decodable if its extension is nonsingular. One may have to inspect the entire string to decode the first codeword.

Definition A code is called a prefix code or an instantaneous code if no codeword is a prefix of any other codeword. An instantaneous code is self-punctuating.

2.2 Instantaneous Codes and the Kraft Inequality

Theorem (Kraft inequality) For any instantaneous code (prefix code) over an alphabet of size $D$, the codeword lengths $l_1, l_2, \ldots, l_m$ must satisfy the inequality
$$\sum_i D^{-l_i} \le 1.$$
Conversely, given a set of codeword lengths that satisfy this inequality, there exists an instantaneous code with these word lengths.

Theorem (Extended Kraft inequality) For any countably infinite set of codewords that form a prefix code, the codeword lengths satisfy the extended Kraft inequality
$$\sum_{i=1}^{\infty} D^{-l_i} \le 1.$$
Conversely, given any $l_1, l_2, \ldots$ satisfying the extended Kraft inequality, we can construct a prefix code with these codeword lengths.

3 Data Compression: Optimal Codes and Length Bounds

3.1 Optimal Codes

Definition A probability distribution is called $D$-adic if each of the probabilities is equal to $D^{-n}$ for some $n$.

Theorem (Lower bound on codeword length) The expected length $L$ of any instantaneous $D$-ary code for a random variable $X$ is greater than or equal to the $D$-ary entropy $H_D(X)$:
$$L \ge H_D(X),$$
with equality iff $D^{-l_i} = p_i$ for all $i$, that is, iff the distribution of $X$ is $D$-adic.
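Both directions of the Kraft inequality can be illustrated in a few lines of code. The following sketch (binary case, $D = 2$; the example lengths are arbitrary) checks the inequality and then assigns codewords greedily in order of increasing length, which is one standard way to realize the converse: any admissible set of lengths yields a prefix code.

```python
def kraft_sum(lengths, D=2):
    """Left-hand side of the Kraft inequality: sum_i D**(-l_i)."""
    return sum(D ** (-l) for l in lengths)

def prefix_code_from_lengths(lengths):
    """Greedy binary codeword assignment for lengths satisfying Kraft (a sketch
    of the converse direction: admissible lengths -> an instantaneous code)."""
    assert kraft_sum(lengths) <= 1, "lengths violate the Kraft inequality"
    codewords, next_val, prev_len = [], 0, 0
    for l in sorted(lengths):
        next_val <<= (l - prev_len)            # descend to depth l in the code tree
        codewords.append(format(next_val, f"0{l}b"))
        next_val += 1                          # move to the next free node at this depth
        prev_len = l
    return codewords

print(kraft_sum([1, 2, 3, 3]))                 # 1.0, so the lengths are admissible
print(prefix_code_from_lengths([1, 2, 3, 3]))  # ['0', '10', '110', '111']
```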
3.2 Bounds on the Optimal Code Length

The previous theorem suggests finding the $D$-adic distribution vector $r$ closest to a given source distribution vector $p$, and then designing a code for $r$. By minimizing $D(p \| r)$ over $D$-adic $r$, we may exhibit a code (not necessarily optimal) whose length $L$ satisfies the following bound:

Theorem (Optimal expected codeword length) Let $l_1, l_2, \ldots, l_m$ be optimal codeword lengths for a source distribution $p$ and a $D$-ary alphabet, and let $L$ be the associated expected length of an optimal code ($L = \sum_i p_i l_i$). Then
$$H_D(X) \le L < H_D(X) + 1.$$

Consider sending a sequence of $n$ symbols drawn i.i.d. according to $p(x)$ in a block, so that we have a supersymbol from $\mathcal{X}^n$. Let $L_n$ be the expected codeword length per input symbol:
$$L_n := \frac{1}{n} E[l(X_1, X_2, \ldots, X_n)].$$
Then by letting the block length $n$ become large, we may achieve an expected length per symbol $L_n$ arbitrarily close to the entropy:

Theorem (Distributing the extra overhead bit) The minimum expected codeword length per symbol satisfies
$$\frac{H(X_1, X_2, \ldots, X_n)}{n} \le L_n < \frac{H(X_1, X_2, \ldots, X_n)}{n} + \frac{1}{n}.$$
Moreover, if $X_1, X_2, \ldots, X_n$ is a stationary stochastic process, then $L_n \to H(\mathcal{X})$, where $H(\mathcal{X})$ is the entropy rate of the process.

The previous theorem confirms that the entropy rate of a stationary stochastic process is indeed the minimum expected number of bits per symbol needed to describe the process.

If we design a code for the wrong input distribution, then the penalty in expected description length is the relative entropy, to within one bit:

Theorem (Wrong code) The expected length under $p(x)$ of the code assignment $l(x) = \lceil \log \frac{1}{q(x)} \rceil$ satisfies
$$H(p) + D(p \| q) \le E_p[l(X)] < H(p) + D(p \| q) + 1.$$

3.3 Kraft Inequality for Uniquely Decodable Codes

In the sense of expected length, the set of uniquely decodable codes, while larger, does not improve upon instantaneous codes:

Theorem (McMillan) The codeword lengths of any uniquely decodable $D$-ary code must satisfy the Kraft inequality
$$\sum_i D^{-l_i} \le 1.$$
Conversely, given a set of codeword lengths satisfying this inequality, it is possible to construct a uniquely decodable code with these lengths.

Corollary A uniquely decodable code for an infinite source alphabet $\mathcal{X}$ also satisfies the Kraft inequality.
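The upper bound $L < H_D(X) + 1$ is achieved, for example, by the Shannon code lengths $l(x) = \lceil \log \frac{1}{p(x)} \rceil$, and the wrong-code theorem can be checked the same way. The sketch below (binary alphabet; the distributions $p$ and $q$ are made up for illustration) evaluates both bounds numerically.

```python
import math

def shannon_lengths(p):
    """Binary Shannon code lengths l_i = ceil(log2(1/p_i))."""
    return [math.ceil(-math.log2(pi)) for pi in p]

def expected_length(p, lengths):
    return sum(pi * li for pi, li in zip(p, lengths))

def entropy(p):
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

p = [0.45, 0.25, 0.2, 0.1]          # true source distribution (hypothetical)
q = [0.1, 0.2, 0.25, 0.45]          # "wrong" design distribution (hypothetical)

L_right = expected_length(p, shannon_lengths(p))   # code designed for p
L_wrong = expected_length(p, shannon_lengths(q))   # code designed for q, used on p

H_p = entropy(p)
D_pq = sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q))

print(H_p <= L_right < H_p + 1)                    # True: H(p) <= L < H(p) + 1
print(H_p + D_pq <= L_wrong < H_p + D_pq + 1)      # True: wrong-code penalty ~ D(p||q)
```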
3.4 Huffman Codes: Optimality and Examples

Consider the tree construction we used earlier to suggest a proof of the Kraft inequality for finite, instantaneous codes. It suggests a constructive procedure for assigning codewords so that less likely symbols receive longer codewords, with lengths roughly $\log \frac{1}{p(x)}$. We now formalize this idea through Huffman codes: construct a $D$-ary tree from which codewords can be assigned, building up the tree recursively by combining the $D$ lowest-probability symbols at each stage. This simple algorithm, due to Huffman, constructs optimal prefix codes for a given distribution.

Lemma (Existence of a particular optimal code) For any distribution, there exists an optimal instantaneous code (with minimum expected length) that satisfies the following properties:

1. The lengths are ordered inversely with the probabilities (i.e., if $p_j > p_k$, then $l_j \le l_k$).
2. The two longest codewords have the same length.
3. Two of the longest codewords differ only in the last bit and correspond to the two least likely symbols.

Theorem (Optimality of Huffman coding) Huffman coding is optimal; that is, if $C^*$ is a Huffman code and $C'$ is any other uniquely decodable code, then $L(C^*) \le L(C')$.

3.5 Shannon-Fano-Elias Coding

Fano proposed a suboptimal procedure based on recursively partitioning the unit interval, under the assumption that symbol probabilities are given in decreasing order. A related procedure, Shannon-Fano-Elias coding, makes direct use of the cumulative distribution function (cdf) $F(x)$ to assign codewords. By using the midpoint $\bar{F}(x)$ of each jump in the cdf, we may exhibit a prefix-free code $C$ with codeword lengths $l(x) = \lceil \log \frac{1}{p(x)} \rceil + 1$ and expected length $L(C) < H(X) + 2$.
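As a concrete companion to the lemma and theorem above, here is a minimal binary Huffman construction in Python; the four-symbol source is a made-up example. The two least likely subtrees are merged repeatedly, and the resulting expected length lies within one bit of $H(X)$.

```python
import heapq
from itertools import count

def huffman_code(pmf):
    """Binary Huffman code for a pmf given as {symbol: probability} (a sketch)."""
    tiebreak = count()                         # prevents comparing dicts on probability ties
    heap = [(prob, next(tiebreak), {sym: ""}) for sym, prob in pmf.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)        # two least likely subtrees
        p2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c1.items()}   # prepend a bit for each branch
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (p1 + p2, next(tiebreak), merged))
    return heap[0][2]

pmf = {"a": 0.45, "b": 0.25, "c": 0.2, "d": 0.1}   # hypothetical source
code = huffman_code(pmf)
L = sum(pmf[s] * len(w) for s, w in code.items())
print(code)   # e.g. {'a': '0', 'b': '10', 'd': '110', 'c': '111'} (ties may vary)
print(L)      # 1.85 here, within one bit of H(X) ~ 1.815
```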