Using game-theoretic probability for probability judgment. Glenn Shafer
Transcription
1 Workshop on Game-Theoretic Probability and Related Topics, Tokyo University, February 28, 2008. Using game-theoretic probability for probability judgment. Glenn Shafer. For 170 years, people have asked whether probabilities are objective or subjective. Game-theoretic probability asks a simpler question: Is there a repetitive structure? For probability judgement outside repetitive structures we need judgements of independence.
2 Probability and Finance: It's Only a Game! Glenn Shafer and Vladimir Vovk, Wiley, 2001. Sample chapters and working papers at www.probabilityandfinance.com.
3 Abstract. The success of defensive forecasting shows that the crucial question is not whether there is a valid objective model but simply whether there is an agreed-on repetitive structure for the questions to be answered and the data for answering them. Once we place the current example in a sequence of other examples, defensive forecasting can produce valid probability forecasts, valid insofar as they resist statistical tests without any assumptions. But if no single sequence of previous examples imposes itself, then we must go outside probability theory to weigh evidence.
4 Theory: To prove a probabilistic prediction, construct a betting strategy that makes you rich if the prediction fails. Testing: To test a probabilistic theory, bet against its predictions. Forecasting: To make probability forecasts in a repetitive structure, construct a forecasting strategy that defeats all reasonable betting strategies. Probability judgement: Judgements of evidential independence are needed to extend reasoning to non-repetitive situations (Dempster-Shafer, etc.).
5 Part I. Game-theoretic probability. Blaise Pascal (1623-1662): Probability is about betting. Antoine Cournot (1801-1877): Events of small probability do not happen. Jean Ville (1910-1989): Pascal + Cournot: If the probabilities are right, you don't get rich.
6 A physically impossible event is one whose probability is infinitely small. This remark alone gives substance, an objective and phenomenological value, to the mathematical theory of probability. (1843) Antoine Cournot (1801-1877). This is more basic than frequentism.
7 Borel: the principle that an event with very small probability will not happen is the only law of chance. Émile Borel (1871-1956), inventor of measure theory; Minister of the French navy in 1925. Impossibility on the human scale: p < 10^-6. Impossibility on the terrestrial scale: p < 10^-15. Impossibility on the cosmic scale: p < 10^-50.
8 In his celebrated 1933 book, Kolmogorov wrote: When P(A) is very small, we can be practically certain that the event A will not happen on a single trial of the conditions that define it. Andrei Kolmogorov, hailed as the Soviet Euler, was credited with establishing measure theory as the mathematical foundation for probability.
9 In 1939, Ville showed that the laws of probability can be derived from this principle: You will not multiply the capital you risk by a large factor. (Photo: Jean Ville on entering the École Normale Supérieure.) Ville showed that this principle is equivalent to the principle that events of small probability will not happen. We call both principles Cournot's principle.
10 Suppose you gamble without risking more than your initial capital. Your resulting wealth is a nonnegative random variable X with expected value E(X) equal to your initial capital. Markov's inequality says P(X ≥ E(X)/ε) ≤ ε. You have probability ε or less of multiplying your initial capital by 1/ε or more. Game-theoretic probability generalizes classical probability to the case of limited betting offers (less than a probability distribution) by taking the inability to multiply initial capital as basic.
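The capital-multiplication reading of Markov's inequality can be checked numerically. The sketch below is not from the slides; the betting fraction, round count, and function name are illustrative. It simulates a gambler who repeatedly stakes half of the current capital on a fair coin, so the final wealth X is nonnegative with E(X) = 1, and estimates how often X reaches 1/ε = 10.

```python
import random

def final_wealth(n_rounds=20, fraction=0.5):
    """Wealth after repeatedly staking `fraction` of current capital on a fair coin.
    The capital never goes negative and its expected value stays equal to 1."""
    k = 1.0
    for _ in range(n_rounds):
        stake = fraction * k
        k = k + stake if random.random() < 0.5 else k - stake
    return k

random.seed(0)
trials = [final_wealth() for _ in range(100_000)]
eps = 0.1
freq = sum(x >= 1 / eps for x in trials) / len(trials)
print(f"estimated P(X >= {1/eps:.0f}) = {freq:.4f}   (Markov bound: {eps})")
```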
11 Perfect-information protocol for predicting and testing, even without a probability model. K_0 = 1. FOR n = 1, 2, ..., N: Forecaster announces prices for various payoffs. Skeptic decides which payoffs to buy. Reality determines the payoffs. K_n := K_{n-1} + Skeptic's net gain or loss.
12 In Ville's theory, Forecaster gives prices based on a probability distribution. He uses conditional probabilities for Reality's next move given her past moves. In Vovk's generalization, (1) Forecaster does not necessarily use a known probability distribution, and (2) he may give less than a probability distribution for Reality's next move.
13 Ville's strong law of large numbers. (Special case where the probability is always 1/2.) K_0 = 1. FOR n = 1, 2, ...: Skeptic announces s_n ∈ R. Reality announces y_n ∈ {0, 1}. K_n := K_{n-1} + s_n(y_n - 1/2). Skeptic wins if (1) K_n is never negative and (2) either lim_{n→∞} (1/n) Σ_{i=1}^n y_i = 1/2 or lim_{n→∞} K_n = ∞. Theorem: Skeptic has a winning strategy.
14 Ville's strategy. K_0 = 1. FOR n = 1, 2, ...: Skeptic announces s_n ∈ R. Reality announces y_n ∈ {0, 1}. K_n := K_{n-1} + s_n(y_n - 1/2). Ville suggested the strategy s_n(y_1, ..., y_{n-1}) = (4/(n+1)) K_{n-1} (r_{n-1} - (n-1)/2), where r_{n-1} := Σ_{i=1}^{n-1} y_i. It produces the capital K_n = 2^n r_n! (n - r_n)! / (n+1)!. From the assumption that this remains bounded by some constant C, you can easily derive the strong law of large numbers using Stirling's formula.
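As a sanity check, here is a small simulation of Ville's strategy (not from the slides; the sample sizes, random seed, and the bias 0.7 are illustrative). Against a fair coin the capital stays modest; against a biased coin it explodes; and the recursive capital agrees with the closed form above.

```python
import math
import random

def ville_capital(ys):
    """Capital path of Ville's strategy s_n = (4/(n+1)) K_{n-1} (r_{n-1} - (n-1)/2)."""
    K, r = 1.0, 0
    path = [K]
    for n, y in enumerate(ys, start=1):
        s = 4.0 / (n + 1) * K * (r - (n - 1) / 2.0)
        K += s * (y - 0.5)
        r += y
        path.append(K)
    return path

def ville_closed_form(ys):
    """K_n = 2^n r_n! (n - r_n)! / (n+1)!, computed in log space to avoid overflow."""
    n, r = len(ys), sum(ys)
    log_k = n * math.log(2) + math.lgamma(r + 1) + math.lgamma(n - r + 1) - math.lgamma(n + 2)
    return math.exp(log_k)

random.seed(1)
fair   = [int(random.random() < 0.5) for _ in range(400)]
tilted = [int(random.random() < 0.7) for _ in range(400)]
print("fair coin (frequency -> 1/2):  K_400 =", round(ville_capital(fair)[-1], 4))
print("tilted coin (frequency -> 0.7): K_400 = %.3e" % ville_capital(tilted)[-1])
print("recursion agrees with closed form:",
      math.isclose(ville_capital(fair)[-1], ville_closed_form(fair), rel_tol=1e-6))
```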
15 Ville's more general game. Ville started with a probability distribution P for y_1, y_2, .... The conditional probability for y_n = 1 given y_1, ..., y_{n-1} is not necessarily 1/2. K_0 := 1. FOR n = 1, 2, ...: Skeptic announces s_n ∈ R. Reality announces y_n ∈ {0, 1}. K_n := K_{n-1} + s_n(y_n - P(y_n = 1 | y_1, ..., y_{n-1})). Skeptic wins if (1) K_n is never negative and (2) either lim_{n→∞} (1/n) Σ_{i=1}^n (y_i - P(y_i = 1 | y_1, ..., y_{i-1})) = 0 or lim_{n→∞} K_n = ∞. Theorem: Skeptic has a winning strategy.
16 Vovk's generalization: Replace P with a forecaster. K_0 := 1. FOR n = 1, 2, ...: Forecaster announces p_n ∈ [0, 1]. Skeptic announces s_n ∈ R. Reality announces y_n ∈ {0, 1}. K_n := K_{n-1} + s_n(y_n - p_n). Skeptic wins if (1) K_n is never negative and (2) either lim_{n→∞} (1/n) Σ_{i=1}^n (y_i - p_i) = 0 or lim_{n→∞} K_n = ∞. Theorem: Skeptic has a winning strategy.
17 Defensive forecasting. Under repetition, good probability forecasting is possible. We call it defensive because it defends against a quasi-universal test. Your probability forecasts will pass this test even if reality plays against you.
18 Why Phil Dawid thought good probability prediction is impossible... FOR n = 1, 2, ...: Forecaster announces p_n ∈ [0, 1]. Skeptic announces s_n ∈ R. Reality announces y_n ∈ {0, 1}. Skeptic's profit := s_n(y_n - p_n). Reality can make Forecaster uncalibrated by setting y_n := 1 if p_n < 0.5 and y_n := 0 if p_n ≥ 0.5. Skeptic can then make steady money with s_n := 1 if p_n < 0.5 and s_n := -1 if p_n ≥ 0.5. But if Skeptic is forced to approximate s_n by a continuous function of p_n, then the continuous function will be zero at some point close to p = 0.5, and Forecaster can set p_n equal to this point.
19 Skeptic adopts a continuous strategy S. FOR n = 1, 2, ...: Reality announces x_n ∈ X. Forecaster announces p_n ∈ [0, 1]. Skeptic makes the move s_n specified by S. Reality announces y_n ∈ {0, 1}. Skeptic's profit := s_n(y_n - p_n). Theorem: Forecaster can guarantee that Skeptic never makes money. We actually prove a stronger theorem. Instead of making Skeptic announce his entire strategy in advance, we only make him reveal his strategy for each round in advance of Forecaster's move. FOR n = 1, 2, ...: Reality announces x_n ∈ X. Skeptic announces continuous S_n : [0, 1] → R. Forecaster announces p_n ∈ [0, 1]. Reality announces y_n ∈ {0, 1}. Skeptic's profit := S_n(p_n)(y_n - p_n). Theorem: Forecaster can guarantee that Skeptic never makes money.
20 FOR n = 1, 2, ...: Reality announces x_n ∈ X. Skeptic announces continuous S_n : [0, 1] → R. Forecaster announces p_n ∈ [0, 1]. Reality announces y_n ∈ {0, 1}. Skeptic's profit := S_n(p_n)(y_n - p_n). Theorem: Forecaster can guarantee that Skeptic never makes money. Proof: If S_n(p) > 0 for all p, take p_n := 1. If S_n(p) < 0 for all p, take p_n := 0. Otherwise, choose p_n so that S_n(p_n) = 0.
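This recipe is easy to implement. The sketch below (names and tolerance are illustrative, not from the slides) checks only the endpoints of [0, 1], which is enough to keep Skeptic's round profit S(p)(y - p) from being positive, and bisects for a zero of S in the mixed case.

```python
def forecaster_move(S, tol=1e-12):
    """Given Skeptic's continuous strategy S : [0,1] -> R for this round, return a
    forecast p at which S(p)(y - p) <= 0 for both y = 0 and y = 1 (up to tol)."""
    if S(1.0) >= 0:           # buying at p = 1 cannot pay: y - 1 <= 0
        return 1.0
    if S(0.0) <= 0:           # selling at p = 0 cannot pay: y - 0 >= 0
        return 0.0
    lo, hi = 0.0, 1.0         # now S(0) > 0 > S(1): bisect for a zero of S
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if S(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Example: a continuous version of the strategy from Dawid's argument.
S = lambda p: 0.5 - p         # positive below 1/2, negative above
print(forecaster_move(S))     # approximately 0.5, where Skeptic's stake vanishes
```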
21 TWO APPROACHES TO FORECASTING. FOR n = 1, 2, ...: Forecaster announces p_n ∈ [0, 1]. Skeptic announces s_n ∈ R. Reality announces y_n ∈ {0, 1}. 1. Start with strategies for Forecaster. Improve by averaging (Bayes, prediction with expert advice). 2. Start with strategies for Skeptic. Improve by averaging (defensive forecasting).
22 We can always give probabilities with good calibration and resolution. FOR n = 1, 2, ...: Forecaster announces p_n ∈ [0, 1]. Reality announces y_n ∈ {0, 1}. There exists a strategy for Forecaster that gives p_n with good calibration and resolution.
23 FOR n = 1, 2, ...: Reality announces x_n ∈ X. Forecaster announces p_n ∈ [0, 1]. Reality announces y_n ∈ {0, 1}. 1. Fix p ∈ [0, 1]. Look at the n for which p_n ≈ p. If the frequency of y_n = 1 always approximates p, Forecaster is properly calibrated. 2. Fix x ∈ X and p ∈ [0, 1]. Look at the n for which x_n ≈ x and p_n ≈ p. If the frequency of y_n = 1 always approximates p, Forecaster is properly calibrated and has good resolution.
24 Fundamental idea: Average strategies for Skeptic for a grid of values of p. (The p-strategy makes money if calibration fails for p_n close to p.) The derived strategy for Forecaster guarantees good calibration everywhere. Example of a resulting strategy for Skeptic: S_n(p) := Σ_{i=1}^{n-1} e^{-C(p - p_i)^2} (y_i - p_i). Any kernel K(p, p_i) can be used in place of e^{-C(p - p_i)^2}.
25 Skeptic's strategy: S_n(p) := Σ_{i=1}^{n-1} e^{-C(p - p_i)^2} (y_i - p_i). Forecaster's strategy: Choose p_n so that Σ_{i=1}^{n-1} e^{-C(p_n - p_i)^2} (y_i - p_i) = 0. The main contribution to the sum comes from the i for which p_i is close to p_n. So Forecaster chooses p_n in the region where the y_i - p_i average close to zero. On each round, choose as p_n the probability value where calibration is best so far.
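Here is a minimal simulation of this forecasting strategy (not from the slides; the kernel width C = 100, the regime probabilities, the round count, and the seed are illustrative, and the kernel uses only p, so it targets calibration but not resolution). Forecaster finds a zero of S_n by bisection, and even though Reality alternates between two regimes, the forecasts end up roughly calibrated in the buckets where they fall.

```python
import math
import random

def next_forecast(p_hist, y_hist, C=100.0):
    """Choose p_n so that S_n(p_n) = sum_i exp(-C (p_n - p_i)^2) (y_i - p_i) is near zero."""
    def S(p):
        return sum(math.exp(-C * (p - pi) ** 2) * (yi - pi)
                   for pi, yi in zip(p_hist, y_hist))
    if S(1.0) >= 0:
        return 1.0
    if S(0.0) <= 0:
        return 0.0
    lo, hi = 0.0, 1.0
    for _ in range(30):                      # bisection: S is continuous in p
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if S(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

random.seed(2)
p_hist, y_hist = [], []
for n in range(800):
    p = next_forecast(p_hist, y_hist)
    y = int(random.random() < (0.3 if n % 2 == 0 else 0.8))   # Reality alternates regimes
    p_hist.append(p)
    y_hist.append(y)

# Calibration check: among rounds whose forecast falls in a bucket, how often was y = 1?
for lo_p in (0.0, 0.25, 0.5, 0.75):
    rounds = [i for i, p in enumerate(p_hist) if lo_p <= p < lo_p + 0.25]
    if rounds:
        freq = sum(y_hist[i] for i in rounds) / len(rounds)
        avg_p = sum(p_hist[i] for i in rounds) / len(rounds)
        print(f"bucket [{lo_p:.2f},{lo_p + 0.25:.2f}): n={len(rounds):3d}  "
              f"avg forecast={avg_p:.3f}  frequency={freq:.3f}")
```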
26 Objective vs. subjective. From a 1970s perspective: Aleatory probability is the irreducible uncertainty that remains when knowledge is complete. Epistemic probability arises when knowledge is incomplete. New game-theoretic perspective: Under a repetitive structure you can make good probability forecasts relative to whatever state of knowledge you have. If there is no repetitive structure, your task is to combine evidence rather than to make probability forecasts.
27 Part 2. Cournotian justification of Bayesian updating. We update probabilities using the rule P(B|A) = P(A&B)/P(A). In 1738, De Moivre justified this rule under the classical betting interpretation. In 1937, de Finetti recast the justification in terms of his betting interpretation. I will recast it in Cournotian terms.
28 Three betting interpretations. De Moivre: P(E) is the value of a ticket that pays 1 if E happens. (No explanation of what value means.) De Finetti: P(E) is a price at which YOU would buy or sell a ticket that pays 1 if E happens. Shafer: The price P(E) cannot be beat, i.e., a strategy for buying and selling such tickets at such prices will not multiply the capital it risks by a large factor.
29 De Moivre's argument for P(A&B) = P(A)P(B|A). (Abraham de Moivre.) Gambles available: pay P(A) for 1 if A happens; pay P(A)x for x if A happens; and, after A happens, pay P(B|A) for 1 if B happens. To get 1 if A&B happens: pay P(A)P(B|A) for P(B|A) if A happens; then, if A happens, pay the P(B|A) you just got for 1 if B happens.
30 De Finetti's argument for P(A&B) = P(A)P(B|A). (Bruno de Finetti, 1906-1985.) Suppose you are required to announce prices P(A) and P(A&B) at which you will buy or sell $1 tickets on these events, and a price P(B|A) at which you will buy or sell $1 tickets on B if A happens. Opponent can make money for sure if you announce P(A&B) different from P(A)P(B|A).
31 Cournotian argument for P(B|A) = P(A&B)/P(A). Claim: Suppose P(A) and P(A&B) cannot be beat. Suppose we learn that A happens and nothing more. Then we can include P(A&B)/P(A) as a new probability for B among the probabilities that cannot be beat. Structure of proof: Consider a bankruptcy-free strategy S against the probabilities P(A), P(A&B), and P(A&B)/P(A). We want to show that S does not get rich. Do this by constructing a strategy S′ against P(A) and P(A&B) alone that does the same thing as S.
32 Given: Bankruptcy-free strategy S that deals in A-tickets and A&B-tickets in the initial situation and B-tickets in the situation where A has just happened. Construct: Strategy S′ that agrees with S except that it does not buy the B-tickets but instead initially buys additional A- and A&B-tickets. (Figure: the event trees for S and S′, each branching first on A versus not A and then on B versus not B.)
33 1. A's happening is the only new information used by S. So S′ uses only the initial information. 2. Because the additional initial tickets have net cost zero, S and S′ have the same cash on hand in the initial situation. 3. In the situation where A happens, they again produce the same cash position, because the additional A-tickets require S′ to pay M P(A&B)/P(A), which is the cost of the B-tickets that S buys. 4. They have the same payoffs if not A happens (0), if A&(not B) happens (0), or if A&B happens (M). 5. By hypothesis, S is bankruptcy-free. So S′ is also bankruptcy-free. 6. Therefore S′ does not get rich. So S does not get rich either.
34 Crucial assumption for conditioning on A: You learn A and nothing more that can help you beat the probabilities. In practice, you always learn more than A. But you judge that the other things don't matter. Probability judgement is always in a small world. We judge knowledge outside the small world irrelevant.
35 Part 3. Cournotian justification of Dempster-Shafer operations. Dempster-Shafer has three fundamental operations: transferring belief, independence, and conditioning (same as Bayesian updating). All three are justified by a judgement that certain information does not help us beat certain probabilities.
36 Fundamental idea: transferring belief. Variable ω with set of possible values Ω. Random variable X with set of possible values X. We learn a mapping Γ : X → 2^Ω with this meaning: If X = x, then ω ∈ Γ(x). For A ⊆ Ω, our belief that ω ∈ A is now B(A) = P{x | Γ(x) ⊆ A}. Cournotian judgement of independence: Learning the relationship between X and ω does not affect our inability to beat the probabilities for X.
37 Example: The sometimes reliable witness. Joe is reliable with probability 30%. When he is reliable, what he says is true. Otherwise, it may or may not be true. X = {reliable, not reliable}; P(reliable) = 0.3, P(not reliable) = 0.7. Did Glenn pay his dues for coffee? Ω = {paid, not paid}. Joe says Glenn paid. Γ(reliable) = {paid}, Γ(not reliable) = {paid, not paid}. New beliefs: B(paid) = 0.3, B(not paid) = 0. Cournotian judgement of independence: Hearing what Joe said does not affect our inability to beat the probabilities concerning his reliability.
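A direct computation of the transferred belief (a sketch, not from the slides; the dictionary encoding of P and Γ is just one convenient representation):

```python
def transferred_belief(probs, gamma, A):
    """B(A) = P{ x : Gamma(x) is a subset of A }, the belief transferred to A by Gamma."""
    return sum(p for x, p in probs.items() if gamma[x] <= A)

# Joe, the sometimes reliable witness, says Glenn paid.
probs = {"reliable": 0.3, "not reliable": 0.7}
gamma = {"reliable": {"paid"}, "not reliable": {"paid", "not paid"}}
print(transferred_belief(probs, gamma, {"paid"}))      # B(paid) = 0.3
print(transferred_belief(probs, gamma, {"not paid"}))  # B(not paid) = 0
```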
38 Conditioning. Variable ω with set of possible values Ω. Random variable X with set of possible values X. We learn a mapping Γ : X → 2^Ω, with Γ(x) = ∅ for some x ∈ X, and with this meaning: If X = x, then ω ∈ Γ(x). For A ⊆ Ω, our belief that ω ∈ A is now B(A) = P{x | Γ(x) ⊆ A & Γ(x) ≠ ∅} / P{x | Γ(x) ≠ ∅}. Cournotian judgement of independence: Aside from the impossibility of the x for which Γ(x) = ∅, learning Γ does not affect our inability to beat the probabilities for X.
39 Example: The witness caught out. Tom is absolutely precise with probability 70%, approximate with probability 20%, and unreliable with probability 10%. X = {precise, approximate, not reliable}; P(precise) = 0.7, P(approximate) = 0.2, P(not reliable) = 0.1. What did Glenn pay? Ω = {0, $1, $5}. Tom says Glenn paid $10. Γ(precise) = ∅, Γ(approximate) = {$5}, Γ(not reliable) = {0, $1, $5}. New beliefs: B{0} = 0, B{$1} = 0, B{$5} = 2/3, B{$1, $5} = 2/3. Cournotian judgement of independence: Aside from ruling out his being absolutely precise, what Tom said does not help us beat the probabilities for his precision.
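The same computation with conditioning (again a sketch, not from the slides; the dollar strings are just labels): rule out the x with Γ(x) empty, renormalize, and transfer.

```python
def conditioned_belief(probs, gamma, A):
    """B(A) = P{x : nonempty Gamma(x) is a subset of A} / P{x : Gamma(x) nonempty}."""
    alive = {x: p for x, p in probs.items() if gamma[x]}   # drop x with Gamma(x) empty
    z = sum(alive.values())                                # P{Gamma(x) nonempty}
    return sum(p for x, p in alive.items() if gamma[x] <= A) / z

# Tom says Glenn paid $10, which is impossible, so "precise" is ruled out.
probs = {"precise": 0.7, "approximate": 0.2, "not reliable": 0.1}
gamma = {"precise": set(), "approximate": {"$5"}, "not reliable": {"$0", "$1", "$5"}}
print(round(conditioned_belief(probs, gamma, {"$5"}), 4))        # 2/3
print(round(conditioned_belief(probs, gamma, {"$1", "$5"}), 4))  # 2/3
```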
40 Independence. X_Bill = {Bill precise, Bill approximate, Bill not reliable}; P(precise) = 0.7, P(approximate) = 0.2, P(not reliable) = 0.1. X_Tom = {Tom precise, Tom approximate, Tom not reliable}; P(precise) = 0.7, P(approximate) = 0.2, P(not reliable) = 0.1. Product measure on X_Bill&Tom = X_Bill × X_Tom: P(Bill precise, Tom precise) = 0.7 × 0.7 = 0.49, P(Bill precise, Tom approximate) = 0.7 × 0.2 = 0.14, etc. Cournotian judgements of independence: Learning about the precision of one of the witnesses will not help us beat the probabilities for the other. Nothing novel here. Dempsterian independence = Cournotian independence.
41 EXTRA SLIDES
42 Example: The more or less precise witness. Bill is absolutely precise with probability 70%, approximate with probability 20%, and unreliable with probability 10%. X = {precise, approximate, not reliable}; P(precise) = 0.7, P(approximate) = 0.2, P(not reliable) = 0.1. What did Glenn pay? Ω = {0, $1, $5}. Bill says Glenn paid $5. Γ(precise) = {$5}, Γ(approximate) = {$1, $5}, Γ(not reliable) = {0, $1, $5}. New beliefs: B{0} = 0, B{$1} = 0, B{$5} = 0.7, B{$1, $5} = 0.9. Cournotian judgement of independence: Hearing what Bill said does not affect our inability to beat the probabilities concerning his precision.
43 Dempster's rule (independence + conditioning). Variable ω with set of possible values Ω. Random variables X_1 and X_2 with sets of possible values X_1 and X_2. Form the product measure on X_1 × X_2. We learn mappings Γ_1 : X_1 → 2^Ω and Γ_2 : X_2 → 2^Ω: If X_1 = x_1, then ω ∈ Γ_1(x_1). If X_2 = x_2, then ω ∈ Γ_2(x_2). So if (X_1, X_2) = (x_1, x_2), then ω ∈ Γ_1(x_1) ∩ Γ_2(x_2). Conditioning on what is not ruled out, B(A) = P{(x_1, x_2) | ∅ ≠ Γ_1(x_1) ∩ Γ_2(x_2) ⊆ A} / P{(x_1, x_2) | Γ_1(x_1) ∩ Γ_2(x_2) ≠ ∅}. Cournotian judgement of independence: Aside from ruling out some (x_1, x_2), learning the Γ_i does not help us beat the probabilities for X_1 and X_2.
44 Example: Independent contradictory witnesses. Joe and Bill are both reliable with probability 70%. Did Glenn pay his dues? Ω = {paid, not paid}. Joe says, Glenn paid: Γ_1(Joe reliable) = {paid}, Γ_1(Joe not reliable) = {paid, not paid}. Bill says, Glenn did not pay: Γ_2(Bill reliable) = {not paid}, Γ_2(Bill not reliable) = {paid, not paid}. The pair (Joe reliable, Bill reliable), which had probability 0.49, is ruled out. B(paid) = 0.21/0.51 ≈ 0.41, B(not paid) = 0.21/0.51 ≈ 0.41. Cournotian judgement of independence: Aside from learning that they are not both reliable, what Joe and Bill said does not help us beat the probabilities concerning their reliability.
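Dempster's rule for this example, written just as the slide describes it (a sketch; the function and variable names are illustrative): form the product measure, intersect the Γs, rule out the empty intersections, renormalize, and transfer.

```python
from itertools import product

def dempster_combine(probs1, gamma1, probs2, gamma2, A):
    """Combine two independent (P, Gamma) pairs and return the belief in A."""
    cells = [(p1 * p2, gamma1[x1] & gamma2[x2])
             for (x1, p1), (x2, p2) in product(probs1.items(), probs2.items())]
    z = sum(p for p, g in cells if g)                     # mass not ruled out
    return sum(p for p, g in cells if g and g <= A) / z

probs_joe  = {"reliable": 0.7, "not reliable": 0.3}
gamma_joe  = {"reliable": {"paid"}, "not reliable": {"paid", "not paid"}}
probs_bill = {"reliable": 0.7, "not reliable": 0.3}
gamma_bill = {"reliable": {"not paid"}, "not reliable": {"paid", "not paid"}}

for event in ({"paid"}, {"not paid"}):
    b = dempster_combine(probs_joe, gamma_joe, probs_bill, gamma_bill, event)
    print(event, round(b, 4))   # both come out to 0.21/0.51 = 0.4118
```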
45 You can suppress the Γs and describe Dempster's rule in terms of the belief functions. Joe: B1{paid} = 0.7, B1{not paid} = 0. Bill: B2{not paid} = 0.7, B2{paid} = 0. (Table: the 2 × 2 combination of Joe's and Bill's belief functions, with the 0.7 × 0.7 cell, where both are reliable, ruled out.) B(paid) = 0.21/0.51 ≈ 0.41, B(not paid) = 0.21/0.51 ≈ 0.41.
46 Dempster's rule is unnecessary. It is merely a composition of Cournot operations: formation of product measures, conditioning, transferring belief. But Dempster's rule is a unifying idea. Each Cournot operation is an example of Dempster combination. Forming a product measure is Dempster combination. Conditioning on A is Dempster combination with a belief function that gives belief one to A. Transferring belief is Dempster combination of (1) a belief function on X × Ω that gives probabilities to the cylinder sets {x} × Ω with (2) a belief function that gives probability one to {(x, ω) | ω ∈ Γ(x)}.
47 Parametric models are not the starting point! Mathematical statistics departs from probability by standing outside the protocol. Classical example: the error model. Parametric modeling. Dempster-Shafer modeling.
48 The perfect-information protocol for probability. K_0 = 1. FOR n = 1, 2, ..., N: Forecaster announces prices for various payoffs. Skeptic decides which payoffs to buy. Reality determines the payoffs. K_n := K_{n-1} + Skeptic's net gain or loss.
49 Mathematical statistics departs from probability by standing outside the protocol in various ways. Forecaster, Skeptic, and Reality see each other's moves, but we do not. Usually Skeptic is not really there. We can take this player's role if we see the other players' moves. Perhaps we do not see Forecaster's moves. We infer what we can about them from Reality's moves. Or perhaps it is our job to make the forecasts. Perhaps we see only a noisy or distorted version of Reality's moves. We infer what we can about them from Forecaster's moves.
50 Classical example: errors in measurement. A measuring instrument makes errors obeying some probability distribution. You do not see the errors e_1, ..., e_N. You only see measurements x_1, ..., x_N, where x_n = θ + e_n. How do you make inferences about θ?
51 Parametric modeling. The parametric model {P_θ} is a class of strategies for Forecaster. K_0 = 1. FOR n = 1, 2, ..., N: Forecaster gives prices p_n following a strategy P_θ. Skeptic makes purchases M_n following a strategy S_θ. Reality announces y_n. K(θ)_n := K(θ)_{n-1} + Skeptic's net gain or loss. Cournot's principle: Not all the K(θ) get very large. We see the y_n, and we know the strategies, but we do not know θ and do not see p_n and M_n. If K(θ)_N ≥ K for all θ, we reject the model. Otherwise, those θ for which K(θ)_N < K form a 1 - K^{-1} confidence interval for θ.
52 Errors in measurement as a parametric model. K_0 = 1. FOR n = 1, 2, ..., N: Forecaster announces (but not to us) the price θ. Skeptic announces M_n ∈ R. Reality announces y_n ∈ R. K_n := K_{n-1} + M_n(y_n - θ). Winner: Skeptic wins if K_n is never negative and either K_N ≥ K or |ȳ - θ| < ε, where ȳ := (1/N) Σ_{n=1}^N y_n. According to Probability and Finance (p. 125), if N ≥ KC²/ε² and Reality is constrained to obey y_n ∈ [θ - C, θ + C], then Skeptic has a winning strategy.
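Here is an illustrative simulation of the confidence-interval construction of the previous slide applied to this error model. It is not the specific winning strategy of Probability and Finance p. 125; the averaged multiplicative betting strategy, the threshold K = 20, the noise distribution, the grid, and the seed are my own choices for the sketch. For each candidate θ, Skeptic bets against the price θ; the θ whose capital stays below K form the 95% confidence set.

```python
import random

def capital_against_theta(ys, theta, lam=0.2):
    """Average of two nonnegative capital processes that bet +lam*K and -lam*K of
    current capital on each payoff y_n - theta (valid while lam*|y_n - theta| <= 1)."""
    k_plus = k_minus = 1.0
    for y in ys:
        k_plus  *= 1 + lam * (y - theta)
        k_minus *= 1 - lam * (y - theta)
        if 0.5 * (k_plus + k_minus) > 1e12:      # already far beyond any rejection threshold
            break
    return 0.5 * (k_plus + k_minus)

random.seed(3)
theta_true = 2.0
ys = [theta_true + random.uniform(-1, 1) for _ in range(2000)]   # bounded errors, C = 1

K = 20.0                                    # Cournot threshold: 1 - 1/K = 95% confidence
grid = [i / 100 for i in range(100, 301)]   # candidate theta from 1.00 to 3.00
survivors = [t for t in grid if capital_against_theta(ys, t) < K]
if survivors:
    print("95%% confidence interval for theta ~ [%.2f, %.2f]" % (min(survivors), max(survivors)))
else:
    print("model rejected: every theta on the grid was beaten")
```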
53 Dempster-Shafer modeling. We see the moves by Forecaster and Skeptic, but not those by Reality. K_0 = 1. FOR n = 1, 2, ..., N: Forecaster announces prices p_n. Skeptic makes purchases M_n. Reality announces (but not to us) x_n. K_n := K_{n-1} + Skeptic's net gain or loss. Cournot's principle: With probability 1 - K^{-1}, K_N < K. We see only y_n = ω(x_n) for some function ω. The mapping Γ(x_1, ..., x_N) = {ω | ω(x_n) = y_n, n = 1, ..., N} allows us to transfer the probabilities about x_1, ..., x_N to beliefs about ω.
54 Errors in measurement as a Dempster-Shafer model. K_0 = 1. FOR n = 1, 2, ..., N: Forecaster announces the standard Gaussian distribution. Skeptic chooses a function f_n of the payoff x_n. Reality announces (but not to us) x_n ∈ R. K_n := K_{n-1} + f_n(x_n) - E(f_n(x_n)). We see only y_n = ω + x_n for some ω ∈ R. Conditioning on the configuration x_1 - x̄, ..., x_N - x̄, we get probabilities for ω. Functions of the configuration can be used to test the model.