7 Posterior Probability and Bayes


Examples:

1. In a computer installation, 60% of programs are written in C++ and 40% in Java. 60% of the programs written in C++ compile on the first run and 80% of the Java programs compile on the first run.

(a) What is the overall proportion of programs that compile on the first run?
(b) If a randomly selected program compiles on the first run, what is the probability that it was written in C++?

2. In a certain company 50% of documents are written in WORD, 30% in LATEX and 20% in HTML. From past experience it is known that:

40% of the WORD documents exceed 10 pages
20% of the LATEX documents exceed 10 pages
20% of the HTML documents exceed 10 pages

(a) What is the overall proportion of documents containing more than 10 pages?
(b) A document is chosen at random and found to have more than 10 pages. What is the probability that it has been written in LATEX?

3. Enquiries to an on-line computer system arrive on 5 communication lines. The percentages of messages received through each line are:

Line          1    2    3    4    5
% received

From past experience, it is known that the percentages of messages exceeding 100 characters on the different lines are:

Line                         1    2    3    4    5
% exceeding 100 characters

(a) Calculate the overall proportion of messages exceeding 100 characters.
(b) If a message chosen at random is found to exceed 100 characters, what is the probability that it came through line 5?

4. A binary communication channel carries data as one of two sets of signals denoted by 0 and 1. Owing to noise, a transmitted 0 is sometimes received as a 1, and a transmitted 1 is sometimes received as a 0. For a given channel, it can be assumed that a transmitted 0 is correctly received with probability 0.95 and a transmitted 1 is correctly received with probability 0.75. Also, 60% of all messages are transmitted as a 0. If a signal is sent, determine the probability that:

(a) a 1 was received;
(b) a 0 was received;
(c) an error occurred;
(d) a 1 was transmitted given that a 1 was received;
(e) a 0 was transmitted given that a 0 was received.

Example 1

            Compiles on first run   Does not compile on first run   Total
C++                  72                        48                    120
Java                 64                        16                     80
Total               136                        64                    200

What is the probability that a program has been written in C++ when we know that it has compiled on the first run? If E is the event of compiling on the first run, we seek

P(C++ | E) = ?

Now

P(C++ ∩ E) = P(C++) P(E | C++)

and

P(C++ ∩ E) = P(E) P(C++ | E).

Since both expressions equal P(C++ ∩ E), it follows that

P(E) P(C++ | E) = P(C++) P(E | C++)

so that

P(C++ | E) = P(C++) P(E | C++) / P(E)

P(C++ | E) is the Posterior Probability of C++ after it has been found that the program has compiled on the first run. By the total probability rule,

P(E) = P(C++) P(E | C++) + P(J) P(E | J)
     = (120/200)(72/120) + (80/200)(64/80)
     = 0.36 + 0.32 = 0.68

Then

P(C++ | E) = (120/200)(72/120) / 0.68 = 0.36/0.68 ≈ 0.53

In R:

totalprob <- ((80/200)*(64/80)) + ((120/200)*(72/120))  # total probability P(E)
conditc <- (120/200)*(72/120)/totalprob                 # posterior probability of C++
conditc
[1] 0.5294118

Analogously,

P(J | E) = P(J) P(E | J) / [ P(C++) P(E | C++) + P(J) P(E | J) ]

In R:

totalprob <- ((80/200)*(64/80)) + ((120/200)*(72/120))  # total probability P(E)
conditjava <- (80/200)*(64/80)/totalprob                # posterior probability of Java
conditjava
[1] 0.4705882

Prior and Posterior Probabilities after a program has been found to have compiled on first run

            C++    Java
Prior       0.60   0.40
Posterior   0.53   0.47

Bayes Theorem (Bayes Rule)

Bayes rule for two events. Consider two events A and B. By the multiplication law,

P(A ∩ B) = P(A) P(B | A)   and   P(A ∩ B) = P(B) P(A | B).

Since both right-hand sides equal P(A ∩ B),

P(B) P(A | B) = P(A) P(B | A)

which implies that

P(A | B) = P(A) P(B | A) / P(B)

P(A | B) is the Posterior Probability of A after B has occurred.
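The two-event rule is easy to check numerically; a short Python sketch (the numbers below are illustrative, not from the notes):

```python
# Two-event Bayes rule: P(A|B) = P(A) * P(B|A) / P(B).
# Illustrative inputs: P(A) = 0.3, P(B|A) = 0.8, P(B|A') = 0.2.
p_a = 0.3
p_b_given_a = 0.8
p_b_given_not_a = 0.2

# Total probability: P(B) = P(A)P(B|A) + P(A')P(B|A')
p_b = p_a * p_b_given_a + (1 - p_a) * p_b_given_not_a

# Posterior probability of A after B has occurred
p_a_given_b = p_a * p_b_given_a / p_b
print(p_b, p_a_given_b)
```

The same pattern (total probability in the denominator, prior times likelihood in the numerator) recurs in every example below.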

Example 4

A binary communication channel carries data as one of two sets of signals denoted by 0 and 1. Owing to noise, a transmitted 0 is sometimes received as a 1, and a transmitted 1 is sometimes received as a 0. For a given channel, it can be assumed that a transmitted 0 is correctly received with probability 0.95 and a transmitted 1 is correctly received with probability 0.75. It is also known that 70% of all messages are transmitted as a 0. If a signal is sent, what is the probability that:

1. a 1 was transmitted given that a 1 was received;
2. a 0 was transmitted given that a 0 was received.

Solution. Let:

R0 be the event that a zero is received;
T0 be the event that a zero is transmitted, P(T0) = .7;
R1 be the event that a one is received;
T1 be the event that a one is transmitted, P(T1) = .3.

By Bayes rule,

P(T0 | R0) = P(T0) P(R0 | T0) / P(R0)

Now

P(T0) P(R0 | T0) = .7 × .95

and, since

R0 = (T0 ∩ R0) ∪ (T1 ∩ R0),

P(R0) = P(T0) P(R0 | T0) + P(T1) P(R0 | T1)

Therefore

P(R0) = (.7 × .95) + (.3 × .25) = .665 + .075 = .74

P(T0 | R0) = (.7 × .95)/.74 = .665/.74 ≈ .90

Similarly

P(T1 | R0) = (.3 × .25)/.74 = .075/.74 ≈ .10
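The channel arithmetic can be double-checked with a short Python sketch (variable names are mine; P(R0 | T1) = 0.25 is one minus the probability that a transmitted 1 is correctly received, consistent with the total of .74 in the solution):

```python
# Binary channel: posterior of the transmitted bit given that a 0 was received.
p_t0 = 0.70           # P(T0): prior probability a 0 is transmitted
p_r0_given_t0 = 0.95  # P(R0|T0): a transmitted 0 is correctly received
p_r0_given_t1 = 0.25  # P(R0|T1): a transmitted 1 is flipped to a 0

# Total probability of receiving a 0
p_r0 = p_t0 * p_r0_given_t0 + (1 - p_t0) * p_r0_given_t1

# Posterior probabilities by Bayes rule
p_t0_given_r0 = p_t0 * p_r0_given_t0 / p_r0
p_t1_given_r0 = 1 - p_t0_given_r0

print(round(p_r0, 2), round(p_t0_given_r0, 2), round(p_t1_given_r0, 2))
```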

Example

From past experience it is known that 50% of documents are written in WORD, 30% in LATEX and 20% in HTML. These are the prior probabilities:

P(Word) = .5;  P(Latex) = .3;  P(Html) = .2.

Also:

40% of the WORD documents exceed 10 pages
20% of the LATEX documents exceed 10 pages
20% of the HTML documents exceed 10 pages

A document chosen at random was found to exceed 10 pages. What is the probability that it has been written in Latex?

Let E be the event that a document, chosen at random, contains more than 10 pages. We seek P(Latex | E).

Now

P(Latex | E) = P(Latex) P(E | Latex) / P(E)

and, by the total probability rule,

P(E) = P(Word) P(E | Word) + P(Latex) P(E | Latex) + P(Html) P(E | Html)
     = (.5)(.4) + (.3)(.2) + (.2)(.2)
     = .20 + .06 + .04 = .3

So:

P(Latex | E) = P(Latex) P(E | Latex) / P(E) = (.3)(.2)/.3 = .20

Similarly

P(Word | E) = (.5)(.4)/.3 ≈ .67

and

P(Html | E) = (.2)(.2)/.3 ≈ .13

Prior and Posterior Probabilities after a document has been found to contain more than 10 pages

            Word   Latex   Html
Prior       .50    .30     .20
Posterior   .67    .20     .13
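These posteriors can be reproduced in a few lines of Python (a sketch; the dictionary layout is mine):

```python
# Document example: posterior probability of each format given
# that a randomly chosen document exceeds 10 pages (event E).
prior = {"Word": 0.5, "Latex": 0.3, "Html": 0.2}
p_e_given = {"Word": 0.4, "Latex": 0.2, "Html": 0.2}  # P(E | format)

# Total probability of E
p_e = sum(prior[k] * p_e_given[k] for k in prior)

# Posterior of each format by Bayes rule
posterior = {k: prior[k] * p_e_given[k] / p_e for k in prior}
for k in posterior:
    print(k, round(posterior[k], 2))
```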

General Bayes Rule

If a sample space can be partitioned into k mutually exclusive and exhaustive events A1, A2, A3, ..., Ak, so that

S = A1 ∪ A2 ∪ A3 ∪ ... ∪ Ak,

then for any event E:

P(E) = P(A1) P(E | A1) + P(A2) P(E | A2) + ... + P(Ak) P(E | Ak)

P(Ai | E) = P(Ai) P(E | Ai) / P(E)

Proof: For any i, 1 ≤ i ≤ k,

E ∩ Ai = Ai ∩ E

so, by the multiplication law,

P(E) P(Ai | E) = P(Ai) P(E | Ai)

and hence

P(Ai | E) = P(Ai) P(E | Ai) / P(E)

P(Ai | E) is called the POSTERIOR PROBABILITY.
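The general rule translates directly into a small reusable function; a Python sketch (the function name and signature are my own):

```python
def posterior(priors, likelihoods):
    """General Bayes rule over a partition A1..Ak.

    priors      -- list of P(Ai), summing to 1
    likelihoods -- list of P(E|Ai)
    Returns the list of posteriors P(Ai|E).
    """
    # Total probability: P(E) = sum_i P(Ai) P(E|Ai)
    p_e = sum(p * l for p, l in zip(priors, likelihoods))
    return [p * l / p_e for p, l in zip(priors, likelihoods)]

# Reproduces Example 1: C++ vs Java, E = compiles on first run
print(posterior([0.6, 0.4], [0.6, 0.8]))
```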

SOME APPLICATIONS OF BAYES

Game Show: Let's Make a Deal
Hardware Fault Diagnosis
Machine Learning
Machine Translation

Game Show

You are a contestant in a game which allows you to choose one out of three doors. One of these doors conceals a laptop computer while the other two are empty. When you have made your choice, the host of the show opens one of the remaining doors and shows you that it is one of the empty ones. You are now given the opportunity to change your door. Should you do so, or should you stick with your original choice? The question we address is: if you change, what are your chances of winning the laptop?

When you originally choose a door at random, there is a probability of 1/3 that the laptop is behind the door that you chose, and a probability of 2/3 that it is behind one of the other two doors. These probabilities do not change when a door is opened and revealed to be empty; therefore if you switch you will have a probability of 2/3 of winning the laptop.

Let D1, D2 and D3 be the events that the laptop is behind door 1, door 2 and door 3 respectively. We assume that

P(D1) = P(D2) = P(D3) = 1/3

and hence, writing Di' for the complement of Di,

P(D1') = P(D2') = P(D3') = 2/3.

Suppose you choose door 1.

Let E be the event that the host opens door 2 and reveals it to be empty. If you change after this, you will be choosing door 3, so we need to know P(D3 | E). From Bayes rule,

P(D3 | E) = P(D3) P(E | D3) / P(E)

Now, from total probability,

E = (D1 ∩ E) ∪ (D2 ∩ E) ∪ (D3 ∩ E)

P(E) = P(D1 ∩ E) + P(D2 ∩ E) + P(D3 ∩ E)
     = P(D1) P(E | D1) + P(D2) P(E | D2) + P(D3) P(E | D3)

So we can write

P(D3 | E) = P(D3) P(E | D3) / [ P(D1) P(E | D1) + P(D2) P(E | D2) + P(D3) P(E | D3) ]    (1)

We need to calculate P(E | D1), P(E | D2) and P(E | D3).

P(E | D1) is the probability that the host opens door 2 and reveals it to be empty, given that the laptop is behind door 1. When the laptop is behind door 1, the door selected by you, the host has two doors to select from. We assume that the host selects one of these at random, i.e. P(E | D1) = 1/2.

P(E | D2) is the probability that the host opens door 2 and reveals it to be empty, given that the laptop is behind door 2. This is an impossibility, so it has a probability of zero,

i.e. P(E | D2) = 0. Finally, P(E | D3) is the probability that the host opens door 2 and reveals it to be empty, given that the laptop is behind door 3. When the laptop is behind door 3, the host has just one way of revealing an empty door: he must open door 2. This is a certainty, so it has a probability of one, i.e. P(E | D3) = 1.

Putting these values into (1) we have

P(D3 | E) = (1/3 × 1) / [ (1/3 × 1/2) + (1/3 × 0) + (1/3 × 1) ] = (1/3)/(1/2) = 2/3

So changing would increase your probability of winning the laptop from 1/3 to 2/3: you would double your chance by changing. The prior probability of winning the laptop was 1/3, while the posterior probability that the laptop is behind the door you have not chosen is 2/3.
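The 2/3 answer can also be checked empirically; a small Monty Hall simulation in Python (a sketch, assuming the standard rules described above: the host always opens an empty door that the contestant did not pick):

```python
import random

# Simulate many rounds of the game, always switching doors.
random.seed(1)
trials = 100_000
wins_by_switching = 0

for _ in range(trials):
    laptop = random.randrange(3)   # door hiding the laptop
    choice = random.randrange(3)   # contestant's initial pick
    # Host opens an empty door that is neither the pick nor the laptop
    opened = next(d for d in range(3) if d != choice and d != laptop)
    # Switch to the remaining unopened door
    switched = next(d for d in range(3) if d != choice and d != opened)
    wins_by_switching += (switched == laptop)

print(wins_by_switching / trials)  # close to 2/3
```

Switching wins exactly when the initial pick was wrong, which happens with probability 2/3, so the simulated frequency should settle near 0.667.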

Hardware Fault Diagnosis

Printer failures are associated with three types of problems: hardware, software and electrical connections. A printer manufacturer obtained the following probabilities from a database of test results:

P(H) = 0.1,  P(S) = 0.6,  P(E) = 0.3

where H, S and E denote hardware, software and electrical problems respectively. These are the prior probabilities.

It is also known that the probability of a printer failure (F):

given a hardware problem is 0.9, P(F | H) = 0.9;
given a software problem is 0.2, P(F | S) = 0.2;
given an electrical problem is 0.5, P(F | E) = 0.5.

These are the conditional probabilities.

If a customer reports a printer failure, what is the most likely cause of the problem? We need the posterior probabilities P(H | F), P(S | F) and P(E | F): the likelihood that, given that a printer fault has been reported, it is due to a hardware, software or electrical fault.

We calculate the posterior probabilities P(H | F), P(S | F) and P(E | F) using Bayes rule.

First calculate the total probability of a failure occurring. Since

F = (H ∩ F) ∪ (S ∩ F) ∪ (E ∩ F),

P(F) = P(H) P(F | H) + P(S) P(F | S) + P(E) P(F | E)
     = (.1)(.9) + (.6)(.2) + (.3)(.5)
     = .09 + .12 + .15 = .36

Then the posterior probabilities are:

P(H | F) = P(H) P(F | H) / P(F) = .09/.36 = .250
P(S | F) = P(S) P(F | S) / P(F) = .12/.36 ≈ .333
P(E | F) = P(E) P(F | E) / P(F) = .15/.36 ≈ .417

Because P(E | F) is the largest, the most likely cause of the problem is electrical. A help desk or website dialog to diagnose the problem should check into this problem first.
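The diagnosis step amounts to picking the cause with the largest posterior; a Python sketch (the names and layout are mine):

```python
# Printer fault diagnosis: rank causes by posterior probability
# given a reported failure F.
prior = {"hardware": 0.1, "software": 0.6, "electrical": 0.3}
p_f_given = {"hardware": 0.9, "software": 0.2, "electrical": 0.5}

p_f = sum(prior[c] * p_f_given[c] for c in prior)  # total P(F)
post = {c: prior[c] * p_f_given[c] / p_f for c in prior}

# Most likely cause = argmax of the posterior
most_likely = max(post, key=post.get)
print(most_likely, round(post[most_likely], 3))
```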

Machine Learning

Bayes theorem is sometimes used to improve accuracy in supervised learning. It is used in the classification of items where the system has already learnt the relevant probabilities. We need to classify a new example f into the correct class. Suppose there are only two classes, y = 1 and y = 2, into which we can classify new values of f. By Bayes rule, we can write

P(y = 1 | f) = P(y = 1 ∩ f) / P(f) = P(y = 1) P(f | y = 1) / P(f)    (2)

P(y = 2 | f) = P(y = 2 ∩ f) / P(f) = P(y = 2) P(f | y = 2) / P(f)    (3)

Dividing (2) by (3) we get

P(y = 1 | f) / P(y = 2 | f) = P(y = 1) P(f | y = 1) / [ P(y = 2) P(f | y = 2) ]

Our decision is to classify a new example into class 1 if

P(y = 1 | f) / P(y = 2 | f) > 1

or, equivalently, if

P(y = 1) P(f | y = 1) / [ P(y = 2) P(f | y = 2) ] > 1    (4)

That is, f goes into class 1 if

P(y = 1) P(f | y = 1) > P(y = 2) P(f | y = 2)

and f goes into class 2 if

P(y = 1) P(f | y = 1) < P(y = 2) P(f | y = 2).

The prior probabilities of being in either of the two classes are assumed to be already learnt:

P(y = 1) = .4,  P(y = 2) = .6

Also the conditional probabilities for the new example f are already learnt:

P(f | y = 1) = .5,  P(f | y = 2) = .3

Into what class should you classify the new example? We have

P(y = 1) P(f | y = 1) = .4 × .5 = .20

and

P(y = 2) P(f | y = 2) = .6 × .3 = .18

and since

P(y = 1) P(f | y = 1) > P(y = 2) P(f | y = 2),

the new example goes into class 1.
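The decision rule compares the two products directly; a Python sketch of the classifier step (the function name and tie-breaking choice are mine):

```python
def classify(prior1, lik1, prior2, lik2):
    """Assign a new example to class 1 or class 2 by comparing
    P(y=1)P(f|y=1) with P(y=2)P(f|y=2); ties go to class 1."""
    return 1 if prior1 * lik1 >= prior2 * lik2 else 2

# Numbers from the example: priors .4/.6, likelihoods .5/.3
print(classify(0.4, 0.5, 0.6, 0.3))  # class 1, since .20 > .18
```

Note that P(f) never needs to be computed: it cancels in the ratio (4), which is why only the prior-times-likelihood products are compared.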

Machine Translation

A string of English words e can be translated into a string of French words f in many different ways. Often, knowing the broader context in which e occurs may narrow the set of acceptable French translations but, even so, many acceptable translations remain. Bayes rule can be used to make a choice between them.

To every pair (e, f) a number P(f | e) is assigned, which is interpreted as the probability that a translator, when given e to translate, will return f as the translation. Given a string of French words f, the job of the translation system is to find the string e that the native speaker had in mind when f was produced. We seek P(e | f). From Bayes rule,

P(e | f) = P(e) P(f | e) / P(f)

We seek the ê that maximises P(e | f), i.e. we choose ê so that

P(ê | f) = max over e of  P(e) P(f | e) / P(f)

which is equivalent to maximising P(e) P(f | e), since P(f) is constant regardless of e. This is known as the Fundamental Equation of Machine Translation, usually written as

ê = argmax_e P(e) P(f | e)

As a simple example, suppose there are 3 possibilities for e:

P(e1) = 0.2,  P(e2) = 0.5,  P(e3) = 0.3.

One of these has been translated into f. The passage f has the following conditional probabilities:

P(f | e1) = 0.4,  P(f | e2) = 0.2,  P(f | e3) = 0.4.

We can use Bayes theorem to decide which is the most likely e:

P(e1) P(f | e1) = .2 × .4 = .08
P(e2) P(f | e2) = .5 × .2 = .10
P(e3) P(f | e3) = .3 × .4 = .12

The best translation, in this case, is e3, since it has the highest posterior probability.
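The fundamental equation is just an argmax over the candidate translations; a Python sketch (the candidate labels and dictionary layout are mine):

```python
# Fundamental Equation of Machine Translation:
# e_hat = argmax_e P(e) * P(f|e). The denominator P(f) is the
# same for every candidate, so it can be ignored.
candidates = ["e1", "e2", "e3"]
prior = {"e1": 0.2, "e2": 0.5, "e3": 0.3}        # P(e)
likelihood = {"e1": 0.4, "e2": 0.2, "e3": 0.4}   # P(f|e)

e_hat = max(candidates, key=lambda e: prior[e] * likelihood[e])
print(e_hat)  # e3, since .3 * .4 = .12 is the largest product
```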


4. Conditional Probability P( ) CSE 312 Winter 2011 W.L. Ruzzo

4. Conditional Probability P( ) CSE 312 Winter 2011 W.L. Ruzzo conditional probability Conditional probability of E given F: probability that E occurs given that F has already occurred. Conditioning on

Ten top problems network techs encounter

Ten top problems network techs encounter Networks today have evolved quickly to include business critical applications and services, relied on heavily by users in the organization. In this environment,

Updating the International Standard Classification of Occupations (ISCO) Draft ISCO-08 Group Definitions: Occupations in ICT

InternationalLabourOrganization OrganisationinternationaleduTravail OrganizaciónInternacionaldelTrabajo Updating the International Standard Classification of Occupations (ISCO) Draft ISCO-08 Group Definitions:

Theory of Computation Prof. Kamala Krithivasan Department of Computer Science and Engineering Indian Institute of Technology, Madras

Theory of Computation Prof. Kamala Krithivasan Department of Computer Science and Engineering Indian Institute of Technology, Madras Lecture No. # 31 Recursive Sets, Recursively Innumerable Sets, Encoding

Vehicle Tracking System Robust to Changes in Environmental Conditions

INORMATION & COMMUNICATIONS Vehicle Tracking System Robust to Changes in Environmental Conditions Yasuo OGIUCHI*, Masakatsu HIGASHIKUBO, Kenji NISHIDA and Takio KURITA Driving Safety Support Systems (DSSS)

Bayesian Tutorial (Sheet Updated 20 March)

Bayesian Tutorial (Sheet Updated 20 March) Practice Questions (for discussing in Class) Week starting 21 March 2016 1. What is the probability that the total of two dice will be greater than 8, given that

arxiv:1112.0829v1 [math.pr] 5 Dec 2011

How Not to Win a Million Dollars: A Counterexample to a Conjecture of L. Breiman Thomas P. Hayes arxiv:1112.0829v1 [math.pr] 5 Dec 2011 Abstract Consider a gambling game in which we are allowed to repeatedly

TCOM 370 NOTES 99-4 BANDWIDTH, FREQUENCY RESPONSE, AND CAPACITY OF COMMUNICATION LINKS

TCOM 370 NOTES 99-4 BANDWIDTH, FREQUENCY RESPONSE, AND CAPACITY OF COMMUNICATION LINKS 1. Bandwidth: The bandwidth of a communication link, or in general any system, was loosely defined as the width of

ANALOG VS DIGITAL. Copyright 1998, Professor John T.Gorgone

ANALOG VS DIGITAL 1 BASICS OF DATA COMMUNICATIONS Data Transport System Analog Data Digital Data The transport of data through a telecommunications network can be classified into two overall transport

Alternative proof for claim in [1]

Alternative proof for claim in [1] Ritesh Kolte and Ayfer Özgür Aydin The problem addressed in [1] is described in Section 1 and the solution is given in Section. In the proof in [1], it seems that obtaining

A crash course in probability and Naïve Bayes classification

Probability theory A crash course in probability and Naïve Bayes classification Chapter 9 Random variable: a variable whose possible values are numerical outcomes of a random phenomenon. s: A person s

Introduction to Automata Theory. Reading: Chapter 1

Introduction to Automata Theory Reading: Chapter 1 1 What is Automata Theory? Study of abstract computing devices, or machines Automaton = an abstract computing device Note: A device need not even be a

1 PERSONAL COMPUTERS

PERSONAL COMPUTERS 1 2 Personal computer a desktop computer a laptop a tablet PC or a handheld PC Software applications for personal computers include word processing spreadsheets databases web browsers

CHAPTER 6: Continuous Uniform Distribution: 6.1. Definition: The density function of the continuous random variable X on the interval [A, B] is.

Some Continuous Probability Distributions CHAPTER 6: Continuous Uniform Distribution: 6. Definition: The density function of the continuous random variable X on the interval [A, B] is B A A x B f(x; A,

Ch. 13.3: More about Probability Complementary Probabilities Given any event, E, of some sample space, U, of a random experiment, we can always talk about the complement, E, of that event: this is the

Finite and discrete probability distributions

8 Finite and discrete probability distributions To understand the algorithmic aspects of number theory and algebra, and applications such as cryptography, a firm grasp of the basics of probability theory

Part III: Machine Learning. CS 188: Artificial Intelligence. Machine Learning This Set of Slides. Parameter Estimation. Estimation: Smoothing

CS 188: Artificial Intelligence Lecture 20: Dynamic Bayes Nets, Naïve Bayes Pieter Abbeel UC Berkeley Slides adapted from Dan Klein. Part III: Machine Learning Up until now: how to reason in a model and

Math 202-0 Quizzes Winter 2009

Quiz : Basic Probability Ten Scrabble tiles are placed in a bag Four of the tiles have the letter printed on them, and there are two tiles each with the letters B, C and D on them (a) Suppose one tile

The Kelly Betting System for Favorable Games.

The Kelly Betting System for Favorable Games. Thomas Ferguson, Statistics Department, UCLA A Simple Example. Suppose that each day you are offered a gamble with probability 2/3 of winning and probability

A Problem With The Rational Numbers

Solvability of Equations Solvability of Equations 1. In fields, linear equations ax + b = 0 are solvable. Solvability of Equations 1. In fields, linear equations ax + b = 0 are solvable. 2. Quadratic equations

1 Portfolio Selection

COS 5: Theoretical Machine Learning Lecturer: Rob Schapire Lecture # Scribe: Nadia Heninger April 8, 008 Portfolio Selection Last time we discussed our model of the stock market N stocks start on day with
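The calculations asked for in Example 1 (total probability and Bayes' theorem) can be sketched numerically. This is a minimal illustration using the percentages stated in the problem; the dictionary names are chosen for readability and are not part of the original text.

```python
# Example 1: programs written in C++ vs Java, and first-run compilation.
priors = {"C++": 0.60, "Java": 0.40}       # P(language)
compiles = {"C++": 0.60, "Java": 0.80}     # P(compiles on first run | language)

# (a) Law of total probability:
#     P(compiles) = sum over languages of P(language) * P(compiles | language)
p_compiles = sum(priors[lang] * compiles[lang] for lang in priors)
print(p_compiles)  # 0.68

# (b) Bayes' theorem:
#     P(C++ | compiles) = P(C++) * P(compiles | C++) / P(compiles)
p_cpp_given_compiles = priors["C++"] * compiles["C++"] / p_compiles
print(round(p_cpp_given_compiles, 4))  # 0.5294
```

The same two-step pattern (total probability for part (a), then Bayes' theorem for part (b)) applies to the document, communication-line, and binary-channel exercises above, with more categories in the prior.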