An Introduction to Markov Decision Processes
1 An Introduction to Markov Decision Processes
Bob Givan, Purdue University
Ron Parr, Duke University
2 Outline
Markov Decision Processes defined (Bob)
  - Objective functions
  - Policies
Finding Optimal Solutions (Ron)
  - Dynamic programming
  - Linear programming
Refinements to the basic model (Bob)
  - Partial observability
  - Factored representations
3 Stochastic Automata with Utilities
A Markov Decision Process (MDP) model contains:
  - A set of possible world states S
  - A set of possible actions A
  - A real-valued reward function R(s, a)
  - A description T of each action's effects in each state
We assume the Markov property: the effects of an action taken in a state depend only on that state and not on the prior history.
5 Representing Actions
Deterministic actions: T : S × A → S. For each state and action we specify a new state.
Stochastic actions: T : S × A → Prob(S). For each state and action we specify a probability distribution over next states, representing the distribution P(s′ | s, a).
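To make the T : S × A → Prob(S) representation concrete, here is a minimal sketch (not from the tutorial) of one way to encode it in Python; the states 's0', 's1' and the actions 'charge' and 'move' are invented for illustration.

```python
# Hypothetical two-state, two-action MDP transition model: T maps a
# (state, action) pair to a probability distribution over next states.
T = {
    ('s0', 'charge'): {'s0': 0.9, 's1': 0.1},   # P(s' | s0, charge)
    ('s0', 'move'):   {'s0': 0.2, 's1': 0.8},
    ('s1', 'charge'): {'s1': 1.0},              # deterministic: all mass on one next state
    ('s1', 'move'):   {'s0': 0.5, 's1': 0.5},
}

# Each distribution over next states should sum to 1.
assert all(abs(sum(dist.values()) - 1.0) < 1e-9 for dist in T.values())
```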
7 Representing Solutions
A policy π is a mapping from S to A.
8 Following a Policy
Following a policy π:
  1. Determine the current state s
  2. Execute action π(s)
  3. Go to step 1
This assumes full observability: the new state resulting from executing an action will be known to the system.
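A minimal sketch of this loop, reusing the illustrative transition dictionary T from the previous sketch; the policy below is an arbitrary assumption, not part of the tutorial.

```python
import random

# Policy: one action per state (full observability: the agent always
# knows which state it is in).
pi = {'s0': 'move', 's1': 'charge'}

def step(state, action):
    """Sample a next state from the distribution T[(state, action)]."""
    dist = T[(state, action)]
    return random.choices(list(dist), weights=list(dist.values()))[0]

state = 's0'
for _ in range(5):          # 1. observe state, 2. execute pi(state), 3. repeat
    state = step(state, pi[state])
```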
9 Evaluating a Policy
How good is a policy π in a state s?
For deterministic actions, we could simply total the rewards obtained, but the result may be infinite.
For stochastic actions, we instead use the expected total reward obtained, which again typically yields an infinite value.
How do we compare policies of infinite value?
10 Objective Functions
An objective function maps infinite sequences of rewards to single real numbers (representing utility).
Options:
  1. Set a finite horizon and just total the reward
  2. Discounting to prefer earlier rewards
  3. Average reward rate in the limit
Discounting is perhaps the most analytically tractable and most widely studied approach.
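The sketch below evaluates the three options on an invented finite prefix of a reward sequence; the discounted and average-rate objectives are of course defined over infinite sequences, so this is only illustrative.

```python
rewards = [1.0, 0.0, 2.0, 1.0, 1.0]   # r_0, r_1, ... (illustrative prefix)
gamma = 0.9

finite_horizon_total = sum(rewards)                              # option 1: total over a finite horizon
discounted = sum(gamma**n * r for n, r in enumerate(rewards))    # option 2: discounted sum
average_rate = sum(rewards) / len(rewards)                       # option 3: average reward rate (in the limit)
```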
11 Discounting
A reward n steps away is discounted by γⁿ, for discount rate 0 < γ < 1.
  - models mortality: you may die at any moment
  - models preference for shorter solutions
  - a smoothed-out version of limited-horizon lookahead
We use cumulative discounted reward as our objective. If every per-step reward is at most M, the value is bounded: M + γM + γ²M + ... = M / (1 − γ).
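A quick numerical check of the geometric bound above, assuming every per-step reward is at most M (values invented):

```python
M, gamma = 5.0, 0.9

# Truncated geometric series M + gamma*M + gamma^2*M + ... approaches M / (1 - gamma).
approx = sum(M * gamma**n for n in range(10_000))
assert abs(approx - M / (1 - gamma)) < 1e-6
```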
12 Value Functions
A value function V^π : S → ℝ represents the expected objective value obtained by following policy π from each state in S.
Value functions partially order the policies, but at least one optimal policy exists, and all optimal policies have the same value function, V*.
13 Bellman Equations
Bellman equations relate the value function to itself via the problem dynamics. For the discounted objective function:
  V^π(s) = R(s, π(s)) + γ Σ_{s′ ∈ S} T(s, π(s), s′) V^π(s′)
  V*(s) = max_{a ∈ A} [ R(s, a) + γ Σ_{s′ ∈ S} T(s, a, s′) V*(s′) ]
In each case, there is one equation per state in S.
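A minimal sketch of solving these equations by successive approximation (iterative policy evaluation for V^π and value iteration for V*), continuing the small hypothetical MDP from the earlier sketches; T and pi are the dictionaries defined there, and the reward values below are invented.

```python
S = ['s0', 's1']
A = ['charge', 'move']
R = {('s0', 'charge'): 0.0, ('s0', 'move'): 1.0,
     ('s1', 'charge'): 2.0, ('s1', 'move'): 0.0}
gamma = 0.9

def evaluate_policy(pi, iters=1000):
    """Repeatedly apply V(s) <- R(s, pi(s)) + gamma * sum_s' T(s, pi(s), s') V(s')."""
    V = {s: 0.0 for s in S}
    for _ in range(iters):
        V = {s: R[(s, pi[s])]
                + gamma * sum(p * V[s2] for s2, p in T[(s, pi[s])].items())
             for s in S}
    return V

def value_iteration(iters=1000):
    """Repeatedly apply V(s) <- max_a [ R(s, a) + gamma * sum_s' T(s, a, s') V(s') ]."""
    V = {s: 0.0 for s in S}
    for _ in range(iters):
        V = {s: max(R[(s, a)]
                    + gamma * sum(p * V[s2] for s2, p in T[(s, a)].items())
                    for a in A)
             for s in S}
    return V
```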
14 Finite-horizon Bellman Equations
Finite-horizon values at adjacent horizons are related by the action dynamics:
  V^{π,0}(s) = R(s, π(s))
  V^{π,n}(s) = R(s, π(s)) + γ Σ_{s′ ∈ S} T(s, π(s), s′) V^{π,n−1}(s′)
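The same recursion, sketched in the running example (again reusing S, R, T, pi, and gamma from the earlier sketches):

```python
def finite_horizon_values(pi, n):
    """Compute V^{pi,n} by backing up from V^{pi,0}(s) = R(s, pi(s))."""
    V = {s: R[(s, pi[s])] for s in S}                     # horizon 0
    for _ in range(n):                                    # horizon k from horizon k-1
        V = {s: R[(s, pi[s])]
                + gamma * sum(p * V[s2] for s2, p in T[(s, pi[s])].items())
             for s in S}
    return V
```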
15 Relation to Model Checking
Some thoughts on the relationship:
  - MDP solution focuses critically on expected value
  - Contrast safety properties, which focus on the worst case
  - This contrast allows MDP methods to exploit sampling and approximation more aggressively
16 At this point, Ron Parr spoke on solution methods for about half an hour, and then I continued.
17 Large State Spaces
In AI problems, the state space is typically astronomically large:
  - described implicitly, not enumerated
  - decomposed into factors, or aspects of state
Issues raised:
  - How can we represent reward and action behaviors in such MDPs?
  - How can we find solutions in such MDPs?
18 A Factored MDP Representation
State space S: assignments to state variables On-Mars?, Need-Power?, Daytime?, etc.
Partitions: each block is a DNF formula (or BDD, etc.)
  - Block 1: not On-Mars?
  - Block 2: On-Mars? and Need-Power?
  - Block 3: On-Mars? and not Need-Power?
Reward function R: a labelled state-space partition
  - Block 1: not On-Mars? ... Reward = 0
  - Block 2: On-Mars? and Need-Power? ... Reward = 4
  - Block 3: On-Mars? and not Need-Power? ... Reward = 5
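One way to realize such a labelled partition in code is as an ordered list of (formula, reward) pairs, where each formula is a predicate over an assignment to the state variables; this is only a sketch of the idea, with blocks and rewards copied from the slide.

```python
# A state is an assignment to the Boolean state variables.
reward_partition = [
    (lambda x: not x['On-Mars?'],                       0),   # Block 1
    (lambda x: x['On-Mars?'] and x['Need-Power?'],      4),   # Block 2
    (lambda x: x['On-Mars?'] and not x['Need-Power?'],  5),   # Block 3
]

def reward(x):
    """Return the label of the block whose formula the state satisfies."""
    for formula, r in reward_partition:
        if formula(x):
            return r

assert reward({'On-Mars?': True, 'Need-Power?': False, 'Daytime?': True}) == 5
```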
19 Factored Representations of Actions
Assume: actions affect state variables independently,¹ e.g.
  Pr(Need-Power?′ ∧ On-Mars?′ | x, a) = Pr(Need-Power?′ | x, a) · Pr(On-Mars?′ | x, a)
Represent the effect on each state variable as a labelled partition. Effects of action Charge-Battery on variable Need-Power?, Pr(Need-Power?′ | Block n):
  - Block 1: not On-Mars?
  - Block 2: On-Mars? and Need-Power?
  - Block 3: On-Mars? and not Need-Power?
¹ This assumption can be relaxed.
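Under the independence assumption, the probability of any joint next assignment is a product of per-variable probabilities, each given by a labelled partition over the current state. The sketch below illustrates this for the Charge-Battery action; the numerical probabilities are invented, since the slide does not give them.

```python
def pr_need_power_after_charge(x):
    """Pr(Need-Power?' = True | x, Charge-Battery): one value per partition block (values invented)."""
    if not x['On-Mars?']:
        return 0.9                     # Block 1
    if x['Need-Power?']:
        return 0.1                     # Block 2: charging usually clears the need for power
    return 0.05                        # Block 3

def pr_on_mars_after_charge(x):
    """Pr(On-Mars?' = True | x, Charge-Battery): charging does not move the rover."""
    return 1.0 if x['On-Mars?'] else 0.0

def joint_prob(x, need_power_next, on_mars_next):
    """Pr(Need-Power?' ^ On-Mars?' | x, Charge-Battery) as a product of the two factors."""
    p1 = pr_need_power_after_charge(x)
    p2 = pr_on_mars_after_charge(x)
    return (p1 if need_power_next else 1 - p1) * (p2 if on_mars_next else 1 - p2)
```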
20 Representing Blocks
  - Identifying irrelevant state variables
  - Decision trees
  - DNF formulas
  - Binary/Algebraic Decision Diagrams
21 Partial Observability
The system state cannot always be determined: a Partially Observable MDP (POMDP).
  - Action outcomes are not fully observable
  - Add a set of observations O to the model
  - Add an observation distribution U(s, o) for each state
  - Add an initial state distribution I
Key notion: the belief state, a distribution over system states representing "where I think I am".
22 POMDP to MDP Conversion
The belief state Pr(s) can be updated to Pr(s′ | o) using Bayes' rule:
  Pr(s′ | s, o) = Pr(o | s, s′) Pr(s′ | s) / Pr(o | s) = U(s′, o) T(s, a, s′), normalized
  Pr(s′ | o) = Σ_s Pr(s′ | s, o) Pr(s)
A POMDP is Markovian and fully observable relative to the belief state, so a POMDP can be treated as a continuous-state MDP.
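A minimal sketch of this belief update: the new belief over s′ is proportional to U(s′, o) times the probability of reaching s′ from the current belief, then normalized. T is assumed to be the (state, action) → distribution encoding used earlier, and U is assumed to map (state, observation) pairs to probabilities.

```python
def update_belief(belief, action, obs, T, U, states):
    """Bayes-rule belief update: b'(s') proportional to U(s', obs) * sum_s T(s, action, s') * b(s)."""
    unnormalized = {
        s2: U[(s2, obs)] * sum(T[(s, action)].get(s2, 0.0) * belief[s] for s in states)
        for s2 in states
    }
    z = sum(unnormalized.values())          # Pr(obs | belief, action), the normalizing constant
    return {s2: p / z for s2, p in unnormalized.items()}
```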
23 Belief State Approximation
Problem: when the MDP state space is astronomical, belief states cannot be explicitly represented.
Consequence: the MDP conversion of a POMDP is impractical.
Solution: represent the belief state approximately
  - typically exploiting the factored state representation
  - typically exploiting (near) conditional-independence properties of the belief-state factors
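One common approximation (a sketch of the idea, not a method prescribed by the tutorial) keeps only a marginal probability for each state variable and treats the variables as independent, so the belief over joint assignments factors into a product of marginals:

```python
# Hypothetical marginals for the factored belief; the joint belief over
# all 2^n variable assignments is never represented explicitly.
marginals = {'On-Mars?': 0.95, 'Need-Power?': 0.30, 'Daytime?': 0.60}

def approx_belief(x):
    """Approximate Pr(x) as a product of per-variable marginal probabilities."""
    p = 1.0
    for var, pr_true in marginals.items():
        p *= pr_true if x[var] else 1.0 - pr_true
    return p
```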