Average System Performance Evaluation using Markov Chain




Designing High-Performance Systems: Statistical Timing Analysis and Optimization
Average System Performance Evaluation using Markov Chain
Weiyun Lu
Supervisor: Martin Radetzki
Summer Semester 2006, Stuttgart University

Overview (slide 2)
-- Scope & Motivation
-- Example of a Labelled Transition System
-- Markov Chain
-- Performance Evaluation with Markov Chain
-- Summary

Scope & Motivation (slide 3)
-- System level: complexity of systems, high impact of system-level decisions
-- System modelling: property checking & performance evaluation
-- Performance modelling and evaluation [1]
-- Example: a bus-based or switch-based system for communication

Overview (slide 4)
-- Scope & Motivation
-- Example of a Labelled Transition System
-- Markov Chain
-- Performance Evaluation with Markov Chain
-- Summary

SHESim Window of a Protocol Stack: Lossy Channel Example (slide 5)
-- SHE: Software/Hardware Engineering
-- A model-driven, object-oriented framework for complex system specification
-- UML (class diagram, sequence diagram, ...)
POOSL definition of a Lossy Channel
-- POOSL: Parallel Object-Oriented Specification Language
-- Formal: models are executable [2]

    transferframes()()
      f: Frame
      in?frame(f);
      if errordistribution yieldssuccess then
        delay(transmissiontime);
        out!frame(f);
        transferframe()()
      else
        transferframe()()
      fi.

Transition System of Lossy Channel (slide 6)
The transferframes()() method of the previous slide induces a labelled transition system.
-- Bernoulli, Uniform, DiscreteUniform, Exponential and Normal distributions are currently available in POOSL.
-- Labelled Transition System (S, Λ, →): S: states; Λ: labels; →: transitions
[Figure: LTS of the lossy channel over states S1-S4, with labels in, τ, out and a delay of 3 time units; the τ step branches with probabilities 0.9 and 0.1]

Another Representation of Lossy Channel (slide 7)
Step 1: Assume maximal progress. Environment modelling:
-- 'open' system (sub-modules): the environment is always willing to participate
-- 'closed' system (complete system): no possibility to interact with the environment
Step 2: Resolve non-determinism. Note the difference between non-determinism and a probabilistic transition.
Step 3: Shift action and time information into the states.
-- To simplify the transition labels, the occurrence of actions and the passage of time can be shifted into the states.
-- A state then becomes a pair (S, e), where e is an action or a time passage.
[Figure: the lossy-channel LTS before and after the transformation, with state pairs (S1,-), (S2,in), (S3,τ), (S4,3), (S1,out), (S1,τ) and branch probabilities 0.9 / 0.1]

Resolving Non-Determinism (slide 8)
Step 2: Resolving non-determinism

    ...                      (A)
    sel
      skip; ...              (B)
    or
      if (ErrorDistribution yieldssuccess) then
        ...                  (C)
      else
        ...                  (D)
      fi; ...
    or
      Method()()             (E)
    les; ...

An external scheduler is needed!
-- SHESim uses a uniform distribution for POOSL models
-- Otherwise, transform to a Markov Decision Process
[Figure: the non-deterministic choice among B, C/D and E, first labelled τ / 0.9 / 0.1, then resolved to probabilities B: 1/3, E: 1/3, C: 3/10, D: 1/30]
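As a quick cross-check of the slide's resolved probabilities, a minimal sketch in Python, assuming SHESim's uniform scheduler over the three sel alternatives and the 0.9/0.1 error distribution of the lossy channel (the variable names are illustrative, not POOSL):

    # Uniform scheduler: each of the three sel alternatives (B, the
    # if-statement, E) is chosen with probability 1/3; the if-statement
    # then splits further according to the error distribution.
    p_alt = 1.0 / 3.0
    p_success = 0.9
    p_B = p_alt                    # skip branch            -> 1/3
    p_E = p_alt                    # Method()() branch      -> 1/3
    p_C = p_alt * p_success        # then-branch: 1/3 * 0.9 -> 3/10
    p_D = p_alt * (1 - p_success)  # else-branch: 1/3 * 0.1 -> 1/30
    print(p_B, p_E, p_C, p_D)      # 0.333... 0.333... 0.3 0.0333...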

Overview (slide 9)
-- Scope & Motivation
-- Example of a Labelled Transition System
-- Markov Chain
-- Performance Evaluation with Markov Chain
-- Summary

Markov Chain (slide 10)
Discrete stochastic process:
-- a sequence of random variables {X_i, i ≥ 0}
-- 'i' is called the time epoch if it ranges over the time domain
Markov property: transitions depend only on the current state.
Markov chain & representations: time-homogeneous (stationary) Markov chain.

              ( P_A,A  P_A,B  P_A,C  P_A,D  P_A,E )
              ( P_B,A  P_B,B  P_B,C  P_B,D  P_B,E )
      P_row = ( P_C,A  P_C,B  P_C,C  P_C,D  P_C,E ),   P_col = P_row^T
              ( P_D,A  P_D,B  P_D,C  P_D,D  P_D,E )
              ( P_E,A  P_E,B  P_E,C  P_E,D  P_E,E )

For the lossy channel, with states A-E standing for the state pairs of slide 7 ((S1,-), (S1,out), (S4,3), (S1,τ), (S2,in), (S3,τ)):

              ( 0  0.9  0  0  0.1 )
              ( 0  0    1  0  0   )
      P_row = ( 0  0    0  1  0   )
              ( 1  0    0  0  0   )
              ( 1  0    0  0  0   )

[Figure: the state graph of the chain with letters A-E attached to the state pairs]
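A minimal sketch of this transition matrix in Python/NumPy (the entries are those of the slide; the code itself is illustrative):

    import numpy as np

    # Row-stochastic transition matrix of the lossy-channel chain,
    # state order (A, B, C, D, E) as on the slide.
    P_row = np.array([
        [0.0, 0.9, 0.0, 0.0, 0.1],  # A -> B (success) or A -> E (loss)
        [0.0, 0.0, 1.0, 0.0, 0.0],  # B -> C
        [0.0, 0.0, 0.0, 1.0, 0.0],  # C -> D
        [1.0, 0.0, 0.0, 0.0, 0.0],  # D -> A
        [1.0, 0.0, 0.0, 0.0, 0.0],  # E -> A
    ])
    P_col = P_row.T                           # the column convention of the slide
    assert np.allclose(P_row.sum(axis=1), 1)  # every row must sum to 1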

Markov Chain (slide 11)
We are interested in the long-run average time fraction the Markov chain spends in each state.
Define the initial distribution as u_0 = (u_A,0  u_B,0  u_C,0  u_D,0  u_E,0).
Define the probability of staying in each state at time epoch i as u_i = (u_A,i  u_B,i  u_C,i  u_D,i  u_E,i).

    u_A,1 = u_A,0 P_A,A + u_B,0 P_B,A + u_C,0 P_C,A + u_D,0 P_D,A + u_E,0 P_E,A   ← conditional probability
    u_1 = u_0 P_row
    u_{i+1} = u_i P_row    ← generalize
    u_n = u_0 P_row^n      ← stationary

Markov Chain (slide 12)
Computation for the lossy-channel example, starting from u_0 = (1 0 0 0 0) with P_row as above:

    u_1 = (0  0.9  0  0  0.1)
    u_2 = (0.1  0  0.9  0  0)
    ...
    u_50   = (0.4923  0.0     0.5077  0.0     0.0)
    u_51   = (0.0     0.4712  0.0     0.4359  0.0564)
    u_100  = (0.5288  0.0     0.4712  0.0     0.0)
    u_101  = (0.0     0.4712  0.0     0.4764  0.0526)
    u_1000 = (0.5263  0.0     0.4737  0.0     0.0)
    u_1001 = (0.0     0.4737  0.0     0.4737  0.0526)

    u_2n   → (0.5263  0.0  0.4737  0.0  0.0)
    u_2n+1 → (0.0  0.4737  0.0  0.4737  0.0526)

Calculate the long-run average time fraction in each state:

    u = 1/2 (u_2n + u_2n+1) = (0.2632  0.2368  0.2368  0.2368  0.0263)
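The evolution u_{i+1} = u_i P_row can be reproduced by direct iteration; a minimal sketch in Python/NumPy, averaging two consecutive distributions to smooth the even/odd alternation visible above:

    import numpy as np

    P_row = np.array([[0, .9, 0, 0, .1], [0, 0, 1, 0, 0], [0, 0, 0, 1, 0],
                      [1, 0, 0, 0, 0], [1, 0, 0, 0, 0]], dtype=float)
    us = [np.array([1.0, 0, 0, 0, 0])]       # u_0: start in state A
    for _ in range(1001):
        us.append(us[-1] @ P_row)            # u_{i+1} = u_i P_row

    # The distributions alternate between two limits; their average gives
    # the long-run time fraction per state.
    print((us[1000] + us[1001]) / 2)         # ~ [0.2632 0.2368 0.2368 0.2368 0.0263]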

Ergodic Markov Chain (slide 13)
Ergodic Markov chain:
-- Def. 1: it is possible to go from every state to every state, not necessarily in one move.
-- Def. 2: some power of the transition matrix has only positive entries.
Equilibrium distribution:
-- It can be proved that for an ergodic Markov chain with transition probability matrix P there exists a unique probability vector u such that u P = u.
-- u is called the equilibrium distribution and is strictly positive.
-- u gives the long-run average time fraction the Markov chain spends in each state.
For the lossy-channel matrix, u is calculated as

    u = (5/19  9/38  9/38  9/38  1/38) = (0.2632  0.2368  0.2368  0.2368  0.0263)

For Def. 2 it can be proved that u_0 P^n → u as n → ∞.
-- Def. 3: a Markov chain is also called ergodic if it has a positive state which is reachable from every other state with probability 1. In this case u is non-negative.
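The equilibrium distribution can also be obtained directly from u P = u together with the normalization sum(u) = 1, by replacing one redundant equation with the normalization condition; a sketch in Python/NumPy (illustrative, not part of the original slides):

    import numpy as np

    P_row = np.array([[0, .9, 0, 0, .1], [0, 0, 1, 0, 0], [0, 0, 0, 1, 0],
                      [1, 0, 0, 0, 0], [1, 0, 0, 0, 0]], dtype=float)
    n = P_row.shape[0]
    A = P_row.T - np.eye(n)    # u P = u  <=>  (P^T - I) u^T = 0
    A[-1, :] = 1.0             # replace one equation by sum(u) = 1
    b = np.zeros(n)
    b[-1] = 1.0
    u = np.linalg.solve(A, b)
    print(u)                   # [5/19 9/38 9/38 9/38 1/38] ~ [0.2632 0.2368 0.2368 0.2368 0.0263]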

Overview (slide 14)
-- Scope & Motivation
-- Example of a Labelled Transition System
-- Markov Chain
-- Performance Evaluation with Markov Chain
-- Summary

Reward Function of Lossy Channel (slide 15)
Reward function:
-- A function r : S → R (or {true(1), false(0)}), defined for a Markov chain with state space S.
-- Each time a state is visited, the reward specified by the reward function is obtained.

    out(s,e) = 1 if e = out, 0 otherwise
    in(s,e)  = 1 if e = in,  0 otherwise
    t(s,e)   = e if e ∈ R,   0 otherwise

[Figure: the chain of slide 7 annotated with the reward values per state, e.g. out=1 in the (S1,out) state, in=1 in the (S2,in) state, t=3 in the (S4,3) state, and out=in=t=0 elsewhere]
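These three reward functions can be written down directly over the (S, e) state pairs; a minimal Python sketch (the tuple representation of states is an assumption of this example):

    from numbers import Number

    # A state of the transformed chain is a pair (S, e), where e is an
    # action label such as "in"/"out" or a time passage (a number).
    def r_out(state):
        S, e = state
        return 1 if e == "out" else 0

    def r_in(state):
        S, e = state
        return 1 if e == "in" else 0

    def r_t(state):
        S, e = state
        return e if isinstance(e, Number) else 0  # passage of model time

    print(r_in(("S2", "in")), r_t(("S4", 3)))     # 1 3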

Several Performance Metrics of Lossy Channel (slide 16)
Long-run average performance metric:

    lim_{n→∞} (1/n) Σ_{i=1}^{n} r(X_i)

For the lossy channel, the long-run average of:
(i) the number of in actions performed per time epoch of the Markov chain
(ii) the number of in actions performed per unit of model time (capacity)
(iii) the time between two in actions
(iv) the variance of the time between two in actions

Simple metrics: (i), (ii) can be deduced from atomic rewards.
Complex metrics: (iii), (iv) are only deducible from accumulated atomic rewards.

Ergodic Theorem (slide 17)

    (1/n) Σ_{i=1}^{n} r(X_i)  →a.s.  Σ_{T∈S} u_T r(T)

-- a.s. (almost surely): as n goes to infinity, the convergence holds with probability 1
-- r is a properly defined reward function for the Markov chain {X_i, i ≥ 0}
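The theorem can be illustrated by simulating the chain and comparing the running average of a reward with the analytical value Σ_T u_T r(T); a sketch in Python/NumPy, taking the state with equilibrium probability 5/19 as the one in which in occurs (consistent with slide 18; the reward placement is an assumption of this example):

    import numpy as np

    P_row = np.array([[0, .9, 0, 0, .1], [0, 0, 1, 0, 0], [0, 0, 0, 1, 0],
                      [1, 0, 0, 0, 0], [1, 0, 0, 0, 0]], dtype=float)
    rng = np.random.default_rng(0)
    r = np.array([1, 0, 0, 0, 0])       # 'in' reward: 1 only in state A
    state, total, n = 0, 0, 100_000
    for _ in range(n):
        total += r[state]
        state = rng.choice(5, p=P_row[state])   # sample the next state
    print(total / n)                    # -> ~ 5/19 = 0.2632 as n grows, almost surely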

Performance Evaluation of Lossy Channel (slide 18)
Long-run average of:
(i) the number of in actions performed per discrete-time epoch of the Markov chain:

    C1 = lim_{n→∞} (1/n) Σ_{i=1}^{n} in(X_i) = Σ_{(S,e)} u_{(S,e)} in(S,e) = 5/19

(ii) the number of in actions performed per unit of model time:

    C2 = C1 / Σ_{(S,e)} u_{(S,e)} t(S,e) = (5/19) / (3 · 9/38) = 10/27

(iii) the long-run average time between two in actions:

    C3 = 1 / C2 = 27/10

[Figure: the reward-annotated chain with equilibrium probabilities A: 5/19, B: 9/38, C: 9/38, D: 9/38, E: 1/38]
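With the equilibrium distribution and the atomic rewards written as vectors, the three metrics follow in a few lines; a sketch reproducing the slide's numbers (the positions of the in and t rewards follow the mapping assumed above):

    import numpy as np

    u    = np.array([5/19, 9/38, 9/38, 9/38, 1/38])  # equilibrium distribution
    r_in = np.array([1, 0, 0, 0, 0])                 # in occurs in the state with u = 5/19
    r_t  = np.array([0, 0, 3, 0, 0])                 # 3 time units pass in a state with u = 9/38

    C1 = u @ r_in        # in actions per time epoch      = 5/19
    C2 = C1 / (u @ r_t)  # in actions per unit model time = 10/27
    C3 = 1 / C2          # model time between two ins     = 27/10 = 2.7
    print(C1, C2, C3)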

Classical Performance Evaluation Techniques (slide 19)
(Assume an ergodic Markov chain with a proper reward function is defined.)
Analytical method: calculate the equilibrium distribution u from u P = u, then apply the ergodic theorem.
Simulation-based method: estimate lim_{n→∞} (1/n) Σ_{i=1}^{n} r(X_i) from a trace.
-- Trace: a finite state sequence S = (S_1, S_2, ..., S_n); n is the length of the trace. The larger n is, the more accurate the result. (See slide 11, regarded as r(s) = 1.)
-- Point estimation θ̂: calculate a single value θ̂ on the basis of sample data; this value serves as a "best guess" for the unknown parameter θ.
-- Interval estimation: give an accuracy bound [φ1, φ2] for a point estimate θ̂, with P(θ ∈ [φ1, φ2]) ≥ β, β ∈ [0, 1]; β is called the confidence level.
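A minimal sketch of point and interval estimation from a simulated trace (the normal-approximation interval and the 95% level are illustrative assumptions; it also ignores autocorrelation along the trace, which batch-means methods would handle in practice):

    import numpy as np

    def estimate(rewards, z=1.96):
        """Point estimate and ~95% confidence interval for the long-run average reward."""
        x = np.asarray(rewards, dtype=float)
        theta = x.mean()                             # point estimate (theta hat)
        half = z * x.std(ddof=1) / np.sqrt(len(x))   # half-width of the interval
        return theta, (theta - half, theta + half)

    # Example: stand-in rewards for a simulated trace (illustrative data only).
    theta, (lo, hi) = estimate(np.random.default_rng(1).binomial(1, 5/19, size=10_000))
    print(theta, lo, hi)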

Summary (slide 20)
What has been presented:
-- Transforming a labelled transition system into a Markov chain (lossy channel, 3 steps)
-- Markov chains (ergodic Markov chain & equilibrium distribution)
-- Performance evaluation with Markov chains (reward function & ergodic theorem; analytical and simulation-based methods)

Further Topics (slide 21)
-- Temporal rewards in [2]  ← for complex or delay-type metrics
-- Markov chain reduction in [1]  ← for conditional metrics
-- SHE tools  ← a relatively strong tool for control software; a formal framework for computation software and hardware synthesis is still under research

[1] Bart Theelen. Performance Modelling for System-Level Design. PhD thesis, Technische Universiteit Eindhoven, 2004.
[2] Jeroen P.M. Voeten. Performance evaluation with temporal rewards. Performance Evaluation, 50:189-218, 2002.

Andrey Andreyevich Markov (1856-1922)
End!