Designing High-Performance Systems: Statistical Timing Analysis and Optimization. Average System Performance Evaluation using Markov Chains. Weiyun Lu. Supervisor: Martin Radetzki. Summer Semester 2006, Stuttgart University
Overview: Scope & Motivation · Example of a Labelled Transition System · Markov Chain · Performance Evaluation with Markov Chain · Summary
Scope & Motivation
System level. Motivation:
-- complexity of systems
-- high impact of system-level decisions
System modelling:
-- property checking & performance evaluation
Performance modelling and evaluation [1]. Example: bus-based or switch-based systems for communication.
Overview: Scope & Motivation · Example of a Labelled Transition System · Markov Chain · Performance Evaluation with Markov Chain · Summary
SHESim window of a Protocol Stack. Lossy Channel example.
-- SHE: Software/Hardware Engineering
-- Model-driven, object-oriented framework for the specification of complex systems
-- UML (class diagrams, sequence diagrams, ...)
POOSL definition of a Lossy Channel:
-- POOSL: Parallel Object-Oriented Specification Language
-- Formal: models are executable [2]

  transferframes()() f: Frame
    in?frame(f);
    if errordistribution yieldssuccess then
      delay(transmissiontime);
      out!frame(f);
      transferframes()()
    else
      transferframes()()
    fi.
Transition System of the Lossy Channel

  transferframes()() f: Frame
    in?frame(f);
    if errordistribution yieldssuccess then
      delay(transmissiontime);
      out!frame(f);
      transferframes()()
    else
      transferframes()()
    fi.

Bernoulli, Uniform, DiscreteUniform, Exponential and Normal distributions are currently available in POOSL.

Labelled Transition System (S, Λ, →): S: set of states; Λ: set of labels; → ⊆ S × Λ × S: transition relation.

[Diagram: states S1..S4 with transitions labelled in, out and τ, the probabilistic branch 0.9/0.1 of the error distribution, and the delay of 3 time units.]
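The labelled transition system above can be encoded as a plain data structure. A minimal sketch in Python; the state names follow the slide, but the exact wiring of the probabilistic branch is reconstructed from the diagram and should be read as illustrative only.

```python
# Lossy-channel LTS (S, Lambda, ->) as a dictionary mapping each state to its
# outgoing (label, probability, target) triples. The wiring is an assumption.
lts = {
    "S1": [("in", 1.0, "S2")],       # receive a frame from the environment
    "S2": [("tau", 0.9, "S4"),       # errordistribution yields success
           ("tau", 0.1, "S3")],      # frame lost: internal step
    "S3": [("tau", 1.0, "S1")],      # failure path: back to the initial state
    "S4": [("out", 1.0, "S1")],      # after delay(3): deliver the frame
}

def successors(state):
    """Outgoing (label, probability, target) triples of a state."""
    return lts.get(state, [])
```

The outgoing probabilities of each state form a distribution, which is easy to check on this encoding.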
Another Representation of the Lossy Channel

Step 1: Assume maximal progress. Environment modelling:
-- 'open' system (sub-modules): the environment is always willing to participate.
-- 'closed' system (complete system): no possibility to interact with the environment.

Step 2: Resolve non-determinism. Note the difference between a non-deterministic and a probabilistic transition.

Step 3: Shift action and time information into the states.
-- To simplify the transition labels, the occurrence of actions and the passage of time can be shifted into the states.
-- A state then becomes a pair (s, e), where e is an action or a time passage.

[Diagram: the resulting chain over states such as (S1,-), (S2,in), (S3,τ), (S4,3) and (S1,out), with the probabilities 0.9/0.1 on the branches of the error distribution.]
Resolving Non-Determinism (Step 2)

  ...                                             (A)
  sel
    skip; ...                                     (B)
  or
    if (ErrorDistribution yieldssuccess) then
      ...                                         (C)
    else
      ...                                         (D)
    fi; ...
  or
    Method()()                                    (E)
  les; ...

An external scheduler is needed! SHESim uses a uniform distribution for POOSL models: each of the three sel alternatives is chosen with probability 1/3, so the branches get probabilities B: 1/3, E: 1/3, C: 1/3 · 0.9 = 3/10 and D: 1/3 · 0.1 = 1/30. Otherwise, transform the model into a Markov Decision Process.
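The branch probabilities produced by the uniform scheduler can be worked out exactly. A small sketch with exact arithmetic; the branch names follow the slide:

```python
from fractions import Fraction

# Uniform external scheduler over the three sel alternatives (as SHESim does
# for POOSL models); the if-branch is then split by the Bernoulli error
# distribution with success probability 0.9.
branch = Fraction(1, 3)
p = {
    "B": branch,                        # skip branch
    "E": branch,                        # Method()() branch
    "C": branch * Fraction(9, 10),      # if-branch, yieldssuccess
    "D": branch * Fraction(1, 10),      # if-branch, failure
}
assert sum(p.values()) == 1             # a proper probability distribution
```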
Overview: Scope & Motivation · Example of a Labelled Transition System · Markov Chain · Performance Evaluation with Markov Chain · Summary
Markov Chain

Discrete stochastic process:
-- a sequence of random variables {X_i, i ≥ 0}
-- 'i' is called the time epoch if it lies in the time domain.
Markov property: transitions depend only on the current state.
Markov chain & representations; time-homogeneous (stationary) Markov chain:

  P_row = ( P_A,A  P_A,B  P_A,C  P_A,D  P_A,E )
          ( P_B,A  P_B,B  P_B,C  P_B,D  P_B,E )
          ( P_C,A  P_C,B  P_C,C  P_C,D  P_C,E )
          ( P_D,A  P_D,B  P_D,C  P_D,D  P_D,E )
          ( P_E,A  P_E,B  P_E,C  P_E,D  P_E,E )

  P_col = P_row^T

For the lossy channel, with the states (S1,-), (S2,in), (S3,τ), (S4,3), (S1,out) of the shifted representation labelled A..E:

  P_row = ( 0    0.9  0    0    0.1 )
          ( 0    0    1    0    0   )
          ( 0    0    0    1    0   )
          ( 1    0    0    0    0   )
          ( 1    0    0    0    0   )
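The lossy-channel matrix above can be written down and sanity-checked directly, e.g. that every row is a probability distribution and that P_col is just the transpose of P_row:

```python
import numpy as np

# Row-stochastic transition matrix of the lossy-channel Markov chain,
# states ordered A..E as on the slide.
P_row = np.array([
    [0.0, 0.9, 0.0, 0.0, 0.1],   # A
    [0.0, 0.0, 1.0, 0.0, 0.0],   # B
    [0.0, 0.0, 0.0, 1.0, 0.0],   # C
    [1.0, 0.0, 0.0, 0.0, 0.0],   # D
    [1.0, 0.0, 0.0, 0.0, 0.0],   # E
])
assert np.allclose(P_row.sum(axis=1), 1.0)   # every row sums to 1
P_col = P_row.T                              # column representation
```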
We are interested in the long-run average time fraction the Markov chain spends in each state.

Define the initial distribution u_0 = (u_A,0  u_B,0  u_C,0  u_D,0  u_E,0) and, at time epoch i, the probability to be in each state as u_i = (u_A,i  u_B,i  u_C,i  u_D,i  u_E,i).

By conditional probability:
  u_A,1 = u_A,0 P_A,A + u_B,0 P_B,A + u_C,0 P_C,A + u_D,0 P_D,A + u_E,0 P_E,A
i.e. u_1 = u_0 P_row.
Generalizing: u_{i+1} = u_i P_row, and since the chain is stationary, u_n = u_0 P_row^n.
Markov Chain Computation for the lossy channel example:

  u_0 = (1  0  0  0  0)
  u_1 = (0  0.9  0  0  0.1)
  u_2 = (0.1  0  0.9  0  0)
  ...
  u_50   = (0.4923  0  0.5077  0  0)      u_51   = (0  0.4431  0  0.5077  0.0492)
  u_100  = (0.5288  0  0.4712  0  0)      u_101  = (0  0.4759  0  0.4712  0.0529)
  u_1000 = (0.5263  0  0.4737  0  0)      u_1001 = (0  0.4737  0  0.4737  0.0526)

  u_2n   → (0.5263  0  0.4737  0  0)      u_2n+1 → (0  0.4737  0  0.4737  0.0526)

The chain is periodic with period 2, so u_n oscillates between two limit vectors. The long-run average time fraction in each state is obtained by averaging:

  u = 1/2 (u_2n + u_2n+1) = (0.2632  0.2368  0.2368  0.2368  0.0263)
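The iteration u_{i+1} = u_i P_row on the slide can be reproduced in a few lines; because the chain has period 2, the two alternating limit vectors are averaged at the end:

```python
import numpy as np

# Power iteration for the lossy-channel chain, states A..E.
P = np.array([[0.0, 0.9, 0.0, 0.0, 0.1],
              [0.0, 0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0, 0.0, 0.0],
              [1.0, 0.0, 0.0, 0.0, 0.0]])
u = np.array([1.0, 0.0, 0.0, 0.0, 0.0])   # u_0: start in state A
for _ in range(1000):
    u = u @ P                             # u_{i+1} = u_i * P_row
u_even, u_odd = u, u @ P                  # u_1000 and u_1001
u_bar = (u_even + u_odd) / 2              # long-run average time fraction
```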
Ergodic Markov Chain

Ergodic Markov chain:
-- Def. 1: it is possible to go from every state to every state, not necessarily in one move.
-- Def. 2: some power of the transition matrix has only positive entries.

Equilibrium distribution:
-- It can be proved that for an ergodic Markov chain with transition probability matrix P there exists a unique probability vector u such that u P = u.
-- u is called the equilibrium distribution and is strictly positive.
-- u gives the long-run average time fraction the Markov chain spends in each state.

For the matrix of the lossy channel, u is calculated as
  u = ( 5/19  9/38  9/38  9/38  1/38 ) = (0.2632  0.2368  0.2368  0.2368  0.0263)

For Def. 2 it can be proved that u_0 P^n → u as n → ∞.

-- Def. 3: a Markov chain is ergodic if it has a positive state that is reachable from every other state with probability 1. In this case u is non-negative.
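Instead of iterating, u P = u can be solved directly as a linear system: one balance equation is redundant (the rows of P^T − I sum to zero), so it is replaced by the normalisation constraint sum(u) = 1. A sketch:

```python
import numpy as np

# Analytical equilibrium distribution of the lossy-channel chain.
P = np.array([[0.0, 0.9, 0.0, 0.0, 0.1],
              [0.0, 0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0, 0.0, 0.0],
              [1.0, 0.0, 0.0, 0.0, 0.0]])
n = P.shape[0]
# First n-1 balance equations (P^T - I) u = 0, plus sum(u) = 1:
A = np.vstack([(P.T - np.eye(n))[:-1], np.ones(n)])
b = np.zeros(n)
b[-1] = 1.0
u = np.linalg.solve(A, b)   # = (5/19, 9/38, 9/38, 9/38, 1/38)
```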
Overview: Scope & Motivation · Example of a Labelled Transition System · Markov Chain · Performance Evaluation with Markov Chain · Summary
Reward Function of the Lossy Channel

Reward function:
-- A function r : S → R (or {true(1), false(0)}), defined for a Markov chain with state space S.
-- Each time a state is visited, the reward specified by the reward function is obtained.

For the lossy channel, with states (s, e):

  out(s,e) = 1 if e = out, 0 otherwise
  in(s,e)  = 1 if e = in,  0 otherwise
  t(s,e)   = e if e ∈ R,   0 otherwise

[Diagram: each state of the chain annotated with its rewards, e.g. the in-state with in = 1, the delay state with t = 3, and out = in = t = 0 everywhere else.]
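The three reward functions translate directly into code. A sketch over states represented as (s, e) pairs, where e is the action or time passage shifted into the state:

```python
# Reward functions for the lossy channel, following the slide.
def r_out(state):
    s, e = state
    return 1 if e == "out" else 0

def r_in(state):
    s, e = state
    return 1 if e == "in" else 0

def r_t(state):
    s, e = state
    # time passage: e is a real number; action labels contribute no model time
    return e if isinstance(e, (int, float)) else 0
```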
Several Performance Metrics of the Lossy Channel

Long-run average performance metric:  lim_{n→∞} (1/n) Σ_{i=1}^{n} r(X_i)

For the lossy channel, the long-run average of:
(i) the number of in actions performed per time epoch of the Markov chain;
(ii) the number of in actions performed per unit of model time (capacity);
(iii) the time between two in actions;
(iv) the variance of the time between two in actions.

Simple metrics (i) and (ii) can be deduced from atomic rewards; complex metrics (iii) and (iv) are only deducible from accumulated atomic rewards.
Ergodic Theorem

  (1/n) Σ_{i=1}^{n} r(X_i)  →(a.s.)  Σ_{T∈S} u_T r(T)   as n → ∞

a.s.: almost surely, i.e. with probability 1 as n goes to infinity; r is a properly defined reward function for the Markov chain {X_i, i ≥ 0}.
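The theorem can be illustrated empirically: simulate a long trace of the lossy-channel chain and compare the running average of a reward with the equilibrium value Σ u_T r(T). Which concrete state carries the in reward is our reconstruction; here state A (equilibrium probability 5/19) is rewarded:

```python
import random

# Simulate the lossy-channel Markov chain and average a 0/1 reward.
P = {"A": [("B", 0.9), ("E", 0.1)], "B": [("C", 1.0)],
     "C": [("D", 1.0)], "D": [("A", 1.0)], "E": [("A", 1.0)]}
r = {"A": 1, "B": 0, "C": 0, "D": 0, "E": 0}   # assumed: 'in' reward on state A

random.seed(0)
state, total, n = "A", 0, 200_000
for _ in range(n):
    targets, probs = zip(*P[state])
    state = random.choices(targets, probs)[0]  # sample the next state
    total += r[state]
avg = total / n                                # close to 5/19 for large n
```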
Performance Evaluation of the Lossy Channel

Long-run average of:
(i) the number of in actions performed per time epoch of the Markov chain:
    C1 = lim_{n→∞} (1/n) Σ_{i=1}^{n} in(X_i) = Σ_{(s,e)} u(s,e) · in(s,e) = 5/19
(ii) the number of in actions performed per unit of model time:
    C2 = C1 / Σ_{(s,e)} u(s,e) · t(s,e) = (5/19) / (3 · 9/38) = 10/27
(iii) the long-run average time between two in actions:
    C3 = 1/C2 = 27/10

[Diagram: the chain with equilibrium probabilities A: 5/19, B: 9/38, C: 9/38, D: 9/38, E: 1/38 and the reward annotations of each state.]
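The three metrics follow from the equilibrium distribution with exact arithmetic. A sketch; the assignment of rewards to the labels A..E is our reconstruction (the in action shifted into state A, the delay of 3 time units into state B):

```python
from fractions import Fraction

# C1..C3 for the lossy channel via the ergodic theorem.
u    = {"A": Fraction(5, 19), "B": Fraction(9, 38), "C": Fraction(9, 38),
        "D": Fraction(9, 38), "E": Fraction(1, 38)}
r_in = {"A": 1, "B": 0, "C": 0, "D": 0, "E": 0}   # assumed in-reward
r_t  = {"A": 0, "B": 3, "C": 0, "D": 0, "E": 0}   # assumed delay-reward

C1 = sum(u[s] * r_in[s] for s in u)       # in actions per time epoch
C2 = C1 / sum(u[s] * r_t[s] for s in u)   # in actions per unit of model time
C3 = 1 / C2                               # model time between two in actions
```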
Classical Performance Evaluation Techniques

(Assume an ergodic Markov chain with a proper reward function is defined.)

Analytical method: calculate the equilibrium distribution u from u P = u, then apply the ergodic theorem.

Simulation-based method: estimate lim_{n→∞} (1/n) Σ_{i=1}^{n} r(X_i) from a trace.
-- Trace: a finite state sequence S = (S1, S2, ..., Sn); n is the length of the trace. The larger n is, the more accurate the result. (See slide 11, regarded as r(s) = 1.)
-- Point estimation: calculate a single value θ̂ from the sample data; this value serves as a 'best guess' for the unknown parameter θ.
-- Interval estimation: give an accuracy bound [φ1, φ2] for a point estimate: P(θ ∈ [φ1, φ2]) ≥ β, with β ∈ [0, 1]; β is called the confidence level.
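The simulation-based method can be sketched end to end: estimate the long-run fraction of visits to the in-state from a finite trace and attach a normal-approximation confidence interval. Batch means are used here because successive states of a trace are correlated; the batch sizes and the reward-on-state-A assignment are our choices, not fixed by the slides:

```python
import math
import random

# Point and interval estimation for the lossy-channel chain via batch means.
P = {"A": [("B", 0.9), ("E", 0.1)], "B": [("C", 1.0)],
     "C": [("D", 1.0)], "D": [("A", 1.0)], "E": [("A", 1.0)]}

random.seed(1)
state = "A"
B, L = 200, 1000                      # B batches, each of trace length L
batch_means = []
for _ in range(B):
    hits = 0
    for _ in range(L):
        targets, probs = zip(*P[state])
        state = random.choices(targets, probs)[0]
        hits += (state == "A")        # reward 1 on each visit to the in-state
    batch_means.append(hits / L)

theta = sum(batch_means) / B          # point estimate of the true value 5/19
s2 = sum((m - theta) ** 2 for m in batch_means) / (B - 1)
half = 1.96 * math.sqrt(s2 / B)       # half-width of a ~95% confidence interval
interval = (theta - half, theta + half)
```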
Summary: What Has Been Presented
-- Transforming a labelled transition system into a Markov chain (lossy channel, 3 steps)
-- Markov chains (ergodic Markov chains & equilibrium distribution)
-- Performance evaluation with Markov chains (reward function & ergodic theorem; analytical and simulation-based methods)
Further Topics
-- Temporal rewards in [2]: for complex or delay-type metrics
-- Markov chain reduction in [1]: for conditional metrics
-- SHE tools: relatively strong tool support for control software; a formal framework for computation software and for hardware synthesis is still under research

[1] Bart Theelen. Performance Modelling for System-Level Design. PhD thesis, Technische Universiteit Eindhoven, 2004.
[2] Jeroen P.M. Voeten. Performance evaluation with temporal rewards. Performance Evaluation, 50:189-218, 2002.
Andrey Andreyevich Markov (1856-1922). End!