Automated Experiments for Deriving Performance-relevant Properties of Software Execution Environments

by Michael Hauck

KIT Scientific Publishing
Contents

1. Introduction  1
   1.1. Motivation  1
   1.2. Problem  3
   1.3. Shortcomings of Existing Solutions  7
   1.4. Contributions  9
   1.5. Validation  12
   1.6. Outline  13

2. Foundations  17
   2.1. Software Performance Analysis  17
        2.1.1. Software Performance  17
        2.1.2. Software Performance Engineering  19
        2.1.3. Performance Experiments and Benchmarking  21
        2.1.4. The Palladio Component Model  22
   2.2. Model-driven Software Development  26
        2.2.1. Models and Metamodels  26
        2.2.2. The Eclipse Modeling Project  28
   2.3. Operating System Scheduling and Virtualization  30
        2.3.1. Operating System Scheduling  30
        2.3.2. Detecting CPU and OS Scheduling Properties  32
        2.3.3. Virtualization  34
3. An Approach for Deriving Execution Environment Properties  39
   3.1. Research Context  39
        3.1.1. A Definition of the Execution Environment  40
        3.1.2. Performance-relevant Properties of the Execution Environment  41
        3.1.3. Separating the Execution Environment Model from the Software Architecture Model  47
   3.2. Scientific Challenges  50
   3.3. A Method for Automated Derivation of Execution Environment Properties  52
        3.3.1. Experiment Design  53
        3.3.2. Experiment Execution  55
   3.4. Scenarios  57
   3.5. Limitations and Assumptions  61
   3.6. Summary  64

4. Model-based Definition and Execution of Execution Environment Experiments  67
   4.1. Automated Execution Environment Experiments  68
        4.1.1. Requirements  68
        4.1.2. Experiment Structure  70
   4.2. Experiment Library and Experiment Domains  72
   4.3. Parametric Experiments  76
   4.4. A Metamodel for Specifying Experiments  79
        4.4.1. Experiments  81
        4.4.2. Experiment Logic Definition  83
        4.4.3. Experiment Tasks  84
        4.4.4. Experiment Sensors  88
        4.4.5. Example  90
   4.5. Experiment Execution and Results Analysis  91
        4.5.1. Experiment Execution  91
        4.5.2. Results Analysis  93
   4.6. A Template for Experiment Description  94
        4.6.1. Sections of the Experiment Template  95
        4.6.2. Describing the Experiment Logic  98
   4.7. Extensibility of the Approach  100
        4.7.1. Experiments  100
        4.7.2. Experiment Domains  101
        4.7.3. Experiment Tasks and Sensors  102
        4.7.4. Analysis Logic  105
   4.8. Experiment Performance Overhead  105
   4.9. Summary  107

5. Deriving CPU and OS Scheduling Properties  109
   5.1. Experiments Overview  109
   5.2. Scientific Challenges  111
   5.3. CPU Simultaneous Multithreading  112
        5.3.1. Motivation  112
        5.3.2. Experiment Design  115
        5.3.3. Experiment Template  116
        5.3.4. Experiment Robustness and Performance  117
        5.3.5. Example  119
   5.4. Number of CPU Cores  124
        5.4.1. Motivation  124
        5.4.2. Experiment Design  126
        5.4.3. Experiment Template  128
        5.4.4. Experiment Robustness and Performance  130
        5.4.5. Example  132
   5.5. Operating System Scheduler Timeslice Length  134
        5.5.1. Motivation  134
        5.5.2. Experiment Design  135
        5.5.3. Experiment Template  137
        5.5.4. Experiment Robustness  139
        5.5.5. Experiment Performance  140
        5.5.6. Example  140
   5.6. Operating System Scheduler Load-balancing Properties  143
        5.6.1. Motivation  143
        5.6.2. Initial Load-balancing Strategy  144
        5.6.3. Dynamic Load-balancing Strategy  152
   5.7. Including Experiment Results in Performance Prediction  163
   5.8. Validation  165
        5.8.1. Validation Scenario  167
        5.8.2. Execution  169
        5.8.3. Results  169
        5.8.4. Discussion  172
   5.9. Limitations and Assumptions  173
   5.10. Summary  175

6. Deriving Virtualization Properties  177
   6.1. Experiments Overview  178
   6.2. Scientific Challenges  179
   6.3. Virtualization Overhead  180
        6.3.1. Motivation  180
        6.3.2. Experiment Design  181
        6.3.3. Experiment Template  183
        6.3.4. Experiment Robustness  185
        6.3.5. Experiment Performance  186
        6.3.6. Including Experiment Results in Performance Prediction  186
        6.3.7. Validation  188
   6.4. Load-dependent Overhead  196
        6.4.1. Motivation  197
        6.4.2. Experiment Design  198
        6.4.3. Experiment Template  208
        6.4.4. Experiment Robustness and Performance  210
        6.4.5. Including Experiment Results in Performance Prediction  211
        6.4.6. Validation  214
   6.5. Discussion  224
        6.5.1. Additional Load  225
        6.5.2. Limitations and Assumptions  231
   6.6. Summary  233

7. Related Work  235
   7.1. Modeling the Execution Environment for Performance Prediction  235
   7.2. Deriving Performance Models through Automated Measurements  237
   7.3. Performance Analysis Reflecting CPU and OS Scheduling Properties  244
   7.4. Performance Analysis of Virtualized Environments  247
   7.5. Summary  249

8. Conclusions  251
   8.1. Summary  251
   8.2. Limitations and Assumptions  256
   8.3. Further Application Areas  256
   8.4. Future Work  257

A. Ginpex Metamodel  263
   A.1. Control Flow Tasks  264
   A.2. Stop Conditions  267
   A.3. Machine Tasks  269
   A.4. Distributions  271
   A.5. Sensors  272

B. Presented Experiments  275
   B.1. CPU Simultaneous Multithreading  275
   B.2. Detect Number of Available CPU Cores  276
   B.3. Detect OS Scheduler Timeslice Length  278
   B.4. Detect OS Scheduler Initial Load-balancing Strategy  280
   B.5. Detect OS Scheduler Dynamic Load-balancing Strategy  282
   B.6. Detect Virtualization Overhead  283
   B.7. Detect Load-dependent Virtualization Overhead  285

List of Figures  289