Determining Overhead, Variance & Isolation Metrics in Virtualization for IaaS Cloud
Bukhary Ikhwan Ismail, Devendran Jagadisan, Mohammad Fairus Khalid
MIMOS BERHAD
Contents
1. Introduction
2. Testing Methodologies
3. Setup & Tools
4. Results
5. Discussion
6. Summary
MIMOS Berhad. All Rights Reserved.
Introduction
What is IaaS?
- Commodity-based resources
- User self-service, consumable resources
Consists of:
- Resources (Storage / Compute / Network)
- Management Software
IaaS nature:
- Distributed / inter-related
- Virtualization - KVM
Objective
- Understand the behavior of VMs under the KVM hypervisor
- Benchmark VM performance: Overhead / Isolation / Variance
- Determine the performance factors
- Understand design decisions
Methodologies
- Benchmark a VM's Processor, Memory, Storage and Network
- Guest OS behavior (micro level):
  - Overhead - virtualization overhead
  - Variance - the effect of more VMs residing on a single physical host
  - Isolation (or Fairness) - observation of VMs in isolated containers
- Application-specific behavior (macro level):
  - Overhead / variance / isolation of Java & MySQL
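As a sketch of how these three metrics can be quantified (the slides do not state the exact formulas, so the definitions below are illustrative assumptions):

```python
# Hypothetical definitions of the three metrics; the deck does not give
# the formulas it used, so these are assumptions for illustration only.

def overhead_pct(bare_metal_score, guest_score):
    # Share of bare-metal performance lost inside the VM.
    return (bare_metal_score - guest_score) / bare_metal_score * 100

def variance_pct(one_vm_score, many_vm_score):
    # Degradation of a single VM's score as more VMs share the host.
    return (one_vm_score - many_vm_score) / one_vm_score * 100

def fairness_spread_pct(per_vm_scores):
    # Spread between best and worst co-located VM; smaller means fairer sharing.
    return (max(per_vm_scores) - min(per_vm_scores)) / max(per_vm_scores) * 100
```

For a throughput-style benchmark, `overhead_pct(100, 95)` is 5, i.e. the guest retains 95% of bare-metal performance.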
Setup & Tools
Benchmark tools:
- Open source / free, so results are comparable with others
- A specific tool for each component:
  - CPU - 7-Zip compression
  - Memory - RAMspeed
  - Storage - IOzone / TIObench
  - Network - Netperf
  - Application - SciMark / SQLite
Setup & Tools
Libraries:
- KVM 85
- KMOD-KVM 2.6.3..1
- libvirt 0.6.5
- QEMU-KVM 0.10.6
Network setup:
- NIC capacity 1Gbps
- Switch 1Gbps
- Head node - NAT/forwarding
Setup & Tools
Table 1: Host & Guest System Specification

Host:
- Processor: 4-core Intel Xeon CPU E5405 @ 1.99GHz
- Mainboard: Dell Precision WorkStation T5400
- Chipset: Intel 5400 Chipset Hub
- Memory: 2 x 4096 MB, 667MHz
- Disk: 750GB Hitachi HDS7217
- Graphics: NVIDIA Quadro FX 1700
- OS: CentOS 5.3 64-bit
- Kernel: 2.6.18-128.4.1.el5 (x86_64)
- File System: ext3

Guest VM:
- Processor: QEMU Virtual CPU 0.10.6 @ 7.97GHz
- Mainboard: Unknown
- Chipset: Unknown
- Memory: 512MB
- Disk: 12GB QEMU Hard disk
- Graphics: Nil
- OS: Fedora release 10 (Cambridge) (Eucalyptus image)
- Kernel: 2.6.28-11-generic (x86_64)
- File System: ext3
Results - Overhead
Processor (MIPS): Host 6139 vs Guest 6077 (bar chart)
Memory (MB/s), Host vs Guest for int Add / int Copy / int Scale: 2579, 2367, 2366, 2282, 2323, 2328 (bar chart)
Results - Overhead
Storage (IOzone, MB/s): 4GB Write - host 179 vs guest 96; 4GB Read - host 1296 vs guest 355 (bar chart)
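From the figures on this slide (write: 179 vs 96 MB/s; read: 1296 vs 355 MB/s), the storage overhead percentages quoted later in the Discussion table can be reproduced; the percentage formula itself is an assumption:

```python
def overhead_pct(host_mb_s, guest_mb_s):
    # Fraction of host throughput lost inside the guest, in percent.
    return (host_mb_s - guest_mb_s) / host_mb_s * 100

write_overhead = overhead_pct(179, 96)    # ~46% for the 4GB write
read_overhead = overhead_pct(1296, 355)   # ~73% for the 4GB read
```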
Results - Overhead
Network Throughput (Mb/s), TCP_STREAM / UDP_STREAM: between hosts 941 / 852; between guests in a single host 232 / 45; between guests across nodes 157 / 257 (bar chart)
Network Latency (TPS): between hosts 1,623; between guests in a single host 3,692; between guests across nodes 2,866 (bar chart)
Results - Overhead: Applications
Java Sun Flower rendering time (seconds): host 9.37 vs guest 9.47 (bar chart)
SQLite time (seconds): host 100.43 vs guest 111.94 (bar chart)
Results - Variance & Fairness
CPU Variance (MIPS): 1 guest 1792; 2 guests 1720; 4 guests 1373 (bar chart)
CPU Fairness (MIPS): 1st guest 1396; 2nd guest 1370; 3rd guest 1394; 4th guest 1334 (bar chart)
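One plausible way to read the fairness chart is as the spread between the best and worst co-located guest; using the three clearly readable MIPS values (1396, 1394, 1334 — one label is garbled in the source), this yields roughly the 4% figure in the Discussion table. The formula is an assumption:

```python
def fairness_spread_pct(mips_per_guest):
    # Spread between the fastest and slowest concurrent guest, in percent;
    # a small spread indicates the hypervisor schedules CPU fairly.
    return (max(mips_per_guest) - min(mips_per_guest)) / max(mips_per_guest) * 100

cpu_spread = fairness_spread_pct([1396, 1394, 1334])  # ~4%
```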
Results - Variance & Fairness
Memory Variance (MB/s) for int Add / int Copy / int Scale: 1 guest 2081, 2122, 2125; 2 guests 1169, 1186, 1164; 4 guests 611, 618, 610 (bar chart)
Memory Fairness (MB/s), int Add / int Copy / int Scale across 4 instances: values in the 578-633 MB/s range, including 633, 621, 625, 613, 619, 592, 578 (bar chart; several value labels are garbled in the source)
Results - Variance & Fairness
IOzone Disk Variance (MB/s): 4GB Write - 1 guest 96, 2 guests 47; 4GB Read - 1 guest 355, 2 guests 8 (bar chart)
IOzone Disk Fairness (MB/s), 4GB Write / 4GB Read for 1st and 2nd guest: 47.18, 5.96, 8.49, 9.3 (bar chart; several value labels are garbled in the source)
Results - Variance & Fairness
TIObench Disk Variance (hundreds of microseconds), 64MB Write / 64MB Read / 256MB Write / 256MB Read: 1 guest 17, 23, 3, 39; 2 guests 23, 31, 52, 131 (bar chart)
TIObench Disk Fairness (hundreds of microseconds), same workloads for 1st and 2nd guest: 3, 76, 16, 39, 3, 23, 22, 6 (bar chart; several value labels are garbled in the source)
Results - Variance & Fairness
Java Variance - rendering time (seconds): 1 guest 36.95; 2 guests 38.35; 4 guests 45.3 (bar chart)
SQLite Variance - time to complete (seconds): 1 guest 114; 2 guests 121; 4 guests 136 (bar chart)
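The application-level variance can be quantified as the relative slowdown when going from 1 to 4 co-resident guests; the formula below is an assumption, so the results only approximate the percentages in the Discussion table:

```python
def slowdown_pct(one_guest_seconds, four_guest_seconds):
    # Relative increase in completion time as co-located guests grow from 1 to 4.
    return (four_guest_seconds - one_guest_seconds) / one_guest_seconds * 100

java_slowdown = slowdown_pct(36.95, 45.3)   # ~23% longer rendering time
sqlite_slowdown = slowdown_pct(114, 136)    # ~19% longer completion time
```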
Results - Variance & Fairness
Network Throughput Variance (Mbps), TCP_STREAM / UDP_STREAM across 1-to-1, 2-to-2 and 4-to-4 guest pairs within and across nodes: 45, 232, 257, 255, 256, 248, 157, 134, 133, 121 (bar chart; series assignment unclear in the source)
Network Latency Variance (TPS): 1-to-1 guest within node 3,692; 1-to-1 guest on diff node 2,866; 2-to-2 guests within node 2,754; 2-to-2 guests on diff node 2,870; 4-to-4 guests on diff node 2,650 (bar chart)
Results - Variance & Fairness
Network TCP Fairness (Mbps): four concurrent guest streams sampled over 20 intervals (line chart)
Network UDP Fairness (Mbps): four concurrent guest streams sampled over 20 intervals (line chart)
Discussion

ELEMENTS         | OVERHEAD    | VARIANCE    | FAIRNESS
CPU              | 1% GOOD     | 2% GOOD     | 4% GOOD
Memory           | 5% GOOD     | 70% BAD     | 5% GOOD
IOzone Write     | 46% FAIR    | 46% FAIR    | 17% GOOD
IOzone Read      | 72% BAD     | 98% BAD     | 5% GOOD
TIObench Write   | 187% V.BAD  | 35% FAIR    | 87% BAD
TIObench Read    | 657% V.BAD  | 12% V.BAD   | 24% V.BAD
Java             | 1% GOOD     | 21% GOOD    | 2% GOOD
SQL              | 11% GOOD    | 26% GOOD    | 23% GOOD
Net TCP          | N/A         | 49% FAIR    | 39% FAIR
Net UDP          | N/A         | 39% FAIR    | 8% GOOD
Network Latency  | N/A         | 32% GOOD    | 79% BAD
Discussion
- CPU: good results for all metrics.
- Memory: shows bad variance.
- Disk: disk I/O is the bottleneck - high overhead and very bad distribution of variance and fairness. TIObench gives fluctuating results. Disk performs better under para-virtualization.
- Application: application-specific benchmark results do not reflect the bad CPU, memory and storage performance.
Discussion
- Overhead: CPU & memory give low overhead. Provisioning decisions based on Bare Metal vs. Virtualization and on application-specific scenarios (Grid / Rendering / Web).
- Variance: more load (VMs) affects overall performance.
- Fairness/Isolation: behavioral understanding of the KVM hypervisor.
Summary
What did we gain?
- KVM-specific behavior
- Important role of VM scheduling
- Storage plays an important role
Future work:
- Different benchmarking methodologies
- Focus on values attained from testing
- Determine performance-improvement weightage (%):
  - Hardware choice
  - Software stacks (virtualization/libraries)
  - Host OS (kernel & TCP parameter tweaking / OS choices)
  - Different I/O schedulers
Summary
- Hypervisor-specific results: KVM (virtio devices)
- Invest in hardware or the software stack? Which areas contribute to higher performance and larger capacity?
- Variance & isolation issues make scheduling policy a crucial component.
THANK YOU