Impact of Virtualization on Network Performance: The TCP Case
Son-Hai Ha, Guillaume Urvoy-Keller, Dino López
Laboratoire I3S, Université Nice Sophia Antipolis, CNRS, France
Introduction
- Virtualization is increasingly deployed in Data Centers (DCs) and is now part of our daily life
- TCP is the dominant traffic in DCs (more than 90%) and in the Internet
Impact of Virtualization over TCP
- Virtualization can affect network performance (e.g., heavily loaded neighbor VMs)
- Studies carried out in public DCs, esp. Amazon EC2 [Wang10]
  - Flows may be impacted by neighboring VMs
- Complex solutions exist to avoid such an impact, e.g., vSnoop [Kangar10]
  - Proxy-like mechanisms at the virtualization layer
- We evaluated the impact of virtualization over TCP performance only (delineated from other effects, e.g., background traffic)
  - Fully controlled environment, free of traffic from operational networks
  - The two most popular virtualization platforms: Xen and VMware

[Wang10] G. Wang and T. S. Eugene Ng. The impact of virtualization on network performance of Amazon EC2 data center. In Proceedings of IEEE INFOCOM 2010, Piscataway, NJ, USA.
[Kangar10] A. Kangarlou et al. vSnoop: Improving TCP throughput in virtualized environments via acknowledgement offload. In Proceedings of the 2010 ACM/IEEE International Conference for High Performance Computing, Networking, Storage and Analysis, SC '10, Washington, DC, USA, 2010.
Virtualization increases the jitter [Arsene2012]
- Even with only one VM, flows from virtualized systems exhibit higher jitter than flows from non-virtualized systems
- Methodology: generate data packets with a constant Inter-Packet Delay (IPD), then measure the IPD after the hypervisor (sketched below)
- Hypervisors frequently delay the transmission of packets: after the hypervisor, the IPD increases or decreases
- What is the impact over TCP?

[Arsene2012] A. Arsene, D. Lopez-Pacheco, and G. Urvoy-Keller. Understanding the network level performance of virtualization solutions. In Cloud Networking (CLOUDNET) 2012, Paris, France.
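A minimal sketch of this measurement idea (our illustration, not the tool from [Arsene2012]; timestamps would come from a packet capture such as tcpdump):

```python
# Given packet arrival timestamps, compute the observed Inter-Packet
# Delays (IPDs) and their jitter (standard deviation).
import statistics

def ipd_jitter(timestamps):
    """Return (mean IPD, jitter) for a list of arrival times in seconds."""
    ipds = [t2 - t1 for t1, t2 in zip(timestamps, timestamps[1:])]
    return statistics.mean(ipds), statistics.stdev(ipds)

# Sender emits with a constant IPD of 1 ms; the hypervisor delays some packets.
sent = [i * 0.001 for i in range(5)]
received = [0.0000, 0.0012, 0.0019, 0.0031, 0.0040]  # illustrative values
print(ipd_jitter(sent))      # jitter ~ 0 at the sender
print(ipd_jitter(received))  # jitter > 0 after the hypervisor
```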
Testbed Deployment
Physical machines:
- 2 Dell servers
- Processors: 8 cores
- Memory: 12 GB
- OSes: CentOS 6.2, VMware ESXi 5.0, and XenServer 6.0
- 2 Network Interface Cards (NICs) per server
Virtual machines:
- Maximum 32 VMs for each tested platform
- Processor: one shared core
- Memory: 256 MB
- OS: CentOS 6.2
- 2 vNICs
Test cases
- Case 1: a single Iperf process in each of multiple VMs vs. multiple Iperf processes in Native (no virtualization)
- Case 2: multiple Iperf processes in a single VM vs. multiple Iperf processes in Native
(Traces are collected at the same points in both setups)
Metrics
We compare Native and virtualized (Xen, VMware) setups under the following metrics:
- Total throughput: are Xen and VMware able to reach the same throughput as non-virtualized systems?
- Fairness: do Xen and VMware achieve the same fairness level as non-virtualized systems?
- Goodput: how much faster is the Native case compared to Xen and VMware?
Throughput
- Single VM: almost the same throughput
  - 125 MB/s corresponds to 1 Gb/s; the maximum achievable goodput is only (1460 B / 1500 B) × 125 MB/s ≈ 121.7 MB/s (worked arithmetic below)
- Multiple VMs: the throughput of the Native case is 2% higher than that of the virtualized cases
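The back-of-the-envelope arithmetic behind these numbers, counting only the 40 B of TCP/IP headers in each 1500 B Ethernet frame (Ethernet framing overhead would lower the bound slightly further):

\[
1\,\mathrm{Gb/s} = \frac{10^9}{8}\,\mathrm{B/s} = 125\,\mathrm{MB/s},
\qquad
\text{max goodput} \approx \frac{1460\,\mathrm{B}}{1500\,\mathrm{B}} \times 125\,\mathrm{MB/s} \approx 121.7\,\mathrm{MB/s}
\]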
Fairness
Jain's Fairness Index:
\[
\mathrm{JFI} = \frac{\left(\sum_{i=1}^{n} x_i\right)^2}{n \sum_{i=1}^{n} x_i^2}
\]
- Fairness is higher in virtualized systems
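A minimal sketch of this index in code (the x_i are the per-flow throughputs; function and variable names are ours):

```python
def jain_fairness_index(throughputs):
    """Jain's Fairness Index: (sum x_i)^2 / (n * sum x_i^2).
    Equals 1.0 for perfectly equal shares, down to 1/n in the worst case."""
    n = len(throughputs)
    return sum(throughputs) ** 2 / (n * sum(x * x for x in throughputs))

print(jain_fairness_index([100, 100, 100, 100]))  # 1.0: perfectly fair
print(jain_fairness_index([400, 0, 0, 0]))        # 0.25: one flow dominates
```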
Zoom on fairness
[Per-flow throughput plots: Xen (10 flows), VMware (10 flows), Native (10 flows)]
- Note: no TCP losses were reported
- Convergence is exclusively due to the systems' schedulers
Finding the root of fairness
- Experimental setup ruled out: same Iperf commands, same OS in the VM and Native cases, tested with different Gb NICs
- Hypervisor? No (multiple VMs and a single VM show similar fairness)
- Virtual switch? Probably yes
Goodput
- How long does it take to transfer 10 GB with n flows or VMs, n ∈ {1, 2, ..., 32}?
- Each process or VM sends 10/n GB
- A flow's lifetime is defined as the elapsed time between the SYN packet and the ACK of the last FIN packet (a sketch of this computation follows)
- Average flow lifetime: Native CentOS seems to outperform the virtualized environments
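A minimal sketch of the lifetime computation under this definition (our illustration, not the authors' analysis scripts; packets are time-ordered (timestamp, TCP-flag-set) tuples from a capture):

```python
# Flow lifetime per the paper's definition: from the first SYN
# to the ACK that follows the last FIN.
def flow_lifetime(packets):
    syn_time = next(t for t, flags in packets if "SYN" in flags)
    last_fin = max(i for i, (_, flags) in enumerate(packets) if "FIN" in flags)
    fin_ack_time = next(t for t, flags in packets[last_fin + 1:] if "ACK" in flags)
    return fin_ack_time - syn_time

trace = [(0.00, {"SYN"}), (0.01, {"SYN", "ACK"}), (0.02, {"ACK"}),
         (9.98, {"FIN", "ACK"}), (9.99, {"FIN", "ACK"}), (10.00, {"ACK"})]
print(flow_lifetime(trace))  # 10.0 seconds
```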
Goodput
- Average flow lifetime: Native seems to outperform virtualization, but...
- the difference between native and virtual systems is biased by unfairness: unfairness decreases the average flow's lifetime (toy example below)
[Timeline diagram: total transfer time T; with fair sharing every flow's lifetime is T, while with unfair sharing flows finish between T/2 and T]
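A toy numeric example of this bias (our numbers): two flows share a link and each sends half the data, so the total transfer time is T either way.

```python
# Fair case: both flows progress at half the link rate and finish together at T.
# Unfair case: flow 1 hogs the link and finishes at T/2; flow 2 then finishes at T.
T = 10.0
fair_lifetimes = [T, T]
unfair_lifetimes = [T / 2, T]
print(sum(fair_lifetimes) / 2)    # 10.0
print(sum(unfair_lifetimes) / 2)  # 7.5: unfairness lowers the average lifetime,
                                  # even though the total time is T in both cases
```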
Goodput: definitely no difference between native and virtual
- The total transfer time is very similar in all cases
Goodput: our explanations
- TCP sends packets in bursts
- The idle time between congestion windows is successfully used to assign resources to other VMs
- When the congestion window fully fills the buffer, there are always data packets waiting to be transferred (toy model below)
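A toy model of this interleaving (our simplification, not from the paper): two senders alternate burst and idle phases; when one sender's idle gap coincides with the other's burst, the link never idles.

```python
# Two bursty senders share one link. Each sender alternates BURST slots
# (has a packet ready) and IDLE slots (waiting for ACKs). The link is busy
# whenever at least one sender has data, so per-sender gaps cost no capacity.
BURST, IDLE = 3, 3  # slots per burst / per idle gap, hypothetical values

def has_data(t, offset):
    """Sender is in a burst phase during BURST of every BURST+IDLE slots."""
    return (t + offset) % (BURST + IDLE) < BURST

slots = 60
busy = sum(1 for t in range(slots) if has_data(t, 0) or has_data(t, BURST))
print(f"link utilization: {busy / slots:.0%}")  # 100%: gaps are covered
```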
Security, Bugs and Issues
- Packet leakage in Xen
- Replication of packets in VMware
- Unfairness in the Native case
Conclusion and Future Work
- Performance is similar between the Native and virtualized cases
- Fairness is higher in virtualized environments
- Future work: study the impact of virtualization on inter-VM traffic exchange; predict the performance of MapReduce in virtualized Data Centers