Masters Project Proposal
Virtual Machine Storage Performance Using SR-IOV
by Michael J. Kopps

Committee Members and Signatures (Approved By / Date):
Advisor: Dr. Jia Rao
Committee Member: Dr. Xiabo Zhou
Committee Member: Dr. C. Edward Chow

Abstract

This paper outlines a proposal to analyze the performance differences of the storage model on KVM virtual machines when using SR-IOV versus the traditional hypervisor-assisted (Dom0-style) storage techniques. SR-IOV presents physical hardware devices directly to the virtual machines and removes the overhead of the hypervisor from disk IO operations. The goal of this research is to measure the degree of performance benefit gained by removing the hypervisor from the disk IO stack.

Introduction

Kernel-based Virtual Machine (KVM) is a virtualization hypervisor which uses QEMU and is rapidly growing in industry adoption for virtualizing servers. The architectures of the underlying hardware this hypervisor runs on vary widely, but there is always a need for the basics: CPU, memory, network, and persistent storage. CPU and memory virtualization can take advantage of hardware features such as Intel's VT-x and VT-d to simplify virtualization by the hypervisor operating system. Persistent storage, which in any computer, virtualized or native, has always been the largest, least expensive per gigabyte, and slowest form of data storage, has struggled to find an easy hardware-accelerated virtualization architecture. On virtualization platforms such as KVM, the storage presented to a guest is in reality either a file on the hypervisor's file system or a partition on a physical disk set up by the hypervisor. These methods have benefits such as simplicity and ease of management, but they also require the hypervisor to be involved in all disk IO operations, adding an extra layer, and therefore extra latency, to the already slow persistent storage. When the storage is really a partition on a disk partitioned by the hypervisor, the overhead is small, as the hypervisor merely needs to forward the requests on to the disk.
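As a concrete illustration, the two traditional backing approaches just described could be configured as in the following sketch. The paths, image names, memory sizes, and device names are hypothetical examples, not details taken from the proposed test system:

```shell
# File-backed storage: create a qcow2 image on the host file system and
# attach it to a guest. Every guest IO passes through QEMU and the host
# file system, adding translation layers and latency.
qemu-img create -f qcow2 /var/lib/libvirt/images/guest1.qcow2 40G

qemu-system-x86_64 -enable-kvm -m 4096 \
    -drive file=/var/lib/libvirt/images/guest1.qcow2,format=qcow2,if=virtio

# Partition-backed storage: hand the guest a raw host partition instead.
# The hypervisor still mediates the IO, but only needs to forward the
# requests to the underlying disk.
qemu-system-x86_64 -enable-kvm -m 4096 \
    -drive file=/dev/sdb1,format=raw,if=virtio
```

In both cases QEMU sits in the data path; the difference is only how much work it must do per request, which motivates the comparison below.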
On the other hand, when the storage is really a file, all of the requests need to be translated and then passed through the file system of the hypervisor, increasing latency. The latter approach has the benefit of easy migration of the storage (and the associated virtual machine), since the file just needs to be copied; the former approach must resize partitions on the destination disk in order to provide room for the migration. However, IO performance will suffer as a result of the additional layers each request must go through.

Single-Root IO Virtualization (SR-IOV) was developed by the PCI-SIG to allow multiple virtual machines to access the same physical PCI-Express device as if each virtual machine had its own dedicated device [1]. This technology is growing in use in network interface cards, where its performance benefits have been widely demonstrated. It has not seen wide adoption in the hard disk host bus adapter (HBA) space, however, and very little is known about the performance benefits of allowing virtual machines to communicate directly with the HBA. This work will set up a sample system with an SR-IOV capable HBA and measure its performance against both the same system without SR-IOV enabled and a system running Linux natively without virtualization. Even with SR-IOV, the hypervisor is not completely out of the IO path, due to the need to handle and route interrupts: the hardware still issues interrupts to the hypervisor operating system, which must then pass them on to its guests. Secondly, since SR-IOV requires direct memory access from the hardware, protection must be provided to prevent the hardware from corrupting or accessing memory belonging to another virtual machine. This partitioning and security may be accomplished using an IOMMU, which is available in newer CPUs from both Intel and AMD.

Previous Work

Dong et al. have performed research in this area using network controllers. Their research has shown a significant performance boost when using hardware virtualization technologies [3]. In another paper, they have shown how to perform live migration when using a pass-through network controller [2]. Further research was performed by Jiuxing Liu at IBM using 10 Gbps network adapters.
This work was very thorough and investigated many of the network driver optimizations available for the device used in the test [4]. Edwards et al. performed research into storage nodes for cloud computing using a custom storage cluster system leveraging SR-IOV and the Xen hypervisor [5]. They showed the SR-IOV solution provided performance at least as good as bare-metal machines accessing the same storage. Their work is aimed at a much larger solution using a centralized RAID engine and hundreds of disks on a SAS topology.

There is very little work in the area of SR-IOV and disk controllers. This work will be groundbreaking in this area and will provide needed information to help researchers as well as end users determine how to create the most efficient and highest performance virtualization platforms possible. Zhang et al. have performed a survey of performance research related to IO in virtualized environments, which supports the idea that offloading as much work as possible to the hardware provides the most benefit to IO performance [6]. Their performance testing on KVM shows high CPU utilization with small-block disk IO operations, indicating the hypervisor is performing a great deal of work on each IO. Their tests show that as the IO size gets larger, the relative overhead gets smaller, and CPU utilization tapers off.

Problem Definition

The problem considered in this proposal is the unknown performance implication of SR-IOV in the storage controller domain. What effect does the hypervisor have on the storage access times experienced by the virtual machines? Can the virtual machines achieve native or near-native performance when using SR-IOV enabled hardware? As a possible addition, can a virtual machine achieve near-native performance when using SR-IOV on both its direct-attached storage and its network interface? Can SR-IOV performance be modified with different schedulers?

Proposed Plan

The proposed solution for the problem this paper introduces can be split into five major phases:

1. Native Linux Performance Collection

To compare the performance of SR-IOV based persistent storage, the performance level of a bare-metal Linux system must first be known. The first test to be run will be against the host operating system without any virtualization enabled. This will provide a baseline to compare all of the other results against.

2. Hypervisor Performance Collection

Control data must then be collected to provide a baseline of performance in a virtualized system against which the SR-IOV controller may be compared. This will involve running IO benchmarks with the disk IO passing through KVM and QEMU into FVD virtual disks [7]. This testing should be conducted with a variable number of guests running on the host system, since most virtualization hypervisors run with 4, 8, or even more guests at once. Tests should be performed in the following configurations: 1 guest, 2 guests, and 4 guests performing benchmarks simultaneously. The IO benchmarks should represent normal server IO profiles using real file systems. These benchmarks should exercise sequential and random block reads and writes, with both big-block and small-block IO, to identify bandwidth and IO-per-second limitations. Data should be collected to identify latency in completing requests across all virtual machines. Bonnie++ is a file-system based benchmark suite aimed at collecting performance data for hard drive and file system performance [8]. Data collection should also purge the file system cache to eliminate benefits only encountered in the test environment.

3. Configuring SR-IOV Storage

The groundbreaking nature of this problem means there is no work, either in the literature or in publicly available documentation, about how to set up and configure an SR-IOV enabled storage controller on a KVM system. LSI SAS controllers can be programmed to provide multiple virtual functions and provide SR-IOV services. The associated drivers will need to be compiled for a KVM system in order to take advantage of the feature, along with performing configuration such as the hard disk assignment to a particular virtual machine.
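While no LSI-specific recipe exists, the generic Linux SR-IOV/VFIO workflow suggests a starting point for this phase. The sketch below is an assumption-laden outline: the PCI addresses, VF count, and vendor:device ID (1000 005b) are placeholders, and an IOMMU is assumed to be enabled (e.g. intel_iommu=on on the kernel command line):

```shell
# Enable four virtual functions on the (hypothetical) SR-IOV capable
# HBA at PCI address 0000:03:00.0.
echo 4 > /sys/bus/pci/devices/0000:03:00.0/sriov_numvfs

# Unbind one VF from its host driver and register its (placeholder)
# vendor:device ID with vfio-pci so a guest can own it directly.
echo 0000:03:00.1 > /sys/bus/pci/devices/0000:03:00.1/driver/unbind
echo 1000 005b > /sys/bus/pci/drivers/vfio-pci/new_id

# Start a guest with the VF passed through. The guest boots from an
# ordinary image but sees the VF as its own storage controller and
# talks to the benchmark disks without QEMU in the data path.
qemu-system-x86_64 -enable-kvm -m 4096 \
    -device vfio-pci,host=0000:03:00.1 \
    -drive file=/var/lib/libvirt/images/guest1.qcow2,format=qcow2,if=virtio
```

Whether the LSI firmware and driver actually expose sriov_numvfs this way is exactly what this phase must determine.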
An alternative to KVM is Xen, which is available for various Linux distributions and is very widely used [9]. Xen is primarily a para-virtualized system and would be tested without using hardware virtualization, to contrast against KVM [10]. Performance collection on Xen will be performed if time allows.

4. SR-IOV Performance Collection

Once the bare-metal and hypervisor performance numbers have been collected, testing will focus on the performance when using an SR-IOV enabled hard drive controller. The same tests will be run in this configuration, again in fully virtualized configurations and across the same variety of guest counts.

5. Data Analysis

At this point of the research, all of the collected data will be collated and analyzed for indications of performance variations between the various configurations.

Conclusion

The goal of this research is to determine whether there is a significant performance benefit to removing the hypervisor from the IO path for disk access. It will also compare disk access between KVM and a bare-metal machine to see what performance differences exist between those configurations. This work will also identify performance optimizations that can drive future work in the virtual server storage system, with the goal of improving SR-IOV disk access in production systems.

Detailed Task List

Tasks Already Complete
- Set up host operating system
- Compile KVM SR-IOV driver
- Learn how to configure SR-IOV storage

In Progress
- Identify benchmark software (due 23 August)
- Learn to optimally use this software and create test scripts (due 31 September)
- Collect bare-metal performance (due 7 September)
- Set up guest operating systems (due 14 September)

Future
- Configure guests with KVM-provided storage and collect performance data (due 21 September)
- Configure guests with SR-IOV storage and collect performance data (due 28 September)
- Modify KVM scheduling algorithms and collect SR-IOV performance data (due 5 October)
- Perform analysis of collected data and draw conclusions about limitations and ideas to improve performance (due 26 October)

Deliverables

The primary output from this work will be a report documenting the process used and the summary performance data. The secondary output will be detailed notes of the process, the collected data, and the scripts used to collect the raw performance data, if any.

Works Cited

[1] PCI-SIG, "Single Root I/O Virtualization," PCI-SIG. [Online]. Available: http://www.pcisig.com/specifications/iov/single_root/. [Accessed 17 April 2013].
[2] E. Zhai, G. D. Cummings and Y. Dong, "Live migration with pass-through device for Linux VM," in OLS'08: The 2008 Ottawa Linux Symposium, Ottawa, Ontario, Canada, 2008.
[3] Y. Dong, X. Yang, J. Li, G. Liao, K. Tian and H. Guan, "High performance network virtualization with SR-IOV," Journal of Parallel and Distributed Computing, vol. 72, no. 11, pp. 1471-1480, November 2012.
[4] J. Liu, "Evaluating standard-based self-virtualizing devices: A performance study on 10 GbE NICs with SR-IOV support," in Parallel & Distributed Processing (IPDPS), 2010 IEEE International Symposium on, 2010.
[5] N. Edwards, M. Watkins, M. Gates, A. Coles, E. Deliot, A. Edwards, A. Fischer, P. Goldsack, T. Hancock, D. McCabe, T. Reddin, J. Sullivan, P. Toft and L. Wilcock, "High-speed storage nodes for the cloud," in Utility and Cloud Computing (UCC), 2011 Fourth IEEE International Conference on, Victoria, NSW, 2011.
[6] B. Zhang, R. Lai, L. Yang, Z. Li, Y. Luo and Z. Wang, "A Survey on I/O Virtualization and Optimization," in ChinaGrid Conference, Guangzhou, 2010.
[7] QEMU, "Features/FVD," 28 January 2011. [Online]. Available: http://wiki.qemu.org/features/fvd. [Accessed 8 August 2013].
[8] R. Coker, "Bonnie++," 10 December 2008. [Online]. Available: http://www.coker.com.au/bonnie++/. [Accessed 12 August 2013].
[9] S. J. Vaughan-Nichols, "Amazon EC2 cloud is made up of almost half-a-million Linux servers," ZDNet. [Online]. Available: http://www.zdnet.com/blog/open-source/amazon-ec2-cloud-is-made-upof-almost-half-a-million-linux-servers/10620. [Accessed 17 April 2013].
[10] Kernel Based Virtual Machine, "Kernel Based Virtual Machine." [Online]. Available: http://www.linux-kvm.org/page/main_page. [Accessed 10 June 2013].