
1 Interoperability Between Windows and Linux Virtualization Solutions

A Microsoft/Novell White Paper

1.1 Why You Should Read this White Paper

Virtualization technology is a critically necessary solution to the ever-increasing demand to do more with less in your data center. This white paper begins by introducing you to the benefits of virtualization and explains the terminology used within the contexts of the Hyper-V and Xen virtualization technologies. The paper continues by diving into the concepts that are fundamental to any successful implementation of either Xen or Hyper-V.

NOTE: Xen virtualization is included with the SUSE Linux Enterprise Server code.

These concepts include:

- Virtualization architecture
- Virtual networking
- Interoperability

These are the areas of critical importance where most implementation mistakes are made. Having a solid understanding of these virtualization concepts will prevent the frustrations that you are most likely to encounter.

The goal of this white paper is to support the deployment of Microsoft/Novell bi-directional virtualization solutions. With the help of this white paper, you will have a good idea of what you need to know if you plan to deploy Hyper-V host machines with SUSE Linux Enterprise Server guest machines, or SLES-based Xen host machines with Windows server guests.

This paper is not an entire training solution. Both Novell and Microsoft have additional courseware that digs deeper into their respective server technologies. We assume you already have substantial understanding of and experience with Linux and/or Windows operating systems, or that you plan to acquire this through training and hands-on practice. Also, implementation guides that deal specifically with SUSE Linux Enterprise Server guest deployment on Hyper-V hosts and Windows server guests on Xen hosts are, or will shortly be, available to further guide you in your deployment efforts.

1.2 What Virtualization Is and Why You Should Care

This first section discusses the following:

- Section 1.2.1, What is Virtualization?
- Section 1.2.2, What Are the Benefits of Virtualization?
- Section 1.2.3, What is the History of Virtualization?

1.2.1 What is Virtualization?

Simply put, the process of virtualization replaces a direct interface between a resource user and a physical resource with an abstracted, virtualized (software-mediated) connection. In an unvirtualized (physical or bare-metal) environment, the resource user interacts directly with a resource, such as a server. In a virtualized environment, this direct interaction is replaced by an interaction with a virtualized resource, such as a virtual machine instance of a server.

Virtualization is a broad term that refers to the abstraction of computer resources or of a computing environment. Virtualization provides a platform that presents a logical view of physical computing resources to an operating system, so that multiple operating systems may share a single computer, each unaware that it does not have complete control of the underlying hardware. Virtualization can also refer to making one physical resource appear, with somewhat different characteristics, as one logical resource.

The term virtualization has been widely used for many years to refer to many different aspects and scopes of computing, from entire networks to individual capabilities or components. The common theme of all virtualization technologies is the hiding of underlying technical characteristics by creating a logical interface that is indistinguishable from its physical counterpart.

1.2.2 What Are the Benefits of Virtualization?

System virtualization enables you to consolidate systems, workloads, and operating environments, optimize resource use, and increase server flexibility and responsiveness. System virtualization creates many virtual systems within a single physical system, and it can be implemented using hardware partitioning or hypervisor technology.

Hardware partitioning subdivides a physical server into fractions, each of which can run an operating system. These fractions are typically created with coarse units of allocation, such as whole processors or physical boards. This type of virtualization allows for hardware consolidation, but it does not offer the full benefits of resource sharing and emulation provided by hypervisors.

Hypervisors use a thin layer of code in software or firmware to achieve fine-grained, dynamic resource sharing, and they coordinate the low-level interaction between virtual machines and physical hardware. Because hypervisors provide the greatest level of flexibility in how virtual resources are defined and managed, they are the primary technology of choice for system virtualization.

System virtualization yields the following benefits:

- Server consolidation
- Dynamic provisioning
- Virtual hosting
- Reliability, availability, and serviceability
- Workload management

Server Consolidation

Server consolidation is the most common reason for virtualization. Enterprises are experiencing server sprawl as low-cost x86 servers make their way into data centers. Application provisioning on dedicated servers and provisioning hardware capacity for peak loads have led to unprecedented server proliferation and very low utilization (10-20%).

Virtualization technology can play a significant role in enabling logical server consolidation. It enables enterprises to achieve higher utilization and manage their hardware resources better, thus reducing Total Cost of Ownership (TCO). Note that some workloads do not lend themselves to virtualization, by themselves or in combination with other workloads, so the key word is intelligent virtualization. A scan of the network needs to be followed by a thorough analysis of the different workloads and their purposes. Novell offerings such as the PlateSpin products and other components of the Intelligent Workload Management strategy help in this task.

Dynamic Provisioning

Dynamic provisioning refers to the flexible provisioning of additional resources in response to the dynamically changing resource requirements of different applications, without the need to buy new hardware. Workload Quality of Service (QoS) refers to the ability to assign different priorities to different workloads, depending on whether they need to be online at any given moment. Clusters can be reconfigured so that the servers in a cluster may be switched dynamically and preemptively from one kind of workload to another.

Virtual Hosting

Virtual hosting is a method for hosting multiple domain names on a computer using a single IP address. This is usually done to share computer resources, such as memory and processor cycles, thus using the computer more efficiently. One widely used application is shared web hosting. Shared web hosting prices are lower than those of a dedicated web server, since many customers can be hosted on a single piece of hardware.

Reliability, Availability, and Serviceability

Reliability, availability, and serviceability are the real benefits of virtualization. They are realized by decoupling the workload from its underlying hardware and allowing it to be moved around. The ability to move or migrate live virtual machines around a network of PCs or servers is the feature that is driving much of server virtualization these days (after an initial wave of virtualization-enabled consolidation of workloads). This is especially important for hardware upgrades and maintenance, where the maintenance windows can be very short and restricted. Virtualization allows workloads to be migrated between physical systems at any time of the day. Rolling software upgrades can be performed in this environment, and even a mix of production and test environments can be achieved by virtually isolating one environment from the other.

Workload Management

Isolating workloads from the underlying hardware, as already explained, allows for QoS prioritization and the deployment of vertical applications on the same hardware. Workload management can be particularly useful in dealing with legacy compatibility. For example, it allows you to keep legacy operating systems (such as NetWare or Windows 2000) running as one workload alongside others, without having to worry about driver updates and hardware changes. In terms of management tools and administration tasks performed on a workload, a physical machine and a virtual machine are no different from each other.

1.2.3 What is the History of Virtualization?

Virtualization is not a new technology; IBM has been using virtualization on mainframes for decades. Since 1998, when virtualization was introduced on the x86 platform, software and hardware virtualization on x86 has mirrored what was done on mainframes in the past.

In 1959, at the first international conference on information processing, Christopher Strachey gave a paper, Time-sharing in Large Fast Computers, which broke new ground at the time.

In 1961, CTSS (Compatible Time Sharing System) was written by a team at MIT. The CTSS supervisor provided a number of virtual machines, each of which was an IBM 7094. One of these virtual machines was the background machine and had access to tape drives. The other virtual machines were foreground users: these virtual machines could run regular 7094 machine language code at 7094 speed, and could also execute one extra instruction, which invoked a large set of supervisor services.

Multics (Multiplexed Information and Computing Service) was an extremely influential early time-sharing operating system. The system could grow in size by simply adding more of the appropriate resource: computing power, main memory, disk storage, and so on. Separate access control lists on every file provided flexible information sharing and complete privacy when needed.

The IBM M44/44X was an experimental computer system from the mid 1960s. It was based on an IBM 7044 (the 'M44') and simulated multiple 7044 virtual machines (the '44X'), using both hardware and software.

The IBM System/360 Model 67 was an important IBM mainframe model in the late 1960s. It included features to facilitate time-sharing applications, notably virtual memory hardware and 32-bit addressing.

Unix was designed to be portable, multi-tasking, and multi-user in a time-sharing configuration.

The heart of IBM's VM architecture is a control program, or hypervisor, called VM-CP (usually called CP; sometimes, ambiguously, called VM). It runs on the physical hardware and creates the virtual machine environment. VM-CP provides full virtualization of the physical machine, including all I/O and other privileged operations.

Mach is one of the earliest examples of a microkernel. The Mach virtual memory management system appears in modern BSD-derived UNIX systems, such as FreeBSD.

Exokernel is an operating system kernel developed by the MIT Parallel and Distributed Operating Systems group, and also a class of similar operating systems. The idea behind exokernels is to force as few abstractions as possible on developers, enabling them to make as many decisions as possible about hardware abstractions.

VMware software first brought the concept of virtualization to the x86 platform. VMware provides a completely virtualized set of hardware to the guest operating system and virtualizes the hardware for a video adapter, a network adapter, and hard disk adapters. The host provides pass-through drivers for guest USB, serial, and parallel devices. In this way, VMware virtual machines become portable between computers, because every host looks nearly identical to the guest.

Linux on System z is the collective term for the Linux operating system compiled to run on IBM mainframes, especially System z machines.

Xen is an open source hypervisor for the x86, x86-64, Itanium, and PowerPC 970 architectures. It allows several guest operating systems to execute concurrently on the same computer hardware.

Hyper-V is Microsoft's hypervisor-based server virtualization technology for x64 architectures. It provides the ability to run multiple operating systems in parallel on a single server.

1.3 Virtualization Terminology

This section discusses the terminology used within the context of virtualization. The following categories of terminology are covered:

- Section 1.3.1, Basic Terminology
- Section 1.3.2, Xen Terminology
- Section 1.3.3, Hyper-V Terminology
- Section 1.3.4, Xen/Hyper-V Equivalent Terms

1.3.1 Basic Terminology

Simulation: Duplicates both the behavior and the exact internal state of a system. Examples include software simulation and prototype hardware. Simulation is mostly used in development environments, such as developing new hardware and testing its behavior.

Emulation: Duplicates only the behavior. Examples include instruction set emulators and OS emulators. Emulation can be used in production environments, but it has the drawback that something is duplicated or spoofed, which has a negative impact on performance.

Ring (or Protection Ring): Computer operating systems provide different levels of access to resources. A protection ring is one of two or more hierarchical levels or layers of privilege within the architecture of a computer system. Rings are arranged in a hierarchy from most privileged (most trusted, usually numbered zero) to least privileged (least trusted, usually with the highest ring number). On most operating systems, Ring 0 is the level with the most privileges and interacts most directly with the physical hardware, such as the CPU and memory.

Pseudo Ring: A VT-enabled CPU can create a pseudo, or virtualized, set of privileged rings.

Privilege: The level of trust, interaction, and access to resources, such as CPU and memory, given to a protection ring.

Virtualization: Abstracts from the underlying implementation and minimizes or avoids any duplication. Virtualization allows software to run natively, but in a safe manner.

Hypervisor: The entity that enables virtualization by abstracting the underlying hardware and creating virtual machines. Virtual machines in this sense are containers of the abstracted hardware in which operating systems run. A hypervisor is a virtualization platform that allows multiple operating systems to execute on a single host computer. Its primary job is to provide isolated execution environments and to control access to the underlying hardware resources.

Type I Hypervisor: A hypervisor that runs directly on top of the hardware and does not require a host OS, because its function is similar to an operating system kernel. The Type I hypervisor does not perform I/O operations, but acts as the traffic police for I/O, directing traffic so that the OS can perform the I/O. The I/O virtualization happens in a virtual machine, and the virtualization management tools run in that virtual machine as well.

Type II Hypervisor: A hypervisor that is an application or device driver that must run within a host operating system. With a Type II hypervisor, the virtualization layer (hypervisor plus host OS) is responsible for mediation of access to the underlying hardware, sharing access to the hardware with virtual drivers, and VM management. In this architecture, I/O virtualization happens in the host OS, as does the virtualization management.

Workload: An integrated stack of application, middleware, and operating system that accomplishes a computing task. It is portable and platform agnostic, and it is able to run in physical, virtual, or cloud computing environments.

Headless System: A server with no monitor, keyboard, or mouse attached.

Memory Ballooning: Allows virtual machine guests (domains or partitions) to dynamically change memory usage up or down (inflating or deflating) at runtime.

1.3.2 Xen Terminology

Virtualization (Virtual Machine) Terminology

Full-virtual mode: A virtual machine mode that can run a native, unmodified operating system by virtualizing and emulating all underlying hardware devices.

Paravirtual mode: A virtual machine mode that can run a modified operating system, which cooperates with the hypervisor through a set of APIs. The operating system has to be aware of this paravirtualization.

Progressively paravirtual mode: A virtual machine that is a hybrid of a full and a paravirtual machine. All hardware is emulated, and the paravirtual APIs are also present. (Also known as enlightened mode.)

Domain0 (Dom0): A special privileged domain that serves as an administrative interface to Xen. Dom0 implements some of the functionality that is thought of as logically a function of the hypervisor. This allows the Xen hypervisor to be a thin layer. Dom0 is the first domain launched when the system is booted, and it can be used to create and configure all other regular guest domains. Dom0 is itself a virtual machine, as it performs the I/O sharing and management functions needed by a Type I hypervisor.

DomainU (DomU): Unprivileged guest domains that are created on and managed through Dom0.

Operating System Virtualization Terminology

Native OS: A typical operating system that has not been modified to run in a paravirtual machine and must run in fully virtual mode or on bare metal.

Paravirtual OS: An operating system that has been modified to run in a paravirtual machine. A paravirtual OS cannot run on bare metal; it can only run in a paravirtual machine, because the only way it can interact with the underlying hardware is through the paravirtual APIs.

Enlightened OS: A native operating system that can run on bare metal but is also aware of the paravirtual APIs that allow it to run in a progressively paravirtual, or enlightened, virtual machine. Example: Windows Server 2008.

Hardware Virtualization Terminology

VT Computer: A computer that contains processor(s) that support virtualization technology, such as Intel VT or AMD-V. CPU-level VT is required for full-virtual and enlightened mode. There are three types of virtualization technology:

- CPU VT: what we typically think of as VT (Intel: VT-x, VT-i; AMD: SVM).
- Chipset VT (Intel: VT-d; generic: IOMMU, Input/Output Memory Management Unit): allows physical devices to be passed into virtual machines.
- Device VT (Intel: VT-c; SR-IOV, Single-Root I/O Virtualization): instructions within the devices themselves; virtual ports are associated with physical ports at the device level.

Non-VT (Legacy) Computer: A computer that does not contain processor(s) that support virtualization technology and therefore can run VMs only in paravirtual mode. The computer may not have VT instruction support in the hardware, or it may have VT technologies built in but disabled in the BIOS. (A quick way to check these CPU capabilities on Linux is sketched at the end of this terminology section.)

Virtualization Roles Terminology

VM Server / vhost / Host Machine: A server running Xen, or some other hypervisor, capable of hosting VMs (that is, Xen plus Dom0). Example: SLES 11 with the Xen packages installed, booted with Xen.

VM / Guest Machine: A virtual machine (DomU).

1.3.3 Hyper-V Terminology

Data Execution Prevention (DEP)

A security feature that prevents a process from executing code from a non-executable memory region. DEP can be hardware-based, software-based, or a combination of the two. Hardware-based DEP requires a CPU that can mark memory pages as non-executable, such as Intel XD (Execute Disable) and AMD NX (No-Execute) processors. Software-based DEP, first introduced in Service Pack 2 for Windows XP, prevents malicious code from exploiting Windows exception-handling mechanisms. Software-based DEP is also available in Windows Server 2003 SP1, Windows Vista, and Windows Server 2008, and it is not dependent upon the CPU's capabilities. Hyper-V requires a CPU that supports hardware-based DEP.

Emulated Devices

Virtualization solutions traditionally emulate legacy devices for their guests. These are usually a motherboard chipset with an IDE controller, a legacy network card, and a video chip. The CPU, on the other hand, is reported as the one present on the physical machine.

The emulation mimics the presence of devices by providing the ports and I/O memory of those devices. This allows the original driver in the guest OS to find its device and load the appropriate driver. The drawback of this approach is frequent transitions to the hypervisor, and further transitions to the parent partition, where the device emulation is handled in the worker process. The emulated device in the worker process then has to translate the device request for the real physical device, call the real device driver in the parent partition, and respond back to the guest, again via the hypervisor. (NOTE: This behavior applies to Xen full virtualization as well, not only to Hyper-V.)

Enlightenments

Modifications to operating system code to make it hypervisor-aware and to change its operation to be more efficient when running as a guest in a hypervisor environment, mainly with regard to kernel synchronization objects.

Hardware-Assisted Virtualization

Modern processors (CPUs) from Intel and AMD include extensions that provide the ability to load a hypervisor virtualization platform in between the computer hardware and the main, or host, operating system. This is currently implemented in the Intel VT and AMD-V lines of processors.

Hypercalls

Programming interfaces that provide APIs that partitioned operating systems use to communicate with the hypervisor.

Hypervisor

A virtualization platform that allows multiple operating systems to execute on a single host computer. Its primary job is to provide isolated execution environments and to control access to the underlying hardware resources. A hypervisor can be a Type 1 (native) or Type 2 (hosted) hypervisor, and either monolithic or microkernel. A Type 1 hypervisor runs directly on a specific hardware platform, similar to an operating system kernel. A Type 2 hypervisor is an application or device driver that must run within a host operating system. The Type 1 hypervisor model provides the highest possible performance for virtual machines, enabling performance that would normally only be possible on physical computer hardware. A monolithic hypervisor requires hypervisor-aware device drivers, whereas a microkernel hypervisor utilizes device drivers executing in a root operating system.

Integration Services

User-mode processes that run in the child partition to provide a level of integration between the parent and child partitions.

Partitions

The unit of isolation within the hypervisor that is commonly referred to as a virtual machine. A partition is allocated physical memory address space and virtual processors. Hyper-V employs three types of partitions:

- Child partition: Created by the parent partition; guest operating systems and applications run in these partitions.

- Parent partition: Manages resources for a set of child partitions.
- Root partition: The controlling partition in which the virtualization stack runs and which owns the hardware devices.

Pass-through Disks

Pass-through disks present an entire physical disk as a virtual disk within a child partition (virtual machine). Data transfer is passed through to an actual physical disk without any processing by the virtualization components. This is in contrast to a virtual disk, where the virtual storage stack relies on a parser component to make the underlying storage, such as a VHD or an ISO file-based image, look like a physical disk to the guest partition. Pass-through disk access is independent of the underlying physical connection (that is, the disk may be direct-attached storage or on a SAN).

Snapshots

Snapshots are point-in-time saved states for virtual machines. Snapshots provide the ability to restore a virtual machine to the state it was in when the snapshot was created.

Synthetic Devices

Synthetic devices are device stacks that do not correspond to a physical device. Synthetic devices use a logical communication channel to communicate with a physical device in the parent partition.

Virtualization

Virtualization is a technique for abstracting the physical characteristics of computing resources and presenting them as logical resources, sometimes with different characteristics, to the operating system(s) that interact with those resources.

Virtual Machines

A virtual machine is a computing environment implemented in software in which a computer's hardware resources are abstracted in such a way that multiple operating systems may execute simultaneously on a single hardware platform. Each operating system is allocated logical instances of the computer's CPUs and other hardware resources, and it is unaware that it is executing in a virtual environment.

1.3.4 Xen/Hyper-V Equivalent Terms

Xen                                         Hyper-V
Hypervisor                                  Hypervisor
Full-virtual mode (or fully virtualized)    Emulated
Paravirtual mode                            (no direct equivalent)
Progressively paravirtual mode              Enlightened
Domain0                                     Parent (or root) partition
DomainU                                     Child partition
Thin Dom0                                   Server Core
Ring 0                                      Ring -1
Paravirtual devices                         Synthetic devices
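Several of the terms above (VT computer, non-VT computer, hardware-based DEP) correspond to CPU feature flags that the Linux kernel exposes in /proc/cpuinfo. The following is a minimal sketch, not part of the original paper, that checks for those flags on a Linux host; keep in mind that a missing flag can also mean the feature exists but is disabled in the BIOS.

    # Minimal sketch: check CPU flags relevant to virtualization on a Linux host.
    # 'vmx' = Intel VT-x, 'svm' = AMD-V, 'nx' = no-execute bit (hardware DEP).
    flags = set()
    with open('/proc/cpuinfo') as f:
        for line in f:
            if line.startswith('flags'):
                flags.update(line.split(':', 1)[1].split())

    print('CPU virtualization (VT-x or AMD-V):', bool(flags & {'vmx', 'svm'}))
    print('Hardware DEP (NX/XD bit):', 'nx' in flags)
    # Note: if a flag is absent, the feature may simply be disabled in the BIOS.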

1.4 Understand Virtualization Architecture

Understanding the underlying architecture of the various virtualization solutions helps you plan your implementation, make installation and configuration decisions, and perform management tasks in the most effective way possible. Planning correctly, making good decisions, and performing the right tasks will help prevent the most common problems that occur in virtualization.

1.4.1 Types of Virtualization Architecture

The following types of architecture are discussed in this white paper:

- Traditional
- Xen
- Hyper-V
- VMware ESX
- KVM
- Container based

Traditional

Traditional virtualization relies on a Type II hypervisor that runs on top of a host operating system. In a Type II hypervisor, the virtualization layer (comprising the hypervisor plus the host OS) is responsible for mediation of access to the underlying hardware, sharing access to the hardware with virtual drivers, and VM management. In this architecture, I/O virtualization happens in the host OS, as does the virtualization management. The vast majority of virtualization products, such as the hosted VMware products, Virtual PC, and others, use this architecture.

Xen

With Xen, the hypervisor runs directly on top of the hardware and does not require a host OS. This is also known as a Type I hypervisor. Xen is a lean Type I hypervisor in that it is only responsible for mediation of access to the underlying hardware, and not for sharing access to that hardware with virtual drivers. No device drivers are loaded into the Xen hypervisor, which makes it compatible with virtually any hardware platform. The hypervisor does not perform I/O operations, but acts as the traffic police for I/O, directing traffic so that the OS can perform the I/O. The I/O virtualization happens in the virtual machine, and the virtualization management tools run there as well (Dom0). Sharing of hardware devices with virtual drivers and VM management are handled by one of the virtual machines.

In a Xen virtualization environment, it is recommended that the management OS running in Domain 0 be as light as possible (for example, without a graphical environment). The reason is that it exists for only one purpose: as the management interface to the hypervisor. This architecture is typically recommended for, and used in, production environments.

Hyper-V

Microsoft's Hyper-V virtualization architecture resembles Xen's Thin Dom0 architecture. It is a Type 1 hypervisor. Hyper-V supports isolation in terms of a partition. A partition is a logical unit of isolation, supported by the hypervisor, in which operating systems execute. A hypervisor instance has to have at least one parent partition running Windows Server 2008. The virtualization stack runs in the parent partition and has direct access to the hardware devices. The parent partition then creates the child partitions, which host the guest OSs. A parent partition creates child partitions using the hypercall API, which is the application programming interface exposed by Hyper-V.
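In both architectures, guests are created and managed from the privileged environment: Dom0 in Xen, the parent partition in Hyper-V. As a small illustration on the Xen side, the following is a hedged sketch (not from the original paper) that uses the libvirt Python bindings from Dom0 to list the running domains; Domain-0 itself appears as domain ID 0.

    import libvirt  # requires the libvirt Python bindings on the Xen host

    conn = libvirt.open('xen:///')          # connect to the local Xen hypervisor from Dom0
    for dom_id in conn.listDomainsID():     # numeric IDs of running domains; 0 is Dom0
        dom = conn.lookupByID(dom_id)
        # info() returns [state, max memory, memory, number of vCPUs, CPU time]
        print(dom_id, dom.name(), dom.info())
    conn.close()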

VMware ESX

With VMware ESX, the hypervisor and host OS are merged to create a fat Type I hypervisor. The hypervisor/OS runs directly on top of the hardware, making it Type I as well. A fat hypervisor is responsible for mediation of access to the underlying hardware and for sharing that hardware with virtual drivers through hardware emulation and/or paravirtual APIs. This requires device drivers to be loaded into the hypervisor, which in turn limits its compatibility with hardware platforms.

KVM

With KVM (Kernel-based Virtual Machine), a kernel module is loaded into the Linux kernel that turns the Linux kernel into a hypervisor. KVM is essentially a Type I hypervisor because it runs directly on top of the hardware. With KVM, the Linux kernel becomes a fat hypervisor because it not only mediates access to the underlying hardware, but also loads physical drivers and shares access to the underlying hardware devices with virtual drivers. Device emulation and VM management are handled by a modified version of QEMU running in user space.

Container Based

With container-based virtualization, no hypervisor is involved. The host OS provides all OS services to each virtual container. This is also called operating system-level virtualization. Examples are Parallels Virtuozzo Containers, OpenVZ, Solaris Zones, LXC (native Linux containers), and FreeBSD Jails. With container-based virtualization, you cannot run an OS different from the one running on the host. Each container is protected from the other containers, but all of them consume a single instance of an OS on the back end. While there are performance advantages, this architecture also severely limits flexibility.

1.5 Understand Interoperability

Whether you plan to deploy SLES virtual machines as guests on Hyper-V hosts or Windows virtual machines as guests on Xen hosts, you need to understand what it takes to make those operating systems work with the best possible performance. Both virtualization technologies have interoperability components that you need to be aware of. These are:

- Section 1.5.1, Hyper-V Linux Integration Components (IC)
- Section 1.5.2, SUSE Linux Enterprise Virtual Machine Driver Pack

1.5.1 Hyper-V Linux Integration Components (IC)

Linux Integration Components (IC) take advantage of the VMBus and synthetic devices provided in Hyper-V to enhance the performance and usability of Linux guests running on Windows servers. The Linux IC code was submitted to the Linux kernel community in July of 2009 and is maintained by Microsoft. Essentially, this is device driver code that was released as open source under GPL v2. The driver code has been and is being updated to enhance interoperability with Linux VMs as later versions of the Linux kernel are released; it is currently part of Linux kernel version 2.6.32. This code is designed so that Linux VMs can run in enlightened mode on Hyper-V. Without this driver code, Linux can still run on Hyper-V, but without the enhanced performance levels.

The drivers included in the Linux IC are:

- VMBus driver (hv_vmbus.c): The VMBus driver is a Linux kernel module. It provides both a lightweight bus driver and library functionality. As a bus driver, it registers with the Linux Driver Model framework (LDM) to provide simple bus and device integration and device tree integration (sysfs). As a library, it implements the VMBus channel protocol and provides an abstraction of channels to its clients (the disk and network VSCs).

- StorVSC driver (hv_storvsc.c): The storage VSC interacts with the Windows storage VSP. The "wire" protocol defined by the storage VSP determines how a VSC interacts with it. The Linux Storage VSC (LSVSC) basically abstracts the Linux I/O stack from needing to understand the storage VSP's protocol. At its upper edge, the LSVSC talks to the Linux SCSI subsystem, which sees the LSVSC as a SCSI low-level driver (LLD) in Linux parlance. It passes SCSI requests (scsi_cmnd) to the LSVSC, which in turn converts them into the "wire" format understood by the Windows storage VSP (VSTOR_PACKET). The bottom edge of the LSVSC talks to the Linux VMBus (LVMBUS), which in turn talks to the Windows VMBus to route the packets to the storage VSP.

- BlkVSC driver (hv_blkvsc.c): BlkVSC (BlockVSC) supports "fast boot" and fast access to IDE disks. To enable enlightened IDE support for enhancing the performance of Linux when virtualized on Windows, a separate BlockVSC component is used as a Linux block device driver. Like StorVSC, the BlockVSC component is comprised of an upper-edge wrapper that interfaces with the Linux block layer and a lower edge that goes through the infrastructure modules. The infrastructure modules communicate with Hyper-V through the Linux VMBus.

- NetVSC driver (hv_netvsc.c): The network VSC sends and receives network traffic between a Linux guest and the Hyper-V host, which has the direct connection to the physical network. The mechanism used to accomplish this is the Remote NDIS (RNDIS) protocol. Thus, the communication that flows between the VSP and the VSC primarily happens over the RNDIS protocol, which is then packaged and forwarded as payload to the other side over the NetVSP/VMBus protocol.

Additionally, the Linux ICs provide the following functionality:

- SMP support for up to 4 virtual CPUs
- Integrated shutdown, which provides the ability to gracefully shut down Linux from the Hyper-V console (management partition)
- Timesync, which keeps the time in the guest OS synchronized with the management partition

The following illustrates the conceptual architecture of the Linux IC providing services to improve the performance of Linux guests on a Hyper-V host. The actual Linux IC modules are indicated in yellow. In this architecture:

- VSP stands for Virtualization Service Provider
- VSC stands for Virtualization Service Client
- The VMBus is the data channel between the VSP and the VSC

Characteristics and functionality of the Linux IC modules, VMBus, and VSCs:

- Communication with the parent partition is done through the Linux VMBus.
- VSCs are the Linux drivers for the synthetic devices (SCSI, IDE, and Ethernet) provided by Hyper-V. They translate between Linux I/O requests and Hyper-V VSC commands.
- Devices are registered with the Linux Driver Model (LDM).
- Every VSC module contains two portions:
  - Driver Interface Mapper (DIM), released as open source: this portion of the VSC component interacts with the Linux kernel like a regular Linux device driver.
  - VSC Core, released as open source: the core portion of the VSC module is implemented based on the protocol of the corresponding VSP at the Hyper-V host. The VSC core interacts with the VSP via the VMBus interface.
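A quick way to confirm from inside a Linux guest that the integration drivers described above are actually loaded is to inspect the kernel's module list. The following is a minimal sketch, not part of the original paper; the module names follow the 2.6.32-era staging drivers (hv_vmbus, hv_storvsc, hv_blkvsc, hv_netvsc) and may differ on other kernel versions.

    # Minimal sketch: check that the Hyper-V Linux IC modules are loaded in a guest.
    expected = ['hv_vmbus', 'hv_storvsc', 'hv_blkvsc', 'hv_netvsc']

    with open('/proc/modules') as f:
        loaded = set(line.split()[0] for line in f)

    for mod in expected:
        status = 'loaded' if mod in loaded else 'MISSING'
        print('%-12s %s' % (mod, status))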

1.5.2 SUSE Linux Enterprise Virtual Machine Driver Pack

Xen uses a back-end/front-end virtual driver model. The back-end runs in Dom0 and provides the multiplexing of the virtual I/O requests across the physical devices. This back-end can be either an emulated device model or a paravirtual device model. The front-end driver is loaded into the guest, which sees it simply as a driver that can be used to do I/O with a hardware device. For emulated devices, this front-end driver is actually the same physical driver that would be used to access physical hardware of the same type. For paravirtual devices, it is a special driver designed to talk to the paravirtual device back-end. The guest does not know or care that the front-end driver is a virtual driver and is not talking exclusively to a physical piece of hardware. The front-end driver passes all I/O requests to the back-end running in Dom0. The paravirtual device back-end provides significantly better performance than the emulated device back-end.

The SUSE Linux Enterprise Virtual Machine Driver Pack (VMDP) contains paravirtual disk and network front-end device drivers for Windows that allow Windows guests to take advantage of the higher-performance paravirtual devices (gray in the diagram) that are on the Xen bus, rather than the emulated devices (purple in the diagram) on the emulated PCI bus. These drivers enable hosting of unmodified guests on top of SUSE Linux Enterprise Server 10 (SLES 10) SP2 and Xen 3.2 or later, though the recommended host is SUSE Linux Enterprise Server 11 (SP1) and Xen 3.3 or later.

The Virtual Machine Driver Pack is provided as an executable installer for Windows at Novell Downloads (http://download.novell.com). The single installer executable contains drivers for Windows 2000, Windows XP, Windows 2003 (both 32-bit and 64-bit), Windows Vista (both 32-bit and 64-bit), and Windows 2008 (both 32-bit and 64-bit). You execute the VMDP binary in your Windows virtual machine, and the Windows version is detected automatically.

When you execute the installer, the utility determines the version of Windows and installs the appropriate drivers or upgrades existing drivers. The following drivers/services are provided for full-virtual VMs:

- Xenbus driver: makes the paravirtual bus visible to Windows.
- Paravirtual disk controller driver: represented as a SCSI controller, so Windows thinks it is connected to SCSI disks.
- Paravirtual LAN adapter driver.
- Balloon (memory) driver: allows Windows guests to safely respond to dynamic resizing of the memory allocated to the guest.
- Safe shutdown service: allows a Windows guest to respond safely to a shutdown/restart command from Dom0.

1.6 Understand Virtual Networking

The abstraction of physical resources that takes place in virtualization extends to the configuration of networking. Network cards and switches are also virtualized, and the configurations that are possible can be quite complex. Virtual network configuration problems are the most common support issue.

This diagram depicts the initial state of a virtual host (a parent partition or Dom0), with no virtual machine guest (child partition or DomU) running yet, connected to a physical network through its physical network card. Nothing has to be abstracted at this point. In this initial state, all protocols are bound to the physical network card, which provides direct connectivity for the server to the physical network.

The networking model changes, however, when virtual machines are created on the virtual host: a virtual network is created. The virtual networking model requires a virtual machine to use a virtual network adapter, which presents itself to the virtual machine's operating system with a MAC address and can be configured through the operating system's normal tools as if it were a physical network adapter. The virtual adapter can then connect to a virtual switch or virtual bridge. Before getting into the details of virtual adapters, you need to understand the basics of virtual networks.

1.6.1 Understand Virtual Networks

Networks that virtual machines are connected to are called virtual networks. They allow virtual machines to communicate with each other and with physical machines. A virtual network is configured on the virtualization host machine. You can configure simple bridges or complex routed environments.

The virtual network in the above graphic is an example of a simple bridge between a physical network interface on the host and a virtual bridge. The bridge is connected to a virtual network interface on the virtual machine. The virtual network is configured as a bridge in the virtualization host machine. When configuring a virtual network, you are creating a network switch in the host. The virtual network can be configured on Xen using configuration files for static configuration, or using scripts or libvirt for dynamic configuration. The virtual network can be configured on Hyper-V using the Virtual Network Manager tool.

A virtual bridge is like a network switch. It functions at layer 2 of the OSI model and forwards frames out through its ports based on the destination MAC address. An IP address, if there is one, is configured on the bridge. The physical interfaces are switch ports that are connected to the physical network. Because of this, these physical interfaces do not have IP addresses. This causes the host to see and use the bridge as its network interface (as opposed to the physical interface). (NOTE: This explanation applies to Xen; it may not apply exactly to Hyper-V.)
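As noted above, a Xen virtual network can be configured statically through configuration files or dynamically through scripts or libvirt. Purely as an illustration, the sketch below (not from the original paper) registers an existing Dom0 bridge with libvirt so that guests can reference it by name; the bridge name br0 and the network name are placeholders, and the forward mode='bridge' element requires a reasonably recent libvirt release. Older Xen hosts typically create the bridge itself with the distribution's network scripts instead.

    import libvirt

    bridge_net_xml = """
    <network>
      <name>shared-br0</name>      <!-- placeholder network name -->
      <forward mode='bridge'/>     <!-- attach guests to an existing host bridge -->
      <bridge name='br0'/>         <!-- br0 is assumed to already exist in Dom0 -->
    </network>
    """

    conn = libvirt.open('xen:///')
    net = conn.networkDefineXML(bridge_net_xml)  # persistent network definition
    net.create()                                 # start the virtual network now
    net.setAutostart(1)                          # start it on every host boot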

1.6.2 Virtual Network Adapters

Xen and Hyper-V each provide network adapters for the guest virtual machine in their own way, and both serve as good examples of the virtual network concept. You need to understand virtual network adapters, using the following types as examples:

- Hyper-V network adapters
- Xen network adapters

Hyper-V Network Adapters

With Hyper-V there are two types of network adapters that can be used by guests: a Legacy Network Adapter and a Network Adapter.

1. The Legacy Network Adapter is an emulated adapter (Intel 21140 PCI) that is available to guests that either cannot take advantage of Integration Services, or must have connectivity to the physical network to download and install prerequisites before they can take advantage of Integration Services (for example, Windows XP x86 must download and install Service Pack 3).

2. A Network Adapter is a synthetic device that can only be used after Integration Services are installed in non-enlightened guests. Enlightened guests already have the necessary components installed in the operating system to begin taking advantage of this type of Network Adapter.

The default is to configure a Network Adapter when creating a new virtual machine. If a Legacy Network Adapter needs to be added, it must be done after the virtual machine is created. This is accomplished by selecting the virtual machine in the Hyper-V Management interface and modifying the Settings by using the Add Hardware process.
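Hyper-V exposes the legacy-versus-synthetic choice through the virtual machine's settings. On the Xen side, described next, the corresponding choice between an emulated and a paravirtual adapter can be expressed in the guest's libvirt interface definition. The fragments below are a hedged sketch only (the bridge name is a placeholder, and the paravirtual model name netfront applies to libvirt's Xen driver); they would go inside the <devices> element of the guest definition.

    # Hedged sketch: libvirt <interface> fragments for a Xen guest, held as Python strings.

    emulated_nic = """
    <interface type='bridge'>
      <source bridge='br0'/>       <!-- placeholder bridge in Dom0 -->
      <model type='rtl8139'/>      <!-- emulated Realtek 8139; e1000 is another option -->
    </interface>
    """

    paravirtual_nic = """
    <interface type='bridge'>
      <source bridge='br0'/>
      <model type='netfront'/>     <!-- Xen paravirtual network front-end -->
    </interface>
    """

    print(emulated_nic, paravirtual_nic)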

Xen Network Adapters

With Xen there are also two different types of virtual network adapters that can be used by guests, depending on whether the guest is paravirtual or full-virtual/enlightened:

1. Emulated: Available to full-virtual and enlightened machines. Emulates a PCI network interface card, selected from the following: Realtek 8139, Intel e100/e1000, or AMD PCnet32. These are the default for full-virtual and enlightened guests.

2. Paravirtual: A higher-performance virtual network interface. Available to paravirtual, full-virtual, and enlightened guests. These are the default for paravirtual guests, but they require special drivers for use in full-virtual and enlightened guests.

When a DomU is created, a new virtual switch port (vif) is created on the virtual network (bridge) in Dom0. The virtual network interface in the guest is then connected to the virtual network via this virtual switch port. These steps affect only the general network connectivity. The IP configuration inside the unprivileged domain is done separately, with DHCP or a static network configuration. The following graphic illustrates the relationship of the various interfaces involved in Xen virtual networking.

1.6.3 Virtual Network Types

You can configure your virtual network in a variety of ways. When using Xen or Hyper-V you will see different terminology, but the following information provides examples to help you understand the concepts.

Virtual Network Types in Hyper-V

External Network: An External network binds to a physical adapter in the parent partition to allow guest connectivity to a physical network connected to the parent partition. You can bind only one External network per physical adapter. If multiple External networks are needed, additional physical adapters have to be installed in the Hyper-V server. An External network is required to access the Internet, or to connect to other organizational resources that do not reside in the parent partition. Configuring an External connection creates a new network connection that unbinds all the protocols from the physical network card and binds the Microsoft Virtual Network Switch Protocol. The new network connection can be used by the parent partition to regain access to the physical network.

Internal Network: An Internal virtual network provides connectivity between the guest, the parent partition, and other child partitions on the same Hyper-V server. In Virtual Server 2005 R2, this type of host connectivity required the installation and configuration of the Microsoft Loopback Adapter; Hyper-V does not require the loopback adapter. As with an External virtual network, an Internal virtual network adds another network connection in the Network Connections interface. This connection can be configured to be on the same logical network as a virtual machine, thereby providing connectivity between the child and parent partitions for purposes such as exchanging data and transferring files.

Private Network: A Private virtual network provides connectivity between guests only. There is no external communication, and there is no communication with the parent partition. Additionally, no new network connection is added to the Network Connections interface.

Virtual Network Types in Xen

Virtual networks in Xen can be named any way the administrator desires, but by default all virtual networks are named brX (where X is the number of the virtual network: br0, br1, br2, and so on), regardless of what type of virtual network they are.

Bridged/Shared Interfaces: Virtual networks where all virtual machines are connected to the physical LAN. For simple bridged networks, the interfaces in the vhost are shared. By default the name of a bridged network is brX (where X is the number of the shared interface). VMs appear to be on the same LAN as the VM server and are visible to the outside world. All traffic leaving the VM server, and any VMs connected to these bridges, travels across these bridges before going out on the wire. For bridged networks on bonded interfaces, the bridge is attached to a bonded interface in the vhost.

Bridged networks on bonded interfaces are essentially the same as a normal bridged network. However, these bridged networks can now take advantage of NIC fail-over and/or aggregation.

With shared interfaces, by default, the name of the interface in the vhost is ethX (where X is the number of the shared interface). Virtual networks with shared interfaces have the same features and behavior as normal bridged networks. The physical interface is renamed to pethX and the bridge is named ethX. This is useful if applications expect an interface to be named ethX.
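Because the renamed physical interfaces (pethX) and the bridges that take their names can be confusing, it helps to be able to see which interfaces are enslaved to which bridge in Dom0. The following is a minimal sketch, not from the original paper, that reads the kernel's bridge information from sysfs; it shows roughly the same information as the brctl show command.

    import os

    # Minimal sketch: list Linux bridges and their member ports via sysfs.
    SYS_NET = '/sys/class/net'
    for iface in sorted(os.listdir(SYS_NET)):
        brif = os.path.join(SYS_NET, iface, 'brif')
        if os.path.isdir(brif):                  # only bridges have a brif directory
            ports = sorted(os.listdir(brif))     # e.g. peth0 plus one vifX.Y per guest NIC
            print('%s: %s' % (iface, ', '.join(ports) or '(no ports)'))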

Bridges with VLANs: Bridged virtual networks can also be attached to VLANs. You can also attach bridges to VLANs that have been created on bonded network interfaces.

Host-Only: Virtual networks where virtual machines can only see each other and the VM server. In a host-only network, you have a bridge that is treated as a network interface in the vhost but is not connected to a physical LAN.

A suggested naming convention for the bridge is hostonlyX (where X is the number of that type of network: hostonly0, hostonly1, and so on). VMs connected to host-only networks can communicate with each other and the vhost, but not with the outside world, and the VMs are not visible to the outside world.

Routed / NAT: Virtual networks where virtual machines can see each other and the VM server directly, and reach the outside world only through routing or NAT in the vhost. A routed or NAT network works like a host-only network where IP forwarding (and optionally masquerading) has been enabled in the kernel and iptables. Suggested names are natX and routedX (where X is the number of that type of network: nat0, routed1, and so on).

The VMs connected to NAT networks can communicate with each other, the vhost, and also the outside world via NAT routing in Dom0.

Private: Virtual networks where virtual machines can only see each other, and not the VM server. A private network has a bridge that has no IP address in the vhost. A suggested name is privateX (where X is the number of that type of network: private0, private1, and so on). In a private network the VMs can communicate with each other, but not with the vhost or the outside world.

DMZ: Virtual networks where virtual machines can only see each other, and not the VM server. A DMZ network is connected to a physical interface but has no address in the vhost. A suggested name is dmzX (where X is the number of that type of network: dmz0, dmz1, and so on).

The VMs can communicate with machines connected to that network, but not with the outside world unless another VM is configured to route their traffic. DMZ networks are useful for isolating VM network traffic away from the vhost while still allowing the VMs to communicate with the outside world. A VM can be created to act as a firewall/router, protecting all machines from the outside world.

1.7 Understand Virtual Storage

Xen supports passing physical disks into a virtual machine in two ways. The preferred way for storage devices other than tape drives is for the physical disk to be passed into the guest as a Xen Virtual Disk (xvd). In Linux these disks appear as a new type of block device, rather than being presented as IDE or SCSI. In Windows they are presented either as IDE disks or, if the Virtual Machine Driver Pack drivers are installed, as SCSI disks. Xen Virtual Disks do not support removable media, so if an optical drive is passed through to a guest as an xvd, it cannot be ejected. Xen Virtual Disks are, however, hot-pluggable and can be hot-removed and hot-added while the guest is running. The Virtual Machine Driver Pack must be installed in the Windows guest to support the hot-pluggability feature.

The following block devices can be presented to a guest as a Xen Virtual Disk:

- CD/DVD-ROM drives
- Entire disks (IDE, SATA, SCSI, USB, FireWire)
- Individual disk partitions
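Because Xen Virtual Disks are hot-pluggable, a physical disk can be handed to a running guest without a reboot. The following is a hedged sketch using the libvirt Python bindings; it is not from the original paper, the guest name and device paths are placeholders, and the equivalent xm/xl disk specification would be phy:/dev/sdb,xvdb,w.

    import libvirt

    disk_xml = """
    <disk type='block' device='disk'>
      <source dev='/dev/sdb'/>        <!-- physical disk on the VM host (placeholder) -->
      <target dev='xvdb' bus='xen'/>  <!-- appears in the guest as a Xen Virtual Disk -->
    </disk>
    """

    conn = libvirt.open('xen:///')
    dom = conn.lookupByName('sles11-guest')   # hypothetical guest name
    dom.attachDevice(disk_xml)                # hot-add the xvd while the guest runs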