SOLARIS OPERATING SYSTEM HARDWARE VIRTUALIZATION PRODUCT ARCHITECTURE. Chien-Hua Yen, ISV Engineering
Sun BluePrints On-Line, November 2007. Revision 1.0, 11/27/07.
Table of Contents

Introduction
  Hardware Level Virtualization
  Scope
Section 1: Background Information
  Virtual Machine Monitor Basics
    VMM Requirements
    VMM Architecture
  The x86 Processor Architecture
  SPARC Processor Architecture
Section 2: Hardware Virtualization Implementations
  Sun xvm Server
    Sun xvm Server Architecture Overview
    Sun xvm Server CPU Virtualization
    Sun xvm Server Memory Virtualization
    Sun xvm Server I/O Virtualization
  Sun xvm Server with Hardware VM (HVM)
    HVM Operations and Data Structure
    Sun xvm Server with HVM Architecture Overview
  Logical Domains
    Logical Domains (LDoms) Architecture Overview
    CPU Virtualization in LDoms
    Memory Virtualization in LDoms
    I/O Virtualization in LDoms
  VMware
    VMware Infrastructure Overview
    VMware CPU Virtualization
    VMware Memory Virtualization
    VMware I/O Virtualization
Section 3: Additional Information
  VMM Comparison
  References
  Terms and Definitions
  Author Biography
Chapter 1 Introduction

In the IT industry, virtualization is a mechanism of presenting a set of logical computing resources over a fixed hardware configuration so that these logical resources can be accessed in the same manner as the original hardware configuration. The concept of virtualization is not new. First introduced in the late 1960s on mainframe computers, virtualization has recently become popular as a means to consolidate servers and reduce the costs of hardware acquisition, energy consumption, and space utilization. The hardware resources that can be virtualized include computer systems, storage, and the network.

Server virtualization can be implemented at different levels of the computing stack, including the application level, operating system level, and hardware level:

An example of application level virtualization is the Virtual Machine for the Java platform (Java Virtual Machine, or JVM)[1]. The JVM implementation provides an application execution environment as a layer between the application and the OS, removing application dependency on OS-specific APIs and hardware-specific characteristics.

OS level virtualization abstracts OS services such as file systems, devices, networking, and security, and provides a virtualized operating environment to applications. Typically, OS level virtualization is implemented by the OS kernel. Only one instance of the kernel runs on the system, and it provides multiple virtualized operating environments to applications. Examples of OS level virtualization include Solaris Containers technology, Linux VServers, and FreeBSD Jails. OS level virtualization has less performance overhead and better system resource utilization than hardware level virtualization. Since one OS kernel is shared among all virtual operating environments, isolation among the virtualized operating environments is only as good as the OS provides.
Hardware level virtualization, discussed in detail in this paper, has become popular recently because of increasing CPU power and low utilization of CPU resources in the IT data center. Hardware level virtualization allows a system to run multiple OS instances. With less sharing of system resources than OS level virtualization, hardware virtualization provides stronger isolation of operating environments.

The Solaris OS includes bundled support for application and OS level virtualization with its JVM software and Solaris Containers offerings. Sun first added support for hardware virtualization in the Solaris 10 11/06 release with Sun Logical Domains (LDoms) technology, supported on Sun servers which utilize UltraSPARC T1 or UltraSPARC T2 processors. VMware also supports the Solaris OS as a guest OS in its VMware Server and Virtual Infrastructure products starting with the Solaris 10 1/06 release.

[1] The terms "Java Virtual Machine" and "JVM" mean a Virtual Machine for the Java(TM) platform.

In October 2007, Sun announced the Sun xvm family of products, which includes the Sun xvm Server and the Sun xvm Ops Center management system:

Sun xvm Server includes support for the Xen open source community work [6] on the x86 platform and support for LDoms on the UltraSPARC T1/T2 platform.

Sun xvm Ops Center is a management suite for the Sun xvm Server.

Note: In this paper, in order to distinguish the discussion of x86 and UltraSPARC T1/T2 processors, Sun xvm Server is used specifically to refer to the Sun hardware virtualization product for the x86 platform, and LDoms is used to refer to the Sun hardware virtualization product for the UltraSPARC T1 and T2 platforms.

The hardware virtualization technology and the new products built around it have expanded the options and opportunities for deploying servers with better utilization, more flexibility, and enhanced functionality. In reaping the benefits of hardware virtualization, IT professionals also face the challenge of operating within the limitations of a virtualized environment while delivering the same level of service as the physical operating environment. Meeting this requirement takes a good understanding of virtualization technologies, CPU architecture, and software implementations, as well as an awareness of their strengths and limitations.

Hardware Level Virtualization

Hardware level virtualization is a mechanism of virtualizing system hardware resources such as CPU, memory, and I/O, creating multiple execution environments on a single system. Each of these execution environments runs an instance of the operating system. A hardware level virtualization implementation typically consists of several virtual machines (VMs), as shown in Figure 1.
A layer of software, the virtual machine monitor (VMM), manages system hardware resources and presents an abstraction of these resources to each VM. The VMM runs in privileged mode and has full control of the system hardware. A guest operating system (GOS) runs in each VM. The relationship of a GOS to its VM is analogous to that of a program to a process, with the VMM playing the role that the OS plays for processes.
Figure 1. In hardware level virtualization, the VMM software manages hardware resources and presents an abstraction of these resources to one or more virtual machines.

Hardware resource virtualization can take the form of sharing, partitioning, or delegating:

Sharing: Resources are shared among VMs. The VMM coordinates the use of resources by VMs. For example, the VMM may include a CPU scheduler that runs the threads of VMs based on a pre-determined scheduling policy and VM priority.

Partitioning: Resources are partitioned so that each VM gets the portion of resources allocated to it. Partitioning can be dynamically adjusted by the VMM based on the utilization of each VM. Examples of resource partitioning include the ballooning memory technique employed in Sun xvm Server and VMware, and the allocation of CPU resources in Logical Domains technology.

Delegating: With delegating, resources are not directly accessible by a VM. Instead, all resource accesses are made through a control VM that has direct access to the resource. I/O device virtualization is normally implemented via delegation.

The distinctions and boundaries between these virtualization methods are often not clear. For example, sharing may be used for one component and partitioning for others, and together they make up an integral functional module.

Benefits of Hardware Level Virtualization

Hardware level virtualization allows multiple operating systems to run on a single server system. This ability offers many benefits that are not available in a single-OS server. These benefits can be summarized in three functional categories:

Workload Consolidation: According to Gartner [17], Intel servers running at 10 percent to 15 percent utilization are common. Many IT organizations buy a new server every time they deploy a new application.
With virtualization, computers no longer have to be dedicated to a particular task. Applications and users can share computing resources, remaining blissfully unaware that they are doing so. Companies can shift computing resources around to meet demand at a given time, and get by with less infrastructure overall. When used for consolidation, virtualization can also save
hardware and maintenance expenses, floor space, cooling costs, and power consumption.

Workload Migration: Hardware level virtualization decouples the OS from the underlying physical platform resources. A guest OS state, along with the user applications running on top of it, can be encapsulated into an entity and moved to another system. This capability is useful for migrating a legacy OS from an old, under-powered server to a more powerful server while preserving the investment in software. When a server needs maintenance, a VM can be dynamically migrated to a new server with no down time, further enhancing availability. Changes in workload intensity can be addressed by dynamically shifting underlying resources to the starving VMs. Legacy applications that ran natively on a server continue to run on the same OS inside a VM, leveraging the existing investment in applications and tools.

Workload Isolation: Workload isolation includes fault and security isolation. Multiple guest OSes run independently, and thus a software failure in one VM does not affect other VMs. However, the VMM layer introduces a single point of failure that can bring down all VMs on the system. A VMM failure, although potentially catastrophic, is less probable than a failure in an OS because the complexity of the VMM is much less than that of an OS. Multiple VMs also provide strong security isolation among themselves, with each VM running an independent OS. Security intrusions are confined to the VM in which they occur. The boundary around each VM is enforced by the VMM, and inter-domain communication, if provided by the VMM, is restricted to specific kernel modules only.

One distinct feature of hardware level virtualization is the ability to run multiple instances of heterogeneous operating systems on a single hardware platform.
This feature is important for the following reasons:

Better security and fault containment among application services can be achieved through OS isolation.

Applications written for one OS can run on a system that supports a different OS.

Better management of system resource utilization is possible among the virtualized environments.

Scope

This paper explores the underlying hardware architecture and software implementation for enabling hardware virtualization. Great emphasis has been placed on the CPU hardware architecture limitations for virtualizing CPU services and their software workarounds. In addition, this paper discusses in detail the software architecture for implementing the following types of virtualization:
CPU virtualization uses the processor privileged mode to control resource usage by the VM, and relays hardware traps and interrupts to VMs.

Memory virtualization partitions physical memory among multiple VMs and handles page translations for each VM.

I/O virtualization uses a dedicated VM with direct access to I/O devices to provide device services.

The paper is organized into three sections.

Section I, Background Information, contains information on VMMs and provides details on the x86 and SPARC processors:

Virtual Machine Monitor Basics on page 9 discusses the core of hardware virtualization, the VMM, as well as requirements for the VMM and several types of VMM implementations.

The x86 Processor Architecture on page 21 describes features of the x86 processor architecture that are pertinent to virtualization.

SPARC Processor Architecture on page 29 describes features of the SPARC processor that affect virtualization implementations.

Section II, Hardware Virtualization Implementations, provides details on the Sun xvm Server, Logical Domains, and VMware implementations:

Sun xvm Server on page 39 discusses a paravirtualized Solaris OS that is based on an open source VMM implementation for x86 [6] processors and is planned for inclusion in a future Solaris release.

Sun xvm Server with Hardware VM (HVM) on page 63 continues the discussion of Sun xvm Server for the x86 processors that support hardware virtual machines: Intel-VT and AMD-V.

Logical Domains on page 79 discusses Logical Domains (LDoms), supported on Sun servers that utilize UltraSPARC T1 or T2 processors, and describes Solaris OS support for this feature.

VMware on page 97 discusses the VMware implementation of the VMM.

Section III, Additional Information, contains a concluding comparison, references, and appendices:

VMM Comparison on page 109 presents a summary of the VMM implementations discussed in this paper.
References on page 111 provides a comprehensive listing of related references.

Terms and Definitions on page 113 contains a glossary of terms.

Author Biography on page 117 provides information on the author.
Section I Background Information

Chapter 2: Virtual Machine Monitor Basics (page 9)
Chapter 3: The x86 Processor Architecture (page 21)
Chapter 4: SPARC Processor Architecture (page 29)
Chapter 2 Virtual Machine Monitor Basics

At the heart of hardware level virtualization is the VMM. The VMM is a software layer that abstracts computer hardware resources so that multiple OS instances can run on a physical system. Hardware resources are normally controlled and managed by the OS; in a virtualized environment the VMM takes this role, managing and coordinating hardware resources. There is no clear boundary, by definition, between an OS and the VMM. The division of functions between the OS and the VMM can be influenced by factors such as processor architecture, performance, the OS itself, and non-technical requirements such as ease of installation and migration.

Certain VMM requirements exist for running multiple OS instances on a system. These requirements, discussed in detail in the next section, stem primarily from processor architecture designs that are inherently an impediment to hardware virtualization. Based on these requirements, two types of VMMs have emerged, each with distinct characteristics in defining the relationship between the VMM and an OS. This relationship determines the privilege level of the VMM and an OS, and the control and sharing of hardware resources.

VMM Requirements

A software program communicates with the computer hardware through instructions. Instructions, in turn, operate on registers and memory. If any of the instructions, registers, or memory involved in an action is privileged, that instruction results in a privileged action. Sometimes an action that is not necessarily privileged attempts to change the configuration of resources in the system, and would subsequently impact other actions whose behavior or result depends on the configuration of resources. The instructions that result in such operations are called sensitive instructions.
In the context of the virtualization discussion, a processor's instructions can be classified into three groups:

Privileged instructions are those that trap if the processor is in non-privileged mode and do not trap if it is in privileged mode.

Sensitive instructions are those that change or reference the configuration of resources (memory), affect the processor mode without going through the memory trap sequence (page fault), or reference sensitive registers whose contents change when the processor switches to run another VM.

Non-privileged and non-sensitive instructions are those that do not fall into either of the categories described above.
Sensitive instructions have a major bearing on the virtualizability of a machine [1] because of their system-wide impact. In a virtualized environment, a GOS should only execute non-privileged and non-sensitive instructions. If sensitive instructions are a subset of privileged instructions, it is relatively easy to build a VM because all sensitive instructions will result in a trap. In this case a VMM can be constructed to catch all traps that result from the execution of sensitive instructions by a GOS. All privileged and sensitive actions from VMs would be caught by the VMM, and resources could be allocated and managed accordingly (a technique called trap-and-emulate). A GOS's trap handler could then be called by the VMM trap handler to perform the GOS-specific actions for the trap.

If a sensitive instruction is a non-privileged instruction, its execution by one VM will go unnoticed. Robin and Irvine [3] identified several x86 instructions in this category. These instructions cannot be safely executed by a GOS, as they can impact the operations of other VMs or adversely affect the operation of its own GOS. Instead, these instructions must be substituted by a VMM service. The substitution can take the form of an API for the GOS to call, or a dynamic conversion of these instructions to explicit processor traps.

Types of VMM

In a virtualized environment, the VMM controls the hardware resources. VMMs can be categorized into two types, based on this control of resources:

Type I maintains exclusive control of hardware resources.

Type II leverages the host OS by running inside the OS kernel.

The Type I VMM [3] has several distinct characteristics: it is the first software to run (besides the BIOS and the boot loader), it has full and exclusive control of system hardware, and it runs in privileged mode directly on the physical processor.
The GOS on a Type I VMM implementation runs in a less privileged mode than the VMM to avoid conflicts in managing the hardware resources. An example of a Type I VMM is Sun xvm Server. Sun xvm Server includes a bundled VMM, the Sun xvm Hypervisor for x86. The Sun xvm Hypervisor for x86 is the first software, besides the BIOS and boot loader, to run during boot, as shown in the GRUB menu.lst file:

    title Sun xvm Server
      kernel$ /boot/$isadir/xen.gz
      module$ /platform/i86xpv/kernel/$isadir/unix /platform/i86xpv/kernel/$isadir/unix
      module$ /platform/i86pc/$isadir/boot_archive
The GRUB bootloader first loads the Sun xvm Hypervisor for x86, xen.gz. After the VMM gains control of the hardware, it loads the Solaris kernel, /platform/i86xpv/kernel/$isadir/unix, to run as a GOS. Sun's Logical Domains and VMware's Virtual Infrastructure 3 [4] (formerly known as VMware ESX Server), described in detail in Chapter 7, Logical Domains, on page 79 and Chapter 8, VMware, on page 97, are also Type I VMMs.

A Type II VMM typically runs inside a host OS kernel as an add-on module, and the host OS maintains control of the hardware resources. The GOS in a Type II VMM is a process of the host OS. A Type II VMM leverages the kernel services of the host OS to access hardware, and intercepts a GOS's privileged operations and performs these operations in the context of the host OS. Type II VMMs have the advantage of preserving an existing installation by allowing a new GOS to be added to a running OS. An example of a Type II VMM is VMware's VMware Server (formerly known as VMware GSX Server).

Figure 2 illustrates the relationships among hardware, VMM, GOS, host OS, and user applications in virtualized environments.

Figure 2. Virtual machine monitors vary in how they support guest OS, host OS, and user applications in virtualized environments.

VMM Architecture

As discussed in VMM Requirements on page 9, the VMM performs some of the functions that an OS normally does: namely, it controls and arbitrates CPU and memory resources, and provides services to upper layer software for sensitive and privileged operations. These functions require the VMM to run in privileged mode and the OS to relinquish the privileged and sensitive operations to the VMM.
In addition to processor and memory operations, I/O device support also has a large impact on VMM architecture.
VMM in Privileged Mode

A processor typically has two or more privilege modes. The operating system kernel runs in the privileged mode, while user applications run in a non-privileged mode and trap into the kernel when they need to access system resources or services from the kernel. The GOS normally assumes it runs in the most privileged mode of the processor. Running a VMM in a privileged mode can be accomplished with one of the following three methods:

Deprivileging the GOS: This method usually requires a modification to the OS to run at a lower privilege level. For x86 systems, the OS normally runs at protection ring 0, the most privileged level. In Sun xvm Server, ring 0 is reserved to run the VMM. This requires the GOS to be modified, or paravirtualized, to run outside of ring 0 at a lower privilege level.

Hyperprivileging the VMM: Instead of changing the GOS to run at lower privilege, another approach taken by the chip vendors is to create a hyperprivileged processor mode for the VMM. The Sun UltraSPARC T1 and T2 processors' hyperprivileged mode [2], Intel-VT's VMX-root operation (see [7] Volume 3B, Chapter 19), and AMD-V's VMRUN-Exit state (see [9] Chapter 15) are examples of hyperprivileged processor modes for VMM operations.

Running both the VMM and GOS in the same privileged mode: It is possible to have both the VMM and GOS run in the same privileged mode. In this case, the VMM intercepts all privileged and sensitive operations of a GOS before passing them to the processor. For example, VMware allows both the GOS and the VMM to run in privileged mode. VMware dynamically examines each instruction to decide whether the processor state and the segment reversibility (see Segmented Architecture on page 23) allow the instruction to be executed directly without the involvement of the VMM. If the GOS is in privileged mode or the code segment is non-reversible, the VMM performs the necessary conversions of the core execution path.
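In all of these arrangements, the net effect is the same: the VMM intercepts a guest's privileged operations and emulates them against per-VM virtual state rather than letting them touch the hardware directly. The trap-and-emulate idea can be sketched as a toy dispatcher; all class and instruction names here are illustrative, not taken from any real VMM.

```python
# Toy trap-and-emulate dispatcher: a privileged instruction executed by a
# deprivileged guest traps into the VMM, which emulates its effect on the
# guest's *virtual* CPU state instead of the real hardware.

class VirtualCPU:
    def __init__(self):
        self.interrupts_enabled = True   # the guest's virtual interrupt flag

class VMM:
    def __init__(self):
        self.vcpus = {}                  # per-VM virtual CPU state

    def handle_trap(self, vm_id, instruction):
        """Entered when a guest instruction traps in non-privileged mode."""
        vcpu = self.vcpus.setdefault(vm_id, VirtualCPU())
        if instruction == "cli":         # guest tries to disable interrupts:
            vcpu.interrupts_enabled = False   # only the virtual flag changes;
        elif instruction == "sti":            # real interrupts stay under
            vcpu.interrupts_enabled = True    # VMM control
        else:
            raise NotImplementedError(instruction)
        return vcpu

vmm = VMM()
vcpu = vmm.handle_trap("vm0", "cli")
assert vcpu.interrupts_enabled is False  # VM0's virtual state, hardware untouched
```

The key design point, as the text notes, is that this scheme only works when every sensitive instruction actually traps; instructions that silently execute in non-privileged mode must instead be removed from the guest, as described next.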
Removing Sensitive Instructions in the GOS

Privileged and sensitive operations are normally executed by the OS kernel. In a virtualized environment, the GOS has to relinquish the privileged and sensitive operations to the VMM. This is accomplished by one of the following approaches:

Modifying the GOS source code to use the VMM services for handling sensitive operations (paravirtualization): This method is used by Sun xvm Server and Sun's Logical Domains (LDoms). Sun xvm Server and LDoms provide a set of hypercalls for an OS to request VMM services. The VMM-aware Solaris OS uses these hypercalls to replace its sensitive instructions.
Dynamically translating the GOS sensitive instructions in software: As described in a previous section, VMware uses binary translation to replace the GOS sensitive instructions with VMM instructions.

Dynamically translating the GOS sensitive instructions in hardware: This method requires the processor to provide a special mode of operation that is entered when a sensitive instruction is executed in a reduced privilege mode.

The first approach, which involves modifying the GOS source code, is called paravirtualization because the VMM provides only partial virtualization of the processor. The GOS must replace its sensitive and privileged operations with the VMM service. The remaining two approaches provide full virtualization to the VM, enabling the GOS to run without modification. In addition to OS modification, performance requirements, processor architecture design, tolerance of a single point of failure, and support for legacy OS installations have an impact on the design of VMM architecture.

Physical Memory Virtualization

Memory management by the VMM involves two tasks: partitioning physical memory for VMs, and supporting page translations in a VM. Each OS assumes physical memory starts from page frame number (PFN) 0 and is contiguous to the size configured for that VM. An OS uses physical addresses in operations like page table updates and Direct Memory Access (DMA). In reality, the memory exported to a VM may not start from PFN 0 and may not be contiguous. The virtualization of physical addresses is provided in the VMM by creating another layer of addressing, the machine address (MA). Within a GOS, a virtual address (VA) is used by applications, and a physical address (PA) is used by the OS in DMA and page tables. The VMM maps a PA from a VM to an MA, which is used on the hardware. The VMM maintains translation tables, one for each VM, for mapping PAs to MAs.
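The effect of this extra addressing layer can be illustrated with a small sketch. This is a toy model of the per-VM PA-to-MA tables, not any real VMM's data structure; the frame numbers are invented.

```python
# Toy model of physical-memory partitioning: each VM sees a "physical"
# address space that starts at PFN 0 and is contiguous, while the VMM backs
# it with whatever machine page frames (MPFNs) are actually free.

def build_pa_to_ma_table(free_mpfns, num_pages):
    """Allocate machine frames for one VM; guest PFNs 0..n-1 map onto them."""
    return {pfn: free_mpfns.pop() for pfn in range(num_pages)}

free_mpfns = list(range(1000, 1100))        # machine frames owned by the VMM
vm0_table = build_pa_to_ma_table(free_mpfns, 4)   # VM0 sees PFNs 0..3
vm1_table = build_pa_to_ma_table(free_mpfns, 4)   # VM1 also sees PFNs 0..3

assert set(vm0_table) == {0, 1, 2, 3}       # both guests start at PFN 0...
assert not set(vm0_table.values()) & set(vm1_table.values())
                                            # ...yet share no machine frame
```

Both guests believe they own memory beginning at PFN 0, which is exactly the illusion the MA layer exists to preserve.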
Figure 3 depicts the scheme for partitioning machine memory into per-VM physical memory.
Figure 3. Example physical-to-machine memory mapping.

A ballooning technique [5] has been used in some virtualization products to achieve better utilization of physical memory among VMs. The idea behind the ballooning technique is simple. The VMM controls a balloon module in a GOS. When the VMM wants to reclaim memory, it inflates the balloon to increase pressure on memory, forcing the GOS to page out memory to disk. If the demand for physical memory decreases, the VMM deflates the balloon in a VM, enabling the GOS to claim more memory.

Page Translation Virtualization

Access to the processor's page translation hardware is a privileged operation, and this operation is performed by the privileged VMM. Exactly what the VMM needs to perform depends on the processor architecture. For example, x86 hardware automatically loads translations from the page table into the Translation Lookaside Buffer (TLB); the software has no control over loading page translations into the TLB, so the VMM is responsible for updating the page table that is seen by the hardware. The SPARC processor uses software, through traps, to load page translations into the TLB; a GOS maintains its page tables in its own memory, and the VMM gets page translations from the VM and loads them into the TLB.

VMMs typically use one of the following two methods to support page translations:

Hypervisor calls: The GOS makes a call to the VMM for page translation operations. This method is commonly used by paravirtualized OSes, as it provides better performance.

Shadow page table: The VMM maintains an independent copy of the page tables, called shadow page tables, derived from the guest page tables. When a page fault occurs, the VMM propagates changes made to the GOS's page table into the shadow page table.
This method is commonly used by VMMs that support full virtualization, as the GOS continues to update its own page table and the synchronization of the guest
page table and the shadow page table is handled by the VMM when page faults occur.

Figure 4 shows three different page translation implementations in the Solaris OS on x86 and SPARC platforms:

1. The paravirtualized Sun xvm Server uses the following approach on x86 platforms: [1] The GOS uses the hypervisor call method to update the page tables maintained by the VMM.

2. The Sun xvm Server with HVM and VMware use the following approach: [2a] The GOS maintains its own guest page table. The synchronization between the guest page table and the hardware page table (shadow page table) is handled by the VMM when page faults occur. [2b] The x86 CPU loads the page translation from the hardware page table into the TLB.

3. On SPARC systems, the Solaris OS uses the following approach for Logical Domains: [3a] The GOS maintains its own page table. The GOS passes an entry from the page table as an argument to the hypervisor call that loads the translation into the TLB. [3b] The VMM gets the page translation from the GOS and loads the translation into the TLB.

Figure 4. Page translation schemes used on x86 and SPARC architectures.

The memory management implementations for Sun xvm Server, Sun xvm Server with HVM, VMware, and Logical Domains using these mechanisms are discussed in detail in later sections of this paper.
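The shadow page table method folds the guest's VA-to-PA mapping and the VMM's PA-to-MA mapping into a single VA-to-MA table that the hardware actually walks. A minimal sketch of the synchronization step, using plain dictionaries in place of real MMU structures (all page numbers are invented for illustration):

```python
# Sketch of the two address layers behind a shadow page table:
#   guest page table:  VA page -> guest "physical" page (maintained by GOS)
#   pa_to_ma:          guest PA page -> machine page (maintained by VMM)
#   shadow page table: VA page -> machine page (what the MMU walks)

guest_page_table = {0x1000: 0x0}     # guest maps VA page 0x1000 to its PFN 0
pa_to_ma = {0x0: 0x7F000}            # VMM backs guest PFN 0 with machine frame

shadow_page_table = {}               # starts empty; filled on demand

def handle_page_fault(va):
    """VMM page-fault handler: fold both mappings into the shadow table."""
    pa = guest_page_table[va]        # what the guest believes it mapped
    ma = pa_to_ma[pa]                # where that page really lives
    shadow_page_table[va] = ma       # hardware-visible VA -> MA translation
    return ma

assert handle_page_fault(0x1000) == 0x7F000
```

This is why the method suits full virtualization: the guest keeps updating its own table with no knowledge of the MA layer, and the VMM reconciles the two views lazily, at page-fault time.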
I/O Virtualization

I/O devices are typically managed by a special software module, the device driver, running in the kernel context. Because of the vast variety of device types and device drivers, the VMM either includes few device drivers or leaves device management entirely to the GOS. In the latter case, because of existing device architecture limitations (discussed later in this section), devices can only be exclusively managed by one VM. This constraint creates some challenges for I/O access by a VM, and limits the following:

What devices are exported to a VM

How devices are exported to a VM

How each I/O transaction is handled by a VM and the VMM

Consequently, I/O presents the most challenges in the areas of compatibility and performance for virtual machines. In order to explain what devices are exported and how they are exported, it is first necessary to understand the options available for handling I/O transactions in a VM. There are, in general, three approaches to I/O virtualization, as illustrated in Figure 5:

Direct I/O (VM1 and VM3)

Virtual I/O using I/O transaction emulation (VM2)

Virtual I/O using device emulation (VM4)

Figure 5. Different I/O virtualization techniques used by virtual machine monitors.

For direct I/O, the VMM exports all or a portion of the physical devices attached to the system to a VM, and relies on VMs to manage the devices. A VM that has direct I/O access uses the existing driver in the GOS to communicate directly with the device. VM1 and VM3 in Figure 5 have direct I/O access to devices. VM1 is also a special I/O VM that provides virtual I/O for other VMs, such as VM2, to access devices.
Virtual I/O is made possible by controlling the device types exported to a VM. There are two different methods of implementing virtual I/O: I/O transaction emulation (shown in VM2 in Figure 5) and device emulation (shown in VM4).

I/O transaction emulation requires virtual drivers on both ends for each type of I/O transaction (data and control functions). As shown in Figure 5, the virtual driver on the client side (VM2) receives I/O requests from applications and forwards the requests through the VMM to the virtual driver on the server side (VM1); the virtual driver on the server side then sends the request out to the device. I/O transaction emulation is typically used in paravirtualization because the OS on the client side needs to include the special drivers that communicate with their corresponding drivers in the OS on the server side, and needs to add kernel interfaces for inter-domain communication using the VMM services. However, it is possible to have PV drivers in a non-paravirtualized OS (full virtualization) for better I/O performance. For example, Solaris 10, which is not paravirtualized, can include PV drivers on an HVM-capable system to get better performance than that achieved using device emulation drivers such as QEMU. (See Sun xvm Server with HVM I/O Virtualization (QEMU) on page 71.) I/O transaction emulation may cause application compatibility issues if the virtual driver does not provide all the data and control functions (for example, ioctl(2)) that the existing driver does.

Device emulation provides an emulation of a device type, enabling the existing driver for the emulated device in a GOS to be used. The VMM exports emulated device nodes to a VM so that the existing drivers for the emulated devices in the GOS are used. By doing this, the VMM controls the driver used by a GOS for a particular device type; for example, using the e1000g driver for all network devices.
Thus, the VMM can focus on the emulation of the underlying hardware through one driver interface. Driver accesses to I/O registers and ports in a GOS, which result in a trap due to an invalid address, are caught and converted into accesses to the real device hardware. VM4 in Figure 5 uses native OS drivers to access emulated devices exported by the VMM. Device emulation is, in general, less efficient, and more limited in the platforms it supports, than I/O transaction emulation. Because device emulation does not require changes in the GOS, it is typically used to provide full virtualization to a VM. Virtual I/O, unlike direct I/O, requires additional drivers in either the I/O VM or the VMM to provide I/O virtualization. This constraint:
- Limits the types of devices that are made available to a VM
- Limits device functionality
- Causes significant I/O performance overhead
While virtualization provides full application binary compatibility, I/O becomes a trouble area in terms of application compatibility and performance in a VM. One
solution to the I/O virtualization issues is to allow VMs to directly access I/O, as shown by VM3 in Figure 5. Direct I/O access by VMs requires additional hardware support to ensure that device accesses by a VM are isolated and restricted to resources owned by the assigned VM. In order to understand the industry effort to allow an I/O device to be shared among VMs, it is necessary to examine device operations from an OS point of view. The interactions between an OS and a device consist, in general, of three operations:
1. Programmed I/O (PIO): host-initiated data transfer. In PIO, a host OS maps a virtual address to a piece of device memory and accesses the device memory using CPU load/store instructions.
2. Direct Memory Access (DMA): device-initiated data transfer without CPU involvement. In DMA, a host OS writes an address of its memory and the transfer size to a device's DMA descriptor. After receiving an enable-DMA instruction from the host driver, the device performs the data transfer at a time it chooses and uses interrupts to notify the host OS of DMA completion.
3. Interrupt: a device-generated asynchronous event notification. Interrupts are already virtualized by all VMM implementations, as shown in the later discussions of Sun xvm Server, Logical Domains, and VMware.
The challenge of I/O sharing among VMs therefore lies in the device handling for PIO and DMA. To meet these challenges, the PCI-SIG has released a suite of IOV specifications for PCI Express (PCIe) devices, in particular the Single Root I/O Virtualization and Sharing (SRIOV) specification [35] for device sharing and PIO operation, and the Address Translation Services (ATS) specification [30] for DMA operation.
Device Configuration and PIO
A PCI device exports its memory to the host through Base Address Registers (BARs) in its configuration space.
A device's configuration space is identified in the PCI configuration address space, as shown in Figure 6.
[Figure 6 shows the layout of a PCI configuration address: Reserved bits, Bus Number, Device Number, Function Number, and Register Number fields.]
Figure 6. PCI configuration address space.
A PCI device can have up to 8 physical functions (PFs). Each PF has its own 256-byte configuration header. The BARs of a PCI function, which are 32 bits wide, are located at offsets 0x10-0x24 in the configuration header. The host gets the size of the memory region mapped by a BAR by writing a value of all 1's to the BAR and then reading the value back. The address written to a BAR is the assigned starting address of the memory region mapped to the BAR.
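The sizing handshake can be illustrated with a small simulation. `ConfigSpace`, the 64 KB BAR, and the `vf_bar_addr` helper are inventions of this sketch (the helper anticipates the SRIOV virtual-function address formula discussed next); real hardware implements the same arithmetic in the BAR's read-only alignment bits:

```python
BAR0_OFFSET = 0x10   # BARs occupy offsets 0x10-0x24 in the 256-byte config header

class ConfigSpace:
    """Simulated PCI function with a single memory BAR of the given size."""
    def __init__(self, size):
        self.size = size           # must be a power of two
        self.bar = 0

    def write_bar(self, value):
        # Hardware hardwires the alignment bits to zero: only address bits at or
        # above log2(size) are writable (low 4 bits are flag bits, masked here).
        self.bar = value & ~(self.size - 1) & 0xFFFFFFF0

    def read_bar(self):
        return self.bar

def probe_bar_size(cfg):
    """Standard PCI sizing: write all 1's, read back, invert, and add one."""
    cfg.write_bar(0xFFFFFFFF)
    val = cfg.read_bar() & 0xFFFFFFF0      # ignore the low flag bits
    return ((~val) + 1) & 0xFFFFFFFF

def vf_bar_addr(vf1_addr, aperture, x):
    # SRIOV: addr(VFx, BARa) = addr(VF1, BARa) + (x - 1) * (VF BARa aperture size)
    return vf1_addr + (x - 1) * aperture

cfg = ConfigSpace(64 * 1024)
print(hex(probe_bar_size(cfg)))   # 0x10000 (64 KB)
```

Because each VF aperture is a fixed-size slice above the first VF's base, the host can place every VF's registers after probing just one aperture size.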
To allow multiple VMs to share a PF, the SRIOV specification introduces the notion of a Virtual Function (VF). Each VF shares some common configuration header fields with the PF and the other VFs. The VF BARs are defined in the PCIe SRIOV extended capabilities structure. A VF contains a set of non-shared physical resources, such as work queues and data buffers, which are required to deliver function-specific services. These resources are exported through the VF BARs and are directly accessible by a VM. The starting address of a VF's memory space is derived from the first VF's memory space address and the size of the VF's BAR. For any given VFx, the starting address of its memory space mapped to BARa is calculated according to the following formula:
addr(VFx, BARa) = addr(VF1, BARa) + (x - 1) * (VF BARa aperture size)
where addr(VF1, BARa) is the starting address of BARa for the first VF, and (VF BARa aperture size) is the size of the VF BARa as determined by writing a value of all 1's to BARa and reading the value back. Using this mechanism, a GOS in a VM is able to share the device with other VMs while performing device operations that pertain only to that VM.
DMA
In many current implementations (especially on most x86 platforms), physical addresses are used in DMA. Since a VM shares the same physical address space on the system with other VMs, a VM might read/write another VM's memory through DMA. For example, a device driver in a VM might write memory contents that belong to other VMs to a disk and read the data back into the VM's memory. This creates a potential breach in security and fault isolation among VMs. To provide isolation during DMA operations, the ATS specification defines a scheme for a VM to use an address mapped to its own physical memory for DMA operations. (This approach is used in similar designs such as the IOMMU specification [31] and DMA Remapping [28].)
This DMA ATS enables DMA memory to be partitioned into multiple domains, and keeps DMA transactions in one domain isolated from other domains. Figure 7 shows device DMA with and without ATS. With DMA ATS, the DMA address is like a virtual address that is associated with a context (a VM). DMA transactions initiated by a VM can only be associated with the memory owned by that VM. DMA ATS is a chipset function that resides outside of the processor.
[Figure 7 contrasts the two cases. Without ATS, a PCI device presents a physical address (PA) on the bus and DMAs directly into a buffer in system memory. With ATS, a PCI device presents a DVA or GPA, which the IOMMU in the south bridge translates to an HPA before the data reaches the owning VM's DMA buffer in system memory.]
PA - Physical Address; HPA - Host Physical Address; DVA - Device Virtual Address; GPA - Guest Physical Address
Figure 7. DMA with and without address translation service (ATS).
As shown in Figure 7, the physical address (PA) is used on a hardware platform without hardware support for ATS. On platforms with hardware support for ATS, a GOS in a VM writes either a device virtual address (DVA) or a guest physical address (GPA) to the device's DMA engine. The device driver in the GOS loads the mappings of the DVA or GPA to the host physical address (HPA) into the hardware IOMMU. The HPA is the address understood by the memory controller.
Note - The distinction between the HPA and GPA is described in detail in later sections for Sun xvm Server (see Physical Memory Management on page 52), for UltraSPARC LDoms (see Physical Memory Allocation on page 88), and for VMware (see Physical Memory Management on page 103).
When the device performs a DMA operation, a DVA/GPA address appears on the PCI bus and is intercepted by the hardware IOMMU. The hardware IOMMU looks up the mapping for the DVA/GPA, finds the corresponding HPA, and moves the PCI data to the system memory pointed to by the HPA. Since the DVA or GPA of each VM forms its own address space, ATS allows system memory for DMA to be partitioned and, thus, prevents a VM from accessing another VM's DMA buffer.
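The remapping step can be sketched as a per-VM lookup table. The 4 KB page size, the table layout, and the use of `PermissionError` for a DMA fault are assumptions of this toy model, not details taken from the ATS specification:

```python
class IOMMU:
    """Per-VM DMA remapping: (VM, DVA/GPA page) -> HPA page, at 4 KB granularity."""
    PAGE = 4096

    def __init__(self):
        self.maps = {}        # (vm_id, gpa_page) -> hpa_page

    def map(self, vm_id, gpa, hpa):
        # The GOS driver loads a DVA/GPA -> HPA mapping for its own VM.
        self.maps[(vm_id, gpa // self.PAGE)] = hpa // self.PAGE

    def translate(self, vm_id, gpa):
        # On a DMA transaction, the IOMMU intercepts the bus address and
        # resolves it only within the initiating VM's address space.
        key = (vm_id, gpa // self.PAGE)
        if key not in self.maps:
            raise PermissionError("DMA fault: address not mapped for this VM")
        return self.maps[key] * self.PAGE + gpa % self.PAGE

iommu = IOMMU()
iommu.map(vm_id=1, gpa=0x2000, hpa=0x7F000)
print(hex(iommu.translate(1, 0x2004)))   # 0x7f004
# A device doing DMA on behalf of VM2 cannot reach VM1's buffer:
try:
    iommu.translate(2, 0x2004)
except PermissionError as e:
    print(e)
```

Keying every translation by VM identity is what turns DMA addresses into per-domain virtual addresses and gives the fault isolation described above.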
Chapter 3
The x86 Processor Architecture
This chapter provides background information on the x86 processor architecture that is relevant to later discussions of Sun xvm Server (Chapter 5 on page 39), Sun xvm Server with HVM (Chapter 6 on page 63), and VMware (Chapter 8 on page 97). The x86 processor was not designed to run in a virtualized environment, and the x86 architecture presents some challenges for CPU and memory virtualization. This chapter discusses the following x86 architecture features that are pertinent to virtualization:
- Protected Mode: The protected mode of the x86 processor uses two mechanisms, segmentation and paging, to prevent a program from accessing a segment or a page with a higher privilege level. Privilege level controls how the VMM and a GOS work together to provide CPU virtualization.
- Segmented Architecture: The x86 segmented architecture converts a program's virtual addresses into linear addresses that are used by the paging mechanism to map into physical memory. During the conversion, the processor's privilege level is checked against the privilege level of the segment for the address. Because of the segment cache technique employed by the x86 processor, the VMM must keep the segment cache consistent with VM descriptor table updates. This x86 feature results in a significant amount of work for the VMM of full virtualization products such as VMware.
- Paging Architecture: The x86 paging architecture provides page translations through the TLB and page tables. Because the loading of page translations from the page table into the TLB is done automatically by hardware on the x86 platform, page table updates have to be performed by the privileged VMM. Several mechanisms are available for a VM to update this hardware page table.
- I/O and Interrupts: A device interacts with a host processor through PIO, DMA, and interrupts.
PIO in the x86 processor can be performed either through I/O ports using special I/O instructions or through memory-mapped addresses using general-purpose MOV and string instructions. DMA in most x86 platforms is performed with physical addresses. This can cause a security and isolation breach in a virtualized environment because a VM may read/write other VMs' memory contents. Interrupts and exceptions are handled through the Interrupt Descriptor Table (IDT). There is only one IDT on the system and access to the IDT is privileged. Therefore, interrupts have to be handled by the VMM and virtualized to be delivered to a VM.
- Timer Devices: The x86 platform includes several timer devices for timekeeping purposes. Knowledge of the characteristics of these devices is important for fully understanding timekeeping in a VM: some timer devices are interrupt driven (and interrupts are virtualized and delayed), and some require privileged access to update the device counter.
Protected Mode
The x86 architecture's protected mode provides a protection mechanism to limit access to certain segments or pages and prevent unprivileged access. The processor's segment-protection mechanism recognizes 4 privilege levels, numbered from 0 to 3 (Figure 8). The greater the level number, the lesser the privileges provided. The page-level protection mechanism restricts access to pages based on two privilege levels: supervisor mode and user mode. If the processor is operating at current privilege level (CPL) 0, 1, or 2, it is in supervisor mode and can access all pages. If the processor is operating at CPL 3, it is in user mode and can access only user-level pages.
[Figure 8 shows the four privilege rings: Level 0 (OS kernel), Levels 1 and 2, and Level 3 (applications).]
Figure 8. Privilege levels in the x86 architecture.
When the processor detects a privilege level violation, it generates a general-protection exception (#GP). The x86 has more than 20 privileged instructions. These instructions can be executed only when the current privilege level (CPL) is 0 (most privileged). In addition to the CPL, the x86 has an I/O privilege level (IOPL) field in the EFLAGS register that indicates the I/O privilege level of the currently running program. Some instructions, while allowed to execute when the CPL is not 0, generate a #GP exception if the CPL value is higher than the IOPL. These instructions include CLI (clear interrupt flag), STI (set interrupt flag), IN/INS (input from port), and OUT/OUTS (output to port).
In addition to the above instructions, there are many instructions [3] that, while not privileged, reference registers or memory locations that would allow a VM to access a memory region not assigned to that VM. These sensitive instructions do not cause a #GP exception, so the trap-and-emulate method for virtualizing a GOS, described in VMM Requirements on page 9, does not apply to them. Nevertheless, these instructions may impact other VMs.
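The difference between privileged and sensitive instructions can be sketched as follows. The instruction lists are abbreviated and the dispatch logic is a toy model of trap-and-emulate, not any particular VMM; POPF, SGDT, and SMSW are real examples of x86 instructions that execute without trapping at a lower privilege level:

```python
PRIVILEGED = {"HLT", "LGDT", "MOV_CR3"}   # raise #GP when CPL != 0
SENSITIVE = {"POPF", "SGDT", "SMSW"}      # complete silently, no trap

class GP(Exception):
    """Stand-in for the general-protection exception."""

def guest_execute(instr, cpl):
    """What the hardware does when a deprivileged guest runs an instruction."""
    if instr in PRIVILEGED and cpl != 0:
        raise GP(instr)          # the VMM can intercept and emulate this
    return "executed"            # sensitive instrs run, with wrong semantics

def run_with_vmm(instr, cpl=1):  # guest kernel deprivileged out of ring 0
    try:
        return guest_execute(instr, cpl)
    except GP as trap:
        return f"VMM emulated {trap}"

print(run_with_vmm("MOV_CR3"))   # VMM emulated MOV_CR3
print(run_with_vmm("POPF"))      # executed -- no trap, the VMM never sees it
```

The second case is the problem the text describes: because no #GP is raised, the VMM gets no chance to intervene, which is why binary translation or hardware VM extensions are needed on x86.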
Segmented Architecture
In protected mode, all memory accesses must go through a logical address -> linear address (LA) -> physical address (PA) translation scheme. The logical address to LA translation is managed by the x86 segmentation architecture, which divides a process's address space into multiple protected segments. A logical address, which is used as the address of an operand or of an instruction, consists of a 16-bit segment selector and a 32-bit offset. A segment selector points to a segment descriptor that defines the segment (see Figure 11 on page 24). The segment base address is contained in the segment descriptor. The sum of the offset in a logical address and the segment base address gives the LA. The Solaris OS directly maps an LA to a process's virtual address (VA) by setting the segment base address to NULL:
Segmentation: VA + segment base address (always 0 in Solaris) -> linear address
Paging: linear address -> physical address
For each memory reference, a VA and a segment selector are provided to the processor (Figure 9). The segment selector, which is loaded into the segment register, is used to identify a segment descriptor for the address.
[Figure 9 shows the segment selector format: a descriptor Index (bits 3-15, up to 8K descriptors), a Table Indicator (TI; 0=GDT, 1=LDT), and a Request Privilege Level (RPL).]
Figure 9. Segment selector.
Every segment register has a visible part and a hidden part, as illustrated in Figure 10 (see also [7], Volume 3A Section 3.4.3). The visible part is the segment selector, an index that points into either the global descriptor table (GDT) or the local descriptor table (LDT) to identify the descriptor from which the hidden part of the segment register is to be loaded. The hidden part holds the segment descriptor information loaded from the descriptor table.
[Figure 10 shows a segment register: the visible part is the Selector; the hidden part holds the Type, Base Address, Limit, and CPL.]
Figure 10. Each segment register has a visible and a hidden part.
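The segmentation step, and the privilege rule it feeds into, can be sketched as follows. This is a minimal model: `MemoryError` stands in for the #GP exception, the flat 4 GB default limit is an assumption of the sketch, and the privilege check shown is the data-segment load rule:

```python
def logical_to_linear(offset, seg_base=0, seg_limit=0xFFFFFFFF):
    """Segmentation step: LA = segment base + 32-bit offset, with a limit check."""
    if offset > seg_limit:
        raise MemoryError("#GP: offset beyond segment limit")  # stand-in for #GP
    return (seg_base + offset) & 0xFFFFFFFF

def can_load_data_segment(cpl, rpl, dpl):
    # A data segment may be loaded only if DPL >= max(CPL, RPL) (numerically).
    return dpl >= max(cpl, rpl)

# Solaris programs a segment base of 0, so a virtual address passes through
# unchanged (VA == LA):
va = 0x08048000
print(hex(logical_to_linear(va)))                    # 0x8048000
# A user program (CPL 3) cannot load a ring-0 data segment:
print(can_load_data_segment(cpl=3, rpl=3, dpl=0))    # False
```

With the base fixed at zero, segmentation degenerates to a limit and privilege check, which is why the Solaris OS can treat linear addresses as process virtual addresses.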
The hidden fields of a segment register are loaded into the processor from a descriptor table and are stored in the descriptor cache registers. The descriptor cache registers, like the TLB, allow the processor to refer to the contents of the segment register's hidden part without further reference to the descriptor table. Each time a segment register is loaded, the descriptor cache register is fully loaded from the descriptor table. Since each VM has its own descriptor tables (for example, the GDT), the VMM has to maintain a shadow copy of each VM's descriptor table. A context switch to a VM causes the VM's shadow descriptor table to be loaded into the hardware descriptor table. If the content of the descriptor table is changed by the VMM because of a context switch to another VM, the segment is non-reversible, which means the segment cannot be restored if an event such as a trap causes the segment to be saved and replaced. The current privilege level (CPL) is stored in the hidden portion of the segment register. The CPL is initially equal to the privilege level of the code segment from which it is loaded. The processor changes the CPL when program control is transferred to a code segment with a different privilege level. The segment descriptor contains the size, location, access control, and status information of the segment, and is stored in either the LDT or the GDT. The OS sets segment descriptors in the descriptor table and controls which descriptor entry to use for a segment (Figure 11). See CPU Privilege Mode on page 45 for a discussion of setting the segment descriptor in the Solaris OS.
[Figure 11 shows the segment descriptor layout: Base 31:24, G, D/B, L, AVL, Segment Limit 19:16, P, DPL, S, Type, and Base 23:16 in the high doubleword; Base 15:00 and Segment Limit 15:00 in the low doubleword.]
Figure 11. Segment descriptor.
L: 64-bit code segment
AVL: Available for use by system software
Base: Segment base address
D/B: Default operation size (0=16-bit segment, 1=32-bit segment)
DPL: Descriptor Privilege Level
G: Granularity
SL: Segment Limit 19:16
P: Segment present
S: Descriptor type (0=system, 1=code or data)
Type: Segment type
The privilege check performed by the processor recognizes three types of privilege levels: the requested privilege level (RPL), the current privilege level (CPL), and the descriptor privilege level (DPL). A segment can be loaded if the DPL of the segment is numerically greater than or equal to both the CPL and the RPL. In other words, a segment can be
Cloud Computing Lecture 11 Virtualization 2011-2012 Up until now Introduction. Definition of Cloud Computing Grid Computing Content Distribution Networks Map Reduce Cycle-Sharing 1 Process Virtual Machines
More informationkvm: Kernel-based Virtual Machine for Linux
kvm: Kernel-based Virtual Machine for Linux 1 Company Overview Founded 2005 A Delaware corporation Locations US Office Santa Clara, CA R&D - Netanya/Poleg Funding Expertise in enterprise infrastructure
More informationCloud Computing #6 - Virtualization
Cloud Computing #6 - Virtualization Main source: Smith & Nair, Virtual Machines, Morgan Kaufmann, 2005 Today What do we mean by virtualization? Why is it important to cloud? What is the penalty? Current
More informationIntroduction to Virtual Machines
Introduction to Virtual Machines Introduction Abstraction and interfaces Virtualization Computer system architecture Process virtual machines System virtual machines 1 Abstraction Mechanism to manage complexity
More informationPerformance tuning Xen
Performance tuning Xen Roger Pau Monné roger.pau@citrix.com Madrid 8th of November, 2013 Xen Architecture Control Domain NetBSD or Linux device model (qemu) Hardware Drivers toolstack netback blkback Paravirtualized
More informationBasics of Virtualisation
Basics of Virtualisation Volker Büge Institut für Experimentelle Kernphysik Universität Karlsruhe Die Kooperation von The x86 Architecture Why do we need virtualisation? x86 based operating systems are
More informationSecurity Overview of the Integrity Virtual Machines Architecture
Security Overview of the Integrity Virtual Machines Architecture Introduction... 2 Integrity Virtual Machines Architecture... 2 Virtual Machine Host System... 2 Virtual Machine Control... 2 Scheduling
More informationDistributed Systems. Virtualization. Paul Krzyzanowski pxk@cs.rutgers.edu
Distributed Systems Virtualization Paul Krzyzanowski pxk@cs.rutgers.edu Except as otherwise noted, the content of this presentation is licensed under the Creative Commons Attribution 2.5 License. Virtualization
More informationFRONT FLYLEAF PAGE. This page has been intentionally left blank
FRONT FLYLEAF PAGE This page has been intentionally left blank Abstract The research performed under this publication will combine virtualization technology with current kernel debugging techniques to
More informationWindows Server Virtualization & The Windows Hypervisor
Windows Server Virtualization & The Windows Hypervisor Brandon Baker Lead Security Engineer Windows Kernel Team Microsoft Corporation Agenda - Windows Server Virtualization (WSV) Why a hypervisor? Quick
More informationVMkit A lightweight hypervisor library for Barrelfish
Masters Thesis VMkit A lightweight hypervisor library for Barrelfish by Raffaele Sandrini Due date 2 September 2009 Advisors: Simon Peter, Andrew Baumann, and Timothy Roscoe ETH Zurich, Systems Group Department
More informationCloud^H^H^H^H^H Virtualization Technology. Andrew Jones (drjones@redhat.com) May 2011
Cloud^H^H^H^H^H Virtualization Technology Andrew Jones (drjones@redhat.com) May 2011 Outline Promise to not use the word Cloud again...but still give a couple use cases for Virtualization Emulation it's
More informationEnterprise-Class Virtualization with Open Source Technologies
Enterprise-Class Virtualization with Open Source Technologies Alex Vasilevsky CTO & Founder Virtual Iron Software June 14, 2006 Virtualization Overview Traditional x86 Architecture Each server runs single
More informationCOM 444 Cloud Computing
COM 444 Cloud Computing Lec 3: Virtual Machines and Virtualization of Clusters and Datacenters Prof. Dr. Halûk Gümüşkaya haluk.gumuskaya@gediz.edu.tr haluk@gumuskaya.com http://www.gumuskaya.com Virtual
More informationVirtualization. P. A. Wilsey. The text highlighted in green in these slides contain external hyperlinks. 1 / 16
1 / 16 Virtualization P. A. Wilsey The text highlighted in green in these slides contain external hyperlinks. 2 / 16 Conventional System Viewed as Layers This illustration is a common presentation of the
More informationx86 ISA Modifications to support Virtual Machines
x86 ISA Modifications to support Virtual Machines Douglas Beal Ashish Kumar Gupta CSE 548 Project Outline of the talk Review of Virtual Machines What complicates Virtualization Technique for Virtualization
More informationVirtualization and the U2 Databases
Virtualization and the U2 Databases Brian Kupzyk Senior Technical Support Engineer for Rocket U2 Nik Kesic Lead Technical Support for Rocket U2 Opening Procedure Orange arrow allows you to manipulate the
More informationVirtualization in the ARMv7 Architecture Lecture for the Embedded Systems Course CSD, University of Crete (May 20, 2014)
Virtualization in the ARMv7 Architecture Lecture for the Embedded Systems Course CSD, University of Crete (May 20, 2014) ManolisMarazakis (maraz@ics.forth.gr) Institute of Computer Science (ICS) Foundation
More informationOSes. Arvind Seshadri Mark Luk Ning Qu Adrian Perrig SOSP2007. CyLab of CMU. SecVisor: A Tiny Hypervisor to Provide
SecVisor: A Seshadri Mark Luk Ning Qu CyLab of CMU SOSP2007 Outline Introduction Assumption SVM Background Design Problems Implementation Kernel Porting Evaluation Limitation Introducion Why? Only approved
More informationVirtualization Overview. Yao-Min Chen
Virtualization Overview Yao-Min Chen The new look of computing 10/15/2010 Virtualization Overview 2 Outline Intro to Virtualization (V14n) V14n and Cloud Computing V14n Technologies 10/15/2010 Virtualization
More informationVirtualization: Concepts, Applications, and Performance Modeling
Virtualization: Concepts, s, and Performance Modeling Daniel A. Menascé, Ph.D. The Volgenau School of Information Technology and Engineering Department of Computer Science George Mason University www.cs.gmu.edu/faculty/menasce.html
More informationSolaris Virtualization and the Xen Hypervisor Frank Hofmann
Solaris Virtualization and the Xen Hypervisor Frank Hofmann Solaris Released Products Engineering Sun Microsystems UK All things in the world come from being. And being comes from non-being. Lao Tzu Overview
More informationDate: December 2009 Version: 1.0. How Does Xen Work?
Date: December 2009 Version: 1.0 How Does Xen Work? Table of Contents Executive Summary... 3 Xen Environment Components... 3 Xen Hypervisor... 3... 4 Domain U... 4 Domain Management and Control... 6 Xend...
More information12. Introduction to Virtual Machines
12. Introduction to Virtual Machines 12. Introduction to Virtual Machines Modern Applications Challenges of Virtual Machine Monitors Historical Perspective Classification 332 / 352 12. Introduction to
More informationARM Virtualization: CPU & MMU Issues
ARM Virtualization: CPU & MMU Issues Prashanth Bungale, Sr. Member of Technical Staff 2010 VMware Inc. All rights reserved Overview Virtualizability and Sensitive Instructions ARM CPU State Sensitive Instructions
More informationDistributed and Cloud Computing
Distributed and Cloud Computing K. Hwang, G. Fox and J. Dongarra Chapter 3: Virtual Machines and Virtualization of Clusters and datacenters Adapted from Kai Hwang University of Southern California March
More informationWHITE PAPER Mainstreaming Server Virtualization: The Intel Approach
WHITE PAPER Mainstreaming Server Virtualization: The Intel Approach Sponsored by: Intel John Humphreys June 2006 Tim Grieser IDC OPINION Global Headquarters: 5 Speen Street Framingham, MA 01701 USA P.508.872.8200
More informationOPEN SOURCE VIRTUALIZATION TRENDS. SYAMSUL ANUAR ABD NASIR Warix Technologies / Fedora Community Malaysia
OPEN SOURCE VIRTUALIZATION TRENDS SYAMSUL ANUAR ABD NASIR Warix Technologies / Fedora Community Malaysia WHAT I WILL BE TALKING ON? Introduction to Virtualization Full Virtualization, Para Virtualization
More informationKVM Security Comparison
atsec information security corporation 9130 Jollyville Road, Suite 260 Austin, TX 78759 Tel: 512-349-7525 Fax: 512-349-7933 www.atsec.com KVM Security Comparison a t s e c i n f o r m a t i o n s e c u
More informationXen and the Art of Virtualization
Xen and the Art of Virtualization Paul Barham, Boris Dragovic, Keir Fraser, Steven Hand, Tim Harris, Alex Ho, Rolf Neugebauery, Ian Pratt, Andrew Warfield University of Cambridge Computer Laboratory, SOSP
More informationClouds, Virtualization and Security or Look Out Below
Clouds, Virtualization and Security or Look Out Below Lee Badger Hardware Virtualization (Box View) 1 2 dom0 HW type 1 Para-virtualization I/O Host HW type 2 dom0 HW type 1 Full virtualization I/O Host
More informationTOP TEN CONSIDERATIONS
White Paper TOP TEN CONSIDERATIONS FOR CHOOSING A SERVER VIRTUALIZATION TECHNOLOGY Learn more at www.swsoft.com/virtuozzo Published: July 2006 Revised: July 2006 Table of Contents Introduction... 3 Technology
More informationCHAPTER 6 TASK MANAGEMENT
CHAPTER 6 TASK MANAGEMENT This chapter describes the IA-32 architecture s task management facilities. These facilities are only available when the processor is running in protected mode. 6.1. TASK MANAGEMENT
More informationHardware accelerated Virtualization in the ARM Cortex Processors
Hardware accelerated Virtualization in the ARM Cortex Processors John Goodacre Director, Program Management ARM Processor Division ARM Ltd. Cambridge UK 2nd November 2010 Sponsored by: & & New Capabilities
More informationKVM: A Hypervisor for All Seasons. Avi Kivity avi@qumranet.com
KVM: A Hypervisor for All Seasons Avi Kivity avi@qumranet.com November 2007 Virtualization Simulation of computer system in software Components Processor: register state, instructions, exceptions Memory
More informationBHyVe. BSD Hypervisor. Neel Natu Peter Grehan
BHyVe BSD Hypervisor Neel Natu Peter Grehan 1 Introduction BHyVe stands for BSD Hypervisor Pronounced like beehive Type 2 Hypervisor (aka hosted hypervisor) FreeBSD is the Host OS Availability NetApp is
More informationIOMMU: A Detailed view
12/1/14 Security Level: Security Level: IOMMU: A Detailed view Anurup M. Sanil Kumar D. Nov, 2014 HUAWEI TECHNOLOGIES CO., LTD. Contents n IOMMU Introduction n IOMMU for ARM n Use cases n Software Architecture
More informationAn Introduction to Virtual Machines Implementation and Applications
An Introduction to Virtual Machines Implementation and Applications by Qian Huang M.Sc., Tsinghua University 2002 B.Sc., Tsinghua University, 2000 AN ESSAY SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS
More informationWHITE PAPER. AMD-V Nested Paging. AMD-V Nested Paging. Issue Date: July, 2008 Revision: 1.0. Advanced Micro Devices, Inc.
Issue Date: July, 2008 Revision: 1.0 2008 All rights reserved. The contents of this document are provided in connection with ( AMD ) products. AMD makes no representations or warranties with respect to
More informationEnabling Technologies for Distributed Computing
Enabling Technologies for Distributed Computing Dr. Sanjay P. Ahuja, Ph.D. Fidelity National Financial Distinguished Professor of CIS School of Computing, UNF Multi-core CPUs and Multithreading Technologies
More information