Operating System Support for Virtual Machines
Proceedings of the 2003 USENIX Technical Conference

Operating System Support for Virtual Machines

Samuel T. King, George W. Dunlap, Peter M. Chen
Computer Science and Engineering Division
Department of Electrical Engineering and Computer Science
University of Michigan

Abstract: A virtual-machine monitor (VMM) is a useful technique for adding functionality below existing operating system and application software. One class of VMMs (called Type II VMMs) builds on the abstractions provided by a host operating system. Type II VMMs are elegant and convenient, but their performance is currently an order of magnitude slower than that achieved when running outside a virtual machine (a standalone system). In this paper, we examine the reasons for this large overhead for Type II VMMs. We find that a few simple extensions to a host operating system can make it a much faster platform for running a VMM. Taking advantage of these extensions reduces virtualization overhead for a Type II VMM to 14-35%, even for workloads that exercise the virtual machine intensively.

1. Introduction

A virtual-machine monitor (VMM) is a layer of software that emulates the hardware of a complete computer (Figure 1). The abstraction created by the VMM is called a virtual machine. The hardware emulated by the VMM typically is similar or identical to the hardware on which the VMM is running.

Virtual machines were first developed and used in the 1960s, with the best-known example being IBM's VM/370 [Goldberg74]. Several properties of virtual machines have made them helpful for a wide variety of uses. First, they can create the illusion of multiple virtual machines on a single physical machine. These multiple virtual machines can be used to run applications on different operating systems, to allow students to experiment conveniently with building their own operating system [Nieh00], to enable existing operating systems to run on shared-memory multiprocessors [Bugnion97], and to simulate a network of independent computers. Second, virtual machines can provide a software environment for debugging operating systems that is more convenient than using a physical machine. Third, virtual machines provide a convenient interface for adding functionality, such as fault injection [Buchacker01], primary-backup replication [Bressoud96], and undoable disks. Finally, a VMM provides strong isolation between virtual-machine instances.

Figure 1: Virtual-machine structures. A virtual-machine monitor is a software layer that runs on a host platform and provides an abstraction of a complete computer to higher-level software. The host platform may be the bare hardware (Type I VMM) or a host operating system (Type II VMM). The software running above the virtual-machine abstraction is called guest software (the guest operating system and guest applications).
This isolation allows a single server to run multiple, untrusted applications safely [Whitaker02, Meushaw00] and to provide security services such as monitoring applications for intrusions [Chen01, Dunlap02, Barnett02].

As a layer of software, VMMs build on a lower-level hardware or software platform and provide an interface to higher-level software (Figure 1). In this paper, we are concerned with the lower-level platform that supports the VMM. This platform may be the bare hardware, or it may be a host operating system. Building the VMM directly on the hardware lowers overhead by reducing the number of software layers and enabling the VMM to take full advantage of the hardware capabilities. On the other hand, building the VMM on a host operating system simplifies the VMM by allowing it to use the host operating system's abstractions.

Our goal for this paper is to examine and reduce the performance overhead associated with running a VMM on a host operating system. Building the VMM on a standard Linux host operating system leads to an order of magnitude performance degradation compared to running outside a virtual machine (a standalone system). However, we find that a few simple extensions to the host operating system reduce virtualization overhead to 14-35%, which is comparable to the speed of virtual machines that run directly on the hardware.

The speed of a virtual machine plays a large part in determining the domains for which virtual machines can be used. Using virtual machines for debugging, student projects, and fault-injection experiments can be done even if virtualization overhead is quite high (e.g. 10x slowdown). However, using virtual machines in production environments requires virtualization overhead to be much lower. Our CoVirt project on computer security depends on running all applications inside a virtual machine [Chen01]. To keep the system usable in a production environment, we would like the speed of our virtual machine to be within a factor of 2 of a standalone system.

The paper is organized as follows. Section 2 describes two ways to classify virtual machines, focusing on the higher-level interface provided by the VMM and the lower-level platform upon which the VMM is built. Section 3 describes UMLinux, which is the VMM we use in this paper. Section 4 describes a series of extensions to the host operating system that enable virtual machines built on the host operating system to approach the speed of those that run directly on the hardware. Section 5 evaluates the performance benefits achieved by each host OS extension. Section 6 describes related work, and Section 7 concludes.

2. Virtual machines

Virtual-machine monitors can be classified along many dimensions. This section classifies VMMs along two dimensions: the higher-level interface they provide and the lower-level platform they build upon.

The first way we can classify VMMs is according to how closely the higher-level interface they provide matches the interface of the physical hardware. VMMs such as VM/370 [Goldberg74] for IBM mainframes and VMware ESX Server [Waldspurger02] and VMware Workstation [Sugerman01] for x86 processors provide an abstraction that is identical to the hardware underneath the VMM. Simulators such as Bochs [Boc] and Virtutech Simics [Magnusson95] also provide an abstraction that is identical to physical hardware, although the hardware they simulate may differ from the hardware on which they are running.

Several aspects of virtualization make it difficult or slow for a VMM to provide an interface that is identical to the physical hardware.
Some architectures include instructions whose behavior depends on whether the CPU is running in privileged or user mode (sensitive instructions), yet which can execute in user mode without causing a trap to the VMM [Robin00]. Virtualizing these sensitive-but-unprivileged instructions generally requires binary instrumentation, which adds significant complexity and may add significant overhead. In addition, emulating I/O devices at the low-level hardware interface (e.g. memory-mapped I/O) causes execution to switch frequently between the guest operating system accessing the device and the VMM code emulating the device. To avoid the overhead associated with emulating a low-level device interface, most VMMs encourage or require the user to run a modified version of the guest operating system. For example, the VAX VMM security kernel [Karger91], VMware Workstation's guest tools [Sugerman01], and Disco [Bugnion97] all add special drivers in the guest operating system to accelerate the virtualization of some devices. VMMs built on host operating systems often require additional modifications to the guest operating system. For example, the original version of SimOS adds special signal handlers to support virtual interrupts and requires relinking the guest operating system into a different range of addresses [Rosenblum95]; similar changes are needed by User-Mode Linux [Dike00] and UMLinux [Buchacker01].

Other virtualization strategies make the higher-level interface further different from the underlying hardware.
The Denali isolation kernel does not support instructions that are sensitive but unprivileged, adds several virtual instructions and registers, and changes the memory management model [Whitaker02]. Microkernels provide higher-level services above the hardware to support abstractions such as threads and inter-process communication [Golub90]. The Java virtual machine defines a virtual architecture that is completely independent from the underlying hardware.

A second way to classify VMMs is according to the platform upon which they are built [Goldberg73]. Type I VMMs such as IBM's VM/370, Disco, and VMware's ESX Server are implemented directly on the physical hardware. Type II VMMs are built completely on top of a host operating system. SimOS, User-Mode Linux, and UMLinux are all implemented completely on top of a host operating system. Other VMMs are a hybrid between Type I and II: they operate mostly on the physical hardware but use the host OS to perform I/O. For example, VMware Workstation [Sugerman01] and Connectix VirtualPC [Con01] use the host operating system to access some virtual I/O devices.

A host operating system makes a very convenient platform upon which to build a VMM. Host operating systems provide a set of abstractions that map closely to each part of a virtual machine [Rosenblum95]. A host process provides a sequential stream of execution similar to a CPU; host signals provide similar functionality to interrupts; host files and devices provide similar functionality to virtual I/O devices; and host memory mapping and protection provide similar functionality to a virtual MMU. These features make it possible to implement a VMM as a normal user process with very little code.

Other reasons contribute to the attractiveness of using a Type II VMM. Because a Type II VMM runs as a normal process, the developer or administrator of the VMM can use the full power of the host operating system to monitor and debug the virtual machine's execution. For example, the developer or administrator can examine or copy the contents of the virtual machine's I/O devices or memory, or attach a debugger to the virtual-machine process. Finally, the simplicity of Type II VMMs and the availability of several good open-source implementations make them an excellent platform for experimenting with virtual-machine services.

A potential disadvantage of Type II VMMs is performance. Current host operating systems do not provide sufficiently powerful interfaces to the bare hardware to support the intensive usage patterns of VMMs. For example, compiling the Linux kernel inside the UMLinux virtual machine takes 18 times as long as compiling it directly on a Linux host operating system. VMMs that run directly on the bare hardware achieve much lower performance overhead. For example, VMware Workstation 3.1 compiles the Linux kernel with only a 30% overhead relative to running directly on the host operating system.

The goal of this paper is to examine and reduce the order-of-magnitude performance overhead associated with running a VMM on a host operating system. We find that a few simple extensions to a host operating system can make it a much faster platform for running a VMM, while preserving the conceptual elegance of the Type II approach.

3. UMLinux

To conduct our study, we use a Type II VMM called UMLinux [Buchacker01]. UMLinux was developed by researchers at the University of Erlangen-Nürnberg for use in fault-injection experiments. UMLinux is a Type II VMM: the guest operating system and all guest applications run as a single process (the guest-machine process) on a host Linux operating system.
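Because host signals stand in for hardware interrupts in this kind of design, a virtual timer interrupt can be modeled with an ordinary signal handler. The following stand-alone C sketch is a hypothetical illustration of that mapping, not UMLinux code; the handler name and tick rate are invented for the example.

```c
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/time.h>
#include <unistd.h>

static volatile sig_atomic_t virtual_ticks;

/* Plays the role of the guest kernel's timer-interrupt handler. */
static void virtual_timer_interrupt(int signo)
{
    (void)signo;
    virtual_ticks++;
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = virtual_timer_interrupt;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGALRM, &sa, NULL);

    /* 100 Hz "virtual timer", delivered to this process as SIGALRM. */
    struct itimerval it;
    it.it_interval.tv_sec = 0;
    it.it_interval.tv_usec = 10000;
    it.it_value = it.it_interval;
    setitimer(ITIMER_REAL, &it, NULL);

    /* "Guest" work loop, interrupted periodically by the virtual timer. */
    while (virtual_ticks < 100)
        pause();

    printf("handled %d virtual timer interrupts\n", (int)virtual_ticks);
    return 0;
}
```

In a real Type II VMM the handler would transfer control into the guest kernel's interrupt-handling code rather than incrementing a counter.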
UMLinux provides a higher-level interface to the guest operating system that is similar but not identical to the underlying hardware. As a result, the machine-dependent portion of the guest Linux operating system must be modified to use the interface provided by the VMM. Simple device drivers must be added to interact with the host abstractions used to implement the devices for the virtual machine; a few assembly-language instructions (e.g. iret and in/out) must be replaced with function calls to emulation code; and the guest kernel must be relinked into a different address range [Hoxer02]. About 17,000 lines of code were added to the guest kernel to work on the new platform. Applications compiled for the host operating system work without modification on the guest operating system.

UMLinux uses functionality from the host operating system that maps naturally to virtual hardware. The guest-machine process serves as a virtual CPU; host files and devices serve as virtual I/O devices; a host TUN/TAP device serves as a virtual network; host signals serve as virtual interrupts; and host memory mapping and protection serve as a virtual MMU. The virtual machine's memory is provided by a host file that is mapped into different parts of the guest-machine process's address space. We store this host file on a memory file system (ramfs) to avoid needlessly writing the virtual machine's transient state to disk.
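A minimal sketch of this memory arrangement is shown below. It is a hypothetical illustration, not UMLinux code: a tmpfs-backed file under /dev/shm stands in for the ramfs file, the 192 MB size matches the virtual-machine configuration used later in the paper, and the base address follows the guest kernel range described in the next paragraph.

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define GUEST_PHYS_SIZE   (192UL << 20)        /* 192 MB of guest "physical" memory */
#define GUEST_KERNEL_BASE ((void *)0x70000000) /* base of the guest kernel range    */

int main(void)
{
    /* The guest's physical memory is an ordinary host file (tmpfs here,
     * ramfs in the paper), so its transient contents never reach disk. */
    int fd = open("/dev/shm/guest-phys-mem", O_RDWR | O_CREAT, 0600);
    if (fd < 0 || ftruncate(fd, GUEST_PHYS_SIZE) < 0) {
        perror("guest memory file");
        return 1;
    }

    /* Map the first 16 MB of guest physical memory at the bottom of the
     * guest kernel's address range.  Mapping other offsets of the same
     * file at other virtual addresses builds the rest of the guest's
     * view of its RAM. */
    void *kmem = mmap(GUEST_KERNEL_BASE, 16UL << 20,
                      PROT_READ | PROT_WRITE,
                      MAP_SHARED | MAP_FIXED, fd, 0);
    if (kmem == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    printf("guest kernel memory mapped at %p\n", kmem);
    return 0;
}
```

Mapping and unmapping different offsets of the same file at different virtual addresses is how the VMM controls which guest pages are visible, and where.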
Figure 2: UMLinux address space. As with all Linux processes, the host kernel address space occupies [0xc0000000, 0xffffffff], and the host user address space occupies [0x0, 0xc0000000). The guest kernel occupies the upper portion of the host user space [0x70000000, 0xc0000000), and the current guest application occupies the remainder of the host user space [0x0, 0x70000000).

Figure 3: UMLinux structure. UMLinux uses two host processes. The guest-machine process executes the guest operating system and all guest applications. The VMM process uses ptrace to mediate access between the guest-machine process and the host operating system.

The address space of the guest-machine process differs from a normal host process because it contains both the host and guest operating system address ranges (Figure 2). In a standard Linux process, the host operating system occupies addresses [0xc0000000, 0xffffffff] while the application is given [0x0, 0xc0000000). Because the UMLinux guest-machine process must hold both the host and guest operating systems, the address space for the guest operating system must be moved to occupy [0x70000000, 0xc0000000), which leaves [0x0, 0x70000000) for guest applications. The guest kernel memory is protected using host mmap and munmap calls. To facilitate this protection, UMLinux maintains a virtual current privilege level, which is analogous to the x86 current privilege level. This is used to differentiate between guest user and guest kernel modes, and the guest kernel memory is made accessible or protected according to the virtual privilege level.

Figure 3 shows the basic structure of UMLinux. In addition to the guest-machine process, UMLinux uses a VMM process to implement the VMM. The VMM process serves two purposes: it redirects to the guest operating system the signals and system calls that would otherwise go to the host operating system, and it restricts the set of system calls allowed by the guest operating system. The VMM process uses ptrace to mediate access between the guest-machine process and the host operating system. Figure 4 shows the sequence of steps taken by UMLinux when a guest application issues a system call. The VMM process is also invoked when the guest kernel returns from its SIGUSR1 handler and when the guest kernel protects its address space from the guest application process. A similar sequence of context switches occurs on each memory, I/O, and timer exception received by the guest-machine process.
1. Guest application issues a system call; intercepted by the VMM process via ptrace
2. VMM process changes the system call to a no-op (getpid)
3. getpid returns; intercepted by the VMM process
4. VMM process sends a SIGUSR1 signal to the guest kernel's SIGUSR1 handler
5. SIGUSR1 handler calls mmap to allow access to guest kernel data; intercepted by the VMM process
6. VMM process allows mmap to pass through
7. mmap returns to the VMM process
8. VMM process returns to the guest kernel's SIGUSR1 handler, which handles the guest application's system call

Figure 4: Guest system call. This picture shows the steps UMLinux takes to transfer control to the guest operating system when a guest application process issues a system call. The mmap call in the SIGUSR1 handler must reside in guest user space. For security, the rest of the SIGUSR1 handler should reside in guest kernel space. The current UMLinux implementation includes an extra section of trampoline code to issue the mmap; this trampoline code is started by manipulating the guest-machine process's context and finishes by causing a breakpoint to the VMM process; the VMM process then transfers control back to the guest-machine process by sending a SIGUSR1.

4. Host OS support for Type II VMMs

A host operating system makes an elegant and convenient base upon which to build and run a VMM such as UMLinux. Each virtual hardware component maps naturally to an abstraction in the host OS, and the administrator can interact conveniently with the guest-machine process just as with other host processes. However, while a host OS provides sufficient functionality to support a VMM, it does not provide the primitives needed to support a VMM efficiently. In this section, we investigate three bottlenecks that occur when running a Type II VMM, and we eliminate these bottlenecks through simple changes to the host OS.

We find that three bottlenecks are responsible for the bulk of the virtualization overhead. First, UMLinux's structure with two separate host processes causes an inordinate number of context switches on the host. Second, switching between the guest kernel and the guest user space generates a large number of memory protection operations. Third, switching between two guest application processes generates a large number of memory mapping operations.

4.1 Extra host context switches

The VMM process in UMLinux uses ptrace to intercept key events (system calls and signals) executed by the guest-machine process. ptrace is a powerful tool for debugging, but using it to create a virtual machine causes the host OS to context switch frequently between the guest-machine process and the VMM process (Figure 4).

We can eliminate most of these context switches by moving the VMM process's functionality into the host kernel. We encapsulate the bulk of the VMM process functionality in a VMM loadable kernel module. We also modified a few lines in the host kernel's system call and signal handling to transfer control to the VMM kernel module when the guest-machine process executes a system call or receives a signal. The VMM kernel module and other hooks in the host kernel were implemented in 150 lines of code (not including comments).

Moving the VMM process's functionality into the host kernel drastically reduces the number of context switches in UMLinux. For example, transferring control to the guest kernel on a guest system call can be done in just two context switches (Figure 5). It also simplifies the system conceptually, because the VMM kernel module has more control over the guest-machine process than is provided by ptrace. For example, the VMM kernel module can change directly the protections of the guest-machine process's address space, whereas the ptracing VMM process must cause the guest-machine process to make multiple system calls to change protections.
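For concreteness, the ptrace-based mediation that the VMM kernel module replaces can be sketched as an ordinary user-space program. The code below is a hypothetical illustration, not UMLinux source: the parent plays the role of the VMM process and stops the child at every system call, and the comments mark where UMLinux would rewrite the call into getpid and signal the guest kernel. Register names are x86-64 (orig_rax) for brevity, whereas UMLinux targets 32-bit x86.

```c
#include <stdio.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/user.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t child = fork();
    if (child == 0) {
        /* Guest-machine process: ask to be traced, then run "guest" code. */
        ptrace(PTRACE_TRACEME, 0, NULL, NULL);
        execlp("true", "true", (char *)NULL);
        _exit(1);
    }

    int status, entering = 1;
    waitpid(child, &status, 0);                     /* initial stop after exec */

    while (1) {
        ptrace(PTRACE_SYSCALL, child, NULL, NULL);  /* run to next syscall stop */
        waitpid(child, &status, 0);
        if (WIFEXITED(status))
            break;

        if (entering) {                             /* system-call entry stop */
            struct user_regs_struct regs;
            ptrace(PTRACE_GETREGS, child, NULL, &regs);
            printf("intercepted guest system call %lld\n",
                   (long long)regs.orig_rax);
            /* UMLinux's VMM process would rewrite regs.orig_rax to getpid
             * here (PTRACE_SETREGS) and send SIGUSR1 to the guest kernel. */
        }
        entering = !entering;
    }
    return 0;
}
```

Every intercepted call forces the host to switch between the traced child and the tracing parent, which is exactly the overhead that moving this logic into the host kernel removes.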
1. Guest application issues a system call; intercepted by the VMM kernel module
2. VMM kernel module calls mmap to allow access to guest kernel data
3. mmap returns to the VMM kernel module
4. VMM kernel module sends SIGUSR1 to the guest kernel's SIGUSR1 handler

Figure 5: Guest system call with VMM kernel module. This picture shows the steps taken by UMLinux with a VMM kernel module to transfer control to the guest operating system when a guest application issues a system call.

4.2 Protecting guest kernel space from guest application processes

The guest-machine process switches frequently between guest user mode and guest kernel mode. The guest kernel is invoked to service guest system calls and other exceptions issued by a guest application process and to service signals initiated by virtual I/O devices. Each time the guest-machine process switches from guest kernel mode to guest user mode, it must first prevent access to the guest kernel's portion of the address space [0x70000000, 0xc0000000). Similarly, each time the guest-machine process switches from guest user mode to guest kernel mode, it must first enable access to the guest kernel's portion of the address space. The guest-machine process performs these address space manipulations by making the host system calls mmap, munmap, and mprotect.

Unfortunately, calling mmap, munmap, or mprotect on large address ranges incurs significant overhead, especially if the guest kernel accesses many pages in its address space. In contrast, a standalone host machine incurs very little overhead when switching between user mode and kernel mode. The page table on x86 processors need not change when switching between kernel mode and user mode, because the page table entry for a page can be set to simultaneously allow kernel-mode access and prevent user-mode access.
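To make that overhead concrete, the following hypothetical sketch (not UMLinux code) shows the kind of mprotect-based toggling of the guest kernel range that the guest-machine process would otherwise perform on every switch between guest kernel mode and guest user mode; the addresses follow Figure 2.

```c
#include <stdio.h>
#include <sys/mman.h>

#define GUEST_KERNEL_BASE ((void *)0x70000000)
#define GUEST_KERNEL_SIZE (0xc0000000UL - 0x70000000UL)

/* Called when the guest kernel returns control to a guest application:
 * revoke access to the entire guest kernel range. */
static int enter_guest_user_mode(void)
{
    return mprotect(GUEST_KERNEL_BASE, GUEST_KERNEL_SIZE, PROT_NONE);
}

/* Called when a guest system call or virtual interrupt enters the guest
 * kernel: make the guest kernel range accessible again. */
static int enter_guest_kernel_mode(void)
{
    return mprotect(GUEST_KERNEL_BASE, GUEST_KERNEL_SIZE,
                    PROT_READ | PROT_WRITE);
}

int main(void)
{
    /* Stand-in for the guest kernel's memory (anonymous here; a mapping
     * of the ramfs memory file in UMLinux). */
    void *p = mmap(GUEST_KERNEL_BASE, GUEST_KERNEL_SIZE,
                   PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    /* A Type II VMM would perform toggles like these on every guest mode
     * switch; the host-kernel changes described below eliminate them. */
    if (enter_guest_user_mode() != 0 || enter_guest_kernel_mode() != 0)
        perror("mprotect");
    return 0;
}
```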
We developed two solutions that use x86 segments, paging, and privilege modes to eliminate the overhead incurred when switching between guest kernel mode and guest user mode. Linux normally uses paging as its primary mechanism for translation and protection, using segments only to switch between privilege levels. Linux uses four segments: kernel code segment, kernel data segment, user code segment, and user data segment. Normally, all four segments span the entire address range. Linux runs all host user code in CPU privilege ring 3 and runs host kernel code in CPU privilege ring 0. Linux uses the supervisor-only bit in the page table to prevent code running in CPU privilege ring 3 from accessing the host operating system's data (Figure 6).

Our first solution protects the guest kernel space from guest user code by changing the bound on the user code and data segments (Figure 7). When the guest-machine process is running in guest user mode, the VMM kernel module shrinks the user code and data segments to span only [0x0, 0x70000000). When the guest-machine process is running in guest kernel mode, the VMM kernel module grows the user code and data segments to their normal range of [0x0, 0xffffffff]. This solution added only 20 lines of code to the VMM kernel module and is the solution we currently use. One limitation of the first solution is that it assumes the guest kernel space occupies a contiguous region directly below the host kernel space.

Our second solution allows the guest kernel space to occupy arbitrary ranges of the address space within [0x0, 0xc0000000) by using the page table's supervisor-only bit to distinguish between guest kernel mode and guest user mode (Figure 8). In this solution, the VMM kernel module marks the guest kernel's pages as accessible only by supervisor code (rings 0-2), then runs the guest-machine process in ring 1 while in guest kernel mode. When running in ring 1, the CPU can access pages marked as supervisor in the page table, but it cannot execute privileged instructions (such as changing the segment descriptor). To prevent the guest-machine process from accessing host kernel space, the VMM kernel module shrinks the user code and data segments to span only [0x0, 0xc0000000). The guest-machine process runs in ring 3 while in guest user mode, which prevents guest user code from accessing the guest kernel's data. This allows the VMM kernel module to protect arbitrary pages in [0x0, 0xc0000000) from guest user mode by setting the supervisor-only bit on those pages. It does still require the host kernel and user address ranges to each be contiguous.
Figure 6: Segment and page protections when running a normal Linux host process. A normal Linux host process runs in CPU privilege ring 3 and uses the user code and data segments. The segment bounds allow access to all addresses, but the supervisor-only bit in the page table prevents the host process from accessing the host operating system's data. In order to protect the guest kernel's data with this setup, the guest-machine process must munmap or mprotect [0x70000000, 0xc0000000) before switching to guest user mode.

Figure 7: Segment and page protections when running the guest-machine process in guest user mode (solution 1). This solution protects the guest kernel space from guest application processes by changing the bound on the user code and data segments to [0x0, 0x70000000) when running guest user code. When the guest-machine process switches to guest kernel mode, the VMM kernel module grows the user code and data segments to their normal range of [0x0, 0xffffffff].

4.3 Switching between guest application processes

A third bottleneck in a Type II VMM occurs when switching address spaces between guest application processes. Changing guest address spaces means changing the current mapping between guest virtual pages and the pages in the virtual machine's physical memory file. Changing this mapping is done by calling munmap for the outgoing guest application process's virtual address space, then calling mmap for each resident virtual page in the incoming guest application process. UMLinux minimizes the calls to mmap by doing them on demand, i.e. as the incoming guest application process faults in its address space. Even with this optimization, however, UMLinux generates a large number of calls to mmap, especially when the working sets of the guest application processes are large.

To improve the speed of guest context switches, we enhance the host OS to allow a single process to maintain several address space definitions. Each address space is defined by a separate set of page tables, and the guest-machine process switches between address space definitions via a new host system call, switchguest. To switch address space definitions, switchguest needs only to change the pointer to the current first-level page table. This task is much faster than mmap'ing each virtual page of the incoming guest application process. We modify the guest kernel to use switchguest when context switching from one guest application process to another. We reuse initialized address space definitions to minimize the overhead of creating guest application processes. We take care to prevent the guest-machine process from abusing switchguest by limiting it to 1024 different address spaces and checking all parameters carefully. This optimization added 340 lines of code to the host kernel.
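From the guest kernel's point of view, the new interface might be used roughly as follows. This is a hypothetical sketch: switchguest is the host extension described above, it does not exist in an unmodified Linux kernel, and the system call number below is a placeholder.

```c
#define _GNU_SOURCE
#include <unistd.h>
#include <sys/syscall.h>

#define NR_SWITCHGUEST  451   /* placeholder number for the new host call    */
#define MAX_GUEST_ASIDS 1024  /* limit enforced by the modified host kernel  */

/* Select one of the address-space definitions that the modified host
 * kernel maintains for this guest-machine process. */
static long switchguest(int asid)
{
    return syscall(NR_SWITCHGUEST, asid);
}

/* Simplified guest-kernel context switch: instead of munmap()ing the
 * outgoing guest application and mmap()ing the incoming one page by
 * page, flip the host-maintained first-level page table in one call. */
static void guest_context_switch(int next_asid)
{
    if (next_asid >= 0 && next_asid < MAX_GUEST_ASIDS)
        switchguest(next_asid);
}

int main(void)
{
    guest_context_switch(1);  /* fails with ENOSYS on an unmodified host */
    return 0;
}
```

A single switchguest call replaces the munmap plus many mmap calls that a guest context switch would otherwise require.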
Figure 8: Segment and page protections when running the guest-machine process (solution 2). This solution protects the guest kernel space from guest application processes by marking the guest kernel's pages as accessible only by code running in CPU privilege rings 0-2 and running the guest-machine process in ring 1 when executing guest kernel code. To prevent the guest-machine process from accessing host kernel space, the VMM kernel module shrinks the user code and data segments to span only [0x0, 0xc0000000).

5. Performance results

This section evaluates the performance benefits achieved by each of the optimizations described in Section 4. We first measure the performance of three important primitives: a null system call, switching between two guest application processes (each with a 64 KB working set), and transferring 10 MB of data using TCP across a 100 Mb/s Ethernet switch. The first two of these microbenchmarks come from the lmbench suite [McVoy96].

We also measure performance on three macrobenchmarks. POV-Ray is a CPU-intensive ray-tracing program; we render the benchmark image from the POV-Ray distribution at quality 8. kernel-build compiles the complete Linux kernel (make bzImage). SPECweb99 measures web server performance, using the Apache web server; we configure SPECweb99 with 15 simultaneous connections spread over two clients connected to a 100 Mb/s Ethernet switch. kernel-build and SPECweb99 exercise the virtual machine intensively by making many system calls. They are similar to the I/O-intensive and kernel-intensive workloads used to evaluate Cellular Disco [Govil00]. All workloads start with a warm guest file cache. Each result represents the average of 5 runs; variance across runs is less than 3%.

All experiments are run on a computer with an AMD Athlon CPU, 256 MB of memory, and a Samsung SV4084 IDE disk. The guest kernel is Linux ported to UMLinux, and the host kernels for UMLinux are all Linux with different degrees of support for VMMs. All virtual machines are configured with 192 MB of physical memory. The virtual hard disk for UMLinux is stored on a raw disk partition on the host to avoid double buffering the virtual disk data in the guest and host file caches and to prevent the virtual machine from benefiting unfairly from the host's file cache. The host and guest file systems have the same versions of all software (based on RedHat 6.2).

We measure baseline performance by running directly on the host operating system (standalone). The standalone host uses the same hardware and software installation as the virtual-machine systems and has access to the full 256 MB of host memory.

We use VMware Workstation 3.1 to illustrate the performance of VMMs that are built directly on the host hardware. We chose VMware Workstation because it executes mostly on host hardware and because it is regarded widely as providing excellent performance. However, note that VMware Workstation may be slower than an ideal Type I VMM for the purposes of comparison with UMLinux. First, VMware Workstation issues I/O through the host OS rather than controlling the host I/O devices directly.
Second, unlike UMLinux, VMware Workstation can support unmodified guest operating systems, and this capability forces VMware Workstation to do extra work to provide the same interface to the guest OS as the host hardware does. The configuration for VMware Workstation matches that of the other virtual-machine systems, except that VMware Workstation uses the host disk partition's cacheable block device for its virtual disk.
Figures 9 and 10 summarize results from all performance experiments.

The original UMLinux is hundreds of times slower for null system calls and context switches and is not able to saturate the network. UMLinux is 8x as slow as the standalone host on SPECweb99, 18x as slow as the standalone host on kernel-build, and 10% slower than the standalone host on POV-Ray. Because POV-Ray is compute-bound, it does not interact much with the guest kernel and thus incurs little virtualization overhead. The overheads for SPECweb99 and kernel-build are higher because they issue more guest kernel calls, each of which must be trapped by the VMM kernel module and reflected back to the guest kernel by sending a signal.

VMMs that are built directly on the hardware execute much faster than a Type II VMM without host OS support. VMware Workstation 3.1 executes a null system call nearly as fast as the standalone host, can saturate the network, and is within a factor of 5 of the context switch time for a standalone host. VMware Workstation 3.1 incurs an overhead of 6-30% on the intensive macrobenchmarks (SPECweb99 and kernel-build).

Our first optimization (Section 4.1) moves the VMM functionality into the host kernel. This improves performance by a factor of about 2-3 on the microbenchmarks, and by a factor of about 2 on the intensive macrobenchmarks.

Our second optimization (Section 4.2) uses segment bounds to eliminate the need to call mmap, munmap, and mprotect when switching between guest kernel mode and guest user mode. Adding this optimization improves performance on null system calls and context switches by another factor of 5 (beyond the performance with just the first optimization) and enables UMLinux to saturate the network. Performance on the two intensive macrobenchmarks improves by a factor of 3-4.

Our final optimization (Section 4.3) maintains multiple address space definitions to speed up context switches between guest application processes. This optimization has little effect on benchmarks with only one main guest application process, but it has a dramatic effect on benchmarks with more than one main guest application process. Adding this optimization improves the context switch microbenchmark by a factor of 13 and improves kernel-build by a factor of 2.

With all three host OS optimizations to support VMMs, UMLinux runs all macrobenchmarks well within our performance target of a factor of 2 relative to standalone. POV-Ray incurs 1% overhead; kernel-build incurs 35% overhead; and SPECweb99 incurs 14% overhead. These overheads are comparable to those attained by VMware Workstation 3.1.

The largest remaining source of virtualization overhead for kernel-build is the cost and frequency of handling memory faults. kernel-build creates a large number of guest application processes, each of which maps its executable pages on demand. Each demand-mapped page causes a signal to be delivered to the guest kernel, which must then ask the host kernel to map the new page. In addition, UMLinux currently does not support the ability to issue multiple outstanding I/Os on the host. We plan to update the guest disk driver to take advantage of non-blocking I/O when it becomes available on Linux.

6. Related work

User-Mode Linux is a Type II VMM that is very similar to UMLinux [Dike00]. Our discussion of User-Mode Linux assumes a configuration that protects guest kernel memory from guest application processes (jail mode).
The major technical difference between User-Mode Linux and UMLinux is that User-Mode Linux uses a separate host process for each guest application process, while UMLinux runs all guest code in a single host process. Assigning each guest application process to a separate host process speeds up context switches between guest application processes, but it leads to complications such as keeping the shared portion of the guest address spaces consistent and difficult synchronization issues when switching guest application processes [Dike02a]. User-Mode Linux in jail mode is faster than UMLinux (without host OS support) on context switches (157 vs. microseconds) but slower on system calls (296 vs. 96 microseconds) and network transfers (54 vs. 39 seconds). User-Mode Linux in jail mode is faster on kernel-build (1309 vs. seconds) and slower on SPECweb99 (200 vs. 172 seconds) than UMLinux without host OS support.

Concurrently with our work on host OS support for VMMs, the author of User-Mode Linux proposed modifying the host OS to support multiple address space definitions for a single host process [Dike02a]. Like the optimization in Section 4.3, this would speed up switches between guest application processes and allow User-Mode Linux to run all guest code in a single host process.
(Figure 9 presents three bar charts: time per null system call in microseconds, time per context switch in microseconds, and time for the network transfer in seconds. Each chart compares UMLinux, UMLinux + VMM kernel module, + segment bounds, + multiple address spaces, VMware Workstation, and the standalone host.)

Figure 9: Microbenchmark results. This figure compares the performance of different virtual-machine monitors on three microbenchmarks: a null system call, context switching between two 64 KB guest application processes, and receiving 10 MB of data over the network. The first four bars represent the performance of UMLinux with increasing support from the host OS. Each optimization level is cumulative, i.e. it includes all optimizations of the bars to the left. The performance of a standalone host (no VMM) is shown for reference. Without support from the host OS, UMLinux is much slower than a standalone host. Adding three extensions to the host OS improves the performance of UMLinux dramatically.
(Figure 10 presents three bar charts of runtime in seconds for POV-Ray, kernel-build, and SPECweb99. Each chart compares UMLinux, UMLinux + VMM kernel module, + segment bounds, + multiple address spaces, VMware Workstation, and the standalone host.)

Figure 10: Macrobenchmark results. This figure compares the performance of different virtual-machine monitors on three macrobenchmarks: the POV-Ray ray tracer, compiling a Linux kernel, and SPECweb99. The first four bars represent the performance of UMLinux with increasing support from the host OS. Each optimization level is cumulative, i.e. it includes all optimizations of the bars to the left. The performance of a standalone host (no VMM) is shown for reference. Without support from the host OS, UMLinux is much slower than a standalone host. Adding three extensions to the host OS allows UMLinux to approach the speed of a Type I VMM.
Implementation of this optimization is currently underway [Dike02b], though User-Mode Linux still uses two separate host processes, one for the guest kernel and one for all guest application processes. We currently use UMLinux for our CoVirt research project on virtual machines [Chen01] because running all guest code in a single host process is simpler, uses fewer host resources, and simplifies the implementation of our VMM-based replay service (ReVirt) [Dunlap02].

The SUNY Palladium project used a combination of page and segment protections on x86 processors to divide a single address space into separate protection domains [Chiueh99]. Our second solution for protecting the guest kernel space from guest application processes (Section 4.2) uses a similar combination of x86 features. However, the SUNY Palladium project is more complex because it needs to support a more general set of protection domains than UMLinux.

Reinhardt, et al. implemented extensions to the CM-5's operating system that enabled a single process to create and switch between multiple address spaces [Reinhardt93]. This capability was added to support the Wisconsin Wind Tunnel's parallel simulation of parallel computers.

7. Conclusions and future work

Virtual-machine monitors that are built on a host operating system are simple and elegant, but they are currently an order of magnitude slower than running outside a virtual machine, and much slower than VMMs that are built directly on the hardware. We examined the sources of overhead for a VMM that runs on a host operating system.

We found that three bottlenecks are responsible for the bulk of the performance overhead. First, the host OS required a separate host user process to control the main guest-machine process, and this generated a large number of host context switches. We eliminated this bottleneck by moving the small amount of code that controlled the guest-machine process into the host kernel. Second, switching between guest kernel space and guest user space generated a large number of memory protection operations on the host. We eliminated this bottleneck in two ways: one solution modified the host user segment bounds; the other solution modified the segment bounds and ran the guest-machine process in CPU privilege ring 1. Third, switching between two guest application processes generated a large number of memory mapping operations on the host. We eliminated this bottleneck by allowing a single host process to maintain several address space definitions. In total, 510 lines of code were added to the host kernel to support these three optimizations.

With all three optimizations, performance of a Type II VMM on macrobenchmarks improved to within 14-35% overhead relative to running on a standalone host (no VMM), even on benchmarks that exercised the VMM intensively. The main remaining source of overhead was the large number of guest application processes created in one benchmark (kernel-build) and the accompanying page faults from demand mapping in the executable.

In the future, we plan to reduce the size of the host operating system used to support a VMM. Much of the code in the host OS can be eliminated, because the VMM uses only a small number of system calls and abstractions in the host OS. Reducing the code size of the host OS will help make Type II VMMs a fast and trusted base for future virtual-machine services.

8. Acknowledgments

We are grateful to the researchers at the University of Erlangen-Nürnberg for writing UMLinux and sharing it with us. In particular, Kerstin Buchacker and Volkmar Sieh helped us understand and use UMLinux.
Our shepherd Ed Bugnion and the anonymous reviewers helped improve the quality of this paper. This research was supported in part by National Science Foundation grants CCR and CCR and by Intel Corporation. Samuel King was supported by a National Defense Science and Engineering Graduate Fellowship. Any opinions, findings, conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.

9. References

[Barnett02] Ryan C. Barnett. Monitoring VMware Honeypots, September 2002.

[Boc] Bochs.

[Bressoud96] Thomas C. Bressoud and Fred B. Schneider. Hypervisor-based fault tolerance. ACM Transactions on Computer Systems, 14(1):80-107, February 1996.

[Buchacker01] Kerstin Buchacker and Volkmar Sieh. Framework for testing the fault-tolerance of systems including OS and network aspects. In Proceedings of the 2001 IEEE Symposium on High Assurance System Engineering (HASE), October 2001.
[Bugnion97] Edouard Bugnion, Scott Devine, Kinshuk Govil, and Mendel Rosenblum. Disco: Running Commodity Operating Systems on Scalable Multiprocessors. ACM Transactions on Computer Systems, 15(4), November 1997.

[Chen01] Peter M. Chen and Brian D. Noble. When virtual is better than real. In Proceedings of the 2001 Workshop on Hot Topics in Operating Systems (HotOS), May 2001.

[Chiueh99] Tzi-cker Chiueh, Ganesh Venkitachalam, and Prashant Pradhan. Integrating Segmentation and Paging Protection for Safe, Efficient and Transparent Software Extensions. In Proceedings of the 1999 Symposium on Operating Systems Principles, December 1999.

[Con01] The Technology of Virtual Machines. Technical report, Connectix Corp., September 2001.

[Dike00] Jeff Dike. A user-mode port of the Linux kernel. In Proceedings of the 2000 Linux Showcase and Conference, October 2000.

[Dike02a] Jeff Dike. Making Linux Safe for Virtual Machines. In Proceedings of the 2002 Ottawa Linux Symposium (OLS), June 2002.

[Dike02b] Jeff Dike. User-Mode Linux Diary, November 2002.

[Dunlap02] George W. Dunlap, Samuel T. King, Sukru Cinar, Murtaza Basrai, and Peter M. Chen. ReVirt: Enabling Intrusion Analysis through Virtual-Machine Logging and Replay. In Proceedings of the 2002 Symposium on Operating Systems Design and Implementation (OSDI), December 2002.

[Goldberg73] R. Goldberg. Architectural Principles for Virtual Computer Systems. PhD thesis, Harvard University, February 1973.

[Goldberg74] Robert P. Goldberg. Survey of Virtual Machine Research. IEEE Computer, pages 34-45, June 1974.

[Golub90] David Golub, Randall Dean, Alessandro Forin, and Richard Rashid. Unix as an Application Program. In Proceedings of the 1990 USENIX Summer Conference, 1990.

[Govil00] Kinshuk Govil, Dan Teodosiu, Yongqiang Huang, and Mendel Rosenblum. Cellular disco: resource management using virtual clusters on shared-memory multiprocessors. ACM Transactions on Computer Systems, 18(3), August 2000.

[Hoxer02] H. J. Hoxer, K. Buchacker, and V. Sieh. Implementing a User-Mode Linux with Minimal Changes from Original Kernel. In Proceedings of the 2002 International Linux System Technology Conference, pages 72-82, September 2002.

[Karger91] Paul A. Karger, Mary Ellen Zurko, Douglas W. Bonin, Andrew H. Mason, and Clifford E. Kahn. A retrospective on the VAX VMM security kernel. IEEE Transactions on Software Engineering, 17(11), November 1991.

[Magnusson95] Peter Magnusson and B. Werner. Efficient Memory Simulation in SimICS. In Proceedings of the 1995 Annual Simulation Symposium, pages 62-73, April 1995.

[McVoy96] Larry McVoy and Carl Staelin. lmbench: Portable tools for performance analysis. In Proceedings of the Winter 1996 USENIX Conference, January 1996.

[Meushaw00] Robert Meushaw and Donald Simard. NetTop: Commercial Technology in High Assurance Applications. Tech Trend Notes: Preview of Tomorrow's Information Technologies, 9(4), September 2000.

[Nieh00] Jason Nieh and Ozgur Can Leonard. Examining VMware. Dr. Dobb's Journal, August 2000.

[Reinhardt93] Steven K. Reinhardt, Babak Falsafi, and David A. Wood. Kernel Support for the Wisconsin Wind Tunnel. In Proceedings of the 1993 Usenix Symposium on Microkernels and Other Kernel Architectures, pages 73-89, September 1993.

[Robin00] John Scott Robin and Cynthia E. Irvine. Analysis of the Intel Pentium's Ability to Support a Secure Virtual Machine Monitor. In Proceedings of the 2000 USENIX Security Symposium, August 2000.

[Rosenblum95] Mendel Rosenblum, Stephen A. Herrod, Emmett Witchel, and Anoop Gupta. Complete computer simulation: the SimOS approach. IEEE Parallel & Distributed Technology: Systems & Applications, 3(4):34-43, 1995.
[Sugerman01] Jeremy Sugerman, Ganesh Venkitachalam, and Beng-Hong Lim. Virtualizing I/O Devices on VMware Workstation's Hosted Virtual Machine Monitor. In Proceedings of the 2001 USENIX Technical Conference, June 2001.

[Waldspurger02] Carl A. Waldspurger. Memory Resource Management in VMware ESX Server. In Proceedings of the 2002 Symposium on Operating Systems Design and Implementation (OSDI), December 2002.

[Whitaker02] Andrew Whitaker, Marianne Shaw, and Steven D. Gribble. Scale and Performance in the Denali Isolation Kernel. In Proceedings of the 2002 Symposium on Operating Systems Design and Implementation (OSDI), December 2002.
Virtualization for Cloud Computing Dr. Sanjay P. Ahuja, Ph.D. 2010-14 FIS Distinguished Professor of Computer Science School of Computing, UNF CLOUD COMPUTING On demand provision of computational resources
IOS110. Virtualization 5/27/2014 1
IOS110 Virtualization 5/27/2014 1 Agenda What is Virtualization? Types of Virtualization. Advantages and Disadvantages. Virtualization software Hyper V What is Virtualization? Virtualization Refers to
GUEST OPERATING SYSTEM BASED PERFORMANCE COMPARISON OF VMWARE AND XEN HYPERVISOR
GUEST OPERATING SYSTEM BASED PERFORMANCE COMPARISON OF VMWARE AND XEN HYPERVISOR ANKIT KUMAR, SAVITA SHIWANI 1 M. Tech Scholar, Software Engineering, Suresh Gyan Vihar University, Rajasthan, India, Email:
Chapter 2 System Structures
Chapter 2 System Structures Operating-System Structures Goals: Provide a way to understand an operating systems Services Interface System Components The type of system desired is the basis for choices
The Microsoft Windows Hypervisor High Level Architecture
The Microsoft Windows Hypervisor High Level Architecture September 21, 2007 Abstract The Microsoft Windows hypervisor brings new virtualization capabilities to the Windows Server operating system. Its
Optimizing Network Virtualization in Xen
Optimizing Network Virtualization in Xen Aravind Menon EPFL, Lausanne [email protected] Alan L. Cox Rice University, Houston [email protected] Willy Zwaenepoel EPFL, Lausanne [email protected]
Performance Isolation of a Misbehaving Virtual Machine with Xen, VMware and Solaris Containers
Performance Isolation of a Misbehaving Virtual Machine with Xen, VMware and Solaris Containers Todd Deshane, Demetrios Dimatos, Gary Hamilton, Madhujith Hapuarachchi, Wenjin Hu, Michael McCabe, Jeanna
Monitoring VirtualBox Performance
1 Monitoring VirtualBox Performance Siyuan Jiang and Haipeng Cai Department of Computer Science and Engineering, University of Notre Dame Email: [email protected], [email protected] Abstract Virtualizers on Type
Virtualization and the U2 Databases
Virtualization and the U2 Databases Brian Kupzyk Senior Technical Support Engineer for Rocket U2 Nik Kesic Lead Technical Support for Rocket U2 Opening Procedure Orange arrow allows you to manipulate the
evm Virtualization Platform for Windows
B A C K G R O U N D E R evm Virtualization Platform for Windows Host your Embedded OS and Windows on a Single Hardware Platform using Intel Virtualization Technology April, 2008 TenAsys Corporation 1400
Chapter 3: Operating-System Structures. Common System Components
Chapter 3: Operating-System Structures System Components Operating System Services System Calls System Programs System Structure Virtual Machines System Design and Implementation System Generation 3.1
PERFORMANCE ANALYSIS OF KERNEL-BASED VIRTUAL MACHINE
PERFORMANCE ANALYSIS OF KERNEL-BASED VIRTUAL MACHINE Sudha M 1, Harish G M 2, Nandan A 3, Usha J 4 1 Department of MCA, R V College of Engineering, Bangalore : 560059, India [email protected] 2 Department
Cloud Architecture and Virtualisation. Lecture 4 Virtualisation
Cloud Architecture and Virtualisation Lecture 4 Virtualisation TOC Introduction to virtualisation Layers and interfaces Virtual machines and virtual machine managers Hardware support Security 2 Virtualisation
Last Class: OS and Computer Architecture. Last Class: OS and Computer Architecture
Last Class: OS and Computer Architecture System bus Network card CPU, memory, I/O devices, network card, system bus Lecture 3, page 1 Last Class: OS and Computer Architecture OS Service Protection Interrupts
Virtualization. P. A. Wilsey. The text highlighted in green in these slides contain external hyperlinks. 1 / 16
Virtualization P. A. Wilsey The text highlighted in green in these slides contain external hyperlinks. 1 / 16 Conventional System Viewed as Layers This illustration is a common presentation of the application/operating
Operating System Organization. Purpose of an OS
Slide 3-1 Operating System Organization Purpose of an OS Slide 3-2 es Coordinate Use of the Abstractions he Abstractions Create the Abstractions 1 OS Requirements Slide 3-3 Provide resource abstractions
Intel s Virtualization Extensions (VT-x) So you want to build a hypervisor?
Intel s Virtualization Extensions (VT-x) So you want to build a hypervisor? Mr. Jacob Torrey February 26, 2014 Dartmouth College 153 Brooks Road, Rome, NY 315.336.3306 http://ainfosec.com @JacobTorrey
Clouds Under the Covers. Elgazzar - CISC 886 - Fall 2014 1
Clouds Under the Covers KHALID ELGAZZAR GOODWIN 531 [email protected] Elgazzar - CISC 886 - Fall 2014 1 References Understanding Full Virtualization, Paravirtualization, and Hardware Assist White
Virtual Machines and Security Paola Stone Martinez East Carolina University November, 2013.
Virtual Machines and Security Paola Stone Martinez East Carolina University November, 2013. Keywords: virtualization, virtual machine, security. 1. Virtualization The rapid growth of technologies, nowadays,
Jukka Ylitalo Tik-79.5401 TKK, April 24, 2006
Rich Uhlig, et.al, Intel Virtualization Technology, Computer, published by the IEEE Computer Society, Volume 38, Issue 5, May 2005. Pages 48 56. Jukka Ylitalo Tik-79.5401 TKK, April 24, 2006 Outline of
Reconfigurable Architecture Requirements for Co-Designed Virtual Machines
Reconfigurable Architecture Requirements for Co-Designed Virtual Machines Kenneth B. Kent University of New Brunswick Faculty of Computer Science Fredericton, New Brunswick, Canada [email protected] Micaela Serra
A Survey on Virtualization Technologies
A Survey on Virtualization Technologies Susanta Nanda Tzi-cker Chiueh {susanta,chiueh}@cs.sunysb.edu Department of Computer Science SUNY at Stony Brook Stony Brook, NY 11794-4400 Abstract Virtualization
Virtualization Technologies (ENCS 691K Chapter 3)
Virtualization Technologies (ENCS 691K Chapter 3) Roch Glitho, PhD Associate Professor and Canada Research Chair My URL - http://users.encs.concordia.ca/~glitho/ The Key Technologies on Which Cloud Computing
Chapter 16: Virtual Machines. Operating System Concepts 9 th Edition
Chapter 16: Virtual Machines Silberschatz, Galvin and Gagne 2013 Chapter 16: Virtual Machines Overview History Benefits and Features Building Blocks Types of Virtual Machines and Their Implementations
Memory Resource Management in VMware ESX Server
Memory Resource Management in VMware ESX Server Carl Waldspurger OSDI 02 Presentation December 10, 2002 Overview Context Memory virtualization Reclamation Sharing Allocation policies Conclusions 2 2 Motivation
Distributed System Monitoring and Failure Diagnosis using Cooperative Virtual Backdoors
Distributed System Monitoring and Failure Diagnosis using Cooperative Virtual Backdoors Benoit Boissinot E.N.S Lyon directed by Christine Morin IRISA/INRIA Rennes Liviu Iftode Rutgers University Phenix
matasano Hardware Virtualization Rootkits Dino A. Dai Zovi
Hardware Virtualization Rootkits Dino A. Dai Zovi Agenda Introductions Virtualization (Software and Hardware) Intel VT-x (aka Vanderpool ) VM Rootkits Implementing a VT-x based Rootkit Detecting Hardware-VM
