Container-based operating system virtualization: a scalable, high-performance alternative to hypervisors. Soltesz et al. (Princeton / Linux-VServer), EuroSys 2007. Context: Operating System Structure/Organization
Introduction
The traditional process abstraction provides only a weak form of isolation. Hypervisors provide more complete isolation between virtual machines (VMs), allowing a single machine to host multiple, unrelated applications from independent organizations. The hypervisor approach, however, has a cost: the efficiency overhead of running full VMs.
Container-Based Operating Systems (COSs)
Rather than using a hypervisor, COSs build on earlier resource-container and security-container work. Examples include Solaris Zones, Virtuozzo, and Linux-VServer. Contributions: 1. A description of Linux-VServer (one of the authors maintains it). 2. A comparison with the latest version of Xen, which uses a hypervisor.
VM Approaches
- Hardware: Intel VT
- Hardware Abstraction Layer: Xen, VMware ESX (can support multiple kernels)
- System Call: Solaris Zones, VServer
- Hosted VMs: VMware GSX
- Language VMs: Java
- Application-level VMs: Apache virtual hosting
VM Usage Scenarios
1. Compute farms (grid computing): flexibility to support the specific software configurations of different applications.
2. Hosting organizations: run many copies of the same server software. CoMon defines a VM as active if it contains a process and live if it is using the CPU.
3. Other scenarios where the efficiency of virtualization, in terms of performance and scale, is important.
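CoMon's active/live distinction can be sketched as a tiny classifier; the snapshot field names here are hypothetical, not CoMon's actual schema:

```python
# Sketch of CoMon's VM activity classification (field names hypothetical).
def classify(vm):
    """A VM is 'active' if it contains at least one process,
    and 'live' if it is currently using the CPU."""
    active = vm["num_processes"] > 0
    live = vm["cpu_usage"] > 0.0
    return {"active": active, "live": live}

# An idle-but-populated VM is active without being live.
print(classify({"num_processes": 12, "cpu_usage": 0.0}))
```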
COS Virtualization
Efficiency measures: performance (throughput, latency) and scalability (number of VMs).
Isolation:
- fault isolation: a fault in one VM should not leak into another VM
- resource isolation: avoid cross-talk, i.e., undesired interactions between VMs; resources include CPU, memory, and network bandwidth
- security isolation: configuration and name independence
Efficiency/Isolation Tradeoffs
Container-Based Operating System Approach
A host VM is used to manage the other VMs; guest VMs run applications.
Isolation Taxonomy of COS and Hypervisor-Based Systems
VServer Resource Isolation
- CPU scheduling: a token-bucket filter where each VM accumulates tokens at a given rate; on each timer tick, the VM using the CPU is charged one token. VMs with a reservation accumulate tokens at their reserved rate; VMs that have only a share run after those with reservations have gotten the CPU.
- Network I/O: Hierarchical Token Bucket, with a reserved rate for outgoing traffic plus a share.
- Disk I/O: Completely Fair Queuing (CFQ).
- Storage limits: on memory and disk usage. Some overbooking of memory is needed to let VMs allocate more than their guarantee, but a watchdog daemon can reset the memory usage of a memory hog when swap is almost full.
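The token-bucket CPU scheme can be illustrated with a small simulation; the rates and bucket sizes below are made-up parameters, not VServer's actual defaults, and only reservations (not shares) are modeled:

```python
# Token-bucket CPU accounting sketch (parameters hypothetical): each VM
# accumulates tokens at its reserved rate and is charged one token per
# timer tick while it holds the CPU.
class VMAccount:
    def __init__(self, fill_rate, bucket_size):
        self.fill_rate = fill_rate      # tokens added per tick (the reservation)
        self.bucket_size = bucket_size  # cap on accumulated tokens
        self.tokens = 0.0

    def tick(self, running):
        # Accumulate tokens up to the bucket cap.
        self.tokens = min(self.bucket_size, self.tokens + self.fill_rate)
        if running:
            self.tokens -= 1  # charged one token per tick on the CPU

    def runnable(self):
        return self.tokens >= 1

# Two VMs: one with a 25% reservation, one with 75%.
a, b = VMAccount(0.25, 10), VMAccount(0.75, 10)
ran = {"a": 0, "b": 0}
for _ in range(1000):
    # A VM with at least one token may run; the fill rate
    # determines its long-run CPU share.
    if a.runnable():
        a.tick(running=True); b.tick(running=False); ran["a"] += 1
    elif b.runnable():
        b.tick(running=True); a.tick(running=False); ran["b"] += 1
    else:
        a.tick(running=False); b.tick(running=False)

print(ran)  # roughly a 25% / 75% split of the 1000 ticks
```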
VServer Security Isolation
Filters out processes that belong to other VMs and creates a fake pid 1 for init. The networking subsystem is shared among VMs, which raises issues if, say, one VM is receiving lots of network traffic. Uses file system unification with copy-on-write links to isolate the file systems of VMs while reducing resource consumption.
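File system unification can be sketched with hard links (paths below are hypothetical): identical files across guest file systems are relinked to a single inode, so they are stored once on disk. VServer additionally marks unified files immutable-but-unlinkable, so a guest's write unlinks the shared copy and produces a private one; this sketch shows only the linking step.

```python
# Hard-link-based file unification sketch (paths hypothetical).
import os
import tempfile

base = tempfile.mkdtemp()
vm1, vm2 = os.path.join(base, "vm1"), os.path.join(base, "vm2")
os.makedirs(vm1)
os.makedirs(vm2)

# Each guest starts with its own identical copy of a library file.
for root in (vm1, vm2):
    with open(os.path.join(root, "libc.so"), "w") as f:
        f.write("shared library contents")

# Unify: replace vm2's copy with a hard link to vm1's inode.
dup = os.path.join(vm2, "libc.so")
os.remove(dup)
os.link(os.path.join(vm1, "libc.so"), dup)

# Both guest paths now resolve to one inode: one copy on disk,
# but each VM still sees the file under its own root.
print(os.stat(dup).st_nlink)  # link count is now 2
```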
VServer vs. Xen Performance Comparison
Uses the standard configuration for each system. Tests are run on both uniprocessor (UP) and symmetric multiprocessor (SMP) kernels. VServer has been integrated with PlanetLab.
Micro-Benchmarks
Uses lmbench for micro-benchmarks. Xen shows a lot of overhead; there is little difference between Linux and VServer.
System Benchmarks
Uses one guest VM for each test (along with the host VM). Iperf is the network-bandwidth benchmark: VServer is comparable to Linux, while Xen reaches about 60%, and Xen on SMP could not achieve line rate. Why? Macro-benchmarks: the tone of the results is similar; Linux and VServer are comparable, while Xen is not as good. Xen's disk performance is 25-35% lower, and its CPU- and memory-bound performance is a bit worse. OSDB scales best at two VMs for VServer and at the number of CPUs for Xen.
Isolation
Disk and network I/O management is similar for VServer and Xen. With a fair CPU share for all VMs, the results are comparable for both. When one VM is given a reservation of 25%, Table 3 shows that VServer achieves performance much closer to the reservation; Xen does provide performance caps, but still not as good. Both show some performance impact from a competing active VM.
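A quick sanity check on what the reservation experiment should ideally deliver (the eight-VM setup here is an assumption for illustration): one VM holding a 25% CPU reservation while seven others split the leftover capacity as fair shares.

```python
# Expected CPU allocations under a reservation-plus-fair-share policy
# (VM count is a hypothetical example, not necessarily the paper's setup).
def expected_shares(reservation, num_other_vms):
    """One VM gets its reservation; the rest split the leftover evenly."""
    leftover = 1.0 - reservation
    return reservation, leftover / num_other_vms

reserved, each_other = expected_shares(0.25, 7)
print(f"reserved VM: {reserved:.1%}, each other VM: {each_other:.1%}")
# reserved VM: 25.0%, each other VM: 10.7%
```

Table 3's point is how closely each system's measured allocation tracks these ideal numbers.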
Summary
VServer has less overhead than Xen, and this shows up in the performance tests; on the other hand, Xen does support multiple kernels. It would be nice to see a response from the Xen side. A solid description of VServer and solid measurement work.