Scheduler Vulnerabilities and Coordinated Attacks in Cloud Computing


Journal of Computer Security
IOS Press

Scheduler Vulnerabilities and Coordinated Attacks in Cloud Computing

Fangfei Zhou, Manish Goel, Peter Desnoyers, Ravi Sundaram
Department of Computer and Information Science
Northeastern University, Boston, MA
Correspondence to: Fangfei Zhou. E-mail: youyou@ccs.neu.edu

Abstract. In hardware virtualization a hypervisor provides multiple Virtual Machines (VMs) on a single physical system, each executing a separate operating system instance. The hypervisor schedules execution of these VMs much as the scheduler in an operating system does, balancing factors such as fairness and I/O performance. As in an operating system, the scheduler may be vulnerable to malicious behavior on the part of users seeking to deny service to others or to maximize their own resource usage. Recently, publicly available cloud computing services such as Amazon EC2 have used virtualization to provide customers with virtual machines running on the provider's hardware, typically charging by wall-clock time rather than resources consumed. Under this business model, manipulation of the scheduler may allow theft of service at the expense of other customers, rather than merely re-allocating resources within the same administrative domain. We describe a flaw in the Xen scheduler allowing virtual machines to consume almost all CPU time, in preference to other users, and demonstrate kernel-based and user-space versions of the attack. We show results demonstrating the vulnerability in the lab, consuming as much as 98% of CPU time regardless of fair share, as well as on Amazon EC2, where Xen modifications protect other users but still allow theft of service (following the responsible disclosure model, we have reported this vulnerability to Amazon; they have since implemented a fix that we have tested and verified). We provide a novel analysis of the necessary conditions for such attacks, and describe scheduler modifications to eliminate the vulnerability. We present experimental results demonstrating the effectiveness of these defenses while imposing negligible overhead. Also, cloud providers such as Amazon's EC2 do not explicitly reveal the mapping of virtual machines to physical hosts [25]. Our attack itself provides a mechanism for detecting the co-placement of VMs, which in conjunction with appropriate algorithms can be utilized to reveal this mapping. Other cloud computing attacks may use this mapping algorithm to detect the placement of victims.

Keywords. cloud computing, virtualization, schedulers, security

1. Introduction

Server virtualization [5] enables multiple instances of an operating system and applications (virtual machines or VMs) to run on the same physical hardware, as if each were on its own machine.

Recently, server virtualization has been used to provide so-called cloud computing services, in which customers rent virtual machines running on hardware owned and managed by third-party providers. Two such cloud computing services are Amazon's Elastic Compute Cloud (EC2) and the Microsoft Windows Azure Platform; in addition, similar services are offered by a number of web hosting providers (e.g. Rackspace's Rackspace Cloud and ServePath Dedicated Hosting's GoGrid) and referred to as Virtual Private Servers (VPS). In each of these services customers are charged by the amount of time their virtual machine is running (in hours or months), rather than by the amount of CPU time used.

The operation of a hypervisor is in many ways similar to that of an operating system; just as an operating system manages access by processes to underlying resources, so too a hypervisor must manage access by multiple virtual machines to a single physical machine. In either case the choice of scheduling algorithm involves a trade-off between factors such as fairness, usage caps, and scheduling latency. As in operating systems, a hypervisor scheduler may be vulnerable to behavior by virtual machines which results in inaccurate or unfair scheduling. Such anomalies and their potential for malicious use have been recognized in the past in operating systems: McCanne and Torek [19] demonstrate a denial-of-service attack on 4.4BSD, and more recently Tsafrir et al. [29] present a similar attack against Linux 2.6 which was fixed only recently. Such attacks typically rely on the use of periodic sampling or a low-precision clock to measure CPU usage; like a train passenger hiding whenever the conductor checks tickets, an attacking process ensures it is never scheduled when a scheduling tick occurs.

Cloud computing represents a new environment for such attacks, however, for two reasons. First, the economic model of many services renders them vulnerable to theft-of-service attacks, which can be successful with far lower degrees of unfairness than required for strong denial-of-service attacks. In addition, the lack of detailed application knowledge in the hypervisor (e.g. to differentiate I/O wait from voluntary sleep) makes it more difficult to harden a hypervisor scheduler against malicious behavior.

The scheduler used by the Xen hypervisor (and, with modifications, by Amazon EC2) is vulnerable to such timing-based manipulation: rather than receiving its fair share of CPU resources, a VM running on unmodified Xen using our attack can obtain up to 98% of total CPU cycles, regardless of the number of other VMs running on the same core. In addition we demonstrate a kernel module allowing unmodified applications to readily obtain 80% of the CPU. The Xen scheduler also supports a non-work-conserving (NWC) mode where each VM's CPU usage is capped; in this mode our attack is able to evade its limits and use up to 85% of total CPU cycles. The modified EC2 scheduler uses this mode to differentiate levels of service; it protects other VMs from our attack, but we still evade utilization limits (typically 40%) and consume up to 85% of CPU cycles.

We give a novel analysis of the conditions which must be present for such attacks to succeed, and present four scheduling modifications which will prevent this attack without sacrificing efficiency, fairness, or I/O responsiveness.
We have implemented these algorithms, and present experimental results evaluating them on Xen.

The rest of this paper is organized as follows: Section 2 provides a brief introduction to VMM architectures, the Xen VMM, and Amazon EC2 as background. Section 3 describes the details of the Xen Credit scheduler. Section 4 explains our attack scheme and presents experimental results in the lab as well as on Amazon EC2. Next, Section 5 details our scheduling modifications to prevent this attack, and evaluates their performance and overhead. Section 6 shows how the attack may be coordinated across multiple VMs to detect co-placement on physical hosts. Section 7 discusses related work, and we conclude in Section 8.

2. Background

We first provide a brief overview of hardware virtualization technology in general, and of the Xen hypervisor and Amazon's Elastic Compute Cloud (EC2) service in particular.

2.1. Hardware Virtualization

Hardware virtualization refers to any system which interposes itself between an operating system and the hardware on which it executes, providing an emulated or virtualized view of physical resources. Almost all virtualization systems allow multiple operating system instances to execute simultaneously, each in its own virtual machine (VM). In these systems a Virtual Machine Monitor (VMM), also known as a hypervisor, is responsible for resource allocation and mediation of hardware access by the various VMs.

Modern hypervisors may be classified by the method used to execute guest OS code without hardware access: (a) binary emulation and translation, (b) paravirtualization, and (c) hardware virtualization support. Binary emulation executes privileged guest code in software, typically with just-in-time translation for speed [1]. Hardware virtualization support [4] in recent x86 CPUs provides a privilege level beyond supervisor mode, used by the hypervisor to control guest OS execution. Finally, paravirtualization allows the guest OS to execute directly in user mode, but provides a set of hypercalls, like system calls in a conventional operating system, which the guest uses to perform privileged functions.

2.2. The Xen Hypervisor

Xen is an open source VMM for the x86/x64 platform [10]. It introduced paravirtualization on the x86, using it to support virtualization of modified guest operating systems without hardware support (unavailable at the time) or the overhead of binary translation. Above the hypervisor there are one or more virtual machines or domains which use hypervisor services to manipulate the virtual CPU and perform I/O.

2.3. Amazon Elastic Compute Cloud (EC2)

Amazon EC2 is a commercial service which allows customers to run their own virtual machine instances on Amazon's servers, for a specified price per hour each VM is running. Details of the different instance types currently offered, as well as pricing per instance-hour, are shown in Table 1. Amazon states that EC2 is powered by a highly customized version of Xen, taking advantage of virtualization [3]. The operating systems supported are Linux, OpenSolaris, and Windows Server 2003; Linux instances (and likely OpenSolaris) use Xen's paravirtualized mode, and it is suspected that Windows instances do so as well [6].

3. Xen Scheduling

In Xen (and other hypervisors) a single virtual machine consists of one or more virtual CPUs (VCPUs); the goal of the scheduler is to determine which VCPU to execute on each physical CPU (PCPU) at any instant. To do this it must determine which VCPUs are idle and which are active, and then from the active VCPUs choose one for each PCPU.

In a virtual machine, a VCPU is idle when there are no active processes running on it and the scheduler on that VCPU is running its idle task. On early systems the idle task would loop forever; on more modern ones it executes a HALT instruction, stopping the CPU in a lower-power state until an interrupt is received. On a fully-virtualized system this HALT traps to the hypervisor and indicates the VCPU is now idle; in a paravirtualized system a direct hypervisor call is used instead. When an exception (e.g. timer or I/O interrupt) arrives, that VCPU becomes active until HALT is invoked again.

By default Xen uses the Credit scheduler [7], an implementation of the classic token bucket algorithm in which credits arrive at a constant rate, are conserved up to a maximum, and are expended during service. Each VCPU receives credits at an administratively determined rate, and a periodic scheduler tick debits credits from the currently running VCPU. If it has no more credits, the next VCPU with available credits is scheduled. Every 3 ticks the scheduler switches to the next runnable VCPU in round-robin fashion, and distributes new credits, capping the credit balance of each VCPU at 300 credits. Detailed parameters (assuming even weights) are:

  Fast tick period: 10 ms
  Slower (rescheduling) tick: 30 ms
  Credits debited per fast tick: 100
  Credit arrivals per fast tick: 100/N
  Maximum credits: 300

where N is the number of VCPUs per PCPU. The fast tick decrements the running VCPU by 100 credits every 10 ms, giving each credit a value of 100 µs of CPU time; the cap of 300 credits corresponds to 30 ms, or a full scheduling quantum.

Based on their credit balance, VCPUs are divided into three states: UNDER, with a positive credit balance; OVER, or out of credits; and BLOCKED, or halted. The VCPUs on a PCPU are kept in an ordered list, with those in UNDER state ahead of those in OVER state; the VCPU at the head of the queue is selected for execution. In work-conserving mode, when no VCPUs are in the UNDER state, one in the OVER state will be chosen, allowing it to receive more than its share of CPU. In non-work-conserving (NWC) mode, the PCPU will go idle instead.

The executing VCPU leaves the run queue head in one of two ways: by going idle, or when removed by the scheduler while it is still active. VCPUs which go idle enter the BLOCKED state and are removed from the queue. Active VCPUs are enqueued after all other VCPUs of the same state (OVER or UNDER), as shown in Figure 1.

The basic credit scheduler accurately distributes resources between CPU-intensive workloads, ensuring that a VCPU receiving k credits per 30 ms epoch will receive at least k/10 ms of CPU time within a period of 30N ms. This fairness comes at the expense of I/O performance, however, as events such as packet reception may wait as long as 30N ms for their VCPU to be scheduled.
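To make the accounting mechanism concrete, the following toy simulation sketches the behavior just described. It is our own illustration under simplifying assumptions (three VCPUs on one PCPU, a fixed round-robin choice of which victim is observed at each tick), not Xen's actual code; the constants mirror the parameters listed above.

#include <stdio.h>

#define N      3        /* VCPUs sharing one PCPU                   */
#define DEBIT  100      /* credits debited per 10 ms sampling tick  */
#define CAP    300      /* maximum credit balance (one 30 ms slice) */

int main(void)
{
    int credits[N] = { CAP, CAP, CAP };

    for (int tick = 1; tick <= 300; tick++) {      /* simulate 3 seconds of ticks */
        /* VCPU 0 plays the attacker: it is asleep whenever the tick fires,
         * so one of the victims is always the VCPU observed running. */
        int observed = 1 + (tick % (N - 1));
        credits[observed] -= DEBIT;

        if (tick % 3 == 0) {                       /* every 3rd tick: distribute credits */
            for (int i = 0; i < N; i++) {
                credits[i] += 3 * DEBIT / N;       /* 100/N credits per fast tick */
                if (credits[i] > CAP)
                    credits[i] = CAP;
            }
        }
    }
    for (int i = 0; i < N; i++)
        printf("VCPU %d final credit balance: %d\n", i, credits[i]);
    return 0;
}

Because the debit falls entirely on whichever VCPU happens to be running at the instant of the tick, a VCPU that is always idle at that instant keeps a full credit balance no matter how much CPU it actually consumed between ticks; this is the property exploited in Section 4.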

To achieve better I/O latency, the Xen Credit scheduler attempts to prioritize such I/O events. When a VCPU sleeps waiting for I/O it will typically have remaining credits; when it wakes with remaining credits it enters the BOOST state and may immediately preempt running or waiting VCPUs with lower priorities. If it goes idle again with remaining credits, it will wake again in BOOST priority at the next I/O event.

This allows I/O-intensive workloads to achieve very low latency, consuming little CPU and rarely running out of credits, while preserving fair CPU distribution among CPU-bound workloads, which typically utilize all their credits before being preempted. However, as we describe in the following section, it also allows a VM to steal more than its fair share of CPU time.

4. Credit Scheduler Attacks

Although the Credit scheduler provides fairness and low I/O latency for well-behaved virtual machines, poorly-behaved ones can evade its fairness guarantees. In this section we describe the features of the scheduler which render it vulnerable to attack, formulate an attack scheme, and present results showing successful theft of service both in the lab and in the field on EC2 instances.

4.1. Attack Description

Our attack relies on the periodic sampling used by the Xen scheduler, and is shown as a timeline in Figure 2. Every 10 ms the scheduler tick fires and schedules the attacking VM, which runs for 10 − ε ms and then calls Halt() to briefly go idle, ensuring that another VM will be running at the next scheduler tick. In theory the efficiency of this attack increases as ε approaches 0; in practice, however, some amount of timing jitter is found, and overly small values of ε increase the risk of the VM being found executing when the scheduling tick arrives.

When perfectly executed against the non-BOOST credit scheduler, this ensures that the attacking VM will never have its credit balance debited. If there are N VMs with equal shares, then the N − 1 victim VMs will receive credits at a total rate of (N − 1)/N, but will be debited at a total rate of 1.

This vulnerability is due not only to the predictability of the sampling, but to the granularity of the measurement. If the time at which each VM began and finished service were recorded with a clock with the same 10 ms resolution, the attack would still succeed, as the attacker would have a calculated execution time of 0 on transition to the next VM.

This attack is more effective against the actual Xen scheduler because of its BOOST priority mechanism. When the attacking VM yields the CPU, it goes idle and waits for the next timer interrupt. Due to a lack of information at the VM boundary, however, the hypervisor is unable to distinguish between a VM waking after a deliberate sleep period (a non-latency-sensitive event) and one waking for e.g. packet reception. The attacker thus wakes in BOOST priority and is able to preempt the currently running VM, so that it can execute for 10 − ε ms out of every 10 ms scheduler cycle.
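As a rough illustration of the numbers reported below (our own arithmetic, not a formula taken from the scheduler), the attacker's share under this timing pattern is approximately

\[
\frac{10 - \epsilon}{10},
\qquad \text{e.g. } \epsilon = 0.2\ \text{ms} \;\Rightarrow\; \approx 98\%\ \text{of the core},
\]

while the N − 1 victims split the remaining ε of each 10 ms period and, as noted above, absorb all of the scheduler's debits.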

4.2. User-Level Implementation

To examine the performance of our attack scenario in practice, we implement it using both user-level and kernel-based code and evaluate them in the lab and on Amazon EC2. In each case we test with two applications: a simple loop we refer to as Cycle Counter, described below, and the Dhrystone 2.1 [30] CPU benchmark. Our attack described in Section 4.1 requires millisecond-level timing in order to sleep before the debit tick and then wake again at the tick; it performs best either with a tickless Linux kernel [27] or with the kernel timer frequency set to 1000 Hz.

Experiments in the Lab

Our first experiments evaluate our attack against unmodified Xen in the lab in work-conserving mode, verifying the ability of the attack both to deny CPU resources to competing victim VMs, and to effectively use the stolen CPU time for computation. All experiments were performed on Xen 3.2.1, on a 2-core 2.7 GHz Intel Core2 CPU. Virtual machines were 32-bit, paravirtualized, single-VCPU instances with 192 MB memory, each running a Suse 11.0 Core kernel with a 1000 Hz kernel timer frequency.

To test our ability to steal CPU resources from other VMs, we implement a Cycle Counter, which performs no useful work, but rather spins using the RDTSC instruction to read the timestamp register and track the time during which the VM is scheduled. The attack is performed by a variant of this code, Cycle Stealer, which tracks execution time and sleeps once it has been scheduled for 10 − ε ms (here ε = 1 ms); a more complete C sketch of this loop is given below.

    prev = rdtsc()
    loop:
        if (rdtsc() - prev) > 9ms
            prev = rdtsc()
            usleep(0.5ms)

Note that the sleep time is slightly less than ε, as the process will be woken at the next OS tick after timer expiration and we wish to avoid over-sleeping.

In Figure 3(a) we see attacker and victim performance on our 2-core test system. As the number of victims increases, attacker performance remains almost constant at roughly 90% of a single core, while the victims share the remaining core.

To measure the ability of our attack to effectively use stolen CPU cycles, we embed the attack within the Dhrystone benchmark. By comparing the time required for the attacker and an unmodified VM to complete the same number of Dhrystone iterations, we can determine the net amount of work stolen by the attacker. Our baseline measurement was made with one VM running unmodified Dhrystone, with no competing usage of the system. When running 6 unmodified instances, three for each core, each completed the same number of iterations at an average of 32.6% of the baseline speed, close to the expected fair-share performance of 33.3%. With one modified attacker instance competing against 5 unmodified victims, the attacker ran at 85.3% of baseline speed, rather than 33.3%, with a corresponding decrease in victim performance. Full results for experiments with 0 to 5 unmodified victims and the modified Dhrystone attacker are shown in Figure 3(b).
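The Cycle Stealer pseudocode above can be fleshed out into a small C program along the following lines. This is a sketch under stated assumptions: the inline rdtsc() wrapper and the TICKS_PER_MS constant (set here for the 2.7 GHz test machine) are ours, and a real run would calibrate the TSC rate at startup.

#include <stdint.h>
#include <unistd.h>

#define TICKS_PER_MS 2700000ULL          /* assumed: 2.7 GHz => ~2.7e6 TSC ticks per ms */
#define RUN_TICKS    (9 * TICKS_PER_MS)  /* run for ~9 ms of each 10 ms tick period     */

static inline uint64_t rdtsc(void)
{
    uint32_t lo, hi;
    __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
    return ((uint64_t)hi << 32) | lo;
}

int main(void)
{
    uint64_t prev = rdtsc();
    for (;;) {
        /* ... useful work (or the spin of Cycle Counter) goes here ... */
        if (rdtsc() - prev > RUN_TICKS) {
            usleep(500);       /* request ~0.5 ms of sleep to skip the debit tick */
            prev = rdtsc();
        }
    }
}

As in the pseudocode, the requested sleep is shorter than ε so that the process wakes at the next kernel timer tick rather than over-sleeping.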

In the modified Dhrystone attacker the TSC register is sampled once for each iteration of a particular loop, as described in Appendix A; if this sampling occurs too slowly the resulting timing inaccuracy might affect results. To determine whether this might occur, the lengths of the compute and sleep phases of the attack were measured. Almost all (98.8%) of the compute intervals were found to lie within a narrow band around 9 ms, indicating that the Dhrystone attack was able to attain timing precision comparable to that of Cycle Stealer.

As described in Section 4.1, the attacker runs for a period of length 10 − ε ms and then briefly goes to sleep to avoid the sampling tick. A smaller value of ε increases the average amount of CPU time stolen by the attacker; however, too small an ε increases the chance of being charged due to timing jitter. To examine this trade-off we tested values of 10 − ε between 7 and 9.9 ms. Figure 4 shows that under lab conditions the peak value was 98%, with an execution time of 9.8 ms and a requested sleep time of 0.1 ms. When execution time exceeded 9.8 ms the attacker was seen by sampling interrupts with high probability; in this case it received only about 21% of one core, even less than the fair share of 33.3%.

Experiments on Amazon

We evaluate our attack using Amazon EC2 Small instances with the following attributes: 32-bit, 1.7 GB memory, 1 VCPU, running Amazon's Fedora Core 8 kernel, with a 1000 Hz kernel timer. We note that the VCPU provided to the Small instance is described as having 1 EC2 Compute Unit, while the VCPUs for larger and more expensive instances are described as having 2 or 2.5 compute units; this indicates that the scheduler is being used in non-work-conserving mode to throttle Small instances. To verify this hypothesis, we ran Cycle Stealer in measurement (i.e. non-attacking) mode on multiple Small instances, verifying that these instances are capped to less than 1/2 of a single CPU core, approximately 38% on the measured systems. We believe that the nominal CPU cap for 1-unit instances on the measured hardware is 40%, corresponding to an unthrottled capacity of 2.5 units.

Additional experiments were performed on a set of 8 Small instances co-located on a single 4-core 2.6 GHz physical system provided by our partners at Amazon. (1) The Cycle Stealer and Dhrystone attacks measured in the lab were performed in this configuration, and results are shown in Figure 5(a) and Figure 5(b), respectively. We find that our attack is able to evade the CPU cap of 40% imposed by EC2 on Small instances, obtaining up to 85% of one core in the absence of competing VMs. When co-located CPU-hungry victim VMs were present, however, EC2 performance diverged from that of unmodified Xen. As seen in the results, co-located VM performance was virtually unaffected by our attack. Although this attack was able to steal cycles from EC2, it was unable to steal cycles from other EC2 customers.

(1) This configuration allowed direct measurement of attack impact on co-located victim VMs, as well as eliminating the possibility of degrading performance of other EC2 customers.

4.3. Kernel Implementation

Implementing a theft-of-service attack at user level is problematic: it involves modifying and adding overhead to the application consuming the stolen resources, and it may not work on applications which lack a regular structure. Although there are a few applications which might fit these requirements (e.g. a password cracker), an attack which is not subject to these limitations would be far more useful to a malicious customer.

One extension of this mechanism would be to instrument a re-usable component such as a library or interpreter to incorporate the attack. Modifying a BLAS library [13], for instance, would allow a large number of programs performing numeric computation to steal cycles while running, and modifying e.g. a Perl interpreter or Java JVM would allow cycle stealing by most applications written in those languages. A more general mechanism, however, is to implement a kernel-level version of the attack, so that any program running within that instance can take advantage of the CPU-stealing mechanism.

Loadable Kernel Module

The most basic way to implement a kernel-level attack is to add files to the kernel source tree and modify the kernel configuration to include the new files in compilation. However, most Unix kernels are monolithic: the kernel itself is a single body of compact code, in which all functions share a common address space and are tightly coupled. When the kernel needs to be updated, all the functions must be relinked and reinstalled and the system rebooted before the change can take effect. This makes modifying kernel-level code difficult, especially when the change is made to a kernel which is not under the user's control.

A loadable kernel module (LKM) allows developers to make changes to the kernel while it is running. An LKM is an object file that contains code to extend the running kernel of an operating system. LKMs are typically used for one of three purposes: device drivers, filesystem drivers, and system calls. Some of the advantages of using loadable kernel modules rather than modifying the base kernel are: (1) no kernel recompilation is required; (2) new kernel functionality can be added without root administration; (3) new functionality takes effect right after module installation, with no reboot required; and (4) an LKM can be reloaded at run time and does not add permanently to the size of the kernel.

Module Implementation

To implement the kernel-level attack, we implement our attack scheme as a kernel thread which invokes an OS sleep for 10 − ε ms, allowing user applications to run, and then invokes the SCHED_block hypercall via the safe_halt function to block the VCPU right before the debit occurs. The attacking VM is then woken by the tick interrupt from the hypervisor and is scheduled after that debit. In practice ε must be higher than for the user-mode attack, due to timing granularity and jitter in the kernel. The module is listed below.

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/timer.h>
#include <linux/kthread.h>

#define TEST 10
#define SLEEP_TIME 8

struct task_struct *ts;
struct timer_list my_timer;

void timer_f(unsigned long data)
{
    my_timer.expires = jiffies + msecs_to_jiffies(TEST);
    add_timer(&my_timer);
}

int thread(void *data)
{
    init_timer(&my_timer);
    my_timer.function = timer_f;
    my_timer.data = 0;
    my_timer.expires = jiffies + msecs_to_jiffies(TEST);
    add_timer(&my_timer);

    while (1) {
        if (kthread_should_stop())
            break;
        msleep(SLEEP_TIME);
        safe_halt();
    }
    return 0;
}

int init_module(void)
{
    ts = kthread_run(thread, NULL, "kthread");
    return 0;
}

void cleanup_module(void)
{
    del_timer(&my_timer);
    kthread_stop(ts);
}

In theory, the smaller ε is in the kernel module implementation, the more CPU cycles user applications can consume. However, our evaluation of msleep shows roughly 1 ms of timing jitter in the kernel. msleep(8) allows the kernel thread to wake up on time and temporarily pause all user applications; the VM is then considered idle and is switched out before the debit tick occurs. msleep(9) does not guarantee that the thread wakes before the debit tick every time, so the attacking VM may not be switched out and in as expected. Based on this observation, ε = 2 ms is a safe choice for the kernel implementation.

Experimental Results

Lab experiments were performed with one attacking VM, which loads the kernel module, and up to 5 victims running a simple CPU-bound loop. In this case the fair CPU share for each guest instance on our 2-core test system would be 200/N % of a single core, where N is the total number of VMs. Due to the granularity of kernel timekeeping, which requires a larger ε, the efficiency of the kernel-mode attack is slightly lower than that of the user-mode attack.
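For reference, the fair-share figure used in these experiments works out as follows (our own arithmetic):

\[
\text{fair share per VM} \;=\; \frac{2\ \text{cores} \times 100\%}{N\ \text{VMs}} \;=\; \frac{200}{N}\,\%\ \text{of one core},
\qquad N = 6 \;\Rightarrow\; \approx 33.3\%.
\]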

We evaluate the kernel-level attack with Cycle Stealer in measurement (non-attacking) mode and with unmodified Dhrystone. In our tests, the attacker consumed up to 80.0% of a single core, with the remaining CPU time shared among the victim instances; results are shown in Figure 6(a) and Figure 6(b). The average amount of CPU stolen by the attacker decreases slightly (from 80.0% to 78.2%) as the number of victims increases; we speculate that this may be due to increased timing jitter causing the attacker to occasionally be charged by the sampling tick.

The current implementation of the kernel module does not succeed in stealing cycles on Amazon EC2. Our analysis of timing traces indicates a lack of synchronization between the attacker and the hypervisor tick, as seen for NWC mode in the lab, below; in this case, however, synchronization was never achieved. The user-level attack displays strong self-synchronizing behavior, aligning almost immediately to the hypervisor tick; we are investigating approaches to similarly strengthen self-synchronization in the kernel module.

4.4. Non-Work-Conserving Mode

As with most hypervisors, Xen CPU scheduling can be configured in two modes: work-conserving mode and non-work-conserving mode [12]. Work-conserving schemes never waste processing cycles: as long as there are instructions to be executed and there is spare CPU capacity, those instructions are allowed to execute; otherwise they are queued and executed according to their priority. In contrast, non-work-conserving schemes allow CPU resources to go unused: there is no advantage to executing an instruction sooner, and CPU resources are allocated to VMs in proportion to their weights. For example, two VMs with equal weight will each use 50% of the CPU; when one VM goes idle, the other remains capped at its 50% share instead of obtaining the free cycles. Our attack program helps a VM evade its capacity cap and obtain more, regardless of whether other VMs are busy or idle.

Additional experiments were performed to examine our attack's performance against the Xen scheduler in non-work-conserving mode. One attacker and 5 victims were run on our 2-core test system, with a CPU cap of 33% set for each VM; results are presented in Table 2 for both the Cycle Stealer and Dhrystone attackers, including the user-level and kernel-level implementations. With a per-VM CPU cap of 33%, the attacker was still able to obtain 80% of a single core. In each case the primary limitation on attack efficiency is the granularity of kernel timekeeping; we speculate that by more directly manipulating the hypervisor scheduler it would be possible to increase efficiency. In non-work-conserving mode, when a virtual machine running the attack yields the CPU, there is a chance (infrequent in our experiments) that rescheduling does not occur; the attacking VM is then still considered the scheduled VM when the debit tick fires, and is debited. In other words, non-work-conserving mode does not stop our attack, but adds some interference. In addition we note that it often appeared to take seconds or longer for the attacker to synchronize with the hypervisor tick and evade resource caps, whereas the user-level attack succeeds immediately.

5. Theft-Resistant Schedulers

The class of theft-of-service attacks on schedulers which we describe is based on a process or virtual machine voluntarily sleeping when it could have otherwise remained scheduled. As seen in Figure 7, this involves a tradeoff: the attack will only succeed if the expected benefit of sleeping for T_sleep is greater than the guaranteed cost of yielding the CPU for that time period.

If the scheduler is attempting to provide each user with its fair share based on measured usage, then sleeping for a duration t must reduce measured usage by more than t in order to be effective. Conversely, a scheduler which ensures that yielding the CPU will never reduce measured usage by more than the sleep period itself will be resistant to such attacks. This is a broader condition than that of maintaining an unbiased estimate of CPU usage, which is examined by McCanne and Torek [19]. Some theft-resistant schedulers, for instance, may over-estimate the CPU usage of attackers and give them less than their fair share. In addition, for schedulers which do not meet our criteria, if we can bound the ratio of sleep time to measurement error, then we can establish bounds on the effectiveness of a timing-based theft-of-service attack.

5.1. Exact Scheduler

The most direct solution is the Exact scheduler: using a high-precision clock (in particular, the TSC) to measure actual CPU usage when a scheduler tick occurs or when a VCPU yields the CPU and goes idle, thus ensuring that an attacking VM is always charged for exactly the CPU time it has consumed. (2) In particular, this involves adding logic to the Xen scheduler to record a high-precision timestamp when a VM begins executing, and then to calculate the duration of execution when it yields the CPU. This is similar to the approach taken in e.g. the recent tickless Linux kernel [27], where timing is handled by a variable interval timer set to fire when the next event is due, rather than by a fixed-period timer tick.

5.2. Randomized Schedulers

An alternative to precise measurement is to sample as before, but on a random schedule. If this schedule is uncorrelated with the timing of an attacker, then over sufficiently long time periods we will be able to estimate the attacker's CPU usage accurately, and thus prevent attack. Assuming a fixed charge per sample, and an attack pattern with period T_cycle, the probability P of the sampling timer falling during the sleep period must be no greater than the fraction of the cycle, T_sleep/T_cycle, which that sleep period represents.

Poisson Scheduler: This leads to a Poisson arrival process for sampling, where the expected number of samples during an interval is exactly proportional to its duration, regardless of prior history. This gives an exponential inter-arrival time distribution,

    T = −(ln U) / λ

where U is uniform on (0,1) and λ is the rate parameter of the distribution.

(2) Although Kim et al. [17] use TSC-based timing measurements in their modifications to the Xen scheduler, they do not address theft-of-service vulnerabilities.

We approximate such Poisson arrivals by choosing the inter-arrival time according to a truncated exponential distribution, with a maximum of 30 ms and a mean of 10 ms, allowing us to retain the existing credit scheduler structure. Due to the possibility of multiple sampling points within a 10 ms period we use a separate interrupt for sampling, rather than re-using or modifying the existing Xen 10 ms interrupt.

Bernoulli Scheduler: The discrete-time analog of the Poisson process, the Bernoulli process, may be used as an approximation of Poisson sampling. Here we divide time into discrete intervals, sampling at any interval with probability p and skipping it with probability q = 1 − p. We have implemented a Bernoulli scheduler with a time interval of 1 ms, sampling with p = 1/10, or one sample per 10 ms, for consistency with the unmodified Xen Credit scheduler. Rather than generate a timer interrupt, with its associated overhead, every 1 ms, we use the same implementation strategy as for the Poisson scheduler, generating an inter-arrival time variate and then setting an interrupt to fire after that interval expires.

By quantizing time at a 1 ms granularity, our Bernoulli scheduler leaves a small vulnerability, as an attacker may avoid being charged during any 1 ms interval by sleeping before the end of the interval. Assuming that (as in Xen) it will not resume until the beginning of the next 10 ms period, this limits an attacker to gaining no more than 1 ms every 10 ms above its fair share, a relatively insignificant theft of service.

Uniform Scheduler: The final randomized scheduler we propose is the Uniform scheduler, which distributes its sampling uniformly across 10 ms scheduling intervals. Rather than generating additional interrupts, or modifying the time at which the existing scheduler interrupt fires, we perform sampling within the virtual machine switch code as we did for the Exact scheduler. In particular, at the beginning of each 10 ms interval (time t_0) we generate a random offset Δ uniformly distributed between 0 and 10 ms. At each VCPU switch, as well as at the 10 ms tick, we check whether the current time has exceeded t_0 + Δ. If so, then we debit the currently running VCPU, as it was executing when the virtual interrupt fired at t_0 + Δ.

Although in this case the sampling distribution is not memoryless, it is still sufficient to thwart our attacker. We assume that sampling is undetectable by the attacker, as it causes only a brief interruption indistinguishable from other asynchronous events such as network interrupts. In this case, as with Poisson arrivals, the expected number of samples within any interval of a 10 ms period is exactly proportional to the duration of the interval. Our implementation of the Uniform scheduler quantizes time with 1 ms granularity, leaving a small vulnerability as described in the case of the Bernoulli scheduler. As in that case, however, the vulnerability is small enough that it may be ignored.

We note also that this scheduler is not theft-proof if the attacker is able to observe the sampling process. If we reach the 5 ms point without being sampled, for instance, the probability of being charged 10 ms during the remaining 5 ms is 1, while avoiding that charge would only cost 5 ms.
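A minimal sketch of how the randomized sampling delays can be drawn, following the truncated-exponential and 1 ms Bernoulli schemes described above. This is our own illustration: the libc RNG and floating-point math stand in for whatever the hypervisor would actually use, and the helper names are hypothetical.

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define MEAN_MS 10.0    /* mean inter-sample time (one sample per 10 ms) */
#define MAX_MS  30.0    /* truncate the exponential at 30 ms             */

/* Poisson scheduler: exponential inter-arrival time T = -ln(U)/lambda,
 * truncated at 30 ms, with mean 10 ms. */
static double next_poisson_delay_ms(void)
{
    double u = (rand() + 1.0) / ((double)RAND_MAX + 2.0);   /* uniform in (0,1) */
    double t = -MEAN_MS * log(u);
    return t > MAX_MS ? MAX_MS : t;
}

/* Bernoulli scheduler: each 1 ms slot is sampled with probability p = 1/10,
 * so the delay until the next sampled slot is geometrically distributed. */
static int next_bernoulli_delay_ms(void)
{
    int slots = 1;
    while ((rand() % 10) != 0)
        slots++;
    return slots;
}

int main(void)
{
    for (int i = 0; i < 5; i++)
        printf("poisson: %.2f ms   bernoulli: %d ms\n",
               next_poisson_delay_ms(), next_bernoulli_delay_ms());
    return 0;
}

In either case the scheduler arms a one-shot interrupt after the returned delay and debits whichever VCPU is running when it fires; the Uniform scheduler instead draws a single offset in [0, 10) ms at the start of each 10 ms interval and checks it at every VCPU switch.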
5.3. Evaluation

We have implemented each of the four modified schedulers on Xen. Since the basic credit and priority boosting mechanisms have not been modified from the original scheduler, our modified schedulers should retain the same fairness and I/O performance properties of the original in the face of well-behaved applications.

To verify performance in the face of ill-behaved applications we tested attack performance against the new schedulers; in addition we measured overhead and I/O performance.

Performance Against Attack

In Table 3 we see the performance of our Cycle Stealer on the Xen Credit scheduler and on the modified schedulers. All four of the schedulers were successful in thwarting the attack: when co-located with 5 victim VMs on 2 cores under the unmodified scheduler, the attacker was able to consume 85.6% of a single CPU with the user-level attack and 80% with the kernel-level attack, but no more than its fair share under each of the modified ones. In Table 4 we see similar results for the modified Dhrystone attacker. Compared to the baseline, the unmodified scheduler allows the attacker to steal about 85.3% of CPU cycles with the user-level attack and 77.8% with the kernel-level attack, while each of the improved schedulers limits the attacker to approximately its fair share.

Overhead Measurement

To quantify the impact of our scheduler modifications on normal execution (i.e. in the absence of attacks) we performed a series of measurements to determine whether application or I/O performance had been degraded by our changes. Since the primary modifications made were to interrupt-driven accounting logic in the Xen scheduler, we examined overhead by measuring performance of a CPU-bound application (unmodified Dhrystone) on Xen while using the different schedulers. To reduce variance between measurements (e.g. due to differing cache line alignment [22]) all schedulers were compiled into the same binary image, and the desired scheduler was selected via a global configuration variable set at boot or compile time.

Our modifications added overhead in the form of additional interrupts and/or accounting code to the scheduler, but also eliminated other accounting code which had performed equivalent functions. To isolate the effect of new code from that of the removal of existing code, we also measured versions of the Poisson and Bernoulli schedulers (Poisson-2 and Bernoulli-2 below) which performed all accounting calculations of both schedulers, discarding the output of the original scheduler calculations. Results from 100 application runs for each scheduler are shown in Table 5. Overhead of our modified schedulers is seen to be low, well under 1%, and in the case of the Bernoulli and Poisson schedulers is negligible. Performance of the Poisson and Bernoulli schedulers was unexpected, as each incurs an additional 100 interrupts per second; the overhead of these interrupts appears to be comparable to or less than that of the accounting code which we were able to remove in each case. We note that these experiments were performed with Xen running in paravirtualized mode; the relative cost of accounting code and interrupts may be different when using hardware virtualization.

We analyzed the new schedulers' I/O performance by testing the I/O latency between two VMs in two configurations. In configuration 1, the two VMs executed on the same core with no other VMs active, while in configuration 2 a CPU-bound VM was added on the other core. From the first test, we expected to see the performance of well-behaved I/O-intensive applications on the different schedulers; from the second, we expected to see whether the new schedulers retain the priority boosting mechanism. In Table 6 we see the results of these measurements.
Differences in performance were minor, and as may be seen by the overlapping confidence intervals, were not statistically significant.

Additional Discussion

A comprehensive comparison of our proposed schedulers is shown in Table 7. The Poisson scheduler seems to be the best option in practice, as it has neither performance overhead nor vulnerability. Even though it has short-term variance, it guarantees an exact fair share in the long run. The Bernoulli scheduler would be an alternative if the vulnerability of up to 1 ms is not a concern. The Uniform scheduler has similar performance to the Bernoulli one, and its sampling implementation is simpler, but it has more overhead than the Poisson and Bernoulli schedulers. Lastly, the Exact scheduler is the most straightforward strategy to prevent cycle stealing, with a relatively trivial implementation but somewhat higher overhead.

6. Coordinated Attacks on Cloud

We have shown how an old vulnerability has manifested itself in the modern context of virtualization. Our attack enables an attacking VM to steal cycles by gaming a vulnerability in the hypervisor's scheduler. We now explain how, given a collection of VMs in a cloud, attacks may be coordinated so as to steal the maximal number of cycles.

Consider a customer of a cloud service provider with a collection of VMs. Their goal is to extract the maximal number of cycles possible for executing their computational requirements. Note that the goal of the customer is not to inflict the highest load on the cloud but to extract the maximum amount of useful work for themselves. To this end they wish to steal as many cycles as possible. As has already been explained, attacking induces an overhead, and having multiple VMs in attack mode on the same physical host leads to higher total overhead, reducing the total amount of useful work. Ideally, therefore, the customer would like to have exactly one VM in attack mode per physical host, with the other VMs functioning normally in non-attack mode.

Unfortunately, cloud providers such as Amazon's EC2 not only control the mapping of VMs to physical hosts but also withhold information about the mapping from their customers. As infrastructure providers this is the right thing for them to do: by doing so they make it harder for malicious customers to snoop on others. Ristenpart et al. [25] show the possibility of targeting victim VMs by mapping the internal cloud infrastructure, though this is much harder to implement successfully in the wild [28]. Interestingly, though, our attack can be turned on its head: not just to steal cycles but also to identify co-placement.

Assume, without loss of generality for the purposes of the rest of this discussion, that hosts are single-core machines. Recall that a regular (i.e. non-attacking) VM on a single host is capped at 38% whereas an attacking VM obtains 87% of the cycles. Now, if we have two attacking VMs on the same host then they obtain about 42% each. Thus by exclusively (i.e. without running any other VMs) running a pair of VMs in attack mode one can determine whether they are on the same physical host or not (depending on whether they get 42% or 87%). Thus a customer could potentially run every pair of VMs and determine which VMs are on the same physical host. (Of course, this is under the not-unreasonable assumption that only the VMs of this particular customer can be in attack mode.) Note that if there are n VMs then this scheme requires (n choose 2) tests, one for each pair. This leads to a natural question: what is the most efficient way to discover the mapping of VMs to physical hosts? This problem has a very clean and beautiful formulation.
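A small worked example of the query semantics (our own, with hypothetical VM numbers and hosts): suppose five VMs are placed as P = {{1,2},{3},{4,5}}. Activating Q = {1,3,4} in attack mode, each of VMs 1, 3, and 4 is the only member of Q on its host, so all three observe the high (alone-on-host) CPU share. Activating Q = {1,2,3} instead, VMs 1 and 2 share a host and each sees the reduced share, while VM 3 still sees the high share. Testing every pair separately therefore requires

\[
\binom{n}{2} = \frac{n(n-1)}{2} \quad\text{queries, e.g. } 190 \text{ queries for } n = 20 \text{ VMs},
\]

which is what motivates the search for a cheaper query strategy below.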

Observe that the VMs get partitioned among (an unknown number of) the physical hosts, and we have the flexibility to activate some subset of the VMs in attack mode. We assume (as is the case for Amazon's EC2) that there is no advantage to running any of the other VMs in normal (non-attack) mode, since they get their fair share regardless (recall that in EC2 attackers only gain additional spare cycles). When we activate a subset of the VMs in attack mode we get back one bit of information for each VM in the subset: namely, whether that VM is the only VM from the subset on its physical host, or whether there are 2 or more VMs from the subset on the same host. Thus the question becomes: what is the fastest way to discover the unknown partition (of VMs among physical hosts)? We think of each subset that we activate as a query that takes unit time, and we wish to use the fewest number of queries. More formally:

PARTITION. You are given an unknown partition P = {S_1, S_2, ..., S_k} of a ground set [1...n]. You are allowed to ask queries of this partition. Each query is a subset Q = {q_1, q_2, ...} ⊆ [1...n]. When you ask the query Q you get back |Q| bits, a bit b_i for each element q_i ∈ Q; let S_{q_i} denote the set of the partition P containing q_i; then b_i = 1 if |Q ∩ S_{q_i}| = 1 (and otherwise b_i = 0, if |Q ∩ S_{q_i}| ≥ 2). The goal is to determine the query complexity of PARTITION, i.e., you have to find P using the fewest queries. Recall that a partition is a collection of disjoint subsets whose union is the ground set, i.e., S_i ∩ S_j = ∅ for all 1 ≤ i < j ≤ k, and ∪_{i=1}^{k} S_i = [1...n].

Observe that one can define both adaptive (where future queries depend on past answers) and oblivious variants of PARTITION. Obviously, adaptive queries have at least as much power as oblivious queries. In the general case we are able to show nearly tight bounds:

Theorem 1. The oblivious query complexity of PARTITION is O(n^(3/2) (log n)^(1/4)) and the adaptive query complexity of PARTITION is Ω(n / log^2 n).

Proof Sketch: The upper bound involves the use of the probabilistic method in conjunction with sieving [2]. We are able to derandomize the upper bound to produce deterministic queries, employing expander graphs and pessimal estimators. The lower bound uses an adversarial argument based on the probabilistic method as well.

In practice, partitions cannot be arbitrary, as there is a bound on the number of VMs that a cloud service provider will map to a single host. This leads to the problem B-PARTITION, in which the size of any set in the partition is at most B. In this special case we are able to prove tight bounds:

Theorem 2. The query complexity of B-PARTITION is Θ(log n).

Proof Sketch: The lower bound follows directly from the information-theoretic argument that there are Ω(2^(n log n)) partitions while each query returns only n bits of information. The upper bound involves use of the probabilistic method (though the argument is simpler than for the general case) and can be derandomized to provide deterministic constructions of the queries. We provide details in Appendix B.

7. Related Work

Tsafrir et al. [29] demonstrate a timing attack on the Linux 2.6 scheduler, allowing an attacking process to appear to consume no CPU and receive higher priority. McCanne and Torek [19] present the same cheat attack on 4.4BSD, and develop a uniform randomized sampling clock to estimate CPU utilization. They describe sufficient conditions for this estimate to be accurate, but unlike Section 5.2 they do not examine conditions for a theft-of-service attack.

Cherkasova et al. [7,8] have done an extensive performance analysis of scheduling in the Xen VMM. They studied I/O performance for three schedulers: BVT, SEDF, and the Credit scheduler. Their work showed that both the CPU scheduling algorithm and the scheduler parameters drastically impact I/O performance. Furthermore, they stressed that the I/O model in Xen remains an issue for resource allocation and accounting among VMs. Since Domain-0 is indirectly involved in servicing I/O for guest domains, I/O-intensive domains may receive excess CPU resources, because the processing performed by Domain-0 on their behalf is not charged to them. To tackle this problem, Gupta et al. [11] introduced the SEDF-DC scheduler, derived from Xen's SEDF scheduler, which charges guest domains for the time spent in Domain-0 on their behalf.

Govindan et al. [14] proposed a CPU scheduling algorithm, as an extension to Xen's SEDF scheduler, that preferentially schedules I/O-intensive domains. The key idea behind their algorithm is to count the number of packets flowing into or out of each domain and to schedule the domain with the highest count that has not yet consumed its entire slice. However, Ongaro et al. [23] pointed out that this scheme is problematic when bandwidth-intensive and latency-sensitive domains run concurrently on the same host: the bandwidth-intensive domains are likely to take priority over any latency-sensitive domains with little I/O traffic. They explored the impact of the VMM scheduler on I/O performance using multiple guest domains concurrently running different types of applications, and evaluated 11 different scheduler configurations within the Xen VMM with both the SEDF and Credit schedulers. They also proposed multiple methods to improve I/O performance.

Weng et al. [31] found from their analysis that Xen's asynchronous CPU scheduling strategy wastes considerable physical CPU time. To fix this problem, they presented a hybrid scheduling framework that groups VMs into high-throughput and concurrent types and determines processing resource allocation among VMs based on type. In a similar vein, Kim et al. [17] presented a task-aware VM scheduling mechanism to improve the performance of I/O-bound tasks within domains. Their approach employs gray-box techniques to peer into VMs and identify I/O-bound tasks in mixed workloads. There are a number of other works on improving other aspects of virtualized I/O performance [9,18,20,24,32] and VMM security [15,21,26]. To summarize, all of these papers tackle problems of long-term fairness between different classes of VMs, such as CPU-bound and I/O-bound VMs.

8. Conclusions

Scheduling has a significant impact on the fair sharing of processing resources among virtual machines and on enforcing any applicable usage caps per virtual machine. This is especially important in commercial services such as cloud computing services, where customers who pay for the same grade of service expect to receive the same access to resources, and where providers offer pricing models based on the enforcement of usage caps. However, the Xen hypervisor (and perhaps others) uses a scheduling mechanism which may fail to detect and account for CPU usage by poorly-behaved virtual machines, allowing malicious customers to obtain enhanced service at the expense of others. The use of periodic sampling to measure CPU usage creates a loophole exploitable by an adroit attacker.

We have demonstrated this vulnerability in Xen in the lab, and in Amazon's Elastic Compute Cloud (EC2) in the field. Under laboratory conditions, we found that applications exploiting this vulnerability are able to utilize up to 98% of the CPU core on which they are scheduled, regardless of competition from other virtual machines with equal priority and share. Amazon EC2 uses a patched version of Xen, which prevents the capped CPU allocations of other VMs from being stolen. However, our attack scheme can still steal idle CPU cycles to increase its share, obtaining up to 85% of CPU resources (as mentioned earlier, we have been in discussions with Amazon about the vulnerability reported in this paper and our recommendations for fixes; they have since implemented a fix that we have tested and verified). Finally, we describe four approaches to eliminating this cycle-stealing vulnerability, and demonstrate their effectiveness at stopping our attack in a laboratory setting. In addition, we verify that the implemented schedulers impose minor or no overhead.

References

[1] Keith Adams and Ole Agesen. A comparison of software and hardware techniques for x86 virtualization. In ASPLOS.
[2] Noga Alon and Joel Spencer. The Probabilistic Method. John Wiley and Sons, Inc.
[3] Amazon. Amazon Web Services: Overview of Security Processes.
[4] AMD. AMD virtualization technology.
[5] P. Barham, B. Dragovic, K. Fraser, S. Hand, T. Harris, A. Ho, R. Neugebauer, I. Pratt, and A. Warfield. Xen and the art of virtualization. In ACM SOSP.
[6] C. Boulton. Novell, Microsoft Outline Virtual Collaboration. Serverwatch.
[7] L. Cherkasova, D. Gupta, and A. Vahdat. Comparison of the three CPU schedulers in Xen. SIGMETRICS Performance Evaluation Review.
[8] L. Cherkasova, D. Gupta, and A. Vahdat. When virtual is harder than real: Resource allocation challenges in virtual machine based IT environments. Tech Report HPL.
[9] L. Cherkasova and R. Gardner. Measuring CPU overhead for I/O processing in the Xen virtual machine monitor. In USENIX.
[10] D. Chisnall. The Definitive Guide to the Xen Hypervisor. Prentice Hall PTR.
[11] D. Gupta, L. Cherkasova, R. Gardner, and A. Vahdat. Enforcing performance isolation across virtual machines in Xen. In ACM/IFIP/USENIX Middleware.
[12] Rogier Dittner and David Rule. The Best Damn Server Virtualization Book Period. Syngress.
[13] J. J. Dongarra, J. Du Croz, I. S. Duff, and S. Hammarling. A set of Level 3 Basic Linear Algebra Subprograms. ACM Trans. on Mathematical Software.
[14] S. Govindan, A. R. Nath, A. Das, B. Urgaonkar, and A. Sivasubramaniam. Xen and co.: communication-aware CPU scheduling for consolidated Xen-based hosting platforms. In ACM VEE.
[15] B. Jansen, H. V. Ramasamy, and M. Schunter. Policy enforcement and compliance proofs for Xen virtual machines. In ACM VEE.
[16] O. Kerr. Cybercrime's access and authorization in computer misuse statutes. NYU Law Review.
[17] H. Kim, H. Lim, J. Jeong, H. Jo, and J. Lee. Task-aware virtual machine scheduling for I/O performance. In ACM VEE.
[18] J. Liu, W. Huang, B. Abali, and D. K. Panda. High performance VMM-bypass I/O in virtual machines. In USENIX.
[19] S. McCanne and C. Torek. A randomized sampling clock for CPU utilization estimation and code profiling. In USENIX.
[20] A. Menon, J. R. Santos, Y. Turner, G. J. Janakiraman, and W. Zwaenepoel. Diagnosing performance overheads in the Xen virtual machine environment. In ACM VEE.
[21] D. G. Murray, G. Milos, and S. Hand. Improving Xen security through disaggregation. In ACM VEE.
[22] T. Mytkowicz, A. Diwan, M. Hauswirth, and P. F. Sweeney. Producing wrong data without doing anything obviously wrong! In ASPLOS.
[23] D. Ongaro, A. L. Cox, and S. Rixner. Scheduling I/O in a virtual machine monitor. In ACM VEE.
[24] H. Raj and K. Schwan. High performance and scalable I/O virtualization via self-virtualized devices. In 16th Int'l Symp. HPDC.
[25] T. Ristenpart, E. Tromer, H. Shacham, and S. Savage. Hey, You, Get Off of My Cloud: Exploring Information Leakage in Third-Party Compute Clouds. In ACM CCS.
[26] S. Rueda, Y. Sreenivasan, and T. Jaeger. Flexible security configuration for virtual machines. In ACM Workshop on Computer Security Architectures, 2008.

[27] S. Siddha, V. Pallipadi, and A. van de Ven. Getting maximum mileage out of tickless. In Linux Symposium.
[28] David Talbot. Vulnerability seen in Amazon's cloud computing. MIT Technology Review, October.
[29] D. Tsafrir, Y. Etsion, and D. G. Feitelson. Secretly monopolizing the CPU without superuser privileges. In 16th USENIX Security Symposium.
[30] R. P. Weicker. Dhrystone Benchmark: Rationale for Version 2 and Measurement Rules. SIGPLAN Notices.
[31] C. Weng, Z. Wang, M. Li, and X. Lu. The hybrid scheduling framework for virtual machine systems. In ACM VEE.
[32] P. Willmann, J. Shafer, D. Carr, A. Menon, S. Rixner, A. L. Cox, and W. Zwaenepoel. Concurrent direct network access for virtual machine monitors. In 13th Int'l Symp. HPDC.
[33] F. Zhou, M. Goel, P. Desnoyers, and R. Sundaram. Scheduler vulnerabilities and coordinated attacks in cloud computing. Northeastern University Technical Report.

A. Modified Dhrystone

The main loop in Dhrystone 2.1 is modified by first adding the following constants and global variables. (Note that in practice the CPU speed would be measured at runtime; the ONE_MS value below is derived from the 2.6 GHz clock and is machine-specific.)

/* Machine-specific values (for a 2.6 GHz CPU) */
#define ONE_MS   2600000            /* TSC ticks per 1 ms (from the 2.6 GHz clock) */
#define INTERVAL (ONE_MS * 9)       /* 9 ms */
#define SLEEP    500                /* 0.5 ms (in us) */

/* TSC counter timestamps */
u_int64_t past, now;
int tmp;

The loop in main() is then modified as follows:

+  int tmp = 0;
   for (Run_Index = 1; Run_Index <= Number_Of_Runs; ++Run_Index) {
+    /* check the TSC counter every 10000 loop iterations */
+    if (tmp++ >= 10000) {
+      now = rdtsc();
+      if (now - past >= INTERVAL) {
+        /* sleep to bypass the sampling tick */
+        usleep(SLEEP);
+        past = rdtsc();
+        tmp = 0;
+      }
+    }
     Proc_5();
     Proc_4();
     ...

B. Upper Bound on B-PARTITION

We prove an upper bound of O(log n) on the query complexity of B-PARTITION, where the size of any set in the partition is at most B. Due to constraints of space we exhibit a randomized construction and leave the details of a deterministic construction to [33].

B. Upper Bound on B-Partition

We prove an upper bound of O(log n) on the query complexity of B-PARTITION, where the size of any set in the partition is at most B. Due to constraints of space we exhibit a randomized construction and leave the details of a deterministic construction to [33].

First, we query the entire ground set $[1 \ldots n]$; this allows us to identify any singleton sets in the partition, so we may now assume that every set in the partition has size at least 2. Consider a pair of elements $x$ and $y$ belonging to different sets $S_x$ and $S_y$. Fix any other element $y' \in S_y$ arbitrarily (such a $y'$ exists because no singleton sets remain). A query $Q$ is said to be a separating witness for the tuple $T = \langle x, y, y', S_x \rangle$ iff $Q \cap S_x = \{x\}$ and $y, y' \in Q$. (Observe that for any such query it will be the case that $b_x = 1$ while $b_y = 0$, hence the term separating witness.) Observe also that any partition $P$ is completely determined by a set of queries with separating witnesses for all the tuples the partition contains.

Now form a query $Q$ by picking each element independently and uniformly with probability $1/2$, and consider any particular tuple $T = \langle x, y, y', S_x \rangle$. Recalling that $|S_x| \le B$, since we consider only partitions whose set sizes are at most $B$, we have
$$\Pr_Q(Q \text{ is a separating witness for } T) \ge \frac{1}{2^{B+2}}.$$
Now consider a collection $\mathcal{Q}$ of such queries, each chosen independently of the others. Then for a given tuple $T$ we have
$$\Pr_{\mathcal{Q}}(\text{no } Q \in \mathcal{Q} \text{ is a separating witness for } T) \le \left(1 - \frac{1}{2^{B+2}}\right)^{|\mathcal{Q}|}.$$
But there are at most $\binom{n}{B+2} B^3 \le n^{B+2}$ such tuples. Hence, by a union bound,
$$\Pr_{\mathcal{Q}}(\exists \text{ a tuple } T \text{ with no separating witness in } \mathcal{Q}) \le n^{B+2} \left(1 - \frac{1}{2^{B+2}}\right)^{|\mathcal{Q}|}.$$
Thus by choosing $|\mathcal{Q}| = O((B+2)\,2^{B+2} \log n) = O(\log n)$ queries (for constant $B$) we can ensure that the above probability is strictly below 1, which means that with positive probability the collection $\mathcal{Q}$ contains a separating witness for every possible tuple; in particular such a collection exists. This shows that the query complexity is $O(\log n)$.
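As a purely illustrative sanity check (not part of the original argument), the following C sketch estimates the probability that a single random query $Q$ is a separating witness for one fixed tuple, under the assumed model that each element enters $Q$ independently with probability $1/2$, and compares the estimate against $1/2^{B+2}$; the ground-set size, the value of $B$, and the particular tuple are arbitrary choices of ours.

    /* Hypothetical sanity check, not from the paper: estimate
       Pr[random Q is a separating witness for a fixed tuple] and
       compare it with the 1/2^(B+2) bound used in the proof. */
    #include <stdio.h>
    #include <stdlib.h>

    #define N      64         /* ground set size (arbitrary) */
    #define B      4          /* maximum set size */
    #define TRIALS 1000000

    int main(void)
    {
        /* Fixed tuple: S_x = {0,...,B-1} with x = 0; y = B and y' = B+1 lie in S_y. */
        long hits = 0;
        srand(1);
        for (int t = 0; t < TRIALS; t++) {
            int inQ[N];
            for (int i = 0; i < N; i++)
                inQ[i] = rand() & 1;                      /* each element joins Q w.p. 1/2 */
            int witness = inQ[0] && inQ[B] && inQ[B + 1]; /* x, y, y' must all be in Q */
            for (int i = 1; i < B && witness; i++)
                if (inQ[i])
                    witness = 0;                          /* rest of S_x must stay outside Q */
            hits += witness;
        }
        printf("estimate = %.5f, bound 1/2^(B+2) = %.5f\n",
               (double)hits / TRIALS, 1.0 / (1 << (B + 2)));
        return 0;
    }

For $|S_x| = B$ the witness probability equals the bound, so the printed estimate should be close to $1/2^{B+2}$; smaller sets $S_x$ only increase it.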

C. Obligations - Legal and Ethical

Since this research essentially involves a theft of service, we include a brief discussion (in separate paragraphs) of our legal obligations under statute and of ethical or moral concerns.

Interaction with computer systems in the United States is governed by the Computer Fraud and Abuse Act (CFAA), which broadly mandates that access to a computer system must be authorized. As is common with any statute, there is considerable ambiguity in the term "authorization", and the complexity derives in large part from case precedents and subsequent interpretations [16]. We believe that we were in full compliance with this statute during the entire course of this research: we were authorized, paying customers of EC2, and we did not access any computer systems other than the one we were running on (for which we were, naturally, authorized). All that we did in our virtual machine was to carefully time our sleeps, which, we believe, is entirely legal.
