Energy-Efficient Memory Management in Virtual Machine Environments


Lei Ye, Chris Gniady, John H. Hartman
Department of Computer Science, University of Arizona, Tucson, USA

Abstract—Main memory is one of the primary shared resources in a virtualized environment. Current trends in supporting a large number of virtual machines increase the demand for physical memory, making energy-efficient memory management more significant. Several optimizations for memory energy consumption have recently been proposed for standalone operating system environments. However, these approaches cannot be directly used in a virtual machine environment, because a layer of virtualization separates the hardware from the operating system and the applications executing inside a virtual machine. We first adapt existing mechanisms to run at the VMM layer, offering transparent energy optimizations to the operating systems running inside the virtual machines. Static approaches have several weaknesses, so we propose a dynamic approach that is able to optimize energy consumption for currently executing virtual machines and adapt to changing virtual machine behaviors. Through detailed trace-driven simulation, we show that the proposed dynamic mechanisms can reduce memory energy consumption by 63.4% with only a 0.6% increase in execution time as compared to a standard virtual machine environment.

Keywords—Energy Management; Virtual Machine; Memory

I. INTRODUCTION

Current computing infrastructures use virtualization to increase resource utilization by deploying multiple virtual machines on the same hardware. Virtualization is particularly attractive for data centers, cloud computing, and hosting services; in these environments a computer system is typically configured with large physical memory capable of supporting many virtual machines. For example, the HP ProLiant DL580 G7 Server can support up to 1 TB of memory [1].
As a result, memory can consume a significant fraction of a computer system's energy, making it worthwhile to consider ways to improve memory energy efficiency. In addition to improving memory energy efficiency by creating denser memory modules, memory hardware now provides low-power states that can be controlled by software. In a non-virtualized system this control is handled by the operating system [2], [3], [4]; in a virtualized system, however, the additional layer of virtualization makes energy management more challenging. An operating system has a detailed view of the running applications, the demand they place on the system, and the power state of all memory in the system. This global knowledge allows the operating system great flexibility and aggressiveness in managing memory. Memory management in a virtualized environment is more challenging, since virtualization decouples the underlying physical hardware from the guest operating system that runs in the virtual machine. No longer does the operating system have a detailed view of the hardware and the power state of all system memory. Neither is it easy to implement energy management techniques below the operating system, in the Virtual Machine Monitor (VMM) layer, since the VMM does not have process-level knowledge of the running applications. In this paper we address the above challenges in a distinct way. We optimize energy efficiency in the VMM through efficient memory allocation and dynamic management that does not require any knowledge of process-level execution within the guest operating system. The optimizations are transparent to the guest operating system; as a result, energy efficiency is improved without any modifications to the guest operating system code.
Subsequently, we make the following contributions in this paper: (1) we implement energy-aware memory allocation in Xen to minimize the total number of active memory ranks; (2) we design and implement lightweight memory-usage tracing and dynamic power-state transition mechanisms that preserve the performance of the memory system for the virtual machines while improving energy efficiency.

II. BACKGROUND AND MOTIVATION

A. Power-Aware Memory Module

Energy management in the operating system relies on hardware power states. In this paper, we consider the DDR2 RAM that serves as main memory in our system. DDR2 RAM is packaged into DRAM modules, each consisting of two ranks. Each rank includes several physical devices, registers, and a phase-lock loop (PLL). The smallest unit of power management in DDR2 RAM is a rank, and all devices in a rank operate at the same power state [5]. Each rank can independently operate in one of four power states: (1) the active state (ACT), when memory is reading or writing data; (2) the pre-charge state (PRE), a high-power idle state in which the next memory I/O can take place immediately
© 2011 IEEE

[Figure 1. Power state transitions and latencies for a Micron 1GB DDR2-800 memory rank (6.4 GB/s bandwidth; 963.5mW in the ACT state, 8.75mW in SR; 2.5ns PD-to-PRE and 500ns SR-to-PRE exit latencies).]

at the next clock cycle; (3) the power-down state (PD), a lower-power idle state with rank components disabled, such as the sense amplifiers and row/column decoders; (4) and the self-refresh state (SR), the lowest-power idle state that additionally disables the PLL and registers in the rank, and as a result incurs the longest delays. Ranks in the SR and PD states have to be transitioned to the PRE state before a memory I/O is serviced. These state transitions incur delays, as shown in Figure 1, and can degrade performance if not taken into consideration. Therefore, energy management mechanisms have to carefully trade performance for energy by selecting appropriate power states for the ranks in the system.

B. Energy Management in Standard Operating Systems

Previous research has proposed several methods for reducing the energy consumption of main memory based on power-aware memory allocation and dynamic power state transitions. Standard memory allocators treat every request uniformly and assign it to any free region in physical memory. Huang et al. [3] noticed energy inefficiencies in this approach and proposed allocating pages more compactly, using the NUMA software layer, to minimize the number of memory ranks that have to be on. Lee et al. [4] proposed similar ideas to reduce the number of active memory units for the buffer cache. Similarly, Tolentino et al. [6] proposed proactive page allocation that enables the kernel to allocate pages from a particular physical memory device and attempts to pack allocations to minimize the total number of active memory devices. We use similar ideas to allocate virtual machines to a minimum number of ranks before we apply dynamic energy management. Lebeck et al.
further explored several policies, such as random, sequential first-touch, and frequency [7], to minimize the memory footprint of running processes. Finally, Marchal et al. [8] proposed dynamic memory allocators to handle bank assignment on shared multi-banked SDRAM memories for multimedia applications.

Research has also focused on optimizing power states for the accessed ranks. Lebeck et al. [7] introduced static policies that place a memory rank in a single power state and dynamic policies that transition a memory rank between different power states according to runtime context. The power-mode transitions can be effectively hidden within the operating system scheduler, during context switches between processes [2]. Alternatively, Zhou et al. [9] utilized the page miss ratio curve to identify and power down memory chips that are not being accessed by any application. Tolentino et al. [6] proposed history-based predictors for memory energy management. Liu et al. [10] presented a distinct mechanism to optimize memory energy efficiency by tolerating errors in non-critical data.

C. Page Migration for Energy Savings

To further reduce the number of active ranks at runtime we can consider data migration. Delaluz et al. proposed an automatic data migration strategy that dynamically places arrays with temporal affinity into the same set of banks, allowing the use of more aggressive energy-saving modes [11]. Ramamurthy et al. utilized a page migration mechanism in performance-directed energy management [12]. Page migration can be very expensive due to the time and energy overhead of moving data, as well as the overhead of the tracking mechanisms that classify page utilization. Classifying page utilization in hardware is the most efficient approach [9], [13], but such hardware is not available in general-purpose machines. Subsequently, several software approaches have been explored [2], [14] that do not require specialized hardware but result in large runtime overheads.

D. Access Monitoring with Performance Counters

Both Intel and AMD include performance monitoring features in their processors [15], consisting of a set of registers that can be configured to track a variety of processor events. The performance monitoring counters can be used to track the memory behavior of applications by monitoring last-level cache misses, which result in memory accesses. There is no overhead associated with performance counter monitoring beyond setting up and reading the counters, which is very small. However, the low monitoring overhead comes at the cost of reduced information content. The counters report only the number of monitored events that occurred during the monitoring period. We do not have exact timing information, such as when the events occurred or whether there was any clustering. Furthermore, we do not know what parts of memory have been accessed. Therefore, performance counter monitoring cannot directly be used to detect popular pages to enable page migration. It can also be challenging to accurately apply performance counters to power state prediction. Despite these limitations, we use performance counters in this paper because they offer a low-overhead solution.

III. DESIGN AND IMPLEMENTATION

In this section, we describe the design and implementation of our system for improving the energy efficiency of virtualized

[Figure 2. Memory ranks used by a virtual machine for standard rank allocation in Xen and for energy-aware memory allocation.]

systems. First, we describe an energy-aware memory allocator that takes energy efficiency into account when allocating memory pages to virtual machines. Second, we present a Dynamic Power State Management (DPSM) mechanism that transparently optimizes the energy efficiency of main memory.

A. Energy-Aware Memory Allocation

A VMM typically allocates physical memory to individual virtual machines without considering memory ranks. VMMs use a standard memory allocator, such as Buddy, Slab, Slob, or TLSF (Two-Level Segregate Fit), that is not energy-aware and does not consider the rank to which a page belongs. Subsequently, even small virtual address ranges can occupy several memory ranks, and accessing them efficiently requires all of those ranks to be fully powered. This behavior translates to virtual machines, and Figure 2 shows the number of memory ranks allocated for a given virtual machine memory size when using the standard memory allocator in Xen. Xen employs a binary buddy allocator [16], which fragments the memory allocation across more memory ranks than necessary. Furthermore, allocation becomes more fragmented across ranks the longer a system runs, due to physical address space fragmentation. To improve memory allocation, we investigate the energy-aware allocation developed for PAVM [3]. The PAVM memory allocation was implemented in a standard operating system kernel by using the Non-Uniform Memory Access (NUMA) layer to handle memory allocation at the process level. The NUMA management layer allows us to partition physical memory into virtual memory nodes, each corresponding to a memory rank.
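The rank-aware partitioning can be illustrated with a minimal sketch, assuming 1GB ranks each backing one virtual NUMA node; rank_of and node_range are hypothetical helpers for illustration, not Xen code:

```python
# One virtual NUMA node per 1GB memory rank (assumed rank size).
RANK_BYTES = 1 << 30

def rank_of(phys_addr: int) -> int:
    """Map a physical address to its memory rank / virtual NUMA node."""
    return phys_addr // RANK_BYTES

def node_range(rank: int) -> tuple:
    """Address range [start, end) registered for one virtual node,
    in the spirit of the numa_emulation() setup described above."""
    return (rank * RANK_BYTES, (rank + 1) * RANK_BYTES)
```

With eight such nodes, selecting a node at allocation time is equivalent to selecting a rank, which is what makes the allocation energy-aware.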
We employ the NUMA layer in the Xen hypervisor to handle physical memory page allocations for virtual machines in our system, which is equipped with an AMD Phenom II X4 940 processor and 8GB of Micron DDR2-800 memory. The modified Xen NUMA layer adds memory rank information by adding the address ranges of the 1GB memory ranks to numa_emulation(). The binary buddy allocator then supports NUMA-optimized memory allocation across eight virtual memory nodes rather than the default single node. As a result, we enable energy-aware memory allocation by selecting specific memory nodes.

B. Page Allocation Policies

Using the NUMA layer, we consider two page allocation policies: sequential [7] and distributed. The sequential policy allocates pages to virtual machines in the order they are created, so that virtual machines with different memory configurations are packed into a minimal number of memory ranks. This scheme minimizes the total number of active memory ranks, allowing unused ranks to be put into a low-power state. However, it can fragment a virtual machine's memory space over several ranks if the memory of a given virtual machine needs to be increased. To minimize the number of ranks allocated to each virtual machine, we use a distributed policy that starts memory allocation for each virtual machine at the beginning of a new rank. If the memory of a virtual machine needs to grow beyond a rank, we start allocation from a new memory rank. When the system runs out of empty ranks, we consider allocation to partially occupied ranks using a worst-fit policy. The worst-fit allocation policy allows for further growth of a virtual machine's memory while minimizing fragmentation of the virtual machine's memory space between ranks. This allocation policy is shown in Figure 2, and we can see that it minimizes rank allocation even for a statically allocated system that does not grow memory.
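The distributed policy with its worst-fit fallback can be sketched as follows; this is a minimal model assuming 1GB ranks of 4KB pages, and RANK_PAGES, allocate_vm, and the grant list are illustrative names rather than Xen's interface:

```python
RANK_PAGES = 262144  # pages per 1GB rank at 4KB pages (assumed)

def allocate_vm(free_pages: list, vm_pages: int) -> list:
    """Distributed policy: start each VM at the beginning of an empty
    rank; fall back to the partially occupied rank with the most free
    space (worst-fit). free_pages[i] is the free page count of rank i;
    returns a list of (rank, pages) grants and mutates free_pages."""
    grants, remaining = [], vm_pages
    while remaining > 0:
        empty = [i for i, f in enumerate(free_pages) if f == RANK_PAGES]
        if empty:
            rank = empty[0]                        # fresh rank first
        else:
            rank = max(range(len(free_pages)),     # worst-fit fallback
                       key=lambda i: free_pages[i])
            if free_pages[rank] == 0:
                raise MemoryError("out of physical memory")
        take = min(remaining, free_pages[rank])
        free_pages[rank] -= take
        grants.append((rank, take))
        remaining -= take
    return grants
```

For example, a VM slightly larger than one rank receives a whole fresh rank plus the start of the next one, and once no empty rank remains, small requests land on the rank with the most free space, limiting cross-rank fragmentation.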
In this paper, we allocate the maximum number of virtual machines for the given memory space, and as a result the memory of individual virtual machines does not grow. In a system where virtual machine memory growth is possible, the proposed allocation would offer even more benefit.

C. Improving Energy Efficiency

To take advantage of energy-efficient memory allocation, we must employ power management of individual ranks. The natural optimization is to keep ranks that hold currently accessed data in the PRE state. This optimization was implemented in a standard operating system as PAVM [3]. We adopt this optimization at the VMM level: in the VMM scheduler, all memory ranks of the virtual machine being scheduled in are powered up to the PRE state. When the virtual machine is de-scheduled, all ranks occupied by the virtual machine are transitioned to the SR state. PAVM offers high performance, since the PRE state allows immediate servicing of memory I/Os, and it can offer energy savings if there are ranks that are not occupied by currently scheduled virtual machines. To improve energy efficiency further, we need to consider power optimizations for ranks that are occupied by data from currently scheduled virtual machines. Subsequently, we consider the on-demand power-down (ODPD) and on-demand self-refresh (ODSR) policies [5], which maintain ranks in the

[Figure 3. Entire-system energy delay product for the SR, PD, and PRE power states of main memory, as a function of the number of memory accesses; the SR-PD and PD-PRE intersection points define the selection thresholds.]

PD or SR state, respectively, during the execution of the virtual machine. The memory has to be transitioned to the PRE state before servicing a memory I/O. The transitions take time and can significantly degrade performance when the demand for memory I/Os is high. Similarly, keeping memory in a high-power state, such as PRE in the case of PAVM, can hurt energy efficiency if the demand for memory I/Os is very low. Therefore, we need a dynamic approach that matches the power state to the demand placed on memory by the running virtual machine.

D. Capturing Memory Behavior

Due to the high overhead of detailed memory access tracking, we use the AMD CPU performance counters to track memory accesses. Each core has a separate set of performance counters [15], [17] that can be used to track L3 cache misses by setting the performance event-select registers (PerfEvtSeln). A miss in the last-level cache corresponds to a memory access, so by counting L3 cache misses we can count memory accesses for a given core that is running a given virtual machine. We virtualize the performance counters for each VM so that concurrent VMs can be tracked separately, by restoring the counters for a scheduled VM and saving the counters for a de-scheduled VM at context switch time. Since we can only count the number of memory accesses, we do not know what part of memory they occur in. The only information we have is how many memory accesses a given virtual machine performed during its scheduling period on the CPU. However, we are able to tell what subset of ranks the accesses occurred in, based on the ranks allocated to the current virtual machine. Energy-efficient memory allocation minimizes the number of ranks, and as a result we can attribute the memory access characteristics to the small set of ranks occupied by the currently executing virtual machine.
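The counter save/restore at context-switch time can be sketched as below; the class and its fields are illustrative stand-ins, not Xen's actual PMC interface:

```python
class CounterVirtualizer:
    """Per-VM virtualization of one performance counter (e.g. L3-miss
    count): save the outgoing VM's count and restore the incoming
    VM's count at every context switch."""

    def __init__(self):
        self.saved = {}      # vm_id -> saved counter value
        self.hw_counter = 0  # stands in for the physical PMC register

    def context_switch(self, out_vm: int, in_vm: int) -> None:
        self.saved[out_vm] = self.hw_counter        # save outgoing VM
        self.hw_counter = self.saved.get(in_vm, 0)  # restore incoming VM

    def record_misses(self, n: int) -> None:
        self.hw_counter += n  # misses accrue to the running VM only
```

Because the register content is swapped at each switch, each VM's miss count reflects only its own scheduling intervals, which is exactly the per-VM access count the predictor needs.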
E. Dynamic Power State Management

Dynamic management of the power states requires accurate prediction of memory demand for the upcoming period and selection of the best power state for the predicted demand. To accomplish this, DPSM records memory access history for each virtual machine, and therefore for the ranks each virtual machine occupies. DPSM uses an exponential moving average (EMA) to record the aggregate history of accesses for the virtual machine, calculated according to the following formula:

EMA_t = α × Access_prev + (1 − α) × EMA_{t−1}

Access_prev is the number of memory accesses in the previous scheduling interval, EMA_{t−1} is the previous EMA, and α is a weighting coefficient that can be tuned to balance the amount of older and newer history in the EMA calculation. In our system, α is chosen to be 0.85. The EMA averages memory accesses across scheduling intervals, and DPSM uses the current EMA to predict the memory demand for the next scheduling interval. Once the number of memory accesses is predicted for the upcoming interval, DPSM must select the best power state to minimize the system's energy delay product. The energy delay product quantifies both performance and energy impact; a lower energy delay product indicates a better combination of performance and energy savings. We need to consider the energy delay product of the entire system, since what matters are the energy savings of the entire system and not just the memory subsystem. If we considered just the energy delay product of the memory subsystem, longer delays might appear beneficial by making memory more energy efficient. However, those longer delays cause the entire system to run longer and consume more energy, making the energy delay product and the resulting energy efficiency of the entire system much worse. We measured the power of our quad-core system to be 179.5W, or 44.9W per core.
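The EMA update above is a one-liner; with α = 0.85 the most recent interval dominates the prediction:

```python
def update_ema(prev_ema: float, access_prev: float,
               alpha: float = 0.85) -> float:
    """EMA_t = alpha * Access_prev + (1 - alpha) * EMA_{t-1}."""
    return alpha * access_prev + (1 - alpha) * prev_ema
```

For example, update_ema(100, 200) yields 0.85 × 200 + 0.15 × 100 = 185.0, so the prediction moves quickly toward the most recent interval while retaining some older history.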
We use this information combined with the memory power specifications in Figure 1 to calculate the system energy delay product for the different power states. Figure 3 presents the resulting energy delay product curves for keeping memory in the SR, PD, and PRE power states during the scheduling intervals. We observe that if there are fewer memory accesses during the scheduling interval than the SR-PD threshold, the most efficient state is SR. This represents an interval that is not memory intensive, since most of the data fits in the CPU caches. The range between the SR-PD and PD-PRE thresholds is best served by the PD state, which has the lowest energy delay product there. This is a wide range of accesses, and the low transition overhead of the PD state accommodates it well. When the number of memory requests rises above the PD-PRE threshold, we are faced with memory-streaming applications that quickly touch a large amount of data, and in this case the best state is PRE, which has the highest performance. Figure 3 also illustrates the need for dynamic energy management. Keeping memory in one power state during the entire execution does not account for variance in performance demand, and therefore ODSR, ODPD, and PAVM are not the most energy-efficient mechanisms for the wide range of applications that may run inside a virtual machine.
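The comparison behind Figure 3 can be sketched by computing the entire-system energy delay product for each candidate state and taking the minimum; the rank idle powers, interval length, and per-access cost model below are illustrative assumptions, not the paper's measured values:

```python
SYSTEM_W = 179.5      # measured quad-core system power (from the text)
INTERVAL_S = 0.03     # assumed scheduling interval length
RANK_IDLE_W = {"SR": 0.00875, "PD": 0.025, "PRE": 0.1375}  # assumed
EXIT_S = {"SR": 500e-9, "PD": 2.5e-9, "PRE": 0.0}  # exit-to-PRE latency

def system_edp(state: str, accesses: int, ranks: int = 1) -> float:
    """Entire-system energy delay product for one scheduling interval,
    charging each access the state's exposed exit latency."""
    delay = INTERVAL_S + accesses * EXIT_S[state]
    energy = (SYSTEM_W + ranks * RANK_IDLE_W[state]) * delay
    return energy * delay

def best_state(accesses: int) -> str:
    """Pick the state with the lowest system EDP for the prediction."""
    return min(("SR", "PD", "PRE"), key=lambda s: system_edp(s, accesses))
```

With these assumed constants the same qualitative regions emerge: SR wins for nearly idle intervals, PD over a middle band of access counts, and PRE once access counts are high enough that exit latency dominates; the crossover points play the role of the SR-PD and PD-PRE thresholds.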

DPSM uses the intersection points of the energy delay product curves for the different power states to select the power state that best matches the predicted memory accesses for the upcoming scheduling interval. Subsequently, DPSM selects the power state that has the lowest energy delay product for the predicted number of memory accesses.

IV. METHODOLOGY

We use trace-based simulation to analyze and evaluate the proposed mechanisms. We trace and then simulate a platform running 64-bit Xen and pvops Linux on an AMD Phenom II X4 940 processor with 8GB of Micron DDR2-800 memory and the power states specified in Figure 1. The monitoring system tracks memory accesses for every virtual machine during scheduling intervals and records the number of memory references along with a timestamp. We trace the execution of each virtual machine for 10 minutes, resulting in the same number of scheduling intervals for all virtual machines. We simulate a worst-case scenario for performance, with memory accesses distributed uniformly throughout the scheduling interval. In reality, some clustering of memory accesses occurs, resulting in better performance and higher energy savings for a given mechanism. We selected a range of benchmarks to provide a variety of execution between virtual machines. All benchmarks were installed in each virtual machine and executed randomly to mimic the workload of general-purpose applications. DaCapo [18] is a Java benchmark suite consisting of a set of open-source, real-world applications with non-trivial memory loads. The SPEC CPU2000 benchmarks measure the performance of the processor, memory, and compiler on a given system. SPECjvm2008 is a benchmark suite for measuring the performance of a Java Runtime Environment (JRE), containing several real-life applications and benchmarks. Memtester [19] is a memory streaming utility that puts high pressure on the memory subsystem. It sweeps the memory pages within a specified range, performing different algebraic and logical operations.
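The per-interval simulation loop can be sketched as follows, charging (worst case, as above) every access in a low-power state the state's exit latency; the constants and names are illustrative assumptions:

```python
EXIT_NS = {"SR": 500.0, "PD": 2.5, "PRE": 0.0}  # exit-to-PRE latency, ns

def simulate(trace, select_state, alpha=0.85):
    """Trace-driven loop for one VM: trace is a list of per-interval
    memory access counts; select_state maps a predicted count to a
    power state. Returns total exposed delay in ns over the trace."""
    ema, extra_ns = 0.0, 0.0
    for accesses in trace:
        state = select_state(int(ema))          # predict, pick a state
        extra_ns += accesses * EXIT_NS[state]   # exposed exit latency
        ema = alpha * accesses + (1 - alpha) * ema  # update history
    return extra_ns
```

Plugging in a constant policy reproduces the static mechanisms (always "PRE" for ALL, always "PD" for ODPD, always "SR" for ODSR), while a threshold-based select_state models the dynamic mechanism.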
The 8GB of memory in our system is distributed between the VMM and virtual machines as follows. Dom0 is configured with 512MB of memory to run the tracing program. Both the Xen hypervisor and Dom0 are allocated in the first rank. The last rank is occupied by the frame buffer of the integrated graphics device (IGD). The remaining six ranks are dedicated to concurrent execution of six DomU virtual machines. The Xen hypervisor does not support paging; therefore, each DomU virtual machine can only request a memory configuration no larger than the available physical memory in our system. As a result, each virtual machine was configured with one virtual CPU and 1GB of memory, which is adequate to execute our studied benchmarks in each virtual machine concurrently. Rank 7, which is occupied by the frame buffer, always remains in the PRE state due to frequent accesses from the IGD. The rank occupied by the Xen hypervisor and Dom0 is managed according to PAVM and is kept in the PRE state when the Xen hypervisor and Dom0 are scheduled. For the remaining ranks we explore the following energy management mechanisms:

ALL (Always On) represents a standard system without energy management that keeps all memory ranks in the PRE state.

PAVM (Power-Aware Virtual Memory) keeps the memory ranks of currently scheduled virtual machines in the PRE state and other ranks in the SR state [3].

ODSR (On-Demand Self-Refresh) keeps all memory ranks in the SR state during execution and only transitions them to the PRE state to serve an arriving memory I/O, and back to the SR state once the memory I/O completes [5].

ODPD (On-Demand Power-Down) keeps all memory ranks in the PD state during execution and only transitions them to the PRE state to serve an arriving memory I/O, and back to the PD state at the completion of the memory I/O [5].

DPSM is the proposed mechanism, which dynamically selects a power state for the upcoming scheduling interval.
The optimal mechanism uses future knowledge to select the power state that minimizes the energy delay product of the system during the upcoming scheduling interval.

V. EVALUATION

In this section, we examine the performance and energy efficiency of the studied mechanisms.

A. Memory Energy Consumption

Figure 4 presents the memory energy distribution between power states, normalized to the energy consumed by keeping all six memory ranks occupied by virtual machines (VMs) always on. There are five sources of energy consumption: idling in the SR, PD, and PRE states; switching to the PRE state from the SR or PD state before serving memory I/Os; and servicing memory I/Os. The energy consumed to service memory I/Os is the same for all mechanisms, because the memory has to serve the same amount of data no matter what energy-saving mechanism is used. The remaining energy is consumed according to the given mechanism's power state selection. To guarantee performance, the PAVM mechanism keeps the ranks of the currently executing virtual machines in the PRE state and the ranks of suspended virtual machines in the SR state, resulting in four ranks in the PRE state and the remaining two ranks, occupied by currently not running

[Figure 4. Memory energy consumption normalized to the energy consumed by the standard Xen configuration that keeps all six memory ranks always in the PRE state.]

virtual machines, in the SR state. The resulting energy reduction is 30.4%, without any performance degradation. Since the active ranks are kept in the PRE state at all times during the virtual machine scheduling intervals, the majority of the remaining energy consumption is attributed to the PRE state. The energy consumed by the two ranks in the SR state contributes only 1.9% to the total energy consumption, further supporting the argument that inactive ranks should be in the lowest power state. The switching overhead is entirely hidden by the context switch time; therefore, switching energy is negligible. To reduce energy consumption further, we need to consider lower power states for the currently accessed memory ranks, which may expose switching overheads in the form of delays and energy consumption. The two static mechanisms we consider are ODSR and ODPD, which trade performance for energy savings to different degrees. The ODPD mechanism is closest to the PAVM mechanism, with almost the same behavior except that it keeps the currently active ranks in the PD state. Subsequently, energy is reduced by an additional 28.6%, at the cost of exposing a power state transition to the PRE state before serving each memory I/O request. Similarly, the majority of the remaining energy is consumed idling in the PD state. The ODSR mechanism, on the other hand, utilizes the lower-power SR state at the cost of higher delays. One might expect larger energy savings due to utilization of the very low power SR state during idle periods; however, as we can see from Figure 4, this is not the case.
The long power state transition latency (500ns) translates into substantial energy consumption due to frequent power state transitions from the SR to the PRE state, resulting in the ODSR mechanism consuming much more energy than the ODPD mechanism. Therefore, the static ODSR mechanism is not desirable unless the virtual machine is almost idle, which is not a common case in our experiments. Such idle periods do exist, but they are usually short and need to be exploited dynamically. Furthermore, certain execution periods have high memory activity that cannot tolerate any delays, and therefore the PRE state is desirable. The DPSM mechanism balances these states and achieves energy consumption comparable to the optimal mechanism. The DPSM mechanism reduces energy consumption by an additional 4.4% as compared to the ODPD mechanism, which is only 0.4% away from the optimal mechanism. Furthermore, its switching overhead is negligible, resulting in lower energy consumption and better performance than the ODPD mechanism.

[Figure 5. Power state distribution determined by the DPSM and optimal mechanisms. SR, PD, and PRE represent the energy-delay-reducing power states; W_SR, W_PD, and W_PRE represent incorrect power state predictions by the DPSM mechanism.]

B. Prediction of Power States

The energy consumption distribution of the DPSM and optimal mechanisms can be correlated to the optimal and predicted power state selection shown in Figure 5. Figure 5 compares the power state selection of the DPSM and optimal mechanisms for multiple concurrent virtual machines. The optimal mechanism selects the best power state to minimize the overall energy delay product of the system. The DPSM mechanism compares the memory access count against the thresholds defined in Figure 3 to set the memory rank power state during each scheduling interval. As Figure 5 shows, the PD state offers the best energy delay product the majority of the time, averaging 53.1% across the virtual machines.
The DPSM mechanism makes its predictions based on history and as a result can mispredict the memory behavior of the upcoming period. Figure 5 shows the mispredicted portion for each state. The incorrect portion above each state represents the number of periods in that state that were incorrectly predicted and should have been spent in other power states. Keeping ranks incorrectly in the SR state degrades performance, and keeping ranks incorrectly in the PRE state reduces energy savings. Incorrect predictions in the PD state indicate that the virtual machine would have been better off in another state, either PRE or SR, and as a result they can either degrade performance or increase energy consumption. In either case, incorrect predictions degrade energy efficiency.

[Figure 6. Execution time overhead normalized to the standard Xen configuration that keeps all six memory ranks always in the PRE state.]

However, Figure 5 shows that the mispredicted portions are small, and as a result the DPSM mechanism, on average, predicts the correct power state for 93.4% of the periods. The prediction performance and the period distribution vary across virtual machines. The benchmarks running in a virtual machine are randomly chosen by a script, resulting in a mix of memory-intensive and compute-intensive virtual machines. Furthermore, some benchmarks show relatively more fluctuation in memory behavior across scheduling intervals in a virtual machine, making it difficult for the DPSM mechanism to correctly predict power state transitions. In virtual machine 3, benchmarks with high demand for memory bandwidth, such as derby, scimark.fft.large, gzip, and gcc, were chosen by the random selector, resulting in a larger portion of time in the PRE state. For the other virtual machines, the random selector chose the benchmarks more uniformly, resulting in a relatively smaller portion in the PRE state. Nevertheless, the DPSM mechanism shows similar predictor performance across virtual machines.

C. Execution Time Overhead

Incorrect state selection can translate into performance degradation and delays in application execution, as shown in Figure 6. Figure 6 shows delays in execution time normalized to the system that keeps memory in the PRE state all the time (ALL). PAVM keeps the currently active ranks in the PRE state and as a result does not show any degradation in performance. ODSR shows a prohibitive degradation in performance, since it has to transition from the SR to the PRE state before each memory access, exposing the 500ns transition latency on every memory access. This large delay translates into a significant increase in energy consumption, since the entire system must stay on for almost seven times longer.
The remaining mechanisms increase execution time by less than 6% and are preferable over ODSR. ODPD increases execution time by 5.6% due to transitions between the PD and PRE states, exposing a 2.5ns latency on every memory access. The delay that minimizes the overall energy-delay product of the system is 0.2% of the running time. The dynamic mechanism is again very close to this optimum, with only a 0.6% increase in execution time. The small deviation is attributed to mispredictions in the mechanism, which extend program execution. In Figure 5, mispredictions in the PRE state dissipate energy but do not introduce delay; the 2.8% and 1.8% of mispredictions in the SR and PD states, respectively, on average across all 6 virtual machines, result in the additional 0.4% of delay.

D. Energy Consumption in the Entire System

A complete system contains many components that consume energy, including memory, CPUs, hard drives, network interfaces, and the motherboard. Figure 7 illustrates memory energy consumption as a fraction of overall system energy for various system sizes. To show the impact of memory optimizations in a larger system, we consider a dual Intel Xeon Processor X5667 system with Micron DDR2-800 memory. Each CPU is a quad-core processor that supports hyper-threading, allowing eight virtual cores. Each CPU has a peak power of 95W, and the rest of the system, not including memory, requires 220W. We consider four different memory configurations of 16, 24, 32, and 64 memory ranks. Each virtual machine is still assigned 1GB of memory, resulting in 16, 24, 32, or 64 concurrent virtual machines. We have a total of 16 virtual cores (8 cores with hyper-threading), so at most 16 virtual machines can execute concurrently, which implies that the larger systems will keep more ranks in the SR state while their virtual machines are not running.
Finally, we recalculate the values of the SR-PD and PD-PRE thresholds to account for the different system sizes. We first observe that larger system memory consumes more power in the base system (ALL) that keeps all memory in the PRE state, reaching 35.8% of system energy for the system with 64 memory ranks. In the case of the system with 16 ranks, PAVM and ALL behave the same way, since 16 virtual machines run concurrently and PAVM keeps all 16 ranks active. We also note that as the number of ranks increases, the energy consumption of PAVM remains relatively constant, since the number of ranks kept in the PRE state remains the same and only the energy consumed in the SR state grows with the number of ranks. To provide any energy optimizations in the 16-rank case we must use low-power states. The trends are similar to PAVM: once the energy savings for 16 ranks are obtained, increasing the number of ranks only increases the energy consumed in the SR state. We observe that the dynamic mechanism offers a significant improvement in energy savings over ODPD and is comparable to the optimum across system configurations. We do not consider ODSR, since its delays result in both performance degradation and higher energy consumption than even the base system (ALL).
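Memory's share of whole-system energy can be estimated directly from per-rank power draws and the fixed CPU and system power. The sketch below uses the dual 95W CPUs from the text; the per-state rank powers and the non-memory system power are placeholder assumptions, not Micron datasheet values:

```python
# Rough estimate of memory's fraction of total system power for the
# configurations discussed in the text. Per-state rank powers and the
# non-memory "rest of system" power are assumed placeholder values.

RANK_POWER_W = {"PRE": 1.2, "PD": 0.3, "SR": 0.05}  # assumed per-rank draws

def memory_fraction(rank_states, cpu_w=2 * 95.0, rest_w=220.0):
    """Fraction of total system power drawn by memory, given rank states."""
    mem_w = sum(RANK_POWER_W[s] for s in rank_states)
    return mem_w / (mem_w + cpu_w + rest_w)

# With at most 16 VMs running, a 64-rank system can hold 48 ranks in SR:
all_pre_64 = memory_fraction(["PRE"] * 64)                 # ALL baseline
pavm_64 = memory_fraction(["PRE"] * 16 + ["SR"] * 48)      # PAVM-like split
```

Under this model, memory's fraction for the all-PRE baseline grows with rank count while the PAVM-like split stays nearly flat, mirroring the trend the text reports for Figure 7.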

Figure 7. Fraction of energy due to memory in large memory systems.

As in Figure 6, the dynamic mechanism has a very small impact on performance, keeping the execution time overhead relatively constant, while the ODPD mechanism incurs higher delays. As we have seen, the dynamic mechanism offers the best combination of performance and energy savings, resulting in close to optimal energy efficiency for any system configuration.

VI. CONCLUSIONS

As main memories increase in size, so does their energy consumption. Reducing energy consumption without decreasing performance is difficult in a virtualized environment. We propose a dynamic approach that transparently provides energy optimizations for main memory, adapting existing mechanisms such as energy-aware memory allocation and static power state management. The mechanisms have low overhead, since they rely on performance counters and are algorithmically simple, resulting in energy-efficient memory management. In all scenarios, the dynamic approach is better than PAVM and the static ODPD and ODSR mechanisms. Transparency and low complexity allow the proposed mechanisms to be easily deployed in a range of virtualized environments.

VII. ACKNOWLEDGMENT

This material is based upon work supported by the National Science Foundation under Grant No.

REFERENCES

[1] HP ProLiant DL580 G7 Server series specifications, 2010.
[2] V. Delaluz, A. Sivasubramaniam, M. Kandemir, N. Vijaykrishnan, and M. J. Irwin, "Scheduler-based DRAM energy management," in DAC, 2002.
[3] H. Huang, P. Pillai, and K. G. Shin, "Design and implementation of power-aware virtual memory," in ATC, 2003.
[4] M. Lee, E. Seo, J. Lee, and J.-S. Kim, "PABC: Power-aware buffer cache management for low power consumption," IEEE Transactions on Computers, 2007.
[5] Micron, "DDR2 SDRAM features," pdf/datasheets/dram/ddr2/1gbddr2.pdf, 2010.
[6] M. E. Tolentino, J. Turner, and K. W. Cameron, "MemoryMISER: a performance-constrained runtime system for power-scalable clusters," in CF, 2007.
[7] A. R. Lebeck, X. Fan, H. Zeng, and C. Ellis, "Power aware page allocation," in ASPLOS, 2000.
[8] P. Marchal, J. I. Gomez, L. Pinuel, D. Bruni, L. Benini, F. Catthoor, and H. Corporaal, "SDRAM-energy-aware memory allocation for dynamic multi-media applications on multi-processor platforms," in DATE, 2003.
[9] P. Zhou, V. Pandey, J. Sundaresan, A. Raghuraman, Y. Zhou, and S. Kumar, "Dynamic tracking of page miss ratio curve for memory management," in ASPLOS, 2004.
[10] S. Liu, K. Pattabiraman, T. Moscibroda, and B. G. Zorn, "Flikker: Saving DRAM refresh-power through critical data partitioning," in ASPLOS, 2011.
[11] V. De La Luz, M. Kandemir, and I. Kolcu, "Automatic data migration for reducing energy consumption in multi-bank memory systems," in DAC, 2002.
[12] P. Ramamurthy and R. Palaniappan, "Performance-directed energy management using BOS," SIGOPS Oper. Syst. Rev., 2007.
[13] Y. Bao, M. Chen, Y. Ruan, L. Liu, J. Fan, Q. Yuan, B. Song, and J. Xu, "HMTT: a platform independent full-system memory trace monitoring system," in SIGMETRICS, 2008.
[14] W. Zhao and Z. Wang, "Dynamic memory balancing for virtual machines," in VEE, 2009.
[15] AMD, "AMD64 architecture programmer's manual, volume 2: System programming," TechDocs/24593.pdf, 2010.
[16] D. Chisnall, The Definitive Guide to the Xen Hypervisor. Upper Saddle River, NJ, USA: Prentice Hall PTR, 2007.
[17] AMD, "AMD BIOS and kernel developer's guide," support.amd.com/us/Processor_TechDocs/41256.pdf, 2010.
[18] DaCapo benchmarks.
[19] Memtester, 2010.


Performance Analysis of Web based Applications on Single and Multi Core Servers Performance Analysis of Web based Applications on Single and Multi Core Servers Gitika Khare, Diptikant Pathy, Alpana Rajan, Alok Jain, Anil Rawat Raja Ramanna Centre for Advanced Technology Department

More information

Performance Management in the Virtual Data Center, Part II Memory Management

Performance Management in the Virtual Data Center, Part II Memory Management Performance Management in the Virtual Data Center, Part II Memory Management Mark B. Friedman Demand Technology Software, 2013 markf@demandtech.com The Vision: Virtualization technology and delivery of

More information

MODULE 3 VIRTUALIZED DATA CENTER COMPUTE

MODULE 3 VIRTUALIZED DATA CENTER COMPUTE MODULE 3 VIRTUALIZED DATA CENTER COMPUTE Module 3: Virtualized Data Center Compute Upon completion of this module, you should be able to: Describe compute virtualization Discuss the compute virtualization

More information

Virtualizing Performance-Critical Database Applications in VMware vsphere VMware vsphere 4.0 with ESX 4.0

Virtualizing Performance-Critical Database Applications in VMware vsphere VMware vsphere 4.0 with ESX 4.0 Performance Study Virtualizing Performance-Critical Database Applications in VMware vsphere VMware vsphere 4.0 with ESX 4.0 VMware vsphere 4.0 with ESX 4.0 makes it easier than ever to virtualize demanding

More information

Virtualization @ Google

Virtualization @ Google Virtualization @ Google Alexander Schreiber Google Switzerland Libre Software Meeting 2012 Geneva, Switzerland, 2012-06-10 Introduction Talk overview Corporate infrastructure Overview Use cases Technology

More information

Benchmarking Hadoop & HBase on Violin

Benchmarking Hadoop & HBase on Violin Technical White Paper Report Technical Report Benchmarking Hadoop & HBase on Violin Harnessing Big Data Analytics at the Speed of Memory Version 1.0 Abstract The purpose of benchmarking is to show advantages

More information

Where IT perceptions are reality. Test Report. OCe14000 Performance. Featuring Emulex OCe14102 Network Adapters Emulex XE100 Offload Engine

Where IT perceptions are reality. Test Report. OCe14000 Performance. Featuring Emulex OCe14102 Network Adapters Emulex XE100 Offload Engine Where IT perceptions are reality Test Report OCe14000 Performance Featuring Emulex OCe14102 Network Adapters Emulex XE100 Offload Engine Document # TEST2014001 v9, October 2014 Copyright 2014 IT Brand

More information

Virtualization. Jukka K. Nurminen 23.9.2015

Virtualization. Jukka K. Nurminen 23.9.2015 Virtualization Jukka K. Nurminen 23.9.2015 Virtualization Virtualization refers to the act of creating a virtual (rather than actual) version of something, including virtual computer hardware platforms,

More information

Virtualization. Types of Interfaces

Virtualization. Types of Interfaces Virtualization Virtualization: extend or replace an existing interface to mimic the behavior of another system. Introduced in 1970s: run legacy software on newer mainframe hardware Handle platform diversity

More information

Virtual Machines. COMP 3361: Operating Systems I Winter 2015 http://www.cs.du.edu/3361

Virtual Machines. COMP 3361: Operating Systems I Winter 2015 http://www.cs.du.edu/3361 s COMP 3361: Operating Systems I Winter 2015 http://www.cs.du.edu/3361 1 Virtualization! Create illusion of multiple machines on the same physical hardware! Single computer hosts multiple virtual machines

More information

Power Efficiency Comparison: Cisco UCS 5108 Blade Server Chassis and Dell PowerEdge M1000e Blade Enclosure

Power Efficiency Comparison: Cisco UCS 5108 Blade Server Chassis and Dell PowerEdge M1000e Blade Enclosure White Paper Power Efficiency Comparison: Cisco UCS 5108 Blade Server Chassis and Dell PowerEdge M1000e Blade Enclosure White Paper March 2014 2014 Cisco and/or its affiliates. All rights reserved. This

More information

Virtuoso and Database Scalability

Virtuoso and Database Scalability Virtuoso and Database Scalability By Orri Erling Table of Contents Abstract Metrics Results Transaction Throughput Initializing 40 warehouses Serial Read Test Conditions Analysis Working Set Effect of

More information

Group Based Load Balancing Algorithm in Cloud Computing Virtualization

Group Based Load Balancing Algorithm in Cloud Computing Virtualization Group Based Load Balancing Algorithm in Cloud Computing Virtualization Rishi Bhardwaj, 2 Sangeeta Mittal, Student, 2 Assistant Professor, Department of Computer Science, Jaypee Institute of Information

More information

Energy Aware Consolidation for Cloud Computing

Energy Aware Consolidation for Cloud Computing Abstract Energy Aware Consolidation for Cloud Computing Shekhar Srikantaiah Pennsylvania State University Consolidation of applications in cloud computing environments presents a significant opportunity

More information

Chapter 5 Cloud Resource Virtualization

Chapter 5 Cloud Resource Virtualization Chapter 5 Cloud Resource Virtualization Contents Virtualization. Layering and virtualization. Virtual machine monitor. Virtual machine. Performance and security isolation. Architectural support for virtualization.

More information

W H I T E P A P E R. Performance and Scalability of Microsoft SQL Server on VMware vsphere 4

W H I T E P A P E R. Performance and Scalability of Microsoft SQL Server on VMware vsphere 4 W H I T E P A P E R Performance and Scalability of Microsoft SQL Server on VMware vsphere 4 Table of Contents Introduction................................................................... 3 Highlights.....................................................................

More information

Multi-core and Linux* Kernel

Multi-core and Linux* Kernel Multi-core and Linux* Kernel Suresh Siddha Intel Open Source Technology Center Abstract Semiconductor technological advances in the recent years have led to the inclusion of multiple CPU execution cores

More information

Multi-Threading Performance on Commodity Multi-Core Processors

Multi-Threading Performance on Commodity Multi-Core Processors Multi-Threading Performance on Commodity Multi-Core Processors Jie Chen and William Watson III Scientific Computing Group Jefferson Lab 12000 Jefferson Ave. Newport News, VA 23606 Organization Introduction

More information

Muse Server Sizing. 18 June 2012. Document Version 0.0.1.9 Muse 2.7.0.0

Muse Server Sizing. 18 June 2012. Document Version 0.0.1.9 Muse 2.7.0.0 Muse Server Sizing 18 June 2012 Document Version 0.0.1.9 Muse 2.7.0.0 Notice No part of this publication may be reproduced stored in a retrieval system, or transmitted, in any form or by any means, without

More information

Virtual Machines. www.viplavkambli.com

Virtual Machines. www.viplavkambli.com 1 Virtual Machines A virtual machine (VM) is a "completely isolated guest operating system installation within a normal host operating system". Modern virtual machines are implemented with either software

More information

Resource usage monitoring for KVM based virtual machines

Resource usage monitoring for KVM based virtual machines 2012 18th International Conference on Adavanced Computing and Communications (ADCOM) Resource usage monitoring for KVM based virtual machines Ankit Anand, Mohit Dhingra, J. Lakshmi, S. K. Nandy CAD Lab,

More information

A Middleware Strategy to Survive Compute Peak Loads in Cloud

A Middleware Strategy to Survive Compute Peak Loads in Cloud A Middleware Strategy to Survive Compute Peak Loads in Cloud Sasko Ristov Ss. Cyril and Methodius University Faculty of Information Sciences and Computer Engineering Skopje, Macedonia Email: sashko.ristov@finki.ukim.mk

More information

Virtualization in Linux a Key Component for Cloud Computing

Virtualization in Linux a Key Component for Cloud Computing Virtualization in Linux a Key Component for Cloud Computing Harrison Carranza a and Aparicio Carranza a a Computer Engineering Technology New York City College of Technology of The City University of New

More information