PLANNING FOR LOAD BALANCING IN VMs: ADAPTING TO LOAD CHANGES DYNAMICALLY USING A CENTRAL JOB DISPATCHER
1 RAJESH KUMAR PATHAK, 2 DR. SETU KUMAR CHATURVEDI
1,2 Department of Computer Science, TIT Bhopal
Email ID: 1 [email protected], 2 [email protected]

ABSTRACT

With the advent of the World Wide Web, the life of every person has changed drastically; no one can imagine a computer without an Internet connection. This dependency is likely to grow in the coming years with the adoption of technologies such as virtualization and cloud computing. Cloud computing and virtualization are among the foremost technologies to have attracted researchers recently, and they directly benefit end users and data-center service providers. With the help of virtualization, we can run multiple instances of different operating systems simultaneously. Since multiple OSes run on a single physical server, and multiple servers run in the data center, all connected via high-speed network links, at some instant one server may become overloaded while another remains underutilized. This poses the challenge of distributing the load so the system continues to perform well, which can be handled using a load-balancing mechanism over the virtual machines. This work seeks a mechanism to balance load based on the computation-time parameter of the virtual machines. We have implemented the existing approaches to load balancing of jobs, balancing server load by migrating guests to lightly loaded servers. The loads of the nodes in a distributed system change from time to time; thus it is important for load-balancing policies to adapt to changing load quickly. Our goal is to make the best use of global state information in order to achieve the best performance without incurring too much overhead compared to simple distributed policies. We propose a load-balancing policy with a central job dispatcher, called the Central Load Balancing Policy for Virtual Machines (CLBVM), which makes load-balancing decisions based on global state information. This policy has centralized information and location rules; the transfer rule is partially distributed and partially centralized.
INTRODUCTION

A distributed system is a collection of autonomous systems (nodes) located at possibly different sites and connected by a communication network, through which the resources of the system can be shared by users at different locations. With virtualization technology, we can run multiple virtual machines, or guests, on a single physical host, and with distributed computing all servers hosting such guests can be connected. The purpose of such a system is not to provide centralized control over all units of the system; rather, it is to provide mechanisms that allow the otherwise autonomous processors to cooperate in a consistent and stable manner. One of the most important mechanisms a distributed operating system can provide is the ability to move a process from one processor to another; this mechanism is called process migration.

Migration of Virtual Machines

In terms of guest OSes, the most important mechanism virtualization brings to distributed systems is the ability to move a guest from one system to another; this is called migration of operating systems. Migrating operating-system instances across distinct physical hosts is a useful tool for administrators of data centers and clusters. It allows a clean separation between hardware and software, and facilitates fault management, load balancing, and low-level system maintenance. By carrying out the majority of the migration while the operating system continues to run, live migration achieves impressive performance with minimal service downtime. Migrating an entire operating system and all of its applications as one unit avoids many of the difficulties faced by process-level migration approaches; moreover, the original host may be decommissioned once migration has completed, which is particularly valuable when migration is performed to allow maintenance of the original host. Detailed approaches to live migration of guests in Xen can be found in C. Clark et al. [1]. Migration can be done by pausing the guest, transferring it to another system, and resuming it there; it can also be done on the fly, i.e., by live migration of operating systems over a LAN in a distributed-systems environment.
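As a concrete illustration, live migration in Xen can be triggered from the source host's command line with the xm toolstack. The following is a minimal Python sketch; the guest name webvm and destination host server2 are hypothetical placeholders, not values from this paper:

```python
import subprocess

def live_migrate(guest: str, dest_host: str) -> None:
    """Live-migrate a Xen guest to another physical server.

    Assumes Xen's xm toolstack on the source host and that the
    destination host accepts relocation requests. Guest and host
    names used here are illustrative.
    """
    subprocess.run(["xm", "migrate", "--live", guest, dest_host], check=True)

if __name__ == "__main__":
    live_migrate("webvm", "server2")  # hypothetical guest and destination
```

With the --live flag, the guest keeps running while its memory is transferred, so service downtime is limited to the final switchover.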
Performance enhancement is one of the most important issues in virtualization-based distributed systems. Obvious but expensive ways of achieving this goal are to increase the capacity of the participating nodes (the host servers) and to add more nodes (guests) to the system. This may be required when all of the nodes in the system are overloaded; in many situations, however, poor performance is due to uneven load distribution throughout the system, and the performance can often be improved to an acceptable level simply by redistributing the load among the nodes. Load redistribution is therefore a cost-effective way to improve the reliability and performance of the overall system. This kind of problem is categorized as load balancing or load sharing [5].

LOAD BALANCING

For parallel applications, load balancing attempts to distribute the computational load across multiple processors or machines as evenly as possible, with the objective of improving performance. A load-balancing scheme consists of three phases: information collection, decision making, and data migration. Jobs carried out in distributed environments may be compute-intensive and long-running, and therefore require load balancing. Many issues related to job migration have been studied extensively in distributed environments, and many load-balancing algorithms have been proposed that solve most of them. Work in this area includes policies in which each node periodically broadcasts its status (lightly loaded or heavily loaded) to all other nodes, by Kunz [4]. L. M. Ni [7] used three values (Heavy, Normal, and Light) to represent the load status of a node. Zhou [8] and Eager [2] demonstrated that algorithms using a threshold policy generally achieve better performance. Lin and Raghavendra [5] proposed a load-balancing mechanism called the Dynamic Load Balancing Policy with a Central Job Dispatcher (LBC).

A. Need for Load Balancing in Cloud Computing

Load balancing is the process by which inbound Internet protocol traffic is distributed across multiple servers. It enhances server performance, leads to optimal utilization, and ensures that no single server is overwhelmed. The term load balancing is used when the goal is to equalize certain performance measures, such as the percentage of server idle time or marginal job response times. If, on the other hand, the objective is to improve some performance measure, such as average job response time, by redistributing the workload, it is called load sharing. However, a load-sharing objective can sometimes be transformed into an equivalent load-balancing objective. A server farm or data center can contain multiple servers, each hosting multiple guests. Each guest's load may differ, leading to a situation where some servers become heavily, moderately, or lightly loaded in terms of computational resources, memory, and I/O devices. For simplicity and without loss of generality, we consider only computational resources, i.e., CPU time. A problematic case arises when one server struggles to allocate sufficient CPU slices because of heavy demand by the VMs running in parallel on it, while another server is free or idle.
In this situation, we can redistribute the load by migrating one guest from the overloaded server to the idle one. This is precisely the situation in which a load-balancing policy is required in a cloud computing scenario, improving measures such as the percentage of server idle time and marginal job response times.

Definition: According to H. C. Lin [5], a dynamic load-balancing policy consists of three components: an information rule, a transfer rule, and a location rule. The information rule describes how to collect and where to store the information used in making decisions. The transfer rule determines when to initiate an attempt to transfer a job and whether or not to transfer it. The location rule chooses the nodes to or from which jobs will be transferred. Each of the three rules can be performed either at a central location or at local sites in a distributed manner. Using global state information clearly gives a better chance of making correct decisions and providing better performance than using partial information; however, it is expensive, in terms of overhead, for a fully distributed policy to keep global state information at every node, so distributed load-balancing policies usually make decisions from partial information.

B. Issues Related to Load Balancing of Jobs in a Distributed Environment

To achieve short response times and high system throughput, the following factors must be considered (a sketch of the resulting policy structure follows this list):

1. The load-balancing process should generate little traffic and add low overhead on computational and network resources.
2. It should keep up-to-date load information about the systems participating in distributed computing.
3. It should balance the system fairly, i.e., it must pair the most heavily loaded and most lightly loaded systems first.
4. Load balancing should take little time, or it may affect overall performance.
5. The load-balancing algorithm may act instantaneously or on a periodic basis.
6. It can run on a dedicated system, or it can be a decentralized effort in which every node participates fairly in decision making.
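To make Lin's three-component decomposition concrete, the following is a minimal Python skeleton of a dynamic load-balancing policy. The class and method names are illustrative, not taken from the paper:

```python
from abc import ABC, abstractmethod

class LoadBalancingPolicy(ABC):
    """Lin's decomposition [5]: information, transfer, and location rules.

    Each rule may be implemented centrally or in a distributed manner;
    a concrete policy fills in all three.
    """

    @abstractmethod
    def information_rule(self) -> dict:
        """Collect load information and decide where it is stored."""

    @abstractmethod
    def transfer_rule(self, node: str, load_info: dict) -> bool:
        """Decide whether a job on `node` should be transferred at all."""

    @abstractmethod
    def location_rule(self, load_info: dict) -> tuple[str, str]:
        """Choose the (source, destination) nodes for a transfer."""
```

In CLBVM, described below, the information and location rules are centralized at a master server, while the transfer rule is split between the participating servers and the master.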
PROPOSED POLICY AND APPROACH

A. Motivation and Related Work

A variety of load-balancing algorithms have been studied in the past. Kunz [4] used a policy in which each node periodically broadcasts its status (lightly loaded or heavily loaded) to all participating nodes. L. M. Ni [7] used three values (Heavy, Normal, and Light) to represent the load status of a node. Our work is primarily motivated by the earlier work of H. C. Lin and Raghavendra [5] and by related work on balancing load in distributed environments.

B. Proposed Approach

In our system, we treat each guest OS as a job and each participating server as a node; see Figure 1. We have implemented the existing approaches of balancing job load by migrating guests to a lightly loaded server. With advances in communication technology, the speed of communication networks in distributed systems keeps increasing, and high-speed communication is easily achieved on local area networks.

Figure 1: Multiple server nodes, each running a host OS with several guest OSes, interconnected with high-speed communication links.

The loads of the nodes in a distributed system change from time to time; thus it is important for load-balancing policies to adapt to changing load quickly. Our goal is to make the best use of global state information in order to achieve the best performance without incurring too much overhead compared to simple distributed policies. We propose a load-balancing policy with a central job dispatcher, called the Central Load Balancing Policy for Virtual Machines (CLBVM), which makes load-balancing decisions based on global state information. This policy has centralized information and location rules; the transfer rule is partially distributed and partially centralized. We will show that CLBVM can adapt to changing load by demonstrating the insensitivity of the average job response time to heterogeneous load. We also study the scalability of the central job dispatcher in handling a large number of nodes. We expect that a dedicated virtual machine/host OS, known as the Central Server (possibly the host OS on any one of the servers), can easily handle more than a thousand nodes. The drawback of any load-balancing policy with a central load-balancing system is reliability: failure of this virtual machine makes the load-balancing policy inoperable. One approach to this problem is to design a highly reliable load-balancing system; another is to have a secondary (backup) virtual machine acting as a fault-tolerant standby that collects all the needed information passively.

C. Central Load Balancing Policy for Virtual Machines (CLBVM)

The policy rests on the following assumptions and steps (a sketch of the central decision loop follows this list):

1. Network load is constant and does not change frequently.
2. Each virtual machine has a distinct identity and IP address.
3. A load-information collector daemon runs continuously on each participating server, collecting the aggregate CPU load and the system utilization of its guests.
4. Based on the collected data, each server marks itself as Heavily Loaded (H), Moderately Loaded (M), or Lightly Loaded (L).
5. Messages are exchanged with the Master (Central) Server, which periodically takes decisions on load balancing and on migrating virtual machines from one server to another on the fly.
6. Heavily loaded systems are balanced first against lightly loaded systems to achieve fairness in balancing.
7. If all servers are moderately loaded, no migration is carried out.
8. Frequent state changes are absorbed by the period at which the master server runs the load-balancing algorithm, so that unnecessary migrations are avoided.
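The following is a minimal Python sketch of the master server's decision loop under the rules above. The thresholds, the report format, and the helper names are illustrative stand-ins, not the paper's actual implementation:

```python
import subprocess
import time

HEAVY, MODERATE, LIGHT = "H", "M", "L"
PERIOD_SECONDS = 10 * 60  # the experiments fixed the period at 10 minutes

def classify(cpu_load: float) -> str:
    """Map an aggregate CPU load report to H/M/L (thresholds illustrative)."""
    if cpu_load > 0.80:
        return HEAVY
    if cpu_load < 0.30:
        return LIGHT
    return MODERATE

def migrate_guest(guest: str, dest: str) -> None:
    # Live-migrate the selected guest using Xen's xm toolstack (rule 5).
    subprocess.run(["xm", "migrate", "--live", guest, dest], check=True)

def balance(reports: dict[str, dict]) -> None:
    """One CLBVM round: pair heavily and lightly loaded servers (rule 6).

    `reports` maps server name -> {"load": float, "idlest_guest": str},
    as sent by the collector daemons on the participating servers.
    """
    heavy = [s for s, r in reports.items() if classify(r["load"]) == HEAVY]
    light = [s for s, r in reports.items() if classify(r["load"]) == LIGHT]
    # If all servers are moderately loaded, both lists are empty and
    # no migration is carried out (rule 7).
    for src, dst in zip(sorted(heavy), sorted(light)):
        migrate_guest(reports[src]["idlest_guest"], dst)

def run_master(collect_reports) -> None:
    """Periodic decision loop; the period damps state flapping (rule 8).

    `collect_reports` is a callable returning the latest daemon messages;
    in the real system these arrive over the network.
    """
    while True:
        balance(collect_reports())
        time.sleep(PERIOD_SECONDS)
```

Selecting the least active guest for migration keeps the migration cost low, in line with the design discussed in the conclusion.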
EXPERIMENTAL SETUP

A. Hardware Configuration

System 1 (Central Server):
CPU: Intel(R) Pentium(R) 4, 2.00 GHz (~1994 MHz), 1 CPU
RAM: 1 GB (1015 MB available)
HDD: 40 GB
Host OS: Fedora 8, Xen-enabled Linux kernel

System 2 (Participating Server):
CPU: Intel(R) Pentium(R) 4, 2.00 GHz (~1994 MHz), 1 CPU
RAM: 1 GB (1015 MB available)
HDD: 40 GB
Host OS: CentOS 5.2, Xen-enabled Linux kernel (el5xen)

System 3 (Participating Server):
CPU: Intel(R) Pentium(R) 4, 2.00 GHz (~1994 MHz), 1 CPU
RAM: 1 GB (1015 MB available)
HDD: 40 GB
Host OS: CentOS 5.2, Xen-enabled Linux kernel (el5xen)

System 4 (Participating Server):
CPU: Intel(R) Pentium(R) 4, 1.8 GHz (~1794 MHz), 1 CPU
RAM: 1 GB (1015 MB available)
HDD: 40 GB
Host OS: CentOS 5.2, Xen-enabled Linux kernel (el5xen)

B. Software Tools Used

Web server: Apache
HTTP load generator: httperf
Performance measurement: XenMon
Figure 2: Experimental Setup

Apache web server: Apache supports multiple pthreads, serves trivial CGI programs, and deploys HTML pages and web services.

httperf: An open-source web-server load generator. It provides a flexible facility for generating various HTTP workloads and for measuring web-server performance [6].

XenMon: XenMon reports an execution-count metric that reflects how often a domain has been scheduled on a CPU during the measurement period [3].

C. Experiments Performed

Experiment 1: To measure the performance of an isolated web server and establish a benchmark, we measured web-server performance while varying the offered load across different request rates: 512, 1024, 2048, and 4096 connections/sec.

Figure 3: Web Server Performance on Core Linux Kernel
Interpretation: We found that at rates of 512 and 1024 conn/sec the web server responded almost 100% of the time without any failure (refer to Figure 3). But as we increased the rate to 2048 and 4096 conn/sec, we observed many timed-out connections and file-descriptor unavailability. After providing a sufficient delay for closing the previously opened file descriptors, throughput at 2048 conn/sec increased from 25% to 45%. Similarly, at 4096 conn/sec throughput improved slightly, though it remained far below that of the slower rates of 512 and 1024. Throughput is low at the higher rates because the web server cannot handle so many requests, owing to its limits on the number of file descriptors and on the number of processes and threads it can spawn. From these observations, at the lower rates of 512 and 1024 the web server hosted on the core Linux kernel handled almost all connections with very few time-outs and file-descriptor failures, but as the rates increased, throughput dropped drastically. The dips in the graph are due to the unavailability of file descriptors to handle the requests: only a limited number of file descriptors are available to a single process, and httperf assumes a fixed maximum. When a socket closes, it enters the TIME_WAIT state for sixty seconds, so we must avoid reaching the port-number limitation. We therefore run each benchmark at successive connection rates (512, 1024, 2048, 4096, ...), and wait for all sockets to leave the TIME_WAIT state before continuing with the next benchmark run. We treated these results as the benchmark for the further experiments analysing the behavior of the Xen hypervisor, OpenVZ kernels, VMs hosted on Xen, OpenVZ containers, and the overall performance of the proposed load-balancing policy. From these results we can compare and conclude which virtualization technique yields better throughput in which scenario.
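As an illustration of the measurement procedure, the following Python sketch drives httperf at the successive rates and waits out the TIME_WAIT interval between runs. The server address, URI, and total connection counts are hypothetical placeholders, not values from the paper:

```python
import subprocess
import time

RATES = [512, 1024, 2048, 4096]  # connections/sec, as in Experiment 1
TIME_WAIT_SECONDS = 60           # TCP TIME_WAIT duration

def run_benchmark(server: str, uri: str) -> None:
    for rate in RATES:
        # httperf opens `rate` new connections per second; the total
        # number of connections here is an illustrative choice.
        subprocess.run([
            "httperf", "--server", server, "--uri", uri,
            "--rate", str(rate), "--num-conns", str(rate * 30),
            "--timeout", "5",
        ], check=True)
        # Let all sockets leave TIME_WAIT before the next run, so the
        # port-number limitation is not reached.
        time.sleep(TIME_WAIT_SECONDS)

if __name__ == "__main__":
    run_benchmark("192.168.1.10", "/index.html")  # hypothetical target
```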
Experiment 2: To measure the performance of a web server hosted on Dom0 under the Xen hypervisor.

Figure 4: Xen-Linux Web Server Performance

Interpretation: We found similar behavior for the Xen-enabled Dom0 VM. It also showed almost 100% throughput at rates of 512 and 1024 conn/sec (refer to Figure 4). As the rates were doubled and beyond, we observed many timed-out connections and file-descriptor unavailability, which resulted in decreased throughput: approximately 40% and 20% at 2048 and 4096 connections/sec, respectively. We believe the slight decrease in throughput is due to the additional layer of the Xen hypervisor: Xen uses synchronous calls from a domain to Xen via hypercalls, while notifications are delivered from Xen to domains via an asynchronous event mechanism. It remains to be seen whether this request-processing latency is due to: accepting incoming connections; writing the response (the non-blocking write() system call); managing the cache; or some unforeseen problem.

Experiment 3: To measure the performance of a web server in a VM hosted on Xen.
Figure 5: VM1 Web Server Performance hosted on Xen

Interpretation: The web server running on VM1/Dom1 hosted on Xen showed 100% throughput at the slower rate of 512 conn/sec (refer to Figure 5). As the rates increased, throughput dropped and many connections timed out, though at the highest rate of 4096 conn/sec it showed throughput similar to Dom0 running on the Xen hypervisor. This is mainly because of the bridge connections used to multiplex the Ethernet cards and maintain network connections with the guests; Dom0, the host OS (Linux), also takes some processing time, which contributed to this behavior. Communication from Xen to a domain is provided through an asynchronous event mechanism, which replaces the usual delivery mechanisms for device interrupts. Xen uses an I/O ring, implemented as a circular queue of descriptors allocated by a domain and accessible from within Xen. The descriptors do not directly contain I/O data; instead, I/O data buffers are allocated by the guest OS and indirectly referenced by the descriptors. Access to each ring is based on two pairs of producer-consumer pointers: a domain places requests on the ring, advancing a request-producer pointer, and Xen removes these requests for handling, advancing an associated request-consumer pointer. Responses are placed back in the same fashion. The VM associates a unique identifier with each request, which is reproduced in the associated response; this allows Xen to reorder I/O operations unambiguously for scheduling or priority reasons. We believe this was the main reason behind the throughput degradation on the VMs running on Xen.
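To make the ring mechanism concrete, the following is a minimal Python sketch of a descriptor ring with request producer/consumer indices, in the spirit of Xen's I/O rings. It illustrates the idea only and is not Xen's actual data structure:

```python
class IORing:
    """A circular queue of descriptors shared between a guest and Xen.

    Descriptors carry an identifier and a reference to an out-of-band
    buffer; they never hold the I/O data itself.
    """

    def __init__(self, size: int = 8):
        self.size = size
        self.slots = [None] * size
        self.req_prod = 0   # advanced by the guest when placing requests
        self.req_cons = 0   # advanced by Xen when removing requests

    def place_request(self, req_id: int, buffer_ref: int) -> None:
        assert self.req_prod - self.req_cons < self.size, "ring full"
        self.slots[self.req_prod % self.size] = (req_id, buffer_ref)
        self.req_prod += 1  # guest advances the request-producer pointer

    def remove_request(self):
        assert self.req_cons < self.req_prod, "ring empty"
        desc = self.slots[self.req_cons % self.size]
        self.req_cons += 1  # Xen advances the request-consumer pointer
        return desc  # (req_id, buffer_ref); the id is echoed in the response

# Usage: the guest enqueues, Xen dequeues; responses are matched to
# requests by the echoed identifier, so handling order is flexible.
ring = IORing()
ring.place_request(req_id=1, buffer_ref=0xBEEF)
print(ring.remove_request())
```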
Experiment 4: To measure the performance of a web server in a VM hosted on Xen, with a compute-intensive task running in another VM.

Figure 6: VM1 Web Server Performance, with VM2 running a compute-intensive task, hosted on Xen

Interpretation: As expected, performance was slightly degraded at all connection rates, because another VM was running a compute-intensive task consuming more CPU time (refer to Figure 6). At 512 connections/sec, throughput varied from 80% to 95%. Performance at the higher rates was quite degraded and would be unable to meet the needs of real traffic on a virtualized web server. After investigating the scheduling mechanism, we found the cause in the workings of the Credit scheduler. The other VM, running the compute-intensive job, consumed more credits because the scheduler supports WC (work-conserving) mode, in which the shares are merely guarantees and the CPU is idle if and only if there is no runnable client. This means that with two clients of equal weight, when one client blocks, the other can consume the entire CPU. Since our script introduced sufficient delay to close all the opened file descriptors, VM2 would consume the entire CPU during those intervals; when VM1 then needed more CPU time, it suffered and received only its guaranteed share, leading to relatively poor performance. This behavior of VM1 led us to address the issue with a load-balancing mechanism running across multiple virtualization-enabled servers; a detailed description and results are provided in Experiment 5.
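The effect of work-conserving mode can be illustrated with a toy calculation. The sketch below is an illustration of the share semantics, not the Credit scheduler's real algorithm: it divides the CPU among runnable clients in proportion to their weights, letting a runnable client absorb the share of a blocked one:

```python
def allocate_cpu(weights: dict[str, int], runnable: set[str]) -> dict[str, float]:
    """Toy work-conserving allocation: shares are guarantees only.

    The whole CPU is divided among *runnable* clients in proportion to
    their weights; blocked clients' shares are redistributed, so the CPU
    is idle only when no client is runnable.
    """
    active = {c: w for c, w in weights.items() if c in runnable}
    total = sum(active.values())
    return {c: (active.get(c, 0) / total if total else 0.0) for c in weights}

weights = {"VM1": 256, "VM2": 256}  # equal weights, as in Experiment 4

# While VM1 waits for sockets to leave TIME_WAIT, VM2 takes the whole CPU:
print(allocate_cpu(weights, runnable={"VM2"}))          # {'VM1': 0.0, 'VM2': 1.0}

# Once VM1 becomes busy again, it gets back only its guaranteed share:
print(allocate_cpu(weights, runnable={"VM1", "VM2"}))   # {'VM1': 0.5, 'VM2': 0.5}
```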
Experiment 5: To measure the performance of a web server in a guest OS hosted on Xen while adopting the CLBVM policy.

Interpretation: After adopting our CLBVM policy, the results were quite interesting; the system behaved the way we expected for this kind of setup. It showed slightly better throughput, though we had expected even better results (refer to Figures 7-10). Compared with Experiment 4, where performance was quite degraded, the results were quite acceptable, and we hope throughput would improve further if the same experiment were executed against real-world traffic and activity.

Figure 7: VM1 web-server performance (512 conn/sec), running on Xen adopting the CLBVM policy
Figure 8: VM1 web-server performance (1024 conn/sec), running on Xen adopting the CLBVM policy
Figure 9: VM1 web-server performance (2048 conn/sec), running on Xen adopting the CLBVM policy
Figure 10: VM1 web-server performance (4096 conn/sec), running on Xen adopting the CLBVM policy

Our CLBVM implementation has two parts. Client code runs as a daemon service on the participating servers and informs the central server whenever a participating server changes its state among High, Moderate, and Low.
The central server runs the load-balancing algorithm every N minutes (fixed at 10 minutes for the experiments) and instructs the highly loaded server to transfer a lightly loaded VM to a lightly loaded server. During the course of our experiments it migrated the VM under test twice; this can be seen at the rate of 1024 conn/sec, where the run took approximately 128 seconds to finish. Despite the extra CPU burden of executing client code on each server, we achieved better throughput: the client code consumed barely 1-2% of CPU time, a cost that can be ignored given the improved results.

RESULTS AND CONCLUSION

From the above experiments we observed that OpenVZ performed well in terms of web-server throughput and has less overhead than the Xen hypervisor. The reasons lie in the design of the two virtualization techniques: OpenVZ uses the CFQ I/O scheduler and a fair-share CPU scheduler, whereas Xen uses the Credit scheduler and its own I/O model, handling requests through the ring mechanism backed by queuing. The main drawback of OpenVZ, or container-based virtualization generally, is that it supports only the Linux kernel and currently has no support for other operating systems as guests. We therefore chose Xen, which supports almost all operating systems through para-virtualization and full virtualization. We tried to make the system completely distributed, so that if a VM's performance is affected by another VM, it can move to a lightly loaded server on the fly under our load-balancing policy. This adds an extra cost of 1-2% of CPU time on each participating server; in addition, the central server consumes around 1-2% of CPU time, as observed using the top command on Linux terminals. The migration of an operating system adds a further cost, which we tried to minimize by selecting the inactive or least active virtual machine. The only weakness of this design is the reliance on a single central server; a better approach would resolve that dependency, for example by deploying a fault-tolerant backup system that can be brought into place if the master server fails. We have not taken this into account in the present implementation. Having achieved these results, we are confident that the approach can scale to many more nodes in real situations, realizing true cloud computing using virtualization over a distributed environment. We also found that if only a Linux environment needs to be supported, one could opt for
an OpenVZ-style container-based virtualization technique, which adds little or negligible burden to the Linux kernel. OpenVZ is a patch over the Linux kernel, so it is essentially a Linux kernel with small modifications to support container-based virtualization. Our policy can also be used in an OpenVZ environment, since OpenVZ likewise supports live migration and checkpoint-based storage and retrieval of a running operating system.

REFERENCES

[1] Christopher Clark, Keir Fraser, Steven Hand, Jacob Gorm Hansen, Eric Jul, Christian Limpach, Ian Pratt, and Andrew Warfield. Live migration of virtual machines. In NSDI '05: Proceedings of the 2nd Symposium on Networked Systems Design & Implementation, Berkeley, CA, USA, 2005. USENIX Association.
[2] Derek L. Eager, Edward D. Lazowska, and John Zahorjan. Adaptive load sharing in homogeneous distributed systems. IEEE Transactions on Software Engineering, 12(5), 1986.
[3] D. Gupta, R. Gardner, and L. Cherkasova. XenMon: QoS monitoring and performance profiling tool. Technical Report, HP Labs.
[4] T. Kunz. The influence of different workload descriptions on a heuristic load balancing scheme. IEEE Transactions on Software Engineering, 17(7), July 1991.
[5] Hwa-Chun Lin and C. S. Raghavendra. A dynamic load-balancing policy with a central job dispatcher (LBC). IEEE Transactions on Software Engineering, 18(2), 1992.
[6] D. Mosberger and T. Jin. httperf: A tool for measuring web server performance, 1998.
[7] L. M. Ni, Chong-Wei Xu, and T. B. Gendreau. A distributed drafting algorithm for load balancing. IEEE Transactions on Software Engineering, SE-11(10), October 1985.
[8] S. Zhou. A trace-driven simulation study of dynamic load balancing. IEEE Transactions on Software Engineering, 14(9), 1988.
[9] Rajesh Pathak and Asif Khan. Enhancement in performance of CPU scheduling using improved scheduling in Xen. Ekansh, Jan-Jun 2012.