Scheduler in Cloud Computing using Open Source Technologies

Darshan Upadhyay
Student of M.E. (I.T.)
S. S. Engineering College, Bhavnagar
Gujarat Technological University
darshanit7@gmail.com

Prof. Chirag Patel
Asst. Prof., Computer Department
L. D. College of Engineering, Ahmedabad
Gujarat Technological University
chirag.email@yahoo.com

Abstract

Cloud Computing utilities are becoming omnipresent and are beginning to serve as the primary source of computing capacity for both enterprise and private computing requirements. Any request that comes to the cloud is served in terms of a Virtual Machine, and the basis on which a Virtual Machine is allocated to a particular host is decided by the scheduler. We establish a private cloud using OpenNebula, an open source cloud toolkit, and carry out experiments on how the scheduler behaves under different requests.

1. Introduction

The flexibility associated with cloud computing has its origin in the combination of virtualization technologies and web services. A definition is given in [1]: "Building on compute and storage virtualization, and leveraging the modern Web, Cloud Computing provides scalable, network-centric, abstracted IT infrastructure, platforms, and applications as on-demand services that are billed by consumption."

Cloud Computing is defined as a pool of virtualized computer resources. Based on this virtualization, the Cloud Computing paradigm allows workloads to be deployed and scaled out quickly through the rapid provisioning of virtual or physical machines. Any request for resources is delivered by the cloud in terms of a Virtual Machine, so the placement of Virtual Machines is one of the most important problems in Cloud Computing. Resource management is a necessity here: multinational companies today own large numbers of resources, and cloud-based resource management lets us manage those resources efficiently, assure their effective use, and provide scalability and elasticity. The large scalability offered by cloud platforms can be harnessed not only for hosting services and applications but also as a raw on-demand computing resource [2]. Ultimately, service providers are under pressure to architect their infrastructure for real-time end-to-end visibility and dynamic resource management with fine-grained control, reducing total cost of ownership while also improving agility. Since cloud computing is a pool of virtualized computer resources, defining an effective VM placement policy [3] is necessary for dynamic resource management.

2. OpenNebula: An Open Source Technology to Build a Cloud

OpenNebula was first established as a research project in 2005 by Ignacio M. Llorente and Ruben S. Montero, releasing the first version of the toolkit and continuing as an open source project in March 2008 [4]. OpenNebula is one of the key technologies of the European Union's RESERVOIR project, its flagship research initiative in virtualization infrastructure and cloud computing. Like Nimbus, OpenNebula is an open source cloud service framework [4]. It allows users to deploy and manage virtual machines on physical resources, and it can turn a user's data centers or clusters into a flexible virtual infrastructure that automatically adapts to changes in the service load.
The main difference between OpenNebula and Nimbus is that Nimbus implements a remote interface, based on EC2 or WSRF, through which users can handle all security-related issues, while OpenNebula does not. Using OpenNebula we can establish public, private and hybrid clouds. OpenNebula also allows working with existing systems and external modules; in particular, it can work with Haizea, an open source resource scheduler. The match-making algorithm described in [7] allocates a VM to the host with the higher RANK expression first. This RANK expression is the key to implementing placement policies such as Packing, Striping and Load-aware. The Packing policy minimizes the number of cluster nodes in use by filling the nodes that already run more VMs first. The Striping policy maximizes the resources available to the VMs on a node by using the nodes running fewer VMs first, while the Load-aware policy does the same by preferring the nodes with more free CPU.
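As an illustration, these policies map onto simple RANK expressions over the scheduler's host monitoring variables. The following sketch shows the expressions suggested by the OpenNebula scheduler documentation [7]; they can be placed in a VM template (variable names and syntax may differ slightly between releases):

    # Packing: fill hosts that already run more VMs, minimizing nodes in use
    RANK = "RUNNING_VMS"

    # Striping: prefer hosts running fewer VMs, spreading load across nodes
    RANK = "- RUNNING_VMS"

    # Load-aware: prefer hosts with more free CPU
    RANK = "FREECPU"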
Fig. 2 shows a comparison of various cloud toolkits on the basis of their VM placement policies.

By default, OpenNebula comes with the match-making scheduler; an external scheduler such as Haizea can also be used with OpenNebula. The toolkit includes features for integration, management, scalability, security and accounting. It also emphasizes standardization, interoperability and portability, providing cloud users and administrators with a choice of several cloud interfaces (EC2 Query, OGF OCCI and vCloud) and hypervisors (Xen, KVM and VMware), and a flexible architecture that can accommodate multiple hardware and software combinations in a data center [3]. As shown in fig. 1, the OpenNebula architecture can be divided into three layers:

1. Tools, developed using the interfaces provided by the OpenNebula core.
2. Core, the main part of the architecture, consisting of components such as virtual machine (VM), virtual network (VN) and host management.
3. Drivers, which provide support for the different virtualization technologies and for tasks such as monitoring.

Fig. 1 OpenNebula architecture

3. Scheduler in Cloud Computing

The basis on which a Virtual Machine is allocated to a particular host is decided by the scheduler using various policies. By default, OpenNebula comes with the match-making scheduler [4][6]. OpenNebula uses only immediate lease provisioning to schedule IaaS cloud resources with the match-making algorithm; a minimal sketch of this loop is given after fig. 2.

Cloud toolkit           VM placement policies                                     Support for hybrid cloud
Amazon EC2              Proprietary                                               No
Nimbus                  Round robin and static greedy                             Yes
Eucalyptus              Static greedy and round robin                             No
OpenNebula              Match-making: initial placement based on rank policy      Yes
OpenNebula and Haizea   Dynamic placement to support advance reservation leases   Yes

Fig. 2 Comparison of various cloud toolkits on the basis of VM placement policy
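To make this behavior concrete, the following minimal Python sketch (an illustration under our own assumptions, not OpenNebula's actual code) mimics the match-making loop: hosts that cannot satisfy a VM's capacity requirements are filtered out, the remaining hosts are ranked, and the VM is dispatched to the highest-ranked host; a VM with no suitable host stays pending.

    # Minimal match-making sketch; the host records and the rank function
    # are illustrative assumptions, not OpenNebula data structures.
    def match_make(vms, hosts, rank=lambda h: h["free_cpu"]):
        """Dispatch each VM to the feasible host with the highest rank."""
        pending = []
        for vm in vms:
            # 1. Filter: keep hosts with enough free CPU and memory.
            feasible = [h for h in hosts
                        if h["free_cpu"] >= vm["cpu"] and h["free_mem"] >= vm["mem"]]
            if not feasible:
                pending.append(vm)   # no host fits: the VM stays pending
                continue
            # 2. Rank: pick the feasible host with the highest rank value.
            best = max(feasible, key=rank)
            # 3. Dispatch: allocate the VM and update the host's free capacity.
            best["free_cpu"] -= vm["cpu"]
            best["free_mem"] -= vm["mem"]
            vm["host"] = best["name"]
        return pending

    hosts = [{"name": "host1", "free_cpu": 2.0, "free_mem": 1024},
             {"name": "host2", "free_cpu": 2.0, "free_mem": 1024}]
    vms = [{"cpu": 0.5, "mem": 256} for _ in range(3)]
    left = match_make(vms, hosts)
    print([vm.get("host") for vm in vms], len(left), "pending")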
4. Experiments

To build a private cloud, we used OpenNebula 3.0, an open source toolkit, together with an open source operating system.

Experiment 1

The goal of this experiment is to visualize the different states of a VM and to analyze the behavior of the scheduler. For this experiment it is necessary to have one host connected to the cloud front-end. You also need an image of the operating system that you want to run in the Virtual Machine. Secondly, you have to prepare the Virtual Machine's template file: a template file consists of a set of attributes that defines a Virtual Machine. There are two ways to define the operating system of the Virtual Machine: 1) write an image template file (analogous to the Virtual Machine's template), consisting of a set of attributes that defines an image, and register the image in OpenNebula with the oneimage command, after which the image can be referenced by id or by name; or 2) point to the image directly in the Virtual Machine's template file and set the necessary attributes.
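For illustration, a minimal image template and VM template might look as follows. This is a sketch: the names, path and sizes are placeholders, and exact attribute names and command forms vary between OpenNebula releases.

    # image.one -- image template (way 1); register it, then reference it by name or id:
    #   $ oneimage create image.one    (older releases use 'oneimage register')
    NAME        = "ubuntu-os"
    PATH        = "/srv/images/ubuntu.img"
    TYPE        = OS
    DESCRIPTION = "Guest operating system image"

    # vm.one -- Virtual Machine template; instantiate with: $ onevm create vm.one
    NAME   = "test-vm"
    CPU    = 0.5
    MEMORY = 256                        # in MB
    DISK   = [ IMAGE = "ubuntu-os" ]    # way 1: use the registered image
    # DISK = [ SOURCE = "/srv/images/ubuntu.img", TARGET = "hda" ]   # way 2: direct disk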
In this experiment two machines were used; their hardware and software details are given in fig. 3. The host and the cloud front-end are connected.

Fig. 3 Hardware/Software setup: cloud front-end (Lenovo Core 2 Duo, 1 GB RAM, IP 192.169.1.11), Host1 (Lenovo Core 2 Duo, 1 GB RAM, IP 192.169.1.10), and Virtual Machines 1-3 (IPs 192.169.1.14, 192.169.1.15 and 192.169.1.16)

C. Experiment method & Result

In this experiment we can observe the different states of the Virtual Machines over time. The creation of all VMs was started at the same moment, but the graph in fig. 5 shows that the time at which each Virtual Machine enters the ACTIVE state differs, while the times at which the VMs reach the RUNNING state are almost the same. The scheduler tries to discover all possible hosts, but since there is only one host it allocates all VMs to host1. When a VM is created it follows the VM life-cycle.

States/VM   VM-1   VM-2   VM-3
ACTIVE      0.00   0.30   1.00
PROLOG      0.00   0.30   1.00
BOOT        2.17   2.18   3.31
RUNNING     3.30   3.33   3.37

Fig. 4 Times at which each VM enters the different states

Fig. 5 Graph of all VMs with respect to states and time

So we can conclude from this experiment that, by default, OpenNebula's scheduler works in a First Come First Served manner.

Experiment 2

The goal of this experiment is to determine how many VMs can be placed on a single host and what the scheduler does once the host no longer has enough resources to run a VM. For this experiment we again have one host connected to the cloud front-end, and we create fifteen Virtual Machines from the front-end. The procedure for creating a Virtual Machine remains the same as in the previous experiment. The hardware and software details are shown in fig. 6.

Fig. 6 Hardware/Software setup

C. Experiment Method & Result

In this experiment we first created two Virtual Machines, which the scheduler allocated to host1. Each time, the scheduler checks whether a RANK has been defined for the VM; in this experiment no RANK was defined in any VM. We then created thirteen more Virtual Machines, of which again two were allocated to host1. The remaining eleven Virtual Machines stay in the pending state due to lack of resources (insufficient memory), as shown in fig. 7.

Stage                           Pending queue   Total VMs   Status
Initial                         0               0           -
Create VM twice                 0               2           VMs allocated to host1
Create VM twice                 0               4           VMs allocated to host1
Create VM eleven times          11              15          Host filtered out due to insufficient resources; all eleven VMs remain in the pending queue
Delete first two running VMs    9               13          Host again has capacity for two VMs; the first two VMs from the pending queue are allocated to it

Fig. 7 Scheduler behavior (reconstructed from the auto-generated scheduler log)

Next, we deleted the first two running Virtual Machines, so host1 again had the capacity to run two more Virtual Machines; the first two VMs from the pending queue were therefore allocated to host1, while the rest remained pending. Fig. 8 shows the graph of host1's CPU, host1's memory and the total number of VMs; this graph is generated by Sunstone, which provides the GUI to OpenNebula's cloud.

Fig. 8 Sunstone's graph of Host1's CPU, memory and total VMs

So from this experiment we can conclude that OpenNebula's scheduler filters out a host that does not have enough capacity to run more VMs, and that a Virtual Machine remains in the pending queue as long as no host is available.
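The procedure above can be reproduced with OpenNebula's stock command line tools. The following is a hedged sketch; the template name is a placeholder, and state names and flags may differ between releases.

    $ onevm create vm.one     # submitted fifteen times in this experiment
    $ onevm list              # STAT column: pend while waiting, runn once dispatched
    $ onehost list            # shows the per-host CPU/memory capacity the scheduler sees
    $ onevm delete <id>       # run once for each of the first two running VMs
    $ onevm list              # after the next scheduling cycle, two pending VMs move to host1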
Experiment 3

The goal of this experiment is to analyze how the scheduler works when there is more than one host. For this experiment we connected three hosts to the cloud front-end. The procedure for creating a VM is the same as in Experiment 1. The hardware/software setup is shown in fig. 9.

Fig. 9 Hardware/Software setup

C. Experiment Method & Result

In this experiment we created three VMs, with three hosts connected to the cloud front-end, to see how the scheduler allocates hosts to VMs. Will the scheduler allocate all VMs to one host only? Will it allocate two VMs to one host and one VM to another, leaving one host empty? Or will it allocate one VM to each host?

Thu Mar 15 13:37:41 2012 [HOST][D]: Discovered Hosts (enabled): 13 15 17
Thu Mar 15 13:37:41 2012 [VM][D]: Pending virtual machines : 104 105 106
Thu Mar 15 13:37:41 2012 [RANK][W]: No rank defined for VM
Thu Mar 15 13:37:41 2012 [RANK][W]: No rank defined for VM
Thu Mar 15 13:37:41 2012 [RANK][W]: No rank defined for VM
Thu Mar 15 13:37:41 2012 [SCHED][I]: Select hosts
    PRI  HID
    -------------------
    Virtual Machine: 104
    0    17
    0    15
    0    13
    Virtual Machine: 105
    0    17
    0    15
    0    13
    Virtual Machine: 106
    0    17
    0    15
    0    13
Thu Mar 15 13:37:41 2012 [VM][I]: Dispatching virtual machine 104 to HID: 17
Thu Mar 15 13:37:41 2012 [VM][I]: Dispatching virtual machine 105 to HID: 15
Thu Mar 15 13:37:41 2012 [VM][I]: Dispatching virtual machine 106 to HID: 13

Fig. 10 Scheduler log

As shown in fig. 10, the scheduler allocated one VM to each host. So we can say that the scheduler distributes VMs equally among hosts when multiple hosts are available.

Experiment 4

The goal of this experiment is to implement the rank policy and find out how the scheduler allocates hosts to VMs on the basis of rank. For this experiment we connected three hosts to the cloud front-end and created six VMs, one after another rather than all at once. The procedure for creating a VM is the same as in Experiment 1. The hardware/software setup is shown in fig. 11.

Fig. 11 Hardware/Software setup

C. Experiment method

In this experiment we created the VMs one by one, and in every VM template we specified FREEMEMORY as the RANK expression. The scheduler sorts all hosts according to rank and sets the priority of each host for the given VM; in our runs it allocated each VM first to the host with the least free memory. After allocating one VM to a host, the scheduler refreshes its data about all hosts before allocating the next VM. A hedged sketch of a template with such a RANK expression is shown below.
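As an illustration (a sketch under our own assumptions, not the exact template used in the experiment), the relevant part of the VM template could look as follows. Note that the match-making scheduler dispatches to the host with the highest rank value, so packing VMs onto the host with the least free memory, as observed in figs. 12 and 14, corresponds to a negated expression:

    NAME   = "ranked-vm"
    CPU    = 0.5
    MEMORY = 256
    DISK   = [ IMAGE = "ubuntu-os" ]

    # Rank hosts by free memory; the minus sign gives hosts with LESS free
    # memory a HIGHER rank, matching the packing behavior reported in fig. 14.
    RANK   = "- FREEMEMORY"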
Fig. 12 shows the initial free memory of each of the three hosts.

Fig. 12 The three hosts and their free memory

From this figure we can say that the first new VM will be allocated to darshh2. Indeed, when we created one VM the scheduler allocated it to darshh2; the scheduler then polled all host information again, found the host with the least free memory, and assigned the next VM to that host. Fig. 13 shows the states of all VMs with respect to time, fig. 14 shows the free memory of the hosts with respect to time, and figs. 15 and 16 plot how the free memory of a host decreases when a VM is allocated to it.

States/VM   108    110    111    112    114    115
Initial     0.00   6.30   12.30  15.30  18.30  21.30
ACTIVE      0.00   6.30   12.30  15.30  18.30  21.30
PROLOG      0.00   6.30   12.30  15.30  18.30  21.30
BOOT        1.20   7.46   12.57  15.59  19.02  22.20
RUNNING     1.26   7.56   13.45  16.38  19.42  22.40

Fig. 13 VM states with respect to time

Minute/Host   darshhost   darshh1   darshh2
0             296         149       113
5             296         149       54
10            296         149       1
15            296         89        1
20            236         30        1
25            177         30        1

Fig. 14 Host free memory (in MB) with respect to time

Fig. 15 States vs. time

Fig. 16 Free memory vs. time

So from this experiment we can conclude that the scheduler works correctly with the rank policy: according to the rank policy, it sorts all hosts and sets the host priorities for each particular VM.

5. Conclusion & Future Work

From the above experiments we can conclude that the scheduler is the central component of a cloud: it works on various policies and on that basis allocates each VM to a particular host. In the match-making scheduler there is no rank for VMs themselves, and thus no way to give priority to a particular VM. A future direction is therefore to improve the match-making scheduler and define a new rank through which priority can be given to VMs.

6. References

[1] P. T. Endo, G. E. Gonçalves, J. Kelner, D. Sadok, "A Survey on Open-source Cloud Computing Solutions", VIII Workshop on Cloud Computing.

[2] DSA Research, "OpenNebula: The Open-Source Toolkit for Building Cloud Infrastructure", July 2009.

[3] B. Sotomayor, R. S. Montero, I. M. Llorente, I. Foster, "Virtual Infrastructure Management in Private and Hybrid Clouds", IEEE Internet Computing, vol. 13, no. 5, pp. 14-22, Sep./Oct. 2009.

[4] V. Shrivastava, D. S. Bhilare, "Algorithms to Improve Resource Utilization and Request Acceptance Rate in IaaS Cloud Scheduling", International Journal of Advanced Networking and Applications, vol. 3, issue 5, pp. 1367-1374, 2012.

[5] OpenNebula Pro, "OpenNebulaPro White Paper", Rev. 20110126, https://support.opennebula.pro/attachments/token/coiuzlpxct7oyvq/?name=opennebulapro_White_Paper_Rev20110126.pdf.

[6] OpenNebula Virtual Machine Guide, http://opennebula.org/documentation:rel2.2:vm_guide.

[7] OpenNebula Scheduler Guide, http://opennebula.org/documentation:archives:rel2.0:schg.