International Journal of Computer Science Engineering and Information Technology Research (IJCSEITR) ISSN 2249-6831 Vol. 3, Issue 3, Aug 2013, 101-108 TJPRC Pvt. Ltd.

RESOURCE MONITORING AND UTILIZATION IN SaaS

PRAVEEN RESHMALAL 1 & S. H. PATIL 2
1 Research Scholar, Bharati Vidyapeeth Deemed University College of Engineering, Pune, Maharashtra, India
2 Guide, Bharati Vidyapeeth Deemed University College of Engineering, Pune, Maharashtra, India

ABSTRACT

Load balancing is a method of distributing workload across one or more servers, network interfaces, hard drives, or other computing resources. Load balancing in the cloud differs from classical load-balancing architecture and implementation in that it uses commodity servers to perform the balancing. We design and implement a simple model that decreases the migration time of virtual machines by using shared storage.

KEYWORDS: Software as a Service (SaaS), Infrastructure as a Service (IaaS), Platform as a Service (PaaS)

INTRODUCTION

Cloud computing [1] is Internet-based development and use of computer technology. The concept incorporates infrastructure as a service (IaaS), platform as a service (PaaS) and software as a service (SaaS), as well as Web 2.0 and other recent technology trends. One of the most important ideas behind cloud computing is scalability, and the key technology that makes it possible is virtualization. Virtualization allows better use of a server by aggregating multiple operating systems and applications on a single shared computer. Virtualization also permits online migration, so that if a server becomes overloaded, an instance of an operating system (and its applications) can be migrated to another server.

SERVICE MODELS

Cloud Software as a Service (SaaS)

The capability provided to the consumer is to use the provider's applications running on a cloud infrastructure. The applications are accessible from various client devices through a thin client interface such as a web browser (e.g., web-based email).
The consumer does not manage or control the underlying cloud infrastructure, including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings.

Cloud Platform as a Service (PaaS)

The capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages and tools supported by the provider. The consumer does not manage or control the underlying cloud infrastructure, including network, servers, operating systems, or storage, but has control over the deployed applications and possibly application hosting environment configurations.

Cloud Infrastructure as a Service (IaaS)

The capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources on which the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control
over operating systems, storage, deployed applications, and possibly limited control of select networking components (e.g., host firewalls).

DEPLOYMENT MODELS

Private Cloud

The cloud infrastructure is operated solely for an organization. It may be managed by the organization or a third party and may exist on premise or off premise.

Community Cloud

The cloud infrastructure is shared by several organizations and supports a specific community that has shared concerns (e.g., mission, security requirements, policy, and compliance considerations). It may be managed by the organizations or a third party and may exist on premise or off premise.

Public Cloud

The cloud infrastructure is made available to the general public or a large industry group and is owned by an organization selling cloud services.

Hybrid Cloud

The cloud infrastructure is a composition of two or more clouds (private, community, or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load-balancing between clouds).

Based on the above definitions, the term inter-cloud computing is defined as follows: a cloud model that, for the purpose of guaranteeing service quality, such as the performance and availability of each service, allows on-demand reassignment of resources and transfer of workload through an interworking of the cloud systems of different cloud providers, based on coordination of each consumer's requirements for service quality with each provider's SLA and the use of standard interfaces.

OUR APPROACH TO THE PROBLEM

In our proposed model we establish a cloud setup between two computers on a peer-to-peer network using Ubuntu, Xen and Eucalyptus.
This can be discussed as follows:

Cloud Setup - Creating the cloud (test bed) using Ubuntu, Xen and Eucalyptus.

Resource Provisioning and Utilization - Monitoring access to critical resources such as a video camera, and collecting processor, memory, and swap usage information.

Load Balancing - A load balancing algorithm for homogeneous and heterogeneous architectures.

Testing - To evaluate the performance of the complete setup, we deploy the resource monitoring and load balancing tools on the test bed and evaluate the performance of our algorithm.

Why Resource Utilization?

Cloud computing has become a key way for businesses to manage resources, which are now provided through remote servers over the Internet instead of through the old hardwired systems which seem so out of date today. Cloud
computing allows companies to outsource some resources and applications to third parties, and it means less hassle and less hardware in a company. Just like any outsourced system, though, cloud computing requires monitoring. What happens when the services, servers, and Internet applications on which we rely run into trouble, suffer downtime, or otherwise don't perform to standard? How quickly will we notice, and how well will we react? Cloud monitoring allows us to track the performance of the cloud services we are using. Whether we use popular cloud services such as Google App Engine, Amazon Web Services, or a customized solution, cloud monitoring ensures that all systems are running. Cloud monitoring lets us follow the response times, service availability, and other metrics of cloud services so that we can respond in the event of any problems.

What is Load Balancing?

Load balancing is a technique in which the workload on the resources of one node is shifted to the corresponding resources of another node in the network without disturbing the running task. A standard way to scale web applications is by using a hardware-based load balancer [5]. The load balancer assumes the IP address of the web application, so all communication with the web application hits the load balancer first. The load balancer is connected to one or more identical web servers in the back-end. Depending on the user session and the load on each web server, the load balancer forwards packets to different web servers for processing. The hardware-based load balancer is designed to handle a high level of load, so it can easily scale. However, a hardware-based load balancer uses application-specific hardware components, so it is typically expensive. Because of the cloud's commodity business model, a hardware-based load balancer is rarely offered by cloud providers as a service.
Instead, one has to use a software-based load balancer running on a generic server.

IMPLEMENTATION

Module Number 1

The priority-based algorithm first selects the different resources to monitor, such as RAM, hard disk space, and the video camera, and assigns a threshold value to each resource; when a threshold is exceeded, the load can be diverted to another node present in the cloud. Figure 1 shows the flow diagram.

Figure 1

Module Number 2

In this module we submit jobs to the cloud. Each job is intended to be submitted to one of the nodes present in the cloud; by checking the threshold value of each node, a decision is taken and the next module is called. Figure 2 shows the flow diagram.
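The threshold check described in Modules 1 and 2 can be sketched as follows. This is a minimal illustration, not the paper's implementation: the resource names, threshold values, and node readings are all hypothetical.

```python
# Illustrative sketch of the Module 1/2 threshold check.
# Resource names, threshold values, and node data are hypothetical.

THRESHOLDS = {"ram_pct": 80.0, "disk_pct": 90.0, "camera_users": 1}

def node_overloaded(usage):
    """Return True if any monitored resource exceeds its threshold."""
    return any(usage.get(res, 0) > limit for res, limit in THRESHOLDS.items())

def pick_node(nodes):
    """Submit the job to the first node whose resources are all under
    threshold; return None when every node is overloaded."""
    for name, usage in nodes.items():
        if not node_overloaded(usage):
            return name
    return None

nodes = {
    "node-a": {"ram_pct": 91.0, "disk_pct": 40.0, "camera_users": 0},  # RAM over threshold
    "node-b": {"ram_pct": 55.0, "disk_pct": 60.0, "camera_users": 0},  # under all thresholds
}
print(pick_node(nodes))  # -> node-b
```

In a real deployment the per-node readings would come from a monitoring agent on each host rather than a hard-coded dictionary.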
Module Number 3

Figure 2

In module number 3, the threads which are ready to be submitted are checked by our load balancing algorithm. It also verifies the threshold value of the node as well as the threshold value of the upcoming load; if both are satisfied, the request is forwarded to module 4, otherwise the request is declined. Figure 3 shows the flowchart.

Algorithm

Figure 3

Our algorithm is intended to be used in large distributed systems, so satisfying the following properties is a necessity; we believe they are crucial for implementing intra-cloud load balancing in a decentralized manner.

Distributed execution: Algorithms are executed concurrently on independent physical hosts, with limited
information or no information about what the other parts of the algorithm are doing. They do not rely on central coordination by a single physical host which takes the role of a leader or coordinator. In order to get the total number of virtual machines running in the current system, we need not query all physical hosts: HAProxy provides a related interface that lets us easily obtain the number of running virtual machines on each physical host via any physical host.

From the algorithm we can see that there may be a higher probability of a virtual machine being migrated to a physical host which already has more virtual machines running on it. This seems counter-intuitive, but from the view of long-range system equilibrium it reflects the capacity of that physical host: the greater its computing and I/O capacity, the more virtual machines run on it. This implements load allocation approximately in proportion to the capacity of the physical hosts, as stated in the problem statement section.

For each physical host i, we define a cost function c_i that combines the host's processor usage and I/O usage. This represents the objective we want to achieve: to equilibrate the processor usage and I/O usage of the system. Our algorithm is a variant of that in reference [14], where the authors have proved that a pure Nash equilibrium always exists for this kind of model, thus ensuring stability.

For simplicity, we do not consider memory use; we assume that each physical host has enough memory that a memory shortage will not affect our experimental results. Of course, this is only a weak assumption. In our algorithm we can add a check on memory when migrating. If a shortage really occurred, all we would need to do is either randomly select a physical host again until finding a suitable one (up to a maximum number of retries), or wait for the next execution of the algorithm.
In our experiment, the practical memory use of each physical host remains far below its total. While the system runs, each host logs its processor usage every minute on the shared storage, and the algorithm executes once every twenty minutes. We use the standard deviation of the average processor usage of all hosts over the past twenty minutes as a measure of the balance of the system. If the number of system hosts were very large, we would select some hosts uniformly at random and compute the standard deviation over that random sample.

In the experiment, all virtual machines have the same load; we artificially distribute them on different physical hosts, each with a different number of virtual machines, forming a load imbalance in the system. We consider only the convergence of processor usage under different initial distributions of virtual machines on physical hosts. The following two tables give the initial distribution of virtual machines for our two tests; we take the standard deviation of the average processor usage of the system as the evaluation measure.
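The balance metric above can be computed directly with the standard library. The per-minute usage samples below are hypothetical placeholders for the values each host would log to shared storage.

```python
import statistics

# Sketch of the balance metric: the standard deviation of each host's average
# processor usage over the past twenty minutes (twenty one-per-minute samples,
# as described above). The sample values are hypothetical.

def balance(host_logs):
    """Lower values mean the hosts' average processor usage is more equal."""
    per_host_avg = [sum(samples) / len(samples) for samples in host_logs]
    return statistics.pstdev(per_host_avg)

balanced   = [[50.0] * 20, [52.0] * 20, [48.0] * 20]   # similar averages
imbalanced = [[90.0] * 20, [20.0] * 20, [40.0] * 20]   # skewed averages

print(balance(balanced))    # small deviation -> well balanced
print(balance(imbalanced))  # large deviation -> load imbalance
```

The population standard deviation (`pstdev`) is used here since all monitored hosts are included; for a random sample of a very large system, the sample standard deviation (`stdev`) would be the appropriate estimator.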
Figure 4

Figure 5

The results indicate that the memory requirement grows linearly, while the time increases exponentially as the number of hosts/VMs in the simulation environment is increased.

CONCLUSIONS

This paper presents the concepts of cloud computing along with research challenges in load balancing. It also focuses on the merits and demerits of cloud computing. The major thrust is the study of load balancing algorithms. This paper aims at a qualitative performance analysis of an existing VM load balancing algorithm, which was then implemented in CloudSim and the Java language. Execution analysis of the simulation shows that a change of MIPS affects the response time: an increase in MIPS vs. VM decreases the response time.

REFERENCES

1. Tony Bourke: Server Load Balancing, O'Reilly, ISBN 0-596-00050-2.

2. Chandra Kopparapu: Load Balancing Servers, Firewalls & Caches, Wiley, ISBN 0-471-41550-2.

3. Robert J. Shimonski: Windows Server 2003 Clustering & Load Balancing, Osborne McGraw-Hill, ISBN 0-07-222622-6.

4. Jeremy Zawodny, Derek J. Balling: High Performance MySQL, O'Reilly, ISBN 0-596-00306-4.

5. J. Kruskall and M. Liberman: The Symmetric Time Warping Problem: From Continuous to Discrete. In Time Warps, String Edits and Macromolecules: The Theory and Practice of Sequence Comparison, pp. 125-161, Addison-Wesley Publishing Co., 1983.

6. Matthew Syme, Philip Goldie: Optimizing Network Performance with Content Switching: Server, Firewall and Cache Load Balancing, Prentice Hall PTR, ISBN 0-13-101468-5.
7. Anthony T. Velte, Toby J. Velte, Robert Elsenpeter: Cloud Computing: A Practical Approach, Tata McGraw-Hill Edition.

8. Martin Randles, David Lamb, A. Taleb-Bendiab: A Comparative Study into Distributed Load Balancing Algorithms for Cloud Computing, 2010 IEEE 24th International Conference on Advanced Information Networking and Applications Workshops; Mladen A. Vouk: Cloud Computing Issues, Research and Implementations, Proceedings of the ITI 2008 30th Int. Conf. on Information Technology Interfaces, June 23-26, 2008.

9. Ali M. Alakeel: A Guide to Dynamic Load Balancing in Distributed Computer Systems, IJCSNS International Journal of Computer Science and Network Security, Vol. 10, No. 6, June 2010.

10. http://www03.ibm.com/press/us/en/pressrelease/22613.wss

11. http://www.amazon.com/gp/browse.html?node=201590011

12. Amazon Elastic Compute Cloud, http://aws.amazon.com/ec2/

13. M. Vlachos, M. Hadjieleftheriou, D. Gunopulos, and E. Keogh: Indexing Multi-Dimensional Time-Series with Support for Multiple Distance Measures, Proc. of SIGKDD, 2003.

14. E. Keogh and C. A. Ratanamahatana: Exact Indexing of Dynamic Time Warping, Journal of Knowledge and Information Systems, 2004.

15. Praveen Reshamlal and S. H. Patil: Resource Provisioning for Video on Demand in SaaS, International Journal of Computer Engineering and Technology, Vol. 4, Issue 3, May-June 2013.

16. Rohini G. Khalkar and S. H. Patil: Data Integrity Proof Technique in Cloud Storage, International Journal of Computer Engineering and Technology, Vol. 4, Issue 2, March-April 2013.

17. Rohini G. Khalkar and S. H. Patil: Data Security Techniques in Cloud Storage, International Journal of Computer Science and Technology, Vol. 4, Issue 2, Version 3, April-June 2013.