
Avoiding Overload Using Virtual Machines in a Cloud Data Centre

Ms. S. Indumathi (1) and Mr. P. Ranjithkumar (2)
(1) M.E. II year, Department of CSE, Sri Subramanya College of Engineering and Technology, Palani, Dindigul, Tamilnadu, India - 624 615
(2) Assistant Professor, Department of CSE, Sri Subramanya College of Engineering and Technology, Palani, Dindigul, Tamilnadu, India - 624 615

ABSTRACT

Cloud data centres improve the CPU utilization of their servers (physical machines, or PMs) through virtualization (virtual machines, or VMs). Over-virtualized PMs suffer performance degradation, while under-virtualized PMs waste power. In this paper we present a system that uses virtualization technology to allocate data centre resources dynamically based on application demands and to support green computing by optimizing the number of servers in use. This minimizes hardware cost, saves energy, and improves system performance under heavy load. The concept of skewness is introduced to measure the unevenness in the multi-dimensional resource utilization of a server. By minimizing skewness, we can combine different types of workloads and improve the overall utilization of server resources. Finally, the system minimizes overload on a server by migrating processes from a highly loaded (hot spot) server to a normally loaded (warm state) server with the help of a migration list. A physical machine is not assigned further work while it is overloaded, which avoids server failure under high load.

KEYWORDS: Cloud Computing, Resource Management, Virtualization, Green Computing.

I. INTRODUCTION

Virtualization is a key technology underlying cloud computing platforms [5, 12], where applications encapsulated within virtual machines are dynamically mapped onto a pool of physical servers. Virtual machines provide several benefits in a cloud computing environment, including increased physical resource utilization via resource multiplexing, as well as flexibility and easy scale-up/scale-down through migration and fast restarts. In modern virtualization-based compute clouds, applications share the underlying hardware by running in isolated virtual machines (VMs). Virtual machine monitors (VMMs) provide a mechanism for mapping VMs to physical resources [3]. This mapping is largely hidden from cloud users. Besides reducing hardware cost, consolidation also saves on electricity, which contributes a significant portion of the operational expenses of large data centres. VM live migration technology makes it possible to change the mapping between VMs and PMs while applications are running [5], [6]. However, a policy issue remains: how to decide the mapping adaptively so that the resource demands of the VMs are met while the number of PMs in use is minimized. This is challenging when the resource needs of VMs are heterogeneous, due to the diverse set of applications they run, and vary with time as the workloads grow and shrink. The capacity of the PMs can also be heterogeneous because multiple generations of hardware co-exist in a data centre. For overload avoidance, the utilization of PMs should be kept low enough to reduce the possibility of overload in case the resource needs of VMs increase later. For green computing, the utilization of PMs should be kept reasonably high to make efficient use of their energy.

II. RELATED WORK

Automatic scaling of web applications was previously studied in [10], [11] for data centre environments. In MUSE [12], each server holds replicas of all web applications running in the system. The dispatch algorithm in a front-end L7 switch makes sure requests are reasonably served while minimizing the number of under-utilized servers. Work [12] uses network flow algorithms to allocate the load of an application among its running instances. For connection-oriented Internet services such as Windows Live Messenger, work [10] presents an integrated approach for load dispatching and server provisioning. None of the works above use virtual machines, and they require the applications to be structured in a multi-tier architecture with load balancing provided through a front-end dispatcher. In contrast, our work targets Amazon EC2-style environments and places no restriction on what applications run inside the VMs or how they are constructed. A VM is treated like a black box, and resource management is done only at the granularity of whole VMs.

VM live migration is a widely used technique for dynamic resource allocation in a virtualized environment [8], [10], [12]; our work also belongs to this category. Sandpiper combines multi-dimensional load information into a single Volume metric [8]. It sorts the list of PMs by their volumes and the VMs in each PM by their volume-to-size ratio (VSR). The HARMONY system applies virtualization technology across multiple resource layers [20]. It uses VM and data migration to mitigate hot spots not only on servers but also on network devices and storage nodes, and it introduces the Extended Vector Product (EVP) as an indicator of imbalance in resource utilization. Dynamic placement of virtual servers to minimize SLA violations is studied in [9]. The authors model it as a bin-packing problem and use the well-known first-fit approximation algorithm to calculate the VM-to-PM layout periodically. That algorithm, however, is designed mostly for off-line use and is likely to incur a large number of migrations when applied in an on-line environment where the resource needs of VMs change dynamically. Many efforts have been made to curtail energy consumption in data centres. Hardware-based approaches include novel thermal designs for lower cooling power and the adoption of power-proportional and low-power hardware. Work [10] uses Dynamic Voltage and Frequency Scaling (DVFS) to adjust CPU power according to its load; we do not use DVFS for green computing.

III. SYSTEM ARCHITECTURE

A. Existing System

Virtual machine monitors (VMMs) provide a mechanism for mapping virtual machines (VMs) to physical resources. This mapping is largely hidden from cloud users, who do not know where their VM instances run. It is up to the cloud provider to make sure the underlying physical machines (PMs) have sufficient resources to meet their needs. VM live migration technology makes it possible to change the mapping between VMs and PMs while applications are running. The capacity of PMs can also be heterogeneous because multiple generations of hardware co-exist in a data centre. In the existing system, when a user sends a request to the server, the VM searches for results on a PM, which is responsible for allocating resources based on the current application demand. Because this mapping is hidden from the users and does not account for dissimilar applications, the system suffers under varying application demands.
The existing approach also uses a larger number of servers, but server utilization is not efficient, which degrades performance and increases hardware cost. Fig. 1 shows the architecture of the existing system. The major drawbacks of this system are:

- How to decide the mapping adaptively so that the resource demands of the VMs are met while the number of PMs used is minimized.

- How to cope with resource needs of VMs that are heterogeneous, due to the diverse set of applications they run, and that vary with time as the workloads grow and shrink.

Some other drawbacks are:

- Waste of electricity
- Task overload
- Increased hardware cost

Fig. 1. System architecture of the existing system.

In our scheme, shown in Fig. 2, servers are classified into three states based on their load, by assigning threshold values to each server according to the number of processes it is running: cold, hot, and warm. Initially a server is in the cold state. If a server takes on more processes than its hot threshold allows, it is in the hot state; no further process may enter that server, which avoids overloading it. If a server's load is around the average threshold, it is in the warm state and can accept some more requests. If a server's load is below the cold threshold, it is in the cold state and is processing only a few requests; those requests are migrated to hot-state or warm-state servers in a manner that will not cause an overload, and the cold server is then turned off to save energy for future use. In other cases a server may be idle, with no processes at all, and it is turned off to support the concept of green computing.
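As a concrete illustration of this classification, the sketch below assigns a state to a server from a single aggregate utilization figure. The threshold values and the function name are assumptions for the example, not values taken from the paper.

```python
# A minimal sketch (not the authors' implementation) of the three-state
# classification described above. The threshold values and the utilization
# metric are illustrative assumptions.

HOT_THRESHOLD = 0.90   # assumed: utilization above this marks a hot spot
COLD_THRESHOLD = 0.25  # assumed: utilization below this marks a cold spot

def classify_server(utilization: float) -> str:
    """Return 'hot', 'warm', 'cold', or 'idle' for a server's load level."""
    if utilization == 0.0:
        return "idle"   # no processes: candidate for shutdown (green computing)
    if utilization > HOT_THRESHOLD:
        return "hot"    # overloaded: accept no new work, migrate VMs away
    if utilization < COLD_THRESHOLD:
        return "cold"   # under-utilized: migrate VMs away, then power off
    return "warm"       # normal load: can accept some more requests

# Example:
print(classify_server(0.95))   # hot
print(classify_server(0.10))   # cold
```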

Fig. 2. Architecture of the proposed system.

B. Proposed System

In the proposed system, the design and implementation of an automated resource management system is presented to achieve a good balance between two goals:

Overload avoidance: the capacity of a PM should be sufficient to satisfy the resource needs of all VMs running on it. Otherwise, the PM is overloaded and the performance of its VMs degrades.

Green computing: the number of PMs used should be minimized as long as they can still satisfy the needs of all VMs. Idle PMs can be turned off to save energy.

IV. VM SCHEDULER AND PREDICTOR

A. VM Scheduler

The VM Scheduler is invoked periodically. It receives the resource demand history of the VMs, the capacity and load history of the PMs (physical machines), and the current layout of VMs on PMs. It schedules the processes running in the system, and based on this the state of each server varies among the hot, cold, and warm states. It then forwards the request to the predictor, which predicts the state of each process based on the threshold values. When multiple systems are connected together, the scheduler coordinates the processes running on the different systems, which helps the user to see which process is running on a particular system.

B. Predictor

The predictor predicts the future resource demands of the VMs and the future load of the PMs based on past statistics. The load of a PM can be computed by aggregating the resource usage of its VMs. The local node manager at each node first attempts to satisfy new demands locally by adjusting the resource allocation of VMs sharing the same VMM; a physical machine can change the CPU allocation among its VMs by adjusting their weights in its CPU scheduler. Predictions of the externally observed behaviour of the VMs are computed using an exponentially weighted moving average (EWMA):

E(t) = α · E(t−1) + (1 − α) · O(t), with 0 ≤ α ≤ 1,

where E(t) and O(t) are the estimated and the observed load at time t, respectively, and α reflects a trade-off between stability and responsiveness. To predict the future load of a server, the load is measured every minute and the load at the next time step is predicted; the predicted value is modelled as a linear function of its past observations.
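The following minimal sketch shows the EWMA update defined above applied to a stream of per-minute load observations. The value of α and the sample loads are illustrative assumptions, not values from the paper.

```python
# Illustrative sketch of the EWMA load estimate described above.
# alpha and the sequence of observed loads are assumptions for the example.

def ewma_update(previous_estimate: float, observed: float, alpha: float = 0.7) -> float:
    """E(t) = alpha * E(t-1) + (1 - alpha) * O(t), with 0 <= alpha <= 1."""
    return alpha * previous_estimate + (1.0 - alpha) * observed

# Feed in one observation per minute and keep a running estimate.
observed_loads = [0.30, 0.35, 0.50, 0.80, 0.75]  # hypothetical per-minute CPU loads
estimate = observed_loads[0]
for load in observed_loads[1:]:
    estimate = ewma_update(estimate, load)
    print(f"observed={load:.2f}  estimated={estimate:.2f}")
```

A larger α makes the estimate more stable but slower to react to load spikes; a smaller α does the opposite.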

V. HOT AND COLD SPOT SOLVER AND MIGRATION LIST

A. Hot and Cold Spot Solver

The hot spot solver in our VM Scheduler detects whether the resource utilization of any PM is above the hot threshold (i.e., a hot spot). If so, some VMs running on it are migrated away to reduce its load, and the request is then passed to the cold spot solver. While a server is a hot spot it is not assigned further work, which avoids overloading it. A server is a hot spot if the utilization of any of its resources is above the hot threshold; this indicates that the server is overloaded and hence some VMs running on it should be migrated away. The temperature of a hot spot reflects its degree of overload. The cold spot solver checks whether the average utilization of actively used PMs (APMs) is below the green computing threshold. If so, some of those PMs could potentially be turned off to save energy. It identifies the set of PMs whose utilization is below the cold threshold (i.e., cold spots), attempts to migrate away all their VMs, and then forwards the request to the migration list. A server is actively used if it has at least one VM running; otherwise, it is inactive.

B. Migration List

The migration list receives the request from the cold spot solver, compiles the list of VMs to move, and passes its response to the user control for execution. If only a minimal number of processes is allocated to a particular server, those processes are migrated to a hot-state or warm-state server in a manner that will not cause an overload on that server. Conversely, if a server is allocated more processes than its hot threshold allows, the excess processes are migrated to a cold-state server. By moving processes from hot to cold and from cold to hot servers, energy is saved and hardware cost is reduced. The list of hot spots is sorted in descending order of temperature.

VI. SKEWNESS ALGORITHM AND GREEN COMPUTING

C. Skewness Algorithm

This algorithm quantifies the unevenness in the utilization of multiple resources on a server:

Skewness(R) = (1/n) · [ (X_1/X − 1)^2 + (X_2/X − 1)^2 + … + (X_n/X − 1)^2 ]

where n is the number of resources, X is the average utilization of all resources of server R, and X_i is the utilization of the i-th resource. By minimizing skewness, different kinds of workloads can be combined and the overall utilization of server resources improved.

D. Comparison with the VectorDot Algorithm

VectorDot is the scheduling algorithm used in the HARMONY [3] virtualization system, whose architecture is similar to that of our system. It optimizes utilization for three types of resources: physical servers, data centre network bandwidth, and the I/O bandwidth of the centralized storage system. In addition to virtual machine migration, it uses virtual storage migration. The skewness algorithm is similar to VectorDot in that both minimize a measure of imbalance, but VectorDot emphasizes load balancing, whereas skewness explicitly takes care of the structure of the residual resources to make them as useful as possible. Compared to VectorDot, skewness supports load balancing as well as green computing.
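To make the metric concrete, here is a minimal sketch assuming per-resource utilizations are available as plain lists: it computes Skewness(R) as defined above and uses it to pick a migration destination for a VM from a hot spot, preferring the server whose skewness grows least while staying below the hot threshold. The data structures, threshold value, selection rule, and names are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch of the skewness metric defined above and of using it to
# pick a migration destination for a VM from a hot spot. Data structures,
# threshold values, and the resource model are illustrative assumptions.
from statistics import mean

HOT_THRESHOLD = 0.90  # assumed hot threshold per resource

def skewness(utilizations: list[float]) -> float:
    """Skewness(R) = (1/n) * sum_i (X_i / X - 1)^2 over the n resources of a server."""
    x_bar = mean(utilizations)
    if x_bar == 0:
        return 0.0
    n = len(utilizations)
    return sum((x / x_bar - 1.0) ** 2 for x in utilizations) / n

def choose_destination(vm_demand: list[float], servers: dict[str, list[float]]) -> str | None:
    """Pick the server whose skewness increases least after accepting the VM,
    skipping any placement that would push a resource above the hot threshold."""
    best, best_delta = None, None
    for name, load in servers.items():
        new_load = [a + b for a, b in zip(load, vm_demand)]
        if any(u > HOT_THRESHOLD for u in new_load):
            continue  # would create a hot spot
        delta = skewness(new_load) - skewness(load)
        if best_delta is None or delta < best_delta:
            best, best_delta = name, delta
    return best

# Example: per-server [CPU, memory, network] utilizations (hypothetical).
servers = {"pm1": [0.60, 0.30, 0.20], "pm2": [0.40, 0.45, 0.50]}
print(choose_destination([0.10, 0.10, 0.05], servers))
```

Favouring the placement that adds the least skewness tends to pair workloads with complementary resource demands on the same server, which is exactly what the metric is meant to encourage.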
E. Green Computing

When the resource utilization of an active server is too low, the server is turned off to save energy. The main problem is to reduce the number of active servers when the load is too low without causing any performance degradation, either now or in the future.

The green computing algorithm is invoked when the utilization of the active servers is below the green computing threshold. The list of cold spots is sorted in ascending order of memory size. Before an under-utilized server can be turned off, all of its VMs must be migrated away, and each destination server must remain below the warm threshold after accepting a VM. While consolidating under-utilized servers saves energy, overdoing it may create hot spots in the future; the warm threshold is designed to prevent this. The consolidation adds extra workload to the destination servers, but this is not a major problem because green computing is initiated only when the system load is low. Cold spots are eliminated only when the average load of all active servers (APMs) is below the green computing threshold; otherwise, they are left as potential destination machines for future offloading.
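A rough sketch of this consolidation step is given below, assuming a single aggregate utilization value per server and illustrative threshold values; the data layout and names are assumptions for the example, not the paper's implementation.

```python
# Sketch of the consolidation step described above: cold spots are visited in
# ascending order of memory size, and their VMs are moved only to servers that
# stay below the warm threshold; emptied servers are powered off.
WARM_THRESHOLD = 0.65
COLD_THRESHOLD = 0.25
GREEN_COMPUTING_THRESHOLD = 0.40

def consolidate(servers):
    """servers: name -> {'util': float, 'mem': float, 'vms': [(vm_name, util_delta)]}"""
    active = {n: s for n, s in servers.items() if s["vms"]}
    if not active:
        return []
    avg = sum(s["util"] for s in active.values()) / len(active)
    if avg >= GREEN_COMPUTING_THRESHOLD:
        return []  # system load not low enough; keep cold spots as future destinations
    cold = sorted((n for n, s in active.items() if s["util"] < COLD_THRESHOLD),
                  key=lambda n: active[n]["mem"])
    powered_off = []
    for name in cold:
        moved_all = True
        for vm, delta in list(active[name]["vms"]):
            dest = next((d for d, s in active.items()
                         if d != name and s["util"] + delta <= WARM_THRESHOLD), None)
            if dest is None:
                moved_all = False
                break
            active[dest]["vms"].append((vm, delta))
            active[dest]["util"] += delta
            active[name]["vms"].remove((vm, delta))
            active[name]["util"] -= delta
        if moved_all and not active[name]["vms"]:
            powered_off.append(name)  # all VMs migrated away; turn the server off
    return powered_off

# Example (hypothetical servers): pm1 is a cold spot whose VM fits on pm2.
servers = {
    "pm1": {"util": 0.10, "mem": 8,  "vms": [("vm-a", 0.10)]},
    "pm2": {"util": 0.55, "mem": 16, "vms": [("vm-b", 0.30), ("vm-c", 0.25)]},
}
print(consolidate(servers))  # ['pm1']
```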

F. Effects of Load on a Server

The load on a server varies. Initially, when the number of processes requesting resources corresponds to the cold state, the load on the server is as shown in Fig. 3. Later, under moderate load corresponding to the warm state, the load is as shown in Fig. 4. When the number of requests rises above the hot threshold, the load is as shown in Fig. 5.

Fig. 3. Initial load on a server (Booked vs. Idle).

Fig. 4. Normal load on a server (Booked vs. Idle).

Fig. 5. High load on a server (Booked vs. Idle).

VII. CONCLUSION

The design, implementation, and evaluation of a resource management system for cloud computing services have been presented. Our system multiplexes virtual to physical resources adaptively based on changing application demand. The skewness metric is used to combine VMs with different resource characteristics so that the capacities of servers are well utilized. The algorithm achieves both overload avoidance and green computing for systems with multi-resource constraints. Resource allocation to a server under high load is avoided, server failures and crashes are limited, and the overall utilization of resources is improved without performance degradation.

VIII. FUTURE WORK

The predictor predicts the future resource demands of VMs and the future load of PMs based on past statistics, where the load of a PM is computed by aggregating the resource usage of its VMs. In future work, the resource utilization of servers will be visualized graphically in a cloud environment using the cloud tool EyeOS. Users can then easily identify the load on a server from the graph and thus avoid overloading it, reducing the average time taken for resource allocation decisions.

IX. REFERENCES

[1] M. Armbrust et al., "Above the Clouds: A Berkeley View of Cloud Computing," technical report, Univ. of California, Berkeley, Feb. 2009.
[2] G. Chen, H. Wenbo, J. Liu, S. Nath, L. Rigas, L. Xiao, and F. Zhao, "Energy-Aware Server Provisioning and Load Dispatching for Connection-Intensive Internet Services," Proc. USENIX Symp. Networked Systems Design and Implementation (NSDI '08), Apr. 2008.
[3] C. Clark, K. Fraser, S. Hand, J. G. Hansen, E. Jul, C. Limpach, I. Pratt, and A. Warfield, "Live Migration of Virtual Machines," Proc. Symp. Networked Systems Design and Implementation (NSDI '05), May 2005.
[4] S. Di and C.-L. Wang, "Dynamic Optimization of Multiattribute Resource Allocation in Self-Organizing Clouds," IEEE Transactions on Parallel and Distributed Systems, vol. 24, no. 3, pp. 464-478, 2013. doi: 10.1109/TPDS.2012.144
[5] J. Sonneck and A. Chandra, "Virtual Putty: Reshaping the Physical Footprint of Virtual Machines," Proc. Int'l HotCloud Workshop in Conjunction with USENIX Ann. Technical Conf., 2009.

[6] M. Nelson, B.-H. Lim, and G. Hutchins, "Fast Transparent Migration for Virtual Machines," Proc. USENIX Ann. Technical Conf., 2005.
[7] C. A. Waldspurger, "Memory Resource Management in VMware ESX Server," Proc. Symp. Operating Systems Design and Implementation (OSDI '02), Aug. 2002.
[8] R. Goldberg, "Survey of Virtual Machine Research," IEEE Computer, 7(6), June 1974.
[9] X. Meng et al., "Efficient Resource Provisioning in Compute Clouds via VM Multiplexing," Proc. IEEE Seventh Int'l Conf. Autonomic Computing (ICAC '10), pp. 11-20, 2010.
[10] R. Nathuji and K. Schwan, "VirtualPower: Coordinated Power Management in Virtualized Enterprise Systems," Proc. ACM SIGOPS Symp. Operating Systems Principles (SOSP '07), 2007.
[11] T. Sandholm and K. Lai, "MapReduce Optimization Using Regulated Dynamic Prioritization," Proc. Int'l Joint Conf. Measurement and Modeling of Computer Systems (SIGMETRICS '09), 2009.
[12] A. Singh, M. Korupolu, and D. Mohapatra, "Server-Storage Virtualization: Integration and Load Balancing in Data Centers," Proc. ACM/IEEE Conf. Supercomputing, 2008.