CHAPTER 1 INTRODUCTION

1.1 Background

Demand for cloud computing infrastructure is rising with the growing IT requirements of twenty-first-century business. Constraints such as server capacity, storage, bandwidth and power pose real-time challenges for datacenters. Expanding conventional infrastructure in step with this demand is difficult, largely because of its inconvenience and inflexibility, and the resulting complexity raises issues of cost, deployment risk and operational risk. Organizations that run large-scale IT infrastructure will face these challenges in the near future. Hence, in search of cost-effective and economical solutions, workloads are migrating to cloud infrastructure on a massive scale, and public cloud service providers must evolve and develop their infrastructure to meet the increasing demand of an IT-dependent society.

Cloud computing is not limited to datacenter virtualization, although it is often misconceived that way because, in the cloud era, datacenter virtualization was widely adopted to reduce cost. Virtualized management technology has since evolved at several levels of resource provisioning to support larger, more dynamic resource allocations. This further reduced costs while also increasing datacenter flexibility and performance, ushering in a new era of virtualization-based optimization technology for enterprise and public clouds. Cloud computing thus offers a pathway to reduce both cost and resource drain, and it also creates a new organizational requirement in which separate teams are responsible for networking, computation and storage.

1.2 Problem Description

Despite the advantages of cloud computing, the following basic challenges remain:

1. In many datacenters today, the cost of power is second only to the cost of the equipment itself. Cloud datacenters already consume roughly 1-2 percent of the world's electricity and, if left unchecked, will consume an ever-increasing share of it.

2. Network modeling is another important issue in deploying cloud computing for client-aware applications and services. Where server sprawl once reigned, datacenter managers must now deal with virtual machine sprawl. Increased utilization of physical servers through virtualization has raised demand for network bandwidth and storage, which in turn requires more domains, more cabling and more elaborate management strategies. Operational cost rises as complexity grows, because datacenter architectures are heterogeneous.

3. In IT services and solutions, cloud computing can enable fast deployment of business processes. To realize this potential, the environment should have an open architecture, so that cloud computing can be built from interoperable building blocks and solutions based on multi-vendor innovation; broad adoption of cloud computing will then displace proprietary solutions.

4. Keeping the cloud environment under control so that critical applications remain stable as business demand fluctuates.

5. Public clouds must be used together with additional provisioning for data security, privacy and intellectual-property protection.

6. The flexibility of the resource pools offered by current cloud computing tools may not be perfect.

7. Selecting an interoperability strategy is challenging in an environment where application demand changes dynamically.

1.3 Objectives of the Research

The focus of this research work is to analyze the power issues of core cloud computing infrastructure, together with network and storage models whose provisioning parameters allow the cloud to optimize resource allocation. The main objectives of this research work are as follows:

1. To survey the energy issues of large-scale cloud computing infrastructure and capture the risk factors associated with non-power-aware algorithms.

2. To conduct a trade-off analysis using parameters that estimate Service Level Agreement (SLA) violations, and to model the cloud in a heterogeneous Dynamic Voltage and Frequency Scaling (DVFS) enabled datacenter for better visualization of the proposed analysis.

3. To work out a network and distribution model that uses a resource broker in cloud computing, taking power as the research factor in the operation of power-aware processing elements and virtual machines.

4. To analyze all the parameters central to designing the proposed storage model for provisioning.

1.4 Research Methodologies

Average utilization of datacenter resources is low, which undermines power efficiency. Servers that are already well utilized cannot absorb new service applications, so maintaining the desired Quality of Service (QoS) requires accommodating all fluctuating workloads, which leads to performance degradation. Conversely, servers in a non-virtualized datacenter are unlikely to be completely idle, because of background tasks (e.g., incremental backups) and distributed databases or file systems. A further problem accompanying high power consumption is heat dissipation, driven by the increasing density of server components (e.g., 1U and blade servers). DVFS reduces the number of instructions a processor can issue in a given amount of time and therefore reduces performance as well; this, in turn, increases the run time of CPU-bound program segments. A central goal of the proposed study is to analyze the root sources of energy consumption in various cloud-based applications and then introduce a model that reduces energy consumption while minimizing performance loss in a cloud computing system. The proposed research design is shown in Fig. 1.1.
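The power-versus-performance trade-off behind DVFS can be made concrete with a small numerical sketch. The model below is illustrative only: it assumes the standard CMOS dynamic-power relation P = C * V^2 * f, a supply voltage that is scaled down together with frequency, and a purely CPU-bound job whose run time grows in inverse proportion to frequency. The capacitance, voltage and frequency values are hypothetical, not measurements from any datacenter.

```python
# Sketch: dynamic power and runtime under DVFS (illustrative model).
# Assumes dynamic power P = C * V^2 * f; since voltage can be lowered
# along with frequency, power falls roughly cubically with frequency.

def dynamic_power(capacitance, voltage, frequency):
    """Dynamic power dissipation of a CMOS circuit: P = C * V^2 * f (watts)."""
    return capacitance * voltage ** 2 * frequency

def cpu_bound_runtime(base_runtime, f_max, f):
    """A CPU-bound segment slows down in proportion to the frequency drop."""
    return base_runtime * f_max / f

# Hypothetical host: C = 1e-9 F, 1.2 V at 3.0 GHz, scaled to 0.8 V at 2.0 GHz.
p_high = dynamic_power(1e-9, 1.2, 3.0e9)          # ~4.32 W
p_low = dynamic_power(1e-9, 0.8, 2.0e9)           # ~1.28 W

# Energy = power * time: the job runs longer at the lower frequency,
# but total energy still drops because power falls faster than runtime grows.
t_high = 10.0                                     # seconds at f_max
t_low = cpu_bound_runtime(t_high, 3.0e9, 2.0e9)   # 15 s at the lower step
e_high = p_high * t_high                          # ~43.2 J
e_low = p_low * t_low                             # ~19.2 J
```

This illustrates the tension the methodology must balance: DVFS saves energy on CPU-bound segments precisely by extending their run time, which is why performance loss has to be tracked alongside the energy savings.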

Figure 1.1 Proposed Research Design

The proposed study accomplishes its goal by adopting two factors, power and performance, in the research design shown in Fig. 1.1. The first goal is achieved through a novel design that performs a semantic evaluation of energy consumption, focused primarily on monitoring Service Level Agreement (SLA) violations. The schema uses DVFS as its energy-saving approach, with two benefits: (i) resource throttling, which scrutinizes resource utilization, memory and wait time during both peak and off-peak hours; and (ii) dynamic component deactivation, which deactivates cloud components while they are idle in order to exploit workload variability. Performance loss is then scaled using a network model built mainly on a broker design and cloudlet IDs, and by incorporating the newly established DVFS-based energy-optimization design aimed at massive task execution.
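The dynamic component deactivation technique mentioned above can be sketched as follows. This is a minimal illustration under stated assumptions, not the thesis model itself: the `Host` class, the idle and sleep wattages, and the policy of sleeping any host with no running VMs are all hypothetical simplifications.

```python
# Sketch of Dynamic Component Deactivation: hosts holding no active VMs
# are switched to a low-power sleep state and woken when VMs arrive.
# Power figures are assumed, illustrative values.

IDLE_WATTS = 110.0    # assumed idle draw of a powered-on host
SLEEP_WATTS = 10.0    # assumed draw in the sleep state

class Host:
    def __init__(self, name):
        self.name = name
        self.vms = []       # VMs currently placed on this host
        self.asleep = False

    def power_draw(self):
        return SLEEP_WATTS if self.asleep else IDLE_WATTS

def deactivate_idle_hosts(hosts):
    """Sleep every host with no running VMs; wake any host that holds VMs.
    Returns the resulting total power draw of the pool."""
    for h in hosts:
        h.asleep = (len(h.vms) == 0)
    return sum(h.power_draw() for h in hosts)

hosts = [Host("h1"), Host("h2"), Host("h3")]
hosts[0].vms.append("vm-1")            # only h1 is actually busy
total = deactivate_idle_hosts(hosts)   # 110 + 10 + 10 = 130 W
```

Even in this toy pool, deactivating the two idle hosts cuts the idle floor of the pool substantially, which is the workload-variability leverage the design aims for.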

Figure 1.2 Energy consumption at different levels in computing systems

Traditionally, driven by the demands of applications from the consumer, scientific and business domains, computing systems have been developed with the objective of improving performance (Fig. 1.2). However, electricity consumption and carbon dioxide emissions are growing with the expanding user base: additional performance comes at the cost of increasing energy consumption. Consequently, the goal of computer system design has shifted toward power and energy efficiency. To recognize open challenges in the area and facilitate future advancement, it is crucial to synthesize and classify the research on power- and energy-efficient design conducted to date (Brown et al 2007; Gurumurthi et al 2002). This work discusses the problems and causes addressed by the present taxonomy of energy-efficient and high-power computing system design, covering the hardware, operating system, virtualization and datacenter levels. The survey is intended to guide future development efforts and has been designed as a map of existing and future work across the areas of the taxonomy.

1.5 Thesis Contribution

The prominent contributions of this research work are as follows:

1. Retaining a Robust SLA: One important concern of the proposed research work is to ensure efficient SLAs in cloud computing. SLA agreements are of vital consequence to both consumers and cloud providers for reliable and flexible management. On one hand, preventing SLA violations avoids the costly penalties a provider must pay when violations occur; on the other hand, user interaction with the system can be minimized when appropriate responsiveness and flexibility keep SLA violations low, which enables cloud computing to take root as a flexible and reliable form of on-demand computing. Although a large body of work considers the development of flexible, self-manageable cloud computing infrastructures, adequate monitoring infrastructures that can predict possible SLA violations are lacking. The proposed work ensures the actual deployment of algorithms for maintaining SLAs.

2. Efficient Resource Provisioning: The proposed research work also addresses efficient resource provisioning alongside SLAs. It models the cloud service provider's resource utilization stochastically and formulates resource provisioning so as to improve the quality of provisioning decisions. A prime goal of this contribution is to arrive at a management solution and to analyze the critical parameters of cloud service providers. Resource provisioning has multiple dimensions, such as memory, storage, CPU and bandwidth. Formulating the provisioning problem for multiple customers is trickier, as different customers may have requirements along different dimensions, and customers' wish to grow and shrink provisioned capacity on the fly adds further complications that stress the flexibility of the cloud paradigm. The model designed here focuses on CPU power saving in virtual machine provisioning in cloud computing. The proposal considers dynamic power dissipation because it is the dominant contributor to total power consumption, and datacenters can increase their profit by reducing dynamic energy consumption.

3. Protocol Implementation: A unique aspect of this proposed research work is the implementation of the protocol and its associated techniques while keeping network overhead low. Modern datacenters are provisioned with significant computational power based on high-performance multicore processors and dense server configurations. In multicore systems, including general-purpose and high-performance embedded processors, the operating system is responsible for dynamically adjusting the frequency of each processor to the current workload. Dynamic voltage scaling (DVS) is a standard power-management technique, and DVFS has been widely applied to reduce power consumption in many domains, particularly micro-architectures such as embedded systems. The system application is flexible in design and functional across various demanding scenarios. The proposed system implements a DVFS scheme alongside the traditional DVS scheme, with both schemes considered for energy optimization in datacenters and all their components.
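The operating-system role described above, dynamically matching each processor's frequency to its current workload, can be sketched as a simple utilization-driven policy in the spirit of Linux's "ondemand" CPUFreq governor. The frequency steps, the 20% headroom threshold, and the function name below are illustrative assumptions, not the policy implemented in this thesis.

```python
# Sketch of a utilization-driven DVFS policy: pick the lowest available
# frequency step that still leaves ~20% headroom over current demand.
# Frequency steps and threshold are assumed, illustrative values.

FREQ_STEPS_GHZ = [1.2, 1.8, 2.4, 3.0]   # assumed P-state frequencies

def select_frequency(utilization, freqs=FREQ_STEPS_GHZ):
    """Return the chosen frequency (GHz) for a given utilization in [0, 1].

    utilization is the fraction of *maximum* capacity currently in use,
    so demand is expressed in GHz-equivalents of the top frequency.
    """
    f_max = freqs[-1]
    demand = utilization * f_max
    for f in freqs:                      # freqs are sorted ascending
        if demand <= 0.8 * f:            # keep 20% headroom at this step
            return f
    return f_max                         # saturated: run at full speed

chosen = select_frequency(0.5)           # 50% utilization -> 2.4 GHz step
```

A lightly loaded core drops to the lowest step, while a nearly saturated one is kept at full frequency, which is how such governors trade a bounded performance risk for energy savings.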

1.6 Thesis Organization

The thesis is organized as follows:

Chapter 1: Introduction: This preliminary chapter discusses the research domain and its significance for applications and real-life utility. It also presents the problem definition, research objectives and methodologies, and encapsulates the research contributions.

Chapter 2: Review of Literature: This chapter is organized in two parts: a review of the domain, and a review of prior research work. A prominent part of this chapter is the investigation and visualization of the research gap underlying the proposed research issue.

Chapter 3: Energy Issues in Datacenters: This chapter discusses the most prominent issue of unwanted power consumption in datacenters, a major focus of this research work, and surveys the existing models and applications that address it. The illustration describes the constituents involved in energy consumption, from the client's request through virtual machines and processing elements to the racks of servers that deliver cloud computing services.

Chapter 4: Estimating SLA Violation using DVFS: This chapter presents an in-depth analysis of energy consumption in a core cloud computing model in order to visualize the risk factors responsible for draining energy in datacenters. It also discusses the design of an energy analysis that uses various cloudlet IDs to estimate SLA violations, together with virtual machine migration and complete status capture. A prime discussion concerns the role of DVFS in mitigating SLA violations. The chapter closes with an experimental case study that elaborates the topic.

Chapter 5: Optimizing Energy in Datacenters using DVFS: This chapter presents a novel model for optimizing the energy factor of a datacenter using DVFS. The discussion introduces a unique provisioning technique that combines the power factor with network resources. The proposed system is designed to maintain an equilibrium among task processing and completion, datacenter power utilization, and optimal network requirements. It addresses the research gap by grouping tasks to reduce the load on application servers in the datacenter and by dispersing network resources to mitigate the congestion that could otherwise arise in the datacenter and cause unwanted power consumption.

Chapter 6: Conclusion: This concluding chapter summarizes the entire work of the thesis, together with feasible future work and its justification.