CHAPTER 1
INTRODUCTION

1.1 Background

The demand for cloud computing infrastructure is rising with the growing IT requirements of the changed business scenario of the 21st century. Constraints and limitations such as server capacity, storage, bandwidth, and power pose real challenges for datacenters. Expanding conventional infrastructure to match growing demand is inconvenient and inflexible, and it raises issues of cost, deployment threats, and operational risk. Organizations based on, or operating, large-scale IT infrastructure will have to face these challenges in the near future. Hence, in search of cost-effective and economical solutions, there has been an enormous migration towards cloud infrastructure, and public cloud service providers therefore need to evolve and develop their infrastructure to meet the increasing demand of an IT-dependent society.

Cloud computing is not limited to the virtualization of datacenters, although it is often misconceived that way because, in the early cloud era, datacenters were virtualized primarily to reduce cost. Since then, virtualized management technology has evolved at various levels of resource provisioning to support larger, more dynamic resource allocations. This has further reduced costs while also increasing datacenter flexibility and performance, ushering in a new era of virtualization-based optimization technology for enterprise and public clouds. Cloud computing thus offers a gateway to reduced cost and resource drain, while also creating a new organizational requirement in which separate teams are responsible for networking, computation, and storage.
1.2 Problem Description

Irrespective of the advantages of cloud computing, the following basic challenges remain:

1. In many datacenters today, the cost of power is second only to the cost of the equipment investment itself. Cloud datacenters currently consume 1-2 percent of the world's energy and, if left unchecked, will rapidly consume an ever-increasing share of it.

2. Network modeling is another important issue in deploying cloud computing for client-aware applications and services. Where server sprawl once reigned, datacenter managers must now deal with virtual machine sprawl. Increased utilization of physical servers through virtualization has raised the demand for network bandwidth and storage, which in turn requires more domains, more cabling, and far more elaborate management strategies. Operational cost increases as complexity grows, owing to the heterogeneous architecture of datacenters.

3. In IT services and solutions, cloud computing can enable fast deployment of business processes. To realize this potential, the environment should have an open architecture, so that cloud computing can be achieved with interoperable building blocks and solutions based on multi-vendor innovation. The adoption of cloud computing will lead to the dismissal of proprietary solutions.

4. To control the cloud environment and maintain stability under business demand, critical applications must be evaluated.
5. Public clouds must be used with additional provisioning for data security, privacy, and intellectual property protection.

6. While developing cloud computing tools, the flexibility of resource pools may not be perfect.

7. Selecting a strategy for interoperability is challenging in a dynamic environment of application demand.

1.3 Objectives of the Research

The focus of this research work is to analyze the power issues of core cloud computing infrastructure, along with network and storage models and provisioning parameters, in order to optimize resource allocation by the cloud. The main objectives of this research work are as follows:

1. To survey the energy issues of large-scale cloud computing infrastructure and capture the risk factors associated with non-power-aware algorithms.

2. To conduct a trade-off analysis using parameters that estimate Service Level Agreement (SLA) violations, and to model a heterogeneous, Dynamic Voltage and Frequency Scaling (DVFS)-enabled datacenter for better visualization of the proposed analysis.
3. To work out a network and distribution model using a resource broker in cloud computing, with power as the research factor in the operation of power-aware processing elements and virtual machines.

4. To analyze all the central parameters for designing the proposed storage model for provisioning.

1.4 Research Methodologies

The average resource utilization of datacenters is low, which undermines power efficiency. Lightly utilized servers cannot simply absorb new service applications, because preserving the desired Quality of Service (QoS) requires accommodating the fluctuating workload, and over-consolidation leads to performance degradation. Conversely, servers in a non-virtualized datacenter are unlikely to be completely idle, because of background tasks (e.g. incremental backups) or distributed databases and file systems. A further problem of high power consumption, given the increasing density of server components (e.g. 1U and blade servers), is heat dissipation.

DVFS reduces the number of instructions a processor can issue in a given amount of time, and thereby reduces performance. This, in turn, increases the run time of CPU-bound program segments. One significant goal of the proposed study is to analyze the root sources of energy consumption in various cloud-based applications and then introduce a model that reduces energy consumption while minimizing the performance loss of the cloud computing system. The proposed research design is shown in Fig. 1.1.
Figure 1.1 Proposed Research Design

As seen in Fig. 1.1, the proposed study accomplishes its goal by adopting two factors, power and performance, in the research design. The first goal is met by incorporating a novel design that performs a semantic evaluation of energy consumption, focused primarily on monitoring Service Level Agreement (SLA) violations. The process uses DVFS in an energy-saving scheme with two benefits: i) Resource Throttling, which scrutinizes resource utilization, memory, and wait time during both peak and off-peak hours; and ii) Dynamic Component Deactivation, which deactivates cloud components when idle in order to leverage workload variability. Performance loss, in turn, is scaled using a network model based mainly on a broker design and cloudlet IDs, and by incorporating the newly established DVFS-based energy-optimization design aimed at massive task execution.
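To make the DVFS trade-off concrete, the dynamic power of a CMOS processor is commonly modeled as P = C·V²·f: lowering the frequency (and, correspondingly, the supply voltage) reduces power faster than it extends execution time, so a CPU-bound task can finish later yet consume less total energy. The sketch below illustrates this; the capacitance constant, voltage levels, and task length are hypothetical values chosen for demonstration, not measurements from this work.

```python
# Illustrative DVFS trade-off: dynamic CMOS power P = C * V^2 * f.
# All constants below are hypothetical, chosen only to show the shape
# of the energy/performance trade-off discussed in the text.

def dynamic_power(c_eff, volts, freq_hz):
    """Dynamic power in watts: P = C * V^2 * f."""
    return c_eff * volts ** 2 * freq_hz

def run_stats(cycles, c_eff, volts, freq_hz):
    """Return (runtime_s, energy_j) for a CPU-bound task of `cycles` cycles."""
    runtime = cycles / freq_hz
    energy = dynamic_power(c_eff, volts, freq_hz) * runtime
    return runtime, energy

CYCLES = 2e9   # hypothetical CPU-bound task length (cycles)
C_EFF = 1e-9   # hypothetical effective switched capacitance (F)

# Two operating points: full speed vs. a scaled-down DVFS state.
t_hi, e_hi = run_stats(CYCLES, C_EFF, volts=1.2, freq_hz=2.0e9)
t_lo, e_lo = run_stats(CYCLES, C_EFF, volts=0.9, freq_hz=1.0e9)

print(f"full speed : {t_hi:.2f} s, {e_hi:.3f} J")   # 1.00 s, 2.880 J
print(f"scaled down: {t_lo:.2f} s, {e_lo:.3f} J")   # 2.00 s, 1.620 J
```

With these illustrative numbers, halving the frequency doubles the run time but cuts total energy by about 44 percent, which is exactly the performance-versus-energy tension the proposed model seeks to balance.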
Figure 1.2 Energy consumption at different levels in computing systems

Traditionally, driven by the demands of applications from the consumer, scientific, and business domains, computing systems have been developed with the objective of performance improvement (Fig. 1.2). However, electricity consumption and carbon dioxide emissions are growing with expanding user bases: computing systems with additional performance consume more energy. Consequently, the goal of computer system design has shifted towards power and energy efficiency. To recognize open challenges in the area and facilitate future advancements, it is crucial to synthesize and classify the research on power- and energy-efficient design conducted to date (Brown et al 2007; Gurumurthi et al 2002). This work discusses the problems and causes of high power consumption in computing systems and presents a taxonomy of energy-efficient design covering the hardware, operating system, virtualization, and datacenter levels. The survey is designed to map the area of the taxonomy and to guide future development efforts.

1.5 Thesis Contribution

The prominent contributions of this research work are as follows:

1. Retaining a Robust SLA: One important concern of the proposed research work is to ensure efficient SLAs in cloud computing. SLA agreements are of vital consequence to both consumers and cloud providers for reliable and flexible management. On the one hand, preventing SLA violations avoids the costly penalties the provider must pay when violations occur; on the other hand, user interaction with the system can be minimized when the system responds appropriately and flexibly to avert violations, which enables cloud computing to take root as a flexible and reliable form of on-demand computing. Although there is a large body of work on developing flexible and self-manageable cloud computing infrastructures, adequate monitoring infrastructures that can predict possible SLA violations are lacking. This proposed work assures the actual deployment of algorithms for maintaining the SLA.

2. Efficient Resource Provisioning: This research work addresses efficient resource provisioning along with the SLA. It presents, for cloud service providers, a stochastic model of resource utilization and a resource provisioning formulation that addresses the quality of provisioning decisions. A prime goal of this contribution is to develop a management solution and to analyze the critical parameters of cloud service providers. Resource provisioning has multiple dimensions, such as memory, storage, CPU, and bandwidth. Formulating the provisioning problem for many customers is tricky, since different customers may have different requirements along different dimensions. Customers' wish to increase or reduce provisioned capacity on the fly adds further complication, and stresses the flexibility of the cloud paradigm. The accomplished model design focuses on CPU power saving in terms of virtual machine provisioning in cloud computing. The proposal considers dynamic power dissipation because it is the dominant component of total power consumption, and datacenters can augment their profit by reducing dynamic energy consumption.

3. Protocol Implementation: A unique aspect of this proposed research work is the implementation of a protocol and its associated techniques while keeping network overhead low. Modern datacenters are provisioned with significant computational power based on high-performance multicore processors and dense server configurations. In multicore systems, such as general-purpose processors and high-performance embedded processors, the operating system is responsible for dynamically adjusting the frequency of each processor to the current workload. Dynamic voltage scaling (DVS) is a standard technique for managing system power, and DVFS has been widely applied to reduce power consumption in many domains, particularly micro-architectures such as embedded systems. The system's application design is flexible and functional for various demanding scenarios. The proposed system implements a DVFS scheme alongside the traditional DVS scheme, and both schemes are considered for energy optimization in datacenters and all their components.
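As a concrete illustration of the SLA monitoring described in the first contribution, one common way to quantify SLA violation in energy-aware cloud studies is the fraction of requested CPU capacity that the provider failed to deliver over a monitoring window. The sketch below computes that fraction; the metric's exact form and the sample trace are illustrative assumptions for exposition, not the thesis's actual algorithm or data.

```python
# Illustrative SLA-violation metric: the fraction of requested CPU
# capacity (e.g. MIPS) that was NOT delivered, summed over samples.
# The trace values below are hypothetical, for demonstration only.

def sla_violation_rate(samples):
    """samples: iterable of (requested, allocated) CPU-capacity pairs.

    Returns total undelivered capacity divided by total requested
    capacity: 0.0 means no violation, 1.0 means nothing was delivered.
    """
    requested = sum(r for r, _ in samples)
    shortfall = sum(max(r - a, 0.0) for r, a in samples)
    return shortfall / requested if requested else 0.0

# Hypothetical monitoring trace for one VM: (requested, allocated) MIPS.
trace = [(1000, 1000), (1000, 800), (1500, 1200), (500, 500)]
rate = sla_violation_rate(trace)
print(f"SLA violation rate: {rate:.1%}")  # 12.5%
```

A monitoring component can evaluate such a metric per VM or per host and trigger VM migration or a DVFS state change when the rate approaches the threshold agreed in the SLA.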
1.6 Thesis Organization

The thesis report is organized as follows:

Chapter-1: Introduction: This preliminary chapter discusses the domain of research and its significance in terms of application and real-life utility. It also covers the problem definition, research objectives, and methodologies, and encapsulates the research contribution.

Chapter-2: Review of Literature: This chapter is organized in two parts: first, a review of the domain; second, a review of prior research work. A prominent part of this chapter is the investigation and visualization of the research gap addressed by the proposed research.

Chapter-3: Energy Issues in Datacenters: This chapter discusses the most prominent issue of unwanted power consumption in datacenters, which is a major focus of this research work. It also covers the existing models and applications for countering the present issues, and illustrates the constituents involved in energy consumption, from the client's request through virtual machines and processing elements to the racks of servers underlying cloud computing services.

Chapter-4: Estimating SLA Violation using DVFS: This chapter presents an in-depth analysis of energy consumption in a core model of cloud computing, in order to visualize the risk factors responsible for draining energy in datacenters. It also discusses the design of an energy analysis consisting of various cloudlet IDs, with estimation of SLA violations, virtual machine migration, and complete status capture. A prime topic of discussion is the role of DVFS in mitigating SLA violations. The chapter closes with an experimental case study that elaborates the topic.

Chapter-5: Optimizing Energy in Datacenters using DVFS: This chapter highlights a novel model for optimizing the energy consumption of a datacenter using DVFS. It presents a unique provisioning technique that amalgamates the power factor with network resources. The proposed system aims to maintain an equilibrium among task processing and completion, datacenter power utilization, and optimal network requirements. The system addresses the research gap by grouping tasks to reduce the load on the application servers in the datacenter and by dispersing network resources to mitigate the congestion that might otherwise occur in the datacenter and lead to unwanted power consumption.

Chapter-6: Conclusion: This concluding chapter summarizes the entire work done in the thesis, accompanied by feasible future work along with justification.