Efficient Virtual Machine Sizing For Hosting Containers as a Service
Sareh Fotuhi Piraghaj, Amir Vahid Dastjerdi, Rodrigo N. Calheiros, and Rajkumar Buyya
Cloud Computing and Distributed Systems (CLOUDS) Laboratory
Department of Computing and Information Systems
The University of Melbourne, Australia
{sarehf@student, amir.vahid@, rnc@, rbuyya@}unimelb.edu.au

Abstract

There has been a growing effort to decrease the energy consumption of large-scale cloud data centers via maximization of host-level utilization and load balancing techniques. However, with the recent introduction of Container as a Service (CaaS) by cloud providers, maximizing utilization at the virtual machine (VM) level becomes essential. To this end, this paper focuses on finding efficient virtual machine sizes for hosting containers in such a way that the workload is executed with minimum wastage of resources at the VM level. Suitable VM sizes for containers are calculated, and application tasks are grouped and clustered based on their usage patterns obtained from historical data. Furthermore, tasks are mapped to containers and containers are hosted on their associated VM types. We analyze cloud trace logs from a Google cluster and consider cloud workload variances, which is crucial for testing and validating our proposed solutions. Experimental results show up to 7.55% improvement in average energy consumption compared to baseline scenarios where the virtual machine sizes are fixed. In addition, compared to the baseline scenarios, the total number of VMs instantiated for hosting the containers is also improved by 68% on average.

I. INTRODUCTION

Cloud computing is a term for utility-oriented computing on a pay-as-you-go basis. As stated by Armbrust et al. [1], cloud computing has the potential to transform a large part of the IT industry while making software even more attractive as a service. However, the major concern of cloud data centers is the drastic growth in energy consumption.
Data center energy consumption increases the Total Cost of Ownership (TCO) while decreasing the Return on Investment (ROI) of the cloud infrastructure. It also has a great impact on carbon dioxide (CO2) emissions, which are estimated to be 2% of global emissions [2]. In this respect, there has been a growing effort to decrease energy consumption, which has resulted in considerable improvements in many aspects of cloud data center operations, including cooling systems and server efficiency. Nevertheless, emerging technologies make room for further improvements in server efficiency. A promising source of further improvement in the energy efficiency of cloud servers is solutions that are tailored to specific cloud service models.

Fig. 1. A Simple CaaS Deployment Model on IaaS.

In addition to the traditional cloud services, namely Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS), a new type of service called Containers as a Service (CaaS) has recently been introduced. An example of a container management system is Docker, a tool that allows developers to define containers for applications 1. CaaS lies between IaaS and PaaS: while IaaS provides virtualized compute resources and PaaS provides application-specific runtime services, CaaS is the missing layer that glues these two layers together. As illustrated in Figure 1, CaaS services are usually provided on top of IaaS virtual machines. CaaS providers, such as Google and AWS, argue that while containers offer an appropriate environment for semi-trusted workloads, virtual machines provide another layer of security for untrusted workloads. To reduce the energy consumption of CaaS, one may choose virtual machine consolidation, Dynamic Voltage and Frequency Scaling (DVFS), or both combined.
However, these efforts would be in vain if VM sizes are not customized to better support the deployed containers. For example, as shown in Figure 1, the size of VM B, in comparison to VM A, is not container-optimized. As a result, there is resource wastage that causes energy inefficiency regardless of how effective and energy-efficient the VM consolidation technique in place is. Tackling the issue of energy efficiency in the context of CaaS, this paper focuses on finding efficient virtual machine sizes for hosting containers in such a way that the workload is executed with minimum wastage of energy. To achieve this, we present a technique derived from the analysis of real-world

1 Docker:
cloud trace logs, taking into consideration cloud workload variances, which is crucial for testing and validating the proposed solutions. Publicly released cloud computing traces are limited to those made available by Google. The first Google log provides the normalized resource usage of a set of tasks over a 7-hour period. The second version of the Google traces, which was released in 2012, contains more details over a longer time frame. The paper's main contribution is an approach for efficient allocation of resources (in the form of virtual machines with defined CPU and memory entitlements) that closely matches the actual resource usage of deployed containers. In this paper, the focus is on measuring the impact of VM sizing strategies on data center power consumption over a specific period of time. To this end, we assume that we have perfect knowledge of the workload (to eliminate the effect of prediction inaccuracy) and that no virtual machine consolidation approach is in place. This work is carried out in three steps. In the first step, we explored task usage patterns. We found that there are similarities in the utilization patterns reported in the traces, which is confirmed by previous studies [3], [4]. These similarities helped us to group the tasks based on average resource usage via clustering techniques. In the second step, the clustering output is used for identification of customized virtual machine types for the studied traces. Finally, in the third step, each cluster of tasks is mapped to a corresponding virtual machine type. We compare our VM sizing technique with fixed-VM-size baseline scenarios. The experimental results show that our proposed approach results in fewer servers, which in turn results in lower energy consumption in the data center.

II. RELATED WORK

There is a vast body of literature that considers power management in virtualized and non-virtualized data centers via hardware- and software-based solutions [5]-[7].
Most of the works in this area focus on host-level optimization techniques, neglecting energy-efficient virtual machine size selection and utilization. These approaches are suitable for IaaS cloud services, where the provider does not have any knowledge about the applications running in each virtual machine. However, for SaaS and CaaS service models, information about the workload on the virtual machines and improvements in size selection and utilization efficiency at the VM level could be the first step towards more energy-efficient data centers, as the results presented in this paper demonstrate. Regarding the comparison between hosting SaaS on bare-metal servers or on virtual machines, Daniel et al. [8] explored the differences between fixed virtual machine sizes and time shares. Although they concluded that the time-share model requires fewer servers, they did not consider dynamic VM size selection in their experiments. Similarly, in the SEATS (smart energy-aware task scheduling) framework, Hosseinimotlagh et al. [9] introduced an optimal utilization level of a host for executing tasks, which decreases the energy consumption of the data center. In addition, they presented a virtual machine scheduling algorithm for maintaining the host's optimal utilization level while meeting the given QoS. Apart from the level of optimization (host level, data center level, or virtualization level), most of the research in the area lacks analysis of real cloud backend traces and of the variance in the cloud workload in the proposed solutions. To address this issue, Mishra et al. [3] and Chen et al. [4] explored the first version of the Google cluster traces and introduced two approaches for workload modeling and characterization. Mishra et al. [3] used the k-means clustering algorithm to form groups of tasks with similar resource consumption and durations. Likewise, Chen et al. [4] used k-means as the clustering algorithm.
In their experiments, the authors classified jobs 2 instead of tasks. In our proposed model, we use X-means for grouping the tasks with similarities in their usage patterns. The clustering output is used for determining the virtual machine configurations and estimating the average usage of a typical task in a specific cluster. We consider the second version of the trace. Through configuring bare-metal servers and applying the Google cluster trace, Zhang et al. [11] investigated the problem of dynamic capacity provisioning in a real production data center. The authors introduced a heterogeneity-aware resource provisioning environment named Harmony that dynamically adjusts the number of active machines by considering heterogeneity in both the workload and the available machine hardware. k-means is used for forming workload classes with similar characteristics and resource requirements. Applying the Google cluster trace, it is stated that Harmony can improve the energy consumption of the data center by up to 28%. Although the energy consumption is improved, virtualization technology, the building block of a cloud data center, is not considered. Different from previous works, we leverage virtualization and containerization technology together and map groups of tasks to containers and containers to VMs. The configuration of the container-optimized VMs is chosen based on the workload characteristics, which results in less resource wastage. Our methodology also addresses the problem of high energy consumption that results from low resource utilization, which is not explored in most of the previous studies.

III. SYSTEM MODEL

The targeted system is a CaaS service placed between a Platform as a Service (PaaS) and an IaaS data center operating as a private cloud. Users can submit their applications, in one or more programming models supported by the provider, to the platform. For example, the platform can support MapReduce or Bag of Tasks (BoT) applications.
The users interact with the system by submitting applications supported by the platform. Each application consists of a set of jobs to be executed on the infrastructure. In our studied scenario, a job itself can be comprised of one or more tasks that are executed in containers managed by the CaaS provider. User Request Model: In the proposed model, the infrastructure is abstracted from the users. Thus, the only action required from users of the service is the submission of their application along with the estimated resources required for its execution. Parameters of a task submitted by a user are: Scheduling class;

2 A job is comprised of one or more tasks [10].
Task priority; and Required resources (i.e., number of cores and amounts of RAM and storage). Notice that currently available commercial CaaS solutions require users to determine the required resources. The proposed model can be utilized by both Software as a Service providers and private clouds, so that the advantages of virtual machines and containers can be leveraged while energy costs are reduced through efficient utilization of data centers. These systems are targeted because they have a limited number of applications and enough information about them that typical cloud usage can be profiled. Usually, profiles will be available for returning users of the infrastructure. In such a case, previous information about task needs enables optimization of resource allocation. For new customers, for whom there is no historical information to enable profiling, the resources requested at submission time can be used as an estimate of resource needs. As discussed previously, resource requirements are a common input parameter in commercial providers of similar services.

Fig. 2. Proposed system model. Cloud users submit requests for execution of applications in the form of container requests to the data center. Each task is executed in a container that meets application requirements. The resource allocator is responsible for mapping these containers to the relevant and available virtual machine configurations.
Cloud Model: The cloud model explored in this paper contains three layers:
Infrastructure layer: This layer contains physical servers that are partitioned into virtual machines by the next layer;
Virtualization layer: Virtualization technology is taken into consideration as it improves the utilization of resources by sharing them between virtual machines;
Container layer: Containers run on virtual machines and all share the same Linux kernel as the OS.
The combination of these technologies (i.e., containers and virtualization) has recently been introduced in Google Container Engine 3, which uses the open-source technology Kubernetes 4. System Objective: The objective of the proposed architecture is to find efficient virtual machine sizes for hosting the containers in a way that the workload is executed with minimum wastage of energy. The energy wastage is caused by underutilization of resources (i.e., the virtual machines). Therefore, one of the challenges is finding an optimal VM configuration for a container, in such a way that its residing tasks have enough resources to be executed while the virtual machine's resources are not wasted during operation.

3 Google Container Engine: 4 Kubernetes:

IV. ARCHITECTURE

Figure 2 shows our proposed architecture, which is responsible for selecting the right size of virtual machines that host containers running users' tasks. The components of the proposed architecture are further discussed in the next subsections. In the pre-execution phase, based on historical information on resource utilization of containers, our proposed system determines the virtual machine sizes. After that, in the execution phase, each container (based on its features) is mapped to a virtual machine of a specific type.

A. Components Involved in the Pre-execution Phase

In the pre-execution phase, some components of the architecture need to be tuned or defined before the system runtime. Task Classifier: The classifier is the entry point for the stream of tasks submitted by cloud users.
It is responsible for categorizing the submitted tasks into predefined classes based on the desired features. The features are selected according to the objective of the grouping process and the workload characteristics. The goal of this component is to identify tasks with similar usage patterns in terms of CPU, memory, and disk utilization. These patterns are obtained through the application of clustering algorithms. The classifier must be trained with historical data before system start-up. The objective of clustering tasks is to understand, via profiling, the resource needs of tasks, which are later used in the process of defining VM types and estimating the number of tasks that can be packed onto each VM. VM Type Definer: This component is responsible for defining VM sizes based on the information provided by the Task Classifier. Determination of the optimal VM size requires analysis of historical data about the usage patterns of the task containers, and identification of groups of tasks with similar usage patterns reduces the complexity of estimating the average usage for a typical group. The output of this component is then saved in the VM Types Repository.
VM Types Repository: This repository stores the available virtual machine sizes, including the CPU, memory, and disk specifications.

B. Components Involved in the Execution Phase

In this section we discuss the components involved during the execution phase of the system. By execution phase, we mean the activities carried out once the system is started up and accepting job submissions. Container Mapper: Clustering results from the Task Classifier are sent to the Container Mapper. First, this component maps each task to a suitable container. Then, based on the available resources in the running virtual machines and the available VM types in the VM Types Repository, it estimates the number and type of new virtual machines to be instantiated to support the newly arrived tasks. Apart from instantiating new VMs when the available VMs cannot support the arriving load, this component also reschedules rejected tasks stored in the Rejected Task Repository onto available virtual machines of the required type (if any). This component prioritizes the assignment of newly arrived tasks to available resources before instantiating a new virtual machine. Virtual Machine Instantiator: This component is responsible for the instantiation of a group of VMs with the specifications received from the Container Mapper. It decreases the start-up time of the virtual machines by instantiating a group of VMs at a time instead of one VM at a time. Virtual Machine Provisioner: This component is responsible for determining the placement of each virtual machine on the available hosts and turning on new hosts if required to support the new VMs. Rejected Task Repository: Tasks that are rejected by the VM Controller are submitted to this repository, where they stay until the next processing window, when they are rescheduled by the Container Mapper.
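The Container Mapper's per-window behaviour described above can be sketched as follows. The data structures and field names are our own illustration, not the paper's implementation; the only logic taken from the text is "prefer a running VM of the right type with spare capacity, otherwise batch a new VM of that type for the Instantiator":

```python
def map_tasks(tasks, running_vms, vm_types):
    """One processing window of the Container Mapper (illustrative sketch).

    tasks:       list of dicts with an "id" and the "cluster" assigned by
                 the Task Classifier (the cluster doubles as the VM type).
    running_vms: list of dicts with "id", "type", and remaining "free_slots".
    vm_types:    dict mapping a VM type to its task capacity (Table I/II).
    """
    new_vms = {}          # vm_type -> how many new VMs to instantiate
    placements = []       # (task id, vm id) pairs decided this window
    for task in tasks:
        vm_type = task["cluster"]
        # Prefer an already-running VM of this type with free capacity.
        vm = next((v for v in running_vms
                   if v["type"] == vm_type and v["free_slots"] > 0), None)
        if vm is not None:
            vm["free_slots"] -= 1
            placements.append((task["id"], vm["id"]))
        else:
            # Otherwise batch a new VM of the required type; batching per
            # type is what the Virtual Machine Instantiator expects.
            new_vms[vm_type] = new_vms.get(vm_type, 0) + 1
            new_vm = {"id": f"new-{vm_type}-{new_vms[vm_type]}",
                      "type": vm_type,
                      "free_slots": vm_types[vm_type] - 1}
            running_vms.append(new_vm)
            placements.append((task["id"], new_vm["id"]))
    return placements, new_vms
```

A real mapper would additionally pull rejected tasks from the Rejected Task Repository into the same window, as described above.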
Available VM Capacity Repository: IDs of virtual machines that have available resources are registered in this repository. It is used for assigning tasks killed by the Virtual Machine Controller, along with newly arrived ones, to available resource capacity. Power Monitor: This component is responsible for estimating the power consumption of the data center based on the resource utilization of the available hosts. Host Controller: The Host Controller runs on each host of the data center. It periodically checks virtual machine resource usage (received from the Virtual Machine Controllers) and identifies underutilized machines, which are registered in the available resource repository. This component also submits tasks killed on the VMs running on its host to the Rejected Task Repository so that these tasks can be rescheduled in the next processing window. It also sends the host usage data to the Power Monitor. Virtual Machine Controller (VMC): The VMC runs on each VM of the data center. It monitors the usage of the VM and, if resource usage exceeds the virtual machine's capacity, it kills a number of containers with low-priority tasks so that high-priority ones can obtain the VM resources they require. In order to avoid task starvation, this component also considers the number of times a task has been killed. The Controller then sends killed tasks to the Host Controller to be submitted to the global Rejected Task Repository. As mentioned before, the killed tasks are then rescheduled on an available virtual machine in the next processing window. As described in this section, the Task Classifier and the VM Type Definer are two core components of the architecture. Therefore, in the next two sections we elaborate on the approaches these components use to achieve effective, container-optimized VM size selection.

V. TASK CLUSTERING

For validation of the proposed architecture, we use the Google cluster traces.
For the purpose of evaluating the approach, we selected the second day of the traces, as it had the highest number of task submissions. As the first step, we selected a subset of relevant task features.

A. Clustering Feature Set

As our feature set, we used the following characteristics of each task:
Task Length: The time during which the task was running on a machine;
Submission Rate: The number of times the task was submitted to the data center;
Scheduling Class: How latency-sensitive the task/job is. In the studied traces the scheduling class is represented by an integer between 0 and 3. A task with scheduling class 0 is a non-production task; the higher the scheduling class, the more latency-sensitive the task;
Priority: The priority of a task shows how important it is. High-priority tasks get preference for resources over low-priority ones [10]. The priority is an integer in the range 0 to 10;
Resource Usage: The average resource utilization U_T of a task T in terms of CPU, memory, and disk during the observed period, defined as:

U_T = ( sum_{m=1}^{nr} u_(T,m) ) / nr        (1)

where nr is the number of times the task's usage u_(T,m) is reported in the observed period. The selected features of the data set were used for estimating the number of task clusters and determining a suitable virtual machine configuration for each group.
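Equation 1 is a plain mean over the reported usage samples. A minimal sketch of the feature extraction follows; the dictionary field names are our own illustration, not the trace's column names:

```python
def average_usage(samples):
    """Average resource utilization U_T of a task over the observed
    period (Equation 1): the mean of the nr reported usage values."""
    if not samples:
        return 0.0
    return sum(samples) / len(samples)

def feature_vector(task):
    """Build the clustering feature vector described in Section V-A."""
    return [
        task["length"],                       # task length (runtime)
        task["submissions"],                  # submission rate
        task["scheduling_class"],             # 0 (non-production) .. 3
        task["priority"],                     # 0 .. 10
        average_usage(task["cpu_samples"]),   # U_T for CPU
        average_usage(task["mem_samples"]),   # U_T for memory
        average_usage(task["disk_samples"]),  # U_T for disk
    ]
```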
B. Clustering Algorithm

Clustering is the process of grouping objects with the objective of finding subsets with the most similarities in terms of the selected features. Therefore, both the objective of the grouping and the number of groups affect the results of clustering. In our specific experiment, we focus on finding groups of tasks with similar usage patterns so that we can allocate the available resources efficiently. A primary task in clustering is identifying the number of clusters, for which we used the X-means algorithm. X-means [12] is an extended version of k-means [13] that estimates the number of groups present in the incoming tasks. K-means is a computationally efficient partitioning algorithm for grouping an N-dimensional dataset into k clusters by decreasing within-class variance. However, supplying the number of groups (k) as an input is the major challenge in k-means, whereas X-means estimates the number of clusters itself. The other difference between the two is in the search process: the k-means search is prone to local minima, while X-means efficiently searches the space of cluster locations and number of clusters in order to optimise the Bayesian Information Criterion (BIC). The BIC is a criterion for selecting the best-fitting model among a set of available models for the data [14]. Optimising the BIC (as X-means does) results in a better-fitting model, which in the context of this work means finding the optimum number of clusters for the input data set. In our implementation, we use R [15] as the environment for our statistical analysis of the data set and the statistical procedures, including the X-means clustering algorithm and data pre-processing. In this respect, RWeka [16] is used as our R interface to the machine learning software Weka [17]. We used X-means from the clusterers package of Weka for estimating the number of clusters.
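The paper uses Weka's X-means via RWeka. As a rough, self-contained illustration of the idea only, the sketch below scans candidate values of k with plain k-means and scores each clustering with a simple BIC surrogate; this is not Weka's implementation or its exact BIC formula:

```python
import math
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means (Lloyd's algorithm) over a list of equal-length vectors."""
    rnd = random.Random(seed)
    centers = rnd.sample(points, k)
    groups = [[] for _ in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            groups[j].append(p)
        centers = [[sum(col) / len(g) for col in zip(*g)] if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers, groups

def bic_score(points, centers, groups):
    """BIC surrogate: a fit term from the residual sum of squares plus a
    penalty growing with the number of clusters, mirroring how X-means
    uses BIC to trade cluster count against fit."""
    n, d = len(points), len(points[0])
    rss = sum(sum((a - b) ** 2 for a, b in zip(p, centers[i]))
              for i, g in enumerate(groups) for p in g)
    variance = max(rss / n, 1e-12)   # guard against a perfect fit
    return n * math.log(variance) + len(centers) * d * math.log(n)

def pick_k(points, k_max=6):
    """Keep the lowest-BIC clustering (X-means instead splits clusters
    greedily, but the selection criterion is the same)."""
    return min(range(1, k_max + 1),
               key=lambda k: bic_score(points, *kmeans(points, k)))
```

On two well-separated groups of points, the BIC penalty stops the scan from over-splitting, which is the behaviour the text attributes to X-means.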
VI. DETERMINATION OF VM TYPES

Once clusters that represent groups of tasks with similar characteristics in terms of the selected features are defined, the next step is to assign a VM type that can efficiently execute the tasks that belong to each cluster. By efficiently, we mean successfully executing the tasks with minimum resource wastage. The parameters of interest of a VM are the number of cores, the amount of memory, and the amount of storage. Apart from these, we added one more parameter named task capacity, which is the maximum number of tasks that can run in one VM without resulting in task rejection. VM Sizing Strategy: For each group (cluster) of tasks, in order to determine the virtual machine's parameters, the average amount of resources required per hour for serving the user requests during the 24-hour observation period is estimated. Thus, the amount of CPU required per hour (CPU_h) is defined as follows:

CPU_h = nt_h × CPU_u        (2)

where CPU_u is the average amount of CPU usage on an hourly basis and nt_h is the average number of tasks per hour that are present and running in the system. Once the average amount of required CPU per hour is estimated, a limit should be set for the maximum amount of CPU that a virtual machine can obtain. This amount should be set according to the underlying infrastructure. For example, if the servers have 32 CPU cores and the provider wants to host at least two virtual machines per server, then each VM can obtain at most 16 cores. If we assume that the largest virtual machine in the system has fewer than 16 cores, then whenever CPU_h is above 16 the load should be served by more than one virtual machine. CPU_h has to be divided by the first integer n found between 2 and 9 such that the remainder of CPU_h/n is zero (for example, if CPU_h is equal to 21, n would be 3, which results in 3 VMs with 7 cores each).
In other words, n is the number of virtual machines, each with VM_CPU = CPU_h / n, which is enough for serving the requests for every hour. Further, if CPU_h is below our virtual machine maximum CPU limit, then one virtual machine is enough to serve the requests and n is equal to 1. Then, for defining the VM memory size, Equation 3 is used:

VM_memory = (nt_h × memory_u) / n        (3)

In our studied trace, since the tasks required a small amount of storage, the allocated amount of storage for all of the virtual machines is assumed to be 10 GB, which is enough for the OS installed on the VM and the tasks' disk usage. Furthermore, the VM's task capacity VM_tc is calculated based on Equation 4:

VM_tc = min_i ( VM_i / t_Ui ),  i ∈ {CPU, memory}        (4)

where t_Ui is the average usage of resource i during the observed period. Thus, for each cluster, the VM type is defined in terms of CPU_h and VM_memory. Once the VM types are determined, they are stored in the VM Types Repository, so that future matches for the containers hosting the tasks are chosen based on these defined types. The effectiveness of this strategy for selecting VM types based on clustering of tasks is evaluated in the next sections.

VII. EVALUATION

In this section, we discuss the experiments that we conducted to evaluate our proposed approach in terms of its efficiency in task execution and power consumption. The data set used in this paper is derived from the second version of the Google cloud trace log [10], collected during a period of 29 days. The log consists of data tables describing the machines, jobs, and tasks. In the trace log, each job consists of a number of tasks with specific constraints. Considering these constraints, the scheduler determines the placement of the tasks on the appropriate machines. The event type values of jobs and tasks are reported in the event table.
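The sizing strategy of Equations 2 to 4 can be sketched as a single function. Variable names are ours, and the fallback when no exact divisor of CPU_h exists in 2..9 is our assumption (the paper does not specify that case):

```python
def vm_size_for_cluster(nt_h, cpu_u, memory_u, t_cpu, t_mem, max_vcpu=16):
    """Sketch of the VM sizing strategy (Equations 2-4).

    nt_h:         average number of tasks present per hour in the cluster
    cpu_u:        average hourly CPU usage of one task (cores)
    memory_u:     average hourly memory usage of one task (GB)
    t_cpu, t_mem: average per-task CPU/memory usage over the period
    """
    cpu_h = nt_h * cpu_u                      # Equation 2
    n = 1
    if cpu_h > max_vcpu:
        # First n in 2..9 that divides CPU_h evenly (e.g. CPU_h = 21 -> n = 3);
        # falling back to 9 when no exact divisor exists is our assumption.
        n = next((d for d in range(2, 10)
                  if abs(cpu_h / d - round(cpu_h / d)) < 1e-9), 9)
    vm_cpu = cpu_h / n
    vm_memory = nt_h * memory_u / n           # Equation 3
    # Equation 4: task capacity is limited by the scarcest resource.
    vm_tc = min(vm_cpu / t_cpu, vm_memory / t_mem)
    return {"n": n, "vcpu": vm_cpu, "memory_gb": vm_memory,
            "storage_gb": 10,                 # fixed 10 GB for OS + task disk
            "task_capacity": int(vm_tc)}
```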
For the purpose of this evaluation, we utilize all the events from the trace log and assume that they happen exactly as reported.
TABLE I. VIRTUAL MACHINE CONFIGURATIONS FOR 18 CLUSTERS. (Columns: VM Type, Number of Tasks, vCPU, Memory (GB); the per-type values are not recoverable in this copy.)

TABLE II. VIRTUAL MACHINE SPECIFICATIONS OF RFS AND THE SELECTED AMAZON EC2 INSTANCES. (Columns: VM Type, Number of Tasks, vCPU, Memory (GB) for RFS, and Amazon EC2 VM Type, C_max, vCPU, Memory (GB) for the m3.large, m3.medium, t2.medium, and t2.small instances; values not recoverable in this copy.)

In the available traces, resource utilization measurements and requests are normalized, and the normalization is performed separately for each column, relative to the highest amount of the particular resource found on any of the machines. In this context, to get a real sense of the data, we assume the largest amounts of CPU, memory, and disk to be 100% of a core of the largest machine's CPU (3.2 GHz), 4 GB, and 1 GB respectively. Therefore, for every column we multiply each recorded value by the related amount (e.g., for recorded memory utilization we have Real_util = Recorded_util × 4). The Google cluster hosts are heterogeneous in terms of CPU, memory, and disk capacity. However, hosts with the same platform ID have the same architecture. In order to eliminate placement constraints and decrease placement complexity, only the tasks scheduled on one of the three available platforms are considered. The configurations of the simulated data center servers are inspired by the Google data center during the studied trace period. As suggested by Garraghan et al. [18], the servers in our selected platform are PRIMERGY RX200 S7. For the PRIMERGY platform, the tasks scheduled during the second day of the traces are all scheduled on one server type with 32 CPU cores (3.2 GHz), 32 GB of memory, and 1 TB of disk. The power profile of this server type is extracted from the SPECpower_ssj2008 results [19] reported for the PRIMERGY.
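The denormalization step and the linear power model that the SPECpower-derived constants feed can be sketched as follows. The caps follow the paper's stated assumptions; the P_idle/P_max values below are illustrative placeholders, not the measured PRIMERGY figures:

```python
# Caps assumed in the paper: 100% CPU = one 3.2 GHz core of the largest
# machine, memory cap = 4 GB, disk cap = 1 GB.
CAPS = {"cpu_ghz": 3.2, "memory_gb": 4.0, "disk_gb": 1.0}

def denormalize(record):
    """Turn a normalized trace record (values in [0, 1]) into absolute
    amounts, e.g. real memory = recorded utilization x 4 GB."""
    return {k: record[k] * CAPS[k] for k in CAPS}

# Illustrative placeholders for the server power profile (watts).
P_IDLE, P_MAX = 100.0, 300.0

def server_power(n):
    """Linear power model of Equation 5: power grows linearly with the
    CPU utilization percentage n between P_idle and P_max."""
    return (P_MAX - P_IDLE) * n / 100.0 + P_IDLE
```

Energy over a processing window then follows by multiplying the modeled power by the window length, which is how the simulation accumulates the results plotted in Figure 3.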
This power profile is then used for determining the constants of the applied linear power model presented in Equation 5, in which n is the CPU utilization percentage of the server:

P_n(t_i) = (P_max − P_idle) × n/100 + P_idle        (5)

We focus on the energy consumption of the CPU because this is the component that presents the largest variance in energy consumption with respect to its utilization rate [20]. The proposed system is simulated and the tasks are assigned to containers. Then, these containers are hosted in the corresponding virtual machine types during each processing window (one minute for the purposes of these experiments). The simulation runtime is set to 24 hours. The number of rejected tasks is reported for each experiment separately. Since virtual machine placement also affects the simulation result, the First Fit placement policy is used for all of the experiments. This placement algorithm reports the first running host that can provide the resources for the VM; if no running host is found for placing the VM, a new host is turned on. The output metrics of interest are the number of instantiated VMs, the task rejection rate, and the energy consumption (Subsection VII-A3). In order to ensure the quality of service, the task rejection rate is defined as the number of rejected tasks in each 1-minute processing window.

Fig. 3. Energy consumption of the data center for the usage-based fixed VM size approach versus RFS and WFS.

A. Results

Initially, we explore the factors that affect the resulting virtual machine sizes (types). The number of VM types is directly affected by the number of task clusters (k), which
Fig. 4. Task rejection rate for WFS, RFS, and the fixed VM sizes considering the usage-based approach.

is estimated by applying X-means to the selected feature set. The details are further discussed in Section VII-A1. The efficiency of the proposed VM size selection technique is then explored against baseline scenarios in which the virtual machine types are fixed. For the baseline scenarios, the containers are assigned to VMs considering two different criteria, discussed in Subsection VII-A2.

1) Feature set selection: The following approaches are considered in investigating the effect of feature selection on virtual machine sizes:

Whole Feature Set (WFS) approach: In this approach, the whole feature set listed in Section V-A is used as the input of the X-means algorithm, which resulted in 18 clusters of tasks. The virtual machine sizes are defined following the procedure in Section VI. The obtained VM sizes are listed in Table I. As mentioned in Section VI, 10 GB of storage is assigned to all VM types in order to have enough space for the OS installed on each VM.

Reduced Feature Set (RFS) approach: Since the same amount of storage is assigned to every type, the WFS approach leads to the observation that the most effective parameters in VM size selection for this specific workload are the average CPU and memory utilization. Therefore, for the RFS approach, the clustering feature set is reduced to these two main features. Applying the new feature set as input of X-means resulted in 5 clusters, which consequently results in 5 different VM sizes (shown in Table II).

Between these two approaches, the one that saves more energy by reducing the number of servers has the more efficient virtual machine sizes. As Figure 3 shows, RFS results in 81% less energy consumption in comparison to WFS. This is because RFS causes less resource fragmentation, which leads to fewer servers.
Therefore, it can be concluded that tailoring the size of virtual machines to container requirements is crucial; however, it has to be done in such a way that resource fragmentation is avoided. Apart from the energy perspective, compared to the WFS approach, RFS results in 86% fewer instantiated VMs and a 77% lower rejection rate.

2) Baseline scenarios: In order to investigate the effect of defining the virtual machine sizes according to the workload (container optimized), in these experiments we consider scenarios where VM sizes are fixed. In this respect, we consider the existing Amazon EC2 instances listed in Table II. In these scenarios, the scheduler assigns the containers to only one specific VM size (e.g., the t2.small instance) considering the following approaches:

Request-based resource allocation approach: In the request-based resource allocation approach, tasks are given the exact amount of resources requested and specified by the user at submission time. Therefore, in this approach containers are packed in the virtual machines according to their containing task specifications, as submitted by users.

Usage-based resource allocation approach: In this approach, the maximum number of containers (c_max) that can be hosted in one VM instance is specified in the pre-execution phase, leveraging the average resource utilization of the tasks. c_max is reported for each of the Amazon instances in Table II. This approach is included in the baseline scenarios because the concept of utilizing task usage is also taken into account in our VM size selection technique.

c_max = min_r (VM_r / Usage_r), r ∈ {CPU, Memory, Disk} (6)

3) Discussion: The results obtained with the aforementioned approaches are compared considering three different metrics. The first metric, which shows the main effect of our proposed VM size selection technique, is the energy consumption, obtained through Equation 5.
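Equation 6 is a straightforward per-resource bound; a minimal sketch, where the instance capacities and average task usage figures are illustrative assumptions rather than the actual Table II values:

```python
def c_max(vm_capacity, avg_usage):
    """Equation 6: c_max = min over r in {CPU, Memory, Disk} of VM_r / Usage_r."""
    return min(int(vm_capacity[r] // avg_usage[r]) for r in vm_capacity)

# Illustrative figures (vCPUs, GB memory, GB disk) -- not the Table II values.
instance = {"cpu": 1.0, "memory": 2.0, "disk": 10.0}
task_avg = {"cpu": 0.12, "memory": 0.30, "disk": 0.50}

containers_per_vm = c_max(instance, task_avg)  # memory is the binding resource here
```

The floor division reflects that only whole containers can be hosted; whichever resource is most constrained relative to average usage determines how many containers the instance can pack.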
Figure 3 shows the energy consumption for the baseline scenarios considering the usage-based approach and the proposed VM size selection technique. The data center energy consumption for the baseline scenarios considering the request-based approach is plotted in Figure 5. Because users overestimate resource requirements, the energy consumption of the usage-based approach is almost 50% lower than that of the request-based approach in the baseline scenarios, for all of the fixed VM sizes. However, RFS still outperforms all of the experiments in terms of the data center energy consumption. For the usage-based baseline scenarios, RFS shows 33%, 44%, 24%, and 70% less energy consumption on average compared to the t2.small, t2.medium, m3.medium, and m3.large VM sizes respectively.

For the second comparison metric, we consider the number of instantiated virtual machines (n_VM). An increase in n_VM results in more virtualization overhead and also longer scheduling delays because of the VM instantiation delay. Results show that RFS can execute the workload with the smallest n_VM. It outperforms the usage-based baseline scenarios with 82%, 75%, 37%, and 86% fewer instantiated VMs for the t2.small, t2.medium, m3.medium, and m3.large VM sizes respectively.

In order to assess the quality of service, the task rejection rate is considered as the third comparison metric. Task rejection rates for the usage-based baseline scenarios, WFS, and RFS approaches are shown in Figure 4. As in the previous cases, the RFS VM size selection approach outperforms the usage-based baseline scenarios with an 8%, 42%, 8%, and 15% lower task rejection rate for the t2.small, t2.medium, m3.medium, and m3.large VM sizes respectively. It is worth mentioning that,
because of the over-allocation of resources in the request-based baseline scenarios, the task rejection rate is equal to zero for all of the VM types. In summary, the experiments show that our VM size selection technique, combined with workload information, saves a considerable amount of energy with minimum task rejections.

Fig. 5. Energy consumption of the data center for the request-based fixed VM size approach versus RFS and WFS.

Fig. 6. Number of instantiated virtual machines for the applied approaches (usage-based and request-based resource allocation modes).

VIII. CONCLUSIONS AND FUTURE DIRECTIONS

In this work, we investigated the problem of inefficient utilization of data center infrastructure that results from users' overestimation of required resources at the virtual machine level. To address the issue, we considered the recently introduced CaaS cloud service model and presented a technique for finding efficient virtual machine sizes for hosting containers considering their actual resource usage instead of the amounts estimated by users. To investigate the efficiency of our VM sizing technique, we considered baseline scenarios in which the virtual machine sizes are fixed. Because of user overestimation of resources, the usage-based approach (where VM sizes are chosen based on actual requirements of applications rather than the amounts requested by users) outperforms the request-based approach by almost 50% in terms of the data center average energy consumption. Despite this improvement, our approach outperforms the usage-based baseline scenarios by almost 7.55% in terms of the data center energy consumption. Apart from the energy perspective, our approach results in fewer VM instantiations compared to all other mentioned experiments.
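The energy results summarized above rest on Equation 5 evaluated per host under the First Fit placement policy of Section VII. The sketch below illustrates both together; the host capacities and the P_idle/P_max figures are illustrative assumptions, not the SPECpower values [19] used in the experiments.

```python
def host_power(p_idle, p_max, cpu_util_pct):
    """Equation 5: P_n(t_i) = (P_max - P_idle) * n / 100 + P_idle (watts)."""
    return (p_max - p_idle) * cpu_util_pct / 100.0 + p_idle

class Host:
    def __init__(self, cpu, mem, p_idle=175.0, p_max=250.0):
        self.total_cpu = cpu
        self.cpu, self.mem = cpu, mem  # remaining capacity
        self.p_idle, self.p_max = p_idle, p_max

    def fits(self, vm_cpu, vm_mem):
        return self.cpu >= vm_cpu and self.mem >= vm_mem

    def place(self, vm_cpu, vm_mem):
        self.cpu -= vm_cpu
        self.mem -= vm_mem

    def power(self):
        """Current draw from the host's CPU utilization percentage."""
        util = 100.0 * (self.total_cpu - self.cpu) / self.total_cpu
        return host_power(self.p_idle, self.p_max, util)

def first_fit(running_hosts, vm_cpu, vm_mem, make_host):
    """Place the VM on the first running host that fits it, else power on a new host."""
    for h in running_hosts:
        if h.fits(vm_cpu, vm_mem):
            h.place(vm_cpu, vm_mem)
            return h
    h = make_host()
    h.place(vm_cpu, vm_mem)
    running_hosts.append(h)
    return h
```

Summing `h.power()` over running hosts at each processing window and integrating over time yields the data center energy figures of the kind reported in Figures 3 and 5.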
As future work, we will consider workload prediction to estimate the real usage of containers, and will apply correlation analysis to place VMs that are not correlated in one host, in order to minimize the rejection rate.

REFERENCES

[1] M. Armbrust et al., "Above the clouds: A Berkeley view of cloud computing," University of California at Berkeley, Technical Report UCB/EECS, Feb.
[2] R. Buyya, A. Beloglazov, and J. Abawajy, "Energy-efficient management of data center resources for cloud computing: a vision, architectural elements, and open challenges," in PDPTA 2010: Proc. of the 2010 International Conference on Parallel and Distributed Processing Techniques and Applications. CSREA Press.
[3] Mishra et al., "Towards characterizing cloud backend workloads: insights from Google compute clusters," ACM SIGMETRICS Performance Evaluation Review, vol. 37, no. 4, pp. 34-41.
[4] Y. Chen, A. S. Ganapathi, R. Griffith, and R. H. Katz, "Analysis and lessons from a publicly available Google cluster trace," Tech. Rep.
[5] A. Kansal et al., "Virtual machine power metering and provisioning," in Proc. of the 1st ACM Symposium on Cloud Computing.
[6] R. Nathuji and K. Schwan, "VirtualPower: coordinated power management in virtualized enterprise systems," in ACM SIGOPS Operating Systems Review, vol. 41, 2007.
[7] K. H. Kim, A. Beloglazov, and R. Buyya, "Power-aware provisioning of cloud resources for real-time services," in Proc. of the 7th International Workshop on Middleware for Grids, Clouds and e-Science, 2009.
[8] D. Gmach, J. Rolia, and L. Cherkasova, "Selling t-shirts and time shares in the cloud," in Proc. of the IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid 2012). IEEE Computer Society, 2012.
[9] S. Hosseinimotlagh, F. Khunjush, and R. Samadzadeh, "SEATS: smart energy-aware task scheduling in real-time cloud computing," The Journal of Supercomputing, vol. 71, no. 1.
[10] C. Reiss, J. Wilkes, and J. L. Hellerstein, "Google cluster-usage traces: format + schema," Tech. Rep.
[11] Q. Zhang, M. F. Zhani, R. Boutaba, and J. L. Hellerstein, "HARMONY: Dynamic heterogeneity-aware resource provisioning in the cloud," in Proc. of the 33rd International Conference on Distributed Computing Systems (ICDCS), July.
[12] D. Pelleg and A. W. Moore, "X-means: Extending k-means with efficient estimation of the number of clusters," in Proc. of the Seventeenth International Conference on Machine Learning. Morgan Kaufmann.
[13] J. Hartigan and M. Wong, "Algorithm AS 136: A k-means clustering algorithm," Applied Statistics, vol. 28, no. 1.
[14] G. Schwarz, "Estimating the dimension of a model," The Annals of Statistics, vol. 6, no. 2.
[15] R Core Team, "R: A Language and Environment for Statistical Computing," R Foundation for Statistical Computing, Vienna, Austria.
[16] K. Hornik et al., "Open-source machine learning: R meets Weka," Computational Statistics, vol. 24, no. 2.
[17] M. Hall, E. Frank, G. Holmes, B. Pfahringer, P. Reutemann, and I. H. Witten, "The WEKA data mining software: an update," ACM SIGKDD Explorations Newsletter, vol. 11, no. 1.
[18] P. Townend, J. Xu, and I. S. Moreno, "An analysis of failure-related energy waste in a large-scale cloud environment," IEEE Transactions on Emerging Topics in Computing.
[19] Standard Performance Evaluation Corporation. (2012) SPECpower_ssj2008 results. [Online]. Available: ssj2008/results/
[20] M. Blackburn, Ed., "Five ways to reduce data center server power consumption," The Green Grid, 2008.